Merged
6 changes: 3 additions & 3 deletions docs/configuration/index.md
@@ -559,9 +559,9 @@ These properties do not apply to metadata storage connections.

|Property|Possible Values|Description|Default|
|--------|---------------|-----------|-------|
|`druid.access.jdbc.enforceAllowedProperties`|Boolean|When true, Druid applies `druid.access.jdbc.allowedProperties` to JDBC connections starting with `jdbc:postgresql:` or `jdbc:mysql:`. When false, Druid allows any kind of JDBC connections without JDBC property validation. This config is for backward compatibility especially during upgrades since enforcing allow list can break existing ingestion jobs or lookups based on JDBC. This config is deprecated and will be removed in a future release.|true|
|`druid.access.jdbc.allowedProperties`|List of JDBC properties|Defines a list of allowed JDBC properties. Druid always enforces the list for all JDBC connections starting with `jdbc:postgresql:` or `jdbc:mysql:` if `druid.access.jdbc.enforceAllowedProperties` is set to true.<br/><br/>This option is tested against MySQL connector 5.1.48 and PostgreSQL connector 42.2.14. Other connector versions might not work.|["useSSL", "requireSSL", "ssl", "sslmode"]|
|`druid.access.jdbc.allowUnknownJdbcUrlFormat`|Boolean|When false, Druid only accepts JDBC connections starting with `jdbc:postgresql:` or `jdbc:mysql:`. When true, Druid allows JDBC connections to any kind of database, but only enforces `druid.access.jdbc.allowedProperties` for PostgreSQL and MySQL.|true|
|`druid.access.jdbc.enforceAllowedProperties`|Boolean|When true, Druid applies `druid.access.jdbc.allowedProperties` to JDBC connections starting with `jdbc:postgresql:`, `jdbc:mysql:`, or `jdbc:mariadb:`. When false, Druid allows any JDBC connection without property validation. This config exists for backward compatibility, especially during upgrades, since enforcing the allow list can break existing JDBC-based ingestion jobs or lookups. This config is deprecated and will be removed in a future release.|true|
|`druid.access.jdbc.allowedProperties`|List of JDBC properties|Defines a list of allowed JDBC properties. Druid always enforces the list for all JDBC connections starting with `jdbc:postgresql:`, `jdbc:mysql:`, and `jdbc:mariadb:` if `druid.access.jdbc.enforceAllowedProperties` is set to true.<br/><br/>This option is tested against MySQL connector 5.1.48, MariaDB connector 2.7.4, and PostgreSQL connector 42.2.14. Other connector versions might not work.|["useSSL", "requireSSL", "ssl", "sslmode"]|
|`druid.access.jdbc.allowUnknownJdbcUrlFormat`|Boolean|When false, Druid only accepts JDBC connections starting with `jdbc:postgresql:`, `jdbc:mysql:`, or `jdbc:mariadb:`. When true, Druid allows JDBC connections to any kind of database, but only enforces `druid.access.jdbc.allowedProperties` for PostgreSQL and MySQL/MariaDB.|true|
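For illustration only, the hardened defaults described above might be expressed in `common.runtime.properties` roughly as follows (a sketch; tune the allowed properties to the connectors you actually use):

```
druid.access.jdbc.enforceAllowedProperties=true
druid.access.jdbc.allowedProperties=["useSSL", "requireSSL", "ssl", "sslmode"]
druid.access.jdbc.allowUnknownJdbcUrlFormat=false
```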


### Task Logging
2 changes: 1 addition & 1 deletion docs/design/extensions-contrib/dropwizard.md
@@ -24,7 +24,7 @@ title: "Dropwizard metrics emitter"

# Dropwizard Emitter

To use this extension, make sure to [include](../../development/extensions.md#loading-extensions) `dropwizard-emitter` extension.
To use this extension, make sure to [include](../../development/extensions.md#loading-extensions) `dropwizard-emitter` in the extensions load list.
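For reference, including an extension in the load list means adding it to `druid.extensions.loadList` in `common.runtime.properties`, e.g.:

```
druid.extensions.loadList=["dropwizard-emitter"]
```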

## Introduction

@@ -23,7 +23,7 @@ title: "Ambari Metrics Emitter"
-->


To use this Apache Druid extension, make sure to [include](../../development/extensions.md#loading-extensions) `ambari-metrics-emitter` extension.
To use this Apache Druid extension, [include](../../development/extensions.md#loading-extensions) `ambari-metrics-emitter` in the extensions load list.

## Introduction

2 changes: 1 addition & 1 deletion docs/development/extensions-contrib/cassandra.md
@@ -23,7 +23,7 @@ title: "Apache Cassandra"
-->


To use this Apache Druid extension, make sure to [include](../../development/extensions.md#loading-extensions) `druid-cassandra-storage` extension.
To use this Apache Druid extension, [include](../../development/extensions.md#loading-extensions) `druid-cassandra-storage` in the extensions load list.

[Apache Cassandra](http://www.datastax.com/what-we-offer/products-services/datastax-enterprise/apache-cassandra) can also
be leveraged for deep storage. This requires some additional Druid configuration as well as setting up the necessary
2 changes: 1 addition & 1 deletion docs/development/extensions-contrib/cloudfiles.md
@@ -23,7 +23,7 @@ title: "Rackspace Cloud Files"
-->


To use this Apache Druid extension, make sure to [include](../../development/extensions.md#loading-extensions) `druid-cloudfiles-extensions` extension.
To use this Apache Druid extension, [include](../../development/extensions.md#loading-extensions) `druid-cloudfiles-extensions` in the extensions load list.

## Deep Storage

2 changes: 1 addition & 1 deletion docs/development/extensions-contrib/distinctcount.md
@@ -23,7 +23,7 @@ title: "DistinctCount Aggregator"
-->


To use this Apache Druid extension, make sure to [include](../../development/extensions.md#loading-extensions) the `druid-distinctcount` extension.
To use this Apache Druid extension, [include](../../development/extensions.md#loading-extensions) `druid-distinctcount` in the extensions load list.

Additionally, follow these steps:

2 changes: 1 addition & 1 deletion docs/development/extensions-contrib/gce-extensions.md
@@ -23,7 +23,7 @@ title: "GCE Extensions"
-->


To use this Apache Druid (incubating) extension, make sure to [include](../../development/extensions.md#loading-extensions) `gce-extensions`.
To use this Apache Druid extension, [include](../../development/extensions.md#loading-extensions) `gce-extensions` in the extensions load list.

At the moment, this extension only enables Druid to autoscale instances in GCE.

2 changes: 1 addition & 1 deletion docs/development/extensions-contrib/graphite.md
@@ -23,7 +23,7 @@ title: "Graphite Emitter"
-->


To use this Apache Druid extension, make sure to [include](../../development/extensions.md#loading-extensions) `graphite-emitter` extension.
To use this Apache Druid extension, [include](../../development/extensions.md#loading-extensions) `graphite-emitter` in the extensions load list.

## Introduction

2 changes: 1 addition & 1 deletion docs/development/extensions-contrib/influx.md
@@ -23,7 +23,7 @@ title: "InfluxDB Line Protocol Parser"
-->


To use this Apache Druid extension, make sure to [include](../../development/extensions.md#loading-extensions) `druid-influx-extensions`.
To use this Apache Druid extension, [include](../../development/extensions.md#loading-extensions) `druid-influx-extensions` in the extensions load list.

This extension enables Druid to parse the [InfluxDB Line Protocol](https://docs.influxdata.com/influxdb/v1.5/write_protocols/line_protocol_tutorial/), a popular text-based timeseries metric serialization format.
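As an illustration (a hypothetical sample, not from the source docs), a single Line Protocol record carries a measurement name, optional tags, one or more fields, and a nanosecond timestamp:

```
cpu,host=server01,region=us-west usage_user=3.2,usage_idle=92.6 1434055562000000000
```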

2 changes: 1 addition & 1 deletion docs/development/extensions-contrib/influxdb-emitter.md
@@ -23,7 +23,7 @@ title: "InfluxDB Emitter"
-->


To use this Apache Druid extension, make sure to [include](../../development/extensions.md#loading-extensions) `druid-influxdb-emitter` extension.
To use this Apache Druid extension, [include](../../development/extensions.md#loading-extensions) `druid-influxdb-emitter` in the extensions load list.

## Introduction

2 changes: 1 addition & 1 deletion docs/development/extensions-contrib/kafka-emitter.md
@@ -23,7 +23,7 @@ title: "Kafka Emitter"
-->


To use this Apache Druid extension, make sure to [include](../../development/extensions.md#loading-extensions) `kafka-emitter` extension.
To use this Apache Druid extension, [include](../../development/extensions.md#loading-extensions) `kafka-emitter` in the extensions load list.

## Introduction

@@ -26,11 +26,7 @@ title: "Moment Sketches for Approximate Quantiles module"
This module provides aggregators for approximate quantile queries using the [momentsketch](https://github.com/stanford-futuredata/momentsketch) library.
The momentsketch provides coarse quantile estimates with less space and aggregation time overheads than traditional sketches, approaching the performance of counts and sums by reconstructing distributions from computed statistics.

To use this Apache Druid extension, make sure you [include](../../development/extensions.md#loading-extensions) the extension in your config file:

```
druid.extensions.loadList=["druid-momentsketch"]
```
To use this Apache Druid extension, [include](../../development/extensions.md#loading-extensions) `druid-momentsketch` in the extensions load list.

### Aggregator

2 changes: 1 addition & 1 deletion docs/development/extensions-contrib/opentsdb-emitter.md
@@ -23,7 +23,7 @@ title: "OpenTSDB Emitter"
-->


To use this Apache Druid extension, make sure to [include](../../development/extensions.md#loading-extensions) `opentsdb-emitter` extension.
To use this Apache Druid extension, [include](../../development/extensions.md#loading-extensions) `opentsdb-emitter` in the extensions load list.

## Introduction

2 changes: 1 addition & 1 deletion docs/development/extensions-contrib/prometheus.md
@@ -23,7 +23,7 @@ title: "Prometheus Emitter"
-->


To use this Apache Druid extension, make sure to [include](../../development/extensions.md#loading-extensions) `prometheus-emitter` extension.
To use this Apache Druid extension, [include](../../development/extensions.md#loading-extensions) `prometheus-emitter` in the extensions load list.

## Introduction

2 changes: 1 addition & 1 deletion docs/development/extensions-contrib/sqlserver.md
@@ -23,7 +23,7 @@ title: "Microsoft SQLServer"
-->


To use this Apache Druid extension, make sure to [include](../../development/extensions.md#loading-extensions) `sqlserver-metadata-storage` as an extension.
To use this Apache Druid extension, [include](../../development/extensions.md#loading-extensions) `sqlserver-metadata-storage` in the extensions load list.

## Setting up SQLServer

2 changes: 1 addition & 1 deletion docs/development/extensions-contrib/statsd.md
@@ -23,7 +23,7 @@ title: "StatsD Emitter"
-->


To use this Apache Druid extension, make sure to [include](../../development/extensions.md#loading-extensions) `statsd-emitter` extension.
To use this Apache Druid extension, [include](../../development/extensions.md#loading-extensions) `statsd-emitter` in the extensions load list.

## Introduction

2 changes: 1 addition & 1 deletion docs/development/extensions-contrib/thrift.md
@@ -23,7 +23,7 @@ title: "Thrift"
-->


To use this Apache Druid extension, make sure to [include](../../development/extensions.md#loading-extensions) `druid-thrift-extensions`.
To use this Apache Druid extension, [include](../../development/extensions.md#loading-extensions) `druid-thrift-extensions` in the extensions load list.

This extension enables Druid to ingest Thrift compact data online (`ByteBuffer`) and offline (SequenceFile of type `<Writable, BytesWritable>` or LzoThriftBlock files).

2 changes: 1 addition & 1 deletion docs/development/extensions-contrib/time-min-max.md
@@ -23,7 +23,7 @@ title: "Timestamp Min/Max aggregators"
-->


To use this Apache Druid extension, make sure to [include](../../development/extensions.md#loading-extensions) `druid-time-min-max`.
To use this Apache Druid extension, [include](../../development/extensions.md#loading-extensions) `druid-time-min-max` in the extensions load list.

These aggregators enable a more precise calculation of the min and max time of given events than the `__time` column, whose granularity is sparse (the same as the query granularity).
To use this feature, a "timeMin" or "timeMax" aggregator must be included at indexing time.
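As a sketch of what that might look like (the metric names here are hypothetical; `timeMin`/`timeMax` follow the standard name/fieldName aggregator shape):

```
"metricsSpec": [
  { "type": "timeMin", "name": "minTime", "fieldName": "__time" },
  { "type": "timeMax", "name": "maxTime", "fieldName": "__time" }
]
```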
2 changes: 1 addition & 1 deletion docs/development/extensions-core/approximate-histograms.md
@@ -23,7 +23,7 @@ title: "Approximate Histogram aggregators"
-->


To use this Apache Druid extension, make sure to [include](../../development/extensions.md#loading-extensions) `druid-histogram` as an extension.
To use this Apache Druid extension, [include](../../development/extensions.md#loading-extensions) `druid-histogram` in the extensions load list.

The `druid-histogram` extension provides an approximate histogram aggregator and a fixed buckets histogram aggregator.

2 changes: 1 addition & 1 deletion docs/development/extensions-core/avro.md
@@ -33,7 +33,7 @@ Additionally, it provides an InputFormat for reading Avro OCF files when using
[native batch indexing](../../ingestion/native-batch.md), see [Avro OCF](../../ingestion/data-formats.md#avro-ocf)
for details on how to ingest OCF files.

Make sure to [include](../../development/extensions.md#loading-extensions) `druid-avro-extensions` as an extension.
Make sure to [include](../../development/extensions.md#loading-extensions) `druid-avro-extensions` in the extensions load list.

### Avro Types

2 changes: 1 addition & 1 deletion docs/development/extensions-core/azure.md
@@ -23,7 +23,7 @@ title: "Microsoft Azure"
-->


To use this Apache Druid extension, make sure to [include](../../development/extensions.md#loading-extensions) `druid-azure-extensions` extension.
To use this Apache Druid extension, [include](../../development/extensions.md#loading-extensions) `druid-azure-extensions` in the extensions load list.

## Deep Storage

11 changes: 5 additions & 6 deletions docs/development/extensions-core/bloom-filter.md
@@ -23,13 +23,12 @@ title: "Bloom Filter"
-->


This Apache Druid extension adds the ability to both construct bloom filters from query results, and filter query results by testing
against a bloom filter. Make sure to [include](../../development/extensions.md#loading-extensions) `druid-bloom-filter` as an
extension.
To use this Apache Druid extension, [include](../../development/extensions.md#loading-extensions) `druid-bloom-filter` in the extensions load list.

A Bloom filter is a probabilistic data structure for performing a set membership check. A bloom filter is a good candidate
to use with Druid for cases where an explicit filter is impossible, e.g. filtering a query against a set of millions of
values.
This extension adds the ability to both construct Bloom filters from query results, and filter query results by testing
against a Bloom filter. A Bloom filter is a probabilistic data structure for performing a set membership check. A Bloom
filter is a good candidate to use with Druid for cases where an explicit filter is impossible, e.g. filtering a query
against a set of millions of values.

Following are some characteristics of Bloom filters:

2 changes: 1 addition & 1 deletion docs/development/extensions-core/druid-kerberos.md
@@ -25,7 +25,7 @@ title: "Kerberos"

Apache Druid Extension to enable Authentication for Druid Processes using Kerberos.
This extension adds an Authenticator which is used to protect HTTP Endpoints using the simple and protected GSSAPI negotiation mechanism [SPNEGO](https://en.wikipedia.org/wiki/SPNEGO).
Make sure to [include](../../development/extensions.md#loading-extensions) `druid-kerberos` as an extension.
Make sure to [include](../../development/extensions.md#loading-extensions) `druid-kerberos` in the extensions load list.


## Configuration
6 changes: 3 additions & 3 deletions docs/development/extensions-core/druid-lookups.md
@@ -31,12 +31,12 @@ The main goal of this cache is to speed up the access to a high latency lookup s
Thus users can define various caching strategies, or an implementation per lookup, even if the source is the same.
This module can be used side by side with other lookup modules, such as the global cached lookup module.

To use this extension please make sure to [include](../../development/extensions.md#loading-extensions) `druid-lookups-cached-single` as an extension.
To use this Apache Druid extension, [include](../extensions.md#loading-extensions) `druid-lookups-cached-single` in the extensions load list.

> If using JDBC, you will need to add your database's client JAR files to the extension's directory.
> For Postgres, the connector JAR is already included.
> For MySQL, you can get it from https://dev.mysql.com/downloads/connector/j/.
> Copy or symlink the downloaded file inside the folder `extensions/druid-lookups-cached-single` under the distribution root directory.
> See the MySQL extension documentation for instructions to obtain [MySQL](./mysql.md#installing-the-mysql-connector-library) or [MariaDB](./mysql.md#alternative-installing-the-mariadb-connector-library) connector libraries.
> Copy or symlink the downloaded file to `extensions/druid-lookups-cached-single` under the distribution root directory.

## Architecture
Generally speaking, this module can be divided into two main components: the data fetcher layer and the caching layer.
2 changes: 1 addition & 1 deletion docs/development/extensions-core/druid-ranger-security.md
@@ -24,7 +24,7 @@ title: "Apache Ranger Security"

This Apache Druid extension adds an Authorizer which implements access control for Druid, backed by [Apache Ranger](https://ranger.apache.org/). Please see [Authentication and Authorization](../../design/auth.md) for more information on the basic facilities this extension provides.

Make sure to [include](../../development/extensions.md#loading-extensions) `druid-ranger-security` as an extension.
Make sure to [include](../../development/extensions.md#loading-extensions) `druid-ranger-security` in the extensions load list.

> The latest release of Apache Ranger is at the time of writing version 2.0. This version has a dependency on `log4j 1.2.17` which has a vulnerability if you configure it to use a `SocketServer` (CVE-2019-17571). Next to that, it also includes Kafka 2.0.0 which has 2 known vulnerabilities (CVE-2019-12399, CVE-2018-17196). Kafka can be used by the audit component in Ranger, but is not required.

2 changes: 1 addition & 1 deletion docs/development/extensions-core/google.md
@@ -28,7 +28,7 @@ This extension allows you to do 2 things:
* [Ingest data](#reading-data-from-google-cloud-storage) from files stored in Google Cloud Storage.
* Write segments to [deep storage](#deep-storage) in GCS.

To use this Apache Druid extension, make sure to [include](../../development/extensions.md#loading-extensions) `druid-google-extensions` extension.
To use this Apache Druid extension, [include](../../development/extensions.md#loading-extensions) `druid-google-extensions` in the extensions load list.

### Required Configuration

2 changes: 1 addition & 1 deletion docs/development/extensions-core/hdfs.md
@@ -23,7 +23,7 @@ title: "HDFS"
-->


To use this Apache Druid extension, make sure to [include](../../development/extensions.md#loading-extensions) `druid-hdfs-storage` as an extension and run druid processes with `GOOGLE_APPLICATION_CREDENTIALS=/path/to/service_account_keyfile` in the environment.
To use this Apache Druid extension, [include](../../development/extensions.md#loading-extensions) `druid-hdfs-storage` in the extensions load list and run Druid processes with `GOOGLE_APPLICATION_CREDENTIALS=/path/to/service_account_keyfile` in the environment.

## Deep Storage

@@ -24,7 +24,7 @@ title: "Apache Kafka Lookups"

> Lookups are an [experimental](../experimental.md) feature.

To use this Apache Druid extension, make sure to [include](../../development/extensions.md#loading-extensions) `druid-lookups-cached-global` and `druid-kafka-extraction-namespace` as an extension.
To use this Apache Druid extension, [include](../../development/extensions.md#loading-extensions) `druid-lookups-cached-global` and `druid-kafka-extraction-namespace` in the extensions load list.

If you need updates to populate as promptly as possible, you can plug a Kafka topic in as a LookupExtractorFactory, where the key is the old value and the message is the desired new value (both in UTF-8).
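A lookup definition using this factory might look roughly like the following sketch (the topic name and broker address are hypothetical; check the extension docs for the exact property schema):

```
{
  "type": "kafka",
  "kafkaTopic": "lookup_topic",
  "kafkaProperties": { "bootstrap.servers": "kafka01:9092" }
}
```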

2 changes: 1 addition & 1 deletion docs/development/extensions-core/kubernetes.md
@@ -29,7 +29,7 @@ Apache Druid Extension to enable using Kubernetes API Server for node discovery

## Configuration

To use this extension please make sure to [include](../../development/extensions.md#loading-extensions) `druid-kubernetes-extensions` as an extension.
To use this Apache Druid extension, [include](../../development/extensions.md#loading-extensions) `druid-kubernetes-extensions` in the extensions load list.

This extension works together with HTTP-based segment and task management in Druid. Consequently, the following configurations must be set on all Druid nodes.

4 changes: 2 additions & 2 deletions docs/development/extensions-core/lookups-cached-global.md
@@ -25,7 +25,7 @@ title: "Globally Cached Lookups"

> Lookups are an [experimental](../experimental.md) feature.

To use this Apache Druid extension, make sure to [include](../../development/extensions.md#loading-extensions) `druid-lookups-cached-global` as an extension.
To use this Apache Druid extension, [include](../extensions.md#loading-extensions) `druid-lookups-cached-global` in the extensions load list.

## Configuration
> Static configuration is no longer supported. Lookups can be configured through
@@ -370,7 +370,7 @@ The JDBC lookups will poll a database to populate its local cache. If the `tsCol

> If using JDBC, you will need to add your database's client JAR files to the extension's directory.
> For Postgres, the connector JAR is already included.
> For MySQL, you can get it from https://dev.mysql.com/downloads/connector/j/.
> See the MySQL extension documentation for instructions to obtain [MySQL](./mysql.md#installing-the-mysql-connector-library) or [MariaDB](./mysql.md#alternative-installing-the-mariadb-connector-library) connector libraries.
> The connector JAR should reside in the classpath of Druid's main class loader.
> To add the connector JAR to the classpath, you can copy the downloaded file to `lib/` under the distribution root directory. Alternatively, create a symbolic link to the connector in the `lib` directory.
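The copy-or-symlink step above can be sketched as follows (the paths and connector filename are hypothetical; the sketch uses a temp directory so it runs anywhere — substitute your real distribution root and downloaded JAR):

```shell
# Illustrative sketch: put a JDBC connector JAR on Druid's main classpath.
DRUID_HOME=$(mktemp -d)                      # stands in for the distribution root
mkdir -p "$DRUID_HOME/lib"
cd "$DRUID_HOME"
touch mysql-connector-java-5.1.49.jar        # stands in for the downloaded connector
# Either copy the JAR into lib/ ...
cp mysql-connector-java-5.1.49.jar "$DRUID_HOME/lib/"
# ... or create a symbolic link instead:
# ln -s "$PWD/mysql-connector-java-5.1.49.jar" "$DRUID_HOME/lib/"
ls "$DRUID_HOME/lib"
```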
