12 changes: 6 additions & 6 deletions docs/configuration/index.md
@@ -1908,12 +1908,12 @@ The broker uses processing configs for nested groupBy queries.
|`druid.processing.fifo`|If the processing queue should treat tasks of equal priority in a FIFO manner|`true`|
|`druid.processing.tmpDir`|Path where temporary files created while processing a query should be stored. If specified, this configuration takes priority over the default `java.io.tmpdir` path.|path represented by `java.io.tmpdir`|
|`druid.processing.merge.useParallelMergePool`|Enable automatic parallel merging for Brokers on a dedicated async ForkJoinPool. If `false`, instead merges will be done serially on the `HTTP` thread pool.|`true`|
-|`druid.processing.merge.pool.parallelism`|Size of ForkJoinPool. Note that the default configuration assumes that the value returned by `Runtime.getRuntime().availableProcessors()` represents 2 hyper-threads per physical core, and multiplies this value by `0.75` in attempt to size `1.5` times the number of _physical_ cores.|`Runtime.getRuntime().availableProcessors() * 0.75` (rounded up)|
-|`druid.processing.merge.pool.defaultMaxQueryParallelism`|Default maximum number of parallel merge tasks per query. Note that the default configuration assumes that the value returned by `Runtime.getRuntime().availableProcessors()` represents 2 hyper-threads per physical core, and multiplies this value by `0.5` in attempt to size to the number of _physical_ cores.|`Runtime.getRuntime().availableProcessors() * 0.5` (rounded up)|
-|`druid.processing.merge.pool.awaitShutdownMillis`|Time to wait for merge ForkJoinPool tasks to complete before ungracefully stopping on process shutdown in milliseconds.|`60_000`|
-|`druid.processing.merge.task.targetRunTimeMillis`|Ideal run-time of each ForkJoinPool merge task, before forking off a new task to continue merging sequences.|100|
-|`druid.processing.merge.task.initialYieldNumRows`|Number of rows to yield per ForkJoinPool merge task, before forking off a new task to continue merging sequences.|16384|
-|`druid.processing.merge.task.smallBatchNumRows`|Size of result batches to operate on in ForkJoinPool merge tasks.|4096|
+|`druid.processing.merge.parallelism`|Size of ForkJoinPool. Note that the default configuration assumes that the value returned by `Runtime.getRuntime().availableProcessors()` represents 2 hyper-threads per physical core, and multiplies this value by `0.75` in an attempt to size to `1.5` times the number of _physical_ cores.|`Runtime.getRuntime().availableProcessors() * 0.75` (rounded up)|
+|`druid.processing.merge.defaultMaxQueryParallelism`|Default maximum number of parallel merge tasks per query. Note that the default configuration assumes that the value returned by `Runtime.getRuntime().availableProcessors()` represents 2 hyper-threads per physical core, and multiplies this value by `0.5` in an attempt to size to the number of _physical_ cores.|`Runtime.getRuntime().availableProcessors() * 0.5` (rounded up)|
+|`druid.processing.merge.awaitShutdownMillis`|Time to wait for merge ForkJoinPool tasks to complete before ungracefully stopping on process shutdown in milliseconds.|`60_000`|
+|`druid.processing.merge.targetRunTimeMillis`|Ideal run-time of each ForkJoinPool merge task, before forking off a new task to continue merging sequences.|100|
+|`druid.processing.merge.initialYieldNumRows`|Number of rows to yield per ForkJoinPool merge task, before forking off a new task to continue merging sequences.|16384|
+|`druid.processing.merge.smallBatchNumRows`|Size of result batches to operate on in ForkJoinPool merge tasks.|4096|
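
For quick reference, a minimal sketch of a Broker `runtime.properties` fragment using the renamed keys. The parallelism values are illustrative, not recommendations; the remaining numbers mirror the documented defaults above.

```properties
# Renamed Broker parallel-merge settings; the old druid.processing.merge.pool.*
# and druid.processing.merge.task.* keys were removed by this change.
druid.processing.merge.useParallelMergePool=true
# Illustrative sizing; defaults derive from availableProcessors() as noted above.
druid.processing.merge.parallelism=12
druid.processing.merge.defaultMaxQueryParallelism=8
# These values mirror the documented defaults.
druid.processing.merge.awaitShutdownMillis=60000
druid.processing.merge.targetRunTimeMillis=100
druid.processing.merge.initialYieldNumRows=16384
druid.processing.merge.smallBatchNumRows=4096
```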

The amount of direct memory needed by Druid is at least
`druid.processing.buffer.sizeBytes * (druid.processing.numMergeBuffers + 1)`. You can
3 changes: 0 additions & 3 deletions docs/configuration/logging.md
@@ -95,9 +95,6 @@ The following example log4j2.xml is based upon the micro quickstart:
</Logger>

<!-- Quieter logging at startup -->
<Logger name="org.skife.config" level="warn" additivity="false">
<Appender-ref ref="FileAppender"/>
</Logger>
<Logger name="com.sun.jersey.guice" level="warn" additivity="false">
<Appender-ref ref="FileAppender"/>
</Logger>
6 changes: 3 additions & 3 deletions docs/querying/query-context.md
@@ -58,9 +58,9 @@ See [SQL query context](sql-query-context.md) for other query context parameters
|`serializeDateTimeAsLong`| `false` | If true, DateTime is serialized as long in the result returned by Broker and the data transportation between Broker and compute process|
|`serializeDateTimeAsLongInner`| `false` | If true, DateTime is serialized as long in the data transportation between Broker and compute process|
|`enableParallelMerge`|`true`|Enable parallel result merging on the Broker. Note that `druid.processing.merge.useParallelMergePool` must be enabled for this setting to take effect. See [Broker configuration](../configuration/index.md#broker) for more details.|
-|`parallelMergeParallelism`|`druid.processing.merge.pool.parallelism`|Maximum number of parallel threads to use for parallel result merging on the Broker. See [Broker configuration](../configuration/index.md#broker) for more details.|
-|`parallelMergeInitialYieldRows`|`druid.processing.merge.task.initialYieldNumRows`|Number of rows to yield per ForkJoinPool merge task for parallel result merging on the Broker, before forking off a new task to continue merging sequences. See [Broker configuration](../configuration/index.md#broker) for more details.|
-|`parallelMergeSmallBatchRows`|`druid.processing.merge.task.smallBatchNumRows`|Size of result batches to operate on in ForkJoinPool merge tasks for parallel result merging on the Broker. See [Broker configuration](../configuration/index.md#broker) for more details.|
+|`parallelMergeParallelism`|`druid.processing.merge.parallelism`|Maximum number of parallel threads to use for parallel result merging on the Broker. See [Broker configuration](../configuration/index.md#broker) for more details.|
+|`parallelMergeInitialYieldRows`|`druid.processing.merge.initialYieldNumRows`|Number of rows to yield per ForkJoinPool merge task for parallel result merging on the Broker, before forking off a new task to continue merging sequences. See [Broker configuration](../configuration/index.md#broker) for more details.|
+|`parallelMergeSmallBatchRows`|`druid.processing.merge.smallBatchNumRows`|Size of result batches to operate on in ForkJoinPool merge tasks for parallel result merging on the Broker. See [Broker configuration](../configuration/index.md#broker) for more details.|
|`useFilterCNF`|`false`| If true, Druid will attempt to convert the query filter to Conjunctive Normal Form (CNF). During query processing, columns can be pre-filtered by intersecting the bitmap indexes of all values that match the eligible filters, often greatly reducing the raw number of rows which need to be scanned. But this effect only happens for the top level filter, or individual clauses of a top level 'and' filter. As such, filters in CNF potentially have a higher chance to utilize a large amount of bitmap indexes on string columns during pre-filtering. However, this setting should be used with great caution, as it can sometimes have a negative effect on performance, and in some cases, the act of computing CNF of a filter can be expensive. We recommend hand tuning your filters to produce an optimal form if possible, or at least verifying through experimentation that using this parameter actually improves your query performance with no ill-effects.|
|`secondaryPartitionPruning`|`true`|Enable secondary partition pruning on the Broker. The Broker will always prune unnecessary segments from the input scan based on a filter on time intervals, but if the data is further partitioned with hash or range partitioning, this option will enable additional pruning based on a filter on secondary partition dimensions.|
|`debug`| `false` | Flag indicating whether to enable debugging outputs for the query. When set to false, no additional logs will be produced (logs produced will be entirely dependent on your logging level). When set to true, the following additional logs will be produced:<br />- Log the stack trace of the exception (if any) produced by the query |
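
To make the mapping concrete, a minimal sketch of a query context overriding the parallel-merge parameters per query (keys as in the table above; values illustrative):

```json
{
  "enableParallelMerge": true,
  "parallelMergeParallelism": 4,
  "parallelMergeInitialYieldRows": 16384,
  "parallelMergeSmallBatchRows": 4096
}
```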
3 changes: 0 additions & 3 deletions examples/conf/druid/auto/_common/log4j2.xml
@@ -76,9 +76,6 @@
</Logger>

<!-- Quieter logging at startup -->
<Logger name="org.skife.config" level="warn" additivity="false">
<Appender-ref ref="FileAppender"/>
</Logger>
<Logger name="com.sun.jersey.guice" level="warn" additivity="false">
<Appender-ref ref="FileAppender"/>
</Logger>
3 changes: 0 additions & 3 deletions examples/conf/druid/cluster/_common/log4j2.xml
@@ -76,9 +76,6 @@
</Logger>

<!-- Quieter logging at startup -->
<Logger name="org.skife.config" level="warn" additivity="false">
<Appender-ref ref="FileAppender"/>
</Logger>
<Logger name="com.sun.jersey.guice" level="warn" additivity="false">
<Appender-ref ref="FileAppender"/>
</Logger>
3 changes: 0 additions & 3 deletions examples/conf/druid/single-server/large/_common/log4j2.xml
@@ -76,9 +76,6 @@
</Logger>

<!-- Quieter logging at startup -->
<Logger name="org.skife.config" level="warn" additivity="false">
<Appender-ref ref="FileAppender"/>
</Logger>
<Logger name="com.sun.jersey.guice" level="warn" additivity="false">
<Appender-ref ref="FileAppender"/>
</Logger>
3 changes: 0 additions & 3 deletions examples/conf/druid/single-server/medium/_common/log4j2.xml
@@ -76,9 +76,6 @@
</Logger>

<!-- Quieter logging at startup -->
<Logger name="org.skife.config" level="warn" additivity="false">
<Appender-ref ref="FileAppender"/>
</Logger>
<Logger name="com.sun.jersey.guice" level="warn" additivity="false">
<Appender-ref ref="FileAppender"/>
</Logger>
@@ -76,9 +76,6 @@
</Logger>

<!-- Quieter logging at startup -->
<Logger name="org.skife.config" level="warn" additivity="false">
<Appender-ref ref="FileAppender"/>
</Logger>
<Logger name="com.sun.jersey.guice" level="warn" additivity="false">
<Appender-ref ref="FileAppender"/>
</Logger>
@@ -76,9 +76,6 @@
</Logger>

<!-- Quieter logging at startup -->
<Logger name="org.skife.config" level="warn" additivity="false">
<Appender-ref ref="FileAppender"/>
</Logger>
<Logger name="com.sun.jersey.guice" level="warn" additivity="false">
<Appender-ref ref="FileAppender"/>
</Logger>
3 changes: 0 additions & 3 deletions examples/conf/druid/single-server/small/_common/log4j2.xml
@@ -76,9 +76,6 @@
</Logger>

<!-- Quieter logging at startup -->
<Logger name="org.skife.config" level="warn" additivity="false">
<Appender-ref ref="FileAppender"/>
</Logger>
<Logger name="com.sun.jersey.guice" level="warn" additivity="false">
<Appender-ref ref="FileAppender"/>
</Logger>
3 changes: 0 additions & 3 deletions examples/conf/druid/single-server/xlarge/_common/log4j2.xml
@@ -76,9 +76,6 @@
</Logger>

<!-- Quieter logging at startup -->
<Logger name="org.skife.config" level="warn" additivity="false">
<Appender-ref ref="FileAppender"/>
</Logger>
<Logger name="com.sun.jersey.guice" level="warn" additivity="false">
<Appender-ref ref="FileAppender"/>
</Logger>
@@ -35,9 +35,9 @@
import org.apache.druid.discovery.DruidNodeDiscoveryProvider;
import org.apache.druid.discovery.NodeRole;
import org.apache.druid.guice.AnnouncerModule;
+import org.apache.druid.guice.BrokerProcessingModule;
import org.apache.druid.guice.JsonConfigProvider;
import org.apache.druid.guice.LazySingleton;
-import org.apache.druid.guice.LegacyBrokerParallelMergeConfigModule;
import org.apache.druid.guice.ManageLifecycle;
import org.apache.druid.guice.PolyBind;
import org.apache.druid.guice.SQLMetadataStorageDruidModule;
@@ -508,8 +508,7 @@ private static Injector makeInjector(
new AnnouncerModule(),
new DiscoveryModule(),
// Dependencies from other modules
-        new LegacyBrokerParallelMergeConfigModule(),
// Dependencies from other modules
new BrokerProcessingModule(),
new StorageNodeModule(),
new MSQExternalDataSourceModule(),

10 changes: 0 additions & 10 deletions licenses.yaml
@@ -2493,16 +2493,6 @@ libraries:

---

-name: Config Magic
-license_category: binary
-module: java-core
-license_name: Apache License version 2.0
-version: 0.9
-libraries:
-  - org.skife.config: config-magic
-
----

name: Apache Hadoop
license_category: binary
module: hadoop-client
11 changes: 0 additions & 11 deletions pom.xml
@@ -388,17 +388,6 @@
<artifactId>airline</artifactId>
<version>2.8.4</version>
</dependency>
-      <dependency>
-        <groupId>org.skife.config</groupId>
-        <artifactId>config-magic</artifactId>
-        <version>0.9</version>
-        <exclusions>
-          <exclusion>
-            <groupId>org.slf4j</groupId>
-            <artifactId>slf4j-api</artifactId>
-          </exclusion>
-        </exclusions>
-      </dependency>
<dependency>
<groupId>net.minidev</groupId>
<artifactId>json-smart</artifactId>
4 changes: 0 additions & 4 deletions processing/pom.xml
@@ -65,10 +65,6 @@
<groupId>com.fasterxml.jackson.dataformat</groupId>
<artifactId>jackson-dataformat-smile</artifactId>
</dependency>
-    <dependency>
-      <groupId>org.skife.config</groupId>
-      <artifactId>config-magic</artifactId>
-    </dependency>
<dependency>
<groupId>commons-io</groupId>
<artifactId>commons-io</artifactId>
10 changes: 0 additions & 10 deletions processing/src/main/java/org/apache/druid/guice/ConfigModule.java
@@ -21,13 +21,9 @@

import com.google.inject.Binder;
import com.google.inject.Module;
-import com.google.inject.Provides;
-import org.apache.druid.java.util.common.config.Config;
-import org.skife.config.ConfigurationObjectFactory;

import javax.validation.Validation;
import javax.validation.Validator;
-import java.util.Properties;

/**
*/
@@ -39,10 +35,4 @@ public void configure(Binder binder)
binder.bind(Validator.class).toInstance(Validation.buildDefaultValidatorFactory().getValidator());
binder.bind(JsonConfigurator.class).in(LazySingleton.class);
}

-  @Provides @LazySingleton
-  public ConfigurationObjectFactory makeFactory(Properties props)
-  {
-    return Config.createFactory(props);
-  }
}
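
With config-magic gone, property-backed configs flow through Druid's Jackson-based path (`JsonConfigurator` remains bound above, and `JsonConfigProvider` is imported in the CLI change earlier in this diff). A minimal sketch of that binding pattern; the `ExampleMergeConfig*` names and the field shown are hypothetical, not part of this PR:

```java
import com.fasterxml.jackson.annotation.JsonProperty;
import com.google.inject.Binder;
import com.google.inject.Module;
import org.apache.druid.guice.JsonConfigProvider;

// Hypothetical config class: fields are populated by Jackson from properties
// under the bound prefix, then checked by the javax.validation Validator.
class ExampleMergeConfig
{
  @JsonProperty
  private int parallelism = 8;

  public int getParallelism()
  {
    return parallelism;
  }
}

// Hypothetical module showing the replacement pattern: bind a property prefix
// to a Jackson-deserialized config object instead of a config-magic proxy.
class ExampleMergeConfigModule implements Module
{
  @Override
  public void configure(Binder binder)
  {
    JsonConfigProvider.bind(binder, "druid.processing.merge", ExampleMergeConfig.class);
  }
}
```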

This file was deleted.

@@ -31,7 +31,6 @@
import org.apache.druid.guice.annotations.Json;
import org.apache.druid.guice.annotations.JsonNonNull;
import org.apache.druid.guice.annotations.Smile;
-import org.skife.config.ConfigurationObjectFactory;

import javax.validation.Validator;
import java.util.Properties;
@@ -40,7 +39,6 @@
public class DruidSecondaryModule implements Module
{
private final Properties properties;
-  private final ConfigurationObjectFactory factory;
private final ObjectMapper jsonMapper;
private final ObjectMapper jsonMapperOnlyNonNullValueSerialization;
private final ObjectMapper smileMapper;
@@ -49,15 +47,13 @@ public class DruidSecondaryModule
@Inject
public DruidSecondaryModule(
Properties properties,
-      ConfigurationObjectFactory factory,
@Json ObjectMapper jsonMapper,
@JsonNonNull ObjectMapper jsonMapperOnlyNonNullValueSerialization,
@Smile ObjectMapper smileMapper,
Validator validator
)
{
this.properties = properties;
-    this.factory = factory;
this.jsonMapper = jsonMapper;
this.jsonMapperOnlyNonNullValueSerialization = jsonMapperOnlyNonNullValueSerialization;
this.smileMapper = smileMapper;
@@ -69,7 +65,6 @@ public void configure(Binder binder)
{
binder.install(new DruidGuiceExtensions());
binder.bind(Properties.class).toInstance(properties);
-    binder.bind(ConfigurationObjectFactory.class).toInstance(factory);
binder.bind(ObjectMapper.class).to(Key.get(ObjectMapper.class, Json.class));
binder.bind(Validator.class).toInstance(validator);
binder.bind(JsonConfigurator.class);
@@ -165,6 +165,51 @@ public void validate()
docsLink
);
}

+    validateRemovedProcessingConfigs();
  }

+  private void validateRemovedProcessingConfigs()
+  {
+    checkDeletedConfigAndThrow(
+        "druid.processing.merge.task.initialYieldNumRows",
+        "druid.processing.merge.initialYieldNumRows"
+    );
+    checkDeletedConfigAndThrow(
+        "druid.processing.merge.task.targetRunTimeMillis",
+        "druid.processing.merge.targetRunTimeMillis"
+    );
+    checkDeletedConfigAndThrow(
+        "druid.processing.merge.task.smallBatchNumRows",
+        "druid.processing.merge.smallBatchNumRows"
+    );
+
+    checkDeletedConfigAndThrow(
+        "druid.processing.merge.pool.awaitShutdownMillis",
+        "druid.processing.merge.awaitShutdownMillis"
+    );
+    checkDeletedConfigAndThrow(
+        "druid.processing.merge.pool.parallelism",
+        "druid.processing.merge.parallelism"
+    );
+    checkDeletedConfigAndThrow(
+        "druid.processing.merge.pool.defaultMaxQueryParallelism",
+        "druid.processing.merge.defaultMaxQueryParallelism"
+    );
+  }
+
+  /**
+   * Checks if a deleted config is present in the properties and throws an ISE.
+   */
+  private void checkDeletedConfigAndThrow(String deletedConfigName, String replaceConfigName)
+  {
+    if (properties.getProperty(deletedConfigName) != null) {
+      throw new ISE(
+          "Config[%s] has been removed. Please use config[%s] instead.",
+          deletedConfigName,
+          replaceConfigName
+      );
+    }
+  }
}
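
For operators, the practical effect of the check above: per `checkDeletedConfigAndThrow`, any of the six removed keys now fails fast with an `ISE` whose text follows the format string in the code. A sketch of the failure mode:

```
# runtime.properties still contains a removed key:
druid.processing.merge.pool.parallelism=16

# validation throws (message per the format string above):
Config[druid.processing.merge.pool.parallelism] has been removed.
Please use config[druid.processing.merge.parallelism] instead.
```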
