.json
+curl http://static.druid.io/tranquility/releases/tranquility-distribution-0.8.3.tgz -o tranquility-distribution-0.8.3.tgz
+tar -xzf tranquility-distribution-0.8.3.tgz
+mv tranquility-distribution-0.8.3 tranquility
```
+Afterwards, in `conf/supervise/cluster/data.conf`, uncomment the `tranquility-server` line, and restart the Data server processes.
+
## Start Query Server
-Copy the Druid distribution and your edited configurations to your Query servers set aside for the Druid Brokers.
+Copy the Druid distribution and your edited configurations to your Query servers.
-On each Query server, *cd* into the distribution and run this command to start the Broker process (you may want to pipe the output to a log file):
+From the distribution root, run the following command to start the Query server:
-```bash
-java `cat conf/druid/broker/jvm.config | xargs` -cp conf/druid/_common:conf/druid/broker:lib/* org.apache.druid.cli.Main server broker
+```
+bin/start-cluster-query-server
```
-You can add more Query servers as needed based on query load.
+You can add more Query servers as needed based on query load. If you increase the number of Query servers, be sure to adjust the connection pools on your Historicals and Tasks as described in the [basic cluster tuning guide](../operations/basic-cluster-tuning.html).
## Loading data
diff --git a/docs/content/tutorials/index.md b/docs/content/tutorials/index.md
index afd58171fc21..f6302dd3e575 100644
--- a/docs/content/tutorials/index.md
+++ b/docs/content/tutorials/index.md
@@ -1,6 +1,6 @@
---
layout: doc_page
-title: "Apache Druid (incubating) Quickstart"
+title: "Apache Druid (incubating) Single-Server Quickstart"
---
-# Apache Druid (incubating) Quickstart
+# Apache Druid (incubating) Single-Server Quickstart
In this quickstart, we will download Druid and set it up on a single machine. The cluster will be ready to load data
after completing this initial setup.
@@ -63,7 +63,7 @@ In the package, you should find:
* `DISCLAIMER`, `LICENSE`, and `NOTICE` files
* `bin/*` - scripts useful for this quickstart
-* `conf/*` - template configurations for a clustered setup
+* `conf/*` - example configurations for single-server and clustered setups
* `extensions/*` - core Druid extensions
* `hadoop-dependencies/*` - Druid Hadoop dependencies
* `lib/*` - libraries and dependencies for core Druid
diff --git a/docs/content/tutorials/tutorial-batch-hadoop.md b/docs/content/tutorials/tutorial-batch-hadoop.md
index 59f2dffb8c48..26b507ebf196 100644
--- a/docs/content/tutorials/tutorial-batch-hadoop.md
+++ b/docs/content/tutorials/tutorial-batch-hadoop.md
@@ -148,13 +148,13 @@ cp /usr/local/hadoop/etc/hadoop/*.xml /shared/hadoop_xml
From the host machine, run the following, where {PATH_TO_DRUID} is replaced by the path to the Druid package.
```bash
-mkdir -p {PATH_TO_DRUID}/quickstart/tutorial/conf/druid/_common/hadoop-xml
-cp /tmp/shared/hadoop_xml/*.xml {PATH_TO_DRUID}/quickstart/tutorial/conf/druid/_common/hadoop-xml/
+mkdir -p {PATH_TO_DRUID}/conf/druid/single-server/micro-quickstart/_common/hadoop-xml
+cp /tmp/shared/hadoop_xml/*.xml {PATH_TO_DRUID}/conf/druid/single-server/micro-quickstart/_common/hadoop-xml/
```
### Update Druid segment and log storage
-In your favorite text editor, open `quickstart/tutorial/conf/druid/_common/common.runtime.properties`, and make the following edits:
+In your favorite text editor, open `conf/druid/single-server/micro-quickstart/_common/common.runtime.properties`, and make the following edits:
#### Disable local deep storage and enable HDFS deep storage
@@ -206,7 +206,7 @@ a task that loads the `wikiticker-2015-09-12-sampled.json.gz` file included in t
Let's submit the `wikipedia-index-hadoop-.json` task:
```bash
-bin/post-index-task --file quickstart/tutorial/wikipedia-index-hadoop.json
+bin/post-index-task --file quickstart/tutorial/wikipedia-index-hadoop.json --url http://localhost:8081
```
## Querying your data
@@ -219,7 +219,7 @@ This tutorial is only meant to be used together with the [query tutorial](../tut
If you wish to go through any of the other tutorials, you will need to:
* Shut down the cluster and reset the cluster state by removing the contents of the `var` directory under the druid package.
-* Revert the deep storage and task storage config back to local types in `quickstart/tutorial/conf/druid/_common/common.runtime.properties`
+* Revert the deep storage and task storage config back to local types in `conf/druid/single-server/micro-quickstart/_common/common.runtime.properties`
* Restart the cluster
This is necessary because the other ingestion tutorials will write to the same "wikipedia" datasource, and later tutorials expect the cluster to use local deep storage.
diff --git a/docs/content/tutorials/tutorial-batch.md b/docs/content/tutorials/tutorial-batch.md
index 9fd5892f29c6..84a7d2702ce6 100644
--- a/docs/content/tutorials/tutorial-batch.md
+++ b/docs/content/tutorials/tutorial-batch.md
@@ -121,7 +121,7 @@ This script will POST an ingestion task to the Druid Overlord and poll Druid unt
Run the following command from Druid package root:
```bash
-bin/post-index-task --file quickstart/tutorial/wikipedia-index.json
+bin/post-index-task --file quickstart/tutorial/wikipedia-index.json --url http://localhost:8081
```
You should see output like the following:
@@ -129,8 +129,8 @@ You should see output like the following:
```bash
Beginning indexing data for wikipedia
Task started: index_wikipedia_2018-07-27T06:37:44.323Z
-Task log: http://localhost:8090/druid/indexer/v1/task/index_wikipedia_2018-07-27T06:37:44.323Z/log
-Task status: http://localhost:8090/druid/indexer/v1/task/index_wikipedia_2018-07-27T06:37:44.323Z/status
+Task log: http://localhost:8081/druid/indexer/v1/task/index_wikipedia_2018-07-27T06:37:44.323Z/log
+Task status: http://localhost:8081/druid/indexer/v1/task/index_wikipedia_2018-07-27T06:37:44.323Z/status
Task index_wikipedia_2018-07-27T06:37:44.323Z still running...
Task index_wikipedia_2018-07-27T06:37:44.323Z still running...
Task finished with status: SUCCESS
@@ -153,7 +153,7 @@ Let's briefly discuss how we would've submitted the ingestion task without using
To submit the task, POST it to Druid in a new terminal window from the apache-druid-#{DRUIDVERSION} directory:
```bash
-curl -X 'POST' -H 'Content-Type:application/json' -d @quickstart/tutorial/wikipedia-index.json http://localhost:8090/druid/indexer/v1/task
+curl -X 'POST' -H 'Content-Type:application/json' -d @quickstart/tutorial/wikipedia-index.json http://localhost:8081/druid/indexer/v1/task
```
Which will print the ID of the task if the submission was successful:
diff --git a/docs/content/tutorials/tutorial-compaction.md b/docs/content/tutorials/tutorial-compaction.md
index 97cd8b15b87b..0051796d106c 100644
--- a/docs/content/tutorials/tutorial-compaction.md
+++ b/docs/content/tutorials/tutorial-compaction.md
@@ -41,7 +41,7 @@ For this tutorial, we'll be using the Wikipedia edits sample data, with an inges
The ingestion spec can be found at `quickstart/tutorial/compaction-init-index.json`. Let's submit that spec, which will create a datasource called `compaction-tutorial`:
```bash
-bin/post-index-task --file quickstart/tutorial/compaction-init-index.json
+bin/post-index-task --file quickstart/tutorial/compaction-init-index.json --url http://localhost:8081
```
@@ -99,7 +99,7 @@ In this tutorial example, only one compacted segment will be created per hour, a
Let's submit this task now:
```bash
-bin/post-index-task --file quickstart/tutorial/compaction-keep-granularity.json
+bin/post-index-task --file quickstart/tutorial/compaction-keep-granularity.json --url http://localhost:8081
```
After the task finishes, refresh the [segments view](http://localhost:8888/unified-console.html#segments).
@@ -158,7 +158,7 @@ Note that `segmentGranularity` is set to `DAY` in this compaction task spec.
Let's submit this task now:
```bash
-bin/post-index-task --file quickstart/tutorial/compaction-day-granularity.json
+bin/post-index-task --file quickstart/tutorial/compaction-day-granularity.json --url http://localhost:8081
```
It will take a bit of time before the Coordinator marks the old input segments as unused, so you may see an intermediate state with 25 total segments. Eventually, there will only be one DAY granularity segment:
diff --git a/docs/content/tutorials/tutorial-delete-data.md b/docs/content/tutorials/tutorial-delete-data.md
index a4b1f7e727f5..46fbbdc6f7c8 100644
--- a/docs/content/tutorials/tutorial-delete-data.md
+++ b/docs/content/tutorials/tutorial-delete-data.md
@@ -36,7 +36,7 @@ In this tutorial, we will use the Wikipedia edits data, with an indexing spec th
Let's load this initial data:
```bash
-bin/post-index-task --file quickstart/tutorial/deletion-index.json
+bin/post-index-task --file quickstart/tutorial/deletion-index.json --url http://localhost:8081
```
When the load finishes, open [http://localhost:8888/unified-console.html#datasources](http://localhost:8888/unified-console.html#datasources) in a browser.
diff --git a/docs/content/tutorials/tutorial-ingestion-spec.md b/docs/content/tutorials/tutorial-ingestion-spec.md
index 29b0ea90d66b..5f05d182593c 100644
--- a/docs/content/tutorials/tutorial-ingestion-spec.md
+++ b/docs/content/tutorials/tutorial-ingestion-spec.md
@@ -634,7 +634,7 @@ We've finished defining the ingestion spec, it should now look like the followin
From the apache-druid-#{DRUIDVERSION} package root, run the following command:
```bash
-bin/post-index-task --file quickstart/ingestion-tutorial-index.json
+bin/post-index-task --file quickstart/ingestion-tutorial-index.json --url http://localhost:8081
```
After the script completes, we will query the data.
diff --git a/docs/content/tutorials/tutorial-retention.md b/docs/content/tutorials/tutorial-retention.md
index dafca329a156..6f5c91c6a414 100644
--- a/docs/content/tutorials/tutorial-retention.md
+++ b/docs/content/tutorials/tutorial-retention.md
@@ -38,7 +38,7 @@ For this tutorial, we'll be using the Wikipedia edits sample data, with an inges
The ingestion spec can be found at `quickstart/tutorial/retention-index.json`. Let's submit that spec, which will create a datasource called `retention-tutorial`:
```bash
-bin/post-index-task --file quickstart/tutorial/retention-index.json
+bin/post-index-task --file quickstart/tutorial/retention-index.json --url http://localhost:8081
```
After the ingestion completes, go to [http://localhost:8888/unified-console.html#datasources](http://localhost:8888/unified-console.html#datasources) in a browser to access the Druid Console's datasource view.
diff --git a/docs/content/tutorials/tutorial-rollup.md b/docs/content/tutorials/tutorial-rollup.md
index 483a4636707b..e4ca65818a4c 100644
--- a/docs/content/tutorials/tutorial-rollup.md
+++ b/docs/content/tutorials/tutorial-rollup.md
@@ -117,7 +117,7 @@ We will see how these definitions are used after we load this data.
From the apache-druid-#{DRUIDVERSION} package root, run the following command:
```bash
-bin/post-index-task --file quickstart/tutorial/rollup-index.json
+bin/post-index-task --file quickstart/tutorial/rollup-index.json --url http://localhost:8081
```
After the script completes, we will query the data.
diff --git a/docs/content/tutorials/tutorial-tranquility.md b/docs/content/tutorials/tutorial-tranquility.md
index 10376cda3332..670a91e44792 100644
--- a/docs/content/tutorials/tutorial-tranquility.md
+++ b/docs/content/tutorials/tutorial-tranquility.md
@@ -48,13 +48,13 @@ The startup scripts for the tutorial will expect the contents of the Tranquility
## Enable Tranquility Server
-- In your `quickstart/tutorial/conf/tutorial-cluster.conf`, uncomment the `tranquility-server` line.
-- Stop your *bin/supervise* command (CTRL-C) and then restart it by again running `bin/supervise -c quickstart/tutorial/conf/tutorial-cluster.conf`.
+- In your `conf/supervise/single-server/micro-quickstart.conf`, uncomment the `tranquility-server` line.
+- Stop your *bin/supervise* command (CTRL-C) and then restart it by again running `bin/supervise -c conf/supervise/single-server/micro-quickstart.conf`.
As part of the output of *supervise* you should see something like:
```bash
-Running command[tranquility-server], logging to[/stage/apache-druid-#{DRUIDVERSION}/var/sv/tranquility-server.log]: tranquility/bin/tranquility server -configFile quickstart/tutorial/conf/tranquility/server.json -Ddruid.extensions.loadList=[]
+Running command[tranquility-server], logging to[/stage/apache-druid-#{DRUIDVERSION}/var/sv/tranquility-server.log]: tranquility/bin/tranquility server -configFile conf/tranquility/server.json -Ddruid.extensions.loadList=[]
```
You can check the log file in `var/sv/tranquility-server.log` to confirm that the server is starting up properly.
@@ -96,7 +96,7 @@ Please follow the [query tutorial](../tutorials/tutorial-query.html) to run some
If you wish to go through any of the other ingestion tutorials, you will need to shut down the cluster and reset the cluster state by removing the contents of the `var` directory under the druid package, as the other tutorials will write to the same "wikipedia" datasource.
-When cleaning up after running this Tranquility tutorial, it is also necessary to recomment the `tranquility-server` line in `quickstart/tutorial/conf/tutorial-cluster.conf` before restarting the cluster.
+When cleaning up after running this Tranquility tutorial, it is also necessary to comment out the `tranquility-server` line again in `conf/supervise/single-server/micro-quickstart.conf` before restarting the cluster.
## Further reading
diff --git a/docs/content/tutorials/tutorial-transform-spec.md b/docs/content/tutorials/tutorial-transform-spec.md
index 083268d86165..b30eebbdaccf 100644
--- a/docs/content/tutorials/tutorial-transform-spec.md
+++ b/docs/content/tutorials/tutorial-transform-spec.md
@@ -135,7 +135,7 @@ This filter selects the first 3 rows, and it will exclude the final "lion" row i
Let's submit this task now, which has been included at `quickstart/tutorial/transform-index.json`:
```bash
-bin/post-index-task --file quickstart/tutorial/transform-index.json
+bin/post-index-task --file quickstart/tutorial/transform-index.json --url http://localhost:8081
```
## Query the transformed data
diff --git a/docs/content/tutorials/tutorial-update-data.md b/docs/content/tutorials/tutorial-update-data.md
index d55ce97a1c48..ce0abfc8f2fd 100644
--- a/docs/content/tutorials/tutorial-update-data.md
+++ b/docs/content/tutorials/tutorial-update-data.md
@@ -44,7 +44,7 @@ The spec we'll use for this tutorial is located at `quickstart/tutorial/updates-
Let's submit that task:
```bash
-bin/post-index-task --file quickstart/tutorial/updates-init-index.json
+bin/post-index-task --file quickstart/tutorial/updates-init-index.json --url http://localhost:8081
```
We have three initial rows containing an "animal" dimension and "number" metric:
@@ -72,7 +72,7 @@ Note that this task reads input from `quickstart/tutorial/updates-data2.json`, a
Let's submit that task:
```bash
-bin/post-index-task --file quickstart/tutorial/updates-overwrite-index.json
+bin/post-index-task --file quickstart/tutorial/updates-overwrite-index.json --url http://localhost:8081
```
When Druid finishes loading the new segment from this overwrite task, the "tiger" row now has the value "lion", the "aardvark" row has a different number, and the "giraffe" row has been replaced. It may take a couple of minutes for the changes to take effect:
@@ -98,7 +98,7 @@ The `quickstart/tutorial/updates-append-index.json` task spec has been configure
Let's submit that task:
```bash
-bin/post-index-task --file quickstart/tutorial/updates-append-index.json
+bin/post-index-task --file quickstart/tutorial/updates-append-index.json --url http://localhost:8081
```
When Druid finishes loading the new segment from this overwrite task, the new rows will have been added to the datasource. Note that roll-up occurred for the "lion" row:
@@ -127,7 +127,7 @@ The `quickstart/tutorial/updates-append-index2.json` task spec reads input from
Let's submit that task:
```bash
-bin/post-index-task --file quickstart/tutorial/updates-append-index2.json
+bin/post-index-task --file quickstart/tutorial/updates-append-index2.json --url http://localhost:8081
```
When the new data is loaded, we can see two additional rows after "octopus". Note that the new "bear" row with number 222 has not been rolled up with the existing bear-111 row, because the new data is held in a separate segment.
diff --git a/examples/bin/run-druid b/examples/bin/run-druid
index 82695f60f874..4db0a2f446a5 100755
--- a/examples/bin/run-druid
+++ b/examples/bin/run-druid
@@ -39,5 +39,5 @@ WHEREAMI="$(cd "$WHEREAMI" && pwd)"
cd "$WHEREAMI/.."
exec java `cat "$CONFDIR"/"$WHATAMI"/jvm.config | xargs` \
- -cp "$CONFDIR"/"$WHATAMI":"$CONFDIR"/_common:"$CONFDIR"/_common/hadoop-xml:"$WHEREAMI/../lib/*" \
+ -cp "$CONFDIR"/"$WHATAMI":"$CONFDIR"/_common:"$CONFDIR"/_common/hadoop-xml:"$CONFDIR"/../_common:"$CONFDIR"/../_common/hadoop-xml:"$WHEREAMI/../lib/*" \
`cat "$CONFDIR"/$WHATAMI/main.config | xargs`
diff --git a/examples/conf/druid/cluster/data/historical/jvm.config b/examples/conf/druid/cluster/data/historical/jvm.config
index 3141abd754ac..891312f3c689 100644
--- a/examples/conf/druid/cluster/data/historical/jvm.config
+++ b/examples/conf/druid/cluster/data/historical/jvm.config
@@ -1,7 +1,7 @@
-server
-Xms8g
-Xmx8g
--XX:MaxDirectMemorySize=14g
+-XX:MaxDirectMemorySize=13g
-XX:+ExitOnOutOfMemoryError
-Duser.timezone=UTC
-Dfile.encoding=UTF-8
diff --git a/examples/conf/druid/cluster/data/historical/runtime.properties b/examples/conf/druid/cluster/data/historical/runtime.properties
index 5ee3a1c211e2..326e6eebdfae 100644
--- a/examples/conf/druid/cluster/data/historical/runtime.properties
+++ b/examples/conf/druid/cluster/data/historical/runtime.properties
@@ -26,7 +26,7 @@ druid.server.http.numThreads=60
# Processing threads and buffers
druid.processing.buffer.sizeBytes=500000000
druid.processing.numMergeBuffers=4
-druid.processing.numThreads=16
+druid.processing.numThreads=15
druid.processing.tmpDir=var/druid/processing
# Segment storage
@@ -37,4 +37,4 @@ druid.server.maxSize=300000000000
druid.historical.cache.useCache=true
druid.historical.cache.populateCache=true
druid.cache.type=caffeine
-druid.cache.sizeInBytes=2000000000
+druid.cache.sizeInBytes=256000000
diff --git a/examples/conf/druid/cluster/data/middleManager/runtime.properties b/examples/conf/druid/cluster/data/middleManager/runtime.properties
index 8806fd1a27ec..4101ebf94b9a 100644
--- a/examples/conf/druid/cluster/data/middleManager/runtime.properties
+++ b/examples/conf/druid/cluster/data/middleManager/runtime.properties
@@ -24,7 +24,7 @@ druid.plaintextPort=8091
druid.worker.capacity=4
# Task launch parameters
-druid.indexer.runner.javaOpts=-server -Xms1g -Xmx1g -Duser.timezone=UTC -Dfile.encoding=UTF-8 -XX:+ExitOnOutOfMemoryError -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
+druid.indexer.runner.javaOpts=-server -Xms1g -Xmx1g -XX:MaxDirectMemorySize=1g -Duser.timezone=UTC -Dfile.encoding=UTF-8 -XX:+ExitOnOutOfMemoryError -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
druid.indexer.task.baseTaskDir=var/druid/task
# HTTP server threads
@@ -32,7 +32,7 @@ druid.server.http.numThreads=60
# Processing threads and buffers on Peons
druid.indexer.fork.property.druid.processing.numMergeBuffers=2
-druid.indexer.fork.property.druid.processing.buffer.sizeBytes=500000000
+druid.indexer.fork.property.druid.processing.buffer.sizeBytes=100000000
druid.indexer.fork.property.druid.processing.numThreads=1
# Hadoop indexing
diff --git a/examples/conf/druid/cluster/master/coordinator/jvm.config b/examples/conf/druid/cluster/master/coordinator-overlord/jvm.config
similarity index 88%
rename from examples/conf/druid/cluster/master/coordinator/jvm.config
rename to examples/conf/druid/cluster/master/coordinator-overlord/jvm.config
index 084add76057d..5df7d606725a 100644
--- a/examples/conf/druid/cluster/master/coordinator/jvm.config
+++ b/examples/conf/druid/cluster/master/coordinator-overlord/jvm.config
@@ -1,7 +1,8 @@
-server
--Xms1g
--Xmx1g
+-Xms15g
+-Xmx15g
-XX:+ExitOnOutOfMemoryError
+-XX:+UseG1GC
-Duser.timezone=UTC
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=var/tmp
diff --git a/examples/conf/druid/cluster/master/coordinator/main.config b/examples/conf/druid/cluster/master/coordinator-overlord/main.config
similarity index 100%
rename from examples/conf/druid/cluster/master/coordinator/main.config
rename to examples/conf/druid/cluster/master/coordinator-overlord/main.config
diff --git a/examples/conf/druid/cluster/master/coordinator/runtime.properties b/examples/conf/druid/cluster/master/coordinator-overlord/runtime.properties
similarity index 77%
rename from examples/conf/druid/cluster/master/coordinator/runtime.properties
rename to examples/conf/druid/cluster/master/coordinator-overlord/runtime.properties
index 52dd09a0e64c..8928cc9f8edc 100644
--- a/examples/conf/druid/cluster/master/coordinator/runtime.properties
+++ b/examples/conf/druid/cluster/master/coordinator-overlord/runtime.properties
@@ -22,3 +22,12 @@ druid.plaintextPort=8081
druid.coordinator.startDelay=PT10S
druid.coordinator.period=PT5S
+
+# Run the overlord service in the coordinator process
+druid.coordinator.asOverlord.enabled=true
+druid.coordinator.asOverlord.overlordService=druid/overlord
+
+druid.indexer.queue.startDelay=PT5S
+
+druid.indexer.runner.type=remote
+druid.indexer.storage.type=metadata
diff --git a/examples/conf/druid/cluster/master/overlord/jvm.config b/examples/conf/druid/cluster/master/overlord/jvm.config
deleted file mode 100644
index 2bb6641a778d..000000000000
--- a/examples/conf/druid/cluster/master/overlord/jvm.config
+++ /dev/null
@@ -1,8 +0,0 @@
--server
--Xms1g
--Xmx1g
--XX:+ExitOnOutOfMemoryError
--Duser.timezone=UTC
--Dfile.encoding=UTF-8
--Djava.io.tmpdir=var/tmp
--Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
diff --git a/examples/conf/druid/cluster/master/overlord/main.config b/examples/conf/druid/cluster/master/overlord/main.config
deleted file mode 100644
index dcf691a380ea..000000000000
--- a/examples/conf/druid/cluster/master/overlord/main.config
+++ /dev/null
@@ -1 +0,0 @@
-org.apache.druid.cli.Main server overlord
diff --git a/examples/conf/druid/cluster/master/overlord/runtime.properties b/examples/conf/druid/cluster/master/overlord/runtime.properties
deleted file mode 100644
index 093758c22972..000000000000
--- a/examples/conf/druid/cluster/master/overlord/runtime.properties
+++ /dev/null
@@ -1,26 +0,0 @@
-#
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements. See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership. The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License. You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied. See the License for the
-# specific language governing permissions and limitations
-# under the License.
-#
-
-druid.service=druid/overlord
-druid.plaintextPort=8090
-
-druid.indexer.queue.startDelay=PT5S
-
-druid.indexer.runner.type=remote
-druid.indexer.storage.type=metadata
diff --git a/examples/conf/druid/cluster/query/broker/jvm.config b/examples/conf/druid/cluster/query/broker/jvm.config
index a66f7513f986..442a7b21bb8c 100644
--- a/examples/conf/druid/cluster/query/broker/jvm.config
+++ b/examples/conf/druid/cluster/query/broker/jvm.config
@@ -1,7 +1,7 @@
-server
--Xms24g
--Xmx24g
--XX:MaxDirectMemorySize=12g
+-Xms12g
+-Xmx12g
+-XX:MaxDirectMemorySize=6g
-XX:+ExitOnOutOfMemoryError
-Duser.timezone=UTC
-Dfile.encoding=UTF-8
diff --git a/examples/conf/druid/cluster/query/broker/runtime.properties b/examples/conf/druid/cluster/query/broker/runtime.properties
index 6d4b3699fd85..6873025f93f7 100644
--- a/examples/conf/druid/cluster/query/broker/runtime.properties
+++ b/examples/conf/druid/cluster/query/broker/runtime.properties
@@ -25,11 +25,11 @@ druid.server.http.numThreads=60
# HTTP client settings
druid.broker.http.numConnections=50
-druid.broker.http.maxQueuedBytes=5000000
+druid.broker.http.maxQueuedBytes=10000000
# Processing threads and buffers
druid.processing.buffer.sizeBytes=500000000
-druid.processing.numMergeBuffers=16
+druid.processing.numMergeBuffers=6
druid.processing.numThreads=1
druid.processing.tmpDir=var/druid/processing
diff --git a/examples/conf/druid/single-server/large/broker/jvm.config b/examples/conf/druid/single-server/large/broker/jvm.config
index da8c305bdb3e..6c43c24dbb4b 100644
--- a/examples/conf/druid/single-server/large/broker/jvm.config
+++ b/examples/conf/druid/single-server/large/broker/jvm.config
@@ -1,7 +1,7 @@
-server
--Xms16g
--Xmx16g
--XX:MaxDirectMemorySize=8g
+-Xms12g
+-Xmx12g
+-XX:MaxDirectMemorySize=11g
-XX:+ExitOnOutOfMemoryError
-XX:+UseG1GC
-Duser.timezone=UTC
diff --git a/examples/conf/druid/single-server/large/broker/runtime.properties b/examples/conf/druid/single-server/large/broker/runtime.properties
index a38e324b052d..d32929c42010 100644
--- a/examples/conf/druid/single-server/large/broker/runtime.properties
+++ b/examples/conf/druid/single-server/large/broker/runtime.properties
@@ -25,11 +25,11 @@ druid.server.http.numThreads=60
# HTTP client settings
druid.broker.http.numConnections=50
-druid.broker.http.maxQueuedBytes=5000000
+druid.broker.http.maxQueuedBytes=10000000
# Processing threads and buffers
druid.processing.buffer.sizeBytes=500000000
-druid.processing.numMergeBuffers=8
+druid.processing.numMergeBuffers=16
druid.processing.numThreads=1
druid.processing.tmpDir=var/druid/processing
diff --git a/examples/conf/druid/single-server/large/coordinator-overlord/jvm.config b/examples/conf/druid/single-server/large/coordinator-overlord/jvm.config
index 04b4729e66b5..5df7d606725a 100644
--- a/examples/conf/druid/single-server/large/coordinator-overlord/jvm.config
+++ b/examples/conf/druid/single-server/large/coordinator-overlord/jvm.config
@@ -1,6 +1,6 @@
-server
--Xms24g
--Xmx24g
+-Xms15g
+-Xmx15g
-XX:+ExitOnOutOfMemoryError
-XX:+UseG1GC
-Duser.timezone=UTC
diff --git a/examples/conf/druid/single-server/large/historical/jvm.config b/examples/conf/druid/single-server/large/historical/jvm.config
index bd616d11d061..16e1f5d7825b 100644
--- a/examples/conf/druid/single-server/large/historical/jvm.config
+++ b/examples/conf/druid/single-server/large/historical/jvm.config
@@ -1,7 +1,7 @@
-server
-Xms16g
-Xmx16g
--XX:MaxDirectMemorySize=32g
+-XX:MaxDirectMemorySize=25g
-XX:+ExitOnOutOfMemoryError
-XX:+UseG1GC
-Duser.timezone=UTC
diff --git a/examples/conf/druid/single-server/large/historical/runtime.properties b/examples/conf/druid/single-server/large/historical/runtime.properties
index dcb0004a2f9e..540fba6d9d10 100644
--- a/examples/conf/druid/single-server/large/historical/runtime.properties
+++ b/examples/conf/druid/single-server/large/historical/runtime.properties
@@ -26,7 +26,7 @@ druid.server.http.numThreads=60
# Processing threads and buffers
druid.processing.buffer.sizeBytes=500000000
druid.processing.numMergeBuffers=8
-druid.processing.numThreads=32
+druid.processing.numThreads=31
druid.processing.tmpDir=var/druid/processing
# Segment storage
@@ -37,4 +37,4 @@ druid.server.maxSize=300000000000
druid.historical.cache.useCache=true
druid.historical.cache.populateCache=true
druid.cache.type=caffeine
-druid.cache.sizeInBytes=1000000000
+druid.cache.sizeInBytes=512000000
diff --git a/examples/conf/druid/single-server/large/middleManager/runtime.properties b/examples/conf/druid/single-server/large/middleManager/runtime.properties
index 54b462f68403..0583b523d722 100644
--- a/examples/conf/druid/single-server/large/middleManager/runtime.properties
+++ b/examples/conf/druid/single-server/large/middleManager/runtime.properties
@@ -24,7 +24,7 @@ druid.plaintextPort=8091
druid.worker.capacity=8
# Task launch parameters
-druid.indexer.runner.javaOpts=-server -Xms1g -Xmx1g -Duser.timezone=UTC -Dfile.encoding=UTF-8 -XX:+ExitOnOutOfMemoryError -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
+druid.indexer.runner.javaOpts=-server -Xms1g -Xmx1g -XX:MaxDirectMemorySize=1g -Duser.timezone=UTC -Dfile.encoding=UTF-8 -XX:+ExitOnOutOfMemoryError -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
druid.indexer.task.baseTaskDir=var/druid/task
# HTTP server threads
diff --git a/examples/conf/druid/single-server/medium/broker/jvm.config b/examples/conf/druid/single-server/medium/broker/jvm.config
index bdb241176937..a4bf3d910971 100644
--- a/examples/conf/druid/single-server/medium/broker/jvm.config
+++ b/examples/conf/druid/single-server/medium/broker/jvm.config
@@ -1,7 +1,7 @@
-server
-Xms8g
-Xmx8g
--XX:MaxDirectMemorySize=16g
+-XX:MaxDirectMemorySize=5g
-XX:+ExitOnOutOfMemoryError
-XX:+UseG1GC
-Duser.timezone=UTC
diff --git a/examples/conf/druid/single-server/medium/broker/runtime.properties b/examples/conf/druid/single-server/medium/broker/runtime.properties
index 17e881490d57..5681b8a729e4 100644
--- a/examples/conf/druid/single-server/medium/broker/runtime.properties
+++ b/examples/conf/druid/single-server/medium/broker/runtime.properties
@@ -25,7 +25,7 @@ druid.server.http.numThreads=60
# HTTP client settings
druid.broker.http.numConnections=50
-druid.broker.http.maxQueuedBytes=5000000
+druid.broker.http.maxQueuedBytes=10000000
# Processing threads and buffers
druid.processing.buffer.sizeBytes=500000000
diff --git a/examples/conf/druid/single-server/medium/coordinator-overlord/jvm.config b/examples/conf/druid/single-server/medium/coordinator-overlord/jvm.config
index 38d2e1ebbee8..dbddd50ce8d0 100644
--- a/examples/conf/druid/single-server/medium/coordinator-overlord/jvm.config
+++ b/examples/conf/druid/single-server/medium/coordinator-overlord/jvm.config
@@ -1,6 +1,6 @@
-server
--Xms12g
--Xmx12g
+-Xms9g
+-Xmx9g
-XX:+ExitOnOutOfMemoryError
-XX:+UseG1GC
-Duser.timezone=UTC
diff --git a/examples/conf/druid/single-server/medium/historical/runtime.properties b/examples/conf/druid/single-server/medium/historical/runtime.properties
index 1a70a71fce2f..326e6eebdfae 100644
--- a/examples/conf/druid/single-server/medium/historical/runtime.properties
+++ b/examples/conf/druid/single-server/medium/historical/runtime.properties
@@ -26,7 +26,7 @@ druid.server.http.numThreads=60
# Processing threads and buffers
druid.processing.buffer.sizeBytes=500000000
druid.processing.numMergeBuffers=4
-druid.processing.numThreads=16
+druid.processing.numThreads=15
druid.processing.tmpDir=var/druid/processing
# Segment storage
diff --git a/examples/conf/druid/single-server/medium/middleManager/runtime.properties b/examples/conf/druid/single-server/medium/middleManager/runtime.properties
index 55d9f1cbb29d..4101ebf94b9a 100644
--- a/examples/conf/druid/single-server/medium/middleManager/runtime.properties
+++ b/examples/conf/druid/single-server/medium/middleManager/runtime.properties
@@ -24,7 +24,7 @@ druid.plaintextPort=8091
druid.worker.capacity=4
# Task launch parameters
-druid.indexer.runner.javaOpts=-server -Xms1g -Xmx1g -Duser.timezone=UTC -Dfile.encoding=UTF-8 -XX:+ExitOnOutOfMemoryError -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
+druid.indexer.runner.javaOpts=-server -Xms1g -Xmx1g -XX:MaxDirectMemorySize=1g -Duser.timezone=UTC -Dfile.encoding=UTF-8 -XX:+ExitOnOutOfMemoryError -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
druid.indexer.task.baseTaskDir=var/druid/task
# HTTP server threads
diff --git a/examples/conf/druid/single-server/micro-quickstart/middleManager/runtime.properties b/examples/conf/druid/single-server/micro-quickstart/middleManager/runtime.properties
index 8be6e568dc6d..280787bc6729 100644
--- a/examples/conf/druid/single-server/micro-quickstart/middleManager/runtime.properties
+++ b/examples/conf/druid/single-server/micro-quickstart/middleManager/runtime.properties
@@ -24,7 +24,7 @@ druid.plaintextPort=8091
druid.worker.capacity=2
# Task launch parameters
-druid.indexer.runner.javaOpts=-server -Xms1g -Xmx1g -Duser.timezone=UTC -Dfile.encoding=UTF-8 -XX:+ExitOnOutOfMemoryError -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
+druid.indexer.runner.javaOpts=-server -Xms1g -Xmx1g -XX:MaxDirectMemorySize=1g -Duser.timezone=UTC -Dfile.encoding=UTF-8 -XX:+ExitOnOutOfMemoryError -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
druid.indexer.task.baseTaskDir=var/druid/task
# HTTP server threads
diff --git a/examples/conf/druid/single-server/small/coordinator-overlord/jvm.config b/examples/conf/druid/single-server/small/coordinator-overlord/jvm.config
index c853ea840d76..34176680c418 100644
--- a/examples/conf/druid/single-server/small/coordinator-overlord/jvm.config
+++ b/examples/conf/druid/single-server/small/coordinator-overlord/jvm.config
@@ -1,6 +1,6 @@
-server
--Xms6g
--Xmx6g
+-Xms4500m
+-Xmx4500m
-XX:+ExitOnOutOfMemoryError
-XX:+UseG1GC
-Duser.timezone=UTC
diff --git a/examples/conf/druid/single-server/small/historical/runtime.properties b/examples/conf/druid/single-server/small/historical/runtime.properties
index 144a029c4ddf..6cfc704259df 100644
--- a/examples/conf/druid/single-server/small/historical/runtime.properties
+++ b/examples/conf/druid/single-server/small/historical/runtime.properties
@@ -26,7 +26,7 @@ druid.server.http.numThreads=50
# Processing threads and buffers
druid.processing.buffer.sizeBytes=500000000
druid.processing.numMergeBuffers=2
-druid.processing.numThreads=8
+druid.processing.numThreads=7
druid.processing.tmpDir=var/druid/processing
# Segment storage
diff --git a/examples/conf/druid/single-server/small/middleManager/runtime.properties b/examples/conf/druid/single-server/small/middleManager/runtime.properties
index 1665e46f9f12..f9a8bae10a0b 100644
--- a/examples/conf/druid/single-server/small/middleManager/runtime.properties
+++ b/examples/conf/druid/single-server/small/middleManager/runtime.properties
@@ -24,7 +24,7 @@ druid.plaintextPort=8091
druid.worker.capacity=3
# Task launch parameters
-druid.indexer.runner.javaOpts=-server -Xms1g -Xmx1g -Duser.timezone=UTC -Dfile.encoding=UTF-8 -XX:+ExitOnOutOfMemoryError -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
+druid.indexer.runner.javaOpts=-server -Xms1g -Xmx1g -XX:MaxDirectMemorySize=1g -Duser.timezone=UTC -Dfile.encoding=UTF-8 -XX:+ExitOnOutOfMemoryError -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
druid.indexer.task.baseTaskDir=var/druid/task
# HTTP server threads
diff --git a/examples/conf/druid/single-server/xlarge/broker/jvm.config b/examples/conf/druid/single-server/xlarge/broker/jvm.config
index a8844b275ebf..f83ad0e18b61 100644
--- a/examples/conf/druid/single-server/xlarge/broker/jvm.config
+++ b/examples/conf/druid/single-server/xlarge/broker/jvm.config
@@ -1,6 +1,6 @@
-server
--Xms24g
--Xmx24g
+-Xms16g
+-Xmx16g
-XX:MaxDirectMemorySize=12g
-XX:+ExitOnOutOfMemoryError
-XX:+UseG1GC
diff --git a/examples/conf/druid/single-server/xlarge/broker/runtime.properties b/examples/conf/druid/single-server/xlarge/broker/runtime.properties
index 6d4b3699fd85..d32929c42010 100644
--- a/examples/conf/druid/single-server/xlarge/broker/runtime.properties
+++ b/examples/conf/druid/single-server/xlarge/broker/runtime.properties
@@ -25,7 +25,7 @@ druid.server.http.numThreads=60
# HTTP client settings
druid.broker.http.numConnections=50
-druid.broker.http.maxQueuedBytes=5000000
+druid.broker.http.maxQueuedBytes=10000000
# Processing threads and buffers
druid.processing.buffer.sizeBytes=500000000
diff --git a/examples/conf/druid/single-server/xlarge/coordinator-overlord/jvm.config b/examples/conf/druid/single-server/xlarge/coordinator-overlord/jvm.config
index 04b4729e66b5..f3ca0fdbb9ec 100644
--- a/examples/conf/druid/single-server/xlarge/coordinator-overlord/jvm.config
+++ b/examples/conf/druid/single-server/xlarge/coordinator-overlord/jvm.config
@@ -1,6 +1,6 @@
-server
--Xms24g
--Xmx24g
+-Xms18g
+-Xmx18g
-XX:+ExitOnOutOfMemoryError
-XX:+UseG1GC
-Duser.timezone=UTC
diff --git a/examples/conf/druid/single-server/xlarge/historical/runtime.properties b/examples/conf/druid/single-server/xlarge/historical/runtime.properties
index 11856c5a2e0b..c322fda6d04b 100644
--- a/examples/conf/druid/single-server/xlarge/historical/runtime.properties
+++ b/examples/conf/druid/single-server/xlarge/historical/runtime.properties
@@ -26,7 +26,7 @@ druid.server.http.numThreads=60
# Processing threads and buffers
druid.processing.buffer.sizeBytes=500000000
druid.processing.numMergeBuffers=16
-druid.processing.numThreads=64
+druid.processing.numThreads=63
druid.processing.tmpDir=var/druid/processing
# Segment storage
diff --git a/examples/conf/druid/single-server/xlarge/middleManager/runtime.properties b/examples/conf/druid/single-server/xlarge/middleManager/runtime.properties
index 889d20deb18a..28732de6cb6a 100644
--- a/examples/conf/druid/single-server/xlarge/middleManager/runtime.properties
+++ b/examples/conf/druid/single-server/xlarge/middleManager/runtime.properties
@@ -24,7 +24,7 @@ druid.plaintextPort=8091
druid.worker.capacity=16
# Task launch parameters
-druid.indexer.runner.javaOpts=-server -Xms1g -Xmx1g -Duser.timezone=UTC -Dfile.encoding=UTF-8 -XX:+ExitOnOutOfMemoryError -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
+druid.indexer.runner.javaOpts=-server -Xms1g -Xmx1g -XX:MaxDirectMemorySize=1g -Duser.timezone=UTC -Dfile.encoding=UTF-8 -XX:+ExitOnOutOfMemoryError -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
druid.indexer.task.baseTaskDir=var/druid/task
# HTTP server threads
@@ -32,7 +32,7 @@ druid.server.http.numThreads=60
# Processing threads and buffers on Peons
druid.indexer.fork.property.druid.processing.numMergeBuffers=2
-druid.indexer.fork.property.druid.processing.buffer.sizeBytes=500000000
+druid.indexer.fork.property.druid.processing.buffer.sizeBytes=100000000
druid.indexer.fork.property.druid.processing.numThreads=1
# Hadoop indexing
diff --git a/examples/conf/supervise/cluster/data.conf b/examples/conf/supervise/cluster/data.conf
index 3047288de800..32f61d960648 100644
--- a/examples/conf/supervise/cluster/data.conf
+++ b/examples/conf/supervise/cluster/data.conf
@@ -1,11 +1,7 @@
:verify bin/verify-java
-:verify bin/verify-version-check
-historical bin/run-druid historical conf/druid/cluster/data/historical
-middleManager bin/run-druid middleManager conf/druid/cluster/data/middleManager
+historical bin/run-druid historical conf/druid/cluster/data
+middleManager bin/run-druid middleManager conf/druid/cluster/data
# Uncomment to use Tranquility Server
-#!p95 tranquility-server bin/tranquility server -configFile conf/tranquility/server.json
-
-# Uncomment to use Tranquility Kafka
-#!p95 tranquility-kafka bin/tranquility kafka -configFile conf/tranquility/kafka.json
+#!p95 tranquility-server tranquility/bin/tranquility server -configFile conf/tranquility/server.json -Ddruid.extensions.loadList=[]
diff --git a/examples/conf/supervise/cluster/master-no-zk.conf b/examples/conf/supervise/cluster/master-no-zk.conf
index 8b22448f24a5..2730387b8dd9 100644
--- a/examples/conf/supervise/cluster/master-no-zk.conf
+++ b/examples/conf/supervise/cluster/master-no-zk.conf
@@ -1,5 +1,3 @@
:verify bin/verify-java
-:verify bin/verify-version-check
-coordinator bin/run-druid coordinator conf/druid/cluster/data/coordinator
-!p80 overlord bin/run-druid overlord conf/druid/cluster/data/overlord
+coordinator-overlord bin/run-druid coordinator-overlord conf/druid/cluster/master
diff --git a/examples/conf/supervise/cluster/master-with-zk.conf b/examples/conf/supervise/cluster/master-with-zk.conf
index 8eeea0cff8a5..23998274f27a 100644
--- a/examples/conf/supervise/cluster/master-with-zk.conf
+++ b/examples/conf/supervise/cluster/master-with-zk.conf
@@ -1,6 +1,4 @@
:verify bin/verify-java
-:verify bin/verify-version-check
!p10 zk bin/run-zk conf
-coordinator bin/run-druid coordinator conf/druid/cluster/data/coordinator
-!p80 overlord bin/run-druid overlord conf/druid/cluster/data/overlord
+coordinator-overlord bin/run-druid coordinator-overlord conf/druid/cluster/master
diff --git a/examples/conf/supervise/cluster/query.conf b/examples/conf/supervise/cluster/query.conf
index cd6ec376be91..ead75fd9854a 100644
--- a/examples/conf/supervise/cluster/query.conf
+++ b/examples/conf/supervise/cluster/query.conf
@@ -1,5 +1,4 @@
:verify bin/verify-java
-:verify bin/verify-version-check
-broker bin/run-druid broker conf/druid/cluster/data/broker
-router bin/run-druid router conf/druid/cluster/data/router
+broker bin/run-druid broker conf/druid/cluster/query
+router bin/run-druid router conf/druid/cluster/query