From 740589d8dd04df2d05b4c8f3046eeecd2575e2fd Mon Sep 17 00:00:00 2001
From: Charles Smith
Date: Mon, 22 Feb 2021 14:29:44 -0800
Subject: [PATCH 01/18] first pass compaction refactor. includes updated
 behavior for queryGranularity. removes duplicated doc

---
 docs/ingestion/compaction.md            | 153 +++++++++++++++++++++
 docs/ingestion/data-management.md       | 173 +-----------------------
 docs/operations/segment-optimization.md |   5 +-
 website/i18n/en.json                    |   3 +
 4 files changed, 167 insertions(+), 167 deletions(-)
 create mode 100644 docs/ingestion/compaction.md

diff --git a/docs/ingestion/compaction.md b/docs/ingestion/compaction.md
new file mode 100644
index 000000000000..ffcff2b09cad
--- /dev/null
+++ b/docs/ingestion/compaction.md
@@ -0,0 +1,153 @@
---
id: compaction
title: "Compaction"
description: "Defines compaction and automatic compaction (auto-compaction or autocompaction) as a strategy for segment optimization. Use cases for compaction. Describes compaction task configuration."
---

Compaction in Apache Druid is a strategy to optimize segment size. Compaction tasks read an existing set of segments for a given time range and combine the data into a new "compacted" set of segments. The compacted segments are generally larger, but there are fewer of them.

Compaction can sometimes increase performance because it reduces the per-segment processing and memory overhead required for ingestion and for querying paths. See [Segment optimization](../operations/segment-optimization.md) for guidance to determine if compaction will help in your case.

## Data handling with compaction
During compaction, Druid overwrites the original set of segments with the compacted set without modifying the data.

Druid retains the query granularity for the compacted segments. If segments have different query granularities before compaction, Druid chooses the finest level of granularity for the resulting compacted segment. For example, if a compaction task combines two segments, one with day query granularity and one with minute query granularity, the resulting segment uses minute query granularity.

> In Apache Druid 0.21.0 and prior, Druid set the granularity for compacted segments to the default granularity of `NONE` regardless of the query granularity of the original segments.

## Types of segment compaction
You can configure the Druid Coordinator to perform automatic compaction (also called auto-compaction) for a datasource. Using a segment search policy, automatic compaction periodically identifies segments for compaction, from newest to oldest, and submits compaction tasks.

Automatic compaction works in most use cases and should be your first option. To learn more about automatic compaction, see [Compacting Segments](../design/coordinator.md#compacting-segments).

In cases where you require more control over compaction, you can manually submit compaction tasks. For example, when:
- Automatic compaction is too slow.
- You want to force compaction for a specific time range.
- Compacting recent data before older data is suboptimal in your environment.

TBD are there feature gaps where automatic compaction doesn't work?

## Setting up a manual compaction task

To perform a manual compaction, you submit a compaction task.
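You can submit the spec to the Overlord through the regular task API. The following is a sketch, assuming a quickstart-style deployment where the Router proxies the task endpoint on `localhost:8888` and the spec described below is saved as `compaction-task.json` (both assumptions for illustration):

```bash
# Submit the compaction task spec; the response should include the new task's ID.
curl -X POST -H 'Content-Type: application/json' \
  -d @compaction-task.json \
  http://localhost:8888/druid/indexer/v1/task
```

You can then monitor the task's progress in the web console or through the task status API.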
Compaction tasks merge all segments for the defined interval according to the following syntax:

```json
{
  "type": "compact",
  "id": <task_id>,
  "dataSource": <task_datasource>,
  "ioConfig": <IO config>,
  "dimensionsSpec": <custom dimensionsSpec>,
  "metricsSpec": <custom metricsSpec>,
  "segmentGranularity": <segment granularity after compaction>,
  "tuningConfig": <parallel indexing task tuningConfig>,
  "context": <task context>
}
```

|Field|Description|Required|
|-----|-----------|--------|
|`type`|Task type. Should be `compact`|Yes|
|`id`|Task id|No|
|`dataSource`|DataSource name to be compacted|Yes|
|`ioConfig`|ioConfig for compaction task. See [Compaction IOConfig](#compaction-ioconfig) for details.|Yes|
|`dimensionsSpec`|Custom dimensionsSpec. The compaction task uses the specified dimensionsSpec if one exists, instead of generating one.|No|
|`metricsSpec`|Custom metricsSpec. The compaction task uses the specified metricsSpec rather than generating one.|No|
|`segmentGranularity`|When set, the compaction task changes the segment granularity for the given interval. See `segmentGranularity` of [`granularitySpec`](index.md#granularityspec) for more details.|No|
|`tuningConfig`|[Parallel indexing task tuningConfig](../ingestion/native-batch.md#tuningconfig)|No|
|`context`|[Task context](../ingestion/tasks.md#context)|No|

To control the number of result segments per time chunk, you can set [maxRowsPerSegment](../configuration/index.md#compaction-dynamic-configuration) or [numShards](../ingestion/native-batch.md#tuningconfig).

> You can run multiple compaction tasks at the same time. For example, you can run 12 compaction tasks per month instead of running a single task for the entire year.

A compaction task internally generates an `index` task spec for performing compaction work with some fixed parameters. For example, its `inputSource` is always the [DruidInputSource](native-batch.md#druid-input-source), and `dimensionsSpec` and `metricsSpec` include all dimensions and metrics of the input segments by default.

Compaction tasks exit without doing anything and issue a failure status code if:
- the interval you specify has no data segments loaded, or
- the interval you specify is empty.

The output segment can have different metadata from the input segments unless all input segments have the same metadata.

### Dimension handling
Apache Druid supports schema changes. Therefore, dimensions can be different across segments even if they are a part of the same dataSource. If the input segments have different dimensions, the output segment includes all dimensions of the input segments.

Even when the input segments have the same set of dimensions, the dimension order or the data type of dimensions can be different. For example, the data type of some dimensions can be changed from `string` to primitive types, or the order of dimensions can be changed for better locality. In this case, the dimensions of recent segments precede those of older segments in terms of data types and ordering, because more recent segments are more likely to have the new desired order and data types.

If you want to use your own ordering and types, you can specify a custom `dimensionsSpec` in the compaction task spec.

### Rollup
Druid only rolls up the output segment when `rollup` is set for all input segments.
See [Roll-up](../ingestion/index.md#rollup) for more details.
You can check whether your segments are rolled up by using [Segment Metadata Queries](../querying/segmentmetadataquery.md#analysistypes).
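As an illustration, a minimal `segmentMetadata` query that reports rollup status could look like the following sketch, where the datasource name and interval are placeholders:

```json
{
  "queryType": "segmentMetadata",
  "dataSource": "wikipedia",
  "intervals": ["2017-01-01/2018-01-01"],
  "analysisTypes": ["rollup"]
}
```

Each segment entry in the response should carry a `rollup` flag; the output of compaction is only rolled up when that flag is `true` for every input segment.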
### Example compaction task
The following JSON illustrates a compaction task to compact _all segments_ within the interval `2017-01-01/2018-01-01` and create new segments:

```json
{
  "type" : "compact",
  "dataSource" : "wikipedia",
  "ioConfig" : {
    "type": "compact",
    "inputSpec": {
      "type": "interval",
      "interval": "2017-01-01/2018-01-01"
    }
  }
}
```

Since `segmentGranularity` is null, Druid retains the original segment granularity unchanged when compaction is complete.

### Compaction IOConfig

The compaction IOConfig requires specifying `inputSpec` as seen below.

|Field|Description|Required|
|-----|-----------|--------|
|`type`|Task type. Should be `compact`|Yes|
|`inputSpec`|Input specification|Yes|

There are two supported types of `inputSpec`.

The interval `inputSpec` is:

|Field|Description|Required|
|-----|-----------|--------|
|`type`|Task type. Should be `interval`|Yes|
|`interval`|Interval to compact|Yes|

The segments `inputSpec` is:

|Field|Description|Required|
|-----|-----------|--------|
|`type`|Task type. Should be `segments`|Yes|
|`segments`|A list of segment IDs|Yes|

## Learn more
See the following topics for more information:
- [Segment optimization](../operations/segment-optimization.md) for guidance to determine if compaction will help in your case.
- [Compacting Segments](../design/coordinator.md#compacting-segments) for more on automatic compaction.
- [Compaction Configuration API](../operations/api-reference.md#compaction-configuration) and [Compaction Configuration](../configuration/index.md#compaction-dynamic-configuration) for configuration information.

diff --git a/docs/ingestion/data-management.md b/docs/ingestion/data-management.md
index f0fddda8d23d..0c63b51e6580 100644
--- a/docs/ingestion/data-management.md
+++ b/docs/ingestion/data-management.md
@@ -21,173 +21,9 @@ title: "Data management"
 ~ specific language governing permissions and limitations
 ~ under the License.
 -->
Within the context of this topic, data management refers to Apache Druid's data maintenance capabilities for existing datasources. There are several options to help you keep your data relevant and to help your Druid cluster remain performant.

## Schema changes

Schemas for datasources can change at any time and Apache Druid supports different schemas among segments.

### Replacing segments

Druid uniquely identifies segments using the datasource, interval, version, and partition number. The partition number is only visible in the segment id if there are multiple segments created for some granularity of time. For example, if you have hourly segments, but you have more data in an hour than a single segment can hold, you can create multiple segments for the same hour. These segments will share the same datasource, interval, and version, but have linearly increasing partition numbers.

```
foo_2015-01-01/2015-01-02_v1_0
foo_2015-01-01/2015-01-02_v1_1
foo_2015-01-01/2015-01-02_v1_2
```

In the example segments above, the dataSource = foo, interval = 2015-01-01/2015-01-02, version = v1, partitionNum = 0. If at some later point in time, you reindex the data with a new schema, the newly created segments will have a higher version id.

```
foo_2015-01-01/2015-01-02_v2_0
foo_2015-01-01/2015-01-02_v2_1
foo_2015-01-01/2015-01-02_v2_2
```

Druid batch indexing (either Hadoop-based or IndexTask-based) guarantees atomic updates on an interval-by-interval basis.
-In our example, until all `v2` segments for `2015-01-01/2015-01-02` are loaded in a Druid cluster, queries exclusively use `v1` segments. -Once all `v2` segments are loaded and queryable, all queries ignore `v1` segments and switch to the `v2` segments. -Shortly afterwards, the `v1` segments are unloaded from the cluster. - -Note that updates that span multiple segment intervals are only atomic within each interval. They are not atomic across the entire update. -For example, you have segments such as the following: - -``` -foo_2015-01-01/2015-01-02_v1_0 -foo_2015-01-02/2015-01-03_v1_1 -foo_2015-01-03/2015-01-04_v1_2 -``` - -`v2` segments will be loaded into the cluster as soon as they are built and replace `v1` segments for the period of time the -segments overlap. Before v2 segments are completely loaded, your cluster may have a mixture of `v1` and `v2` segments. - -``` -foo_2015-01-01/2015-01-02_v1_0 -foo_2015-01-02/2015-01-03_v2_1 -foo_2015-01-03/2015-01-04_v1_2 -``` - -In this case, queries may hit a mixture of `v1` and `v2` segments. - -### Different schemas among segments - -Druid segments for the same datasource may have different schemas. If a string column (dimension) exists in one segment but not -another, queries that involve both segments still work. Queries for the segment missing the dimension will behave as if the dimension has only null values. -Similarly, if one segment has a numeric column (metric) but another does not, queries on the segment missing the -metric will generally "do the right thing". Aggregations over this missing metric behave as if the metric were missing. - - - -## Compaction and reindexing - -Compaction is a type of overwrite operation, which reads an existing set of segments, combines them into a new set with larger but fewer segments, and overwrites the original set with the new compacted set, without changing the data that is stored. - -For performance reasons, it is sometimes beneficial to compact a set of segments into a set of larger but fewer segments, as there is some per-segment processing and memory overhead in both the ingestion and querying paths. - -Compaction tasks merge all segments of the given interval. The syntax is: - -```json -{ - "type": "compact", - "id": , - "dataSource": , - "ioConfig": , - "dimensionsSpec" , - "metricsSpec" , - "segmentGranularity": , - "tuningConfig" , - "context": -} -``` - -|Field|Description|Required| -|-----|-----------|--------| -|`type`|Task type. Should be `compact`|Yes| -|`id`|Task id|No| -|`dataSource`|DataSource name to be compacted|Yes| -|`ioConfig`|ioConfig for compaction task. See [Compaction IOConfig](#compaction-ioconfig) for details.|Yes| -|`dimensionsSpec`|Custom dimensionsSpec. Compaction task will use this dimensionsSpec if exist instead of generating one. See below for more details.|No| -|`metricsSpec`|Custom metricsSpec. Compaction task will use this metricsSpec if specified rather than generating one.|No| -|`segmentGranularity`|If this is set, compactionTask will change the segment granularity for the given interval. See `segmentGranularity` of [`granularitySpec`](index.md#granularityspec) for more details. 
See the below table for the behavior.|No| -|`tuningConfig`|[Parallel indexing task tuningConfig](../ingestion/native-batch.md#tuningconfig)|No| -|`context`|[Task context](../ingestion/tasks.md#context)|No| - - -An example of compaction task is - -```json -{ - "type" : "compact", - "dataSource" : "wikipedia", - "ioConfig" : { - "type": "compact", - "inputSpec": { - "type": "interval", - "interval": "2017-01-01/2018-01-01" - } - } -} -``` - -This compaction task reads _all segments_ of the interval `2017-01-01/2018-01-01` and results in new segments. -Since `segmentGranularity` is null, the original segment granularity will be remained and not changed after compaction. -To control the number of result segments per time chunk, you can set [maxRowsPerSegment](../configuration/index.md#compaction-dynamic-configuration) or [numShards](../ingestion/native-batch.md#tuningconfig). -Please note that you can run multiple compactionTasks at the same time. For example, you can run 12 compactionTasks per month instead of running a single task for the entire year. - -A compaction task internally generates an `index` task spec for performing compaction work with some fixed parameters. -For example, its `inputSource` is always the [DruidInputSource](native-batch.md#druid-input-source), and `dimensionsSpec` and `metricsSpec` -include all dimensions and metrics of the input segments by default. - -Compaction tasks will exit with a failure status code, without doing anything, if the interval you specify has no -data segments loaded in it (or if the interval you specify is empty). - -The output segment can have different metadata from the input segments unless all input segments have the same metadata. - -- Dimensions: since Apache Druid supports schema change, the dimensions can be different across segments even if they are a part of the same dataSource. -If the input segments have different dimensions, the output segment basically includes all dimensions of the input segments. -However, even if the input segments have the same set of dimensions, the dimension order or the data type of dimensions can be different. For example, the data type of some dimensions can be -changed from `string` to primitive types, or the order of dimensions can be changed for better locality. -In this case, the dimensions of recent segments precede that of old segments in terms of data types and the ordering. -This is because more recent segments are more likely to have the new desired order and data types. If you want to use -your own ordering and types, you can specify a custom `dimensionsSpec` in the compaction task spec. -- Roll-up: the output segment is rolled up only when `rollup` is set for all input segments. -See [Roll-up](../ingestion/index.md#rollup) for more details. -You can check that your segments are rolled up or not by using [Segment Metadata Queries](../querying/segmentmetadataquery.md#analysistypes). - - -### Compaction IOConfig - -The compaction IOConfig requires specifying `inputSpec` as seen below. - -|Field|Description|Required| -|-----|-----------|--------| -|`type`|Task type. Should be `compact`|Yes| -|`inputSpec`|Input specification|Yes| - -There are two supported `inputSpec`s for now. - -The interval `inputSpec` is: - -|Field|Description|Required| -|-----|-----------|--------| -|`type`|Task type. Should be `interval`|Yes| -|`interval`|Interval to compact|Yes| - -The segments `inputSpec` is: - -|Field|Description|Required| -|-----|-----------|--------| -|`type`|Task type. 
Should be `segments`|Yes| -|`segments`|A list of segment IDs|Yes| - +In addition to the tasks covered on this page, you can also use segment compaction to improve the layout of your existing data. Refer to [Segment optimization](../operations/segment-optimization.md) to see if compaction will help in your environment. For an overview and steps to configure manual compaction tasks, see [Compaction](./compaction.md). ## Adding new data @@ -280,3 +116,8 @@ Druid also supports separating Historical processes into tiers, and the retentio These features are useful for performance/cost management; a common use case is separating Historical processes into a "hot" tier and a "cold" tier. For more information, please see [Load rules](../operations/rule-configuration.md). + +## Learn more +See the following topics for more information: +- [Compaction](./compaction.md) for an overview and steps to configure manual compaction tasks. +- [Segments](../design/segments.md) for information on how Druid handles segment versioning. diff --git a/docs/operations/segment-optimization.md b/docs/operations/segment-optimization.md index e0e909efb240..73d1212939a5 100644 --- a/docs/operations/segment-optimization.md +++ b/docs/operations/segment-optimization.md @@ -38,7 +38,7 @@ In Apache Druid, it's important to optimize the segment size because It would be best if you can optimize the segment size at ingestion time, but sometimes it's not easy especially when it comes to stream ingestion because the amount of data ingested might vary over time. In this case, -you can create segments with a sub-optimized size first and optimize them later. +you can create segments with a sub-optimized size first and optimize them later using [compaction](../ingestion/compaction.md). You may need to consider the followings to optimize your segments. @@ -96,3 +96,6 @@ Once you find your segments need compaction, you can consider the below two opti inputSpec to read from the segments generated by the Kafka indexing tasks. This might be helpful if you want to compact a lot of segments in parallel. Details on how to do this can be found on the [Updating existing data](../ingestion/data-management.md#update) section of the data management page. + +## Learn more +For an overview of compaction and how to submit a manual compaction task, see [Compaction](../ingestion/compaction.md). diff --git a/website/i18n/en.json b/website/i18n/en.json index f67380992222..ae666f4c94c4 100644 --- a/website/i18n/en.json +++ b/website/i18n/en.json @@ -270,6 +270,9 @@ "development/versioning": { "title": "Versioning" }, + "ingestion/compaction": { + "title": "Compaction" + }, "ingestion/data-formats": { "title": "Data formats" }, From 8a29081f0639f037377626f6ab5ee3e187644858 Mon Sep 17 00:00:00 2001 From: Charles Smith Date: Tue, 2 Mar 2021 09:59:17 -0800 Subject: [PATCH 02/18] fix links, typos, some reorganization --- docs/ingestion/data-management.md | 2 +- docs/ingestion/index.md | 8 ++++---- docs/ingestion/tasks.md | 2 +- docs/operations/basic-cluster-tuning.md | 2 +- 4 files changed, 7 insertions(+), 7 deletions(-) diff --git a/docs/ingestion/data-management.md b/docs/ingestion/data-management.md index 0c63b51e6580..2a062a708720 100644 --- a/docs/ingestion/data-management.md +++ b/docs/ingestion/data-management.md @@ -21,7 +21,7 @@ title: "Data management" ~ specific language governing permissions and limitations ~ under the License. 
-->
Within the context of this topic, data management refers to Apache Druid's data maintenance capabilities for existing datasources. There are several options to help you keep your data relevant and to help your Druid cluster remain performant. For example, you can update, reingest, add lookups, reindex, or delete data.

In addition to the tasks covered on this page, you can also use segment compaction to improve the layout of your existing data. Refer to [Segment optimization](../operations/segment-optimization.md) to see if compaction will help in your environment. For an overview and steps to configure manual compaction tasks, see [Compaction](./compaction.md).

diff --git a/docs/ingestion/index.md b/docs/ingestion/index.md
index 75ea7031dfb4..c2c64fc0ff93 100644
--- a/docs/ingestion/index.md
+++ b/docs/ingestion/index.md
@@ -196,7 +196,7 @@ that datasource leads to much faster query times. This can often be done with ju
 footprint, since abbreviated datasources tend to be substantially smaller.
 - If you are using a [best-effort rollup](#perfect-rollup-vs-best-effort-rollup) ingestion configuration that does not guarantee perfect rollup, you can potentially improve your rollup ratio by switching to a guaranteed perfect rollup option, or by
-[reindexing](data-management.md#compaction-and-reindexing) your data in the background after initial ingestion.
+[reindexing](data-management.md#reingesting-data) or [compacting](./compaction.md) your data in the background after initial ingestion.

### Perfect rollup vs Best-effort rollup

@@ -258,7 +258,7 @@ storage size decreases - and it also tends to improve query performance as well
Not all ingestion methods support an explicit partitioning configuration, and not all have equivalent levels of flexibility. As of current Druid versions, if you are doing initial ingestion through a less-flexible method (like Kafka) then you can use [reindexing](data-management.md#reingesting-data) or [compaction](./compaction.md) to repartition your data after it is initially ingested. This is a powerful technique: you can use it to ensure that any data older than a certain threshold is optimally partitioned, even as you continuously add new data from a stream.

The following table shows how each ingestion method handles partitioning:

|Method|How it works|
|------|------------|
|[Native batch](native-batch.md)|Configured using [`partitionsSpec`](native-batch.md#partitionsspec) inside the `tuningConfig`.|
|[Hadoop](hadoop.md)|Configured using [`partitionsSpec`](hadoop.md#partitionsspec) inside the `tuningConfig`.|
|[Kafka indexing service](../development/extensions-core/kafka-ingestion.md)|Partitioning in Druid is guided by how your Kafka topic is partitioned. You can also [reindex](data-management.md#reingesting-data) or [compact](./compaction.md) to repartition after initial ingestion.|
|[Kinesis indexing service](../development/extensions-core/kinesis-ingestion.md)|Partitioning in Druid is guided by how your Kinesis stream is sharded.
You can also [reindex](data-management.md#compaction-and-reindexing) to repartition after initial ingestion.| +|[Kafka indexing service](../development/extensions-core/kafka-ingestion.md)|Partitioning in Druid is guided by how your Kafka topic is partitioned. You can also [reindex](data-management.md#reingesting-data) or [compact](./compaction.md) to repartition after initial ingestion.| +|[Kinesis indexing service](../development/extensions-core/kinesis-ingestion.md)|Partitioning in Druid is guided by how your Kinesis stream is sharded. You can also [reindex](data-management.md#reingesting-data) or [compact](./compaction.md) to repartition after initial ingestion.| > Note that, of course, one way to partition data is to load it into separate datasources. This is a perfectly viable > approach and works very well when the number of datasources does not lead to excessive per-datasource overheads. If diff --git a/docs/ingestion/tasks.md b/docs/ingestion/tasks.md index 4fc21d37ca79..3b96e759a35c 100644 --- a/docs/ingestion/tasks.md +++ b/docs/ingestion/tasks.md @@ -389,7 +389,7 @@ Submitted automatically, on your behalf, by [Tranquility](tranquility.md). ### `compact` Compaction tasks merge all segments of the given interval. See the documentation on -[compaction](data-management.md#compaction-and-reindexing) for details. +[compaction](compaction.md) for details. ### `kill` diff --git a/docs/operations/basic-cluster-tuning.md b/docs/operations/basic-cluster-tuning.md index 1f7253c23f21..9f9351aa2aac 100644 --- a/docs/operations/basic-cluster-tuning.md +++ b/docs/operations/basic-cluster-tuning.md @@ -262,7 +262,7 @@ The total memory usage of the MiddleManager + Tasks: If you use the [Kafka Indexing Service](../development/extensions-core/kafka-ingestion.md) or [Kinesis Indexing Service](../development/extensions-core/kinesis-ingestion.md), the number of tasks required will depend on the number of partitions and your taskCount/replica settings. On top of those requirements, allocating more task slots in your cluster is a good idea, so that you have free task -slots available for other tasks, such as [compaction tasks](../ingestion/data-management.md#compact). +slots available for other tasks, such as [compaction tasks](../ingestion/compaction.md). ###### Hadoop ingestion From 50610a862d92019e36148a972b3482f74c3b75a4 Mon Sep 17 00:00:00 2001 From: Charles Smith Date: Wed, 3 Mar 2021 11:18:47 -0800 Subject: [PATCH 03/18] fix spelling. TBD still there for work in progress --- docs/ingestion/compaction.md | 104 +++++++++++++----- docs/tutorials/tutorial-compaction.md | 4 +- .../tutorial/compaction-day-granularity.json | 4 +- website/.spelling | 3 + 4 files changed, 83 insertions(+), 32 deletions(-) diff --git a/docs/ingestion/compaction.md b/docs/ingestion/compaction.md index ffcff2b09cad..7d437166ed3f 100644 --- a/docs/ingestion/compaction.md +++ b/docs/ingestion/compaction.md @@ -27,15 +27,8 @@ Compaction in Apache Druid is a strategy to optimize segment size. Compaction ta Compaction can sometimes increase performance because it reduces the per-segment processing and memory overhead required for ingestion and for querying paths. See [Segment optimization](../operations/segment-optimization.md) for guidance to determine if compaction will help in your case. -## Data handling with compaction -During compaction, Druid overwrites the original set of segments with the compacted set without modifying the data. - -Druid retains the query granularity for the compacted segments. 
If segments have different query granularities before compaction, Druid chooses the finest level of granularity for the resulting compacted segment. For example, if a compaction task combines two segments, one with day query granularity and one with minute query granularity, the resulting segment uses minute query granularity.

> In Apache Druid 0.21.0 and prior, Druid set the granularity for compacted segments to the default granularity of `NONE` regardless of the query granularity of the original segments.

## Types of segment compaction
You can configure the Druid Coordinator to perform automatic compaction (also called auto-compaction) for a datasource. Using a segment search policy, automatic compaction periodically identifies segments for compaction, from newest to oldest. When segments can benefit from compaction, Druid automatically submits a compaction task.

Automatic compaction works in most use cases and should be your first option. To learn more about automatic compaction, see [Compacting Segments](../design/coordinator.md#compacting-segments).

In cases where you require more control over compaction, you can manually submit compaction tasks. For example, when:
- Automatic compaction is too slow.
- You want to force compaction for a specific time range.
- Compacting recent data before older data is suboptimal in your environment.

TBD are there feature gaps where automatic compaction doesn't work?

## Data handling with compaction
During compaction, Druid overwrites the original set of segments with the compacted set without modifying the data.

Druid retains the query granularity for the compacted segments. If segments have different query granularities before compaction, Druid chooses the finest level of granularity for the resulting compacted segment. For example, if a compaction task combines two segments, one with day query granularity and one with minute query granularity, the resulting segment uses minute query granularity.

> In Apache Druid 0.21.0 and prior, Druid set the granularity for compacted segments to the default granularity of `NONE` regardless of the query granularity of the original segments.

### Dimension handling
Apache Druid supports schema changes. Therefore, dimensions can be different across segments even if they are a part of the same dataSource. See [Different schemas among segments](../design/segments.md#different-schemas-among-segments). If the input segments have different dimensions, the output segment includes all dimensions of the input segments.

Even when the input segments have the same set of dimensions, the dimension order or the data type of dimensions can be different. For example, the data type of some dimensions can be changed from `string` to primitive types, or the order of dimensions can be changed for better locality. In this case, the dimensions of recent segments precede those of older segments in terms of data types and ordering, because more recent segments are more likely to have the new desired order and data types.

If you want to use your own ordering and types, you can specify a custom `dimensionsSpec` in the compaction task spec.

### Rollup
Druid only rolls up the output segment when `rollup` is set for all input segments.
See [Roll-up](../ingestion/index.md#rollup) for more details.
You can check whether your segments are rolled up by using [Segment Metadata Queries](../querying/segmentmetadataquery.md#analysistypes).

## Setting up a manual compaction task

To perform a manual compaction, you submit a compaction task. Compaction tasks merge all segments for the defined interval according to the following syntax:

```json
{
  "type": "compact",
  "id": <task_id>,
  "dataSource": <task_datasource>,
  "ioConfig": <IO config>,
  "dimensionsSpec": <custom dimensionsSpec>,
  "metricsSpec": <custom metricsSpec>,
  "tuningConfig": <parallel indexing task tuningConfig>,
  "context": <task context>
}
```

|Field|Description|Required|
|-----|-----------|--------|
|`type`|Task type. Should be `compact`|Yes|
|`id`|Task id|No|
|`dataSource`|DataSource name to be compacted|Yes|
|`ioConfig`|I/O configuration for compaction task. See [Compaction I/O configuration](#compaction-io-configuration) for details.|Yes|
|`dimensionsSpec`|Custom dimensionsSpec. The compaction task uses the specified dimensionsSpec if one exists, instead of generating one.|No|
|`metricsSpec`|Custom metricsSpec. The compaction task uses the specified metricsSpec rather than generating one.|No|
|`segmentGranularity`|Deprecated. Use `granularitySpec` instead. When set, the compaction task changes the segment granularity for the given interval.|No|
|`tuningConfig`|[Parallel indexing task tuningConfig](./native-batch.md#tuningconfig)|No|
|`context`|[Task context](./tasks.md#context)|No|
|`granularitySpec`|Custom `granularitySpec` to describe the `segmentGranularity` and `queryGranularity` for the compacted segments. See [Compaction granularity spec](#compaction-granularity-spec).|No|

To control the number of result segments per time chunk, you can set [maxRowsPerSegment](../configuration/index.md#compaction-dynamic-configuration) or [numShards](../ingestion/native-batch.md#tuningconfig).

> You can run multiple compaction tasks at the same time. For example, you can run 12 compaction tasks per month instead of running a single task for the entire year.

A compaction task internally generates an `index` task spec for performing compaction work with some fixed parameters. For example, its `inputSource` is always the [DruidInputSource](native-batch.md#druid-input-source), and `dimensionsSpec` and `metricsSpec` include all dimensions and metrics of the input segments by default.

Compaction tasks exit without doing anything and issue a failure status code if:
- the interval you specify has no data segments loaded, or
- the interval you specify is empty.

The output segment can have different metadata from the input segments unless all input segments have the same metadata.

### Dimension handling
Apache Druid supports schema changes. Therefore, dimensions can be different across segments even if they are a part of the same dataSource.
If the input segments have different dimensions, the output segment includes all dimensions of the input segments.

Even when the input segments have the same set of dimensions, the dimension order or the data type of dimensions can be different. For example, the data type of some dimensions can be changed from `string` to primitive types, or the order of dimensions can be changed for better locality. In this case, the dimensions of recent segments precede those of older segments in terms of data types and ordering, because more recent segments are more likely to have the new desired order and data types.

If you want to use your own ordering and types, you can specify a custom `dimensionsSpec` in the compaction task spec.

### Rollup
Druid only rolls up the output segment when `rollup` is set for all input segments.
See [Roll-up](../ingestion/index.md#rollup) for more details.
You can check whether your segments are rolled up by using [Segment Metadata Queries](../querying/segmentmetadataquery.md#analysistypes).

### Example compaction task
The following JSON illustrates a compaction task to compact _all segments_ within the interval `2017-01-01/2018-01-01` and create new segments:

```json
{
  "type": "compact",
  "dataSource": "wikipedia",
  "ioConfig": {
    "type": "compact",
    "inputSpec": {
      "type": "interval",
      "interval": "2017-01-01/2018-01-01"
    }
  }
}
```

This task doesn't specify a `granularitySpec`, so Druid retains the original segment granularity unchanged when compaction is complete.

### Compaction I/O configuration

The compaction `ioConfig` requires specifying `inputSpec` as seen below.

|Field|Description|Required|
|-----|-----------|--------|
|`type`|Task type. Should be `compact`|Yes|
|`inputSpec`|Input specification|Yes|

There are two supported types of `inputSpec`.

The interval `inputSpec` is:

|Field|Description|Required|
|-----|-----------|--------|
|`type`|Task type. Should be `interval`|Yes|
|`interval`|Interval to compact|Yes|

The segments `inputSpec` is:

|Field|Description|Required|
|-----|-----------|--------|
|`type`|Task type. Should be `segments`|Yes|
|`segments`|A list of segment IDs|Yes|

### Compaction granularity spec

You can optionally use the `granularitySpec` object to configure the segment granularity and the query granularity of the compacted segments. The syntax is as follows:

```json
{
  "type": "compact",
  "id": <task_id>,
  "dataSource": <task_datasource>,
  ...
  "granularitySpec": {
    "segmentGranularity": <segment granularity after compaction>,
    "queryGranularity": <query granularity after compaction>
  }
}
```
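To illustrate, here is a sketch of a compaction task that uses `granularitySpec` to produce day-granularity segments with hour query granularity. The datasource, interval, and granularity values are placeholders for this example:

```json
{
  "type": "compact",
  "dataSource": "wikipedia",
  "ioConfig": {
    "type": "compact",
    "inputSpec": {
      "type": "interval",
      "interval": "2017-01-01/2018-01-01"
    }
  },
  "granularitySpec": {
    "segmentGranularity": "DAY",
    "queryGranularity": "HOUR"
  }
}
```

Assuming the behavior described under [Data handling with compaction](#data-handling-with-compaction), omitting `queryGranularity` here would instead preserve the finest query granularity found among the input segments.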