8 changes: 4 additions & 4 deletions docs/ingestion/ingestion-spec.md
@@ -495,18 +495,18 @@ The `indexSpec` object can include the following properties:
|-----|-----------|-------|
|bitmap|Compression format for bitmap indexes. Should be a JSON object with `type` set to `roaring` or `concise`.|`{"type": "roaring"}`|
|dimensionCompression|Compression format for dimension columns. Options are `lz4`, `lzf`, `zstd`, or `uncompressed`.|`lz4`|
-|stringDictionaryEncoding|Encoding format for STRING value dictionaries used by STRING and COMPLEX&lt;json&gt; columns. <br /><br />Example to enable front coding: `{"type":"frontCoded", "bucketSize": 4}`<br />`bucketSize` is the number of values to place in a bucket to perform delta encoding. Must be a power of 2, maximum is 128. Defaults to 4.<br /> `formatVersion` can specify older versions for backwards compatibility during rolling upgrades, valid options are `0` and `1`. Defaults to `0` for backwards compatibility.<br /><br />See [Front coding](#front-coding) for more information.|`{"type":"utf8"}`|
+|stringDictionaryEncoding|Encoding format for STRING value dictionaries used by STRING and COMPLEX&lt;json&gt; columns. <br /><br />Example to enable front coding: `{"type":"frontCoded", "bucketSize": 4}`<br />`bucketSize` is the number of values to place in a bucket to perform delta encoding. Must be a power of 2, maximum is 128. Defaults to 4.<br /> `formatVersion` can specify older versions for backwards compatibility during rolling upgrades, valid options are `0` and `1`, defaults to `1`.<br /><br />See [Front coding](#front-coding) for more information.|`{"type":"frontCoded", "bucketSize": 4, "formatVersion": 1}`|
Suggested change
-|stringDictionaryEncoding|Encoding format for STRING value dictionaries used by STRING and COMPLEX&lt;json&gt; columns. <br /><br />Example to enable front coding: `{"type":"frontCoded", "bucketSize": 4}`<br />`bucketSize` is the number of values to place in a bucket to perform delta encoding. Must be a power of 2, maximum is 128. Defaults to 4.<br /> `formatVersion` can specify older versions for backwards compatibility during rolling upgrades, valid options are `0` and `1`, defaults to `1`.<br /><br />See [Front coding](#front-coding) for more information.|`{"type":"frontCoded", "bucketSize": 4, "formatVersion": 1}`|
+|stringDictionaryEncoding|Encoding format for STRING value dictionaries used by STRING and COMPLEX&lt;json&gt; columns. <br /><br />Example to enable front coding: `{"type":"frontCoded", "bucketSize": 4}`<br />`bucketSize` is the number of values to place in a bucket to perform delta encoding. Must be a power of 2, maximum is 128. Defaults to 4.<br /> `formatVersion` can specify older versions for backwards compatibility during rolling upgrades. Valid options are 0 and 1. Defaults to 1.<br /><br />See [Front coding](#front-coding) for more information.|`{"type":"frontCoded", "bucketSize": 4, "formatVersion": 1}`|

|metricCompression|Compression format for primitive type metric columns. Options are `lz4`, `lzf`, `zstd`, `uncompressed`, or `none` (which is more efficient than `uncompressed`, but not supported by older versions of Druid).|`lz4`|
|longEncoding|Encoding format for long-typed columns. Applies regardless of whether they are dimensions or metrics. Options are `auto` or `longs`. `auto` encodes the values using an offset or lookup table depending on column cardinality, and stores them with variable size. `longs` stores the value as-is with 8 bytes each.|`longs`|
|jsonCompression|Compression format to use for nested column raw data. Options are `lz4`, `lzf`, `zstd`, or `uncompressed`.|`lz4`|

##### Front coding

-Front coding is an experimental feature starting in version 25.0. Front coding is an incremental encoding strategy that Druid can use to store STRING and [COMPLEX&lt;json&gt;](../querying/nested-columns.md) columns. It allows Druid to create smaller UTF-8 encoded segments with very little performance cost.
+Front coding is an incremental encoding strategy that Druid uses by default to store STRING and [COMPLEX&lt;json&gt;](../querying/nested-columns.md) columns. It allows Druid to create smaller UTF-8 encoded segments with very little performance cost.

-You can enable front coding with all types of ingestion. For information on defining an `indexSpec` in a query context, see [SQL-based ingestion reference](../multi-stage-query/reference.md#context-parameters).
+For information on defining an `indexSpec` in a query context, see [SQL-based ingestion reference](../multi-stage-query/reference.md#context-parameters).

-> Front coding was originally introduced in Druid 25.0, and an improved 'version 1' was introduced in Druid 26.0, with typically faster read speed and smaller storage size. The current recommendation is to enable it in a staging environment and fully test your use case before using in production. By default, segments created with front coding enabled in Druid 26.0 are backwards compatible with Druid 25.0, but those created with Druid 26.0 or 25.0 are not compatible with Druid versions older than 25.0. If using front coding in Druid 25.0 and upgrading to Druid 26.0, the `formatVersion` defaults to `0` to keep writing out the older format to enable seamless downgrades to Druid 25.0, and then later is recommended to be changed to `1` once determined that rollback is not necessary.
+> Front coding was originally introduced in Druid 25.0, and an improved 'version 1' was introduced in Druid 26.0, with typically faster read speed and smaller storage size, before finally becoming the default in Druid 27.0. By default, segments created with Druid 27.0 are backwards compatible with Druid 26.0, but not compatible with Druid versions older than 26.0. If upgrading to Druid 27.0 from a version older than 26.0, the `stringDictionaryEncoding` should be set to `{"type": "utf8"}` to keep writing out the older format to enable seamless downgrades to Druid 25.0 and older, and then later is recommended to be changed to the new default once determined that rollback is not necessary.
Suggested change
-> Front coding was originally introduced in Druid 25.0, and an improved 'version 1' was introduced in Druid 26.0, with typically faster read speed and smaller storage size, before finally becoming the default in Druid 27.0. By default, segments created with Druid 27.0 are backwards compatible with Druid 26.0, but not compatible with Druid versions older than 26.0. If upgrading to Druid 27.0 from a version older than 26.0, the `stringDictionaryEncoding` should be set to `{"type": "utf8"}` to keep writing out the older format to enable seamless downgrades to Druid 25.0 and older, and then later is recommended to be changed to the new default once determined that rollback is not necessary.
+> Front coding was originally introduced in Druid 25.0. Then, an improved 'version 1' was introduced in Druid 26.0, with typically faster read speed and smaller storage size, before finally becoming the default in Druid 27.0. By default, segments created with Druid 27.0 are backwards compatible with Druid 26.0, but not compatible with Druid versions older than 26.0. If upgrading to Druid 27.0 from a version older than 26.0, set the `stringDictionaryEncoding` to `{"type": "utf8"}` to keep writing out the older format to enable seamless downgrades to Druid 25.0 and older, and then later is recommended to be changed to the new default once determined that rollback is not necessary.

This part is a bit confusing:
"...and then later is recommended to be changed to the new default once determined that rollback is not necessary."
Are we recommending that users set stringDictionaryEncoding to front coding?


Beyond these properties, each ingestion method has its own specific tuning properties. See the documentation for each
[ingestion method](./index.md#ingestion-methods) for details.
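
For reference, here is an illustrative sketch of a complete `indexSpec` under the new default, assembled from the properties in the docs table above (the values other than `stringDictionaryEncoding` are simply the documented defaults):

```json
{
  "bitmap": { "type": "roaring" },
  "dimensionCompression": "lz4",
  "stringDictionaryEncoding": { "type": "frontCoded", "bucketSize": 4, "formatVersion": 1 },
  "metricCompression": "lz4",
  "longEncoding": "longs",
  "jsonCompression": "lz4"
}
```

Per the note above, clusters that may still need to roll back past Druid 26.0 would set `"stringDictionaryEncoding": {"type": "utf8"}` instead, then switch to the front-coded default once a rollback is ruled out.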
@@ -466,8 +466,8 @@ public void testAutoCompactionDutySubmitAndVerifyCompaction() throws Exception
fullDatasourceName,
AutoCompactionSnapshot.AutoCompactionScheduleStatus.RUNNING,
0,
-13702,
-13701,
+13326,
+13325,
0,
2,
2,
@@ -484,7 +484,7 @@ public void testAutoCompactionDutySubmitAndVerifyCompaction() throws Exception
fullDatasourceName,
AutoCompactionSnapshot.AutoCompactionScheduleStatus.RUNNING,
0,
-21566,
+20906,
0,
0,
3,
@@ -600,16 +600,16 @@ public void testAutoCompactionDutyCanUpdateTaskSlots() throws Exception
getAndAssertCompactionStatus(
fullDatasourceName,
AutoCompactionSnapshot.AutoCompactionScheduleStatus.RUNNING,
-13702,
-13701,
+13326,
+13325,
0,
2,
2,
0,
1,
1,
0);
-Assert.assertEquals(compactionResource.getCompactionProgress(fullDatasourceName).get("remainingSegmentSize"), "13702");
+Assert.assertEquals(compactionResource.getCompactionProgress(fullDatasourceName).get("remainingSegmentSize"), "13326");
// Run compaction again to compact the remaining day
// Remaining day compacted (1 new segment). Now both days compacted (2 total)
forceTriggerAutoCompaction(2);
@@ -620,7 +620,7 @@ public void testAutoCompactionDutyCanUpdateTaskSlots() throws Exception
fullDatasourceName,
AutoCompactionSnapshot.AutoCompactionScheduleStatus.RUNNING,
0,
-21566,
+20906,
0,
0,
3,
@@ -36,12 +36,13 @@
})
public interface StringEncodingStrategy
{
-Utf8 DEFAULT = new Utf8();
String UTF8 = "utf8";
String FRONT_CODED = "frontCoded";

byte UTF8_ID = 0x00;
byte FRONT_CODED_ID = 0x01;
+int DEFAULT_BUCKET_SIZE = 4;

+StringEncodingStrategy DEFAULT = new FrontCoded(DEFAULT_BUCKET_SIZE, null);

String getType();

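
To make the new default concrete, here is a minimal Java sketch of how callers might select an encoding through `IndexSpec.builder()` (the builder calls and the `FrontCoded`/`Utf8` constructors appear in the test changes later in this diff; the class and method names here are hypothetical):

```java
import org.apache.druid.segment.IndexSpec;
import org.apache.druid.segment.column.StringEncodingStrategy;

// Hypothetical helper, sketched from the types shown in this diff.
class EncodingSpecSketch
{
  // The new default: front coding with bucketSize 4. Passing null for the
  // format version defers to FrontCodedIndexed.DEFAULT_VERSION, now V1.
  static IndexSpec defaultSpec()
  {
    return IndexSpec.builder()
                    .withStringDictionaryEncoding(
                        new StringEncodingStrategy.FrontCoded(StringEncodingStrategy.DEFAULT_BUCKET_SIZE, null))
                    .build();
  }

  // Opting back into plain utf8 dictionaries, e.g. ahead of a downgrade.
  static IndexSpec legacySpec()
  {
    return IndexSpec.builder()
                    .withStringDictionaryEncoding(new StringEncodingStrategy.Utf8())
                    .build();
  }
}
```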
@@ -79,7 +79,7 @@ public final class FrontCodedIndexed implements Indexed<ByteBuffer>
{
public static final byte V0 = 0;
public static final byte V1 = 1;
-public static final byte DEFAULT_VERSION = V0;
+public static final byte DEFAULT_VERSION = V1;
public static final int DEFAULT_BUCKET_SIZE = 4;

public static byte validateVersion(byte version)
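
The practical effect of flipping `DEFAULT_VERSION` is that a writer not handed an explicit `formatVersion` now emits V1 buckets. A hedged sketch of that resolution (the helper and its name are hypothetical; only `DEFAULT_VERSION` and `validateVersion` appear in the hunk above):

```java
import org.apache.druid.segment.data.FrontCodedIndexed;

// Hypothetical helper illustrating how a nullable requested version
// might resolve against the flipped default.
class VersionResolutionSketch
{
  static byte resolveVersion(Byte requested)
  {
    return requested == null
           ? FrontCodedIndexed.DEFAULT_VERSION  // now V1 rather than V0
           : FrontCodedIndexed.validateVersion(requested);
  }
}
```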
@@ -19,6 +19,7 @@

package org.apache.druid.segment;

+import org.apache.druid.segment.column.StringEncodingStrategy;
import org.apache.druid.segment.data.CompressionFactory.LongEncodingStrategy;
import org.apache.druid.segment.data.CompressionStrategy;
import org.apache.druid.segment.data.ConciseBitmapSerdeFactory;
@@ -33,14 +34,16 @@ public ConciseBitmapIndexMergerV9Test(
CompressionStrategy compressionStrategy,
CompressionStrategy dimCompressionStrategy,
LongEncodingStrategy longEncodingStrategy,
+StringEncodingStrategy stringEncodingStrategy,
SegmentWriteOutMediumFactory segmentWriteOutMediumFactory
)
{
super(
new ConciseBitmapSerdeFactory(),
compressionStrategy,
dimCompressionStrategy,
-longEncodingStrategy
+longEncodingStrategy,
+stringEncodingStrategy
);
indexMerger = TestHelper.getTestIndexMergerV9(segmentWriteOutMediumFactory);
}
@@ -49,12 +49,14 @@
import org.apache.druid.segment.column.ColumnIndexSupplier;
import org.apache.druid.segment.column.DictionaryEncodedColumn;
import org.apache.druid.segment.column.StringUtf8DictionaryEncodedColumn;
+import org.apache.druid.segment.column.StringEncodingStrategy;
import org.apache.druid.segment.column.StringValueSetIndex;
import org.apache.druid.segment.data.BitmapSerdeFactory;
import org.apache.druid.segment.data.BitmapValues;
import org.apache.druid.segment.data.CompressionFactory;
import org.apache.druid.segment.data.CompressionStrategy;
import org.apache.druid.segment.data.ConciseBitmapSerdeFactory;
+import org.apache.druid.segment.data.FrontCodedIndexed;
import org.apache.druid.segment.data.ImmutableBitmapValues;
import org.apache.druid.segment.data.IncrementalIndexTest;
import org.apache.druid.segment.incremental.IncrementalIndex;
@@ -91,7 +93,7 @@ public class IndexMergerTestBase extends InitializedNullHandlingTest

protected IndexMerger indexMerger;

-@Parameterized.Parameters(name = "{index}: metric compression={0}, dimension compression={1}, long encoding={2}, segment write-out medium={3}")
+@Parameterized.Parameters(name = "{index}: metric compression={0}, dimension compression={1}, long encoding={2}, string encoding={3} segment write-out medium={4}")
public static Collection<Object[]> data()
{
return Collections2.transform(
@@ -100,6 +102,11 @@ public static Collection<Object[]> data()
EnumSet.allOf(CompressionStrategy.class),
ImmutableSet.copyOf(CompressionStrategy.noNoneValues()),
EnumSet.allOf(CompressionFactory.LongEncodingStrategy.class),
+ImmutableSet.of(
+new StringEncodingStrategy.Utf8(),
+new StringEncodingStrategy.FrontCoded(16, FrontCodedIndexed.V0),
+new StringEncodingStrategy.FrontCoded(16, FrontCodedIndexed.V1)
+),
SegmentWriteOutMediumFactory.builtInFactories()
)
), new Function<List<?>, Object[]>()
@@ -148,14 +155,16 @@ protected IndexMergerTestBase(
@Nullable BitmapSerdeFactory bitmapSerdeFactory,
CompressionStrategy compressionStrategy,
CompressionStrategy dimCompressionStrategy,
-CompressionFactory.LongEncodingStrategy longEncodingStrategy
+CompressionFactory.LongEncodingStrategy longEncodingStrategy,
+StringEncodingStrategy stringEncodingStrategy
)
{
this.indexSpec = IndexSpec.builder()
.withBitmapSerdeFactory(bitmapSerdeFactory != null ? bitmapSerdeFactory : new ConciseBitmapSerdeFactory())
.withDimensionCompression(dimCompressionStrategy)
.withMetricCompression(compressionStrategy)
.withLongEncoding(longEncodingStrategy)
+.withStringDictionaryEncoding(stringEncodingStrategy)
.build();
this.indexIO = TestHelper.getTestIndexIO();
this.useBitmapIndexes = bitmapSerdeFactory != null;
@@ -510,6 +519,12 @@ public void testMergeSpecChange() throws Exception
} else {
builder.withLongEncoding(CompressionFactory.LongEncodingStrategy.LONGS);
}
+if (StringEncodingStrategy.UTF8_ID == indexSpec.getStringDictionaryEncoding().getId()) {
+builder.withStringDictionaryEncoding(new StringEncodingStrategy.FrontCoded(4, FrontCodedIndexed.V1));
+} else {
+builder.withStringDictionaryEncoding(new StringEncodingStrategy.Utf8());
+}
+
IndexSpec newSpec = builder.build();

AggregatorFactory[] mergedAggregators = new AggregatorFactory[]{new CountAggregatorFactory("count")};
@@ -19,6 +19,7 @@

package org.apache.druid.segment;

+import org.apache.druid.segment.column.StringEncodingStrategy;
import org.apache.druid.segment.data.CompressionFactory.LongEncodingStrategy;
import org.apache.druid.segment.data.CompressionStrategy;
import org.apache.druid.segment.writeout.SegmentWriteOutMediumFactory;
@@ -32,14 +33,16 @@ public NoBitmapIndexMergerV9Test(
CompressionStrategy compressionStrategy,
CompressionStrategy dimCompressionStrategy,
LongEncodingStrategy longEncodingStrategy,
+StringEncodingStrategy stringEncodingStrategy,
SegmentWriteOutMediumFactory segmentWriteOutMediumFactory
)
{
super(
null,
compressionStrategy,
dimCompressionStrategy,
-longEncodingStrategy
+longEncodingStrategy,
+stringEncodingStrategy
);
indexMerger = TestHelper.getTestIndexMergerV9(segmentWriteOutMediumFactory);
}
@@ -19,6 +19,7 @@

package org.apache.druid.segment;

+import org.apache.druid.segment.column.StringEncodingStrategy;
import org.apache.druid.segment.data.CompressionFactory.LongEncodingStrategy;
import org.apache.druid.segment.data.CompressionStrategy;
import org.apache.druid.segment.data.RoaringBitmapSerdeFactory;
@@ -33,14 +34,16 @@ public RoaringBitmapIndexMergerV9Test(
CompressionStrategy compressionStrategy,
CompressionStrategy dimCompressionStrategy,
LongEncodingStrategy longEncodingStrategy,
+StringEncodingStrategy stringEncodingStrategy,
SegmentWriteOutMediumFactory segmentWriteOutMediumFactory
)
{
super(
RoaringBitmapSerdeFactory.getInstance(),
compressionStrategy,
dimCompressionStrategy,
-longEncodingStrategy
+longEncodingStrategy,
+stringEncodingStrategy
);
indexMerger = TestHelper.getTestIndexMergerV9(segmentWriteOutMediumFactory);
}
@@ -54,8 +54,7 @@ public void testFrontCodedDefaultSerde() throws JsonProcessingException
// this next assert seems silly, but its a sanity check to make us think hard before changing the default version,
// to make us think of the backwards compatibility implications, as new versions of segment format stuff cannot be
// downgraded to older versions of Druid and still read
-// the default version should be changed to V1 after Druid 26.0 is released
-Assert.assertEquals(FrontCodedIndexed.V0, FrontCodedIndexed.DEFAULT_VERSION);
+Assert.assertEquals(FrontCodedIndexed.V1, FrontCodedIndexed.DEFAULT_VERSION);
}

@Test
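
For completeness, here is a hedged sketch of the kind of round trip a default-serde test like `testFrontCodedDefaultSerde` plausibly performs. The real test's mapper configuration is not shown in this diff, so the plain `ObjectMapper` and the class name below are assumptions; `StringEncodingStrategy.DEFAULT`, `FRONT_CODED`, and `getType()` are taken from the hunks above.

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.druid.segment.column.StringEncodingStrategy;
import org.junit.Assert;
import org.junit.Test;

// Hypothetical sketch of a default-serde check under the new default.
public class FrontCodedDefaultSerdeSketch
{
  @Test
  public void testDefaultRoundTripsAsFrontCoded() throws Exception
  {
    // A plain mapper is an assumption; Druid tests typically inject their own.
    ObjectMapper mapper = new ObjectMapper();
    String json = mapper.writeValueAsString(StringEncodingStrategy.DEFAULT);
    StringEncodingStrategy back = mapper.readValue(json, StringEncodingStrategy.class);
    // With this patch the default serializes as "frontCoded" rather than "utf8".
    Assert.assertEquals(StringEncodingStrategy.FRONT_CODED, back.getType());
  }
}
```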