Segments sorted by non-time columns. #16849
Conversation
Currently, segments are always sorted by `__time`, followed by the sort order provided by the user via `dimensionsSpec` or CLUSTERED BY. Sorting by `__time` enables efficient execution of queries involving time-ordering or granularity. Time-ordering is a simple matter of reading the rows in stored order, and granular cursors can be generated in streaming fashion.

However, for various workloads, it's better for storage footprint and query performance to sort by arbitrary orders that do not start with `__time`. With this patch, users can sort segments by such orders.

For spec-based ingestion, users add `"useExplicitSegmentSortOrder": true` to `dimensionsSpec`. The `dimensions` list determines the sort order. To define a sort order that includes `__time`, users explicitly include a dimension named `__time`.

For SQL-based ingestion, users set the context parameter `"useExplicitSegmentSortOrder": true`. The CLUSTERED BY clause is then used as the explicit segment sort order.

In both cases, when the new `useExplicitSegmentSortOrder` parameter is false (the default), `__time` is implicitly prepended to the sort order, as it always was prior to this patch.

The new parameter is experimental for two main reasons. First, such segments can cause errors when loaded by older servers, due to violating their expectations that timestamps are always monotonically increasing. Second, even on newer servers, not all queries can run on non-time-sorted segments. Scan queries involving time-ordering and any query involving granularity will not run. (To partially mitigate this, a currently-undocumented SQL feature `sqlUseGranularity` is provided. When set to false, the SQL planner avoids using `granularity`.)

Changes on the write path:

1) `DimensionsSpec` can now optionally contain a `__time` dimension, which controls the placement of `__time` in the sort order. If not present, `__time` is considered to be first in the sort order, as it has always been.
2) `IncrementalIndex` and `IndexMerger` are updated to sort facts more flexibly; not always by time first.
3) `Metadata` (stored in metadata.drd) gains a `sortOrder` field.
4) MSQ can generate range-based shard specs even when not all columns are singly-valued strings. It merely stops accepting new clustering key fields when it encounters the first one that isn't a singly-valued string. This is useful because it enables range shard specs on `someDim` to be created for clauses like `CLUSTERED BY someDim, __time`.

Changes on the read path:

1) Add `StorageAdapter#getSortOrder` so query engines can tell how a segment is sorted.
2) Update `QueryableIndexStorageAdapter`, `IncrementalIndexStorageAdapter`, and `VectorCursorGranularizer` to throw errors when using granularities on non-time-ordered segments.
3) Update `ScanQueryEngine` to throw an error when using the time-ordering `order` parameter on non-time-ordered segments.
4) Update `TimeBoundaryQueryRunnerFactory` to perform a segment scan when running on a non-time-ordered segment.
5) Add a `sqlUseGranularity` context parameter that causes the SQL planner to avoid using granularities other than ALL.

Other changes:

1) Rename `DimensionsSpec` "hasCustomDimensions" to "hasFixedDimensions" and change the meaning subtly: it now returns true if the `DimensionsSpec` represents an unchanging list of dimensions, or false if there is some discovery happening. This is what call sites had expected anyway.
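As a sketch of the spec-based API described above, a `dimensionsSpec` that sorts by `someDim` first might look like the following (the dimension names `someDim` and `otherDim` are illustrative):

```json
"dimensionsSpec": {
  "useExplicitSegmentSortOrder": true,
  "dimensions": ["someDim", "__time", "otherDim"]
}
```

With this spec, segments are sorted by `someDim`, then `__time`, then `otherDim`. Without `useExplicitSegmentSortOrder`, `__time` would be implicitly prepended and the explicit `__time` entry would not be allowed.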
```java
@MethodSource("data")
@ParameterizedTest(name = "{index}:with context {0}")
public void testReplaceOnFooWithAllClusteredByDim(String contextName, Map<String, Object> context)
```

CodeQL check notice: Useless parameter.
```java
@MethodSource("data")
@ParameterizedTest(name = "{index}:with context {0}")
public void testReplaceOnFooWithAllClusteredByDimExplicitSort(String contextName, Map<String, Object> context)
```

CodeQL check notice: Useless parameter.
```java
@MethodSource("data")
@ParameterizedTest(name = "{index}:with context {0}")
public void testReplaceOnFooWithAllClusteredByDimThenTimeExplicitSort(String contextName, Map<String, Object> context)
```

CodeQL check notice: Useless parameter.
```java
@MethodSource("data")
@ParameterizedTest(name = "{index}:with context {0}")
public void testReplaceOnFooWithAllClusteredByDimThenTimeError(String contextName, Map<String, Object> context)
```

CodeQL check notice: Useless parameter.
```java
@MethodSource("data")
@ParameterizedTest(name = "{index}:with context {0}")
public void testReplaceOnFooWithAllClusteredByDimThenTimeError2(String contextName, Map<String, Object> context)
```

CodeQL check notice: Useless parameter.
```java
@MethodSource("data")
@ParameterizedTest(name = "{index}:with context {0}")
public void testReplaceOnFooWithAllClusteredByTimeThenDimExplicitSort(String contextName, Map<String, Object> context)
```

CodeQL check notice: Useless parameter.
```java
DataSchema schema = new DataSchema(
    IdUtilsTest.VALID_ID_CHARS,
    new TimestampSpec("time", "auto", null),
    DimensionsSpec.builder()
                  .setDimensions(
                      ImmutableList.of(
                          new StringDimensionSchema("__time"),
                          new StringDimensionSchema("dimA"),
                          new StringDimensionSchema("dimB")
                      )
                  )
                  .setDimensionExclusions(ImmutableList.of("dimC"))
                  .build(),
    null,
    new ArbitraryGranularitySpec(Granularities.DAY, ImmutableList.of(Intervals.of("2014/2015"))),
    null,
    null,
    jsonMapper
);
```

CodeQL check notice: Unread local variable.
```java
ConcurrentMap<IncrementalIndexRow, IncrementalIndexRow> rangeMap = descending ? subMap.descendingMap() : subMap;
return rangeMap.keySet();
} else {
  return Iterables.filter(
```
Review comment: I know we need this right now, but I'm not sure we will need this after #16533. timeRangeIterable is primarily used to support query granularity buckets in topN (in my branch, to support mark/resetToMark to move the cursor in the facts table to the correct granularity bucket without having to advance the cursor directly). I think we could just use iterator, or expose an alternative iterable for the incremental index cursor, if we aren't requesting time ordering.

Reply: I think it's also used for interval filtering even for non-granular cursors. It seems like it'd be useful for that at least.
```java
@Override
public Iterator<IncrementalIndexRow> iterator(boolean descending)
```
Review comment: descending is pretty tightly coupled with time ordering, but I guess it's harmless to implement.

Reply: It seemed to make more sense to implement it here rather than throw an error. I suppose in theory there could be a use case, in the future, for situations like ORDER BY userId DESC when the segment is sorted by userId.
```java
{
  final List<String> baseSortOrder = baseAdapter.getSortOrder();

  // Sorted the same way as the base segment, unless the unnested column shadows one of the base columns.
```
Review comment: Given that we will basically always be reading the unnested column, I guess the main expected utility of this will be if ordering by one of the columns that is not being unnested?

Reply: I figure the common case is that the unnested column will not be first in the sort order. So we'll return whatever prefix of the order is there, up to the unnested column.
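The rule discussed in this exchange (the unnested segment keeps the base segment's sort order only up to the column shadowed by the unnested output column) can be sketched as follows. The class and method names here are hypothetical, for illustration only:

```java
import java.util.ArrayList;
import java.util.List;

public class UnnestOrderingSketch
{
  /**
   * Sketch of the sort order reported for an unnested segment: keep the base
   * segment's ordering only up to (not including) the first column that the
   * unnested output column shadows, since that column's values are rewritten
   * by the unnest and are no longer sorted.
   */
  public static List<String> unnestedSortOrder(List<String> baseSortOrder, String outputColumnName)
  {
    final List<String> result = new ArrayList<>();
    for (String column : baseSortOrder) {
      if (column.equals(outputColumnName)) {
        break; // The shadowed column, and everything after it, is dropped.
      }
      result.add(column);
    }
    return result;
  }
}
```

This matches the common case described in the reply: when the unnested column is not first in the sort order, a useful prefix of the base ordering survives.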
```java
List<String> getSortOrder();

default boolean isTimeOrdered()
{
  return ColumnHolder.TIME_COLUMN_NAME.equals(Iterables.getFirst(getSortOrder(), null));
}
```
Review comment: In the context of #16533, this should probably live on CursorHolder instead of StorageAdapter, though it will require a bunch of tests to make and dispose of a cursor holder to check the sort order; otherwise it shouldn't be very disruptive. I wonder if we should include a direction as well, similar to that PR, maybe re-using the scan query ordering so that these two changes are compatible?

Reply: This sounds like a good idea, although in the interests of minimizing conflicts, it'd probably be good to do this after #16533 is merged (since it moves OrderBy around).
```java
public List<String> getSortOrder()
{
  return sortOrder;
}
```
Review comment: I know it cannot be specified during ingest with the current mechanisms, but it seems like we should include the direction of the ordering as well; maybe re-use the scan query ordering?
```java
if (sortOrdersToMerge.stream().anyMatch(Objects::isNull)) {
  return null;
```
Review comment: Should this be an error if they aren't all null? It seems like it should be pretty consistent across indexable adapters.

Reply: I made it lenient for two reasons:

- Merging of the other parts of the Metadata is also lenient.
- It isn't documented that this method requires that all Metadatas are sourced from segments created with the same ingestion spec and the same version of the software.
```java
String column = null;

for (final List<String> sortOrder : sortOrdersToMerge) {
  if (mergedSortOrder.size() >= sortOrder.size()) {
```
Review comment: Does this happen for columns with all null values or something? Maybe this method could use some comments to make the rationale behind the decisions clearer.

Reply: I added some comments and also fixed a problem: null is now treated as [__time] (which makes sense, since that was the only possible sort order prior to the sortOrder field being added). Due to this change the method is no longer @Nullable.
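The merge behavior discussed in this thread (null treated as [__time]; the result is the longest common prefix of all the per-segment sort orders) can be sketched roughly as follows. `mergeSortOrders` is a hypothetical name, not the actual method in Druid's Metadata merging code:

```java
import java.util.ArrayList;
import java.util.List;

public class SortOrderMergeSketch
{
  private static final List<String> SORTED_BY_TIME_ONLY = List.of("__time");

  /**
   * Merge per-segment sort orders into their longest common prefix.
   * A null sort order is treated as [__time], since that was the only
   * possible order before the "sortOrder" metadata field existed.
   */
  public static List<String> mergeSortOrders(List<List<String>> sortOrdersToMerge)
  {
    final List<String> merged = new ArrayList<>();
    if (sortOrdersToMerge.isEmpty()) {
      return merged;
    }
    while (true) {
      String column = null;
      for (List<String> sortOrder : sortOrdersToMerge) {
        if (sortOrder == null) {
          sortOrder = SORTED_BY_TIME_ONLY;
        }
        if (merged.size() >= sortOrder.size()) {
          return merged; // This order is exhausted; the common prefix ends here.
        }
        final String candidate = sortOrder.get(merged.size());
        if (column == null) {
          column = candidate;
        } else if (!column.equals(candidate)) {
          return merged; // Orders diverge at this position.
        }
      }
      merged.add(column);
    }
  }
}
```

Under this sketch, merging a legacy (null) order with ["__time", "dimA"] yields ["__time"], and merging two orders that diverge after the first column yields only that first column.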
```java
// It's possibly incorrect in some cases for sort order to be SORTED_BY_TIME_ONLY here, but for historical reasons,
// we're keeping this in place for now. The handling of "interval" in "makeCursors", which has been in place for
// some time, suggests we think the data is always sorted by time.
```

Review comment: Is this referring to the RowWalker with its skipToDateTime?
```java
incrementalIndexSchema.getTimestampSpec(),
this.gran,
this.rollup,
ColumnHolder.TIME_COLUMN_NAME.equals(Iterables.getFirst(dimensionOrder, null)) ? null : dimensionOrder
```
Review comment: Maybe we should always write this out instead of leaving it null? I guess it is done like this to make it easier to fill in older segments and new segments ordered by time first?

Reply: Hmm, I don't totally remember why this is conditional. It doesn't really make sense to be conditional, IMO. I updated it to always write it out.
```java
if (index.timePosition == 0) {
  return Metadata.SORTED_BY_TIME_ONLY;
} else {
  return Collections.emptyList();
```
Review comment: I suppose depending on the type of facts holder we could actually report something here (rollup should be ordered, I think?).

Reply: I think that makes sense. I added a comment about this being a possible change in the future.
From the discussion in #16849 (comment), after #16533 is merged we should revisit this one to (a) resolve conflicts and (b) replace …
```diff
- ColumnSelectorFactory selectorFactory = table.makeColumnSelectorFactory(joinableOffset, descending, closer);
+ ColumnSelectorFactory selectorFactory = table.makeColumnSelectorFactory(joinableOffset, closer);
```

CodeQL check notice: Possible confusion of local and field.
Pushed up a commit resolving conflicts with #16533. Main changes:
It is possible for the collation to refer to a field that isn't mapped, such as when the DML includes "CLUSTERED BY some_function(some_field)". In this case, the collation refers to a projected column that is not part of the field mappings. Prior to this patch, that would lead to an out of bounds list access on fieldMappings. This patch fixes the problem by identifying the position of __time in the fieldMappings first, rather than retrieving each collation field from fieldMappings. Fixes a bug introduced in apache#16849.
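The fix described above can be sketched as follows: locate `__time` in the field mappings once, instead of indexing `fieldMappings` by each collation field (which can go out of bounds when the collation refers to a projected column outside the mappings). The class and method names here are hypothetical, not Druid's actual MSQ classes:

```java
import java.util.List;

public class TimeCollationSketch
{
  /**
   * Sketch of the safer validation: find the position of __time among the
   * field mappings first, then compare it against the collation's leading
   * field index. Collation field indexes are never used to index into
   * fieldMappings, so out-of-range indexes (projected columns) are harmless.
   */
  public static boolean timeIsFirstCollationKey(List<String> fieldMappings, List<Integer> collationFieldIndexes)
  {
    final int timePosition = fieldMappings.indexOf("__time");
    return timePosition >= 0
           && !collationFieldIndexes.isEmpty()
           && collationFieldIndexes.get(0) == timePosition;
  }
}
```

Note how a collation index pointing past the end of the mappings (as with `CLUSTERED BY some_function(some_field)`) simply yields false instead of throwing an out-of-bounds exception.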
* Place __time in signatures according to sort order. Updates a variety of places to put __time in row signatures according to its position in the sort order, rather than always first, including: - InputSourceSampler. - ScanQueryEngine (in the default signature when "columns" is empty). - Various StorageAdapters, which also have the effect of reordering the column order in segmentMetadata queries, and therefore in SQL schemas as well. Follow-up to #16849. * Fix compilation. * Additional fixes. * Fix. * Fix style. * Omit nonexistent columns from the row signature. * Fix tests.
* MSQ: Fix validation of time position in collations. * Fix test. Better warning message.
* Segments primarily sorted by non-time columns. * Fixups from CI. * Fixes. * Fix missing arg. * Additional changes. * Fix logic. * Fixes. * Fix test. * Adjust test. * Remove throws. * Fix styles. * Fix javadocs. * Cleanup. * Smoother handling of null ordering. * Fix tests. * Missed a spot on the merge. * Fixups. * Avoid needless Filters.and. * Add timeBoundaryInspector to test. * Fix tests. * Fix FrameStorageAdapterTest. * Fix various tests. * Use forceSegmentSortByTime instead of useExplicitSegmentSortOrder. * Pom fix. * Fix doc.
This patch brings support for sorting segments by non-time columns (added in #16849) to MSQ compaction. Specifically, if `forceSegmentSortByTime` is set in the data schema, either via the user-supplied compaction config or in the inferred schema, the following steps are taken:

- Skip adding `__time` explicitly as the first column of the dimension schema, since it already comes as part of the schema.
- Ensure column mappings propagate `__time` in the order specified by the schema.
- Set `forceSegmentSortByTime` in the MSQ context.
@clintropolis @gianm The "maxIngestedEventTime" returned, -146136543-09-08T08:23:32.096Z, is wrong. It doesn't look like this code in QueryableIndexSegment is handling MaxIngestedEventTimeInspector.
The intention of the new … It does seem odd we return min datetime; I would have expected null. I don't think it makes much sense to have an implementation of …
@clintropolis … which does seem to return the actual time (min and max) of the rows/data (not the end interval of the segment), unless I am missing something.
Oh sorry, I meant the max timestamp within the segment interval. That code is the same (ish); it's just on …
PR apache#16849 changed the behavior such that maxIngestedEventTime is not updated for non-real-time data. This patch restores the old behavior for non-real-time data by using a TimeBoundaryInspector when MaxIngestedEventTimeInspector is not present.
Oops, the behavior change was unintentional (at least I didn't intend it, and this was my PR). This should restore the old behavior: #17686
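The restored behavior from #17686 can be sketched as follows, with simplified stand-ins for Druid's inspector interfaces (the real ones have richer signatures): prefer MaxIngestedEventTimeInspector when the segment provides one (realtime segments), otherwise fall back to TimeBoundaryInspector, rather than reporting a meaningless minimum datetime:

```java
// Hypothetical, simplified stand-ins for Druid's inspector interfaces.
interface MaxIngestedEventTimeInspector
{
  long getMaxIngestedEventTime();
}

interface TimeBoundaryInspector
{
  long getMaxTime();
}

public class MaxIngestedEventTimeSketch
{
  /**
   * Sketch of the fallback: use MaxIngestedEventTimeInspector when present
   * (realtime segments), else fall back to TimeBoundaryInspector's max time
   * (persisted, non-realtime segments), else report unknown (null).
   */
  public static Long maxIngestedEventTime(
      MaxIngestedEventTimeInspector eventTimeInspector,
      TimeBoundaryInspector timeBoundaryInspector
  )
  {
    if (eventTimeInspector != null) {
      return eventTimeInspector.getMaxIngestedEventTime();
    } else if (timeBoundaryInspector != null) {
      return timeBoundaryInspector.getMaxTime();
    } else {
      return null; // Unknown; better than returning a sentinel minimum datetime.
    }
  }
}
```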
Summary

Currently, segments are always sorted by `__time`, followed by the sort order provided by the user via `dimensionsSpec` or `CLUSTERED BY`. Sorting by `__time` enables efficient execution of queries involving time-ordering or granularity. Time-ordering is a simple matter of reading the rows in stored order, and granular cursors can be generated in streaming fashion.

However, for various workloads, it's better for storage footprint and query performance to sort by arbitrary orders that do not start with `__time`. With this patch, users can sort segments by such orders.

API

For spec-based ingestion, users add `forceSegmentSortByTime: false` to `dimensionsSpec`. The `dimensions` list determines the sort order. To define a sort order that includes `__time`, users explicitly include a dimension named `__time`.

For SQL-based ingestion, users set the context parameter `forceSegmentSortByTime: false`. The CLUSTERED BY clause is then used as the explicit segment sort order.

In both cases, when the new `forceSegmentSortByTime` parameter is `true` (the default), `__time` is implicitly prepended to the sort order, as it always was prior to this patch.

The new parameter is experimental for two main reasons. First, such segments can cause errors when loaded by older servers, due to violating their expectations that timestamps are always monotonically increasing. Second, even on newer servers, not all queries can run on non-time-sorted segments. Scan queries involving time-ordering and any query involving granularity will not run. (To partially mitigate this, a currently-undocumented SQL feature `sqlUseGranularity` is provided. When set to false, the SQL planner avoids using `granularity`.)

Main changes

Changes on the write path:

- `DimensionsSpec` can now optionally contain a `__time` dimension, which controls the placement of `__time` in the sort order. If not present, `__time` is considered to be first in the sort order, as it has always been.
- `IncrementalIndex` and `IndexMerger` are updated to sort facts more flexibly; not always by time first.
- `Metadata` (stored in metadata.drd) gains a `sortOrder` field.
- MSQ can generate range-based shard specs even when not all columns are singly-valued strings. It merely stops accepting new clustering key fields when it encounters the first one that isn't a singly-valued string. This is useful because it enables range shard specs on `someDim` to be created for clauses like `CLUSTERED BY someDim, __time`.
- Auto-compaction respects and propagates sort orders that don't start with `__time`.

Changes on the read path:

- Update cursor holders for `QueryableIndex` and `IncrementalIndex` to return the ordering of the underlying index, so query engines can tell how a segment is sorted.
- Update `CursorGranularizer` and `VectorCursorGranularizer` to throw errors when using granularities on non-time-ordered segments.
- Update `ScanQueryEngine` to throw an error when using the time-ordering `order` parameter on non-time-ordered segments.
- Update `TimeBoundaryQueryRunnerFactory` to perform a segment scan when running on a non-time-ordered segment.
- Add the `sqlUseGranularity` context parameter, which causes the SQL planner to avoid using granularities other than ALL. This is undocumented, because it's hopefully a short-term hack. The more ideal thing would be to have all the native queries work properly on these segments.
- Move `getMinTime`, `getMaxTime`, and `getMaxIngestedEventTime` from `StorageAdapter` to `TimeBoundaryInspector` and `MaxIngestedEventTimeInspector`. This is mainly necessary for `timeBoundary` queries to be able to tell whether they can use `getMinTime` and `getMaxTime`, vs. needing to use a cursor. Previously, `timeBoundary` assumed that an adapter backing a `TableDataSource` was guaranteed to have an exact `getMinTime` and `getMaxTime`. But after this patch, that isn't necessarily going to be the case (in particular, a non-time-sorted segment won't be able to know its exact min/max time without a full scan).

Other changes:

- Rename `DimensionsSpec#hasCustomDimensions` to `hasFixedDimensions` and change the meaning subtly: it now returns true if the `DimensionsSpec` represents an unchanging list of dimensions, or false if there is some discovery happening. This is what call sites had expected anyway.
- Remove `descending` from `Joinable#makeJoinMatcher`. Now that `descending` has more or less been phased out in favor of `ordering: [__time DESC]`, the joinable order no longer needs to be reversed. (All joined rows arising from the same left-hand row have the same value.)