Improve code flow in the First/Last vector aggregators and unify the numeric aggregators with the String implementations #16230
Conversation
```java
if (selectedPair.rhs != null) {
  putValue(buf, position, selectedPair.lhs, selectedPair.rhs);
} else if (useDefault) {
  putDefaultValue(buf, position, selectedPair.lhs);
} else {
  putNull(buf, position, selectedPair.lhs);
}
```
A single put method which could handle all cases may make this more natural.
Alternatively, it could be extracted into a method; it's present in 3-4 places.
I kept it separate to prevent autoboxing in putDefaultValue. The callers can put the default value directly, without converting it from a primitive to an Object and back. putNull can probably be removed.
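The trade-off discussed here can be sketched as follows (hypothetical names and a simplified buffer layout, not Druid's actual implementation): a type-specific `putDefaultValue` writes the primitive default straight into the buffer, while a single generic put forces the value through an `Object` and back.

```java
import java.nio.ByteBuffer;

// Sketch only: assumes an 8-byte time followed by an 8-byte double value.
public class FirstLastBufferSketch
{
  static void putValue(ByteBuffer buf, int position, long time, double value)
  {
    buf.putLong(position, time);
    buf.putDouble(position + Long.BYTES, value);
  }

  // Writes the primitive default directly, with no boxing round trip.
  static void putDefaultValue(ByteBuffer buf, int position, long time)
  {
    buf.putLong(position, time);
    buf.putDouble(position + Long.BYTES, 0.0d);
  }

  // A single generic put would force callers to box the value first.
  static void putObject(ByteBuffer buf, int position, long time, Double value)
  {
    buf.putLong(position, time);
    buf.putDouble(position + Long.BYTES, value == null ? 0.0d : value); // unboxes here
  }

  public static void main(String[] args)
  {
    ByteBuffer buf = ByteBuffer.allocate(Long.BYTES + Double.BYTES);
    putValue(buf, 0, 100L, 1.5d);
    System.out.println(buf.getLong(0) + " " + buf.getDouble(Long.BYTES));
  }
}
```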
```java
}
// Compare the selectedIndex's value to the value on the buffer. This way, we write to the buffer only once
// Weeds out empty vectors, where endRow == startRow
if (selectedIndex != null) {
```
I wonder if the following is ok:
- all time values are `null`
- `selectedIndex` will remain `null`
- no value will be written, as this condition will prevent that
Yes, in that case we'd want to leave the buffer as-is, since it may contain a precomputed first/last value from other batches of rows.
If all the time values are null, then we don't have to worry; the serialized pair would be {-DateTimes.MAX, 0}, which is fine.
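The behavior discussed in this thread can be sketched with a standalone helper (hypothetical names, simplified from the actual vector aggregator): the scan returns `null` both for an empty vector (`endRow == startRow`) and when every time value is null, so the caller skips the write and preserves whatever pair earlier batches left in the buffer.

```java
public class VectorSelectSketch
{
  // Returns the index of the latest non-null time in [startRow, endRow),
  // or null when there is nothing to select.
  static Integer selectLatestIndex(long[] times, boolean[] timeNull, int startRow, int endRow)
  {
    Integer selectedIndex = null;
    long latest = Long.MIN_VALUE;
    for (int i = startRow; i < endRow; i++) {
      if (timeNull != null && timeNull[i]) {
        continue; // only rows with time == null are skipped, never value == null rows
      }
      if (selectedIndex == null || times[i] >= latest) {
        latest = times[i];
        selectedIndex = i;
      }
    }
    return selectedIndex;
  }
}
```

Using `>=` on ties means any of the equal-timestamp rows may win, which matches the first/last contract described in this PR.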
…numeric aggregators with the String implementations (apache#16230)
…numeric aggregators with the String implementations (#16230) (#16396)
Co-authored-by: Laksh Singla <lakshsingla@gmail.com>
Description
This PR fixes the first and last vector aggregators and improves their readability. The following changes are introduced:

- Folding is broken in the vectorized versions: we consider the time before checking the folded object. If the numerical aggregator gets passed any other object type for some other reason (like a String), the aggregator considers it folded even though it shouldn't. We should convert such objects to the desired type and aggregate them properly.
- The aggregators must properly use generics. This minimizes the ClassCastException issues that can happen with mixed segment types. The string first/last aggregators are unified with the numeric versions as well.
- The aggregators must aggregate null values (https://github.com/apache/druid/blob/master/processing/src/main/java/org/apache/druid/query/aggregation/first/StringFirstLastUtils.java#L55-L56). The aggregator should only ignore pairs with time == null, not value == null.
- Time nullity is ignored when trying to vectorize the data.
- String versions are initialized with `DateTimes.MIN`, which is equal to `Long.MIN / 2`. This can cause incorrect results if the user supplies a custom time column. NOTE: this is still present because fixing it would require a larger refactor in all of the versions.

There is a difference in what users might expect from the results because the code flow has changed (for example, the direction of the for loops). However, this only changes the results, not the contract set by first/last aggregators, which is that if multiple values have the same timestamp, any of them can get picked.

If the column is non-existent, users might expect the timestamp to change from DateTime.MAX to Long.MAX, because the code incorrectly used DateTime.MAX to initialize the aggregator; however, with a custom timestamp column this might not be the case. The SQL query might be prohibited from using any Long since it requires a cast to the timestamp function that can fail, but AFAICT native queries don't have such limitations.
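As a rough illustration of the folding fix described above (hypothetical `TimeValuePair` and `readPair`, much simplified from the real aggregator code): only an actual serialized pair is treated as folded; any other object, such as a String from a mixed segment, is coerced to the numeric type, and a null value is kept rather than dropped.

```java
public class FoldSketch
{
  static final class TimeValuePair
  {
    final long time;
    final Double value;

    TimeValuePair(long time, Double value)
    {
      this.time = time;
      this.value = value;
    }
  }

  static TimeValuePair readPair(long rowTime, Object input)
  {
    if (input instanceof TimeValuePair) {
      // Genuinely folded: reuse the pair's own timestamp, not the row time.
      return (TimeValuePair) input;
    }
    // Not folded: coerce the raw object (Number, numeric String, ...) to a double.
    Double value;
    if (input == null) {
      value = null; // null values are still aggregated; only null times are skipped
    } else if (input instanceof Number) {
      value = ((Number) input).doubleValue();
    } else {
      value = Double.parseDouble(input.toString());
    }
    return new TimeValuePair(rowTime, value);
  }
}
```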
Release note
Fix up the vector first/last aggregators