
perf: optimize spark_hex dictionary path by avoiding dictionary expansion#19832

Merged
Jefffrey merged 6 commits into apache:main from lyne7-sc:perf/spark_hex_dictionary
Jan 18, 2026

Conversation

@lyne7-sc
Contributor

Which issue does this PR close?

Follow up to #19738

Rationale for this change

The current hex implementation expands DictionaryArray inputs into a regular array, which causes loss of dictionary encoding and redundant hex computation for repeated values.

What changes are included in this PR?

  • Apply hex encoding only to dictionary values
  • Avoid expanding dictionary arrays during execution
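The dictionary path can be illustrated with a minimal sketch in plain Rust, using `Vec`s as stand-ins for arrow-rs dictionary arrays (the real code operates on `DictionaryArray` values and keys; the function names below are hypothetical):

```rust
// Sketch only: a dictionary array stores distinct `values` plus per-row
// `keys` indexing into them. Hex-encoding only the values does the expensive
// work once per distinct value instead of once per row.

/// Hex-encode a byte slice (uppercase, as Spark's `hex` does).
fn hex_encode(bytes: &[u8]) -> String {
    bytes.iter().map(|b| format!("{b:02X}")).collect()
}

/// Dictionary path: encode the (small) values vector once; the keys,
/// which carry the per-row repetition, are reused unchanged.
fn hex_dictionary(values: &[&[u8]], keys: &[usize]) -> (Vec<String>, Vec<usize>) {
    let encoded: Vec<String> = values.iter().map(|v| hex_encode(v)).collect();
    (encoded, keys.to_vec())
}
```

The output stays dictionary-encoded: hex runs once per distinct value, and the keys vector is passed through untouched, which is exactly what expanding the dictionary first would throw away.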

Benchmark

| Size | Before | After | Speedup |
| ---- | ------ | ----- | ------- |
| 1024 | 8.3 µs | 7.2 µs | 1.15× |
| 4096 | 42.9 µs | 34.5 µs | 1.24× |
| 8192 | 91.6 µs | 71.7 µs | 1.28× |

Are these changes tested?

Yes. Existing unit tests and sqllogictest tests pass.

Are there any user-facing changes?

No.

@github-actions bot added the sqllogictest (SQL Logic Tests (.slt)) and spark labels Jan 15, 2026
Comment on lines +271 to +274
let encoded_values_array: ArrayRef = match encoded_values {
ColumnarValue::Array(a) => a,
ColumnarValue::Scalar(s) => Arc::new(s.to_array()?),
};
Contributor

We should probably refactor hex_encode_bytes and hex_encode_int64 to return arrays only; their signatures say they return ColumnarValue, but they never return the scalar variant, forcing handling like this.
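The shape of the suggested refactor can be sketched with stand-in types (the real helpers live in DataFusion and return its `ColumnarValue`; the names and simplified signatures below are hypothetical):

```rust
// Sketch only: a mock of DataFusion's ColumnarValue, using Vec<String>
// in place of an arrow ArrayRef.
enum ColumnarValue {
    Array(Vec<String>),
    Scalar(String),
}

// Before: the signature promises either variant, but only Array is ever
// built, so every caller must still match on the enum.
fn hex_encode_all(values: &[i64]) -> ColumnarValue {
    ColumnarValue::Array(values.iter().map(|v| format!("{v:X}")).collect())
}

// After: return the array directly; callers wrap in ColumnarValue::Array
// only at the point where the public API requires it.
fn hex_encode_all_refactored(values: &[i64]) -> Vec<String> {
    values.iter().map(|v| format!("{v:X}")).collect()
}
```

With the array-returning signature, the `match encoded_values { ... }` unwrapping shown above disappears, since the scalar arm was unreachable anyway.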

  }
- DataType::Dictionary(_, value_type) => {
+ DataType::Dictionary(_, _) => {
  let dict = as_dictionary_array::<Int32Type>(&array);
Contributor

nit: we should have some check that the dictionary has an i32 key type, otherwise this will panic
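A minimal sketch of such a guard, using a stand-in enum in place of arrow's `DataType` (the real check would inspect the dictionary's key field before the `as_dictionary_array::<Int32Type>` downcast, which panics on a key-type mismatch; names here are hypothetical):

```rust
// Sketch only: stand-in for the key type carried by
// DataType::Dictionary(key_type, value_type).
#[derive(Debug, PartialEq)]
enum DictKeyType { Int8, Int16, Int32, Int64 }

// Return a plan-time error for unsupported key types instead of letting
// the downcast panic at execution time.
fn check_dictionary_key_type(key_type: &DictKeyType) -> Result<(), String> {
    match key_type {
        DictKeyType::Int32 => Ok(()),
        other => Err(format!("hex: unsupported dictionary key type {other:?}")),
    }
}
```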

  let dict_values = dict.values();

- match **value_type {
+ let encoded_values: ColumnarValue = match dict_values.data_type() {
Contributor

We might want to consider arms for LargeUtf8, views, etc.

FROM VALUES ('foo'), ('bar'), ('foo'), (NULL), ('baz'), ('bar');

query T
SELECT hex(dict_col) FROM t_dict_utf8;
Contributor

Can we check the output type here with arrow_typeof to ensure they are dictionaries?

Contributor Author

After running the SLT tests for hex, it seems the planner might unpack dictionary-encoded inputs like Dictionary(Int32, Utf8) or Dictionary(Int32, Int64) into their underlying types (Utf8View or Int64) before calling the function. However, Dictionary(Binary) appears to stay as a dictionary.

logical_plan
01)Projection: arrow_typeof(hex(CAST(t_dict_utf8.dict_col AS Utf8View)))
physical_plan
01)ProjectionExec: expr=[arrow_typeof(hex(CAST(dict_col@0 AS Utf8View)))]

logical_plan
01)Projection: arrow_typeof(hex(t_dict_binary.dict_col))
physical_plan
01)ProjectionExec: expr=[arrow_typeof(hex(dict_col@0))]

Contributor

I guess it's related to this issue

We can still push through with this PR even though it only works for binary (we can change the tests to binary here)

Contributor Author

Thank you for the clarification. The tests have been updated to use Dictionary(Binary).

@Jefffrey Jefffrey added this pull request to the merge queue Jan 18, 2026
Merged via the queue into apache:main with commit 8b179d9 Jan 18, 2026
28 checks passed
@Jefffrey
Contributor

Thanks @lyne7-sc

@lyne7-sc lyne7-sc deleted the perf/spark_hex_dictionary branch February 12, 2026 01:48
de-bgunter pushed a commit to de-bgunter/datafusion that referenced this pull request Mar 24, 2026
…ansion (apache#19832)

