[GLUTEN-8115][CORE] Refine the BuildSideRelation transform to support all scenarios #8116

philo-he merged 5 commits into apache:main
Conversation
Run Gluten Clickhouse CI on x86
@philo-he @zhztheplayer @jackylee-ch @zzcclp Could you please take a look when you find the time? Thanks.
Do you have a specific query example?
TPC-DS Q5 is a reuse-exchange example, see #7807. After reuse, ColumnarSubqueryBroadcastExec's buildKeys and the exchange's buildKeys may not be the same reference, and in our internal case the referenced names even differ because of aliases. In Spark, the transform of the relation is always carried out in BroadcastExchangeExec, so this problem does not exist there. We can save the buildKeys of BroadcastExchangeExec; this way, in the subsequent transform, we can perform the relation transform just as Spark would have done in BroadcastExchangeExec. In that case the references are always the same, and we can then take the desired index value for the buildKeys from the exchange's output.
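The "references are always the same" point can be illustrated with a small self-contained model. The `AttributeRef`/`ExprId` types below are stand-ins for Catalyst's `AttributeReference`/`ExprId` and are not Gluten code: keys saved from the exchange itself resolve against the exchange's own output by expression id, so the ordinal lookup cannot miss.

```scala
// Illustrative model only: AttributeRef/ExprId stand in for Catalyst's
// AttributeReference/ExprId. This is not Gluten or Spark code.
final case class ExprId(id: Long)
final case class AttributeRef(name: String, exprId: ExprId)

object KeyOrdinals {
  // Resolve each build key to an ordinal in the plan's output by exprId,
  // ignoring the display name (which an Alias may have changed).
  def bindKeyOrdinals(buildKeys: Seq[AttributeRef], output: Seq[AttributeRef]): Seq[Int] =
    buildKeys.map(k => output.indexWhere(_.exprId == k.exprId))
}
```

For example, a key displayed as `b_alias` that still carries exprId 2 resolves to ordinal 1 in an output `[a#1, b#2]`, while a key from a different plan branch with a different exprId would yield -1 — exactly the failure mode that reusing the exchange's own buildKeys avoids.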
```scala
// Copied from org.apache.spark.sql.execution.joins.HashJoin#canRewriteAsLongType;
// we should keep consistent with it to identify the LongHashedRelation.
private def canRewriteAsLongType(keys: Seq[Expression]): Boolean = {
  // TODO: support BooleanType, DateType and TimestampType
  keys.forall(_.dataType.isInstanceOf[IntegralType]) &&
    keys.map(_.dataType.defaultSize).sum <= 8
}
```
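As a side note on why the 8-byte budget matters: integral keys whose default sizes sum to at most 8 bytes can be packed into a single `Long`, which is what `LongHashedRelation` hashes on. A minimal, self-contained sketch of the idea (not Spark's actual packing code):

```scala
// Hedged illustration, not Spark code: integral keys totaling <= 8 bytes can
// be combined into one 64-bit value, enabling the long-keyed hash relation.
object LongKeyPacking {
  // Pack two 4-byte Int keys into one 8-byte Long.
  def packTwoInts(a: Int, b: Int): Long =
    (a.toLong << 32) | (b.toLong & 0xFFFFFFFFL)

  // Unpack for verification; .toInt keeps only the low 32 bits.
  def unpack(l: Long): (Int, Int) = ((l >>> 32).toInt, l.toInt)
}
```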
@yikf, if Spark changes this logic on its side, e.g., to add support for other types, could an unexpected failure happen in Gluten? Seems there is no other way, though.

There doesn't seem to be any other way at the moment.

@yikf, ok, let's merge this PR first. Thanks for your efforts!
philo-he left a comment:
Verified with internal TPC-DS 3TB test.
This pull request introduces a safer and more robust approach to handling Spark's `BroadcastMode` during serialization. The main improvement is a new `SafeBroadcastMode` abstraction and related utilities, which avoid the serialization issue that caused a StackOverflowError during broadcast exchanges. The `BroadcastMode` field was introduced in apache#8116, the PR that caused the observed issue: `HashedRelationBroadcastMode` embeds Catalyst expression trees, which are not safe to Kryo-serialize when running with `spark.kryo.referenceTracking=false` (our internal default).

With this change, the broadcast payload contains only primitives and byte arrays, with no Catalyst trees. For bound keys, we serialize just the column ordinals (plus the null-aware flag); for computed keys (e.g., `upper(col)`), we serialize the key expressions once as Java bytes and deserialize them only where needed to build projections.

Ran the internal test set (50 queries) and ran another query specifically checking that `spark.gluten.velox.offHeapBroadcastBuildRelation.enabled=true` works.
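A rough sketch of the payload split described above. All names here (`SafePayload`, `BoundKeys`, `ComputedKeys`, `ExprCodec`) are invented for illustration and are not the actual Gluten classes:

```scala
import java.io._

// Hypothetical sketch of a Kryo-safe broadcast payload: only primitives and
// byte arrays cross the wire, never Catalyst expression trees.
sealed trait SafePayload extends Serializable
// Bound keys: just the column ordinals plus the null-aware flag.
final case class BoundKeys(ordinals: Seq[Int], isNullAware: Boolean) extends SafePayload
// Computed keys (e.g. upper(col)): expressions Java-serialized once into
// bytes, deserialized only where a projection must be rebuilt.
final case class ComputedKeys(exprBytes: Array[Byte], isNullAware: Boolean) extends SafePayload

object ExprCodec {
  def toBytes(o: AnyRef): Array[Byte] = {
    val bos = new ByteArrayOutputStream()
    val oos = new ObjectOutputStream(bos)
    try oos.writeObject(o) finally oos.close()
    bos.toByteArray
  }
  def fromBytes[T](bytes: Array[Byte]): T = {
    val ois = new ObjectInputStream(new ByteArrayInputStream(bytes))
    try ois.readObject().asInstanceOf[T] finally ois.close()
  }
}
```

The design point is that Java serialization of the expression trees happens once, on the driver, and the resulting bytes are opaque to Kryo, so disabled reference tracking can no longer blow the stack on a cyclic or deeply nested tree.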
What changes were proposed in this pull request?
Fix #8115.
If exchange reuse happens, the `buildKeys` of `ColumnarSubqueryBroadcastExec` and the `buildKeys` of `BroadcastExchangeExec` may not be the same, making it difficult to construct the projection for the transform. Therefore, `BuildSideRelation` should retain the `BroadcastMode` of `BroadcastExchangeExec` to simplify the projection of the transform.

How was this patch tested?

Existing unit tests and manual tests.
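The proposed shape can be sketched with a simplified, self-contained model. The types below are stand-ins, not Gluten's actual `BuildSideRelation`/`BroadcastMode` classes: the relation keeps the mode captured at the exchange, so every consumer (including a reused exchange's subquery broadcast) transforms against the same saved keys.

```scala
// Illustrative model only: a build-side relation that retains the broadcast
// mode captured at BroadcastExchangeExec instead of re-deriving keys at use
// sites. Not Gluten code.
final case class KeyExpr(name: String)
sealed trait Mode
case object IdentityMode extends Mode
final case class HashedMode(keys: Seq[KeyExpr]) extends Mode

final case class BuildSideRelation(batches: Seq[String], mode: Mode) {
  // All consumers transform against the same saved mode, so key references
  // always line up regardless of which plan branch triggers the transform.
  def transform: Seq[String] = mode match {
    case IdentityMode     => batches
    case HashedMode(keys) => batches.map(b => s"$b projected on ${keys.map(_.name).mkString(",")}")
  }
}
```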