[SPARK-53968][SQL] Store decimal precision loss conf in arithmetic expressions #52681
Closed
stefankandic wants to merge 4 commits into apache:master from stefankandic:fixViewDec
Conversation
cloud-fan approved these changes on Oct 22, 2025
Contributor
thanks, merging to master!
huangxiaopingRD pushed a commit to huangxiaopingRD/spark that referenced this pull request on Nov 25, 2025
[SPARK-53968][SQL] Store decimal precision loss conf in arithmetic expressions
Closes apache#52681 from stefankandic/fixViewDec. Authored-by: Stefan Kandic <stefan.kandic@databricks.com> Signed-off-by: Wenchen Fan <wenchen@databricks.com>
dongjoon-hyun pushed a commit that referenced this pull request on Feb 27, 2026
### What changes were proposed in this pull request?

Fix V2FunctionBenchmark, which was broken by SPARK-53968 (#52681):

```
$ build/sbt "sql/Test/runMain org.apache.spark.sql.connector.functions.V2FunctionBenchmark"
...
[info] running (fork) org.apache.spark.sql.connector.functions.V2FunctionBenchmark
[error] WARNING: Using incubator modules: jdk.incubator.vector
[info] 13:43:41.084 WARN org.apache.spark.util.Utils: Your hostname, H27212-MAC-01.local, resolves to a loopback address: 127.0.0.1; using 10.242.159.140 instead (on interface en0)
[info] 13:43:41.087 WARN org.apache.spark.util.Utils: Set SPARK_LOCAL_IP if you need to bind to another address
[info] 13:43:41.260 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[info] Running benchmark: scalar function (long + long) -> long, result_nullable = true codegen = true
[info] Running case: native_long_add
[error] Exception in thread "main" java.lang.NullPointerException: Cannot invoke "org.apache.spark.sql.catalyst.expressions.NumericEvalContext.evalMode()" because the return value of "org.apache.spark.sql.catalyst.expressions.BinaryArithmetic.evalContext()" is null
[error] 	at org.apache.spark.sql.catalyst.expressions.BinaryArithmetic.evalMode(arithmetic.scala:201)
[error] 	at org.apache.spark.sql.catalyst.expressions.BinaryArithmetic.failOnError(arithmetic.scala:209)
[error] 	at org.apache.spark.sql.catalyst.expressions.BinaryArithmetic.initQueryContext(arithmetic.scala:251)
[error] 	at org.apache.spark.sql.catalyst.expressions.SupportQueryContext.$init$(Expression.scala:670)
[error] 	at org.apache.spark.sql.catalyst.expressions.BinaryArithmetic.<init>(arithmetic.scala:193)
[error] 	at org.apache.spark.sql.connector.functions.V2FunctionBenchmark$NativeAdd.<init>(V2FunctionBenchmark.scala:111)
[error] 	at org.apache.spark.sql.connector.functions.V2FunctionBenchmark$.$anonfun$scalarFunctionBenchmark$3(V2FunctionBenchmark.scala:88)
...
```

### Why are the changes needed?

Fix the benchmark.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Verified locally; GHA benchmark reports for JDK 17 and 21 are updated.

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes #54540 from pan3793/SPARK-55744.

Lead-authored-by: Cheng Pan <chengpan@apache.org>
Co-authored-by: pan3793 <pan3793@users.noreply.github.com>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
What changes were proposed in this pull request?
Currently, arithmetic expressions such as `Add` and `Multiply` use the configuration `spark.sql.decimalOperations.allowPrecisionLoss` to determine their output type when working with decimal values. This approach is problematic because if the expression is transformed or copied, its return type can change depending on the active configuration value. This issue can happen during view resolution: one value of the config may be in effect during analysis and a different one during query optimization. If a referenced expression changes type and that reference is reused elsewhere in the plan, a plan validation error is triggered.
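For illustration, here is a minimal spark-shell sketch (not taken from this PR) showing how the analyzed result type of a decimal multiply depends on the active conf value; the types in the comments assume the conf's documented precision/scale rules:

```scala
// Sketch only: run in spark-shell; the output types in the comments assume the
// documented behavior of spark.sql.decimalOperations.allowPrecisionLoss.
spark.conf.set("spark.sql.decimalOperations.allowPrecisionLoss", "true")
spark.sql("SELECT CAST(1 AS DECIMAL(38,10)) * CAST(1 AS DECIMAL(38,10)) AS x").printSchema()
// x: decimal(38,6) -- scale is reduced so the result fits in 38 digits

spark.conf.set("spark.sql.decimalOperations.allowPrecisionLoss", "false")
spark.sql("SELECT CAST(1 AS DECIMAL(38,10)) * CAST(1 AS DECIMAL(38,10)) AS x").printSchema()
// x: decimal(38,20) -- scale is preserved, precision is capped at 38
```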
Why are the changes needed?
To address this, we should follow a similar approach to what was done for ANSI mode: store the relevant context directly within the expression as part of its state. This ensures the expression remains stable and unaffected by configuration changes when it’s copied or transformed. To make this transition smooth, I’ve generalized the existing EvalMode used for ANSI so that it can be extended to multiple configuration dimensions.
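As a rough illustration of the approach (a simplified, self-contained sketch; the actual `NumericEvalContext` and expression classes in Spark differ), the idea is to capture the conf value at construction time so the output type is a pure function of the expression's own state:

```scala
// Simplified sketch, not the actual Spark internals: the conf value is captured
// in the expression's state, so copying or transforming it cannot change its type.
object EvalContextSketch {
  final case class DecimalType(precision: Int, scale: Int)

  // Hypothetical context bundling the conf dimensions the expression depends on.
  final case class NumericEvalContext(ansiEnabled: Boolean, allowPrecisionLoss: Boolean)

  final case class MultiplySketch(left: DecimalType, right: DecimalType, ctx: NumericEvalContext) {
    // The result type is derived only from the captured context, never from a global conf.
    def resultType: DecimalType = {
      val p = left.precision + right.precision + 1
      val s = left.scale + right.scale
      if (ctx.allowPrecisionLoss) adjust(p, s) else bounded(p, s)
    }
    // Trade away scale (down to a minimum of 6) so the result fits in 38 digits.
    private def adjust(p: Int, s: Int): DecimalType =
      if (p <= 38) DecimalType(p, s)
      else DecimalType(38, math.max(38 - (p - s), math.min(s, 6)))
    // Cap precision and scale at 38 without giving up scale.
    private def bounded(p: Int, s: Int): DecimalType =
      DecimalType(math.min(p, 38), math.min(s, 38))
  }

  def main(args: Array[String]): Unit = {
    val d = DecimalType(38, 10)
    val lossy  = NumericEvalContext(ansiEnabled = false, allowPrecisionLoss = true)
    val strict = NumericEvalContext(ansiEnabled = false, allowPrecisionLoss = false)
    println(MultiplySketch(d, d, lossy).resultType)   // DecimalType(38,6)
    println(MultiplySketch(d, d, strict).resultType)  // DecimalType(38,20)
  }
}
```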
Does this PR introduce any user-facing change?
No.
How was this patch tested?
Added a new unit test in `SQLViewSuite` that was failing with a plan validation error before this change.
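For context, a hedged sketch of the kind of scenario such a test covers (view and column names are illustrative, not the exact test code):

```scala
// Sketch only: define a view while precision loss is allowed, then query it with
// the conf flipped. Before this PR, re-resolving the view under the new conf value
// could change the multiply's result type mid-plan and fail plan validation.
spark.conf.set("spark.sql.decimalOperations.allowPrecisionLoss", "true")
spark.sql(
  """CREATE OR REPLACE VIEW v AS
    |SELECT CAST(1 AS DECIMAL(38,10)) * CAST(1 AS DECIMAL(38,10)) AS x""".stripMargin)

spark.conf.set("spark.sql.decimalOperations.allowPrecisionLoss", "false")
spark.sql("SELECT x FROM v").collect()  // should now resolve and run cleanly
```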
Was this patch authored or co-authored using generative AI tooling?
No.