[SPARK-53998][TESTS] Add additional E2E tests for RTM #52870
Conversation
Thank you, @jerrypeng.
```scala
import org.apache.spark.sql.test.TestSparkSession
import org.apache.spark.sql.types.{IntegerType, StringType, StructType}

class StreamRealTimeModeE2ESuite extends StreamRealTimeModeE2ESuiteBase {
```
Could you check the CI failure?
```
2025-11-04T07:01:24.2425213Z [error] Failed tests:
2025-11-04T07:01:24.2426468Z [error]   org.apache.spark.sql.streaming.StreamRealTimeModeSuite
```
```scala
override protected def createSparkSession =
  new TestSparkSession(
    new SparkContext(
      "local[15]",
```
Can we use smaller values like other tests? According to the commit logs, RTM test suites seem to use these kinds of high values. I'm wondering if this is required for some reason.
@dongjoon-hyun when a streaming query runs in RTM, all stages of the query run concurrently, which means the cluster needs to have a number of available slots equal to the total number of tasks across all stages.
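To illustrate the point above, here is a minimal sketch (plain Scala, no Spark dependency) of the slot arithmetic. The object name `RtmSlotSketch` and the per-stage task counts are hypothetical, chosen only to show why a master like `local[15]` follows from the total task count when every stage must run at once:

```scala
// Hypothetical sketch: in real-time mode, every stage of the query runs
// concurrently, so the local master must offer at least as many slots as
// the total number of tasks across all stages.
object RtmSlotSketch {
  // Hypothetical per-stage task counts for a query with three stages.
  val tasksPerStage: Seq[Int] = Seq(5, 5, 5)

  // Minimum slot count: the sum of tasks over all stages, since none of
  // the stages can wait for another to finish.
  val requiredSlots: Int = tasksPerStage.sum

  def main(args: Array[String]): Unit =
    // Derives the master URL a test suite would pass to SparkContext.
    println(s"local[$requiredSlots]") // prints "local[15]"
}
```

By contrast, a micro-batch query only needs enough slots for the widest single stage, which is why other test suites get away with much smaller `local[N]` values.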
Gentle ping, @jerrypeng.
Gentle ping once more, @jerrypeng.
Hi, @jerrypeng. Could you share your thoughts about this PR?
Gentle ping, @jerrypeng.
Gentle ping, @jerrypeng.
@dongjoon-hyun sorry for the delay. Let me take a look at the failure.
Force-pushed from dd64bea to 3a54608.
Thank you, @jerrypeng.
dongjoon-hyun left a comment:
+1, LGTM.
Merged to master/4.1 for Apache Spark 4.1.0 RC3.
### What changes were proposed in this pull request?
Add some additional end-to-end tests for RTM.

### Why are the changes needed?
To have better test coverage for RTM functionality.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
N/A. Only tests are added.

### Was this patch authored or co-authored using generative AI tooling?
No.

Closes #52870 from jerrypeng/SPARK-53998-2.

Authored-by: Jerry Peng <jerry.peng@databricks.com>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
(cherry picked from commit 7df7dad)
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
@dongjoon-hyun thank you!
Thank YOU!