add values list expression #1165
Conversation
alamb left a comment
This is awesome @jimexist -- it will make creating small examples / demos much easier 🏅 🏅 🏅
The only thing I think needs to be fixed prior to merge is the contents of from_plan in utils.rs, where values is not recreated correctly from the input exprs.
The test cases are very thorough, as is the implementation.
I took it for a spin locally:
> SELECT * FROM (VALUES (1, 'one'), (2, 'two'), (3, 'three'));
Plan("subquery in FROM must have an alias")
> SELECT * FROM (VALUES (1, 'one'), (2, 'two'), (3, 'three')) as t;
+---------+---------+
| column1 | column2 |
+---------+---------+
| 1 | one |
| 2 | two |
| 3 | three |
+---------+---------+
3 rows in set. Query took 0.007 seconds.
> SELECT * FROM (VALUES (1, 'one'), (2, 'two'), (3, 'three')) as t(num, letter);
+-----+--------+
| num | letter |
+-----+--------+
| 1 | one |
| 2 | two |
| 3 | three |
+-----+--------+
3 rows in set. Query took 0.005 seconds.
Which is very cool (it is working the same as Postgres!)
    assert!(plan.is_err());
}
{
    let sql = "VALUES (1),('2')";
👍 for testing the negative case
if data.is_empty() {
    return Err(DataFusionError::Plan("Values list cannot be empty".into()));
}
// we have this empty batch as a placeholder to satisfy evaluation argument
I wonder what you think about moving the creation of the actual RecordBatch from ValuesExec::try_new to execute -- the rationale would be to make PhysicalPlan creation faster and push the actual work into execute, where it can potentially run concurrently with other parts of the plan.
Given the size of data in a VALUES statement, this is not likely to make any real difference, so I am fine with leaving the creation in the same place too -- I just wanted to mention it.
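A minimal sketch of the idea, using hypothetical simplified types (plain Vec<Vec<i64>> rows standing in for Vec<Vec<Expr>> and the materialized RecordBatch -- not DataFusion's real ExecutionPlan API), just to show the shape of the suggestion:

    #[derive(Debug)]
    struct PlanError(String);

    struct ValuesExec {
        // keep the unevaluated rows instead of a pre-built batch
        rows: Vec<Vec<i64>>,
    }

    impl ValuesExec {
        /// Plan construction only validates; no materialization happens here.
        fn try_new(rows: Vec<Vec<i64>>) -> Result<Self, PlanError> {
            if rows.is_empty() {
                return Err(PlanError("Values list cannot be empty".into()));
            }
            Ok(Self { rows })
        }

        /// The actual work is deferred to execute(), where it can run
        /// concurrently with other parts of the plan.
        fn execute(&self) -> Vec<Vec<i64>> {
            self.rows.clone()
        }
    }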
I can address this in a subsequent PR where more expr types are supported (e.g. CAST).
| LogicalPlan::Aggregate { .. }
| LogicalPlan::Repartition { .. }
| LogicalPlan::CreateExternalTable { .. }
| LogicalPlan::Values { .. }
I suspect it is not likely to matter, but constant folding could be applied to the Exprs in values. As written, this code will not apply constant folding to those expressions.
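A hedged sketch of what that could look like -- simplify_expr is a hypothetical helper standing in for whatever rewrite the constant-folding rule applies to each expression:

    LogicalPlan::Values { schema, values } => Ok(LogicalPlan::Values {
        schema: schema.clone(),
        // fold each row's expressions instead of skipping the node
        values: values
            .iter()
            .map(|row| row.iter().map(|e| simplify_expr(e.clone())).collect())
            .collect(),
    }),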
I'll address this in #1170.
datafusion/src/optimizer/utils.rs
}),
LogicalPlan::Values { schema, values } => Ok(LogicalPlan::Values {
    schema: schema.clone(),
    values: values.to_vec(),
I think the values here should be derived from expr -- the various optimizers call from_plan to create a new logical plan after potentially rewriting the expressions returned from LogicalPlan::expressions.
something like (untested)
values: expr.chunks(values[0].len()).map(|w| w.to_vec()).collect()
I'll add some comments to make that clearer
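Spelled out as a full match arm, a sketch of the suggested fix -- assuming from_plan receives the rewritten expressions as a flat expr: &[Expr] slice with the rows concatenated in order:

    LogicalPlan::Values { schema, values } => Ok(LogicalPlan::Values {
        schema: schema.clone(),
        // rebuild the rows from the rewritten flat expression list
        // rather than cloning the old expressions
        values: expr
            .chunks(values[0].len())
            .map(|row| row.to_vec())
            .collect(),
    }),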
    assert_batches_eq!(expected, &actual);
}
{
    let sql = "VALUES (NULL,'a'),(NULL,'b'),(3,'c')";
LOL, every case I could come up with to test, you have already covered.
It also would be cool to add
let me track that here
🎉
Which issue does this PR close?
Closes #1080
Rationale for this change
What changes are included in this PR?
Are there any user-facing changes?