Spark 3.5: Add utility to load table state reliably #10984
Conversation
spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/SparkTableUtil.java
public static Dataset<Row> loadTable(SparkSession spark, Table table, long snapshotId) {
  SparkTable sparkTable = new SparkTable(table, snapshotId, false);
  DataSourceV2Relation relation = createRelation(sparkTable, ImmutableMap.of());
The snapshotId (and timestamp) could also be supplied as an option in future Spark versions. Should we have a method that takes options as well?
We actually bypass the resolution completely and manually create the Dataset in this case.
We may need to pass options in the future but let's add that once there is a use case (we will simply overload this method).
    required(5, "stringCol", Types.StringType.get()));

@TestTemplate
public void testLoadingTableDirectly() {
This test would previously fail.
Suggestion: should we move this test to org.apache.iceberg.spark.TestSparkTableUtil?
I feel it belongs here, as it is important to check that the action can be invoked without loading tables via the Spark catalog (as that one would set the catalog name correctly). This is the only test that goes via validationCatalog.
karuppayya
left a comment
LGTM, left a nitpick. Thanks @aokolnychyi for the change and @nastra for reviewing.
Thanks, @karuppayya @nastra!
Backport of apache#10984; tests can be backported together with apache#11106.
While reviewing #10288, I realized we don't have a reliable way to load Iceberg table state as a Dataset in Spark. We shouldn't use load(table.name()) as it is not clear whether the name already includes the catalog name. This PR extends what we currently do for metadata tables to regular tables.
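For context, a minimal sketch of the technique the PR relies on: wrap the already-loaded Iceberg Table in a SparkTable and build a DataSourceV2Relation around it directly, so catalog/name resolution never runs. The createRelation helper in the diff is Iceberg-internal; this sketch inlines a rough equivalent using Spark's DataSourceV2Relation.create and Dataset.ofRows, and the class name LoadTableSketch is an illustrative assumption, not the actual SparkTableUtil code.

```java
import org.apache.iceberg.Table;
import org.apache.iceberg.spark.source.SparkTable;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.execution.datasources.v2.DataSourceV2Relation;
import org.apache.spark.sql.util.CaseInsensitiveStringMap;
import scala.Option;

// Illustrative sketch only; the real implementation lives in SparkTableUtil.
public final class LoadTableSketch {
  private LoadTableSketch() {}

  public static Dataset<Row> loadTable(SparkSession spark, Table table, long snapshotId) {
    // Wrap the Iceberg table at a pinned snapshot; 'false' disables eager refresh.
    SparkTable sparkTable = new SparkTable(table, snapshotId, false);

    // Build the relation directly around the table object, bypassing
    // catalog/identifier resolution (hence no ambiguity about catalog names).
    DataSourceV2Relation relation =
        DataSourceV2Relation.create(
            sparkTable, Option.empty(), Option.empty(), CaseInsensitiveStringMap.empty());

    // Turn the logical plan into a Dataset<Row> without going through load(name).
    return Dataset.ofRows(spark, relation);
  }
}
```

This avoids the failure mode the PR description calls out: load(table.name()) depends on how the table name was produced, whereas constructing the relation from the Table object itself is unambiguous.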