
Spark 3.4: Add write options to override the compression properties of the table#8313

Merged
aokolnychyi merged 18 commits into apache:master from roryqi:add_compress_option
Aug 29, 2023

Conversation

@roryqi
Contributor

@roryqi roryqi commented Aug 13, 2023

What changes were proposed in this pull request?

Flink already supports using write options to override the compression properties of the table.
Following pull request #6049, this PR adds write options to override the compression properties of the table for Spark 3.4.

Why are the changes needed?

First, some old tables in the production environment use the gzip compression codec. If we rewrite their data with the zstd compression codec, we can reduce the data volume and the cost of storage.
Second, this PR is also meaningful once zstd becomes the default compression codec, because it lets us choose different compression levels when writing and rewriting Iceberg data.

Does this PR introduce any user-facing change?

Yes.
This PR introduces the write options compression-codec, compression-level, and compression-strategy (documentation added).
It also introduces the Spark config properties spark.sql.iceberg.write.compression-codec, spark.sql.iceberg.write.compression-level, and spark.sql.iceberg.write.compression-strategy. (I will add the documentation after this PR is merged.)
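For illustration, the layering between these settings (a per-write option, a Spark session property, and a table property) can be sketched as below. This is a minimal hypothetical helper, not the actual SparkWriteConf API; the class and method names are made up.

```java
// Hypothetical sketch: a write option overrides a Spark session property,
// which overrides the table property (the usual Spark/Iceberg layering).
public class CompressionConfSketch {
  public static String resolveCodec(String writeOption, String sessionConf, String tableProperty) {
    if (writeOption != null) {
      return writeOption;   // e.g. .option("compression-codec", "zstd") on the writer
    } else if (sessionConf != null) {
      return sessionConf;   // e.g. spark.sql.iceberg.write.compression-codec
    } else {
      return tableProperty; // e.g. the table's own write compression property
    }
  }
}
```

With this layering, an unset write option simply falls through to the session property, and then to the table property.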

How was this patch tested?

Added a new unit test plus manual verification.
The unit test verifies that the compression codec is correct.
The compression config options were verified by hand.
[WeCom screenshots of the manual verification]

@roryqi roryqi marked this pull request as draft August 13, 2023 08:06
@github-actions github-actions Bot added the spark label Aug 13, 2023
@roryqi roryqi changed the title from "Spark 3.4: Add write options to override the compression properties of the Table" to "Spark 3.4: Add write options to override the compression properties of the table" Aug 13, 2023
@roryqi
Contributor Author

roryqi commented Aug 13, 2023

@chenjunjiedada @ConeyLiu Could you help me review this pr?

@roryqi roryqi marked this pull request as ready for review August 13, 2023 08:42
@chenjunjiedada
Collaborator

@jerqi Could you add some doc like in #6049 since compression-level and compression-strategy are targeted for different formats separately? Also, it would be better to add a description of the default value and the available options. Others LGTM. Thanks for the contribution.

@github-actions github-actions Bot added the docs label Aug 13, 2023
@roryqi
Contributor Author

roryqi commented Aug 13, 2023

@jerqi Could you add some doc like in #6049 since compression-level and compression-strategy are targeted for different formats separately? Also, it would be better to add a description of the default value and the available options. Others LGTM. Thanks for the contribution.

Thanks for your review. I have added the write option documentation for Spark.
I also noticed that we don't have documentation for the Spark properties; I want to raise another PR to add it after this one is merged. I created issue #8314 to track this.

@roryqi roryqi requested a review from chenjunjiedada August 13, 2023 12:11
@roryqi
Contributor Author

roryqi commented Aug 13, 2023

@pvary @nastra Could you help me review this pr?

@roryqi roryqi requested a review from ConeyLiu August 14, 2023 02:35
Contributor

@ConeyLiu ConeyLiu left a comment


+1

import org.junit.runners.Parameterized;

@RunWith(Parameterized.class)
public class TestCompressionSettings {
Contributor

The project is currently in the process of moving away from JUnit 4, meaning that new tests should be written purely in JUnit 5 using AssertJ assertions. See also https://iceberg.apache.org/contribute/#testing for some additional info.

Contributor Author

Thanks for your review. I got it. I have migrated the test to the JUnit 5 framework.

Contributor Author

I need to add some new unit tests for SparkPositionDeltaWrite and SparkPositionDeletesRewrite.java, which requires extending SparkCatalogTestBase. Since SparkCatalogTestBase still uses JUnit 4, I will keep these tests on JUnit 4 to avoid conflicts.

@roryqi roryqi requested a review from nastra August 14, 2023 09:02
public static final String WAP_BRANCH = "spark.wap.branch";

// Controls write compress options
public static final String COMPRESSION_CODEC = "spark.sql.iceberg.write.compression-codec";
Contributor

We rarely use extra write and read namespaces in SQL properties; the only exception is preserving grouping, as it was not clear otherwise. What about dropping write from all names?

spark.sql.iceberg.compression-codec
spark.sql.iceberg.compression-level
spark.sql.iceberg.compression-strategy

Contributor Author

Fixed.

this.dataSparkType = dataSparkType;
this.equalityDeleteSparkType = equalityDeleteSparkType;
this.positionDeleteSparkType = positionDeleteSparkType;

Contributor

Optional: What about inlining if it fits on one line?

...
this.equalityDeleteSparkType = equalityDeleteSparkType;
this.positionDeleteSparkType = positionDeleteSparkType;
this.writeProperties = writeProperties != null ? writeProperties : ImmutableMap.of();

Contributor Author

Fixed.

.isEqualToIgnoringCase(properties.get(COMPRESSION_CODEC));
}

if (PARQUET.equals(format)) {
Contributor Author

I found that we can't set table properties for the RewritePositionDeletes action; it will always use Parquet as the format.

Contributor Author

@roryqi roryqi Aug 16, 2023

[WeCom screenshot] In my unit test, I set the table properties `DEFAULT_FILE_FORMAT` and `DELETE_DEFAULT_FILE_FORMAT` to orc, but the properties of the `position_deletes` table are empty.

Contributor

That seems like an issue worth looking into after this change.

cc @szehon-ho.

Contributor Author

OK, I will investigate it.

Contributor Author

I found the root cause of this case.
The class BaseMetadataTable overrides the method properties():

  @Override
  public Map<String, String> properties() {
    return ImmutableMap.of();
  }

Option 1:
We can modify the class BaseMetadataTable.
#8428

Option 2:
We can modify the class PositionDeletesTable.
#8429

I prefer option 1. I feel the properties of a metadata table should respect those of its base table. We also don't have a way to modify the properties of a metadata table. @aokolnychyi @szehon-ho WDYT?
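A minimal sketch of what option 1 implies, with hypothetical names (the real classes are BaseMetadataTable and the Iceberg Table interface): the metadata table delegates properties() to its backing table instead of returning an empty map.

```java
import java.util.Map;

// Hypothetical interface standing in for the Iceberg Table API.
interface TableSketch {
  Map<String, String> properties();
}

// Hypothetical sketch of option 1: delegate properties() to the base table
// instead of returning ImmutableMap.of().
class MetadataTableSketch implements TableSketch {
  private final TableSketch baseTable;

  MetadataTableSketch(TableSketch baseTable) {
    this.baseTable = baseTable;
  }

  @Override
  public Map<String, String> properties() {
    return baseTable.properties();
  }
}
```

With this delegation, a metadata table such as `position_deletes` would observe the same file-format and compression properties as the table it belongs to.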

Member

Hi, yea I am ok with option 1 as well, @aokolnychyi what do you think?

@roryqi roryqi requested a review from aokolnychyi August 16, 2023 11:32
@roryqi
Contributor Author

roryqi commented Aug 17, 2023

Do we also have to cover other places that use SparkFileWriterFactory? For instance, SparkPositionDeltaWrite?

Thanks for your review, my mistake. I will modify the code and add the unit tests.

@aokolnychyi I have addressed the comments. Could you take another look if you have time?

@roryqi
Contributor Author

roryqi commented Aug 23, 2023

@nastra @aokolnychyi Gently ping. Sorry to bother you. Could you have another look if you have time?

@aokolnychyi
Contributor

Let me take another look.

@aokolnychyi
Contributor

This change looks good to me. I am not sure about adding a new utility class vs just using SparkWriteConf.
@jerqi, what do you think?

@roryqi
Contributor Author

roryqi commented Aug 26, 2023

This change looks good to me. I am not sure about adding a new utility class vs just using SparkWriteConf. @jerqi, what do you think?

The second solution seems more elegant. Thanks for your suggestion. I have refactored the code to follow the suggestion.

@roryqi roryqi requested a review from aokolnychyi August 26, 2023 10:48
Comment thread spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/source/SparkWrite.java Outdated
writeProperties.put(ORC_COMPRESSION_STRATEGY, orcCompressionStrategy());
break;
default:
throw new IllegalArgumentException(String.format("Unknown file format %s", format));
Contributor

I wonder whether it is worth failing the query. Maybe, just do nothing here?

Contributor Author

@roryqi roryqi Aug 29, 2023

I referred to Flink's implementation in #6049; Flink chooses to fail the query. Failing the query also has a benefit: we can spot a wrong config option more easily.
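The fail-fast behavior discussed here can be sketched roughly as follows. The property keys and the class name are illustrative only, not the exact Iceberg constants:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: build per-format write properties and fail fast on an
// unknown file format, mirroring the Flink approach in #6049.
public class WritePropertiesSketch {
  public static Map<String, String> build(String format, String codec) {
    Map<String, String> props = new HashMap<>();
    switch (format) {
      case "parquet":
        props.put("parquet.compression-codec", codec);
        break;
      case "orc":
        props.put("orc.compression-codec", codec);
        break;
      case "avro":
        props.put("avro.compression-codec", codec);
        break;
      default:
        // Failing the query surfaces a misconfigured file format immediately.
        throw new IllegalArgumentException(String.format("Unknown file format %s", format));
    }
    return props;
  }
}
```

The alternative (silently doing nothing in the default branch) would let a typo in the format pass unnoticed.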

Comment thread spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/SparkWriteConf.java Outdated
Contributor

@aokolnychyi aokolnychyi left a comment

Looks good to me. Left a few suggestions.

private final Context context;

private boolean cleanupOnAbort = true;
private final Map<String, String> writeProperties;
Contributor

Minor: This variable should be co-located with other final variables above. It is super minor but let's do this in a follow-up.

Contributor Author

@roryqi roryqi Aug 30, 2023

OK, I have raised a follow-up PR, #8421, which has been merged.

@aokolnychyi aokolnychyi merged commit ab0a2fd into apache:master Aug 29, 2023
@aokolnychyi
Contributor

Thanks, @jerqi! Thanks for reviewing, @ConeyLiu @chenjunjiedada.

@roryqi
Contributor Author

roryqi commented Aug 30, 2023

Thanks, @aokolnychyi! Thanks @ConeyLiu, @chenjunjiedada, and @nastra, too.

writeProperties.put(PARQUET_COMPRESSION, parquetCompressionCodec());
String parquetCompressionLevel = parquetCompressionLevel();
if (parquetCompressionLevel != null) {
writeProperties.put(PARQUET_COMPRESSION_LEVEL, parquetCompressionLevel);
Member

@szehon-ho szehon-ho Aug 31, 2023

Hi guys, while working on #8299, I noticed the test failing and that this is missing the delete file compression override. I think the TestCompression delete file compression check passes by chance; I have a fix for it over there: 43e3cab

Contributor Author

@roryqi roryqi Aug 31, 2023

Sorry for this mistake. I will check how to validate the correctness of this case.
[WeCom screenshot]
If we don't set the delete compression codec, we reuse the data compression codec, so the test case passed.
I have raised PR #8438 to fix this issue.
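The fallback described above can be sketched as a one-liner (hypothetical names; the real logic lives in the Iceberg write configuration):

```java
// Hypothetical sketch: when no delete-file codec is configured, fall back to
// the data-file codec, which is why the test passed by chance.
public class DeleteCodecSketch {
  public static String deleteCodec(String deleteCodecOverride, String dataCodec) {
    return deleteCodecOverride != null ? deleteCodecOverride : dataCodec;
  }
}
```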
