
Conversation

Contributor

@WeichenXu123 WeichenXu123 commented Sep 3, 2019

What changes were proposed in this pull request?

Copy any "spark.hive.foo=bar" Spark property into the Hadoop conf as "hive.foo=bar".

Why are the changes needed?

This provides Spark-side config entries for Hive configurations.

Does this PR introduce any user-facing change?

No.

How was this patch tested?

Unit tests.
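The core of the proposed behavior is a simple key mapping. A minimal Python sketch (the actual change lives in Scala, in SparkHadoopUtil; the function name here is illustrative):

```python
def append_hive_configs(spark_conf: dict, hadoop_conf: dict) -> None:
    """Copy every 'spark.hive.*' property into the Hadoop conf with the
    'spark.' prefix stripped, mirroring what this PR proposes."""
    for key, value in spark_conf.items():
        if key.startswith("spark.hive."):
            # "spark.hive.foo" -> "hive.foo"
            hadoop_conf[key[len("spark."):]] = value

spark_conf = {
    "spark.hive.exec.dynamic.partition": "true",
    "spark.app.name": "demo",  # non-hive keys are left alone
}
hadoop_conf = {}
append_hive_configs(spark_conf, hadoop_conf)
# hadoop_conf == {"hive.exec.dynamic.partition": "true"}
```

Only the `spark.` prefix is removed, so the resulting key keeps its `hive.` namespace and is picked up by Hive directly from the Hadoop conf.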


SparkQA commented Sep 3, 2019

Test build #110036 has finished for PR 25661 at commit 036035c.

  • This patch fails Scala style tests.
  • This patch merges cleanly.
  • This patch adds no public classes.


SparkQA commented Sep 3, 2019

Test build #110038 has finished for PR 25661 at commit 2d8ea81.

  • This patch fails build dependency tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@WeichenXu123
Contributor Author

Jenkins retest this please.


SparkQA commented Sep 4, 2019

Test build #110078 has finished for PR 25661 at commit 2d8ea81.

  • This patch fails MiMa tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@WeichenXu123 WeichenXu123 changed the title [WIP][SPARK-28957][SQL] Copy any "spark.hive.foo=bar" spark properties into hadoop conf as "hive.foo=bar" [SPARK-28957][SQL] Copy any "spark.hive.foo=bar" spark properties into hadoop conf as "hive.foo=bar" Sep 4, 2019
Member

@felixcheung felixcheung left a comment


Probably OK... I was a bit worried that an existing app that already has spark.hive.* properties might now break, but the chance of that is probably low?


SparkQA commented Sep 4, 2019

Test build #110086 has finished for PR 25661 at commit 9f6b4f0.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@WeichenXu123
Contributor Author

@cloud-fan What do you think?
Should we override an existing spark.hive config?

@cloud-fan
Contributor

I like this proposal. It's much better than looking at all Hive configs and re-implementing them in Spark (see how InsertIntoHiveTable handles "hive.exec.dynamic.partition" and "hive.exec.dynamic.partition.mode").

@WeichenXu123 can you resolve the conflicts?

@cloud-fan
Contributor

Can we also update configuration.md#custom-hadoophive-configuration?
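For the docs update, a usage example along these lines might fit; the specific Hive config names below are illustrative of the new spark.hive.* passthrough, not part of the PR itself:

```shell
# With this change, any spark.hive.* property is forwarded to the
# Hadoop/Hive conf with the "spark." prefix stripped, so users can
# set Hive configs without spark.hadoop.*:
spark-submit \
  --conf spark.hive.exec.dynamic.partition=true \
  --conf spark.hive.exec.dynamic.partition.mode=nonstrict \
  my_app.py
# Inside the app, the Hadoop conf then contains
# hive.exec.dynamic.partition=true and
# hive.exec.dynamic.partition.mode=nonstrict.
```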


SparkQA commented Sep 24, 2019

Test build #111280 has finished for PR 25661 at commit b0e80e7.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.


SparkQA commented Sep 25, 2019

Test build #111323 has finished for PR 25661 at commit a119a6c.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@cloud-fan cloud-fan closed this in d8b0914 Sep 25, 2019
@cloud-fan
Contributor

thanks, merging to master!

@@ -79,8 +79,10 @@ private[spark] class SparkHadoopUtil extends Logging {
* Appends S3-specific, spark.hadoop.*, and spark.buffer.size configurations to a Hadoop
Member


nit: this function description seems to have fallen behind the implementation, but the PR looks good.
Thanks!
