diff --git a/docs/sql-migration-guide.md b/docs/sql-migration-guide.md
index 699f9acc8c50e..96f2c5dcf9735 100644
--- a/docs/sql-migration-guide.md
+++ b/docs/sql-migration-guide.md
@@ -40,8 +40,6 @@ license: |
 
 ### DDL Statements
 
-  - In Spark 3.0, `CREATE TABLE` without a specific provider uses the value of `spark.sql.sources.default` as its provider. In Spark version 2.4 and below, it was Hive. To restore the behavior before Spark 3.0, you can set `spark.sql.legacy.createHiveTableByDefault.enabled` to `true`.
-
   - In Spark 3.0, when inserting a value into a table column with a different data type, the type coercion is performed as per the ANSI SQL standard. Certain unreasonable type conversions, such as converting `string` to `int` and `double` to `boolean`, are disallowed. A runtime exception is thrown if the value is out of range for the data type of the column. In Spark version 2.4 and below, type conversions during table insertion are allowed as long as they are valid `Cast`s. When inserting an out-of-range value into an integral field, the low-order bits of the value are inserted (the same as Java/Scala numeric type casting). For example, if 257 is inserted into a field of byte type, the result is 1. The behavior is controlled by the option `spark.sql.storeAssignmentPolicy`, with a default value of "ANSI". Setting the option to "Legacy" restores the previous behavior.
 
   - The `ADD JAR` command previously returned a result set with the single value 0. It now returns an empty result set.
diff --git a/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala b/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
index c4922b56f0756..4b99cd5ab313f 100644
--- a/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
+++ b/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
@@ -2235,7 +2235,7 @@ object SQLConf {
         s"instead of the value of ${DEFAULT_DATA_SOURCE_NAME.key}.")
       .version("3.0.0")
       .booleanConf
-      .createWithDefault(false)
+      .createWithDefault(true)
 
   val LEGACY_BUCKETED_TABLE_SCAN_OUTPUT_ORDERING =
     buildConf("spark.sql.legacy.bucketedTableScan.outputOrdering")
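
For context on what the default flip changes in practice, here is a minimal sketch (not part of the patch) of `CREATE TABLE` without a `USING` clause under both settings of `spark.sql.legacy.createHiveTableByDefault.enabled`. It assumes a local `SparkSession` built with Hive support; the object and table names are made up for illustration.

```scala
import org.apache.spark.sql.SparkSession

object CreateTableDefaultProviderSketch {
  def main(args: Array[String]): Unit = {
    // Hive support is needed so that CREATE TABLE can actually produce a Hive SerDe table.
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("create-table-default-provider")
      .enableHiveSupport()
      .getOrCreate()

    // Default restored by this patch: true. CREATE TABLE without a USING clause
    // creates a Hive SerDe table, as in Spark 2.4 and below.
    spark.conf.set("spark.sql.legacy.createHiveTableByDefault.enabled", "true")
    spark.sql("CREATE TABLE t_legacy (id INT)")

    // With the flag off, the same statement picks the provider configured by
    // spark.sql.sources.default (parquet unless overridden).
    spark.conf.set("spark.sql.legacy.createHiveTableByDefault.enabled", "false")
    spark.sql("CREATE TABLE t_datasource (id INT)")

    // DESCRIBE TABLE EXTENDED reports the provider chosen for each table.
    spark.sql("DESCRIBE TABLE EXTENDED t_legacy").show(false)
    spark.sql("DESCRIBE TABLE EXTENDED t_datasource").show(false)

    spark.stop()
  }
}
```

With the patched default of `true`, the first statement matches the Spark 2.4 behavior; flipping the flag to `false` opts back into the `spark.sql.sources.default` provider.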
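
The insertion-coercion note kept as context in the doc hunk above can be exercised the same way. The following is a sketch under the assumption of an existing Spark 3.0+ session named `spark`; `bytes_t` is a hypothetical table.

```scala
// `bytes_t` has a single byte (TINYINT) column, as in the migration note's example.
spark.sql("CREATE TABLE bytes_t (b TINYINT) USING parquet")

// Default policy: ANSI. An out-of-range insert raises an exception instead of
// silently wrapping, so the statement is left commented out.
spark.conf.set("spark.sql.storeAssignmentPolicy", "ANSI")
// spark.sql("INSERT INTO bytes_t VALUES (257)")  // fails: 257 is out of range for TINYINT

// Legacy policy: the pre-3.0 Cast semantics keep the low-order bits, so 257 is stored as 1.
spark.conf.set("spark.sql.storeAssignmentPolicy", "LEGACY")
spark.sql("INSERT INTO bytes_t VALUES (257)")
spark.sql("SELECT b FROM bytes_t").show()  // shows 1
```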