### Backend

VL (Velox)

### Bug description

```scala
withSQLConf(
  ("spark.sql.adaptive.enabled", "false"),
  ("spark.sql.test.forceApplySortAggregate", "true"),
  ("spark.gluten.sql.columnar.forceShuffledHashJoin", "true")) {
  createTPCHNotNullTables()
  spark.sql("select l_partkey, count(1) from lineitem group by l_partkey").explain
}
```

```
== Physical Plan ==
VeloxColumnarToRowExec
+- ^(2) HashAggregateTransformer(keys=[l_partkey#77L], functions=[count(1)], output=[l_partkey#77L, count(1)#123L])
   +- ^(2) SortExecTransformer [l_partkey#77L ASC NULLS FIRST], false, 0
      +- ^(2) InputIteratorTransformer[l_partkey#77L, count#127L]
         +- ^(2) InputAdapter
            +- ^(2) RowToVeloxColumnar
               +- ^(2) Exchange hashpartitioning(l_partkey#77L, 5), ENSURE_REQUIREMENTS, [plan_id=466]
                  +- ^(2) VeloxColumnarToRowExec
                     +- ^(1) FlushableHashAggregateTransformer(keys=[l_partkey#77L], functions=[partial_count(1)], output=[l_partkey#77L, count#127L])
                        +- ^(1) SortExecTransformer [l_partkey#77L ASC NULLS FIRST], false, 0
                           +- ^(1) NativeFileScan parquet [l_partkey#77L] Batched: true, DataFilters: [], Format: Parquet, Location: InMemoryFileIndex(1 paths)[file:/Users/zml/Desktop/git_hub/incubator-gluten/backends-velox/target..., PartitionFilters: [], PushedFilters: [], ReadSchema: struct<l_partkey:bigint>
```

### Spark version

None

### Spark configurations

No response

### System information

No response

### Relevant logs

No response