@@ -146,7 +146,7 @@ abstract class Optimizer(catalogManager: CatalogManager)
       PushDownPredicates) :: Nil
   }

-  val batches = (Batch("Eliminate Distinct", Once, EliminateDistinct) ::
+  val batches = (
     // Technically some of the rules in Finish Analysis are not optimizer rules and belong more
     // in the analyzer, because they are needed for correctness (e.g. ComputeCurrentTime).
     // However, because we also use the analyzer to canonicalized queries (for view definition),
@@ -166,6 +166,7 @@ abstract class Optimizer(catalogManager: CatalogManager)
     //////////////////////////////////////////////////////////////////////////////////////////
     // Optimizer rules start here
     //////////////////////////////////////////////////////////////////////////////////////////
+    Batch("Eliminate Distinct", Once, EliminateDistinct) ::
     // - Do the first call of CombineUnions before starting the major Optimizer rules,
     //   since it can reduce the number of iteration and the other rules could add/move
     //   extra operators between two adjacent Union operators.
@@ -411,14 +412,26 @@ abstract class Optimizer(catalogManager: CatalogManager)
   }

 /**
- * Remove useless DISTINCT for MAX and MIN.
+ * Remove useless DISTINCT:
+ * 1. For some aggregate expression, e.g.: MAX and MIN.
+ * 2. If the distinct semantics is guaranteed by child.
  *
  * This rule should be applied before RewriteDistinctAggregates.
  */
 object EliminateDistinct extends Rule[LogicalPlan] {
-  override def apply(plan: LogicalPlan): LogicalPlan = plan.transformAllExpressionsWithPruning(
-    _.containsPattern(AGGREGATE_EXPRESSION)) {
-    case ae: AggregateExpression if ae.isDistinct && isDuplicateAgnostic(ae.aggregateFunction) =>
-      ae.copy(isDistinct = false)
+  override def apply(plan: LogicalPlan): LogicalPlan = plan.transformWithPruning(
+    _.containsPattern(AGGREGATE)) {
+    case agg: Aggregate =>
+      agg.transformExpressionsWithPruning(_.containsPattern(AGGREGATE_EXPRESSION)) {
+        case ae: AggregateExpression if ae.isDistinct &&
+            isDuplicateAgnostic(ae.aggregateFunction) =>
+          ae.copy(isDistinct = false)
+
+        case ae: AggregateExpression if ae.isDistinct &&
+            agg.child.distinctKeys.exists(
+              _.subsetOf(ExpressionSet(ae.aggregateFunction.children.filterNot(_.foldable)))) =>
@sigmod (Contributor) commented on Apr 12, 2022:
Is it correct?

If the input plan to this rule is:

SELECT a, count(distinct c) FROM (
   SELECT distinct a, b, c 
   FROM t
)
GROUP BY a

Will the added case branch rewrite the plan to

SELECT a, count(c) FROM (
   SELECT distinct a, b, c 
   FROM t
)
GROUP BY a

agg.child.distinctKeys is {a, b, c}
ExpressionSet(ae.aggregateFunction.children.filterNot(_.foldable)) is {c}.

@ulysses-you (Contributor Author) replied:
The distinctKeys of `distinct a, b, c` is {ExpressionSet(a, b, c)} as a single composite key, not three separate keys ExpressionSet(a), ExpressionSet(b), ExpressionSet(c).
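The set-of-sets point can be checked with a tiny model. Below is an illustrative Python sketch (frozensets standing in for Catalyst's ExpressionSet; `can_remove_distinct` is a hypothetical name, not a Spark API) of the subset test the new case branch performs:

```python
# Illustrative model only: frozensets stand in for Catalyst's ExpressionSet,
# and can_remove_distinct mimics the rule's distinctKeys subset check.

def can_remove_distinct(child_distinct_keys, agg_children):
    # The rewrite fires only if some distinct key of the child plan is a
    # subset of the aggregate function's non-foldable children.
    return any(key <= agg_children for key in child_distinct_keys)

# SELECT a, count(DISTINCT c) FROM (SELECT DISTINCT a, b, c FROM t) GROUP BY a:
# the child's distinctKeys is {{a, b, c}} -- one composite key, not three.
child_keys = {frozenset({"a", "b", "c"})}

print(can_remove_distinct(child_keys, frozenset({"c"})))            # False: keep DISTINCT
print(can_remove_distinct(child_keys, frozenset({"a", "b", "c"})))  # True: DISTINCT removable
```

So for the example above the rewrite correctly does not fire, because {a, b, c} is not a subset of {c}.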

@sigmod (Contributor) replied:
You're right. I forgot distinctKeys is a set of sets.

How about:

agg.child.distinctKeys.exists(
  key => !key.isEmpty() &&
    key.subsetOf(ExpressionSet(ae.aggregateFunction.children.filterNot(_.foldable))))

Alternatively, we can do a require here to make sure that we never return an empty key:
https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/LogicalPlanDistinctKeys.scala#L32
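The motivation for the nonEmpty guard can be seen in the same toy model; this is a hedged sketch, not Spark code, and `exists_usable_key` is an illustrative name. An empty key is a subset of every set, so without a guard (or the `require`) it would match any aggregate's children:

```python
# Illustrative sketch of the guarded check proposed above; not Spark code.

def exists_usable_key(child_distinct_keys, agg_children):
    # Mirrors: key => !key.isEmpty() && key.subsetOf(...)
    return any(key and key <= agg_children for key in child_distinct_keys)

# The hazard: an empty key is a subset of everything.
print(frozenset() <= frozenset({"c"}))                          # True

# With the guard, an empty key can no longer trigger the rewrite:
print(exists_usable_key({frozenset()}, frozenset({"c"})))       # False
print(exists_usable_key({frozenset({"c"})}, frozenset({"c"})))  # True
```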

@ulysses-you (Contributor Author) replied:
Makes sense; I added a `require` in LogicalPlanDistinctKeys.

+          ae.copy(isDistinct = false)
+      }
   }

   def isDuplicateAgnostic(af: AggregateFunction): Boolean = af match {
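For the first case, duplicate-agnostic means the aggregate's result is unchanged by dropping duplicates from its input. This is easy to sanity-check in plain Python (illustration only; the actual list of qualifying functions lives in `isDuplicateAgnostic`):

```python
# Duplicate-agnostic aggregates: dropping duplicates cannot change the result.
vals = [3, 1, 3, 2, 1]

print(max(set(vals)) == max(vals))  # True: MAX(DISTINCT x) == MAX(x)
print(min(set(vals)) == min(vals))  # True: MIN(DISTINCT x) == MIN(x)

# COUNT is not duplicate-agnostic: COUNT(DISTINCT x) can differ from COUNT(x).
print(len(set(vals)), len(vals))    # 3 5
```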
@@ -29,6 +29,12 @@ import org.apache.spark.sql.internal.SQLConf.PROPAGATE_DISTINCT_KEYS_ENABLED
  */
 trait LogicalPlanDistinctKeys { self: LogicalPlan =>
   lazy val distinctKeys: Set[ExpressionSet] = {
-    if (conf.getConf(PROPAGATE_DISTINCT_KEYS_ENABLED)) DistinctKeyVisitor.visit(self) else Set.empty
+    if (conf.getConf(PROPAGATE_DISTINCT_KEYS_ENABLED)) {
+      val keys = DistinctKeyVisitor.visit(self)
+      require(keys.forall(_.nonEmpty))
A Contributor commented:
Do we really need this require? It looks fine to have an empty set as the distinct keys, e.g. a global aggregate without grouping keys. It means the entire data set is distinct (has at most one row), and EliminateDistinct is OK with an empty set in distinct keys.

@ulysses-you (Contributor Author) replied on Apr 20, 2022:

I think it's more about avoiding unexpected behavior. It would be a correctness issue if other operators returned an empty distinct key. And as you mentioned, the global aggregate is already optimized by EliminateDistinct and OptimizeOneRowPlan, so it's fine?

The Contributor replied:
My point is that DistinctKeyVisitor does not handle global aggregates now, and an empty expression set is still a valid distinct key, so why do we forbid it?

@ulysses-you (Contributor Author) replied:
We have already forbidden it inside DistinctKeyVisitor. Do you think we should support that case?

The Contributor replied:
This is only done at the else branch, not the if branch. I think we have two options:

  1. keep the requirement, and add the filter in the if branch as well
  2. remove the requirement, and remove the filter from the else branch.

I prefer option 2, as I think an empty expression set does mean something as a distinct key; we should not ignore this information. It also works the same as other distinct keys:

  1. It can replace all other distinct keys as it's a subset of any expression set
  2. It can satisfy any distinct key requirement, e.g. remove unnecessary distinct in aggregate functions.
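Both properties claimed for the empty key follow directly from basic set algebra. A quick illustration (plain Python frozensets, not Catalyst ExpressionSets):

```python
# The empty set is a subset of every set, including itself.
empty = frozenset()

# 1. It can replace any other distinct key: it is a subset of all of them.
print(empty <= frozenset({"a", "b"}))  # True

# 2. It satisfies any distinct-key requirement, so an aggregate over a plan
#    with an empty distinct key (at most one row) never needs DISTINCT.
print(empty <= frozenset({"c"}))       # True
print(empty <= empty)                  # True
```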

@ulysses-you (Contributor Author) replied:
> This is only done at the else branch, not the if branch

It's a good point. I will do a followup soon.

+      keys
+    } else {
+      Set.empty
+    }
   }
 }
@@ -33,6 +33,7 @@ class EliminateDistinctSuite extends PlanTest
   }

   val testRelation = LocalRelation($"a".int)
+  val testRelation2 = LocalRelation($"a".int, $"b".string)

   Seq(
     Max(_),
@@ -71,4 +72,21 @@ class EliminateDistinctSuite extends PlanTest
       comparePlans(Optimize.execute(query), answer)
     }
   }
+
+  test("SPARK-38832: Remove unnecessary distinct in aggregate expression by distinctKeys") {
+    val q1 = testRelation2.groupBy($"a")($"a")
+      .rebalance().groupBy()(countDistinct($"a") as "x", sumDistinct($"a") as "y").analyze
+    val r1 = testRelation2.groupBy($"a")($"a")
+      .rebalance().groupBy()(count($"a") as "x", sum($"a") as "y").analyze
+    comparePlans(Optimize.execute(q1), r1)
+
+    // not a subset of distinct attr
+    val q2 = testRelation2.groupBy($"a", $"b")($"a", $"b")
+      .rebalance().groupBy()(countDistinct($"a") as "x", sumDistinct($"a") as "y").analyze
+    comparePlans(Optimize.execute(q2), q2)
+
+    // child distinct key is empty
+    val q3 = testRelation2.groupBy($"a")(countDistinct($"a") as "x").analyze
+    comparePlans(Optimize.execute(q3), q3)
+  }
 }