@@ -238,7 +238,7 @@ abstract class Expression extends TreeNode[Expression] {
    *
    * See [[Canonicalize]] for more details.
    */
-  def semanticEquals(other: Expression): Boolean =
+  final def semanticEquals(other: Expression): Boolean =
     deterministic && other.deterministic && canonicalized == other.canonicalized

Member: Yes, I agree with the idea.
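
Making semanticEquals final forces every comparison through canonicalized, and equality of canonical forms is symmetric by construction, which is exactly the property SPARK-35742 asks for. A minimal sketch of that property (assuming a Catalyst classpath; Add is one of the commutative operators whose operands are reordered during canonicalization):

```scala
import org.apache.spark.sql.catalyst.expressions.{Add, AttributeReference}
import org.apache.spark.sql.types.IntegerType

val a = AttributeReference("a", IntegerType)()
val b = AttributeReference("b", IntegerType)()

// a + b and b + a canonicalize to the same tree, so the check
// holds (or fails) identically in both directions.
val e1 = Add(a, b)
val e2 = Add(b, a)
assert(e1.semanticEquals(e2) && e2.semanticEquals(e1))
```
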
@@ -278,11 +278,6 @@ case class AttributeReference(
     case _ => false
   }
 
-  override def semanticEquals(other: Expression): Boolean = other match {
-    case ar: AttributeReference => sameRef(ar)
-    case _ => false
-  }
-
   override def semanticHash(): Int = {
     this.exprId.hashCode()
   }

Comment on lines -281 to -285:

Member: Don't need canonicalized?

Contributor Author: The default implementation works for AttributeReference.
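
For reference, expression canonicalization normalizes an attribute's name and qualifier but keeps its ExprId, so the inherited canonicalized-based check behaves like the removed sameRef comparison. A quick sketch (assuming a Catalyst classpath):

```scala
import org.apache.spark.sql.catalyst.expressions.AttributeReference
import org.apache.spark.sql.types.IntegerType

val ref = AttributeReference("col", IntegerType)()
val renamed = ref.withName("renamed")                 // same ExprId, new name
val fresh = AttributeReference("col", IntegerType)()  // same name, fresh ExprId

assert(ref.semanticEquals(renamed))  // same underlying attribute
assert(!ref.semanticEquals(fresh))   // different ExprId, so not the same ref
```
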
@@ -76,13 +76,6 @@ abstract class SubqueryExpression(
     AttributeSet.fromAttributeSets(outerAttrs.map(_.references))
   override def children: Seq[Expression] = outerAttrs ++ joinCond
   override def withNewPlan(plan: LogicalPlan): SubqueryExpression
-  override def semanticEquals(o: Expression): Boolean = o match {
-    case p: SubqueryExpression =>
-      this.getClass.getName.equals(p.getClass.getName) && plan.sameResult(p.plan) &&
-        children.length == p.children.length &&
-        children.zip(p.children).forall(p => p._1.semanticEquals(p._2))
-    case _ => false
-  }
 }
 
 object SubqueryExpression {

Contributor Author: All the subclasses of SubqueryExpression have implemented canonicalized.
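
Note that sameResult is itself canonicalization-based (it compares the two plans' canonicalized forms), so once every subclass canonicalizes its plan, the default canonicalized-based semanticEquals subsumes the removed override. A small illustration (assuming a Catalyst classpath):

```scala
import org.apache.spark.sql.catalyst.expressions.AttributeReference
import org.apache.spark.sql.catalyst.plans.logical.LocalRelation
import org.apache.spark.sql.types.IntegerType

// Same schema but fresh ExprIds: plan canonicalization normalizes the ids away.
val r1 = LocalRelation(AttributeReference("a", IntegerType)())
val r2 = LocalRelation(AttributeReference("a", IntegerType)())

assert(r1.sameResult(r2))
assert(r1.canonicalized == r2.canonicalized)  // the comparison sameResult makes
```
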
@@ -170,4 +170,11 @@ class CanonicalizeSuite extends SparkFunSuite {
       assert(nestedExpr2.canonicalized != nestedExpr3.canonicalized)
     }
   }
+
+  test("SPARK-35742: Expression.semanticEquals should be symmetrical") {
+    val attr = AttributeReference("col", IntegerType)()
+    val expr = PromotePrecision(attr)
+    assert(expr.semanticEquals(attr))
+    assert(attr.semanticEquals(expr))
+  }
 }
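
For context, the asymmetry this test guards against: PromotePrecision canonicalizes away to its wrapped child (which is why the test passes), so expr.semanticEquals(attr) already held via the canonicalized comparison, while attr.semanticEquals(expr) went through the now-removed AttributeReference override and rejected any non-AttributeReference. With the final implementation, both directions reduce to one check:

```scala
import org.apache.spark.sql.catalyst.expressions.{AttributeReference, PromotePrecision}
import org.apache.spark.sql.types.IntegerType

val attr = AttributeReference("col", IntegerType)()
val expr = PromotePrecision(attr)

// Both sides collapse to the same canonical tree, so the final
// semanticEquals holds in both directions.
assert(expr.canonicalized == attr.canonicalized)
```
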
@@ -20,6 +20,7 @@ package org.apache.spark.sql.execution
 import org.apache.spark.rdd.RDD
 import org.apache.spark.sql.catalyst.InternalRow
 import org.apache.spark.sql.catalyst.expressions._
+import org.apache.spark.sql.catalyst.plans.QueryPlan
 
 /**
  * Similar to [[SubqueryBroadcastExec]], this node is used to store the
@@ -40,6 +41,11 @@ case class SubqueryAdaptiveBroadcastExec(
       "SubqueryAdaptiveBroadcastExec does not support the execute() code path.")
   }
 
+  protected override def doCanonicalize(): SparkPlan = {
+    val keys = buildKeys.map(k => QueryPlan.normalizeExpressions(k, child.output))
+    copy(name = "dpp", buildKeys = keys, child = child.canonicalized)
+  }
+
   override protected def withNewChildInternal(newChild: SparkPlan): SubqueryAdaptiveBroadcastExec =
     copy(child = newChild)
 }
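
What QueryPlan.normalizeExpressions buys here: attribute ExprIds in buildKeys are rewritten to their ordinal in child.output, so two nodes that differ only in fresh ExprIds canonicalize identically. A small sketch (assuming a Catalyst classpath; the wildcard import supplies the implicit Seq[Attribute]-to-AttributeSeq conversion):

```scala
import org.apache.spark.sql.catalyst.expressions._
import org.apache.spark.sql.catalyst.plans.QueryPlan
import org.apache.spark.sql.types.IntegerType

// Positionally the same key, but carrying two different ExprIds.
val k1 = AttributeReference("k", IntegerType)()
val k2 = AttributeReference("k", IntegerType)()

// Both normalize to an attribute keyed by ordinal 0 of their plan's output.
assert(QueryPlan.normalizeExpressions(k1, Seq(k1)) ==
  QueryPlan.normalizeExpressions(k2, Seq(k2)))
```
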
@@ -73,7 +73,9 @@ case class HashAggregateExec(
   // This is for testing. We force TungstenAggregationIterator to fall back to the unsafe row hash
   // map and/or the sort-based aggregation once it has processed a given number of input rows.
   private val testFallbackStartsAt: Option[(Int, Int)] = {
-    sqlContext.getConf("spark.sql.TungstenAggregate.testFallbackStartsAt", null) match {
+    Option(sqlContext).map { sc =>

Contributor Author: This is a hidden bug. A SubqueryExpression can be sent to the executor side to build a Projection, and be put into EquivalentExpressions, which needs to call canonicalized.

This means Spark may serialize and send HashAggregateExec to the executor side, where sqlContext will be null.

It has been hidden for a long time because ScalarSubquery didn't implement canonicalized, so the bug was never triggered. However, it also means semanticHash was wrong.

I think it only affects common subquery elimination, and shouldn't be a serious bug.
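
The executor-side path described above, in miniature: registering an expression tree in EquivalentExpressions keys entries on semanticEquals/semanticHash, both of which force canonicalized on the (deserialized) expression; this is where a canonicalization that touches sqlContext would hit a null. A sketch (assuming a Catalyst classpath):

```scala
import org.apache.spark.sql.catalyst.expressions._
import org.apache.spark.sql.types.IntegerType

val x = AttributeReference("x", IntegerType)()
val equiv = new EquivalentExpressions

// Deduplicating x + x exercises semanticEquals/semanticHash, and therefore
// canonicalized, on every subtree it visits.
equiv.addExprTree(Add(x, x))
```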

Contributor Author: In the long term, I think we should only send an "expression evaluator" to the executor side. The semantic check should only be done on the driver side.

peter-toth (Contributor), Jun 15, 2021: Sorry for the late comment @cloud-fan, but I think I've run into this issue before:
https://github.com/apache/spark/pull/28885/files#diff-9b62cef6bfdeb6c802bb120c7a724a974d5067a69585285bebb64c48603f8d6fR105-R108. The point is that there might be other nodes where canonicalization on the executor side can cause issues. SortExec.enableRadixSort is the other one I found.

Contributor Author: InSubqueryExec already implemented canonicalized before this PR, so we need to fix these bugs anyway.

I have an idea to fix this problem in all physical plans (see the sketch after this comment):

  1. Remove SparkPlan.sqlContext, so that we can catch all of its callers.
  2. Add @transient final val session = SparkSession.getActiveSession.orNull to replace the previous sqlContext.
  3. Override conf in SparkPlan: if (session != null) session.sessionState.conf else SQLConf.get.

@peter-toth what do you think? AFAIK the only reason to access SparkPlan.sqlContext on the executor side is to read a conf, and we can do that with SQLConf.get there.
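
A minimal sketch of that three-step idea (not the merged implementation; the follow-up PR is linked at the end of this thread, and SparkPlanSketch is a hypothetical stand-in for SparkPlan):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.internal.SQLConf

abstract class SparkPlanSketch {
  // Captured eagerly on the driver; @transient, so it deserializes
  // to null on executors.
  @transient final val session: SparkSession =
    SparkSession.getActiveSession.orNull

  // On executors, fall back to the thread-local/default conf.
  def conf: SQLConf =
    if (session != null) session.sessionState.conf else SQLConf.get
}
```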

Contributor: Sounds good to me.

Contributor Author: I'm quite busy this week and may not have time to implement this idea soon. @peter-toth feel free to pick it up and open a PR if you have time, or I'll do it next week or the week after. Thanks in advance!

Contributor: Ok, thanks. I will try to open a PR this week.

Contributor: Opened #32947.

+      sc.getConf("spark.sql.TungstenAggregate.testFallbackStartsAt", null)
+    }.orNull match {
       case null | "" => None
       case fallbackStartsAt =>
         val splits = fallbackStartsAt.split(",").map(_.trim)
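
For completeness, the conf value being parsed is a comma-separated pair of row counts, presumably (rows before falling back to the row-based hash map, rows before falling back to sort-based aggregation) per the comment at the top of the hunk. A sketch of the parse:

```scala
// e.g. "2, 3": hypothetical thresholds for the two fallbacks described above.
val fallbackStartsAt = "2, 3"
val splits = fallbackStartsAt.split(",").map(_.trim)
val thresholds = (splits(0).toInt, splits(1).toInt)
assert(thresholds == (2, 3))
```
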
@@ -71,9 +71,8 @@ case class ScalarSubquery(
   override def toString: String = plan.simpleString(SQLConf.get.maxToStringFields)
   override def withNewPlan(query: BaseSubqueryExec): ScalarSubquery = copy(plan = query)
 
-  override def semanticEquals(other: Expression): Boolean = other match {
-    case s: ScalarSubquery => plan.sameResult(s.plan)
-    case _ => false
+  override lazy val canonicalized: Expression = {
+    ScalarSubquery(plan.canonicalized.asInstanceOf[BaseSubqueryExec], ExprId(0))
   }
 
   // the first column in first row from `query`.

@@ -127,11 +126,6 @@ case class InSubqueryExec(
   override def withNewPlan(plan: BaseSubqueryExec): InSubqueryExec = copy(plan = plan)
   final override def nodePatternsInternal: Seq[TreePattern] = Seq(IN_SUBQUERY_EXEC)
 
-  override def semanticEquals(other: Expression): Boolean = other match {
-    case in: InSubqueryExec => child.semanticEquals(in.child) && plan.sameResult(in.plan)
-    case _ => false
-  }
-
Comment on lines -130 to -134:

Member: Ditto.

Contributor Author: It's already there, at line 164.

   def updateResult(): Unit = {
     val rows = plan.executeCollect()
     result = if (plan.output.length > 1) {