[SPARK-32083][SQL] AQE coalesce should at least return one partition #29307
Conversation
Test build #126806 has finished for PR 29307 at commit
    // If users specify the num partitions via APIs like `repartition`, we shouldn't change it.
    // For `SinglePartition`, it requires exactly one partition and we can't change it either.
    override def canChangeNumPartitions: Boolean =
      !isUserSpecifiedNumPartitions && outputPartitioning != SinglePartition
Do we have a test for the `SinglePartition` case?
This change is for future-proofing; it doesn't change any current behavior. When there is a global aggregate, there will always be data in the final shuffle partition, so we can't coalesce to 0 partitions anyway.
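To illustrate the point about global aggregates, here is a small standalone sketch (not part of this PR; the session setup is only for demonstration): even over an empty input, a global aggregate returns exactly one row, so the final stage must run at least one task.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{count, lit}

val spark = SparkSession.builder().master("local[*]").appName("empty-global-agg").getOrCreate()

// A global aggregate over an empty dataset still produces one result row (count = 0).
// If the shuffle before the final aggregate were coalesced to 0 partitions,
// no task would run and that row would never be produced.
val result = spark.range(0).agg(count(lit(1)).as("cnt"))
result.show()
// +---+
// |cnt|
// +---+
// |  0|
// +---+
```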
    def canUseLocalShuffleReader(plan: SparkPlan): Boolean = plan match {
      case s: ShuffleQueryStageExec =>
    -   s.shuffle.canChangeNumPartitions
    +   s.shuffle.canChangeNumPartitions && s.mapStats.isDefined
This skips the 0-partitions case, right?
Yea, otherwise we will hit

    val splitPoints = if (numMappers == 0) {
      Seq.empty
    } else ...

which creates a local reader with 0 partitions.
We should be able to turn the `if` into an `assert`, but I'd like to only do it in master to be safe.
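For reference, a hypothetical sketch of what that stricter version could look like (this is not the actual Spark source; `splitMappers` and its arguments are made-up names, and the split logic is simplified):

```scala
// Hypothetical sketch: once stages without map output are filtered out upstream by
// `s.mapStats.isDefined`, the 0-mapper branch becomes unreachable, so the `if` could
// be replaced with an assertion.
def splitMappers(numMappers: Int, expectedParallelism: Int): Seq[Int] = {
  assert(numMappers > 0, "local shuffle reader requires at least one mapper")
  // Evenly spaced split points over the mapper range (simplified placeholder logic).
  (0 until expectedParallelism).map(i => i * numMappers / expectedParallelism)
}

splitMappers(10, 4)  // Vector(0, 2, 5, 7)
```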
viirya left a comment:
This idea looks good. 0 partitions sounds a bit risky, and it's nice to avoid creating such an edge case.
Test build #126859 has finished for PR 29307 at commit
thanks for the review, merging to master!

@cloud-fan @viirya should this change go into 3.0.1 as well?

@abellina yes, I'm working on the backport PR (need to fix some conflicts)

Thanks @cloud-fan !!
Closes apache#29307 from cloud-fan/aqe.

Authored-by: Wenchen Fan <[email protected]>
Signed-off-by: Wenchen Fan <[email protected]>
…Partition

This is a partial backport of #29307. Most of the changes are not needed because #28226 is in master only; this PR only backports the safeguard in `ShuffleExchangeExec.canChangeNumPartitions`.

Closes #29321 from cloud-fan/aqe.

Authored-by: Wenchen Fan <[email protected]>
Signed-off-by: Wenchen Fan <[email protected]>
What changes were proposed in this pull request?
This PR updates the AQE framework to at least return one partition during coalescing.
This PR also updates `ShuffleExchangeExec.canChangeNumPartitions` to not coalesce for `SinglePartition`.

Why are the changes needed?
It's a bit risky to return 0 partitions, as sometimes it's different from empty data. For example, a global aggregate will return one result row even if the input table is empty. If there are 0 partitions, no task will run and no result will be returned. More specifically, the global aggregate requires `AllTuples`, so we can't coalesce to 0 partitions.

This is not a real bug for now. The global aggregate will be planned as partial and final physical agg nodes. The partial agg will return at least one row, so the shuffle still has data. But it's better to fix this issue to avoid potential bugs in the future.
According to #28916, this change also fixes some perf problems.
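To make the guarantee concrete, here is a small self-contained sketch of the idea (this is not the actual `ShufflePartitionsUtil` code; the grouping logic is a simplified assumption): empty reducer partitions are skipped while merging, but the result is clamped so the reader never ends up with zero partitions.

```scala
object CoalesceSketch {
  /** Merge adjacent non-empty reducer partitions up to `targetSize` bytes,
    * but never return zero partitions (the point of SPARK-32083). */
  def coalesce(bytesByReducer: Array[Long], targetSize: Long): Seq[Seq[Int]] = {
    val groups = scala.collection.mutable.ArrayBuffer.empty[Seq[Int]]
    var current = Vector.empty[Int]
    var acc = 0L
    bytesByReducer.indices.filter(bytesByReducer(_) > 0).foreach { i =>
      if (current.nonEmpty && acc + bytesByReducer(i) > targetSize) {
        groups += current
        current = Vector.empty
        acc = 0L
      }
      current = current :+ i
      acc += bytesByReducer(i)
    }
    if (current.nonEmpty) groups += current
    // If every partition was empty we would otherwise return 0 partitions; instead,
    // return one (empty) coalesced partition so downstream operators still run a task.
    if (groups.isEmpty) Seq(Seq.empty[Int]) else groups.toSeq
  }
}

CoalesceSketch.coalesce(Array(0L, 0L, 0L), targetSize = 64L * 1024 * 1024)
// => one empty partition rather than an empty result
```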
Does this PR introduce any user-facing change?
no
How was this patch tested?
updated test.