[SPARK-32920][SHUFFLE] Finalization of Shuffle push/merge with Push based shuffle and preparation step for the reduce stage #30691
Status: Closed
Changes shown below are from 2 of the PR's 20 commits. Commit history:
1. bd8edc5 (Victsm): LIHADOOP-48527 Driver side changes supporting Pushbased shuffle:
2. d8fe417 (venkata91): Address ngone51 review comments
3. 755148c (venkata91): Address otterc and mridulm review comments and additional changes for
4. b2f668d (venkata91): Address review comments from mridulm
5. e013725 (venkata91): Merge branch 'upstream-master' into SPARK-32920
6. d4a6b89 (venkata91): Address review comments from mridulm
7. 14c79f7 (venkata91): Additional test cases handling corner cases like stage cancellation etc
8. 77ea695 (otterc): Empty commit to add [email protected] as secondary author
9. 5ce2423 (Victsm): Empty commit to add Min Shen as author
10. a7b05ca (venkata91): Merge branch 'upstream-master' into SPARK-32920
11. e972a54 (venkata91): Address mridulm review comments
12. a311cd2 (venkata91): Fix TODO comment
13. b00ec40 (venkata91): Add some comments
14. 19b0b64 (venkata91): Merge branch 'upstream-master' into SPARK-32920
15. e14544e (venkata91): Address style check issues
16. 4de6809 (venkata91): Additional changes
17. c8df6b2 (venkata91): few more changes
18. 3939dc4 (venkata91): Address Min's review comments
19. e570818 (venkata91): Address review comments
20. 35d0615 (venkata91): Merge branch 'upstream-master' into SPARK-32920
core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
@@ -19,8 +19,8 @@ package org.apache.spark.scheduler

 import java.io.NotSerializableException
 import java.util.Properties
-import java.util.concurrent.{ConcurrentHashMap, TimeUnit}
-import java.util.concurrent.atomic.AtomicInteger
+import java.util.concurrent.{ConcurrentHashMap, TimeoutException, TimeUnit}
+import java.util.concurrent.atomic.{AtomicBoolean, AtomicInteger}

 import scala.annotation.tailrec
 import scala.collection.Map

@@ -29,12 +29,17 @@ import scala.collection.mutable.{HashMap, HashSet, ListBuffer}
 import scala.concurrent.duration._
 import scala.util.control.NonFatal

+import com.google.common.util.concurrent.{Futures, SettableFuture}
+
 import org.apache.spark._
 import org.apache.spark.broadcast.Broadcast
 import org.apache.spark.executor.{ExecutorMetrics, TaskMetrics}
 import org.apache.spark.internal.Logging
 import org.apache.spark.internal.config
 import org.apache.spark.internal.config.Tests.TEST_NO_STAGE_RETRY
+import org.apache.spark.network.netty.SparkTransportConf
+import org.apache.spark.network.shuffle.{ExternalBlockStoreClient, MergeFinalizerListener}
+import org.apache.spark.network.shuffle.protocol.MergeStatuses
+import org.apache.spark.network.util.JavaUtils
 import org.apache.spark.partial.{ApproximateActionListener, ApproximateEvaluator, PartialResult}
 import org.apache.spark.rdd.{RDD, RDDCheckpointData}
@@ -254,6 +259,28 @@ private[spark] class DAGScheduler(
   private val blockManagerMasterDriverHeartbeatTimeout =
     sc.getConf.get(config.STORAGE_BLOCKMANAGER_MASTER_DRIVER_HEARTBEAT_TIMEOUT).millis

+  private val shuffleMergeResultsTimeoutSec =
+    JavaUtils.timeStringAsSec(sc.getConf.get(config.PUSH_BASED_SHUFFLE_MERGE_RESULTS_TIMEOUT))
+
+  private val shuffleMergeFinalizeWaitSec =
+    JavaUtils.timeStringAsSec(sc.getConf.get(config.PUSH_BASED_SHUFFLE_MERGE_FINALIZE_TIMEOUT))
+
+  // Lazily initialized so that the shuffle client can be properly initialized.
+  private lazy val externalShuffleClient: Option[ExternalBlockStoreClient] =
+    if (pushBasedShuffleEnabled) {
+      val transConf = SparkTransportConf.fromSparkConf(sc.conf, "shuffle", 1)
+      val shuffleClient = new ExternalBlockStoreClient(transConf, env.securityManager,
+        env.securityManager.isAuthenticationEnabled(),
+        sc.conf.get(config.SHUFFLE_REGISTRATION_TIMEOUT))
+      shuffleClient.init(sc.conf.getAppId)
+      Some(shuffleClient)
+    } else {
+      None
+    }
+
+  private val shuffleMergeFinalizeScheduler =
+    ThreadUtils.newDaemonSingleThreadScheduledExecutor("shuffle-merge-finalizer")
+
   /**
    * Called by the TaskSetManager to report task's starting.
    */
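Two pieces of plumbing above are worth unpacking: the two timeouts are read once from the conf and reused by the finalization path added later in this diff, and the finalization work runs on a dedicated daemon scheduled executor. A minimal sketch of what a helper like ThreadUtils.newDaemonSingleThreadScheduledExecutor provides (an illustration of the pattern, not Spark's actual implementation):

import java.util.concurrent.{Executors, ScheduledExecutorService, ThreadFactory, TimeUnit}

// Roughly what the helper provides: a single daemon thread that runs delayed
// tasks, so a pending finalization cannot keep the driver JVM alive on exit.
def newDaemonSingleThreadScheduledExecutor(threadName: String): ScheduledExecutorService = {
  val factory = new ThreadFactory {
    override def newThread(r: Runnable): Thread = {
      val t = new Thread(r, threadName)
      t.setDaemon(true)
      t
    }
  }
  Executors.newSingleThreadScheduledExecutor(factory)
}

// Usage mirroring the diff: schedule finalization after a wait period.
val scheduler = newDaemonSingleThreadScheduledExecutor("shuffle-merge-finalizer")
scheduler.schedule(new Runnable {
  override def run(): Unit = println("finalize shuffle merge here")
}, 10L, TimeUnit.SECONDS)

The daemon flag is deliberate: a scheduled-but-not-yet-run finalization should never block driver shutdown.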
@@ -689,7 +716,7 @@ private[spark] class DAGScheduler(
       dep match {
         case shufDep: ShuffleDependency[_, _, _] =>
           val mapStage = getOrCreateShuffleMapStage(shufDep, stage.firstJobId)
-          if (!mapStage.isAvailable) {
+          if (!mapStage.isAvailable || !mapStage.isMergeFinalized) {
             missing += mapStage
           }
         case narrowDep: NarrowDependency[_] =>
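isMergeFinalized is introduced on ShuffleMapStage elsewhere in this PR; the hunk above only consumes it. A plausible minimal definition, assuming the stage simply delegates to dependency-level merge state (the names shuffleMergeEnabled and shuffleMergeFinalized are taken from the surrounding diff and from context; treat the wiring as illustrative):

// Hypothetical sketch of the dependency-level state this PR adds.
class ShuffleDependencyState(
    var shuffleMergeEnabled: Boolean,
    var shuffleMergeFinalized: Boolean)

class StageSketch(val shuffleDep: ShuffleDependencyState) {
  // A stage counts as merge-finalized when push-based shuffle is not in play
  // for it, or when its dependency has been marked finalized.
  def isMergeFinalized: Boolean =
    !shuffleDep.shuffleMergeEnabled || shuffleDep.shuffleMergeFinalized
}

Under this definition a reduce stage is not schedulable until its parent's shuffle merge has been finalized, which is exactly what the getMissingParentStages change above enforces.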
@@ -1271,21 +1298,19 @@ private[spark] class DAGScheduler(
    * locations for block push/merge by getting the historical locations of past executors.
    */
   private def prepareShuffleServicesForShuffleMapStage(stage: ShuffleMapStage): Unit = {
-    // TODO(SPARK-32920) Handle stage reuse/retry cases separately as without finalize
-    // TODO changes we cannot disable shuffle merge for the retry/reuse cases
-    val mergerLocs = sc.schedulerBackend.getShufflePushMergerLocations(
-      stage.shuffleDep.partitioner.numPartitions, stage.resourceProfileId)
-
-    if (mergerLocs.nonEmpty) {
-      stage.shuffleDep.setMergerLocs(mergerLocs)
-      logInfo(s"Push-based shuffle enabled for $stage (${stage.name}) with" +
-        s" ${stage.shuffleDep.getMergerLocs.size} merger locations")
-
-      logDebug("List of shuffle push merger locations " +
-        s"${stage.shuffleDep.getMergerLocs.map(_.host).mkString(", ")}")
-    } else {
-      logInfo("No available merger locations." +
-        s" Push-based shuffle disabled for $stage (${stage.name})")
+    if (!stage.shuffleDep.shuffleMergeFinalized) {
+      val mergerLocs = sc.schedulerBackend.getShufflePushMergerLocations(
+        stage.shuffleDep.partitioner.numPartitions, stage.resourceProfileId)
+      if (mergerLocs.nonEmpty) {
+        stage.shuffleDep.setMergerLocs(mergerLocs)
+        logInfo(s"Push-based shuffle enabled for $stage (${stage.name}) with" +
+          s" ${stage.shuffleDep.getMergerLocs.size} merger locations")
+
+        logDebug("List of shuffle push merger locations " +
+          s"${stage.shuffleDep.getMergerLocs.map(_.host).mkString(", ")}")
+      } else {
+        logInfo(s"Push-based shuffle disabled for $stage (${stage.name})")
+      }
     }
   }
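The doc comment above says merger locations come from the historical locations of past executors, supplied by the scheduler backend's getShufflePushMergerLocations. Purely to illustrate that idea, a toy tracker (every name here is invented; this is not how the backend is actually implemented):

import scala.collection.mutable

// Toy illustration: remember each host that has ever run an executor, and hand
// back up to N distinct hosts as candidate shuffle push mergers, on the theory
// that those hosts run an external shuffle service able to merge blocks.
class MergerLocationTrackerSketch {
  private val hostsSeen = mutable.LinkedHashSet.empty[String]

  def noteExecutorHost(host: String): Unit = synchronized { hostsSeen += host }

  def getMergerLocations(numMergersNeeded: Int): Seq[String] =
    synchronized { hostsSeen.take(numMergersNeeded).toSeq }
}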
@@ -1678,33 +1703,10 @@ private[spark] class DAGScheduler(
           }

           if (runningStages.contains(shuffleStage) && shuffleStage.pendingPartitions.isEmpty) {
-            markStageAsFinished(shuffleStage)
-            logInfo("looking for newly runnable stages")
-            logInfo("running: " + runningStages)
-            logInfo("waiting: " + waitingStages)
-            logInfo("failed: " + failedStages)
-
-            // This call to increment the epoch may not be strictly necessary, but it is retained
-            // for now in order to minimize the changes in behavior from an earlier version of the
-            // code. This existing behavior of always incrementing the epoch following any
-            // successful shuffle map stage completion may have benefits by causing unneeded
-            // cached map outputs to be cleaned up earlier on executors. In the future we can
-            // consider removing this call, but this will require some extra investigation.
-            // See https://github.com/apache/spark/pull/17955/files#r117385673 for more details.
-            mapOutputTracker.incrementEpoch()
-
-            clearCacheLocs()
-
-            if (!shuffleStage.isAvailable) {
-              // Some tasks had failed; let's resubmit this shuffleStage.
-              // TODO: Lower-level scheduler should also deal with this
-              logInfo("Resubmitting " + shuffleStage + " (" + shuffleStage.name +
-                ") because some of its tasks had failed: " +
-                shuffleStage.findMissingPartitions().mkString(", "))
-              submitStage(shuffleStage)
+            if (pushBasedShuffleEnabled) {
+              scheduleShuffleMergeFinalize(shuffleStage)
             } else {
-              markMapStageJobsAsFinished(shuffleStage)
-              submitWaitingChildStages(shuffleStage)
+              processShuffleMapStageCompletion(shuffleStage)
             }
           }
@@ -2004,6 +2006,142 @@ private[spark] class DAGScheduler(
     }
   }

+  /**
+   * Schedules shuffle merge finalize.
+   */
+  private[scheduler] def scheduleShuffleMergeFinalize(stage: ShuffleMapStage): Unit = {
+    // TODO Use the default single threaded scheduler or extend ThreadUtils to
+    // TODO support the multi-threaded scheduler?
+    logInfo(("%s (%s) scheduled for finalizing" +
+      " shuffle merge in %s s").format(stage, stage.name, shuffleMergeFinalizeWaitSec))
+    shuffleMergeFinalizeScheduler.schedule(
+      new Runnable {
+        override def run(): Unit = finalizeShuffleMerge(stage)
+      },
+      shuffleMergeFinalizeWaitSec,
+      TimeUnit.SECONDS
+    )
+  }
+
+  /**
+   * DAGScheduler notifies all the remote shuffle services chosen to serve shuffle merge
+   * requests for the given shuffle map stage to finalize the shuffle merge process for this
+   * shuffle. This is invoked in a separate thread to reduce the impact on the DAGScheduler
+   * main thread, as the scheduler might need to talk to 1000s of shuffle services to finalize
+   * shuffle merge.
+   */
+  private[scheduler] def finalizeShuffleMerge(stage: ShuffleMapStage): Unit = {
+    logInfo("%s (%s) finalizing the shuffle merge".format(stage, stage.name))
+    externalShuffleClient.foreach { shuffleClient =>
+      val shuffleId = stage.shuffleDep.shuffleId
+      val numMergers = stage.shuffleDep.getMergerLocs.length
+      val numResponses = new AtomicInteger()
+      val results = (0 until numMergers).map(_ => SettableFuture.create[Boolean]())
+      val timedOut = new AtomicBoolean()
+
+      // NOTE: This is a defensive check to post the finalize event if numMergers is 0
+      // (i.e. no shuffle service available).
+      if (numMergers == 0) {
+        eventProcessLoop.post(ShuffleMergeFinalized(stage))
+        return
+      }
+
+      def increaseAndCheckResponseCount: Unit = {
+        if (numResponses.incrementAndGet() == numMergers) {
+          // Since this runs in the netty client thread and is outside of the DAGScheduler
+          // event loop, we only post a ShuffleMergeFinalized event into the event queue.
+          // The processing of this event should be done inside the event loop, so it
+          // can safely modify the scheduler's internal state.
+          logInfo("%s (%s) shuffle merge finalized".format(stage, stage.name))
+          eventProcessLoop.post(ShuffleMergeFinalized(stage))
+        }
+      }
+
+      stage.shuffleDep.getMergerLocs.zipWithIndex.foreach {
+        case (shuffleServiceLoc, index) =>
+          // Sends an async request to the shuffle service to finalize shuffle merge
+          // on that host.
+          shuffleClient.finalizeShuffleMerge(shuffleServiceLoc.host,
+            shuffleServiceLoc.port, shuffleId,
+            new MergeFinalizerListener {
+              override def onShuffleMergeSuccess(statuses: MergeStatuses): Unit = {
+                assert(shuffleId == statuses.shuffleId)
+                // Register the merge results even if already timed out, in case the reducer
+                // needing this merged block starts after the dag scheduler receives this
+                // response.
+                mapOutputTracker.registerMergeResults(statuses.shuffleId,
+                  MergeStatus.convertMergeStatusesToMergeStatusArr(statuses, shuffleServiceLoc))
+                if (!timedOut.get()) {
+                  increaseAndCheckResponseCount
+                  results(index).set(true)
+                }
+              }

+              override def onShuffleMergeFailure(e: Throwable): Unit = {
+                if (!timedOut.get()) {
+                  logWarning(s"Exception encountered when trying to finalize shuffle " +
+                    s"merge on ${shuffleServiceLoc.host} for shuffle $shuffleId", e)
+                  increaseAndCheckResponseCount
+                  // Do not fail the future, as this would cause the dag scheduler to
+                  // prematurely give up on waiting for merge results from the remaining
+                  // shuffle services if one fails.
+                  results(index).set(false)
+                }
+              }
+            })
+      }
+      // DAGScheduler only waits for a limited amount of time for the merge results.
+      // It will attempt to submit the next stage(s) irrespective of whether merge results
+      // from all shuffle services are received or not.
+      // TODO What are the reasonable configurations for the 2 timeouts? When # mappers
+      // TODO and # reducers for a shuffle is really large, and if the merge ratio is not
+      // TODO high enough, the MergeStatuses to be retrieved from 1 shuffle service could
+      // TODO be pretty large (10s MB to 100s MB). How to properly handle this scenario?
+      try {
+        Futures.allAsList(results: _*).get(shuffleMergeResultsTimeoutSec, TimeUnit.SECONDS)
+      } catch {
+        case _: TimeoutException =>
+          logInfo(s"Timed out on waiting for merge results from all " +
+            s"$numMergers mergers for shuffle $shuffleId")
+          timedOut.set(true)
+          eventProcessLoop.post(ShuffleMergeFinalized(stage))
+      }
+    }
+  }
+  private def processShuffleMapStageCompletion(shuffleStage: ShuffleMapStage): Unit = {

[Contributor review note on this method: no changes here; the body is extracted from the shuffle map stage completion code removed above.]
+    markStageAsFinished(shuffleStage)
+    logInfo("looking for newly runnable stages")
+    logInfo("running: " + runningStages)
+    logInfo("waiting: " + waitingStages)
+    logInfo("failed: " + failedStages)
+
+    // This call to increment the epoch may not be strictly necessary, but it is retained
+    // for now in order to minimize the changes in behavior from an earlier version of the
+    // code. This existing behavior of always incrementing the epoch following any
+    // successful shuffle map stage completion may have benefits by causing unneeded
+    // cached map outputs to be cleaned up earlier on executors. In the future we can
+    // consider removing this call, but this will require some extra investigation.
+    // See https://github.com/apache/spark/pull/17955/files#r117385673 for more details.
+    mapOutputTracker.incrementEpoch()
+
+    clearCacheLocs()
+
+    if (!shuffleStage.isAvailable) {
+      // Some tasks had failed; let's resubmit this shuffleStage.
+      // TODO: Lower-level scheduler should also deal with this
+      logInfo("Resubmitting " + shuffleStage + " (" + shuffleStage.name +
+        ") because some of its tasks had failed: " +
+        shuffleStage.findMissingPartitions().mkString(", "))
+      submitStage(shuffleStage)
+    } else {
+      markMapStageJobsAsFinished(shuffleStage)
+      submitWaitingChildStages(shuffleStage)
+    }
+  }
+
+  private[scheduler] def handleShuffleMergeFinalized(stage: ShuffleMapStage): Unit = {
+    stage.shuffleDep.markShuffleMergeFinalized
+    processShuffleMapStageCompletion(stage)
+  }
+
   private def handleResubmittedFailure(task: Task[_], stage: Stage): Unit = {
     logInfo(s"Resubmitted $task, so marking it as still running.")
     stage match {
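To see the waiting pattern of finalizeShuffleMerge in isolation: each merger gets a Guava SettableFuture that its RPC callback completes, and the caller blocks on Futures.allAsList with a timeout. A self-contained sketch of the same pattern (plain Guava, nothing Spark-specific; the simulated responses are made up):

import java.util.concurrent.{TimeoutException, TimeUnit}

import com.google.common.util.concurrent.{Futures, SettableFuture}

object MergeWaitSketch {
  def main(args: Array[String]): Unit = {
    // One settable future per remote merger; callbacks complete them.
    val results = (0 until 3).map(_ => SettableFuture.create[Boolean]())

    // Simulate async responses from two of the three mergers.
    results(0).set(true)
    results(1).set(false)

    try {
      // Blocks until every future completes, or until the timeout fires.
      Futures.allAsList(results: _*).get(1L, TimeUnit.SECONDS)
      println("all mergers responded")
    } catch {
      case _: TimeoutException =>
        // Proceed with whatever merge results arrived; latecomers can still
        // be registered by the callbacks, mirroring the diff's behavior.
        println("timed out waiting for merge results")
    }
  }
}

Note the futures are completed with true/false rather than failed: failing one would make allAsList fail immediately and abandon the remaining mergers, which is exactly what the comment in onShuffleMergeFailure guards against.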
@@ -2451,6 +2589,9 @@ private[scheduler] class DAGSchedulerEventProcessLoop(dagScheduler: DAGScheduler)

     case ResubmitFailedStages =>
       dagScheduler.resubmitFailedStages()

+    case ShuffleMergeFinalized(stage) =>
+      dagScheduler.handleShuffleMergeFinalized(stage)
   }

   override def onError(e: Throwable): Unit = {
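ShuffleMergeFinalized, matched above, is a new DAGScheduler event introduced by this PR and declared alongside the existing events (in DAGSchedulerEvent.scala, not shown in this diff). A sketch of what such a declaration looks like, following the convention of the other events:

// Sketch following the existing DAGSchedulerEvent convention: a case class
// carrying the shuffle map stage whose merge has been finalized, posted from
// finalizeShuffleMerge and handled on the event loop thread.
private[scheduler] case class ShuffleMergeFinalized(stage: ShuffleMapStage)
  extends DAGSchedulerEvent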
core/src/test/resources/META-INF/services/org.apache.spark.scheduler.ExternalClusterManager (1 change: 1 addition, 0 deletions)
@@ -1,3 +1,4 @@
 org.apache.spark.scheduler.DummyExternalClusterManager
 org.apache.spark.scheduler.MockExternalClusterManager
 org.apache.spark.scheduler.CSMockExternalClusterManager
+org.apache.spark.scheduler.PushBasedClusterManager
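For context, this file is a java.util.ServiceLoader registration list: Spark discovers ExternalClusterManager implementations by reading the fully qualified class names listed here. A minimal sketch of the kind of test stub the new PushBasedClusterManager entry points at (the class body below is an illustrative assumption, not the PR's actual test code):

package org.apache.spark.scheduler

import org.apache.spark.SparkContext
import org.apache.spark.scheduler.local.LocalSchedulerBackend

// Illustrative stub. Listing its class name in
// META-INF/services/org.apache.spark.scheduler.ExternalClusterManager lets
// SparkContext pick it up for a matching master URL.
private class PushBasedClusterManagerStub extends ExternalClusterManager {
  // Claim a made-up master URL so tests opt in explicitly.
  override def canCreate(masterURL: String): Boolean =
    masterURL == "pushbasedshuffleclustermanager"

  override def createTaskScheduler(sc: SparkContext, masterURL: String): TaskScheduler =
    new TaskSchedulerImpl(sc, maxTaskFailures = 1, isLocal = true)

  override def createSchedulerBackend(
      sc: SparkContext,
      masterURL: String,
      scheduler: TaskScheduler): SchedulerBackend = {
    // A real test backend would also override getShufflePushMergerLocations
    // to hand back fake merger hosts; elided here.
    new LocalSchedulerBackend(sc.getConf, scheduler.asInstanceOf[TaskSchedulerImpl], 1)
  }

  override def initialize(scheduler: TaskScheduler, backend: SchedulerBackend): Unit =
    scheduler.asInstanceOf[TaskSchedulerImpl].initialize(backend)
}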