
Commit 577aad1

JoshRosen authored and zzcclp committed
[SPARK-18553][CORE][BRANCH-1.6] Fix leak of TaskSetManager following executor loss
## What changes were proposed in this pull request?

_This is the branch-1.6 version of apache#15986; the original description follows:_

This patch fixes a critical resource leak in the TaskScheduler which could cause RDDs and ShuffleDependencies to be kept alive indefinitely if an executor with running tasks is permanently lost and the associated stage fails.

This problem was originally identified by analyzing the heap dump of a driver belonging to a cluster that had run out of shuffle space. This dump contained several `ShuffleDependency` instances that were retained by `TaskSetManager`s inside the scheduler but were not otherwise referenced. Each of these `TaskSetManager`s was considered a "zombie" but had no running tasks and therefore should have been cleaned up. However, these zombie task sets were still referenced by the `TaskSchedulerImpl.taskIdToTaskSetManager` map.

Entries are added to the `taskIdToTaskSetManager` map when tasks are launched and are removed inside of `TaskScheduler.statusUpdate()`, which is invoked by the scheduler backend while processing `StatusUpdate` messages from executors. The problem with this design is that a completely dead executor will never send a `StatusUpdate`. There is [some code](https://github.com/apache/spark/blob/072f4c518cdc57d705beec6bcc3113d9a6740819/core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala#L338) in `statusUpdate` which handles tasks that exit with the `TaskState.LOST` state (which is supposed to correspond to a task failure triggered by total executor loss), but this state only seems to be used in Mesos fine-grained mode. There doesn't seem to be any code which performs per-task state cleanup for tasks that were running on an executor that completely disappears without sending any sort of final death message. The `executorLost` and [`removeExecutor`](https://github.com/apache/spark/blob/072f4c518cdc57d705beec6bcc3113d9a6740819/core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala#L527) methods don't appear to perform any cleanup of the `taskId -> *` mappings, causing the leaks observed here.

This patch's fix is to maintain an `executorId -> running task id` mapping so that these `taskId -> *` maps can be properly cleaned up following an executor loss. There are some potential corner-case interactions that I'm concerned about here, especially some details in [the comment](https://github.com/apache/spark/blob/072f4c518cdc57d705beec6bcc3113d9a6740819/core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala#L523) in `removeExecutor`, so I'd appreciate a very careful review of these changes.

## How was this patch tested?

I added a new unit test to `TaskSchedulerImplSuite`.

/cc kayousterhout and markhamstra, who reviewed apache#15986.

Author: Josh Rosen <joshrosen@databricks.com>

Closes apache#16070 from JoshRosen/fix-leak-following-total-executor-loss-1.6.
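To make the bookkeeping change easier to follow before reading the per-file diffs, here is a condensed, self-contained sketch of the idea described above. It is not the actual `TaskSchedulerImpl` code: the task-set manager is stubbed as a `String`, the method names `launchTask` and `removeExecutor` are simplified stand-ins, and only the maps touched by this patch are modeled.

```scala
import scala.collection.mutable.{HashMap, HashSet}

object ExecutorLossCleanupSketch {
  // Simplified stand-ins for the scheduler's per-task maps; the manager is stubbed as a String.
  val taskIdToTaskSetManager = new HashMap[Long, String]
  val taskIdToExecutorId = new HashMap[Long, String]
  // The new map introduced by the patch: concrete task IDs per executor, not just a count.
  val executorIdToRunningTaskIds = new HashMap[String, HashSet[Long]]

  def launchTask(tid: Long, execId: String, taskSet: String): Unit = {
    taskIdToTaskSetManager(tid) = taskSet
    taskIdToExecutorId(tid) = execId
    executorIdToRunningTaskIds.getOrElseUpdate(execId, HashSet[Long]()).add(tid)
  }

  // A silently lost executor will never send another StatusUpdate, so its taskId -> * entries
  // have to be scrubbed here rather than in statusUpdate().
  def removeExecutor(execId: String): Unit = {
    executorIdToRunningTaskIds.remove(execId).foreach { taskIds =>
      taskIds.foreach { tid =>
        taskIdToTaskSetManager.remove(tid)
        taskIdToExecutorId.remove(tid)
      }
    }
  }

  def main(args: Array[String]): Unit = {
    launchTask(0L, "executor0", "taskSet0")
    removeExecutor("executor0")
    // Nothing is left to pin the task set (and its RDDs/ShuffleDependencies) alive.
    assert(taskIdToTaskSetManager.isEmpty && taskIdToExecutorId.isEmpty)
  }
}
```

The essential point is that the old per-executor task count gave no way to find and delete the `taskId -> *` entries for a silently lost executor; tracking the concrete task IDs per executor makes that cleanup possible.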
1 parent 2abb6a4 commit 577aad1

3 files changed

Lines changed: 115 additions & 33 deletions


core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala

Lines changed: 45 additions & 30 deletions
@@ -87,8 +87,8 @@ private[spark] class TaskSchedulerImpl(
   // Incrementing task IDs
   val nextTaskId = new AtomicLong(0)
 
-  // Number of tasks running on each executor
-  private val executorIdToTaskCount = new HashMap[String, Int]
+  // IDs of the tasks running on each executor
+  private val executorIdToRunningTaskIds = new HashMap[String, HashSet[Long]]
 
   // The set of executors we have on each host; this is used to compute hostsAlive, which
   // in turn is used to decide when we can attain data locality on a given host
@@ -256,7 +256,7 @@ private[spark] class TaskSchedulerImpl(
             val tid = task.taskId
             taskIdToTaskSetManager(tid) = taskSet
             taskIdToExecutorId(tid) = execId
-            executorIdToTaskCount(execId) += 1
+            executorIdToRunningTaskIds(execId).add(tid)
             executorsByHost(host) += execId
             availableCpus(i) -= CPUS_PER_TASK
             assert(availableCpus(i) >= 0)
@@ -285,7 +285,7 @@ private[spark] class TaskSchedulerImpl(
     var newExecAvail = false
     for (o <- offers) {
      executorIdToHost(o.executorId) = o.host
-      executorIdToTaskCount.getOrElseUpdate(o.executorId, 0)
+      executorIdToRunningTaskIds.getOrElseUpdate(o.executorId, HashSet[Long]())
       if (!executorsByHost.contains(o.host)) {
         executorsByHost(o.host) = new HashSet[String]()
         executorAdded(o.executorId, o.host)
@@ -331,37 +331,34 @@
     var failedExecutor: Option[String] = None
     synchronized {
       try {
-        if (state == TaskState.LOST && taskIdToExecutorId.contains(tid)) {
-          // We lost this entire executor, so remember that it's gone
-          val execId = taskIdToExecutorId(tid)
-
-          if (executorIdToTaskCount.contains(execId)) {
-            removeExecutor(execId,
-              SlaveLost(s"Task $tid was lost, so marking the executor as lost as well."))
-            failedExecutor = Some(execId)
-          }
-        }
         taskIdToTaskSetManager.get(tid) match {
           case Some(taskSet) =>
-            if (TaskState.isFinished(state)) {
-              taskIdToTaskSetManager.remove(tid)
-              taskIdToExecutorId.remove(tid).foreach { execId =>
-                if (executorIdToTaskCount.contains(execId)) {
-                  executorIdToTaskCount(execId) -= 1
-                }
+            if (state == TaskState.LOST) {
+              // TaskState.LOST is only used by the Mesos fine-grained scheduling mode,
+              // where each executor corresponds to a single task, so mark the executor as failed.
+              val execId = taskIdToExecutorId.getOrElse(tid, throw new IllegalStateException(
+                "taskIdToTaskSetManager.contains(tid) <=> taskIdToExecutorId.contains(tid)"))
+              if (executorIdToRunningTaskIds.contains(execId)) {
+                val reason =
+                  SlaveLost(s"Task $tid was lost, so marking the executor as lost as well.")
+                removeExecutor(execId, reason)
+                failedExecutor = Some(execId)
              }
            }
-            if (state == TaskState.FINISHED) {
-              taskSet.removeRunningTask(tid)
-              taskResultGetter.enqueueSuccessfulTask(taskSet, tid, serializedData)
-            } else if (Set(TaskState.FAILED, TaskState.KILLED, TaskState.LOST).contains(state)) {
+            if (TaskState.isFinished(state)) {
+              cleanupTaskState(tid)
              taskSet.removeRunningTask(tid)
-              taskResultGetter.enqueueFailedTask(taskSet, tid, state, serializedData)
+              if (state == TaskState.FINISHED) {
+                taskResultGetter.enqueueSuccessfulTask(taskSet, tid, serializedData)
+              } else if (Set(TaskState.FAILED, TaskState.KILLED, TaskState.LOST).contains(state)) {
+                taskResultGetter.enqueueFailedTask(taskSet, tid, state, serializedData)
+              }
            }
          case None =>
            logError(
              ("Ignoring update with state %s for TID %s because its task set is gone (this is " +
-                "likely the result of receiving duplicate task finished status updates)")
+                "likely the result of receiving duplicate task finished status updates) or its " +
+                "executor has been marked as failed.")
                .format(state, tid))
        }
      } catch {
@@ -470,7 +467,7 @@
     var failedExecutor: Option[String] = None
 
     synchronized {
-      if (executorIdToTaskCount.contains(executorId)) {
+      if (executorIdToRunningTaskIds.contains(executorId)) {
         val hostPort = executorIdToHost(executorId)
         logExecutorLoss(executorId, hostPort, reason)
         removeExecutor(executorId, reason)
@@ -512,13 +509,31 @@
     logError(s"Lost executor $executorId on $hostPort: $reason")
   }
 
+  /**
+   * Cleans up the TaskScheduler's state for tracking the given task.
+   */
+  private def cleanupTaskState(tid: Long): Unit = {
+    taskIdToTaskSetManager.remove(tid)
+    taskIdToExecutorId.remove(tid).foreach { executorId =>
+      executorIdToRunningTaskIds.get(executorId).foreach { _.remove(tid) }
+    }
+  }
+
   /**
    * Remove an executor from all our data structures and mark it as lost. If the executor's loss
    * reason is not yet known, do not yet remove its association with its host nor update the status
    * of any running tasks, since the loss reason defines whether we'll fail those tasks.
    */
   private def removeExecutor(executorId: String, reason: ExecutorLossReason) {
-    executorIdToTaskCount -= executorId
+    // The tasks on the lost executor may not send any more status updates (because the executor
+    // has been lost), so they should be cleaned up here.
+    executorIdToRunningTaskIds.remove(executorId).foreach { taskIds =>
+      logDebug("Cleaning up TaskScheduler state for tasks " +
+        s"${taskIds.mkString("[", ",", "]")} on failed executor $executorId")
+      // We do not notify the TaskSetManager of the task failures because that will
+      // happen below in the rootPool.executorLost() call.
+      taskIds.foreach(cleanupTaskState)
+    }
 
     val host = executorIdToHost(executorId)
     val execs = executorsByHost.getOrElse(host, new HashSet)
@@ -556,11 +571,11 @@
   }
 
   def isExecutorAlive(execId: String): Boolean = synchronized {
-    executorIdToTaskCount.contains(execId)
+    executorIdToRunningTaskIds.contains(execId)
   }
 
   def isExecutorBusy(execId: String): Boolean = synchronized {
-    executorIdToTaskCount.getOrElse(execId, -1) > 0
+    executorIdToRunningTaskIds.get(execId).exists(_.nonEmpty)
   }
 
   // By default, rack is unknown
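A small behavioral note on the `isExecutorBusy` change above: the old `getOrElse(execId, -1) > 0` reported never-registered executors as not busy, and the `Option`-based replacement preserves that, since `exists` on `None` is `false`. Below is a standalone check of the three cases (hypothetical object name and executor IDs, not part of the patch):

```scala
import scala.collection.mutable

object IsExecutorBusyCheck {
  val executorIdToRunningTaskIds = new mutable.HashMap[String, mutable.HashSet[Long]]

  // Same expression as the patched isExecutorBusy, minus the synchronization.
  def isExecutorBusy(execId: String): Boolean =
    executorIdToRunningTaskIds.get(execId).exists(_.nonEmpty)

  def main(args: Array[String]): Unit = {
    executorIdToRunningTaskIds("executor0") = mutable.HashSet(1L)          // one running task
    executorIdToRunningTaskIds("executor1") = mutable.HashSet.empty[Long]  // registered but idle
    assert(isExecutorBusy("executor0"))
    assert(!isExecutorBusy("executor1"))
    assert(!isExecutorBusy("executor2"))  // never registered: not busy, matching the old semantics
  }
}
```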

core/src/test/scala/org/apache/spark/deploy/StandaloneDynamicAllocationSuite.scala

Lines changed: 4 additions & 3 deletions
@@ -425,10 +425,11 @@ class StandaloneDynamicAllocationSuite
     assert(executors.size === 2)
 
     // simulate running a task on the executor
-    val getMap = PrivateMethod[mutable.HashMap[String, Int]]('executorIdToTaskCount)
+    val getMap =
+      PrivateMethod[mutable.HashMap[String, mutable.HashSet[Long]]]('executorIdToRunningTaskIds)
     val taskScheduler = sc.taskScheduler.asInstanceOf[TaskSchedulerImpl]
-    val executorIdToTaskCount = taskScheduler invokePrivate getMap()
-    executorIdToTaskCount(executors.head) = 1
+    val executorIdToRunningTaskIds = taskScheduler invokePrivate getMap()
+    executorIdToRunningTaskIds(executors.head) = mutable.HashSet(1L)
     // kill the busy executor without force; this should fail
     assert(killExecutor(sc, executors.head, force = false))
     apps = getApplications()

core/src/test/scala/org/apache/spark/scheduler/TaskSchedulerImplSuite.scala

Lines changed: 66 additions & 0 deletions
@@ -17,6 +17,8 @@
 
 package org.apache.spark.scheduler
 
+import java.nio.ByteBuffer
+
 import org.apache.spark._
 
 class FakeSchedulerBackend extends SchedulerBackend {
@@ -273,4 +275,68 @@ class TaskSchedulerImplSuite extends SparkFunSuite with LocalSparkContext with L
     assert("executor1" === taskDescriptions3(0).executorId)
   }
 
+  test("if an executor is lost then the state for its running tasks is cleaned up (SPARK-18553)") {
+    sc = new SparkContext("local", "TaskSchedulerImplSuite")
+    val taskScheduler = new TaskSchedulerImpl(sc)
+    taskScheduler.initialize(new FakeSchedulerBackend)
+    // Need to initialize a DAGScheduler for the taskScheduler to use for callbacks.
+    new DAGScheduler(sc, taskScheduler) {
+      override def taskStarted(task: Task[_], taskInfo: TaskInfo) {}
+      override def executorAdded(execId: String, host: String) {}
+    }
+
+    val e0Offers = Seq(WorkerOffer("executor0", "host0", 1))
+    val attempt1 = FakeTask.createTaskSet(1)
+
+    // submit attempt 1, offer resources, task gets scheduled
+    taskScheduler.submitTasks(attempt1)
+    val taskDescriptions = taskScheduler.resourceOffers(e0Offers).flatten
+    assert(1 === taskDescriptions.length)
+
+    // mark executor0 as dead
+    taskScheduler.executorLost("executor0", SlaveLost())
+    assert(!taskScheduler.isExecutorAlive("executor0"))
+    assert(!taskScheduler.hasExecutorsAliveOnHost("host0"))
+    assert(taskScheduler.getExecutorsAliveOnHost("host0").isEmpty)
+
+
+    // Check that state associated with the lost task attempt is cleaned up:
+    assert(taskScheduler.taskIdToExecutorId.isEmpty)
+    assert(taskScheduler.taskIdToTaskSetManager.isEmpty)
+  }
+
+  test("if a task finishes with TaskState.LOST its executor is marked as dead") {
+    sc = new SparkContext("local", "TaskSchedulerImplSuite")
+    val taskScheduler = new TaskSchedulerImpl(sc)
+    taskScheduler.initialize(new FakeSchedulerBackend)
+    // Need to initialize a DAGScheduler for the taskScheduler to use for callbacks.
+    new DAGScheduler(sc, taskScheduler) {
+      override def taskStarted(task: Task[_], taskInfo: TaskInfo) {}
+      override def executorAdded(execId: String, host: String) {}
+    }
+
+    val e0Offers = Seq(WorkerOffer("executor0", "host0", 1))
+    val attempt1 = FakeTask.createTaskSet(1)
+
+    // submit attempt 1, offer resources, task gets scheduled
+    taskScheduler.submitTasks(attempt1)
+    val taskDescriptions = taskScheduler.resourceOffers(e0Offers).flatten
+    assert(1 === taskDescriptions.length)
+
+    // Report the task as failed with TaskState.LOST
+    taskScheduler.statusUpdate(
+      tid = taskDescriptions.head.taskId,
+      state = TaskState.LOST,
+      serializedData = ByteBuffer.allocate(0)
+    )
+
+    // Check that state associated with the lost task attempt is cleaned up:
+    assert(taskScheduler.taskIdToExecutorId.isEmpty)
+    assert(taskScheduler.taskIdToTaskSetManager.isEmpty)
+
+    // Check that the executor has been marked as dead
+    assert(!taskScheduler.isExecutorAlive("executor0"))
+    assert(!taskScheduler.hasExecutorsAliveOnHost("host0"))
+    assert(taskScheduler.getExecutorsAliveOnHost("host0").isEmpty)
+  }
 }
