
Commit ecfdffc

uncleGen authored and srowen committed
[SPARK-27503][DSTREAM] JobGenerator thread exit for some fatal errors but application keeps running
## What changes were proposed in this pull request?

In some corner cases, the `JobGenerator` thread (and other `EventLoop` threads) may exit on a fatal error, such as an OOM, while the Spark Streaming application keeps running with no batch jobs being generated. Currently, the event loop only handles non-fatal errors:

```scala
override def run(): Unit = {
  try {
    while (!stopped.get) {
      val event = eventQueue.take()
      try {
        onReceive(event)
      } catch {
        case NonFatal(e) =>
          try {
            onError(e)
          } catch {
            case NonFatal(e) => logError("Unexpected error in " + name, e)
          }
      }
    }
  } catch {
    case ie: InterruptedException => // exit even if eventQueue is not empty
    case NonFatal(e) => logError("Unexpected error in " + name, e)
  }
}
```

In this PR, we double-check that the event thread is still alive when posting an event.

## How was this patch tested?

Existing unit tests.

Closes #24400 from uncleGen/SPARK-27503.

Authored-by: uncleGen <[email protected]>
Signed-off-by: Sean Owen <[email protected]>
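The failure mode and the fix can be illustrated with a minimal, self-contained sketch. This is not Spark's actual `EventLoop`: the class name `TinyEventLoop` is invented for illustration, and it throws from `post` where Spark's version instead routes the error through an `onError` callback. The point it demonstrates is the liveness check: if the event thread has died from a fatal error, posting reports the problem instead of silently enqueueing work that will never run.

```scala
import java.util.concurrent.LinkedBlockingDeque
import java.util.concurrent.atomic.AtomicBoolean

// Minimal sketch (NOT Spark's EventLoop) of a single-threaded event loop
// whose post() checks that the event thread is still alive, as this patch does.
class TinyEventLoop[E](name: String)(onReceive: E => Unit) {
  private val eventQueue = new LinkedBlockingDeque[E]()
  private val stopped = new AtomicBoolean(false)

  private val eventThread = new Thread(name) {
    override def run(): Unit = {
      try {
        while (!stopped.get) {
          // A fatal error thrown by onReceive propagates and kills this thread.
          onReceive(eventQueue.take())
        }
      } catch {
        case _: InterruptedException => // normal shutdown path
      }
    }
  }
  eventThread.setDaemon(true)

  def start(): Unit = eventThread.start()

  def stop(): Unit = {
    stopped.set(true)
    eventThread.interrupt()
  }

  def post(event: E): Unit = {
    if (!stopped.get) {
      if (eventThread.isAlive) {
        eventQueue.put(event)
      } else {
        // The thread died (e.g. OOM) without stop() being called.
        // Spark's EventLoop passes this to onError instead of throwing.
        throw new IllegalStateException(s"$name has already been stopped accidentally.")
      }
    }
  }
}
```

Before the patch, the equivalent of `post` after an accidental thread death would quietly grow the queue forever; with the check, the caller finds out immediately that the loop is dead.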
1 parent 7cc15af commit ecfdffc

1 file changed

Lines changed: 7 additions & 1 deletion

core/src/main/scala/org/apache/spark/util/EventLoop.scala
```diff
@@ -100,7 +100,13 @@ private[spark] abstract class EventLoop[E](name: String) extends Logging {
    * Put the event into the event queue. The event thread will process it later.
    */
   def post(event: E): Unit = {
-    eventQueue.put(event)
+    if (!stopped.get) {
+      if (eventThread.isAlive) {
+        eventQueue.put(event)
+      } else {
+        onError(new IllegalStateException(s"$name has already been stopped accidentally."))
+      }
+    }
   }

   /**
```
