
Commit e5e751b

[SPARK-48887][K8S] Enable spark.kubernetes.executor.checkAllContainers by default
### What changes were proposed in this pull request?

This PR aims to enable `spark.kubernetes.executor.checkAllContainers` by default from Apache Spark 4.0.0.

### Why are the changes needed?

Since Apache Spark 3.1.0, `spark.kubernetes.executor.checkAllContainers` has been supported. It is useful because the [sidecar pattern](https://kubernetes.io/docs/concepts/workloads/pods/sidecar-containers/) is used in many cases, and it prevents the user mistake of forgetting or ignoring sidecar failures by always reporting them via the executor status.

- #29924

### Does this PR introduce _any_ user-facing change?

- This configuration is a no-op when the executor pod has no other container.
- When users provide additional containers, their errors are now reported correctly in the executor status.

### How was this patch tested?

Both `true` and `false` have been covered by our CI test coverage since Apache Spark 3.1.0.

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes #47337 from dongjoon-hyun/SPARK-48887.

Authored-by: Dongjoon Hyun <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
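Operationally, the change is a one-line configuration flip for anyone who wants the pre-4.0 behavior back. A minimal sketch, assuming a standard `SparkSession` entry point (the application name is illustrative):

```scala
import org.apache.spark.sql.SparkSession

// Restore the pre-4.0 behavior: derive executor pod status from the executor
// container alone, ignoring the state of any sidecar containers.
val spark = SparkSession.builder()
  .appName("legacy-container-check") // illustrative name, not from the PR
  .config("spark.kubernetes.executor.checkAllContainers", "false")
  .getOrCreate()
```

The same key can equally be passed on the command line, e.g. `spark-submit --conf spark.kubernetes.executor.checkAllContainers=false`.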
1 parent 115c6e4 · commit e5e751b

File tree

3 files changed (+4, −2 lines)

docs/core-migration-guide.md

Lines changed: 2 additions & 0 deletions
```diff
@@ -38,6 +38,8 @@ license: |
 
 - Since Spark 4.0, Spark uses `ReadWriteOncePod` instead of `ReadWriteOnce` access mode in persistence volume claims. To restore the legacy behavior, you can set `spark.kubernetes.legacy.useReadWriteOnceAccessMode` to `true`.
 
+- Since Spark 4.0, Spark reports its executor pod status by checking all containers of that pod. To restore the legacy behavior, you can set `spark.kubernetes.executor.checkAllContainers` to `false`.
+
 - Since Spark 4.0, Spark uses `~/.ivy2.5.2` as Ivy user directory by default to isolate the existing systems from Apache Ivy's incompatibility. To restore the legacy behavior, you can set `spark.jars.ivy` to `~/.ivy2`.
 
 - Since Spark 4.0, Spark uses the external shuffle service for deleting shuffle blocks for deallocated executors when the shuffle is no longer needed. To restore the legacy behavior, you can set `spark.shuffle.service.removeShuffle` to `false`.
```

docs/running-on-kubernetes.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -1327,7 +1327,7 @@ See the [configuration page](configuration.html) for information on Spark config
 </tr>
 <tr>
   <td><code>spark.kubernetes.executor.checkAllContainers</code></td>
-  <td><code>false</code></td>
+  <td><code>true</code></td>
   <td>
     Specify whether executor pods should check all containers (including sidecars) or only the executor container when determining the pod status.
   </td>
```

resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala

Lines changed: 1 addition & 1 deletion
```diff
@@ -711,7 +711,7 @@ private[spark] object Config extends Logging {
         "executor status.")
       .version("3.1.0")
       .booleanConf
-      .createWithDefault(false)
+      .createWithDefault(true)
 
   val KUBERNETES_EXECUTOR_MISSING_POD_DETECT_DELTA =
     ConfigBuilder("spark.kubernetes.executor.missingPodDetectDelta")
```
