Commit 1024875

Browse files
squito authored and srowen committed
[SPARK-25088][CORE][MESOS][DOCS] Update Rest Server docs & defaults.
## What changes were proposed in this pull request?

(a) Disable the REST submission server by default in standalone mode.
(b) Fail the standalone master if the REST server is enabled and an authentication secret is set.
(c) Fail the Mesos cluster dispatcher if an authentication secret is set.
(d) Doc updates.
(e) When submitting a standalone app, only try REST submission first if `spark.master.rest.enabled=true`; otherwise you'd see a 10-second pause like:

    18/08/09 08:13:22 INFO RestSubmissionClient: Submitting a request to launch an application in spark://...
    18/08/09 08:13:33 WARN RestSubmissionClient: Unable to connect to server spark://...

I also made sure the Mesos cluster dispatcher failed with the secret enabled, though I had to do that on slightly different code as I don't have the Mesos native libs around.

## How was this patch tested?

I ran the tests in the mesos module and in core for `org.apache.spark.deploy.*`. I also ran a test on a cluster with a standalone master to make sure I could still start with the right configs, and that it would fail the right way too.

Closes #22071 from squito/rest_doc_updates.

Authored-by: Imran Rashid <irashid@cloudera.com>
Signed-off-by: Sean Owen <sean.owen@databricks.com>
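
To make change (b) concrete, here is a minimal sketch, assuming only `spark-core` on the classpath. The two conf keys are the real ones touched by this patch; the standalone `require` simply mirrors the check added to `Master` in the diff below:

```scala
import org.apache.spark.SparkConf

// Both settings together are now rejected: the REST submission server cannot
// authenticate clients, so an auth secret would give a false sense of security.
val conf = new SparkConf()
  .set("spark.master.rest.enabled", "true")      // note: now defaults to false
  .set("spark.authenticate.secret", "my-secret") // unsupported by the REST server

val restEnabled = conf.getBoolean("spark.master.rest.enabled", defaultValue = false)
require(conf.getOption("spark.authenticate.secret").isEmpty || !restEnabled,
  "The RestSubmissionServer does not support authentication.")
// => throws IllegalArgumentException, which is how the Master now fails fast
```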
1 parent 80784a1 commit 1024875

6 files changed: 29 additions & 3 deletions


core/src/main/scala/org/apache/spark/deploy/SparkSubmitArguments.scala

Lines changed: 3 additions & 1 deletion
@@ -82,7 +82,7 @@ private[deploy] class SparkSubmitArguments(args: Seq[String], env: Map[String, S
   var driverCores: String = null
   var submissionToKill: String = null
   var submissionToRequestStatusFor: String = null
-  var useRest: Boolean = true // used internally
+  var useRest: Boolean = false // used internally

   /** Default properties present in the currently defined defaults file. */
   lazy val defaultSparkProperties: HashMap[String, String] = {
@@ -115,6 +115,8 @@ private[deploy] class SparkSubmitArguments(args: Seq[String], env: Map[String, S
   // Use `sparkProperties` map along with env vars to fill in any missing parameters
   loadEnvironmentArguments()

+  useRest = sparkProperties.getOrElse("spark.master.rest.enabled", "false").toBoolean
+
   validateArguments()

   /**
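
As a self-contained illustration of the new default (the property map here is hypothetical): `useRest` is no longer hard-coded to `true` but read from the user's properties, so `spark-submit` only attempts the REST gateway when it is explicitly enabled.

```scala
// Hypothetical properties for a standalone app; spark.master.rest.enabled unset.
val sparkProperties = Map("spark.master" -> "spark://host:7077")

// Mirrors the added line: an absent key means false, i.e. no REST attempt.
val useRest = sparkProperties.getOrElse("spark.master.rest.enabled", "false").toBoolean

assert(!useRest) // avoids the ~10-second connect-then-fallback pause shown above
```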

core/src/main/scala/org/apache/spark/deploy/master/Master.scala

Lines changed: 9 additions & 1 deletion
@@ -121,10 +121,18 @@ private[deploy] class Master(
   }

   // Alternative application submission gateway that is stable across Spark versions
-  private val restServerEnabled = conf.getBoolean("spark.master.rest.enabled", true)
+  private val restServerEnabled = conf.getBoolean("spark.master.rest.enabled", false)
   private var restServer: Option[StandaloneRestServer] = None
   private var restServerBoundPort: Option[Int] = None

+  {
+    val authKey = SecurityManager.SPARK_AUTH_SECRET_CONF
+    require(conf.getOption(authKey).isEmpty || !restServerEnabled,
+      s"The RestSubmissionServer does not support authentication via ${authKey}. Either turn " +
+      "off the RestSubmissionServer with spark.master.rest.enabled=false, or do not use " +
+      "authentication.")
+  }
+
   override def onStart(): Unit = {
     logInfo("Starting Spark master at " + masterUrl)
     logInfo(s"Running Spark version ${org.apache.spark.SPARK_VERSION}")

core/src/main/scala/org/apache/spark/deploy/rest/RestSubmissionServer.scala

Lines changed: 1 addition & 0 deletions
@@ -51,6 +51,7 @@ private[spark] abstract class RestSubmissionServer(
     val host: String,
     val requestedPort: Int,
     val masterConf: SparkConf) extends Logging {
+
   protected val submitRequestServlet: SubmitRequestServlet
   protected val killRequestServlet: KillRequestServlet
   protected val statusRequestServlet: StatusRequestServlet

docs/running-on-mesos.md

Lines changed: 2 additions & 0 deletions
@@ -174,6 +174,8 @@ can find the results of the driver from the Mesos Web UI.

 To use cluster mode, you must start the `MesosClusterDispatcher` in your cluster via the `sbin/start-mesos-dispatcher.sh` script,
 passing in the Mesos master URL (e.g: mesos://host:5050). This starts the `MesosClusterDispatcher` as a daemon running on the host.
+Note that the `MesosClusterDispatcher` does not support authentication. You should ensure that all network access to it is
+protected (port 7077 by default).

 By setting the Mesos proxy config property (requires mesos version >= 1.4), `--conf spark.mesos.proxy.baseURL=http://localhost:5050` when launching the dispatcher, the mesos sandbox URI for each driver is added to the mesos dispatcher UI.

docs/security.md

Lines changed: 6 additions & 1 deletion
@@ -22,7 +22,12 @@ secrets to be secure.

 For other resource managers, `spark.authenticate.secret` must be configured on each of the nodes.
 This secret will be shared by all the daemons and applications, so this deployment configuration is
-not as secure as the above, especially when considering multi-tenant clusters.
+not as secure as the above, especially when considering multi-tenant clusters. In this
+configuration, a user with the secret can effectively impersonate any other user.
+
+The Rest Submission Server and the MesosClusterDispatcher do not support authentication. You should
+ensure that all network access to the REST API & MesosClusterDispatcher (port 6066 and 7077
+respectively by default) are restricted to hosts that are trusted to submit jobs.

 <table class="table">
 <tr><th>Property Name</th><th>Default</th><th>Meaning</th></tr>

resource-managers/mesos/src/main/scala/org/apache/spark/deploy/mesos/MesosClusterDispatcher.scala

Lines changed: 8 additions & 0 deletions
@@ -51,6 +51,14 @@ private[mesos] class MesosClusterDispatcher(
     conf: SparkConf)
   extends Logging {

+  {
+    // This doesn't support authentication because the RestSubmissionServer doesn't support it.
+    val authKey = SecurityManager.SPARK_AUTH_SECRET_CONF
+    require(conf.getOption(authKey).isEmpty,
+      s"The MesosClusterDispatcher does not support authentication via ${authKey}. It is not " +
+      s"currently possible to run jobs in cluster mode with authentication on.")
+  }
+
   private val publicAddress = Option(conf.getenv("SPARK_PUBLIC_DNS")).getOrElse(args.host)
   private val recoveryMode = conf.get(RECOVERY_MODE).toUpperCase()
   logInfo("Recovery mode in Mesos dispatcher set to: " + recoveryMode)
