
Commit 8406ac4

Allan Douglas R. de Oliveira authored and pwendell committed

EC2 configurable workers

Added an option to configure the number of worker instances and to set SPARK_MASTER_OPTS.

Depends on: mesos/spark-ec2#46

Author: Allan Douglas R. de Oliveira <allan@chaordicsystems.com>

Closes #612 from douglaz/ec2_configurable_workers and squashes the following commits:

d6c5d65 [Allan Douglas R. de Oliveira] Added master opts parameter
6c34671 [Allan Douglas R. de Oliveira] Use number of worker instances as string on template
ba528b9 [Allan Douglas R. de Oliveira] Added SPARK_WORKER_INSTANCES parameter

(cherry picked from commit 4669a84)
Signed-off-by: Patrick Wendell <pwendell@gmail.com>
1 parent 36e687d commit 8406ac4

2 files changed: 12 additions & 2 deletions


ec2/deploy.generic/root/spark-ec2/ec2-variables.sh
Lines changed: 2 additions & 0 deletions

@@ -28,3 +28,5 @@ export SPARK_VERSION="{{spark_version}}"
 export SHARK_VERSION="{{shark_version}}"
 export HADOOP_MAJOR_VERSION="{{hadoop_major_version}}"
 export SWAP_MB="{{swap}}"
+export SPARK_WORKER_INSTANCES="{{spark_worker_instances}}"
+export SPARK_MASTER_OPTS="{{spark_master_opts}}"
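The `{{...}}` placeholders above are filled in from the values gathered by the command-line options in spark_ec2.py. A minimal sketch of that style of substitution, assuming illustrative names (`render_template` and `template_vars` are not the actual helpers used by the scripts):

```python
# Sketch of {{key}} template substitution over ec2-variables.sh-style text.
# render_template / template_vars are illustrative names, not Spark's API.
import re

def render_template(text, template_vars):
    """Replace each {{key}} placeholder with its value from template_vars;
    unknown keys are left untouched."""
    return re.sub(r"\{\{(\w+)\}\}",
                  lambda m: template_vars.get(m.group(1), m.group(0)),
                  text)

template_vars = {
    # values as they would come from --worker-instances / --master-opts
    "spark_worker_instances": "3",
    "spark_master_opts": "-Dspark.worker.timeout=180",
}
line = 'export SPARK_WORKER_INSTANCES="{{spark_worker_instances}}"'
print(render_template(line, template_vars))
# -> export SPARK_WORKER_INSTANCES="3"
```

Note that the worker count is passed through as a string ("Use number of worker instances as string on template" in the squashed commits), since the rendered output is a shell script.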

ec2/spark_ec2.py
Lines changed: 10 additions & 2 deletions

@@ -103,6 +103,12 @@ def parse_args():
        help="When destroying a cluster, delete the security groups that were created")
   parser.add_option("--use-existing-master", action="store_true", default=False,
        help="Launch fresh slaves, but use an existing stopped master if possible")
+  parser.add_option("--worker-instances", type="int", default=1,
+       help="Number of instances per worker: variable SPARK_WORKER_INSTANCES (default: 1)")
+  parser.add_option("--master-opts", type="string", default="",
+       help="Extra options to give to master through SPARK_MASTER_OPTS variable (e.g -Dspark.worker.timeout=180)")
+
+
   (opts, args) = parser.parse_args()
   if len(args) != 2:

@@ -224,7 +230,7 @@ def launch_cluster(conn, opts, cluster_name):
     sys.exit(1)
   if opts.key_pair is None:
     print >> stderr, "ERROR: Must provide a key pair name (-k) to use on instances."
-    sys.exit(1)
+    sys.exit(1)
   print "Setting up security groups..."
   master_group = get_or_make_group(conn, cluster_name + "-master")
   slave_group = get_or_make_group(conn, cluster_name + "-slaves")

(The sys.exit(1) change above is whitespace-only.)

@@ -552,7 +558,9 @@ def deploy_files(conn, root_dir, opts, master_nodes, slave_nodes, modules):
     "modules": '\n'.join(modules),
     "spark_version": spark_v,
     "shark_version": shark_v,
-    "hadoop_major_version": opts.hadoop_major_version
+    "hadoop_major_version": opts.hadoop_major_version,
+    "spark_worker_instances": "%d" % opts.worker_instances,
+    "spark_master_opts": opts.master_opts
   }

   # Create a temp directory in which we will place all the files to be
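The two new optparse options behave as shown in the standalone sketch below, which mirrors the `add_option` calls from the diff (the parser here is a fresh `OptionParser`, not the one built in `parse_args`; the argument list passed to `parse_args` is an illustrative invocation):

```python
# Standalone sketch of the two flags added to spark_ec2.py's option parser.
from optparse import OptionParser

parser = OptionParser()
parser.add_option("--worker-instances", type="int", default=1,
    help="Number of instances per worker: variable SPARK_WORKER_INSTANCES (default: 1)")
parser.add_option("--master-opts", type="string", default="",
    help="Extra options to give to master through SPARK_MASTER_OPTS variable "
         "(e.g -Dspark.worker.timeout=180)")

# optparse maps "--worker-instances" to opts.worker_instances, etc.
(opts, args) = parser.parse_args(
    ["--worker-instances", "3", "--master-opts=-Dspark.worker.timeout=180"])
print(opts.worker_instances)  # -> 3
print(opts.master_opts)       # -> -Dspark.worker.timeout=180
```

When neither flag is given, the defaults (`1` and the empty string) flow through `deploy_files` into `SPARK_WORKER_INSTANCES` and `SPARK_MASTER_OPTS`, preserving the previous single-instance behavior.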
