@@ -62,7 +62,7 @@ The command to launch the Spark application on the cluster is as follows:
   --args <APP_MAIN_ARGUMENTS> \
   --num-executors <NUMBER_OF_EXECUTOR_PROCESSES> \
   --am-class <ApplicationMaster_CLASS> \
-  --am-memory <MEMORY_FOR_ApplicationMaster> \
+  --driver-memory <MEMORY_FOR_ApplicationMaster> \
   --executor-memory <MEMORY_PER_EXECUTOR> \
   --executor-cores <CORES_PER_EXECUTOR> \
   --name <application_name> \
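For context, the renamed flag slots into the full launch command roughly as follows. This is a sketch assuming the `org.apache.spark.deploy.yarn.Client` entry point and `SPARK_JAR` variable used by Spark-on-YARN docs of this era; the angle-bracket placeholders are the user's to fill in:

    SPARK_JAR=<SPARK_ASSEMBLY_JAR_FILE> ./bin/spark-class org.apache.spark.deploy.yarn.Client \
      --jar <YOUR_APP_JAR_FILE> \
      --class <APP_MAIN_CLASS> \
      --args <APP_MAIN_ARGUMENTS> \
      --num-executors <NUMBER_OF_EXECUTOR_PROCESSES> \
      --am-class <ApplicationMaster_CLASS> \
      --driver-memory <MEMORY_FOR_ApplicationMaster> \
      --executor-memory <MEMORY_PER_EXECUTOR> \
      --executor-cores <CORES_PER_EXECUTOR> \
      --name <application_name>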
@@ -86,7 +86,7 @@ For example:
   --class org.apache.spark.examples.SparkPi \
   --args yarn-cluster \
   --num-executors 3 \
-  --am-memory 4g \
+  --driver-memory 4g \
   --executor-memory 2g \
   --executor-cores 1
 
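For a copy-pasteable version of this example, the full invocation would look something like the sketch below. The jar paths are illustrative placeholders, not paths from this patch; adjust them to your build:

    SPARK_JAR=<path/to/spark-assembly.jar> ./bin/spark-class org.apache.spark.deploy.yarn.Client \
      --jar <path/to/spark-examples.jar> \
      --class org.apache.spark.examples.SparkPi \
      --args yarn-cluster \
      --num-executors 3 \
      --driver-memory 4g \
      --executor-memory 2g \
      --executor-cores 1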
@@ -105,7 +105,7 @@ In order to tune executor cores/number/memory etc., you need to export environment variables
 * `SPARK_EXECUTOR_INSTANCES`, Number of executors to start (Default: 2)
 * `SPARK_EXECUTOR_CORES`, Number of cores per executor (Default: 1)
 * `SPARK_EXECUTOR_MEMORY`, Memory per executor (e.g. 1000M, 2G) (Default: 1G)
-* `SPARK_AM_MEMORY`, Memory for the application master (e.g. 1000M, 2G) (Default: 512 MB)
+* `SPARK_DRIVER_MEMORY`, Memory for the driver (e.g. 1000M, 2G) (Default: 512 MB)
 * `SPARK_YARN_APP_NAME`, The name of your application (Default: Spark)
 * `SPARK_YARN_QUEUE`, The YARN queue to use for allocation requests (Default: 'default')
 * `SPARK_YARN_DIST_FILES`, Comma-separated list of files to be distributed with the job.
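If you tune through environment variables rather than command-line flags, a session using the renamed variable might look like the sketch below. The values are illustrative only, chosen to mirror the SparkPi example above, not recommendations:

    # Illustrative values; SPARK_DRIVER_MEMORY replaces the old SPARK_AM_MEMORY.
    export SPARK_EXECUTOR_INSTANCES=3
    export SPARK_EXECUTOR_CORES=1
    export SPARK_EXECUTOR_MEMORY=2G
    export SPARK_DRIVER_MEMORY=4G
    export SPARK_YARN_APP_NAME="SparkPi"
    export SPARK_YARN_QUEUE=default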