
Commit 7b012c9

CodingCat authored and pwendell committed
[SPARK-1105] fix site scala version error in docs
https://spark-project.atlassian.net/browse/SPARK-1105

fix site scala version error

Author: CodingCat <zhunansjtu@gmail.com>

Closes apache#618 from CodingCat/doc_version and squashes the following commits:

39bb8aa [CodingCat] more fixes
65bedb0 [CodingCat] fix site scala version error in doc
1 parent b61435c commit 7b012c9

8 files changed: 27 additions & 26 deletions

docs/_config.yml

Lines changed: 2 additions & 1 deletion
@@ -5,7 +5,8 @@ markdown: kramdown
 # of Spark, Scala, and Mesos.
 SPARK_VERSION: 1.0.0-incubating-SNAPSHOT
 SPARK_VERSION_SHORT: 1.0.0
-SCALA_VERSION: "2.10"
+SCALA_BINARY_VERSION: "2.10"
+SCALA_VERSION: "2.10.3"
 MESOS_VERSION: 0.13.0
 SPARK_ISSUE_TRACKER_URL: https://spark-project.atlassian.net
 SPARK_GITHUB_URL: https://github.com/apache/incubator-spark
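Background on the split: published Spark artifacts carry the Scala binary version as an artifact-ID suffix (`spark-core_2.10`), while the full release (`2.10.3`) only pins the compiler. A minimal sbt sketch of how the two values line up (illustrative, not part of this commit):

    // Full Scala release -- what SCALA_VERSION now holds.
    scalaVersion := "2.10.3"

    // %% appends the Scala binary version to the artifact ID, resolving
    // spark-core_2.10 -- the suffix that SCALA_BINARY_VERSION renders in the docs.
    libraryDependencies += "org.apache.spark" %% "spark-core" % "1.0.0-incubating-SNAPSHOT"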

docs/bagel-programming-guide.md

Lines changed: 1 addition & 1 deletion
@@ -16,7 +16,7 @@ This guide shows the programming model and features of Bagel by walking through
 To use Bagel in your program, add the following SBT or Maven dependency:
 
     groupId = org.apache.spark
-    artifactId = spark-bagel_{{site.SCALA_VERSION}}
+    artifactId = spark-bagel_{{site.SCALA_BINARY_VERSION}}
     version = {{site.SPARK_VERSION}}
 
 # Programming Model
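Rendered with the `_config.yml` values above, the coordinate becomes `spark-bagel_2.10`. As a Maven dependency that would read (a sketch mirroring the `pom.xml` form used in the quick start below):

    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-bagel_2.10</artifactId>
      <version>1.0.0-incubating-SNAPSHOT</version>
    </dependency>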

docs/building-with-maven.md

Lines changed: 2 additions & 2 deletions
@@ -17,10 +17,10 @@ You'll need to configure Maven to use more memory than usual by setting `MAVEN_O
 
 If you don't run this, you may see errors like the following:
 
-    [INFO] Compiling 203 Scala sources and 9 Java sources to /Users/me/Development/spark/core/target/scala-{{site.SCALA_VERSION}}/classes...
+    [INFO] Compiling 203 Scala sources and 9 Java sources to /Users/me/Development/spark/core/target/scala-{{site.SCALA_BINARY_VERSION}}/classes...
     [ERROR] PermGen space -> [Help 1]
 
-    [INFO] Compiling 203 Scala sources and 9 Java sources to /Users/me/Development/spark/core/target/scala-{{site.SCALA_VERSION}}/classes...
+    [INFO] Compiling 203 Scala sources and 9 Java sources to /Users/me/Development/spark/core/target/scala-{{site.SCALA_BINARY_VERSION}}/classes...
     [ERROR] Java heap space -> [Help 1]
 
 You can fix this by setting the `MAVEN_OPTS` variable as discussed before.
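For reference, the `MAVEN_OPTS` setting that the guide's opening section prescribes is along these lines (treat the exact sizes as a starting point, not a requirement):

    # Give Maven more heap and PermGen space before building Spark.
    export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"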

docs/index.md

Lines changed: 1 addition & 1 deletion
@@ -19,7 +19,7 @@ Spark uses [Simple Build Tool](http://www.scala-sbt.org), which is bundled with
 
     sbt/sbt assembly
 
-For its Scala API, Spark {{site.SPARK_VERSION}} depends on Scala {{site.SCALA_VERSION}}. If you write applications in Scala, you will need to use this same version of Scala in your own program -- newer major versions may not work. You can get the right version of Scala from [scala-lang.org](http://www.scala-lang.org/download/).
+For its Scala API, Spark {{site.SPARK_VERSION}} depends on Scala {{site.SCALA_BINARY_VERSION}}. If you write applications in Scala, you will need to use a compatible Scala version (e.g. {{site.SCALA_BINARY_VERSION}}.X) -- newer major versions may not work. You can get the right version of Scala from [scala-lang.org](http://www.scala-lang.org/download/).
 
 # Running the Examples and Shell
 
docs/quick-start.md

Lines changed: 2 additions & 2 deletions
@@ -115,7 +115,7 @@ object SimpleApp {
   def main(args: Array[String]) {
     val logFile = "$YOUR_SPARK_HOME/README.md" // Should be some file on your system
     val sc = new SparkContext("local", "Simple App", "YOUR_SPARK_HOME",
-      List("target/scala-{{site.SCALA_VERSION}}/simple-project_{{site.SCALA_VERSION}}-1.0.jar"))
+      List("target/scala-{{site.SCALA_BINARY_VERSION}}/simple-project_{{site.SCALA_BINARY_VERSION}}-1.0.jar"))
     val logData = sc.textFile(logFile, 2).cache()
     val numAs = logData.filter(line => line.contains("a")).count()
     val numBs = logData.filter(line => line.contains("b")).count()
@@ -214,7 +214,7 @@ To build the program, we also write a Maven `pom.xml` file that lists Spark as a
     <dependencies>
      <dependency> <!-- Spark dependency -->
        <groupId>org.apache.spark</groupId>
-        <artifactId>spark-core_{{site.SCALA_VERSION}}</artifactId>
+        <artifactId>spark-core_{{site.SCALA_BINARY_VERSION}}</artifactId>
        <version>{{site.SPARK_VERSION}}</version>
      </dependency>
    </dependencies>
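For context, the jar path passed to the `SparkContext` constructor is where `sbt package` writes the output of the quick start's `simple.sbt`, which looks roughly like this (a sketch assuming the versions in `_config.yml`; it produces `target/scala-2.10/simple-project_2.10-1.0.jar`):

    name := "Simple Project"

    version := "1.0"

    scalaVersion := "2.10.3"

    libraryDependencies += "org.apache.spark" %% "spark-core" % "1.0.0-incubating-SNAPSHOT"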

docs/running-on-yarn.md

Lines changed: 8 additions & 8 deletions
@@ -15,7 +15,7 @@ This can be built by setting the Hadoop version and `SPARK_YARN` environment var
     SPARK_HADOOP_VERSION=2.0.5-alpha SPARK_YARN=true sbt/sbt assembly
 
 The assembled JAR will be something like this:
-`./assembly/target/scala-{{site.SCALA_VERSION}}/spark-assembly_{{site.SPARK_VERSION}}-hadoop2.0.5.jar`.
+`./assembly/target/scala-{{site.SCALA_BINARY_VERSION}}/spark-assembly_{{site.SPARK_VERSION}}-hadoop2.0.5.jar`.
 
 The build process now also supports new YARN versions (2.2.x). See below.
 
@@ -25,7 +25,7 @@ The build process now also supports new YARN versions (2.2.x). See below.
 - The assembled jar can be installed into HDFS or used locally.
 - Your application code must be packaged into a separate JAR file.
 
-If you want to test out the YARN deployment mode, you can use the current Spark examples. A `spark-examples_{{site.SCALA_VERSION}}-{{site.SPARK_VERSION}}` file can be generated by running `sbt/sbt assembly`. NOTE: since the documentation you're reading is for Spark version {{site.SPARK_VERSION}}, we are assuming here that you have downloaded Spark {{site.SPARK_VERSION}} or checked it out of source control. If you are using a different version of Spark, the version numbers in the jar generated by the sbt package command will obviously be different.
+If you want to test out the YARN deployment mode, you can use the current Spark examples. A `spark-examples_{{site.SCALA_BINARY_VERSION}}-{{site.SPARK_VERSION}}` file can be generated by running `sbt/sbt assembly`. NOTE: since the documentation you're reading is for Spark version {{site.SPARK_VERSION}}, we are assuming here that you have downloaded Spark {{site.SPARK_VERSION}} or checked it out of source control. If you are using a different version of Spark, the version numbers in the jar generated by the sbt package command will obviously be different.
 
 # Configuration
 
@@ -78,9 +78,9 @@ For example:
     $ cp conf/log4j.properties.template conf/log4j.properties
 
     # Submit Spark's ApplicationMaster to YARN's ResourceManager, and instruct Spark to run the SparkPi example
-    $ SPARK_JAR=./assembly/target/scala-{{site.SCALA_VERSION}}/spark-assembly-{{site.SPARK_VERSION}}-hadoop2.0.5-alpha.jar \
+    $ SPARK_JAR=./assembly/target/scala-{{site.SCALA_BINARY_VERSION}}/spark-assembly-{{site.SPARK_VERSION}}-hadoop2.0.5-alpha.jar \
         ./bin/spark-class org.apache.spark.deploy.yarn.Client \
-          --jar examples/target/scala-{{site.SCALA_VERSION}}/spark-examples-assembly-{{site.SPARK_VERSION}}.jar \
+          --jar examples/target/scala-{{site.SCALA_BINARY_VERSION}}/spark-examples-assembly-{{site.SPARK_VERSION}}.jar \
           --class org.apache.spark.examples.SparkPi \
           --args yarn-standalone \
           --num-workers 3 \
@@ -117,13 +117,13 @@ In order to tune worker core/number/memory etc. You need to export environment v
 
 For example:
 
-    SPARK_JAR=./assembly/target/scala-{{site.SCALA_VERSION}}/spark-assembly-{{site.SPARK_VERSION}}-hadoop2.0.5-alpha.jar \
-    SPARK_YARN_APP_JAR=examples/target/scala-{{site.SCALA_VERSION}}/spark-examples-assembly-{{site.SPARK_VERSION}}.jar \
+    SPARK_JAR=./assembly/target/scala-{{site.SCALA_BINARY_VERSION}}/spark-assembly-{{site.SPARK_VERSION}}-hadoop2.0.5-alpha.jar \
+    SPARK_YARN_APP_JAR=examples/target/scala-{{site.SCALA_BINARY_VERSION}}/spark-examples-assembly-{{site.SPARK_VERSION}}.jar \
     ./bin/run-example org.apache.spark.examples.SparkPi yarn-client
 
 
-    SPARK_JAR=./assembly/target/scala-{{site.SCALA_VERSION}}/spark-assembly-{{site.SPARK_VERSION}}-hadoop2.0.5-alpha.jar \
-    SPARK_YARN_APP_JAR=examples/target/scala-{{site.SCALA_VERSION}}/spark-examples-assembly-{{site.SPARK_VERSION}}.jar \
+    SPARK_JAR=./assembly/target/scala-{{site.SCALA_BINARY_VERSION}}/spark-assembly-{{site.SPARK_VERSION}}-hadoop2.0.5-alpha.jar \
+    SPARK_YARN_APP_JAR=examples/target/scala-{{site.SCALA_BINARY_VERSION}}/spark-examples-assembly-{{site.SPARK_VERSION}}.jar \
     MASTER=yarn-client ./bin/spark-shell
 
 

docs/scala-programming-guide.md

Lines changed: 3 additions & 3 deletions
@@ -17,12 +17,12 @@ This guide shows each of these features and walks through some samples. It assum
 
 # Linking with Spark
 
-Spark {{site.SPARK_VERSION}} uses Scala {{site.SCALA_VERSION}}. If you write applications in Scala, you'll need to use this same version of Scala in your program -- newer major versions may not work.
+Spark {{site.SPARK_VERSION}} uses Scala {{site.SCALA_BINARY_VERSION}}. If you write applications in Scala, you will need to use a compatible Scala version (e.g. {{site.SCALA_BINARY_VERSION}}.X) -- newer major versions may not work.
 
 To write a Spark application, you need to add a dependency on Spark. If you use SBT or Maven, Spark is available through Maven Central at:
 
     groupId = org.apache.spark
-    artifactId = spark-core_{{site.SCALA_VERSION}}
+    artifactId = spark-core_{{site.SCALA_BINARY_VERSION}}
     version = {{site.SPARK_VERSION}}
 
 In addition, if you wish to access an HDFS cluster, you need to add a dependency on `hadoop-client` for your version of HDFS:
@@ -31,7 +31,7 @@ In addition, if you wish to access an HDFS cluster, you need to add a dependency
     artifactId = hadoop-client
     version = <your-hdfs-version>
 
-For other build systems, you can run `sbt/sbt assembly` to pack Spark and its dependencies into one JAR (`assembly/target/scala-{{site.SCALA_VERSION}}/spark-assembly-{{site.SPARK_VERSION}}-hadoop*.jar`), then add this to your CLASSPATH. Set the HDFS version as described [here](index.html#a-note-about-hadoop-versions).
+For other build systems, you can run `sbt/sbt assembly` to pack Spark and its dependencies into one JAR (`assembly/target/scala-{{site.SCALA_BINARY_VERSION}}/spark-assembly-{{site.SPARK_VERSION}}-hadoop*.jar`), then add this to your CLASSPATH. Set the HDFS version as described [here](index.html#a-note-about-hadoop-versions).
 
 Finally, you need to import some Spark classes and implicit conversions into your program. Add the following lines:
 
docs/streaming-programming-guide.md

Lines changed: 8 additions & 8 deletions
@@ -275,23 +275,23 @@ To write your own Spark Streaming program, you will have to add the following de
 SBT or Maven project:
 
     groupId = org.apache.spark
-    artifactId = spark-streaming_{{site.SCALA_VERSION}}
+    artifactId = spark-streaming_{{site.SCALA_BINARY_VERSION}}
     version = {{site.SPARK_VERSION}}
 
 For ingesting data from sources like Kafka and Flume that are not present in the Spark
 Streaming core
 API, you will have to add the corresponding
-artifact `spark-streaming-xyz_{{site.SCALA_VERSION}}` to the dependencies. For example,
+artifact `spark-streaming-xyz_{{site.SCALA_BINARY_VERSION}}` to the dependencies. For example,
 some of the common ones are as follows.
 
 
 <table class="table">
 <tr><th>Source</th><th>Artifact</th></tr>
-<tr><td> Kafka </td><td> spark-streaming-kafka_{{site.SCALA_VERSION}} </td></tr>
-<tr><td> Flume </td><td> spark-streaming-flume_{{site.SCALA_VERSION}} </td></tr>
-<tr><td> Twitter </td><td> spark-streaming-twitter_{{site.SCALA_VERSION}} </td></tr>
-<tr><td> ZeroMQ </td><td> spark-streaming-zeromq_{{site.SCALA_VERSION}} </td></tr>
-<tr><td> MQTT </td><td> spark-streaming-mqtt_{{site.SCALA_VERSION}} </td></tr>
+<tr><td> Kafka </td><td> spark-streaming-kafka_{{site.SCALA_BINARY_VERSION}} </td></tr>
+<tr><td> Flume </td><td> spark-streaming-flume_{{site.SCALA_BINARY_VERSION}} </td></tr>
+<tr><td> Twitter </td><td> spark-streaming-twitter_{{site.SCALA_BINARY_VERSION}} </td></tr>
+<tr><td> ZeroMQ </td><td> spark-streaming-zeromq_{{site.SCALA_BINARY_VERSION}} </td></tr>
+<tr><td> MQTT </td><td> spark-streaming-mqtt_{{site.SCALA_BINARY_VERSION}} </td></tr>
 <tr><td> </td><td></td></tr>
 </table>
 
@@ -410,7 +410,7 @@ Scala and [JavaStreamingContext](api/streaming/index.html#org.apache.spark.strea
 Additional functionality for creating DStreams from sources such as Kafka, Flume, and Twitter
 can be imported by adding the right dependencies as explained in an
 [earlier](#linking) section. To take the
-case of Kafka, after adding the artifact `spark-streaming-kafka_{{site.SCALA_VERSION}}` to the
+case of Kafka, after adding the artifact `spark-streaming-kafka_{{site.SCALA_BINARY_VERSION}}` to the
 project dependencies, you can create a DStream from Kafka as
 
 <div class="codetabs">
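The DStream creation that this hunk leads into looks roughly like the following in the Scala API of this release (a sketch; `ssc`, `zkQuorum`, `group`, and `topicMap` are placeholders you supply):

    import org.apache.spark.streaming.kafka.KafkaUtils

    // ssc is an existing StreamingContext; topicMap maps each topic name to
    // the number of consumer threads that should read it.
    val kafkaStream = KafkaUtils.createStream(ssc, zkQuorum, group, topicMap)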
