
Commit 3dc7a45

Merge pull request #614 from JohnSnowLabs/221-release-candidate

2.2.1 Release Candidate

2 parents: 3c38492 + 1fb0f59

File tree

11 files changed (+70, -49 lines)

CHANGELOG

Lines changed: 21 additions & 0 deletions
@@ -1,3 +1,24 @@
+========
+2.2.1
+========
+---------------
+Overview
+---------------
+This short release addresses a few issues uncovered in the previous 2.2.0 release. Thank you all for the quick feedback.
+
+---------------
+Enhancements
+---------------
+* New NerDLApproach param includeValidationProp allows partitioning the training set and excluding a fraction of it
+* NerDLApproach trainValidationProp now randomly samples the data instead of taking it head first
+
+---------------
+Bugfixes
+---------------
+* Fixed a bug in ResourceHelper that caused folder resources to fail when a folder is empty (affects various annotators)
+* Fixed a bug in Python where the embeddings format was not parsed to upper case
+* Fixed a bug in Python that prevented loading PipelineModels after loading embeddings
+
 ========
 2.2.0
 ========
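The two NerDLApproach enhancements above are easiest to see in code. Below is a minimal, hypothetical Python sketch: the setter names `setTrainValidationProp` and `setIncludeValidationProp`, and their types (a fraction and a boolean flag), are inferred from the param names in this changelog and the usual Spark NLP setter convention, not confirmed against the 2.2.1 API.

```python
# Hypothetical sketch of the NerDLApproach validation-split params above.
# Setter names and types are inferred from the changelog, not confirmed.
from sparknlp.annotator import NerDLApproach

ner_tagger = NerDLApproach() \
    .setInputCols(["sentence", "token", "embeddings"]) \
    .setOutputCol("ner") \
    .setLabelColumn("label") \
    .setMaxEpochs(10) \
    .setTrainValidationProp(0.2) \
    .setIncludeValidationProp(True)  # assumed flag: hold the sampled 20% out of training
```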

README.md

Lines changed: 16 additions & 16 deletions
@@ -42,7 +42,7 @@ Take a look at our official Spark NLP page: [http://nlp.johnsnowlabs.com/](http:
 
 ## Apache Spark Support
 
-Spark NLP *2.2.0* has been built on top of Apache Spark 2.4.3
+Spark NLP *2.2.1* has been built on top of Apache Spark 2.4.3
 
 | Spark NLP | Spark 2.3.x | Spark 2.4 |
 |-------------|-------------------------------------|--------------|
@@ -68,18 +68,18 @@ This library has been uploaded to the [spark-packages repository](https://spark-
 
 Benefit of spark-packages is that it makes the library available for both Scala-Java and Python
 
-To use the most recent version just add the `--packages JohnSnowLabs:spark-nlp:2.2.0` to your spark command
+To use the most recent version just add the `--packages JohnSnowLabs:spark-nlp:2.2.1` to your spark command
 
 ```sh
-spark-shell --packages JohnSnowLabs:spark-nlp:2.2.0
+spark-shell --packages JohnSnowLabs:spark-nlp:2.2.1
 ```
 
 ```sh
-pyspark --packages JohnSnowLabs:spark-nlp:2.2.0
+pyspark --packages JohnSnowLabs:spark-nlp:2.2.1
 ```
 
 ```sh
-spark-submit --packages JohnSnowLabs:spark-nlp:2.2.0
+spark-submit --packages JohnSnowLabs:spark-nlp:2.2.1
 ```
 
 This can also be used to create a SparkSession manually by using the `spark.jars.packages` option in both Python and Scala
@@ -147,7 +147,7 @@ Our package is deployed to maven central. In order to add this package as a depe
 <dependency>
     <groupId>com.johnsnowlabs.nlp</groupId>
     <artifactId>spark-nlp_2.11</artifactId>
-    <version>2.2.0</version>
+    <version>2.2.1</version>
 </dependency>
 ```

@@ -158,22 +158,22 @@ and
 <dependency>
     <groupId>com.johnsnowlabs.nlp</groupId>
     <artifactId>spark-nlp-ocr_2.11</artifactId>
-    <version>2.2.0</version>
+    <version>2.2.1</version>
 </dependency>
 ```
 
 ### SBT
 
 ```sbtshell
 // https://mvnrepository.com/artifact/com.johnsnowlabs.nlp/spark-nlp
-libraryDependencies += "com.johnsnowlabs.nlp" %% "spark-nlp" % "2.2.0"
+libraryDependencies += "com.johnsnowlabs.nlp" %% "spark-nlp" % "2.2.1"
 ```
 
 and
 
 ```sbtshell
 // https://mvnrepository.com/artifact/com.johnsnowlabs.nlp/spark-nlp-ocr
-libraryDependencies += "com.johnsnowlabs.nlp" %% "spark-nlp-ocr" % "2.2.0"
+libraryDependencies += "com.johnsnowlabs.nlp" %% "spark-nlp-ocr" % "2.2.1"
 ```
 
 Maven Central: [https://mvnrepository.com/artifact/com.johnsnowlabs.nlp](https://mvnrepository.com/artifact/com.johnsnowlabs.nlp)
@@ -189,7 +189,7 @@ If you installed pyspark through pip/conda, you can install `spark-nlp` through
 Pip:
 
 ```bash
-pip install spark-nlp==2.2.0
+pip install spark-nlp==2.2.1
 ```
 
 Conda:
@@ -216,7 +216,7 @@ spark = SparkSession.builder \
     .master("local[4]")\
     .config("spark.driver.memory","8G")\
     .config("spark.driver.maxResultSize", "2G") \
-    .config("spark.jars.packages", "JohnSnowLabs:spark-nlp:2.2.0")\
+    .config("spark.jars.packages", "JohnSnowLabs:spark-nlp:2.2.1")\
     .config("spark.kryoserializer.buffer.max", "500m")\
     .getOrCreate()
 ```
@@ -249,7 +249,7 @@ Use either one of the following options
 * Add the following Maven Coordinates to the interpreter's library list
 
 ```bash
-com.johnsnowlabs.nlp:spark-nlp_2.11:2.2.0
+com.johnsnowlabs.nlp:spark-nlp_2.11:2.2.1
 ```
 
 * Add path to pre-built jar from [here](#pre-compiled-spark-nlp-and-spark-nlp-ocr) in the interpreter's library list, making sure the jar is available on the driver path
@@ -259,7 +259,7 @@ com.johnsnowlabs.nlp:spark-nlp_2.11:2.2.0
 Apart from the previous step, install the python module through pip
 
 ```bash
-pip install spark-nlp==2.2.0
+pip install spark-nlp==2.2.1
 ```
 
 Or you can install `spark-nlp` from inside Zeppelin by using Conda:
@@ -284,7 +284,7 @@ export PYSPARK_PYTHON=python3
 export PYSPARK_DRIVER_PYTHON=jupyter
 export PYSPARK_DRIVER_PYTHON_OPTS=notebook
 
-pyspark --packages JohnSnowLabs:spark-nlp:2.2.0
+pyspark --packages JohnSnowLabs:spark-nlp:2.2.1
 ```
 
 Alternatively, you can mix in using the `--jars` option for pyspark + `pip install spark-nlp`
@@ -346,7 +346,7 @@ To include the OCR submodule in Spark NLP, you will need to add the following to
 
 ```bash
 --repositories http://repo.spring.io/plugins-release
---packages JohnSnowLabs:spark-nlp:2.2.0,com.johnsnowlabs.nlp:spark-nlp-ocr_2.11:2.2.0,javax.media.jai:com.springsource.javax.media.jai.core:1.1.3
+--packages JohnSnowLabs:spark-nlp:2.2.1,com.johnsnowlabs.nlp:spark-nlp-ocr_2.11:2.2.1,javax.media.jai:com.springsource.javax.media.jai.core:1.1.3
 ```
 
 This way you will download the extra dependencies needed by our OCR submodule. The Python SparkSession equivalent is
@@ -358,7 +358,7 @@ spark = SparkSession.builder \
     .config("spark.driver.memory", "6g") \
     .config("spark.executor.memory", "6g") \
     .config("spark.jars.repositories", "http://repo.spring.io/plugins-release") \
-    .config("spark.jars.packages", "JohnSnowLabs:spark-nlp:2.2.0,com.johnsnowlabs.nlp:spark-nlp-ocr_2.11:2.2.0,javax.media.jai:com.springsource.javax.media.jai.core:1.1.3") \
+    .config("spark.jars.packages", "JohnSnowLabs:spark-nlp:2.2.1,com.johnsnowlabs.nlp:spark-nlp-ocr_2.11:2.2.1,javax.media.jai:com.springsource.javax.media.jai.core:1.1.3") \
     .getOrCreate()
 ```

build.sbt

Lines changed: 3 additions & 3 deletions
@@ -16,7 +16,7 @@ if(is_gpu.equals("false")){
 
 organization:= "com.johnsnowlabs.nlp"
 
-version := "2.2.0"
+version := "2.2.1"
 
 scalaVersion in ThisBuild := scalaVer
@@ -200,7 +200,7 @@ assemblyMergeStrategy in assembly := {
 lazy val evaluation = (project in file("eval"))
   .settings(
     name := "spark-nlp-eval",
-    version := "2.2.0",
+    version := "2.2.1",
 
     assemblyMergeStrategy in assembly := evalMergeRules,
@@ -241,7 +241,7 @@ lazy val evaluation = (project in file("eval"))
 lazy val ocr = (project in file("ocr"))
   .settings(
     name := "spark-nlp-ocr",
-    version := "2.2.0",
+    version := "2.2.1",
 
     test in assembly := {},
docs/_layouts/landing.html

Lines changed: 5 additions & 5 deletions
@@ -49,22 +49,22 @@ <h1>{{ _section.title }}</h1>
 <div class="cell cell--12 cell--lg-12" style="text-align: left; background-color: #2d2d2d; padding: 10px">
 {% highlight bash %}
 # Install Spark NLP from PyPI
-$ pip install spark-nlp==2.2.0
+$ pip install spark-nlp==2.2.1
 
 # Install Spark NLP from Anaconda/Conda
 $ conda install -c johnsnowlabs spark-nlp
 
 # Load Spark NLP with Spark Shell
-$ spark-shell --packages JohnSnowLabs:spark-nlp:2.2.0
+$ spark-shell --packages JohnSnowLabs:spark-nlp:2.2.1
 
 # Load Spark NLP with PySpark
-$ pyspark --packages JohnSnowLabs:spark-nlp:2.2.0
+$ pyspark --packages JohnSnowLabs:spark-nlp:2.2.1
 
 # Load Spark NLP with Spark Submit
-$ spark-submit --packages JohnSnowLabs:spark-nlp:2.2.0
+$ spark-submit --packages JohnSnowLabs:spark-nlp:2.2.1
 
 # Load Spark NLP as an external JAR after compiling and building Spark NLP by `sbt assembly`
-$ spark-shell --jar spark-nlp-assembly-2.2.0
+$ spark-shell --jar spark-nlp-assembly-2.2.1
 {% endhighlight %}
 </div>
 </div>

docs/en/install.md

Lines changed: 7 additions & 7 deletions
@@ -13,7 +13,7 @@ modify_date: "2019-05-16"
 If you installed pyspark through pip, you can install `spark-nlp` through pip as well.
 
 ```bash
-pip install spark-nlp==2.2.0
+pip install spark-nlp==2.2.1
 ```
 
 PyPI [spark-nlp package](https://pypi.org/project/spark-nlp/)
@@ -36,7 +36,7 @@ spark = SparkSession.builder \
     .master("local[*]")\
     .config("spark.driver.memory","8G")\
     .config("spark.driver.maxResultSize", "2G") \
-    .config("spark.jars.packages", "JohnSnowLabs:spark-nlp:2.2.0")\
+    .config("spark.jars.packages", "JohnSnowLabs:spark-nlp:2.2.1")\
     .config("spark.kryoserializer.buffer.max", "500m")\
     .getOrCreate()
 ```
@@ -97,7 +97,7 @@ Our package is deployed to maven central. In order to add this package as a depe
 <dependency>
     <groupId>com.johnsnowlabs.nlp</groupId>
     <artifactId>spark-nlp_2.11</artifactId>
-    <version>2.2.0</version>
+    <version>2.2.1</version>
 </dependency>
 ```

@@ -108,22 +108,22 @@ and
 <dependency>
     <groupId>com.johnsnowlabs.nlp</groupId>
     <artifactId>spark-nlp-ocr_2.11</artifactId>
-    <version>2.2.0</version>
+    <version>2.2.1</version>
 </dependency>
 ```
 
 ### SBT
 
 ```sbtshell
 // https://mvnrepository.com/artifact/com.johnsnowlabs.nlp/spark-nlp
-libraryDependencies += "com.johnsnowlabs.nlp" %% "spark-nlp" % "2.2.0"
+libraryDependencies += "com.johnsnowlabs.nlp" %% "spark-nlp" % "2.2.1"
 ```
 
 and
 
 ```sbtshell
 // https://mvnrepository.com/artifact/com.johnsnowlabs.nlp/spark-nlp-ocr
-libraryDependencies += "com.johnsnowlabs.nlp" %% "spark-nlp-ocr" % "2.2.0"
+libraryDependencies += "com.johnsnowlabs.nlp" %% "spark-nlp-ocr" % "2.2.1"
 ```
 
 Maven Central: [https://mvnrepository.com/artifact/com.johnsnowlabs.nlp](https://mvnrepository.com/artifact/com.johnsnowlabs.nlp)
@@ -151,7 +151,7 @@ Note: You can import these notebooks by using their URLs.
 4- From the Source drop-down menu, select **Maven Coordinate:**
 ![Databricks](https://databricks.com/wp-content/uploads/2015/07/select-maven-1024x711.png)
 
-5- Now, all available **Spark Packages** are at your fingertips! Just search for **JohnSnowLabs:spark-nlp:version** where **version** stands for the library version, such as `1.8.4` or `2.2.0`
+5- Now, all available **Spark Packages** are at your fingertips! Just search for **JohnSnowLabs:spark-nlp:version** where **version** stands for the library version, such as `1.8.4` or `2.2.1`
 ![Databricks](https://databricks.com/wp-content/uploads/2015/07/browser-1024x548.png)
 
 6- Select the **spark-nlp** package and we are good to go!

docs/en/quickstart.md

Lines changed: 9 additions & 9 deletions
@@ -29,17 +29,17 @@ Spark NLP is built on top of **Apache Spark 2.4.0** and such is the **only** sup
 To start using the library, execute any of the following lines depending on your desired use case:
 
 ```bash
-spark-shell --packages JohnSnowLabs:spark-nlp:2.2.0
-pyspark --packages JohnSnowLabs:spark-nlp:2.2.0
-spark-submit --packages JohnSnowLabs:spark-nlp:2.2.0
+spark-shell --packages JohnSnowLabs:spark-nlp:2.2.1
+pyspark --packages JohnSnowLabs:spark-nlp:2.2.1
+spark-submit --packages JohnSnowLabs:spark-nlp:2.2.1
 ```
 
 ### **Straightforward Python on a Jupyter notebook**
 
 Use pip to install (after you have pip-installed numpy and pyspark)
 
 ```bash
-pip install spark-nlp==2.2.0
+pip install spark-nlp==2.2.1
 jupyter notebook
 ```

@@ -60,7 +60,7 @@ spark = SparkSession.builder \
     .appName('OCR Eval') \
     .config("spark.driver.memory", "6g") \
     .config("spark.executor.memory", "6g") \
-    .config("spark.jars.packages", "JohnSnowLabs:spark-nlp:2.2.0") \
+    .config("spark.jars.packages", "JohnSnowLabs:spark-nlp:2.2.1") \
     .getOrCreate()
 ```

@@ -69,13 +69,13 @@ spark = SparkSession.builder \
 Add the following maven coordinates in the dependency configuration page:
 
 ```bash
-com.johnsnowlabs.nlp:spark-nlp_2.11:2.2.0
+com.johnsnowlabs.nlp:spark-nlp_2.11:2.2.1
 ```
 
 For Python in **Apache Zeppelin** you may need to set up _**SPARK_SUBMIT_OPTIONS**_ utilizing the --packages instruction shown above, like this:
 
 ```bash
-export SPARK_SUBMIT_OPTIONS="--packages JohnSnowLabs:spark-nlp:2.2.0"
+export SPARK_SUBMIT_OPTIONS="--packages JohnSnowLabs:spark-nlp:2.2.1"
 ```
 
 ### **Python Jupyter Notebook with PySpark**
@@ -85,7 +85,7 @@ export SPARK_HOME=/path/to/your/spark/folder
 export PYSPARK_DRIVER_PYTHON=jupyter
 export PYSPARK_DRIVER_PYTHON_OPTS=notebook
 
-pyspark --packages JohnSnowLabs:spark-nlp:2.2.0
+pyspark --packages JohnSnowLabs:spark-nlp:2.2.1
 ```
 
 ### S3 based standalone cluster (No Hadoop)
@@ -297,7 +297,7 @@ lightPipeline.annotate("Hello world, please annotate my text")
 Spark NLP OCR Module is not included within Spark NLP. It is neither an annotator nor an extension to Spark ML. You can include it with the following coordinates for Maven:
 
 ```bash
-com.johnsnowlabs.nlp:spark-nlp-ocr_2.11:2.2.0
+com.johnsnowlabs.nlp:spark-nlp-ocr_2.11:2.2.1
 ```
 
 ### Creating Spark datasets from PDF (To be used with Spark NLP)

python/setup.py

Lines changed: 1 addition & 1 deletion
@@ -40,7 +40,7 @@
     # For a discussion on single-sourcing the version across setup.py and the
     # project code, see
     # https://packaging.python.org/en/latest/single_source_version.html
-    version='2.2.0',  # Required
+    version='2.2.1',  # Required
 
     # This is a one-line description or tagline of what your project does. This
     # corresponds to the "Summary" metadata field:

python/sparknlp/__init__.py

Lines changed: 3 additions & 3 deletions
@@ -41,14 +41,14 @@ def start(include_ocr=False):
 
     if include_ocr:
         builder \
-            .config("spark.jars.packages", "JohnSnowLabs:spark-nlp:2.2.0,com.johnsnowlabs.nlp:spark-nlp-ocr_2.11:2.2.0,javax.media.jai:com.springsource.javax.media.jai.core:1.1.3") \
+            .config("spark.jars.packages", "JohnSnowLabs:spark-nlp:2.2.1,com.johnsnowlabs.nlp:spark-nlp-ocr_2.11:2.2.1,javax.media.jai:com.springsource.javax.media.jai.core:1.1.3") \
             .config("spark.jars.repositories", "http://repo.spring.io/plugins-release")
 
     else:
-        builder.config("spark.jars.packages", "JohnSnowLabs:spark-nlp:2.2.0") \
+        builder.config("spark.jars.packages", "JohnSnowLabs:spark-nlp:2.2.1") \
 
     return builder.getOrCreate()
 
 
 def version():
-    print('2.2.0')
+    print('2.2.1')
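For reference, a minimal usage sketch of the two helpers changed in this file, assuming `spark-nlp==2.2.1` and pyspark are installed:

```python
# Minimal sketch using the helpers from the diff above
# (assumes `pip install spark-nlp==2.2.1` and a working pyspark).
import sparknlp

spark = sparknlp.start()  # builds a SparkSession with JohnSnowLabs:spark-nlp:2.2.1
sparknlp.version()        # prints: 2.2.1
spark.stop()
```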

python/sparknlp/internal.py

Lines changed: 1 addition & 1 deletion
@@ -22,7 +22,7 @@ def new_java_obj(self, java_class, *args):
 
     def new_java_array(self, pylist, java_class):
         """
-        ToDo: Inspired from spark 2.2.0. Delete if we upgrade
+        ToDo: Inspired from spark 2.0. Review if spark changes
         """
         java_array = self.sc._gateway.new_array(java_class, len(pylist))
         for i in range(len(pylist)):

src/main/scala/com/johnsnowlabs/nlp/SparkNLP.scala

Lines changed: 3 additions & 3 deletions
@@ -4,7 +4,7 @@ import org.apache.spark.sql.SparkSession
 
 object SparkNLP {
 
-  val currentVersion = "2.2.0"
+  val currentVersion = "2.2.1"
 
   def start(includeOcr: Boolean = false): SparkSession = {
     val build = SparkSession.builder()
@@ -15,11 +15,11 @@ object SparkNLP {
 
     if (includeOcr) {
       build
-        .config("spark.jars.packages", "JohnSnowLabs:spark-nlp:2.2.0,com.johnsnowlabs.nlp:spark-nlp-ocr_2.11:2.2.0,javax.media.jai:com.springsource.javax.media.jai.core:1.1.3")
+        .config("spark.jars.packages", "JohnSnowLabs:spark-nlp:2.2.1,com.johnsnowlabs.nlp:spark-nlp-ocr_2.11:2.2.1,javax.media.jai:com.springsource.javax.media.jai.core:1.1.3")
         .config("spark.jars.repositories", "http://repo.spring.io/plugins-release")
     } else {
       build
-        .config("spark.jars.packages", "JohnSnowLabs:spark-nlp:2.2.0")
+        .config("spark.jars.packages", "JohnSnowLabs:spark-nlp:2.2.1")
     }
 
     build.getOrCreate()
