Commit 51f1f19

Release candidate 2.1.1

1 parent e5d354c

File tree

11 files changed: +63 -54 lines changed

CHANGELOG

Lines changed: 15 additions & 0 deletions

@@ -1,3 +1,18 @@
+========
+2.1.1
+========
+---------------
+Overview
+---------------
+Thank you so much for your feedback on Slack. This release extends the life of the 2.1.x series with important bugfixes from upstream.
+
+---------------
+Bugfixes
+---------------
+* Fixed a bug in NerConverter caused by empty entities, which returned an error when flushing entities
+* Fixed a bug when creating BERT models from Python, where contrib libraries were not loaded
+* Fixed missing setters for the whitelist param in NerConverter
+
 ========
 2.1.0
 ========
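The first bugfix in the changelog concerns NerConverter raising an error when flushing empty entities. The actual fix lives in the Scala annotator; the sketch below is a hypothetical Python illustration of the class of bug (the function name and data shapes are invented, not Spark NLP's API): an entity-flushing helper that guards against an empty buffer instead of erroring.

```python
def flush_entity(buffer):
    """Collapse buffered (token, tag) pairs into one entity chunk.

    Hypothetical sketch: the guard on an empty buffer mirrors the kind
    of fix described in the changelog, where flushing with no buffered
    entities previously raised an error.
    """
    if not buffer:  # empty-entity case: nothing to flush, return no chunk
        return None
    entity = buffer[0][1]
    text = " ".join(token for token, _ in buffer)
    return {"entity": entity, "text": text}


print(flush_entity([]))  # no error on empty input
print(flush_entity([("John", "PER"), ("Snow", "PER")]))
```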

README.md

Lines changed: 16 additions & 16 deletions

@@ -41,7 +41,7 @@ Take a look at our official Spark NLP page: [http://nlp.johnsnowlabs.com/](http:
 
 ## Apache Spark Support
 
-Spark NLP *2.1.0* has been built on top of Apache Spark 2.4.3
+Spark NLP *2.1.1* has been built on top of Apache Spark 2.4.3
 
 Note that pre-build Spark NLP is not retrocompatible with older Spark 2.x.x, so models and environments might not work.
 
@@ -66,18 +66,18 @@ This library has been uploaded to the [spark-packages repository](https://spark-
 
 Benefit of spark-packages is that makes it available for both Scala-Java and Python
 
-To use the most recent version just add the `--packages JohnSnowLabs:spark-nlp:2.1.0` to you spark command
+To use the most recent version just add the `--packages JohnSnowLabs:spark-nlp:2.1.1` to you spark command
 
 ```sh
-spark-shell --packages JohnSnowLabs:spark-nlp:2.1.0
+spark-shell --packages JohnSnowLabs:spark-nlp:2.1.1
 ```
 
 ```sh
-pyspark --packages JohnSnowLabs:spark-nlp:2.1.0
+pyspark --packages JohnSnowLabs:spark-nlp:2.1.1
 ```
 
 ```sh
-spark-submit --packages JohnSnowLabs:spark-nlp:2.1.0
+spark-submit --packages JohnSnowLabs:spark-nlp:2.1.1
 ```
 
 This can also be used to create a SparkSession manually by using the `spark.jars.packages` option in both Python and Scala
@@ -145,7 +145,7 @@ Our package is deployed to maven central. In order to add this package as a depe
 <dependency>
     <groupId>com.johnsnowlabs.nlp</groupId>
     <artifactId>spark-nlp_2.11</artifactId>
-    <version>2.1.0</version>
+    <version>2.1.1</version>
 </dependency>
 ```
 
@@ -156,22 +156,22 @@ and
 <dependency>
     <groupId>com.johnsnowlabs.nlp</groupId>
     <artifactId>spark-nlp-ocr_2.11</artifactId>
-    <version>2.1.0</version>
+    <version>2.1.1</version>
 </dependency>
 ```
 
 ### SBT
 
 ```sbtshell
 // https://mvnrepository.com/artifact/com.johnsnowlabs.nlp/spark-nlp
-libraryDependencies += "com.johnsnowlabs.nlp" %% "spark-nlp" % "2.1.0"
+libraryDependencies += "com.johnsnowlabs.nlp" %% "spark-nlp" % "2.1.1"
 ```
 
 and
 
 ```sbtshell
 // https://mvnrepository.com/artifact/com.johnsnowlabs.nlp/spark-nlp-ocr
-libraryDependencies += "com.johnsnowlabs.nlp" %% "spark-nlp-ocr" % "2.1.0"
+libraryDependencies += "com.johnsnowlabs.nlp" %% "spark-nlp-ocr" % "2.1.1"
 ```
 
 Maven Central: [https://mvnrepository.com/artifact/com.johnsnowlabs.nlp](https://mvnrepository.com/artifact/com.johnsnowlabs.nlp)
@@ -187,7 +187,7 @@ If you installed pyspark through pip/conda, you can install `spark-nlp` through 
 Pip:
 
 ```bash
-pip install spark-nlp==2.1.0
+pip install spark-nlp==2.1.1
 ```
 
 Conda:
@@ -214,7 +214,7 @@ spark = SparkSession.builder \
     .master("local[4]")\
     .config("spark.driver.memory","8G")\
     .config("spark.driver.maxResultSize", "2G") \
-    .config("spark.jars.packages", "JohnSnowLabs:spark-nlp:2.1.0")\
+    .config("spark.jars.packages", "JohnSnowLabs:spark-nlp:2.1.1")\
     .config("spark.kryoserializer.buffer.max", "500m")\
     .getOrCreate()
 ```
@@ -247,7 +247,7 @@ Use either one of the following options
 * Add the following Maven Coordinates to the interpreter's library list
 
 ```bash
-com.johnsnowlabs.nlp:spark-nlp_2.11:2.1.0
+com.johnsnowlabs.nlp:spark-nlp_2.11:2.1.1
 ```
 
 * Add path to pre-built jar from [here](#pre-compiled-spark-nlp-and-spark-nlp-ocr) in the interpreter's library list making sure the jar is available to driver path
@@ -257,7 +257,7 @@ com.johnsnowlabs.nlp:spark-nlp_2.11:2.1.0
 Apart from previous step, install python module through pip
 
 ```bash
-pip install spark-nlp==2.1.0
+pip install spark-nlp==2.1.1
 ```
 
 Or you can install `spark-nlp` from inside Zeppelin by using Conda:
@@ -282,7 +282,7 @@ export PYSPARK_PYTHON=python3
 export PYSPARK_DRIVER_PYTHON=jupyter
 export PYSPARK_DRIVER_PYTHON_OPTS=notebook
 
-pyspark --packages JohnSnowLabs:spark-nlp:2.1.0
+pyspark --packages JohnSnowLabs:spark-nlp:2.1.1
 ```
 
 Alternatively, you can mix in using `--jars` option for pyspark + `pip install spark-nlp`
@@ -343,7 +343,7 @@ To include the OCR submodule in Spark NLP, you will need to add the following to
 
 ```
 --repositories http://repo.spring.io/plugins-release
---packages JohnSnowLabs:spark-nlp:2.1.0,com.johnsnowlabs.nlp:spark-nlp-ocr_2.11:2.1.0,javax.media.jai:com.springsource.javax.media.jai.core:1.1.3
+--packages JohnSnowLabs:spark-nlp:2.1.1,com.johnsnowlabs.nlp:spark-nlp-ocr_2.11:2.1.1,javax.media.jai:com.springsource.javax.media.jai.core:1.1.3
 ```
 
 This way you will download the extra dependencies needed by our OCR submodule. The Python SparkSession equivalent is
@@ -355,7 +355,7 @@ spark = SparkSession.builder \
     .config("spark.driver.memory", "6g") \
     .config("spark.executor.memory", "6g") \
    .config("spark.jars.repositories", "http://repo.spring.io/plugins-release") \
-    .config("spark.jars.packages", "JohnSnowLabs:spark-nlp:2.1.0,com.johnsnowlabs.nlp:spark-nlp-ocr_2.11:2.1.0,javax.media.jai:com.springsource.javax.media.jai.core:1.1.3") \
+    .config("spark.jars.packages", "JohnSnowLabs:spark-nlp:2.1.1,com.johnsnowlabs.nlp:spark-nlp-ocr_2.11:2.1.1,javax.media.jai:com.springsource.javax.media.jai.core:1.1.3") \
     .getOrCreate()
 ```

build.sbt

Lines changed: 3 additions & 3 deletions

@@ -16,7 +16,7 @@ if(is_gpu.equals("false")){
 
 organization:= "com.johnsnowlabs.nlp"
 
-version := "2.1.0"
+version := "2.1.1"
 
 scalaVersion in ThisBuild := scalaVer
 
@@ -200,7 +200,7 @@ assemblyMergeStrategy in assembly := {
 lazy val evaluation = (project in file("eval"))
   .settings(
     name := "spark-nlp-eval",
-    version := "2.1.0",
+    version := "2.1.1",
 
     assemblyMergeStrategy in assembly := evalMergeRules,
 
@@ -241,7 +241,7 @@ lazy val evaluation = (project in file("eval"))
 lazy val ocr = (project in file("ocr"))
   .settings(
     name := "spark-nlp-ocr",
-    version := "2.1.0",
+    version := "2.1.1",
 
     test in assembly := {},
 

docs/_layouts/landing.html

Lines changed: 5 additions & 5 deletions

@@ -49,22 +49,22 @@ <h1>{{ _section.title }}</h1>
 <div class="cell cell--12 cell--lg-12" style="text-align: left; background-color: #2d2d2d; padding: 10px">
 {% highlight bash %}
 # Install Spark NLP from PyPI
-$ pip install spark-nlp==2.1.0
+$ pip install spark-nlp==2.1.1
 
 # Install Spark NLP from Anaconda/Conda
 $ conda install -c johnsnowlabs spark-nlp
 
 # Load Spark NLP with Spark Shell
-$ spark-shell --packages JohnSnowLabs:spark-nlp:2.1.0
+$ spark-shell --packages JohnSnowLabs:spark-nlp:2.1.1
 
 # Load Spark NLP with PySpark
-$ pyspark --packages JohnSnowLabs:spark-nlp:2.1.0
+$ pyspark --packages JohnSnowLabs:spark-nlp:2.1.1
 
 # Load Spark NLP with Spark Submit
-$ spark-submit --packages JohnSnowLabs:spark-nlp:2.1.0
+$ spark-submit --packages JohnSnowLabs:spark-nlp:2.1.1
 
 # Load Spark NLP as external JAR after compiling and building Spark NLP by `sbt assembly`
-$ spark-shell --jar spark-nlp-assembly-2.1.0
+$ spark-shell --jar spark-nlp-assembly-2.1.1
 {% endhighlight %}
 </div>
 </div>

docs/en/install.md

Lines changed: 7 additions & 7 deletions

@@ -13,7 +13,7 @@ modify_date: "2019-05-16"
 If you installed pyspark through pip, you can install `spark-nlp` through pip as well.
 
 ```bash
-pip install spark-nlp==2.1.0
+pip install spark-nlp==2.1.1
 ```
 
 PyPI [spark-nlp package](https://pypi.org/project/spark-nlp/)
@@ -36,7 +36,7 @@ spark = SparkSession.builder \
     .master("local[*]")\
     .config("spark.driver.memory","8G")\
     .config("spark.driver.maxResultSize", "2G") \
-    .config("spark.jars.packages", "JohnSnowLabs:spark-nlp:2.1.0")\
+    .config("spark.jars.packages", "JohnSnowLabs:spark-nlp:2.1.1")\
     .config("spark.kryoserializer.buffer.max", "500m")\
     .getOrCreate()
 ```
@@ -97,7 +97,7 @@ Our package is deployed to maven central. In order to add this package as a depe
 <dependency>
     <groupId>com.johnsnowlabs.nlp</groupId>
     <artifactId>spark-nlp_2.11</artifactId>
-    <version>2.1.0</version>
+    <version>2.1.1</version>
 </dependency>
 ```
 
@@ -108,22 +108,22 @@ and
 <dependency>
     <groupId>com.johnsnowlabs.nlp</groupId>
     <artifactId>spark-nlp-ocr_2.11</artifactId>
-    <version>2.1.0</version>
+    <version>2.1.1</version>
 </dependency>
 ```
 
 ### SBT
 
 ```sbtshell
 // https://mvnrepository.com/artifact/com.johnsnowlabs.nlp/spark-nlp
-libraryDependencies += "com.johnsnowlabs.nlp" %% "spark-nlp" % "2.1.0"
+libraryDependencies += "com.johnsnowlabs.nlp" %% "spark-nlp" % "2.1.1"
 ```
 
 and
 
 ```sbtshell
 // https://mvnrepository.com/artifact/com.johnsnowlabs.nlp/spark-nlp-ocr
-libraryDependencies += "com.johnsnowlabs.nlp" %% "spark-nlp-ocr" % "2.1.0"
+libraryDependencies += "com.johnsnowlabs.nlp" %% "spark-nlp-ocr" % "2.1.1"
 ```
 
 Maven Central: [https://mvnrepository.com/artifact/com.johnsnowlabs.nlp](https://mvnrepository.com/artifact/com.johnsnowlabs.nlp)
@@ -151,7 +151,7 @@ Note: You can import these notebooks by using their URLs.
 4- From the Source drop-down menu, select **Maven Coordinate:**
 ![Databricks](https://databricks.com/wp-content/uploads/2015/07/select-maven-1024x711.png)
 
-5- Now, all available **Spark Packages** are at your fingertips! Just search for **JohnSnowLabs:spark-nlp:version** where **version** stands for the library version such as: `1.8.4` or `2.1.0`
+5- Now, all available **Spark Packages** are at your fingertips! Just search for **JohnSnowLabs:spark-nlp:version** where **version** stands for the library version such as: `1.8.4` or `2.1.1`
 ![Databricks](https://databricks.com/wp-content/uploads/2015/07/browser-1024x548.png)
 
 6- Select **spark-nlp** package and we are good to go!

docs/en/quickstart.md

Lines changed: 9 additions & 9 deletions

@@ -29,17 +29,17 @@ Spark NLP is built on top of **Apache Spark 2.4.0** and such is the **only** sup
 To start using the library, execute any of the following lines depending on your desired use case:
 
 ```bash
-spark-shell --packages JohnSnowLabs:spark-nlp:2.1.0
-pyspark --packages JohnSnowLabs:spark-nlp:2.1.0
-spark-submit --packages JohnSnowLabs:spark-nlp:2.1.0
+spark-shell --packages JohnSnowLabs:spark-nlp:2.1.1
+pyspark --packages JohnSnowLabs:spark-nlp:2.1.1
+spark-submit --packages JohnSnowLabs:spark-nlp:2.1.1
 ```
 
 ### **Straight forward Python on jupyter notebook**
 
 Use pip to install (after you pip installed numpy and pyspark)
 
 ```bash
-pip install spark-nlp==2.1.0
+pip install spark-nlp==2.1.1
 jupyter notebook
 ```
 
@@ -60,7 +60,7 @@ spark = SparkSession.builder \
     .appName('OCR Eval') \
     .config("spark.driver.memory", "6g") \
     .config("spark.executor.memory", "6g") \
-    .config("spark.jars.packages", "JohnSnowLabs:spark-nlp:2.1.0") \
+    .config("spark.jars.packages", "JohnSnowLabs:spark-nlp:2.1.1") \
     .getOrCreate()
 ```
 
@@ -69,13 +69,13 @@ spark = SparkSession.builder \
 Add the following maven coordinates in the dependency configuration page:
 
 ```bash
-com.johnsnowlabs.nlp:spark-nlp_2.11:2.1.0
+com.johnsnowlabs.nlp:spark-nlp_2.11:2.1.1
 ```
 
 For Python in **Apache Zeppelin** you may need to setup _**SPARK_SUBMIT_OPTIONS**_ utilizing --packages instruction shown above like this
 
 ```bash
-export SPARK_SUBMIT_OPTIONS="--packages JohnSnowLabs:spark-nlp:2.1.0"
+export SPARK_SUBMIT_OPTIONS="--packages JohnSnowLabs:spark-nlp:2.1.1"
 ```
 
 ### **Python Jupyter Notebook with PySpark**
@@ -85,7 +85,7 @@ export SPARK_HOME=/path/to/your/spark/folder
 export PYSPARK_DRIVER_PYTHON=jupyter
 export PYSPARK_DRIVER_PYTHON_OPTS=notebook
 
-pyspark --packages JohnSnowLabs:spark-nlp:2.1.0
+pyspark --packages JohnSnowLabs:spark-nlp:2.1.1
 ```
 
 ### S3 based standalone cluster (No Hadoop)
@@ -297,7 +297,7 @@ lightPipeline.annotate("Hello world, please annotate my text")
 Spark NLP OCR Module is not included within Spark NLP. It is not an annotator and not an extension to Spark ML. You can include it with the following coordinates for Maven:
 
 ```bash
-com.johnsnowlabs.nlp:spark-nlp-ocr_2.11:2.1.0
+com.johnsnowlabs.nlp:spark-nlp-ocr_2.11:2.1.1
 ```
 
 ### Creating Spark datasets from PDF (To be used with Spark NLP)

python/setup.py

Lines changed: 1 addition & 1 deletion

@@ -40,7 +40,7 @@
     # For a discussion on single-sourcing the version across setup.py and the
     # project code, see
     # https://packaging.python.org/en/latest/single_source_version.html
-    version='2.1.0',  # Required
+    version='2.1.1',  # Required
 
     # This is a one-line description or tagline of what your project does. This
     # corresponds to the "Summary" metadata field:
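The setup.py comment above points at single-sourcing the version string across setup.py and the package code. A minimal sketch of that pattern (the variable contents are illustrative, not Spark NLP's actual layout): keep the version in one `__version__` string and let setup.py extract it, so a release bump like this commit edits a single line.

```python
import re

# Illustrative contents of a package's __init__.py holding the single
# source of truth for the version string:
VERSION_FILE = '__version__ = "2.1.1"\n'

def extract_version(text):
    """Pull the version out of a module's source, as setup.py might."""
    match = re.search(r'__version__\s*=\s*"([^"]+)"', text)
    if match is None:
        raise RuntimeError("version string not found")
    return match.group(1)

print(extract_version(VERSION_FILE))
```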

python/sparknlp/__init__.py

Lines changed: 3 additions & 3 deletions

@@ -40,14 +40,14 @@ def start(include_ocr=False):
 
     if include_ocr:
         builder \
-            .config("spark.jars.packages", "JohnSnowLabs:spark-nlp:2.1.0,com.johnsnowlabs.nlp:spark-nlp-ocr_2.11:2.1.0,javax.media.jai:com.springsource.javax.media.jai.core:1.1.3") \
+            .config("spark.jars.packages", "JohnSnowLabs:spark-nlp:2.1.1,com.johnsnowlabs.nlp:spark-nlp-ocr_2.11:2.1.1,javax.media.jai:com.springsource.javax.media.jai.core:1.1.3") \
             .config("spark.jars.repositories", "http://repo.spring.io/plugins-release")
 
     else:
-        builder.config("spark.jars.packages", "JohnSnowLabs:spark-nlp:2.1.0") \
+        builder.config("spark.jars.packages", "JohnSnowLabs:spark-nlp:2.1.1") \
 
     return builder.getOrCreate()
 
 
 def version():
-    print('2.1.0')
+    print('2.1.1')
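The `start(include_ocr=False)` helper changed above does one thing with the version bump: it selects the `spark.jars.packages` coordinate list depending on whether the OCR submodule is requested. A hedged sketch of that selection logic in plain Python (the helper name is invented; the coordinate strings are the ones from the diff):

```python
def spark_nlp_packages(include_ocr=False, version="2.1.1"):
    """Return the spark.jars.packages value that start() would configure.

    Illustrative helper, not part of the library: it only reproduces the
    coordinate selection seen in the diff above.
    """
    base = "JohnSnowLabs:spark-nlp:" + version
    if include_ocr:
        return ",".join([
            base,
            "com.johnsnowlabs.nlp:spark-nlp-ocr_2.11:" + version,
            "javax.media.jai:com.springsource.javax.media.jai.core:1.1.3",
        ])
    return base

print(spark_nlp_packages())
print(spark_nlp_packages(include_ocr=True))
```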

src/main/scala/com/johnsnowlabs/nlp/SparkNLP.scala

Lines changed: 3 additions & 3 deletions

@@ -4,7 +4,7 @@ import org.apache.spark.sql.SparkSession
 
 object SparkNLP {
 
-  val currentVersion = "2.1.0"
+  val currentVersion = "2.1.1"
 
   def start(includeOcr: Boolean = false): SparkSession = {
     val build = SparkSession.builder()
@@ -15,11 +15,11 @@ object SparkNLP {
 
     if (includeOcr) {
       build
-        .config("spark.jars.packages", "JohnSnowLabs:spark-nlp:2.1.0,com.johnsnowlabs.nlp:spark-nlp-ocr_2.11:2.1.0,javax.media.jai:com.springsource.javax.media.jai.core:1.1.3")
+        .config("spark.jars.packages", "JohnSnowLabs:spark-nlp:2.1.1,com.johnsnowlabs.nlp:spark-nlp-ocr_2.11:2.1.1,javax.media.jai:com.springsource.javax.media.jai.core:1.1.3")
         .config("spark.jars.repositories", "http://repo.spring.io/plugins-release")
     } else {
       build
-        .config("spark.jars.packages", "JohnSnowLabs:spark-nlp:2.1.0")
+        .config("spark.jars.packages", "JohnSnowLabs:spark-nlp:2.1.1")
     }
 
     build.getOrCreate()

src/main/scala/com/johnsnowlabs/nlp/annotators/Tokenizer.scala

Lines changed: 0 additions & 6 deletions

@@ -96,12 +96,6 @@ class Tokenizer(override val uid: String) extends AnnotatorApproach[TokenizerMod
     $(splitChars)
   }
 
-  setDefault(
-    targetPattern -> "\\S+",
-    contextChars -> Array(".", ",", ";", ":", "!", "?", "*", "-", "(", ")", "\"", "'"),
-    caseSensitiveExceptions -> true
-  )
-
  def buildRuleFactory: RuleFactory = {
    val rules = ArrayBuffer.empty[String]
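The removed `setDefault` block set `targetPattern` to `\S+`, i.e. candidate tokens are maximal runs of non-whitespace, with a list of context characters handled by later rules. As a rough Python illustration of what that default target pattern alone matches (this is not the Tokenizer's full rule engine, which also applies context and split characters):

```python
import re

# Default target pattern from the removed setDefault block: candidate
# tokens are maximal runs of non-whitespace; punctuation such as , and !
# stays attached here and is split off by later rules in the annotator.
TARGET_PATTERN = r"\S+"

def candidate_tokens(text):
    return re.findall(TARGET_PATTERN, text)

print(candidate_tokens("Hello, world!"))  # ['Hello,', 'world!']
```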
