Releases: explosion/spaCy
v2.0.0: Neural networks, 13 new models for 7+ languages, better training, custom pipelines, Pickle & lots of API improvements
We're very excited to finally introduce spaCy v2.0. The new version gets spaCy up to date with the latest deep learning technologies and makes it much easier to run spaCy in scalable cloud computing workflows. We've fixed over 60 bugs (every open bug!), including several long-standing issues, trained 13 neural network models for 7+ languages and added alpha tokenization support for 8 new languages. We also re-wrote almost all of the usage guides, API docs and code examples.
pip install -U spacy
conda install -c conda-forge spacy

✨ Major features and improvements
- NEW: Convolutional neural network models for English, German, Spanish, Portuguese, French, Italian, Dutch and multi-language NER. Substantial improvements in accuracy over the v1.x models.
 - NEW: `Vectors` class for managing word vectors, plus trainable document vectors and contextual similarity via convolutional neural networks.
 - NEW: Custom processing pipeline components and extension attributes on the `Doc`, `Token` and `Span` via `Doc._`, `Token._` and `Span._`.
 - NEW: Built-in, trainable text classification pipeline component.
 - NEW: Built-in displaCy visualizers for dependencies and entities, with Jupyter notebook support.
 - NEW: Alpha tokenization for Danish, Polish, Indonesian, Thai, Hindi, Irish, Turkish, Croatian and Romanian.
 - Improved language data, support for lazy loading and simple, lookup-based lemmatization for English, German, French, Spanish, Italian, Hungarian, Portuguese and Swedish.
 - Support for multi-language models and new `MultiLanguage` class (`xx`).
 - Strings are now resolved to hash values, instead of mapped to integer IDs. This means that the string-to-int mapping no longer depends on the vocabulary state.
 - Improved and consistent saving, loading and serialization across objects, plus Pickle support.
 - NEW: `PhraseMatcher` for matching large terminology lists as `Doc` objects, plus revised `Matcher` API.
 - New CLI commands `validate`, `vocab` and `evaluate`, plus entry point for `spacy` command to use instead of `python -m spacy`.
 - Experimental GPU support via Chainer's CuPy module.
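To illustrate the idea behind the new `._` extension attributes, here is a rough pure-Python sketch of how a shared extension registry with per-object values and defaults can work. This is an illustration of the concept only, not spaCy's actual implementation, and all names in it are hypothetical:

```python
class Underscore:
    """Holds extension values for one object, with shared defaults."""
    extensions = {}  # name -> default value, shared registry

    def __init__(self):
        object.__setattr__(self, '_values', {})

    @classmethod
    def set_extension(cls, name, default=None):
        cls.extensions[name] = default

    def __getattr__(self, name):
        # Called only for attributes not found normally.
        if name in Underscore.extensions:
            return self._values.get(name, Underscore.extensions[name])
        raise AttributeError(name)

    def __setattr__(self, name, value):
        self._values[name] = value


class Token:
    """Stand-in for a spaCy Token, exposing a ._ namespace."""
    def __init__(self, text):
        self.text = text
        self._ = Underscore()


Underscore.set_extension('is_greeting', default=False)
token = Token('hello')
token._.is_greeting = True   # set a custom attribute on one token
```

Other tokens fall back to the registered default, so extensions never clobber spaCy's own attributes and stay clearly namespaced.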
 
🔮 Models
spaCy v2.0 comes with 13 new convolutional neural network models for 7+ languages. The models have been designed and implemented from scratch specifically for spaCy. A novel bloom embedding strategy with subword features is used to support huge vocabularies in tiny tables.
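The bloom embedding trick can be sketched in a few lines: instead of reserving one row per vocabulary item, each word is hashed with several different seeds into a small shared table, and the selected rows are summed, so distinct words get distinct combinations of shared rows. The following is an illustrative sketch only, not spaCy's actual implementation (which also mixes in subword features such as prefixes and suffixes), and the table contents and sizes here are dummy values:

```python
import hashlib

TABLE_ROWS = 1000   # tiny table compared to a vocabulary of millions
DIM = 4             # embedding width, kept small for illustration
NUM_HASHES = 3      # each word is hashed with several different seeds

# A small deterministic dummy table: one DIM-wide row per slot.
table = [[r + c / 10.0 for c in range(DIM)] for r in range(TABLE_ROWS)]

def bucket(word, seed):
    """Deterministically hash word+seed into one row of the table."""
    digest = hashlib.md5(('%d:%s' % (seed, word)).encode('utf8')).hexdigest()
    return int(digest, 16) % TABLE_ROWS

def embed(word):
    """Sum one table row per seed. Collisions on a single hash are
    common, but colliding on all NUM_HASHES at once is rare, so the
    combination of rows is effectively unique per word."""
    rows = [table[bucket(word, seed)] for seed in range(NUM_HASHES)]
    return [sum(vals) for vals in zip(*rows)]
```

Because the vector depends only on the word's hashes, the table size is fixed no matter how large the vocabulary grows.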
All core models include part-of-speech tags, dependency labels and named entities. Small models include only context-specific token vectors, while medium-sized and large models ship with word vectors. For more details, see the models directory or try our new model comparison tool.
| Name | Language | Features | Size |
|---|---|---|---|
| `en_core_web_sm` | English | Tagger, parser, entities | 35 MB |
| `en_core_web_md` | English | Tagger, parser, entities, vectors | 115 MB |
| `en_core_web_lg` | English | Tagger, parser, entities, vectors | 812 MB |
| `en_vectors_web_lg` | English | Vectors | 627 MB |
| `de_core_news_sm` | German | Tagger, parser, entities | 36 MB |
| `es_core_news_sm` | Spanish | Tagger, parser, entities | 35 MB |
| `es_core_news_md` | Spanish | Tagger, parser, entities, vectors | 93 MB |
| `pt_core_news_sm` | Portuguese | Tagger, parser, entities | 36 MB |
| `fr_core_news_sm` | French | Tagger, parser, entities | 37 MB |
| `fr_core_news_md` | French | Tagger, parser, entities, vectors | 106 MB |
| `it_core_news_sm` | Italian | Tagger, parser, entities | 34 MB |
| `nl_core_news_sm` | Dutch | Tagger, parser, entities | 34 MB |
| `xx_ent_wiki_sm` | Multi-language | Entities | 33 MB |
You can download a model by using its name or shortcut. To load a model, use `spacy.load()`, or import it as a module and call its `load()` method:

spacy download en_core_web_sm

import spacy
nlp = spacy.load('en_core_web_sm')

import en_core_web_sm
nlp = en_core_web_sm.load()

📈 Benchmarks
spaCy v2.0's new neural network models bring significant improvements in accuracy, especially for English Named Entity Recognition. The new en_core_web_lg model makes about 25% fewer mistakes than the corresponding v1.x model and is within 1% of the current state-of-the-art (Strubell et al., 2017). The v2.0 models are also cheaper to run at scale, as they require under 1 GB of memory per process.
English
| Model | spaCy | Type | UAS | LAS | NER F | POS | Size | 
|---|---|---|---|---|---|---|---|
| `en_core_web_sm-2.0.0` | v2.x | neural | 91.7 | 89.8 | 85.3 | 97.0 | 35 MB |
| `en_core_web_md-2.0.0` | v2.x | neural | 91.7 | 89.8 | 85.9 | 97.1 | 115 MB |
| `en_core_web_lg-2.0.0` | v2.x | neural | 91.9 | 90.1 | 85.9 | 97.2 | 812 MB |
| `en_core_web_sm-1.1.0` | v1.x | linear | 86.6 | 83.8 | 78.5 | 96.6 | 50 MB |
| `en_core_web_md-1.2.1` | v1.x | linear | 90.6 | 88.5 | 81.4 | 96.7 | 1 GB |
Spanish
| Model | spaCy | Type | UAS | LAS | NER F | POS | Size | 
|---|---|---|---|---|---|---|---|
| `es_core_news_sm-2.0.0` | v2.x | neural | 89.8 | 86.8 | 88.7 | 96.9 | 35 MB |
| `es_core_news_md-2.0.0` | v2.x | neural | 90.2 | 87.2 | 89.0 | 97.8 | 93 MB |
| `es_core_web_md-1.1.0` | v1.x | linear | 87.5 | n/a | 94.2 | 96.7 | 377 MB |
For more details on the other models, see the models directory and model comparison tool.
🔴 Bug fixes
- Fix issue #125, #228, #299, #377, #460, #606, #930: Add full Pickle support.
 - Fix issue #152, #264, #322, #343, #437, #514, #636, #785, #927, #985, #992, #1011: Fix and improve serialization and deserialization of `Doc` objects.
 - Fix issue #285, #1225: Fix memory growth problem when streaming data.
 - Fix issue #512: Improve parser to prevent it from returning two `ROOT` objects.
 - Fix issue #519, #611, #725: Retrain German model with better tokenized input.
 - Fix issue #524: Improve parser and handling of noun chunks.
 - Fix issue #621: Prevent double spaces from changing the parser result.
 - Fix issue #664, #999, #1026: Fix bugs that would prevent loading trained NER models.
 - Fix issue #671, #809, #856: Fix importing and loading of word vectors.
 - Fix issue #683, #1052, #1442: Don't require tag maps to provide `SP` tag.
 - Fix issue #753: Resolve bug that would tag OOV items as personal pronouns.
 - Fix issue #860, #956, #1085, #1381: Allow custom attribute extensions on `Doc`, `Token` and `Span`.
 - Fix issue #905, #954, #1021, #1040, #1042: Improve parsing model and allow faster accuracy updates.
 - Fix issue #933, #977, #1406: Update online demos.
 - Fix issue #995: Improve punctuation rules for Hebrew and other non-Latin languages.
 - Fix issue #1008: `train` command finally works correctly if used without `dev_data`.
 - Fix issue #1012: Improve word vectors documentation.
 - Fix issue #1043: Improve NER models and allow faster accuracy updates.
 - Fix issue #1044: Fix bugs in French model and improve performance.
 - Fix issue #1051: Improve error messages if functionality needs a model to be installed.
 - Fix issue #1071: Correct typo of "whereve" in English tokenizer exceptions.
 - Fix issue #1088: Emoji are now split into separate tokens wherever possible.
 - Fix issue #1240: Allow merging `Span`s without keyword arguments.
 - Fix issue #1243: Resolve undefined names in deprecated functions.
 - Fix issue #1250: Fix caching bug that would cause tokenizer to ignore special case rules after first parse.
 - Fix issue #1257: Ensure the comparison operator `==` works as expected on tokens.
 - Fix issue #1291: Improve documentation of training format.
 - Fix issue #1336: Fix bug that caused inconsistencies in NER results.
 - Fix issue #1375: Make sure `Token.nbor` raises `IndexError` correctly.
 - Fix issue #1450: Fix error when `OP` quantifier `"*"` ends the match pattern.
 - Fix issue #1452: Fix bug that would mutate the original text.
 
📖 Documentation and examples
- NEW: Completely rewritten, reorganised and redesigned [usage](http...
 
v1.10.0: Alpha support for Thai & Russian, plus improvements and bug fixes
⚠️ Important note: This is a bridge release that gets the current state of the v1.x branch published. Stay tuned for v2.0.
✨ Major features and improvements
- NEW: Alpha tokenization support for Thai and Russian.
 - NEW: Alpha support for Japanese part-of-speech tagging.
 - NEW: Dependency pattern-matching algorithm (see #1120).
 - Add support for getting a lowest common ancestor matrix via `Doc.get_lca_matrix()`.
 - Improve capturing of English noun chunks.
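A lowest common ancestor matrix relates every pair of tokens in the parse tree. The core computation can be sketched with plain parent pointers (illustrative only; spaCy's `Doc.get_lca_matrix()` returns a full matrix of token indices, with -1 where tokens share no ancestor):

```python
def ancestors(i, heads):
    """Yield token i and its chain of ancestors, given head (parent)
    indices. A token whose head is itself is the root."""
    seen = set()
    while i not in seen:
        seen.add(i)
        yield i
        i = heads[i]

def lca(i, j, heads):
    """Index of the lowest common ancestor of tokens i and j, or -1.
    The first ancestor of i (walking upward) that is also an ancestor
    of j is by construction the lowest one."""
    anc_j = set(ancestors(j, heads))
    for node in ancestors(i, heads):
        if node in anc_j:
            return node
    return -1

# Toy parse of "She ate the pizza": heads[i] is the parent of token i,
# and the root ("ate", index 1) points at itself.
heads = [1, 1, 3, 1]
```

With this toy tree, `lca(0, 3, heads)` is 1 ("ate") and `lca(2, 3, heads)` is 3 ("pizza"), since "pizza" heads "the".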
 
🔴 Bug fixes
- Fix issue #1078: Simplify URL pattern.
 - Fix issue #1174: Fix NER model loading bug and make sure JSON keys are loaded as strings.
 - Fix issue #1291: Document correct JSON format for training.
 - Fix issue #1292: Fix error when adding custom infix rules.
 - Fix issue #1387: Ensure that lemmatizer respects exception rules.
 - Fix issue #1410: Support single value for attribute list in `Doc.to_scalar` and `Doc.to_array`.
📖 Documentation and examples
- Document correct JSON format for training.
 - Fix various typos and inconsistencies.
 
👥 Contributors
Thanks to @raphael0202, @gideonite, @delirious-lettuce, @polm, @kevinmarsh, @IamJeffG, @Vimos, @ericzhao28, @galaxyh, @hscspring, @wannaphongcom, @Wellan89, @kokes, @mdcclv, @ameyuuno, @ramananbalakrishnan, @Demfier, @johnhaley81, @mayukh18 and @jnothman for the pull requests and contributions.
v1.9.0: Spanish model, alpha support for Norwegian & Japanese, and bug fixes
Thanks to all of you for 5,000 stars on GitHub, the valuable feedback in the user survey and testing spaCy v2.0 alpha. We're working hard on getting the new version ready and can't wait to release it. In the meantime, here's a new release for the 1.x branch that fixes a variety of outstanding bugs and adds capabilities for new languages.
💌 P.S.: If you haven't gotten your hands on a set of spaCy stickers yet, you can still do so – send us a DM with your address on Twitter or Gitter, and we'll mail you some!
✨ Major features and improvements
- NEW: The first official Spanish model (377 MB) including vocab, syntax, entities and word vectors. Thanks to the amazing folks at recogn.ai for the collaboration!
 
python -m spacy download es

nlp = spacy.load('es')
doc = nlp(u'Esto es una frase.')

- NEW: Alpha tokenization for Norwegian Bokmål and Japanese (via Janome).
 - NEW: Allow dropout training for `Parser` and `EntityRecognizer`, using the `drop` keyword argument to the `update()` method.
 - NEW: Glossary for POS, dependency and NER annotation scheme via `spacy.explain()`. For example, `spacy.explain('NORP')` will return "Nationalities or religious or political groups".
 - Improve language data for Dutch, French and Spanish.
 - Add `Language.parse_tree` method to generate POS tree for all sentences in a `Doc`.
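At its core the glossary is a lookup table from tag, label and dependency names to human-readable descriptions; a trimmed-down sketch (only the 'NORP' entry is quoted from these release notes — the other entries and the fallback behaviour are illustrative):

```python
# Trimmed-down sketch of a spacy.explain()-style glossary lookup.
GLOSSARY = {
    'NORP': 'Nationalities or religious or political groups',
    'nsubj': 'nominal subject',   # illustrative dependency label entry
    'NOUN': 'noun',               # illustrative coarse POS tag entry
}

def explain(term):
    """Return the description for a tag/label/dependency, or None."""
    return GLOSSARY.get(term)
```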
🔴 Bug fixes
- Fix issue #1031: Close gaps in `Lexeme` API.
 - Fix issue #1034: Add annotation scheme glossary and `spacy.explain()`.
 - Fix issue #1051: Improved error messaging when trying to load a non-existing model.
 - Fix issue #1052: Add missing `SP` symbol to tag map.
 - Fix issue #1061: Add `flush_cache` method to tokenizer.
 - Fix issue #1069: Fix `Doc.sents` iterator when customised with generator.
 - Fix issue #1099, #1143: Improve documentation on models in `requirements.txt`.
 - Fix issue #1137: Use lower min version for `requests` dependency.
 - Fix issue #1207: Fix `Span.noun_chunks`.
 - Fix issue with `six` and its dependencies that occasionally caused spaCy to fail.
 - Fix typo in `package` command that caused error when printing error messages.
📖 Documentation and examples
- Fix various typos and inconsistencies.
 - NEW: spaCy 101 guide for v2.0: all important concepts, explained with examples and illustrations. Note that some of the behaviour and examples are specific to v2.0+ – but the NLP basics are relevant independent of the spaCy version you're using.
 
👥 Contributors
Thanks to @kengz, @luvogels, @ferdous-al-imran, @uetchy, @akYoung, @pasupulaphani, @dvsrepo, @raphael0202, @yuvalpinter, @frascuchon, @kootenpv, @oroszgy, @bartbroere, @ianmobbs, @garfieldnate, @polm, @callumkift, @swierh, @val314159, @lgenerknol and @jsparedes for the contributions!
v2.0.0 alpha: Neural network models, Pickle, better training & lots of API improvements
 Last update: 2.0.0rc2, 2017-11-07
This is an alpha pre-release of spaCy v2.0.0 and available on pip as spacy-nightly. It's not intended for production use. The alpha documentation is available at alpha.spacy.io. Please note that the docs reflect the library's intended state on release, not the current state of the implementation. For bug reports, feedback and questions, see the spaCy v2.0.0 alpha thread.
Before installing v2.0.0 alpha, we recommend setting up a clean environment.
pip install spacy-nightly

The models are still under development and will keep improving. For more details, see the benchmarks below. There will also be additional models for German, French and Spanish.
| Name | Lang | Capabilities | Size | spaCy | Info | 
|---|---|---|---|---|---|
| `en_core_web_sm-2.0.0a4` | en | Parser, Tagger, NER | 42 MB | >=2.0.0a14 | ℹ️ |
| `en_vectors_web_lg-2.0.0a0` | en | Vectors (GloVe) | 627 MB | >=2.0.0a10 | ℹ️ |
| `xx_ent_wiki_sm-2.0.0a0` | multi | NER | 12 MB | <=2.0.0a9 | ℹ️ |
You can download a model by using its name or shortcut. To load a model, use spaCy's loader, e.g. `nlp = spacy.load('en_core_web_sm')`, or import it as a module (`import en_core_web_sm`) and call its `load()` method, e.g. `nlp = en_core_web_sm.load()`.
python -m spacy download en_core_web_sm

📈 Benchmarks
The evaluation was conducted on raw text with no gold standard information. Speed and accuracy are currently comparable to the v1.x models: speed on CPU is slightly lower, while accuracy is slightly higher. We expect performance to improve quickly between now and the release date, as we run more experiments and optimise the implementation.
| Model | spaCy | Type | UAS | LAS | NER F | POS | Words/s | 
|---|---|---|---|---|---|---|---|
| `en_core_web_sm-2.0.0a4` | v2.x | neural | 91.9 | 90.0 | 85.0 | 97.1 | 10,000 |
| `en_core_web_sm-2.0.0a3` | v2.x | neural | 91.2 | 89.2 | 85.3 | 96.9 | 10,000 |
| `en_core_web_sm-2.0.0a2` | v2.x | neural | 91.5 | 89.5 | 84.7 | 96.9 | 10,000 |
| `en_core_web_sm-1.1.0` | v1.x | linear | 86.6 | 83.8 | 78.5 | 96.6 | 25,700 |
| `en_core_web_md-1.2.1` | v1.x | linear | 90.6 | 88.5 | 81.4 | 96.7 | 18,800 |
✨ Major features and improvements
- NEW: Neural network model for English (comparable performance to the >1GB v1.x models) and multi-language NER (still experimental).
 - NEW: GPU support via Chainer's CuPy module.
 - NEW: Strings are now resolved to hash values, instead of mapped to integer IDs. This means that the string-to-int mapping no longer depends on the vocabulary state.
 - NEW: Trainable document vectors and contextual similarity via convolutional neural networks.
 - NEW: Built-in text classification component.
 - NEW: Built-in displaCy visualizers with Jupyter notebook support.
 - NEW: Alpha tokenization for Danish, Polish and Indonesian.
 - Improved language data, support for lazy loading and simple, lookup-based lemmatization for English, German, French, Spanish, Italian, Hungarian, Portuguese and Swedish.
 - Improved language processing pipelines and support for custom, model-specific components.
 - Improved and consistent saving, loading and serialization across objects, plus Pickle support.
 - Revised matcher API to make it easier to add and manage patterns and callbacks in one step.
 - Support for multi-language models and new `MultiLanguage` class (`xx`).
 - Entry point for `spacy` command to use instead of `python -m spacy`.
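The effect of hash-based strings can be sketched with a minimal store: the ID of a string is a function of the string alone, so two independent stores agree on IDs without sharing any state, but a store must still have seen a string to resolve a hash back to it. A pure-Python illustration using `hashlib` (spaCy itself uses MurmurHash, and this class is a simplification, not spaCy's actual `StringStore`):

```python
import hashlib

def hash_string(s):
    """Deterministic 64-bit hash of a string (stand-in for MurmurHash)."""
    return int(hashlib.md5(s.encode('utf8')).hexdigest()[:16], 16)

class StringStore:
    """Maps strings <-> hash IDs. The ID depends only on the string,
    never on what else the store has seen."""
    def __init__(self):
        self._by_hash = {}

    def add(self, s):
        h = hash_string(s)
        self._by_hash[h] = s
        return h

    def __getitem__(self, key):
        if isinstance(key, str):      # string -> hash: no state needed
            return hash_string(key)
        return self._by_hash[key]     # hash -> string: must have seen it

store_a, store_b = StringStore(), StringStore()
h = store_a.add('coffee')
# Two independent stores agree on the ID without synchronising:
assert store_b['coffee'] == h
# ...but store_b cannot resolve the hash back, because it never saw
# the string: store_b[h] would raise KeyError.
```

This is why objects still need access to the same `Vocab` to turn hashes back into strings, even though the hashes themselves are vocabulary-independent.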
🚧 Work in progress (not yet implemented)
- NEW: Neural network models for German, French and Spanish.
 - NEW: `Binder`, a container class for serializing collections of `Doc` objects.
🔴 Bug fixes
- Fix issue #125, #228, #299, #377, #460, #606, #930: Add full Pickle support.
 - Fix issue #152, #264, #322, #343, #437, #514, #636, #785, #927, #985, #992, #1011: Fix and improve serialization and deserialization of `Doc` objects.
 - Fix issue #512: Improve parser to prevent it from returning two `ROOT` objects.
 - Fix issue #524: Improve parser and handling of noun chunks.
 - Fix issue #621: Prevent double spaces from changing the parser result.
 - Fix issue #664, #999, #1026: Fix bugs that would prevent loading trained NER models.
 - Fix issue #671, #809, #856: Fix importing and loading of word vectors.
 - Fix issue #753: Resolve bug that would tag OOV items as personal pronouns.
 - Fix issue #905, #1021, #1042: Improve parsing model and allow faster accuracy updates.
 - Fix issue #995: Improve punctuation rules for Hebrew and other non-latin languages.
 - Fix issue #1008: `train` command finally works correctly if used without `dev_data`.
 - Fix issue #1012: Improve documentation on model saving and loading.
 - Fix issue #1043: Improve NER models and allow faster accuracy updates.
 - Fix issue #1051: Improve error messages if functionality needs a model to be installed.
 - Fix issue #1071: Correct typo of "whereve" in English tokenizer exceptions.
 - Fix issue #1088: Emoji are now split into separate tokens wherever possible.
 
📖 Documentation and examples
- NEW: spaCy 101 guide with simple explanations and illustrations of the most important concepts and an overview of spaCy's features and capabilities.
 - NEW: Visualizing spaCy guide on how to use the built-in `displacy` module.
 - NEW: API docs for top-level functions, `spacy.displacy`, `spacy.util` and `spacy.gold.GoldCorpus`.
 - NEW: Full code example for text classification (sentiment analysis).
 - Improved rule-based matching guide with examples for matching entities and phone numbers, and social media analysis.
 - Improved processing pipelines guide with examples for custom sentence segmentation logic and hooking in a sentiment analysis model.
 - Re-wrote all API and usage docs and added more examples.
 
🚧 Work in progress (not yet implemented)
- NEW: Usage guide on scaling spaCy for production.
 - NEW: Usage guide on text classification.
 - NEW: API docs for `spacy.pipeline.TextCategorizer`, `spacy.pipeline.Tensorizer`, `spacy.tokens.binder.Binder` and `spacy.vectors.Vectors`.
 - Improved training, NER training and deep learning usage guides with examples.
 
⚠️  Backwards incompatibilities
Note that the old v1.x models are not compatible with spaCy v2.0.0. If you've trained your own models, you'll have to re-train them to be able to use them with the new version. For a full overview of changes in v2.0, see the alpha documentation and guide on migrating from spaCy 1.x.
Loading models
`spacy.load()` is now only intended for loading models – if you need an empty language class, import it directly instead, e.g. `from spacy.lang.en import English`. If the model you're loading is a shortcut link or package name, spaCy will expect it to be a model package, import it and call its `load()` method. If you supply a path, spaCy will expect it to be a model data directory and use the `meta.json` to initialise a language class and call `nlp.from_disk()` with the data path.
nlp = spacy.load('en')
nlp = spacy.load('en_core_web_sm')
nlp = spacy.load('/model-data')
nlp = English().from_disk('/model-data')
# OLD: nlp = spacy.load('en', path='/model-data')

Hash values instead of integer IDs
The StringStore now resolves all strings to hash values instead of integer IDs. This means that the string-to-int mapping no longer depends on the vocabulary state, making a lot of workflows much simpler, especially during training. However, you still need to make sure all objects have access to the same Vocab. Otherwise, spaCy won't be able to resolve hashes back to their string values.
nlp.vocab.strings[u'coffee']       # 3197928453018144401
other_nlp.vocab.strings[u'coffee'] # 3197928453018144401

Serialization
spaCy's [serializ...
v1.8.2: French model and small improvements
We've been delighted to see spaCy growing so much over the last few months. Before the v1.0 release, we asked for your feedback, which has been incredibly helpful in improving the library. As we're getting closer to v2.0 we hope you'll take a few minutes to fill out the survey, to help us understand how you're using the library, and how it can be better.
📊 Take the survey!
✨ Major features and improvements
- Move model shortcuts to `shortcuts.json` to allow adding new ones without updating spaCy.
 - NEW: The first official French model (~1.3 GB) including vocab, syntax and word vectors.
 
python -m spacy download fr_depvec_web_lg

import fr_depvec_web_lg
nlp = fr_depvec_web_lg.load()
doc = nlp(u'Parlez-vous Français?')

🔴 Bug fixes
- Fix reporting if `train` command is used without `dev_data`.
 - Fix issue #1019: Make `Span` hashable.
📖 Documentation and examples
- Update list of available models with more information (capabilities, license).
 - Update adding languages workflow with data examples and details on training.
 - Fix various typos and inconsistencies.
 
👥 Contributors
Thanks to @raphael0202 and @julien-c for the contributions!
v1.8.1: Saving, loading and training bug fixes
We've been delighted to see spaCy growing so much over the last few months. Before the v1.0 release, we asked for your feedback, which has been incredibly helpful in improving the library. As we're getting closer to v2.0 we hope you'll take a few minutes to fill out the survey, to help us understand how you're using the library, and how it can be better.
📊 Take the survey!
🔴 Bug fixes
- Fix issue #988: Ensure noun chunks can't be nested.
 - Fix issue #991: `convert` command now uses Python 2/3 compatible `json.dumps`.
 - Fix issue #995: Use `regex` library for non-Latin characters to simplify punctuation rules.
 - Fix issue #999: Fix parser and NER model saving and loading.
 - Fix issue #1001: Add `SPACE` to Spanish tag map.
 - Fix issue #1008: `train` command now works correctly if used without `dev_data`.
 - Fix issue #1009: `Language.save_to_directory()` now converts strings to pathlib paths.
📖 Documentation and examples
- Fix issue #889, #967: Correct typos in lightning tour and `pos_tag.py` examples.
 - Add `Language.save_to_directory()` method to API docs.
 - Fix various typos and inconsistencies.
 
👥 Contributors
Thanks to @dvsrepo, @beneyal and @oroszgy for the pull requests!
v1.8.0: Better NER training, saving and loading
We've been delighted to see spaCy growing so much over the last few months. Before the v1.0 release, we asked for your feedback, which has been incredibly helpful in improving the library. As we're getting closer to v2.0 we hope you'll take a few minutes to fill out the survey, to help us understand how you're using the library, and how it can be better.
📊 Take the survey!
✨ Major features and improvements
- NEW: Add experimental `Language.save_to_directory()` method to make it easier to save user-trained models.
 - Add `spacy.compat` module to handle platform and Python version compatibility.
 - Update `package` command to read from existing `meta.json` and supply custom location to meta file.
 - Fix various compatibility issues and improve error messages in `spacy.cli`.
🔴 Bug fixes
- Fix issue #701, #822, #937, #959: Updated docs for NER training and saving/loading.
 - Fix issue #968: `spacy.load()` now prints a warning if no model is found.
 - Fix issue #970, #978: Use correct unicode paths for symlinks on Python 2 / Windows.
 - Fix issue #973: Make `token.lemma` and `token.lemma_` attributes writeable.
 - Fix issue #983: Add `spacy.compat` to handle compatibility.
📖 Documentation and examples
- NEW: Example for training a new entity type.
 - NEW: Workflow for training the Named Entity Recognizer.
 - NEW: Workflow for saving and loading models.
 - Update Contributing Guidelines with code conventions for Python and Cython.
 - Fix various typos and inconsistencies.
 
👥 Contributors
Thanks to @tsohil and @oroszgy for the pull requests!
v1.7.5: Bug fixes and new CLI commands
We've been delighted to see spaCy growing so much over the last few months. Before the v1.0 release, we asked for your feedback, which has been incredibly helpful in improving the library. As we're getting closer to v2.0 we hope you'll take a few minutes to fill out the survey, to help us understand how you're using the library, and how it can be better.
📊 Take the survey!
✨ Major features and improvements
- NEW: Experimental `convert` and `model` commands to convert files to spaCy's JSON format for training, and initialise a new model and its data directory.
 - Updated language data for Spanish and Portuguese.
 
🔴 Bug fixes
- Error messages now show the new download commands if no model is loaded.
 - The `package` command now works correctly and doesn't fail when creating files.
 - Fix issue #693: Improve rules for detecting noun chunks.
 - Fix issue #758: Adding labels no longer causes an `EntityRecognizer` transition bug.
 - Fix issue #862: `label` keyword argument is now handled correctly in `doc.merge()`.
 - Fix issue #891: Tokens containing `/` infixes are now split by the tokenizer.
 - Fix issue #898: Dependencies are now deprojectivized correctly.
 - Fix issue #910: NER models with new labels are now saved correctly, preventing memory errors.
 - Fix issue #934, #946: Symlink paths are now handled correctly on Windows, preventing `invalid switch` error.
 - Fix issue #947: Hebrew module is now added to `setup.py` and `__init__.py`.
 - Fix issue #948: Contractions are now lemmatized correctly.
 - Fix issue #957: Use `regex` module to avoid back-tracking on URL regex.
📖 Documentation and examples
- Documentation for new `convert` and `model` commands.
 - Update troubleshooting guide with `--no-cache-dir` error resulting from outdated pip version, and the problem of file names shadowing models.
 - Fix various typos and inconsistencies.
 
👥 Contributors
Thanks to @ericzhao28, @Gregory-Howard, @kinow, @jreeter, @mamoit, @kumaranvpl and @dvsrepo for the pull requests!
v1.7.3: Alpha support for Hebrew, new CLI commands and bug fixes
✨ Major features and improvements
- NEW: Alpha tokenization for Hebrew.
 - NEW: Experimental `train` and `package` commands to train a model and convert it to a Python package.
 - Enable experimental support for L1-regularized regression loss in dependency parser and named entity recognizer. Should improve fine-tuning of existing models.
 - Fix high memory usage in `download` command.
🔴 Bug fixes
- Fix issue #903, #912: Base forms are now correctly protected from lemmatization.
 - Fix issue #909, #925: Use `mklink` to create symlinks in Python 2 on Windows.
 - Fix issue #910: Update config when adding label to pre-trained model.
 - Fix issue #911: Delete old training scripts.
 - Fix issue #918: Use `--no-cache-dir` when downloading models via pip.
 - Fix infinite recursion in `spacy.info`.
 - Fix initialisation of languages when no model is available.
 
📖 Documentation and examples
- Troubleshooting guide for most common issues and usage problems.
 - Documentation for new `package` and `train` commands.
 - Documentation for spaCy's JSON format for training data.
 - Fix various typos and inconsistencies.
 
👥 Contributors
Thanks to @raphael0202, @pavlin99th, @iddoberger and @solresol for the pull requests!
v1.7.2: Small fixes to beam parser and model linking
🔴 Bug fixes
- Success message in `link` is now displayed correctly when using local paths.
 - Decrease beam density and fix Python 3 problem in `beam_parser`.
 - Fix issue #894: Model packages now install and compile paths correctly on Windows.
 
📖 Documentation and examples
- Standalone NER training example.
 - Fix various typos and inconsistencies.