Description
The usage example provided in the documentation for Tokenizer.add_special_case raises a KeyError.
Steps to reproduce:
import spacy
from spacy.symbols import ORTH, LEMMA, POS

nlp = spacy.load('en')
nlp.tokenizer.add_special_case(u'gimme', [
    {ORTH: u'gim', LEMMA: u'give', POS: u'VERB'},
    {ORTH: u'me'},
])
# Traceback (most recent call last):
#   File "test.py", line 13, in <module>
#     ORTH: u'me' }])
#   File "spacy/tokenizer.pyx", line 377, in spacy.tokenizer.Tokenizer.add_special_case (spacy/tokenizer.cpp:8460)
#   File "spacy/vocab.pyx", line 340, in spacy.vocab.Vocab.make_fused_token (spacy/vocab.cpp:7879)
# KeyError: 'F'

Environment
- Operating System: Ubuntu 16.04 / macOS 10.12.1
- Python Version Used: CPython 3.5.2
- spaCy Version Used: 1.2.0
- Environment Information: n/a
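
For anyone hitting this in the meantime: the KeyError: 'F' suggests Vocab.make_fused_token looks the token form up under the string key 'F' rather than the ORTH symbol from spacy.symbols. Below is a minimal workaround sketch using spaCy's older string-keyed property format; only 'F' is confirmed by the traceback, while 'L' (lemma) and 'pos' (part-of-speech) are assumptions based on the old tokenizer-exception data format, not verified against the 1.2.0 source.

import spacy

nlp = spacy.load('en')

# Assumed string keys: 'F' (form) is confirmed by the KeyError;
# 'L' (lemma) and 'pos' (POS tag) are guesses from spaCy's older
# tokenizer-exception format, not verified against 1.2.0.
nlp.tokenizer.add_special_case(u'gimme', [
    {'F': u'gim', 'L': u'give', 'pos': u'VERB'},
    {'F': u'me'},
])

doc = nlp(u'gimme that')
print([t.text for t in doc])  # expected: ['gim', 'me', 'that']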
 