Details: https://spacy.io/models/zh#zh_core_web_lg
Chinese pipeline optimized for CPU. Components: tok2vec, tagger, parser, senter, ner, attribute_ruler.
| Feature | Description |
|---|---|
| Name | zh_core_web_lg |
| Version | 3.7.0 |
| spaCy | >=3.7.0,<3.8.0 |
| Default Pipeline | tok2vec, tagger, parser, attribute_ruler, ner |
| Components | tok2vec, tagger, parser, senter, attribute_ruler, ner |
| Vectors | 500000 keys, 500000 unique vectors (300 dimensions) |
| Sources | OntoNotes 5 (Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, Mohammed El-Bachouti, Robert Belvin, Ann Houston); CoreNLP Universal Dependencies Converter (Stanford NLP Group); Explosion fastText Vectors (cbow, OSCAR Common Crawl + Wikipedia) (Explosion) |
| License | MIT |
| Author | Explosion |
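A minimal usage sketch, assuming the package has already been installed (e.g. via `python -m spacy download zh_core_web_lg`); the example sentence is illustrative:

```python
import spacy

# Load the installed pipeline package.
nlp = spacy.load("zh_core_web_lg")

# "Microsoft plans to open a new research institute in Beijing."
doc = nlp("微软计划在北京开设新的研究院。")

print(nlp.pipe_names)  # default pipeline: tok2vec, tagger, parser, attribute_ruler, ner
print(doc[0].text, doc[0].vector.shape)  # each token carries a 300-dimensional vector
```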
### Label Scheme

100 labels across 3 components:
| Component | Labels |
|---|---|
| `tagger` | AD, AS, BA, CC, CD, CS, DEC, DEG, DER, DEV, DT, ETC, FW, IJ, INF, JJ, LB, LC, M, MSP, NN, NR, NT, OD, ON, P, PN, PU, SB, SP, URL, VA, VC, VE, VV, X, _SP |
| `parser` | ROOT, acl, advcl:loc, advmod, advmod:dvp, advmod:loc, advmod:rcomp, amod, amod:ordmod, appos, aux:asp, aux:ba, aux:modal, aux:prtmod, auxpass, case, cc, ccomp, compound:nn, compound:vc, conj, cop, dep, det, discourse, dobj, etc, mark, mark:clf, name, neg, nmod, nmod:assmod, nmod:poss, nmod:prep, nmod:range, nmod:tmod, nmod:topic, nsubj, nsubj:xsubj, nsubjpass, nummod, parataxis:prnmod, punct, xcomp |
| `ner` | CARDINAL, DATE, EVENT, FAC, GPE, LANGUAGE, LAW, LOC, MONEY, NORP, ORDINAL, ORG, PERCENT, PERSON, PRODUCT, QUANTITY, TIME, WORK_OF_ART |
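A short sketch of how these labels surface on a parsed `Doc` (the sentence is illustrative, not from the training data):

```python
import spacy

nlp = spacy.load("zh_core_web_lg")
doc = nlp("2020年微软在上海成立了新公司。")

# tagger labels appear as token.tag_, parser labels as token.dep_
for token in doc:
    print(token.text, token.tag_, token.dep_)

# ner labels appear on doc.ents, e.g. DATE, ORG, GPE
for ent in doc.ents:
    print(ent.text, ent.label_)
```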
### Accuracy
| Type | Score |
|---|---|
| `TOKEN_ACC` | 95.85 |
| `TOKEN_P` | 94.58 |
| `TOKEN_R` | 91.36 |
| `TOKEN_F` | 92.94 |
| `TAG_ACC` | 90.33 |
| `SENTS_P` | 78.05 |
| `SENTS_R` | 72.63 |
| `SENTS_F` | 75.24 |
| `DEP_UAS` | 70.86 |
| `DEP_LAS` | 65.71 |
| `ENTS_P` | 73.55 |
| `ENTS_R` | 69.25 |
| `ENTS_F` | 71.34 |
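These scores also ship inside the package metadata. A hedged sketch for reading them programmatically; the `performance` key and the lower-cased score names are assumptions based on spaCy v3 `meta.json` packaging, not confirmed by this card:

```python
import spacy

nlp = spacy.load("zh_core_web_lg")

# nlp.meta exposes the model's meta.json as a dict; the "performance"
# section and keys like "ents_f" are assumed spaCy v3 conventions.
perf = nlp.meta.get("performance", {})
print(perf.get("ents_f"), perf.get("tag_acc"), perf.get("dep_uas"))
```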