Commit 28045b8 · update model card
Parent: 69f94ab
README.md CHANGED
```diff
@@ -15,7 +15,7 @@ metrics:
 
 ## Model description
 
-This model is a [RoBERTa base model](https://huggingface.co/roberta-base) that was further trained using a masked language modeling task on a compendium of english scientific textual examples from the life sciences using the [BioLang dataset](https://huggingface.co/datasets/EMBO/biolang). It has then been fine-tuned for token classification on the SourceData [sd-
+This model is a [RoBERTa base model](https://huggingface.co/roberta-base) that was further trained using a masked language modeling task on a compendium of English scientific texts from the life sciences using the [BioLang dataset](https://huggingface.co/datasets/EMBO/biolang). It was then fine-tuned for token classification on the SourceData [sd-panels](https://huggingface.co/datasets/EMBO/sd-panels) dataset with the `SMALL_MOL_ROLES` configuration to perform pure context-dependent semantic role classification of bioentities.
 
 
 ## Intended uses & limitations
```
```diff
@@ -30,7 +30,7 @@ To have a quick check of the model:
 from transformers import pipeline, RobertaTokenizerFast, RobertaForTokenClassification
 example = """<s>The <mask> overexpression in cells caused an increase in <mask> expression.</s>"""
 tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base', max_len=512)
-model = RobertaForTokenClassification.from_pretrained('EMBO/sd-roles')
+model = RobertaForTokenClassification.from_pretrained('EMBO/sd-smallmol-roles')
 ner = pipeline('ner', model, tokenizer=tokenizer)
 res = ner(example)
 for r in res:
```
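The hunk above ends at `for r in res:`, so the loop body is outside the diff window. For reference, a self-contained version of the quick-check snippet might look like the following; the loop body is an assumption, as is `model_max_length` (the current-API spelling of the older `max_len` kwarg used in the card):

```python
from transformers import pipeline, RobertaTokenizerFast, RobertaForTokenClassification

# Masked example; the model classifies the semantic role of entities in context.
example = """<s>The <mask> overexpression in cells caused an increase in <mask> expression.</s>"""

# The model must be used with the roberta-base tokenizer.
tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base', model_max_length=512)
model = RobertaForTokenClassification.from_pretrained('EMBO/sd-smallmol-roles')

ner = pipeline('ner', model=model, tokenizer=tokenizer)
res = ner(example)
for r in res:
    # Assumed loop body: print each token with its predicted label and score.
    print(r['word'], r['entity'], round(r['score'], 3))
```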
```diff
@@ -43,7 +43,7 @@ The model must be used with the `roberta-base` tokenizer.
 
 ## Training data
 
-The model was trained for token classification using the [EMBO/sd-
+The model was trained for token classification using the [EMBO/sd-panels dataset](https://huggingface.co/datasets/EMBO/sd-panels), which includes manually annotated examples.
 
 ## Training procedure
 
```
```diff
@@ -53,7 +53,7 @@ Training code is available at https://github.com/source-data/soda-roberta
 
 - Model fine tuned: EMBL/bio-lm
 - Tokenizer vocab size: 50265
-- Training data: EMBO/sd-
+- Training data: EMBO/sd-panels
 - Dataset configuration: SMALL_MOL_ROLES
 - Training with 48771 examples.
 - Evaluating on 13801 examples.
```
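Given the training settings above, the data should be loadable directly with the `datasets` library. A minimal sketch, assuming `EMBO/sd-panels` exposes the `SMALL_MOL_ROLES` configuration under that name (the commit itself does not show the loading code):

```python
from datasets import load_dataset

# Load the SourceData panels dataset with the small-molecule roles configuration;
# the configuration name follows the "Dataset configuration" entry above.
ds = load_dataset('EMBO/sd-panels', 'SMALL_MOL_ROLES')

# The card reports 48771 training and 13801 evaluation examples.
print(ds)
print(ds['train'][0])  # assumed split name; one annotated example
```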