---
license: apache-2.0
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: NER-advance
results: []
datasets:
- unimelb-nlp/wikiann
language:
- en
pipeline_tag: token-classification
---
## NER-advance

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [unimelb-nlp/wikiann](https://huggingface.co/datasets/unimelb-nlp/wikiann) dataset. It achieves the following results on the evaluation set:
- Loss: 0.2720
- F1: 0.9196
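The model can be loaded through the `token-classification` pipeline. A minimal usage sketch, assuming network access to download the weights (replace `"NER-advance"` with the full Hub repository id of this model):

```python
from transformers import pipeline

# "NER-advance" is the model name from this card; substitute the full hub id,
# e.g. "<username>/NER-advance".
ner = pipeline(
    "token-classification",
    model="NER-advance",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)

print(ner("Barack Obama was born in Hawaii."))
```

With `aggregation_strategy="simple"`, sub-word tokens are grouped so the output contains one entry per predicted entity span rather than per word piece.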
## Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- weight_decay: 0.01
- lr_scheduler_type: linear
- warmup_ratio: 0.1
- num_epochs: 2
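The settings above map directly onto 🤗 `TrainingArguments`. A sketch of the configuration, assuming the dataset and tokenizer are prepared separately (the `output_dir` name is illustrative, and per-epoch evaluation is an assumption based on the results table below):

```python
from transformers import TrainingArguments

# Hyperparameters taken from the list above; output_dir is an illustrative name.
training_args = TrainingArguments(
    output_dir="NER-advance",
    learning_rate=2e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    weight_decay=0.01,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,            # 10% of steps used for linear warmup
    num_train_epochs=2,
    evaluation_strategy="epoch",  # assumption: evaluate once per epoch
)
```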
## Training results
| Training Loss | Epoch | Validation Loss | F1     |
|:-------------:|:-----:|:---------------:|:------:|
| 0.2895        | 1.0   | 0.3054          | 0.8916 |
| 0.2422        | 2.0   | 0.2720          | 0.9195 |
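The reported F1 is an entity-level (seqeval-style) score: a predicted entity counts as correct only if both its span and its type match the gold annotation exactly. A minimal pure-Python sketch of this computation over BIO tag sequences (illustrative only, not the exact evaluation code):

```python
def extract_entities(tags):
    """Return a set of (type, start, end) spans from a BIO tag sequence."""
    entities = set()
    start, etype = None, None
    for i, tag in enumerate(tags + ["O"]):  # "O" sentinel flushes the last span
        # Close the open entity on "O", on a new "B-", or on a type change.
        if tag == "O" or tag.startswith("B-") or (
            tag.startswith("I-") and etype != tag[2:]
        ):
            if etype is not None:
                entities.add((etype, start, i))
            start, etype = None, None
        if tag.startswith("B-"):
            start, etype = i, tag[2:]
        elif tag.startswith("I-") and etype is None:
            start, etype = i, tag[2:]  # tolerate "I-" without a preceding "B-"
    return entities


def entity_f1(true_tags, pred_tags):
    """Micro F1 over exact-match entities of one tag sequence pair."""
    gold = extract_entities(true_tags)
    pred = extract_entities(pred_tags)
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


gold = ["B-PER", "I-PER", "O", "B-LOC"]
pred = ["B-PER", "I-PER", "O", "B-ORG"]
print(entity_f1(gold, pred))  # one of two entities matches exactly: 0.5
```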
## Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2