# MeDeBERTa v2 (July 2025)
Fine-tuned `microsoft/deberta-v3-xsmall` on 269 874 Q-A pairs (30 intent labels) for the MeDeBERTaBot medical question-classification task.
| Parameter / metric | Value |
|---|---|
| Epochs | 20 (best @ epoch 17) | 
| Batch / Grad. Accum. | 16 / 4 (eff. 64) | 
| Learning rate | 5 × 10⁻⁵ |
| Best val. accuracy | 0.99855 | 
| Test accuracy | 0.99859 | 
| Macro F1 (test) | 0.99867 | 
| Balanced accuracy (test) | 0.99868 | 
| Micro AUC | 0.999997 | 
| Micro average precision | 0.99993 | 
| Loss (val / test) | 0.01371 / 0.01305 | 
| Hardware | RTX 2080 Ti (11 GB) | 
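For orientation, the aggregate scores above can be recomputed from test-set predictions roughly as follows. This is a minimal scikit-learn sketch, not the repo's evaluation script; the random arrays merely stand in for the real gold labels, argmax predictions, and softmax outputs.

```python
# Minimal sketch for recomputing the aggregate metrics with scikit-learn.
# The random arrays below are PLACEHOLDERS for the real test labels (y_true),
# argmax predictions (y_pred), and softmax probabilities (y_prob).
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score, balanced_accuracy_score,
                             roc_auc_score, average_precision_score)
from sklearn.preprocessing import label_binarize

n_classes = 30
rng = np.random.default_rng(0)
y_prob = rng.dirichlet(np.ones(n_classes), size=1_000)  # stand-in softmax outputs
y_pred = y_prob.argmax(axis=1)
y_true = y_pred.copy()                                  # stand-in gold labels

y_true_bin = label_binarize(y_true, classes=list(range(n_classes)))
print("test accuracy      :", accuracy_score(y_true, y_pred))
print("macro F1           :", f1_score(y_true, y_pred, average="macro"))
print("balanced accuracy  :", balanced_accuracy_score(y_true, y_pred))
print("micro AUC          :", roc_auc_score(y_true_bin, y_prob, average="micro"))
print("micro avg precision:", average_precision_score(y_true_bin, y_prob, average="micro"))
```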
## Per-class metrics (excerpt)
| Label | Precision | Recall | F1 | Support | 
|---|---|---|---|---|
| any_code | 1.000 | 1.000 | 1.000 | 980 | 
| contexts | 0.988 | 0.987 | 0.988 | 923 | 
| treatment summary | 1.000 | 0.998 | 0.999 | 927 | 
| … | … | … | … | … |
Full table: see classification_report.json / classification_report.csv.
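To browse the full per-label breakdown, the JSON report can be read directly. The sketch below assumes the file follows scikit-learn's `classification_report(output_dict=True)` layout (one dict of precision / recall / f1-score / support per label, plus scalar summaries); adjust the keys if the actual schema differs.

```python
# Hedged sketch: assumes classification_report.json follows scikit-learn's
# classification_report(output_dict=True) layout.
import json

with open("classification_report.json") as f:
    report = json.load(f)

for label, stats in report.items():
    if isinstance(stats, dict):  # skip scalar entries such as "accuracy"
        print(f"{label:<25} P={stats['precision']:.3f} "
              f"R={stats['recall']:.3f} F1={stats['f1-score']:.3f}")
```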
## Training
The full fine-tuning pipeline (data prep → training → evaluation scripts) is
maintained in the companion GitHub repo:
▶ MeDeBERTaBot · deberta_fine_tuning
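For reference, the hyperparameters reported above map onto a Hugging Face `Trainer` setup roughly as sketched below. This is illustrative only, not the repo's actual script; `train_ds` and `val_ds` are placeholders for tokenized dataset splits that are not shown here.

```python
# Illustrative sketch only; the authoritative scripts live in the repo above.
# train_ds / val_ds are PLACEHOLDERS for tokenized train / validation splits.
from transformers import (AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-v3-xsmall", num_labels=30)

args = TrainingArguments(
    output_dir="medeberta-v2",
    num_train_epochs=20,            # best checkpoint was found at epoch 17
    per_device_train_batch_size=16,
    gradient_accumulation_steps=4,  # 16 x 4 = effective batch of 64
    learning_rate=5e-5,
    eval_strategy="epoch",          # "evaluation_strategy" on older transformers
    save_strategy="epoch",
    load_best_model_at_end=True,
)

trainer = Trainer(model=model, args=args,
                  train_dataset=train_ds, eval_dataset=val_ds)
trainer.train()
```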
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("malakhovks/MeDeBERTa")
model = AutoModelForSequenceClassification.from_pretrained("malakhovks/MeDeBERTa")

# Classify a question and map the top logit back to its intent label
inputs = tok("what are contraindications for TENS?", return_tensors="pt")
pred = model(**inputs).logits.argmax(-1).item()
print(model.config.id2label[pred])
```
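The same prediction can also be obtained in one call through the `pipeline` API; the returned label string should match `id2label` above.

```python
from transformers import pipeline

# Loads the model and tokenizer, applies softmax, and returns the top label
clf = pipeline("text-classification", model="malakhovks/MeDeBERTa")
print(clf("what are contraindications for TENS?"))
# -> [{'label': ..., 'score': ...}]
```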
## Changelog
See CHANGELOG.md for full version history.