This repository provides a BERT model fine-tuned for masked language modeling in Tamil. The model was trained on the Agasthiyar medicinal word corpus, making it well suited to predicting missing or masked words in Tamil text, particularly within the medicinal domain.
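Since the pipeline tag is `fill-mask`, the model can be loaded with the 🤗 Transformers fill-mask pipeline. A minimal sketch is shown below; the Hub id `username/tamil-medicinal-bert` is a placeholder, not the actual repository path:

```python
from transformers import pipeline

# Placeholder Hub id -- substitute the real repository path of this model.
MODEL_ID = "username/tamil-medicinal-bert"


def predict_masked(text: str):
    """Return top fill-mask predictions for a Tamil sentence containing [MASK]."""
    fill_mask = pipeline("fill-mask", model=MODEL_ID)
    return fill_mask(text)
```

Each prediction returned by the pipeline is a dict with the filled `sequence`, the predicted `token_str`, and its `score`.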
---
license: mit
language:
- ta
metrics:
- perplexity
pipeline_tag: fill-mask
tags:
- tamil
- medicine
- mlm
- BERT
---
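The reported metric, perplexity, is the exponential of the mean cross-entropy loss over the masked tokens. A small sketch with hypothetical per-token losses (the values below are illustrative, not results from this model):

```python
import math

# Hypothetical per-token cross-entropy losses (in nats) at masked positions.
token_losses = [2.1, 1.8, 2.4, 2.0]

# Perplexity = exp(mean loss); lower values indicate better MLM predictions.
mean_loss = sum(token_losses) / len(token_losses)
perplexity = math.exp(mean_loss)
```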