Llama-3.2-1B-Instruct with domain-adaptive pretraining (DAPT), also called continued pre-training (CPT), on a Dutch medical corpus slightly biased towards cardiology.
Trained for one full epoch with a batch size of 256, a maximum sequence length of 768, and a cosine learning-rate schedule with linear warmup (details to follow).
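
A minimal sketch of what such a continued pre-training run could look like with the Hugging Face `Trainer`, assuming a plain-text corpus on disk. The file path, learning rate, warmup ratio, and per-device/accumulation split are illustrative assumptions; only the single epoch, the effective batch size of 256, and the 768-token sequence length come from the description above.

```python
# Hypothetical continued pre-training (CPT/DAPT) sketch; paths, learning rate,
# and warmup ratio are assumptions, not the exact recipe used for this model.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "meta-llama/Llama-3.2-1B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Plain-text Dutch medical corpus; the file name is a placeholder.
corpus = load_dataset("text", data_files={"train": "dutch_medical_corpus.txt"})

def tokenize(batch):
    # Causal-LM pre-training: truncate to the 768-token maximum sequence length.
    return tokenizer(batch["text"], truncation=True, max_length=768)

tokenized = corpus.map(tokenize, batched=True, remove_columns=["text"])

args = TrainingArguments(
    output_dir="llama3.2-1b-dutch-medical-cpt",
    num_train_epochs=1,              # one full epoch over the corpus
    per_device_train_batch_size=16,  # 16 x 16 accumulation = effective batch size 256
    gradient_accumulation_steps=16,
    learning_rate=2e-5,              # assumed value
    lr_scheduler_type="cosine",      # cosine decay ...
    warmup_ratio=0.03,               # ... with a linear warmup (assumed ratio)
    bf16=True,
    logging_steps=50,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```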
This model will be further pre-trained on 5 million cardiology records from the UMCU.
The perplexity was around 5 on the validation set.
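
For reference, perplexity here is the exponential of the mean token-level cross-entropy loss on the validation set; the loss value below is an assumed figure chosen only to illustrate the relation to the reported number.

```python
import math

# Perplexity = exp(mean cross-entropy loss per token).
eval_loss = 1.61             # mean validation loss in nats per token (assumed value)
perplexity = math.exp(eval_loss)
print(round(perplexity, 2))  # ~5.0, in line with the reported validation perplexity
```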
Note: this model is not instruction-tuned and does not generate an EOS token. An update is coming.
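
Because no EOS token is generated, output length should be capped explicitly at inference time. A minimal usage sketch with `transformers`; the repository id and prompt are placeholders.

```python
# Hypothetical usage sketch; replace model_id with this model's actual repository id.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<this-model-repo-id>"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Plain-text continuation, not a chat prompt, since the model is not instruction-tuned.
prompt = "De patiënt presenteerde zich met"
inputs = tokenizer(prompt, return_tensors="pt")

# No EOS is generated, so bound the output with max_new_tokens.
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```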