# DACTYL Finetuned SLMs

This model belongs to the DACTYL Finetuned SLMs collection (20 items): models that underwent continued pretraining on a domain-specific corpus.
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct). It achieves the following results on the evaluation set:

- Loss: 2.8891
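Since the card gives no usage snippet, here is a minimal sketch of loading the checkpoint with the `transformers` library. The repository ID below is a placeholder, as the card does not state where this fine-tuned model is published; the prompt formatting follows the chat template that Llama-3.2-Instruct models ship with.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo ID; the card does not name the published checkpoint.
model_id = "your-org/dactyl-llama-3.2-1b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Instruct models expect chat-template formatting rather than raw text.
messages = [{"role": "user", "content": "Summarize this report in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```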
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

More information needed
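Because the hyperparameter list itself is missing from the card, the following is only a hedged sketch of what a comparable `transformers` Trainer configuration might look like. Every value is an illustrative placeholder, not the configuration that produced the results below; the only grounded detail is that training ran for roughly one epoch (1355 steps, epoch 0.9998 in the results table).

```python
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_id = "meta-llama/Llama-3.2-1B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

args = TrainingArguments(
    output_dir="dactyl-llama-3.2-1b",  # hypothetical output path
    num_train_epochs=1,                # results table shows ~1 epoch (0.9998)
    learning_rate=2e-5,                # placeholder; actual value unknown
    per_device_train_batch_size=8,     # placeholder; actual value unknown
    eval_strategy="epoch",             # evaluate once per epoch
    logging_steps=100,
)

# train_dataset / eval_dataset would come from the (unnamed) domain corpus:
# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_dataset, eval_dataset=eval_dataset)
# trainer.train()
```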
### Training results

| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 2.9102 | 0.9998 | 1355 | 2.8891 |
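Assuming the validation loss is the mean per-token cross-entropy in nats, which is the standard for causal-LM fine-tuning with `transformers`, it corresponds to a perplexity of exp(2.8891) ≈ 18.0 on the evaluation set.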