# DACTYL Finetuned SLMs
These are 20 models that underwent continued pretraining on a domain-specific corpus.
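As a rough illustration of how continued pretraining of this kind is typically set up with the `transformers` and `datasets` libraries, the sketch below tokenizes a raw text corpus for causal-language-model training. The corpus file name, sequence length, and all other specifics are illustrative assumptions, not details from this collection.

```python
# Hypothetical continued-pretraining data prep; the corpus path and
# max_length are assumptions, not values from the DACTYL runs.
from datasets import load_dataset
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token

raw = load_dataset("text", data_files={"train": "domain_corpus.txt"})  # hypothetical file

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])

# mlm=False selects the causal-LM objective: labels are the input ids,
# shifted one position inside the model's loss computation.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)
```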
This model is a fine-tuned version of meta-llama/Llama-3.2-1B-Instruct; the training dataset is not specified in the card.
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training results
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| No log | 1.0 | 105 | 3.7892 |
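For context, a Trainer configuration like the one below (continuing the data-preparation sketch above, reusing `tokenized` and `collator`) would log a table of this shape: a single epoch, with one evaluation at the epoch boundary. Every hyperparameter value here is an assumption, since the card does not record them; the "No log" entry for training loss most likely reflects the Trainer's default `logging_steps=500` exceeding the run's 105 total steps.

```python
# Hypothetical training setup; all values are assumptions, not the
# recorded DACTYL configuration.
from transformers import AutoModelForCausalLM, Trainer, TrainingArguments

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")

args = TrainingArguments(
    output_dir="llama-3.2-1b-dactyl",   # hypothetical output name
    num_train_epochs=1,
    per_device_train_batch_size=8,      # assumed; the real batch size is unrecorded
    eval_strategy="epoch",              # one eval at step 105, as in the table above
                                        # (older transformers: evaluation_strategy)
    # logging_steps defaults to 500 > 105 steps, so train loss prints as "No log"
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["train"],    # placeholder; the real eval split is unknown
    data_collator=collator,
)
trainer.train()
```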
**Base model:** meta-llama/Llama-3.2-1B-Instruct
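A minimal loading-and-generation example for the base model is sketched below; to run one of the fine-tuned DACTYL checkpoints instead, substitute its repository id (not shown in this card) for `model_id`.

```python
# Minimal generation example with the base model; swap in a fine-tuned
# repo id from the DACTYL collection to use the adapted weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.2-1B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Summarize continued pretraining in one sentence."}],
    add_generation_prompt=True,
    return_tensors="pt",
)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```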