This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.8225

Model description: More information needed

Intended uses and limitations: More information needed

Training and evaluation data: More information needed
The following results were recorded during training:
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 0.9995 | 0.3608 | 500 | 0.9288 |
| 0.9057 | 0.7215 | 1000 | 0.8905 |
| 0.8627 | 1.0823 | 1500 | 0.8647 |
| 0.7987 | 1.4430 | 2000 | 0.8529 |
| 0.7801 | 1.8038 | 2500 | 0.8351 |
| 0.7520 | 2.1645 | 3000 | 0.8355 |
| 0.7222 | 2.5253 | 3500 | 0.8261 |
| 0.7016 | 2.8860 | 4000 | 0.8225 |
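As a sanity check on the table above, the step and epoch columns imply a fixed number of optimizer steps per epoch, and the validation-loss column shows the overall improvement from the first to the last evaluation. A minimal sketch of that arithmetic (the values are taken directly from the table; nothing else is assumed):

```python
# Each (step, epoch) pair from the table should imply the same
# number of optimizer steps per epoch.
checkpoints = [(500, 0.3608), (1000, 0.7215), (4000, 2.8860)]
steps_per_epoch = [step / epoch for step, epoch in checkpoints]
print([round(s) for s in steps_per_epoch])  # each pair implies ~1386 steps/epoch

# Relative reduction in validation loss from the first eval (step 500)
# to the last eval (step 4000).
first_val, last_val = 0.9288, 0.8225
improvement = (first_val - last_val) / first_val
print(f"{improvement:.1%}")  # ~11.4% lower validation loss
```

The consistent ~1386 steps per epoch across rows confirms the table's epoch and step columns agree with each other.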
Base model: meta-llama/Meta-Llama-3-8B-Instruct