---
language:
- en
---
# SmolLM3 Training Configs
**[IMPORTANT NOTE]**: For the latest configs, go to this repo: https://github.com/huggingface/smollm/tree/main/text/pretraining/smollm3
Here you can find the training configs for [SmolLM3-3B-Base](https://huggingface.co/HuggingFaceTB/SmolLM3-3B-Base) using [nanotron](https://github.com/huggingface/nanotron/), with the exact training details and data mixtures.
The model was trained on 11.2T tokens in 3 stages with a 4k context length:
- stage 1 [config](https://huggingface.co/datasets/HuggingFaceTB/smollm3-configs/blob/main/stage1_8T.yaml)
- stage 2 [config](https://huggingface.co/datasets/HuggingFaceTB/smollm3-configs/blob/main/stage2_8T_9T.yaml)
- stage 3 [config](https://huggingface.co/datasets/HuggingFaceTB/smollm3-configs/blob/main/stage3_9T_11T.yaml)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/944zWNgcI1I06RZuoP11B.png)
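If you want to inspect one of these configs programmatically, a minimal sketch (assuming `huggingface_hub` and `pyyaml` are installed, and using the stage 1 filename linked above) could look like this:

```python
import yaml
from huggingface_hub import hf_hub_download

# Download the stage 1 config from this dataset repo (filename taken from the links above).
config_path = hf_hub_download(
    repo_id="HuggingFaceTB/smollm3-configs",
    filename="stage1_8T.yaml",
    repo_type="dataset",
)

# Load the YAML and list its top-level sections
# (the exact section names depend on the nanotron config schema).
with open(config_path) as f:
    config = yaml.safe_load(f)

print(list(config.keys()))
```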
We then trained for 2 additional stages to extend the context length to 64k:
- stage 4 [config](https://huggingface.co/datasets/HuggingFaceTB/smollm3-configs/blob/main/long_context_4k_to_32k.yaml)
- stage 5 [config](https://huggingface.co/datasets/HuggingFaceTB/smollm3-configs/blob/main/long_context_32k_to_64.yaml)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/jBOiemVtbfi9YD7Pki6sY.png)
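These YAML files are nanotron configs, so they are launched with nanotron's training entrypoint rather than run directly. A hedged sketch of such a launch, wrapped in Python for convenience (the `run_train.py` script and `--config-file` flag come from the nanotron repository; the GPU count and config path are placeholders to adjust for your setup):

```python
import os
import subprocess

# Hypothetical single-node launch of a nanotron pretraining run with one of these configs.
# run_train.py lives in the nanotron repo; adjust --nproc_per_node and the config path as needed.
env = {**os.environ, "CUDA_DEVICE_MAX_CONNECTIONS": "1"}
subprocess.run(
    [
        "torchrun",
        "--nproc_per_node=8",
        "run_train.py",
        "--config-file", "stage1_8T.yaml",
    ],
    env=env,
    check=True,
)
```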