Update README.md
README.md (CHANGED)
@@ -123,7 +123,7 @@ TODO
 |-------------------|------------|------------|------------|------------|
 | Pretraining Stage 1 | 4 trillion tokens<br>(1 epoch) | 4 trillion tokens<br>(1 epoch) | 5 trillion tokens<br>(1.2 epochs) | 6 trillion tokens<br>(1.5 epochs) |
 | Pretraining Stage 2 | 50B tokens (3 runs)<br>*merged* | 50B tokens (3 runs)<br>*merged* | 100B tokens (3 runs)<br>300B tokens (1 run)<br>*merged* | 100B tokens (3 runs)<br>300B tokens (1 run)<br>*merged* |
-| Post-training | SFT + DPO +
+| Post-training | SFT + DPO + GRPO<br>([preference mix](#)) | SFT + DPO + PPO<br>([preference mix](https://huggingface.co/datasets/allenai/olmo-2-1124-7b-preference-mix)) | SFT + DPO + PPO<br>([preference mix](https://huggingface.co/datasets/allenai/olmo-2-1124-13b-preference-mix)) | SFT + DPO + GRPO<br>([preference mix](https://huggingface.co/datasets/allenai/olmo-2-32b-pref-mix-v1)) |

 #### Stage 1: Initial Pretraining
 - Dataset: [OLMo-mix-1124](https://huggingface.co/datasets/allenai/olmo-mix-1124) (3.9T tokens)
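
For context, the corpora linked in the table and the Stage 1 bullet are hosted on the Hugging Face Hub and can be pulled with the `datasets` library. A minimal sketch, not part of the commit itself; the split name (`train`) and the choice to stream the pretraining mix are assumptions, so check each dataset card:

```python
# Minimal sketch: loading the corpora referenced above via the Hugging Face `datasets` library.
# Split names ("train") and streaming mode are assumptions; see each dataset card for specifics.
from datasets import load_dataset

# Stage 1 pretraining corpus (terabyte scale, so stream instead of downloading in full)
olmo_mix = load_dataset("allenai/olmo-mix-1124", split="train", streaming=True)
print(next(iter(olmo_mix)))

# Post-training preference mix referenced for the 7B model
pref_mix = load_dataset("allenai/olmo-2-1124-7b-preference-mix", split="train")
print(pref_mix[0])
```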