Update README.md
README.md (CHANGED)

@@ -6,98 +6,24 @@ datasets:

**Removed:**
language:
- en
pipeline_tag: text-generation
tags:
- pt
- doge
---
<div align="center">
  <img src="https://huggingface.co/spaces/SmallDoge/README/resolve/main/org_icon.png" width="100%" alt="SmallDoge" />
</div>
<hr>
<div align="center">
  <a href="https://discord.gg/P2yYH95N" target="_blank" style="margin: 2px;">
    <img alt="Discord" src="https://img.shields.io/badge/Discord-Small%20Doges-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://arxiv.org/abs/2412.11834" target="_blank" style="margin: 2px;">
    <img alt="arXiv" src="https://img.shields.io/static/v1?label=arXiv&message=2412.11834&color=B31B1B&logo=arXiv" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://github.com/SmallDoges/small-doge" target="_blank" style="margin: 2px;">
    <img alt="GitHub" src="https://img.shields.io/badge/GitHub-SmallDoge-181717?logo=github" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://github.com/SmallDoges/small-doge/blob/main/LICENSE" style="margin: 2px;">
    <img alt="License" src="https://img.shields.io/badge/License-Apache--2.0-blue.svg" style="display: inline-block; vertical-align: middle;"/>
  </a>
</div>
```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("SmallDoge/Doge-320M")
>>> model = AutoModelForCausalLM.from_pretrained("SmallDoge/Doge-320M", trust_remote_code=True)
>>> inputs = tokenizer("Hey how are you doing?", return_tensors="pt")
>>> out = model.generate(**inputs, max_new_tokens=100)
>>> print(tokenizer.batch_decode(out))
```
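Note that `trust_remote_code=True` is required here because Doge's custom modeling code is distributed with the checkpoint rather than built into Transformers.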
## Model Details

We built Doge by pre-training on [Smollm-Corpus](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus). If you want to continue pre-training this model, you can find the unconverged checkpoint [here](https://huggingface.co/SmallDoge/Doge-160M-checkpoint). These models have not been instruction-tuned; the instruction-tuned model is [here](https://huggingface.co/SmallDoge/Doge-160M-Instruct).
**Pre-Training**:

| Model | Training Data | Steps | Context Length | Tokens | LR | Batch Size (tokens) | Precision | RTX 4090 GPU hours |
|---|---|---|---|---|---|---|---|---|
| [Doge-20M](https://huggingface.co/SmallDoge/Doge-20M) | [HuggingFaceTB/smollm-corpus](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus) | 8k | 2048 | 4B | 8e-3 | 0.5M | bfloat16 | 14 |
| [Doge-60M](https://huggingface.co/SmallDoge/Doge-60M) | [HuggingFaceTB/smollm-corpus](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus) | 16k | 2048 | 16B | 6e-3 | 1M | bfloat16 | 128 |
| [Doge-160M](https://huggingface.co/SmallDoge/Doge-160M) | [HuggingFaceTB/smollm-corpus](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus) | 24k | 2048 | 32B | 4e-3 | 1.5M | bfloat16 | 522 |
| [Doge-320M](https://huggingface.co/SmallDoge/Doge-320M) | [HuggingFaceTB/smollm-corpus](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus) | 32k | 2048 | 64B | 2e-3 | 2M | bfloat16 | 1856 |
**Evaluation**:

| Model | MMLU | TriviaQA | ARC | PIQA | HellaSwag | OBQA | Winogrande | tokens / s on i7-11 CPU |
|---|---|---|---|---|---|---|---|---|
| [Doge-20M](https://huggingface.co/SmallDoge/Doge-20M) | 25.4 | 0.03 | 29.8 | 58.4 | 27.3 | 25.6 | 50.2 | 142 |
| [Doge-60M](https://huggingface.co/SmallDoge/Doge-60M) | 26.4 | 0.2 | 37.9 | 61.4 | 31.5 | 28.0 | 50.8 | 62 |
| [Doge-160M](https://huggingface.co/SmallDoge/Doge-160M) | 29.2 | 4.8 | 44.4 | 66.3 | 38.7 | 34.4 | 52.2 | 28 |
| [Doge-320M](https://huggingface.co/SmallDoge/Doge-320M) | 33.8 | 9.4 | 52.1 | 69.9 | 46.5 | 37.9 | 55.0 | 16 |

> [!NOTE]
> All evaluations are done using five-shot settings, without additional training on the benchmarks.
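The card does not name the evaluation harness behind these numbers. As one way to collect comparable five-shot scores, the sketch below uses EleutherAI's lm-evaluation-harness; the `simple_evaluate` call and task identifiers are assumptions about that library (v0.4+), not something this repository ships.

```python
# Hedged sketch: five-shot evaluation via lm-evaluation-harness (assumed v0.4+).
# Task names are that harness's identifiers; ARC is shown as arc_easy here,
# since the table does not say which ARC split was used.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=SmallDoge/Doge-160M,trust_remote_code=True",
    tasks=["piqa", "hellaswag", "winogrande", "openbookqa", "arc_easy"],
    num_fewshot=5,  # matches the five-shot setting noted above
)
print(results["results"])
```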
**Procedure**:

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/loser_cheems/huggingface/runs/y18ty3sh)

**Environment**:

- Image: nvcr.io/nvidia/pytorch:24.12-py3
- Hardware: 1x NVIDIA RTX 4090
- Software: Transformers
## Citation

```bibtex
@misc{shi2024wonderfulmatrices,
      title={Wonderful Matrices: Combining for a More Efficient and Effective Foundation Model Architecture},
      author={Jingze Shi and Bingheng Wu},
      year={2024},
      eprint={2412.11834},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2412.11834},
}
```
**Added:**

language:
- en
pipeline_tag: text-generation
---

# **Doge 320M checkpoint**


Doge uses `wsd_scheduler` as its training scheduler, which divides the learning-rate schedule into three stages: `warmup`, `stable`, and `decay`. It allows us to continue training on any new dataset from any checkpoint in the `stable` stage without spikes in the training loss.
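A warmup-stable-decay schedule is straightforward to sketch with PyTorch's `LambdaLR`. The snippet below is only an illustration of the three stages, not the actual `wsd_scheduler` from the small-doge repository; the step counts use the Doge-320M numbers (3200 warmup and 25600 stable steps per the table below, with a 3200-step decay inferred from the 32k total pre-training steps), and the optimizer choice is illustrative.

```python
# A minimal warmup-stable-decay sketch with PyTorch's LambdaLR -- illustrative,
# not the actual wsd_scheduler from the small-doge repository.
import torch
from torch.optim.lr_scheduler import LambdaLR

def wsd_lambda(warmup_steps: int, stable_steps: int, decay_steps: int):
    def fn(step: int) -> float:
        if step < warmup_steps:                 # warmup: LR ramps 0 -> peak
            return step / max(1, warmup_steps)
        if step < warmup_steps + stable_steps:  # stable: LR held at peak
            return 1.0
        done = step - warmup_steps - stable_steps
        return max(0.0, 1.0 - done / max(1, decay_steps))  # decay: peak -> 0
    return fn

model = torch.nn.Linear(8, 8)  # stand-in for the Doge model
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-3)  # Doge-320M peak LR
# 3200 + 25600 + 3200 = 32k steps, consistent with the Doge-320M tables
scheduler = LambdaLR(optimizer, lr_lambda=wsd_lambda(3200, 25600, 3200))
```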
Here are the initial learning rates required to continue training from each checkpoint:
- **[Doge-20M](https://huggingface.co/SmallDoge/Doge-20M-checkpoint)**: 8e-3
- **[Doge-60M](https://huggingface.co/SmallDoge/Doge-60M-checkpoint)**: 6e-3
- **[Doge-160M](https://huggingface.co/SmallDoge/Doge-160M-checkpoint)**: 4e-3
- **[Doge-320M](https://huggingface.co/SmallDoge/Doge-320M-checkpoint)**: 2e-3
| Model | Learning Rate | Schedule | Warmup Steps | Stable Steps |
|-------|---------------|----------|--------------|--------------|
| [Doge-20M](https://huggingface.co/SmallDoge/Doge-20M-checkpoint) | 8e-3 | wsd_scheduler | 800 | 6400 |
| [Doge-60M](https://huggingface.co/SmallDoge/Doge-60M-checkpoint) | 6e-3 | wsd_scheduler | 1600 | 12800 |
| [Doge-160M](https://huggingface.co/SmallDoge/Doge-160M-checkpoint) | 4e-3 | wsd_scheduler | 2400 | 19200 |
| [Doge-320M](https://huggingface.co/SmallDoge/Doge-320M-checkpoint) | 2e-3 | wsd_scheduler | 3200 | 25600 |
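To make the table concrete, here is a hedged sketch of picking up training from the stable stage of the Doge-320M checkpoint. Only the repo id and the 2e-3 learning rate come from the tables above; the optimizer and any data handling are illustrative stand-ins, not the authors' recipe.

```python
# Hedged sketch: resume training from a stable-stage checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

ckpt = "SmallDoge/Doge-320M-checkpoint"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForCausalLM.from_pretrained(ckpt, trust_remote_code=True)

# The checkpoint is already past warmup, so re-enter the stable stage directly
# at its stable-stage learning rate (2e-3 for Doge-320M, per the table above).
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-3)
```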