zephyr-7b-sft-full-SPIN
Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models (https://arxiv.org/abs/2401.01335)
This model is a self-play fine-tuned model at iteration 3, starting from alignment-handbook/zephyr-7b-sft-full and trained on synthetic data generated from the HuggingFaceH4/ultrachat_200k dataset.
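At each SPIN iteration, the current model is trained to assign a higher relative likelihood to the human response than to the synthetic response produced by the previous iteration's model, using a logistic loss on the log-likelihood-ratio margin. A minimal sketch of that per-example objective is below; the function name, argument names, and the `lam` scaling parameter are illustrative, not taken from the released training code.

```python
import math

def spin_loss(logp_theta_real, logp_prev_real,
              logp_theta_syn, logp_prev_syn, lam=1.0):
    """Logistic SPIN loss for a single prompt (illustrative sketch).

    logp_theta_* : log-probability of a response under the current model.
    logp_prev_*  : log-probability under the previous iteration's model.
    The loss is small when the current model raises the likelihood of the
    human ("real") response relative to the synthetic one.
    """
    margin = lam * ((logp_theta_real - logp_prev_real)
                    - (logp_theta_syn - logp_prev_syn))
    # Logistic loss log(1 + exp(-margin)): near 0 for a large positive
    # margin, log(2) when the two responses are not separated at all.
    return math.log(1.0 + math.exp(-margin))

# Current model favors the human answer over the synthetic one,
# so the margin is positive and the loss is small.
loss = spin_loss(-10.0, -12.0, -15.0, -13.0)
```

Averaging this loss over prompts in the dataset (and regenerating the synthetic responses with the newly trained model) yields the next iteration.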
The following hyperparameters were used during training:
Detailed evaluation results are listed below:
| Metric | Value |
|---|---|
| Avg. | 63.70 |
| ARC (25-shot) | 66.13 |
| HellaSwag (10-shot) | 85.85 |
| MMLU (5-shot) | 61.51 |
| TruthfulQA (0-shot) | 57.89 |
| Winogrande (5-shot) | 76.64 |
| GSM8K (5-shot) | 34.19 |
@misc{chen2024selfplay,
title={Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models},
author={Zixiang Chen and Yihe Deng and Huizhuo Yuan and Kaixuan Ji and Quanquan Gu},
year={2024},
eprint={2401.01335},
archivePrefix={arXiv},
primaryClass={cs.LG}
}