TMLR-Group-HF/GT-Qwen3-8B-Base
This is the Qwen3-8B-Base model trained with the GRPO Ground Truth (GT) method on the MATH training set, as described in the paper Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models.
If you are interested in Co-rewarding, you can find more details in our GitHub repo: https://github.com/tmlr-group/Co-rewarding.
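As a minimal usage sketch (not taken from the paper or repo), the checkpoint can be loaded like any causal language model with the Hugging Face Transformers library; the prompt format and generation settings below are only illustrative assumptions.

```python
# Minimal sketch: load the model as a standard causal LM with Transformers.
# The dtype, device placement, prompt format, and generation settings are
# illustrative assumptions, not the authors' recommended configuration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TMLR-Group-HF/GT-Qwen3-8B-Base"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # adjust to your hardware
    device_map="auto",
)

prompt = "Question: What is 12 * 7?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```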
Citation
@article{zhang2025coreward,
  title={Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models},
  author={Zhang, Zizhuo and Zhu, Jianing and Ge, Xinmu and Zhao, Zihua and Zhou, Zhanke and Li, Xuan and Feng, Xiao and Yao, Jiangchao and Han, Bo},
  journal={arXiv preprint arXiv:2508.00410},
  year={2025}
}