Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models
This repository contains CoReward-Qwen3-1.7B-Base, a Qwen3-1.7B-Base model trained with the Co-rewarding method on the MATH training set. The method was introduced in the paper Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models.
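The model can be loaded like any causal language model with the Hugging Face transformers library. The snippet below is a minimal inference sketch; the repository id is a placeholder to be replaced with this model's actual id, and the prompt and generation settings are illustrative rather than recommended defaults.

from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder id: replace with the actual repository id shown on this model page.
model_id = "CoReward-Qwen3-1.7B-Base"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires the accelerate package; drop it to load on CPU.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

prompt = "Solve: If 3x + 5 = 20, what is x? Show your reasoning."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))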
As summarized in the paper's abstract, Co-rewarding is a self-supervised RL framework that improves training stability by seeking complementary supervision from another view, rather than relying on a single self-generated reward signal.
For more details on the Co-rewarding framework and its implementation, please refer to the official GitHub repository.
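To make the idea of "complementary supervision from another view" concrete, the toy sketch below shows one way an agreement-based, label-free reward could look: a rollout is rewarded when its final answer matches the majority answer obtained on a rephrased view of the same question. This is only an assumed illustration; the function names and the majority-vote rule are hypothetical and not the paper's actual objective, which is implemented in the GitHub repository.

from collections import Counter

def extract_answer(completion: str) -> str:
    # Hypothetical helper: take the last non-empty line as the final answer.
    lines = completion.strip().splitlines()
    return lines[-1].strip() if lines else ""

def co_reward(rollout: str, other_view_rollouts: list[str]) -> float:
    """Return 1.0 if the rollout's answer agrees with the majority answer
    produced on a rephrased view of the same question, else 0.0."""
    answers = [extract_answer(c) for c in other_view_rollouts]
    if not answers:
        return 0.0
    majority_answer, _ = Counter(answers).most_common(1)[0]
    return float(extract_answer(rollout) == majority_answer)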
Citation
@article{zhang2025co,
  title={Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models},
  author={Zhang, Zizhuo and Zhu, Jianing and Ge, Xinmu and Zhao, Zihua and Zhou, Zhanke and Li, Xuan and Feng, Xiao and Yao, Jiangchao and Han, Bo},
  journal={arXiv preprint arXiv:2508.00410},
  year={2025}
}