Geraldxm, nielsr (HF Staff) committed
Commit 47a11fa · verified · 1 Parent(s): db9862c

Improve model card: Add pipeline tag, library, paper link, and correct GitHub/Citation (#1)


- Improve model card: Add pipeline tag, library, paper link, and correct GitHub/Citation (8d6a04cb39a44ff28ba1ae3a138f8781c2a18345)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1)
  1. README.md +10 -7
README.md CHANGED
@@ -1,19 +1,22 @@
 ---
 license: mit
+pipeline_tag: text-generation
+library_name: transformers
 ---
+
 ## TMLR-Group-HF/Self-Certainty-Qwen2.5-7B
 
-This is the Qwen2.5-7B model trained by Self-Certainty method using MATH training set.
+This is the Qwen2.5-7B model trained by the Self-Certainty method using the MATH training set. It is part of the work presented in the paper [Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models](https://huggingface.co/papers/2508.00410).
 
-If you are interested in Co-Reward, you can find more details on our Github Repo [https://github.com/tmlr-group/Co-Reward].
+For more details on the Co-rewarding framework and its implementation, you can find the code and further information on the official GitHub repository: [https://github.com/tmlr-group/Co-rewarding](https://github.com/tmlr-group/Co-rewarding).
 
 ## Citation
 
-```
+```bibtex
 @article{zhang2025coreward,
-title={Co-Reward: Self-supervised Reinforcement Learning for Large Language Model Reasoning via Contrastive Agreement},
-author={Zizhuo Zhang and Jianing Zhu and Xinmu Ge and Zihua Zhao and Zhanke Zhou and Xuan Li and Xiao Feng and Jiangchao Yao and Bo Han},
-journal={arXiv preprint arXiv:2508.00410}
-year={2025},
+title={Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models},
+author={Zhang, Zizhuo and Zhu, Jianing and Ge, Xinmu and Zhao, Zihua and Zhou, Zhanke and Li, Xuan and Feng, Xiao and Yao, Jiangchao and Han, Bo},
+journal={arXiv preprint arXiv:2508.00410},
+year={2025}
 }
 ```
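Since the updated card declares `library_name: transformers` and `pipeline_tag: text-generation`, a minimal usage sketch along these lines should work; the prompt and generation settings below are illustrative assumptions, not part of this commit:

```python
# Minimal sketch: load the checkpoint through the transformers
# text-generation pipeline, as implied by the new card metadata.
# Prompt and max_new_tokens are illustrative assumptions.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TMLR-Group-HF/Self-Certainty-Qwen2.5-7B",
    torch_dtype="auto",  # pick a dtype appropriate for the hardware
    device_map="auto",   # requires `accelerate` for automatic placement
)

prompt = "What is the sum of the first 100 positive integers? Show your reasoning."
result = generator(prompt, max_new_tokens=512, do_sample=False)
print(result[0]["generated_text"])
```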