nielsr (HF Staff) committed c120515 (verified) · 1 parent: 0546bb1

Improve model card: Add pipeline tag, library name, and update details


This PR enhances the model card by:
- Adding `pipeline_tag: text-generation` to improve discoverability on the Hugging Face Hub for relevant tasks.
- Adding `library_name: transformers` to enable the automated "How to use" widget, as evidenced by the `config.json` file (see the minimal loading sketch after this list).
- Updating the model description to include context from the paper [Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models](https://huggingface.co/papers/2508.00410).
- Correcting the GitHub repository link to point to the accurate URL: [https://github.com/tmlr-group/Co-rewarding](https://github.com/tmlr-group/Co-rewarding).
- Updating the BibTeX entry in the `Citation` section to ensure the title precisely matches the paper's title.
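
As a quick illustration of what the added metadata advertises, here is a minimal loading sketch using the standard `transformers` text-generation pipeline. The model id is taken from the card; the prompt and generation settings are illustrative and not part of this PR.

```python
# Minimal sketch: load the model via the pipeline that the added
# `pipeline_tag: text-generation` and `library_name: transformers`
# metadata point to. Prompt and settings below are illustrative.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TMLR-Group-HF/Entropy-Qwen3-1.7B-Base",
)

output = generator("Question: What is 17 * 24? Answer:", max_new_tokens=64)
print(output[0]["generated_text"])
```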

Files changed (1)
  1. README.md +12 -8
README.md CHANGED
````diff
@@ -1,19 +1,23 @@
 ---
 license: mit
+pipeline_tag: text-generation
+library_name: transformers
 ---
+
 ## TMLR-Group-HF/Entropy-Qwen3-1.7B-Base
 
-This is the Qwen3-1.7B-Base model trained by Entropy Minimization method using MATH training set.
+This is the Qwen3-1.7B-Base model trained by the Entropy Minimization method using the MATH training set. This model is a baseline evaluated in the paper [Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models](https://huggingface.co/papers/2508.00410).
 
-If you are interested in Co-Reward, you can find more details on our Github Repo [https://github.com/tmlr-group/Co-Reward].
+If you are interested in the Co-rewarding framework and its implementation, you can find more details on the official GitHub Repository: [https://github.com/tmlr-group/Co-rewarding](https://github.com/tmlr-group/Co-rewarding).
 
 ## Citation
 
-```
-@article{zhang2025coreward,
-title={Co-Reward: Self-supervised Reinforcement Learning for Large Language Model Reasoning via Contrastive Agreement},
-author={Zizhuo Zhang and Jianing Zhu and Xinmu Ge and Zihua Zhao and Zhanke Zhou and Xuan Li and Xiao Feng and Jiangchao Yao and Bo Han},
-journal={arXiv preprint arXiv:2508.00410}
-year={2025},
+If you use our datasets or models, please cite our paper!
+```bibtex
+@article{zhang2025co,
+title={Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models},
+author={Zhang, Zizhuo and Zhu, Jianing and Ge, Xinmu and Zhao, Zihua and Zhou, Zhanke and Li, Xuan and Feng, Xiao and Yao, Jiangchao and Han, Bo},
+journal={arXiv preprint arXiv:2508.00410},
+year={2025}
 }
 ```
````