Improve dataset card: Add paper/code links, tasks, tags, description, sample usage, and citation

#1
by nielsr HF Staff - opened
Files changed (1)
README.md +63 -1
README.md CHANGED
@@ -1,4 +1,66 @@
  ---
  license: apache-2.0
+ task_categories:
+ - text-generation
+ - reinforcement-learning
+ language:
+ - en
+ tags:
+ - llm
+ - math
+ - reasoning
+ - fine-tuning
  ---
- Selected easy prompts used to train Qwen2.5-Math-7B.
+
+ This dataset contains selected easy prompts used to train Qwen2.5-Math-7B, as part of the research presented in the paper [Reinforce-Ada: An Adaptive Sampling Framework for Reinforce-Style LLM Training](https://huggingface.co/papers/2510.04996).
+
+ Reinforce-Ada is an adaptive sampling framework for online reinforcement learning (RL) post-training of large language models (LLMs) on reasoning tasks. It addresses the "signal collapse" problem by continuously reallocating sampling effort to the prompts with the greatest uncertainty or learning potential. This dataset provides the specific prompts used in the paper's experiments to support that adaptive sampling process.
+
+ **Paper:** [https://huggingface.co/papers/2510.04996](https://huggingface.co/papers/2510.04996)
+ **Code:** [https://github.com/RLHFlow/Reinforce-Ada](https://github.com/RLHFlow/Reinforce-Ada)
+
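+ As a conceptual illustration only (not the paper's exact algorithm), the sketch below shows the allocation idea: prompts whose empirical pass rate is neither near 0 nor near 1 carry the most learning signal, so they receive extra generations. All names and thresholds here are illustrative assumptions:
+
+ ```python
+ # Hypothetical sketch: give extra samples to prompts with uncertain pass rates.
+ def allocate_samples(pass_rates, base_n=4, extra_n=8, low=0.2, high=0.8):
+     """pass_rates: dict mapping prompt id -> empirical pass rate in [0, 1]."""
+     budget = {}
+     for pid, p in pass_rates.items():
+         # Prompts that are always solved (p ~ 1) or never solved (p ~ 0)
+         # yield little gradient signal; uncertain prompts get more samples.
+         budget[pid] = base_n + (extra_n if low <= p <= high else 0)
+     return budget
+
+ print(allocate_samples({"q1": 0.0, "q2": 0.5, "q3": 1.0}))
+ # {'q1': 4, 'q2': 12, 'q3': 4}
+ ```
+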
+ ## Sample Usage
+
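+ To inspect the prompts directly, a minimal sketch using the `datasets` library; the repo id below is a placeholder, so replace it with this dataset's actual Hub path:
+
+ ```python
+ from datasets import load_dataset
+
+ # Placeholder repo id -- substitute this dataset's actual Hub path.
+ ds = load_dataset("RLHFlow/reinforce-ada-easy-prompts", split="train")
+ print(ds[0])
+ ```
+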
+ To prepare and process the training data for the Reinforce-Ada framework, use the scripts provided in the associated GitHub repository.
+
+ First, prepare the training and test datasets; adjust `pass_rate` for hard or easy prompt selection:
+
+ ```bash
+ # adjust pass_rate to 0.125 and 0.313 for hard and easy prompt selection, respectively.
+ python3 scripts/prepare_data.py
+ ```
+
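+ The threshold-based selection can be pictured as a simple filter over per-prompt pass rates. The field name and the direction of each comparison below are illustrative assumptions, not the repository's verified logic:
+
+ ```python
+ # Hypothetical sketch: split prompts into easy/hard by empirical pass rate.
+ def select_prompts(records, pass_rate=0.313, easy=True):
+     """records: iterable of dicts with a 'pass_rate' field in [0, 1]."""
+     if easy:
+         return [r for r in records if r["pass_rate"] >= pass_rate]
+     return [r for r in records if r["pass_rate"] <= pass_rate]
+
+ data = [{"prompt": "2+2?", "pass_rate": 0.9}, {"prompt": "IMO P6", "pass_rate": 0.05}]
+ print(select_prompts(data, pass_rate=0.313, easy=True))  # keeps the easy prompt
+ ```
+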
+ After preparing the data, convert it to the `verl` training format and generate a validation set using the following commands:
+
+ ```bash
+ # Convert to verl training format
+ echo "Converting to verl training format..."
+ python3 data_process/reformat.py \
+     --local_dir ${output_dir} \
+     --model_name_or_path ${model_name} \
+     --data_source ${data_name}
+
+ # Generate validation set
+ echo "Generating validation set..."
+ python3 data_process/get_validation_set.py \
+     --local_dir ${output_dir} \
+     --model_name_or_path ${model_name}
+ ```
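+
+ For reference, `verl`-style RL training data is typically stored as parquet rows pairing a chat-format prompt with reward metadata; the exact fields below are an illustrative assumption rather than this repository's verified schema:
+
+ ```python
+ # Hypothetical sketch of a single verl-style training record.
+ record = {
+     "data_source": "math_easy",                       # dataset identifier
+     "prompt": [{"role": "user", "content": "2+2?"}],  # chat-format prompt
+     "ability": "math",
+     "reward_model": {"style": "rule", "ground_truth": "4"},
+     "extra_info": {"split": "train", "index": 0},
+ }
+ print(record["prompt"][0]["content"])
+ ```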
+
+ For more details on environment setup, experimentation, and using the processed training sets and checkpoints, please refer to the [Reinforce-Ada GitHub repository](https://github.com/RLHFlow/Reinforce-Ada).
+
+ ## Citation
+
+ If you find our paper or code helpful, please cite our work:
+
+ ```bibtex
+ @misc{xiong2025reinforceada,
+       title={Reinforce-Ada: An Adaptive Sampling Framework for Reinforce-Style LLM Training},
+       author={Wei Xiong and Chenlu Ye and Baohao Liao and Hanze Dong and Xinxing Xu and Christof Monz and Jiang Bian and Nan Jiang and Tong Zhang},
+       year={2025},
+       eprint={2510.04996},
+       archivePrefix={arXiv},
+       primaryClass={cs.LG},
+       url={https://arxiv.org/abs/2510.04996},
+ }
+ ```