---
license: mit
datasets:
- xl-zhao/PromptCoT-DS-Dataset
language:
- en
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
---
# **PromptCoT: Synthesizing Olympiad-Level Problems for Mathematical Reasoning in Large Language Models** 

[![ArXiv](https://img.shields.io/badge/arXiv-2503.02324-red)](http://arxiv.org/abs/2503.02324)  
[![GitHub](https://img.shields.io/badge/GitHub-PromptCoT-blue)](https://github.com/zhaoxlpku/PromptCoT)  

---

## 🚀 **Overview**  
The **PromptCoT-DS** series consists of distilled mathematical reasoning models trained on **more challenging problem sets generated by the PromptCoT pipeline**. The models are derived from **DeepSeek-R1-Distill-Qwen** and benefit from an enhanced training dataset designed to improve mathematical reasoning capabilities.  

✔ **PromptCoT-DS-1.5B** → Distilled from **DeepSeek-R1-Distill-Qwen-7B** (1.5B parameters)  
✔ **PromptCoT-DS-7B** → Distilled from **DeepSeek-R1-Distill-Qwen-7B** (7B parameters)  

For more details, refer to our **paper on ArXiv**: [🔗 PromptCoT: Synthesizing Olympiad-Level Problems for Mathematical Reasoning in Large Language Models](http://arxiv.org/abs/2503.02324).  

---


## 🔥 **Quick Start: Using the Model**  

### **1️⃣ Install Dependencies**  
```bash
pip install transformers vllm torch accelerate
```

### **2️⃣ Load the Model with Hugging Face Transformers**  
You can use **PromptCoT-DS** models to solve **mathematical problems** using Hugging Face’s `generate` API:  

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "xl-zhao/PromptCoT-DS-7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).to("cuda")

problem_statement = (
    "A robe takes 2 bolts of blue fiber and half that much white fiber. How many bolts in total does it take?"
)

# DeepSeek-R1-style prompt: system instruction, then user and assistant turns.
prompt = (
    "<|begin▁of▁sentence|>Please reason step by step, and put your final answer within \\boxed{}."
    "<|User|>" + problem_statement + "<|Assistant|>"
)

inputs = tokenizer(prompt, return_tensors="pt").to("cuda")

with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=32768, do_sample=True, temperature=0.6)

generated_solution = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_solution)
```
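
If the released tokenizer bundles a chat template (the DeepSeek-R1 distill family generally ships one matching the prompt format above — treat this as an assumption, not something this card guarantees), `apply_chat_template` can build the prompt without hard-coding special tokens. A minimal sketch, reusing `model`, `tokenizer`, and `problem_statement` from the snippet above:

```python
# Sketch assuming the tokenizer ships a DeepSeek-R1-style chat template.
messages = [
    {"role": "system", "content": "Please reason step by step, and put your final answer within \\boxed{}."},
    {"role": "user", "content": problem_statement},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to("cuda")

with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=32768, do_sample=True, temperature=0.6)

# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```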

---

## ⚡ **Using vLLM for Fast Inference**  
For optimized inference, use `vLLM`:  
```python
from vllm import LLM, SamplingParams

model_name = "xl-zhao/PromptCoT-DS-7B"
llm = LLM(model=model_name, tensor_parallel_size=1)

problem_statement = (
    "A robe takes 2 bolts of blue fiber and half that much white fiber. How many bolts in total does it take?"
)

prompt = (
    "<|begin▁of▁sentence|>Please reason step by step, and put your final answer within \\boxed{}."
    "<|User|>" + problem_statement + "<|Assistant|>"
)

sampling_params = SamplingParams(temperature=0.6, max_tokens=32768)
outputs = llm.generate([prompt], sampling_params)

print(outputs[0].outputs[0].text)
```
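
The prompt instructs the model to wrap its final answer in `\boxed{…}`, so a tiny helper can pull that out of the generation for downstream checking. `extract_boxed_answer` is a hypothetical helper, not part of the repository, and the regex only handles answers without nested braces:

```python
import re
from typing import Optional

def extract_boxed_answer(text: str) -> Optional[str]:
    # Return the contents of the last \boxed{...}; ignores nested braces.
    matches = re.findall(r"\\boxed\{([^{}]*)\}", text)
    return matches[-1] if matches else None

print(extract_boxed_answer(outputs[0].outputs[0].text))  # e.g. "3"
```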

---

## 🔗 **Full Usage & Advanced Options**  
For advanced usage, including batch inference and evaluation on mathematical benchmarks, refer to the **full repository on GitHub**:  
🔹 [GitHub: PromptCoT](https://github.com/zhaoxlpku/PromptCoT)  
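
As a quick preview (the repository covers the full evaluation setup), vLLM accepts a list of prompts in a single `generate` call. A minimal, self-contained batch-inference sketch; the problem texts are illustrative:

```python
from vllm import LLM, SamplingParams

llm = LLM(model="xl-zhao/PromptCoT-DS-7B", tensor_parallel_size=1)
sampling_params = SamplingParams(temperature=0.6, max_tokens=32768)

# Illustrative problems; substitute your own benchmark questions.
problems = [
    "What is 2 + 2?",
    "Compute the sum of the first 10 positive integers.",
]
prompts = [
    "<|begin▁of▁sentence|>Please reason step by step, and put your final answer within \\boxed{}."
    "<|User|>" + p + "<|Assistant|>"
    for p in problems
]

for out in llm.generate(prompts, sampling_params):
    print(out.outputs[0].text)
```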

---

## 📜 **Citation**  
If you use **PromptCoT**, please consider citing:  
```bibtex
@article{zhao2025promptcot,
  author    = {Zhao, Xueliang and Wu, Wei and Guan, Jian and Kong, Lingpeng},
  title     = {PromptCoT: Synthesizing Olympiad-Level Problems for Mathematical Reasoning in Large Language Models},
  year      = {2025},
  journal   = {arXiv preprint arXiv:2503.02324},
  url       = {http://arxiv.org/abs/2503.02324}
}
```