---
license: llama3.2
language:
- en
base_model:
- meta-llama/Llama-3.2-1B
pipeline_tag: text-generation
---
# InfiR-1B-Instruct

InfiR aims to advance AI systems by improving reasoning while reducing adoption barriers and addressing privacy concerns through smaller model sizes. InfiR-1B-Instruct is a 1B-parameter, reasoning-focused model continually pre-trained from Llama-3.2-1B.

## Model Details

### Model Description

- **Developed by:** InfiX
- **Language(s) (NLP):** English
- **Continually pre-trained from:** [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B)

### Model Sources

- **Repository:** [InfiXAI/InfiR](https://github.com/InfiXAI/InfiR)
- **Paper:** [arXiv:2502.11573](https://arxiv.org/abs/2502.11573)

## Uses

InfiR-1B-Instruct is intended for English text generation that benefits from strong reasoning in a small, locally deployable model: math word problems, code generation, and step-by-step question answering, as shown in the examples below.

## Bias, Risks, and Limitations

- **Performance gaps** remain versus 70B+ models on very hard reasoning benchmarks (e.g., OlympiadBench).
- **Safety & bias**: the model inherits the Llama-3.2 tokenizer and pre-training distribution, so it may reflect biases present in web data.
- **Knowledge cut-off**: mid-2023.
- **Evaluation** has focused on English benchmarks; multilingual robustness has not been verified.

## How to Get Started with the Model

### Installation

First, install the required dependencies:

```bash
pip install torch transformers
```

For optimal performance, we recommend using PyTorch 2.0+ and CUDA 11.8+.
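
To confirm that your environment meets those recommendations, an optional quick check:

```python
# Optional environment check: verify the PyTorch/CUDA setup recommended above.
import torch
import transformers

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("transformers:", transformers.__version__)
```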

### Basic Usage

Here's a simple example to get started with InfiR-1B-Instruct:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Define messages in chat format
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "A new program had 60 downloads in the first month. The number of downloads in the second month was three times as many as the downloads in the first month, but then reduced by 30% in the third month. How many downloads did the program have total over the three months? Think step by step."},
]

# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("InfiX-ai/InfiR-1B-Instruct")
model = AutoModelForCausalLM.from_pretrained("InfiX-ai/InfiR-1B-Instruct")

# Apply chat template and generate
raw_prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(raw_prompt, return_tensors="pt")
outputs = model.generate(inputs["input_ids"], max_new_tokens=2048)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
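
On a GPU, loading the weights in bfloat16 roughly halves memory versus fp32 and matches the bf16 training precision noted under Training Details. A minimal variant of the loading step above, using standard `transformers`/`torch` options (nothing InfiR-specific):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "InfiX-ai/InfiR-1B-Instruct"
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# bf16 weights use ~2 bytes per parameter instead of 4
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16).to(device)

# Move inputs to the same device before generating, e.g.:
# inputs = tokenizer(raw_prompt, return_tensors="pt").to(device)
```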

### Advanced Usage Examples

#### 1. Mathematical Reasoning

```python
# Mathematical problem solving with chat format
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "If a rectangle has a length of 8 units and a width of 6 units, what is its area and perimeter? Solve this step by step."},
]

raw_prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(raw_prompt, return_tensors="pt")
outputs = model.generate(
    inputs["input_ids"],
    max_new_tokens=512,
    temperature=0.1,
    do_sample=True,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

#### 2. Code Generation

```python
# Code generation example with chat format
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write a Python function to calculate the factorial of a number."},
]

raw_prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(raw_prompt, return_tensors="pt")
outputs = model.generate(
    inputs["input_ids"],
    max_new_tokens=256,
    temperature=0.2,
    do_sample=True,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

#### 3. Chain-of-Thought Reasoning

```python
# Chain-of-thought reasoning with chat format
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "A train travels 120 km in 2 hours. What is its speed in km/h? Let's approach this step by step."},
]

raw_prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(raw_prompt, return_tensors="pt")
outputs = model.generate(
    inputs["input_ids"],
    max_new_tokens=300,
    temperature=0.3,
    do_sample=True,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Training Details

### Training Data

| Stage | Tokens | Composition |
|-------|--------|-------------|
| Pre-training | 900B | 52% code, 48% high-quality web (math, science, encyclopedic) |
| Annealing | 40B | extra math & code plus synthetic samples |
| SFT | ~4M | Infinity-Instruct, Orca-AgentInstruct-1M, NuminaMath, ScaleQuest (filtered) |

Data cleaning: heuristic filters, MinHash de-duplication, 10-gram benchmark decontamination, and reward-model rejection sampling.
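
To illustrate the 10-gram decontamination step, here is a minimal sketch of the idea (our own illustration, not the InfiR pipeline; `benchmark_texts` and `corpus` are hypothetical stand-ins):

```python
# Hedged sketch of 10-gram benchmark decontamination; not the authors' code.
def ngrams(tokens, n=10):
    """All contiguous n-grams of a token list, as a set of tuples."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

# Hypothetical inputs: benchmark items to protect, and a raw training corpus.
benchmark_texts = ["A train travels 120 km in 2 hours. What is its speed in km/h?"]
corpus = [
    "an unrelated web document about rectangles and perimeters",
    "A train travels 120 km in 2 hours. What is its speed in km/h? Answer: 60",
]

benchmark_ngrams = set()
for text in benchmark_texts:
    benchmark_ngrams |= ngrams(text.split())

# Drop any training document sharing at least one 10-gram with a benchmark.
clean_corpus = [doc for doc in corpus if not (ngrams(doc.split()) & benchmark_ngrams)]
print(len(clean_corpus))  # 1: the contaminated document is removed
```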

### Training Procedure

| Hyper-parameter | Value |
|-----------------|-------|
| Precision | bf16 mixed |
| Optimizer | AdamW |
| LR (pre-train) | 1.4e-3, cosine decay to 0 |
| LR (SFT) | 2e-5, cosine with 10% warm-up |
| Batch size | 2048 (pre-train), 128 (SFT) |
| Sequence length | 4096 |
| Epochs | 1 (pre-train), 1 (anneal), 4 (SFT) |
| GPUs | 64 × H800, 5760 GPU-hours total |
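
For reference, a minimal sketch of the SFT schedule from the table (linear warm-up into cosine decay to zero); the function name and step granularity are illustrative, not the training code:

```python
import math

def sft_lr(step, total_steps, peak_lr=2e-5, warmup_frac=0.10):
    """Linear warm-up for the first 10% of steps, then cosine decay to 0."""
    warmup_steps = max(1, int(total_steps * warmup_frac))
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * peak_lr * (1 + math.cos(math.pi * progress))

# Example: LR at a few points in a hypothetical 1,000-step run
print([round(sft_lr(s, 1000), 8) for s in (0, 50, 100, 550, 1000)])
```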

## Evaluation

### Benchmarks & Results

| Benchmark | InfiR-1B-Instruct | Llama-3.2-1B-Instruct | Qwen-2.5-1.5B-Instruct |
|-----------|-------------------|-----------------------|------------------------|
| MMLU | 50.22 | 46.27 | 61.78 |
| GSM8K | 70.9 | 47.9 | 74.3 |
| MATH | 46.4 | 30.0 | 53.4 |
| HumanEval | 58.54 | 39.63 | 51.83 |
| MBPP | 56.03 | 49.03 | 56.81 |

InfiR-1B-Instruct outperforms Llama-3.2-1B-Instruct on all five benchmarks and leads the larger Qwen-2.5-1.5B-Instruct on HumanEval.

## Technical Specifications

### Model Architecture and Objective

- Base: Llama-3.2-1B (32 layers, 32 heads, RoPE, GQA; 2k context extended to 4k)
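
To inspect the architecture fields of the released checkpoint directly (a quick sanity check; the exact values come from the published config, not from this card):

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("InfiX-ai/InfiR-1B-Instruct")
# Print the fields that correspond to the summary above
print("layers:", cfg.num_hidden_layers)
print("attention heads:", cfg.num_attention_heads)
print("KV heads (GQA):", cfg.num_key_value_heads)
print("max positions:", cfg.max_position_embeddings)
```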

## Citation

**BibTeX:**

```bibtex
@misc{xie2025infir,
  title={InfiR: Crafting Effective Small Language Models and Multimodal Small Language Models in Reasoning},
  author={Xie, Congkai and Cai, Shuo and Wang, Wenjun and others},
  year={2025},
  eprint={2502.11573},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

**APA:**

Xie, C., Cai, S., Wang, W., et al. (2025). *InfiR: Crafting effective small language models and multimodal small language models in reasoning*. arXiv:2502.11573.

---

## Glossary

- **SLM**: Small Language Model (<2B parameters)
- **CoT**: Chain-of-Thought prompting or training

---