littlebird13 committed · Commit b01caad · verified · 1 Parent(s): 6af341f

Update README.md

Files changed (1)
  1. README.md +61 -89
README.md CHANGED
@@ -1,30 +1,29 @@
 ---
 library_name: transformers
 license: apache-2.0
- license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B-Thinking-2507/blob/main/LICENSE
 pipeline_tag: text-generation
 ---

- # Qwen3-30B-A3B-Thinking-2507
- <a href="https://chat.qwen.ai/?model=Qwen3-30B-A3B-2507" target="_blank" style="margin: 2px;">
 <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
 </a>

 ## Highlights

- # Highlights

- Over the past three months, we have continued to scale the **thinking capability** of Qwen3-30B-A3B, improving both the **quality and depth** of reasoning. We are pleased to introduce **Qwen3-30B-A3B-Thinking-2507**, featuring the following key enhancements:

- - **Significantly improved performance** on reasoning tasks, including logical reasoning, mathematics, science, coding, and academic benchmarks that typically require human expertise. Notably, **Qwen3-30B-A3B even outperforms the previous Qwen3-235B-A22B by a large margin**.
- - **Markedly better general capabilities**, such as instruction following, tool usage, text generation, and alignment with human preferences.
- - **Enhanced 256K long-context understanding** capabilities.
-
- **NOTE**: This version has an increased thinking length. We strongly recommend its use in highly complex reasoning tasks.

 ## Model Overview

- **Qwen3-30B-A3B-Thinking-2507** has the following features:
 - Type: Causal Language Models
 - Training Stage: Pretraining & Post-training
 - Number of Parameters: 30.5B in total and 3.3B activated
@@ -35,50 +34,50 @@ Over the past three months, we have continued to scale the **thinking capability
 - Number of Activated Experts: 8
 - Context Length: **262,144 natively**.

- **NOTE: This model supports only thinking mode. Meanwhile, specifying `enable_thinking=True` is no longer required.**
-
- Additionally, to enforce model thinking, the default chat template automatically includes `<think>`. Therefore, it is normal for the model's output to contain only `</think>` without an explicit opening `<think>` tag.
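For illustration, a raw completion therefore typically takes the following shape (hypothetical output, shown only to make the missing opening tag concrete):

```
The user is asking for a short introduction, so I should cover... (thinking tokens)
</think>

Here is the final answer...
```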

 For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).

 ## Performance

- | | Gemini2.5-Flash-Thinking | Qwen3-235B-A22B Thinking | Qwen3-30B-A3B Thinking | Qwen3-30B-A3B-Thinking-2507 |
- |--- | --- | --- | --- | --- |
- | **Knowledge** | | | | |
- | MMLU-Pro | 81.9 | **82.8** | 78.5 | 80.9 |
- | MMLU-Redux | 92.1 | **92.7** | 89.5 | 91.4 |
- | GPQA | **82.8** | 71.1 | 65.8 | 73.4 |
- | SuperGPQA | 57.8 | **60.7** | 51.8 | 56.8 |
 | **Reasoning** | | | | | | |
- | AIME25 | 72.0 | 81.5 | 70.9 | **85.0** |
- | HMMT25 | 64.2 | 62.5 | 49.8 | **71.4** |
- | LiveBench 20241125 | 74.3 | **77.1** | 74.3 | 76.8 |
- | **Coding** | | | | |
- | LiveCodeBench v6 (25.02-25.05) | 61.2 | 55.7 | 57.4 | **66.0** |
- | CFEval | 1995 | **2056** | 1940 | 2044 |
- | OJBench | 23.5 | **25.6** | 20.7 | 25.1 |
- | **Alignment** | | | | |
- | IFEval | **89.8** | 83.4 | 86.5 | 88.9 |
- | Arena-Hard v2$ | 56.7 | **61.5** | 36.3 | 56.0 |
- | Creative Writing v3 | **85.0** | 84.6 | 79.1 | 84.4 |
- | WritingBench | 83.9 | 80.3 | 77.0 | **85.0** |
- | **Agent** | | | | |
- | BFCL-v3 | 68.6 | 70.8 | 69.1 | **72.4** |
- | TAU1-Retail | 65.2 | 54.8 | 61.7 | **67.8** |
- | TAU1-Airline | **54.0** | 26.0 | 32.0 | 48.0 |
- | TAU2-Retail | **66.7** | 40.4 | 34.2 | 58.8 |
- | TAU2-Airline | 52.0 | 30.0 | 36.0 | **58.0** |
- | TAU2-Telecom | **31.6** | 21.9 | 22.8 | 26.3 |
- | **Multilingualism** | | | | |
- | MultiIF | 74.4 | 71.9 | 72.2 | **76.4** |
- | MMLU-ProX | **80.2** | 80.0 | 73.1 | 76.4 |
- | INCLUDE | **83.9** | 78.7 | 71.9 | 74.4 |
- | PolyMATH | 49.8 | **54.7** | 46.1 | 52.6 |
-
- $ For reproducibility, we report the win rates evaluated by GPT-4.1.
-
- \& For highly challenging tasks (including PolyMATH and all reasoning and coding tasks), we use an output length of 81,920 tokens. For all other tasks, we set the output length to 32,768.

 ## Quickstart
@@ -94,7 +93,7 @@ The following contains a code snippet illustrating how to use the model generate
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer

- model_name = "Qwen/Qwen3-30B-A3B-Thinking-2507"

 # load the tokenizer and the model
 tokenizer = AutoTokenizer.from_pretrained(model_name)
@@ -119,36 +118,26 @@ model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
 # conduct text completion
 generated_ids = model.generate(
     **model_inputs,
-     max_new_tokens=32768
 )
 output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

- # parsing thinking content
- try:
-     # rindex finding 151668 (</think>)
-     index = len(output_ids) - output_ids[::-1].index(151668)
- except ValueError:
-     index = 0

- thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
- content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
-
- print("thinking content:", thinking_content)  # no opening <think> tag
 print("content:", content)
-
 ```

 For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
 - SGLang:
 ```shell
- python -m sglang.launch_server --model-path Qwen/Qwen3-30B-A3B-Thinking-2507 --context-length 262144 --reasoning-parser deepseek-r1
 ```
 - vLLM:
 ```shell
- vllm serve Qwen/Qwen3-30B-A3B-Thinking-2507 --max-model-len 262144 --enable-reasoning --reasoning-parser deepseek_r1
 ```
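Once the server is running, you can query it with any OpenAI-compatible client. Below is a minimal sketch assuming the vLLM command above; with the reasoning parser enabled, vLLM returns the thinking part separately as `reasoning_content` (the prompt is illustrative only):

```python
from openai import OpenAI

# Point the client at the local OpenAI-compatible server started above.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen3-30B-A3B-Thinking-2507",
    messages=[{"role": "user", "content": "Give me a short introduction to large language models."}],
    temperature=0.6,
    top_p=0.95,
)

message = response.choices[0].message
# The reasoning parser splits the thinking tokens from the final answer.
print("thinking content:", message.reasoning_content)
print("content:", message.content)
```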

- **Note: If you encounter out-of-memory (OOM) issues, you may consider reducing the context length to a smaller value. However, since the model may require longer token sequences for reasoning, we strongly recommend using a context length greater than 131,072 when possible.**

 For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.

@@ -161,27 +150,13 @@ To define the available tools, you can use the MCP configuration file, use the i
 from qwen_agent.agents import Assistant

 # Define LLM
- # Using Alibaba Cloud Model Studio
 llm_cfg = {
-     'model': 'qwen3-30b-a3b-thinking-2507',
-     'model_type': 'qwen_dashscope',
- }
-
- # Using OpenAI-compatible API endpoint. It is recommended to disable the reasoning and the tool call parsing
- # functionality of the deployment frameworks and let Qwen-Agent automate the related operations. For example,
- # `VLLM_USE_MODELSCOPE=true vllm serve Qwen/Qwen3-30B-A3B-Thinking-2507 --served-model-name Qwen3-30B-A3B-Thinking-2507 --tensor-parallel-size 8 --max-model-len 262144`.
- #
- # llm_cfg = {
- #     'model': 'Qwen3-30B-A3B-Thinking-2507',
- #
- #     # Use a custom endpoint compatible with OpenAI API:
- #     'model_server': 'http://localhost:8000/v1',  # api_base without reasoning and tool call parsing
- #     'api_key': 'EMPTY',
- #     'generate_cfg': {
- #         'thought_in_content': True,
- #     },
- # }

 # Define Tools
 tools = [
@@ -214,18 +189,15 @@ print(responses)
 To achieve optimal performance, we recommend the following settings:

 1. **Sampling Parameters**:
-    - We suggest using `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`.
    - For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.

- 2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 81,920 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.

 3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
    - **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
    - **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."

- 4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should include only the final output part, not the thinking content (see the sketch after this list). This is implemented in the provided Jinja2 chat template. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that this best practice is followed.
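A minimal sketch of this practice; `generate_response` is a hypothetical helper wrapping the generation and `</think>` parsing code from the Quickstart above:

```python
messages = [{"role": "user", "content": "Give me a short introduction to large language models."}]

# Hypothetical helper: runs the Quickstart generation and splits on </think>.
thinking_content, content = generate_response(messages)

# Append only the final answer; the thinking content is dropped from history.
messages.append({"role": "assistant", "content": content})
messages.append({"role": "user", "content": "Now summarize it in one sentence."})
```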

 ### Citation

 If you find our work helpful, feel free to cite us.
 
 ---
 library_name: transformers
 license: apache-2.0
+ license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B-Instruct-2507/blob/main/LICENSE
 pipeline_tag: text-generation
 ---

+ # Qwen3-30B-A3B-Instruct-2507
+ <a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
 <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
 </a>

 ## Highlights

+ We introduce the updated version of the **Qwen3-30B-A3B non-thinking mode**, named **Qwen3-30B-A3B-Instruct-2507**, featuring the following key enhancements:

+ - **Significant improvements** in general capabilities, including **instruction following, logical reasoning, text comprehension, mathematics, science, coding and tool usage**.
+ - **Substantial gains** in long-tail knowledge coverage across **multiple languages**.
+ - **Markedly better alignment** with user preferences in **subjective and open-ended tasks**, enabling more helpful responses and higher-quality text generation.
+ - **Enhanced capabilities** in **256K long-context understanding**.

+ ![image/jpeg](https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3-2507/Qwen3-30B-A3B-Instruct-2507.jpeg)

 ## Model Overview

+ **Qwen3-30B-A3B-Instruct-2507** has the following features:
 - Type: Causal Language Models
 - Training Stage: Pretraining & Post-training
 - Number of Parameters: 30.5B in total and 3.3B activated

 - Number of Activated Experts: 8
 - Context Length: **262,144 natively**.

+ **NOTE: This model supports only non-thinking mode and does not generate `<think></think>` blocks in its output. Meanwhile, specifying `enable_thinking=False` is no longer required.**
 For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).

 ## Performance

+ | | Deepseek-V3-0324 | GPT-4o-0327 | Gemini-2.5-Flash Non-Thinking | Qwen3-235B-A22B Non-Thinking | Qwen3-30B-A3B Non-Thinking | Qwen3-30B-A3B-Instruct-2507 |
+ |--- | --- | --- | --- | --- | --- | --- |
+ | **Knowledge** | | | | | | |
+ | MMLU-Pro | **81.2** | 79.8 | 81.1 | 75.2 | 69.1 | 78.4 |
+ | MMLU-Redux | 90.4 | **91.3** | 90.6 | 89.2 | 84.1 | 89.3 |
+ | GPQA | 68.4 | 66.9 | **78.3** | 62.9 | 54.8 | 70.4 |
+ | SuperGPQA | **57.3** | 51.0 | 54.6 | 48.2 | 42.2 | 53.4 |
 | **Reasoning** | | | | | | |
+ | AIME25 | 46.6 | 26.7 | **61.6** | 24.7 | 21.6 | 61.3 |
+ | HMMT25 | 27.5 | 7.9 | **45.8** | 10.0 | 12.0 | 43.0 |
+ | ZebraLogic | 83.4 | 52.6 | 57.9 | 37.7 | 33.2 | **90.0** |
+ | LiveBench 20241125 | 66.9 | 63.7 | **69.1** | 62.5 | 59.4 | 69.0 |
+ | **Coding** | | | | | | |
+ | LiveCodeBench v6 (25.02-25.05) | **45.2** | 35.8 | 40.1 | 32.9 | 29.0 | 43.2 |
+ | MultiPL-E | 82.2 | 82.7 | 77.7 | 79.3 | 74.6 | **83.8** |
+ | Aider-Polyglot | 55.1 | 45.3 | 44.0 | **59.6** | 24.4 | 35.6 |
+ | **Alignment** | | | | | | |
+ | IFEval | 82.3 | 83.9 | 84.3 | 83.2 | 83.7 | **84.7** |
+ | Arena-Hard v2* | 45.6 | 61.9 | 58.3 | 52.0 | 24.8 | **69.0** |
+ | Creative Writing v3 | 81.6 | 84.9 | 84.6 | 80.4 | 68.1 | **86.0** |
+ | WritingBench | 74.5 | 75.5 | 80.5 | 77.0 | 72.2 | **85.5** |
+ | **Agent** | | | | | | |
+ | BFCL-v3 | 64.7 | 66.5 | 66.1 | **68.0** | 58.6 | 65.1 |
+ | TAU1-Retail | 49.6 | 60.3# | **65.2** | 65.2 | 38.3 | 59.1 |
+ | TAU1-Airline | 32.0 | 42.8# | **48.0** | 32.0 | 18.0 | 40.0 |
+ | TAU2-Retail | **71.1** | 66.7# | 64.3 | 64.9 | 31.6 | 57.0 |
+ | TAU2-Airline | 36.0 | 42.0# | **42.5** | 36.0 | 18.0 | 38.0 |
+ | TAU2-Telecom | **34.0** | 29.8# | 16.9 | 24.6 | 18.4 | 12.3 |
+ | **Multilingualism** | | | | | | |
+ | MultiIF | 66.5 | 70.4 | 69.4 | 70.2 | **70.8** | 67.9 |
+ | MMLU-ProX | 75.8 | 76.2 | **78.3** | 73.2 | 65.1 | 72.0 |
+ | INCLUDE | 80.1 | 82.1 | **83.8** | 75.6 | 67.8 | 71.9 |
+ | PolyMATH | 32.2 | 25.5 | 41.9 | 27.0 | 23.3 | **43.1** |
+
+ *: For reproducibility, we report the win rates evaluated by GPT-4.1.
+
+ \#: Results were generated using GPT-4o-20241120, as access to the native function calling API of GPT-4o-0327 was unavailable.

 ## Quickstart

 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer

+ model_name = "Qwen/Qwen3-30B-A3B-Instruct-2507"

 # load the tokenizer and the model
 tokenizer = AutoTokenizer.from_pretrained(model_name)

 # conduct text completion
 generated_ids = model.generate(
     **model_inputs,
+     max_new_tokens=16384
 )
 output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

+ content = tokenizer.decode(output_ids, skip_special_tokens=True)

 print("content:", content)
 ```

 For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
 - SGLang:
 ```shell
+ python -m sglang.launch_server --model-path Qwen/Qwen3-30B-A3B-Instruct-2507 --context-length 262144
 ```
 - vLLM:
 ```shell
+ vllm serve Qwen/Qwen3-30B-A3B-Instruct-2507 --max-model-len 262144
 ```

+ **Note: If you encounter out-of-memory (OOM) issues, consider reducing the context length to a shorter value, such as `32,768`.**

 For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.

 from qwen_agent.agents import Assistant

 # Define LLM
 llm_cfg = {
+     'model': 'Qwen3-30B-A3B-Instruct-2507',

+     # Use a custom endpoint compatible with OpenAI API:
+     'model_server': 'http://localhost:8000/v1',  # api_base
+     'api_key': 'EMPTY',
+ }

 # Define Tools
 tools = [
 
 To achieve optimal performance, we recommend the following settings:

 1. **Sampling Parameters**:
+    - We suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0` (see the sketch after this list).
    - For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.

+ 2. **Adequate Output Length**: We recommend using an output length of 16,384 tokens for most queries, which is adequate for instruct models.

 3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
    - **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
    - **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."

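A minimal sketch applying these recommendations with Transformers; `model`, `tokenizer`, and `model_inputs` are assumed to be set up as in the Quickstart above:

```python
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=16384,  # adequate output length for most queries
    do_sample=True,        # sampling is required for the settings below to apply
    temperature=0.7,
    top_p=0.8,
    top_k=20,
    min_p=0.0,
)
```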

 ### Citation

 If you find our work helpful, feel free to cite us.