Update README.md
README.md
CHANGED

(Removed: the previous model card content, which described the thinking-only **Qwen3-30B-A3B-Thinking-2507** variant, including its highlights, performance table, quickstart with `</think>` parsing, SGLang/vLLM deployment commands, Qwen-Agent configuration, and best-practice settings.)

---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B-Instruct-2507/blob/main/LICENSE
pipeline_tag: text-generation
---

# Qwen3-30B-A3B-Instruct-2507
<a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
    <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>

## Highlights

We introduce the updated version of the **Qwen3-30B-A3B non-thinking mode**, named **Qwen3-30B-A3B-Instruct-2507**, featuring the following key enhancements:

- **Significant improvements** in general capabilities, including **instruction following, logical reasoning, text comprehension, mathematics, science, coding and tool usage**.
- **Substantial gains** in long-tail knowledge coverage across **multiple languages**.
- **Markedly better alignment** with user preferences in **subjective and open-ended tasks**, enabling more helpful responses and higher-quality text generation.
- **Enhanced capabilities** in **256K long-context understanding**.



## Model Overview

**Qwen3-30B-A3B-Instruct-2507** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 30.5B in total and 3.3B activated
- Number of Activated Experts: 8
- Context Length: **262,144 natively**.

**NOTE: This model supports only non-thinking mode and does not generate ``<think></think>`` blocks in its output. Meanwhile, specifying `enable_thinking=False` is no longer required.**
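
As a minimal illustration of this behavior, the default chat template can be applied with no thinking-related arguments at all. The sketch below uses only the tokenizer and is an illustrative addition, not part of the official card:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-30B-A3B-Instruct-2507")

messages = [{"role": "user", "content": "Hello!"}]

# No enable_thinking flag is needed; the template is non-thinking by default.
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(text)
```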

For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).

## Performance

| | Deepseek-V3-0324 | GPT-4o-0327 | Gemini-2.5-Flash Non-Thinking | Qwen3-235B-A22B Non-Thinking | Qwen3-30B-A3B Non-Thinking | Qwen3-30B-A3B-Instruct-2507 |
|--- | --- | --- | --- | --- | --- | --- |
| **Knowledge** | | | | | | |
| MMLU-Pro | **81.2** | 79.8 | 81.1 | 75.2 | 69.1 | 78.4 |
| MMLU-Redux | 90.4 | **91.3** | 90.6 | 89.2 | 84.1 | 89.3 |
| GPQA | 68.4 | 66.9 | **78.3** | 62.9 | 54.8 | 70.4 |
| SuperGPQA | **57.3** | 51.0 | 54.6 | 48.2 | 42.2 | 53.4 |
| **Reasoning** | | | | | | |
| AIME25 | 46.6 | 26.7 | **61.6** | 24.7 | 21.6 | 61.3 |
| HMMT25 | 27.5 | 7.9 | **45.8** | 10.0 | 12.0 | 43.0 |
| ZebraLogic | 83.4 | 52.6 | 57.9 | 37.7 | 33.2 | **90.0** |
| LiveBench 20241125 | 66.9 | 63.7 | **69.1** | 62.5 | 59.4 | 69.0 |
| **Coding** | | | | | | |
| LiveCodeBench v6 (25.02-25.05) | **45.2** | 35.8 | 40.1 | 32.9 | 29.0 | 43.2 |
| MultiPL-E | 82.2 | 82.7 | 77.7 | 79.3 | 74.6 | **83.8** |
| Aider-Polyglot | 55.1 | 45.3 | 44.0 | **59.6** | 24.4 | 35.6 |
| **Alignment** | | | | | | |
| IFEval | 82.3 | 83.9 | 84.3 | 83.2 | 83.7 | **84.7** |
| Arena-Hard v2* | 45.6 | 61.9 | 58.3 | 52.0 | 24.8 | **69.0** |
| Creative Writing v3 | 81.6 | 84.9 | 84.6 | 80.4 | 68.1 | **86.0** |
| WritingBench | 74.5 | 75.5 | 80.5 | 77.0 | 72.2 | **85.5** |
| **Agent** | | | | | | |
| BFCL-v3 | 64.7 | 66.5 | 66.1 | **68.0** | 58.6 | 65.1 |
| TAU1-Retail | 49.6 | 60.3# | **65.2** | 65.2 | 38.3 | 59.1 |
| TAU1-Airline | 32.0 | 42.8# | **48.0** | 32.0 | 18.0 | 40.0 |
| TAU2-Retail | **71.1** | 66.7# | 64.3 | 64.9 | 31.6 | 57.0 |
| TAU2-Airline | 36.0 | 42.0# | **42.5** | 36.0 | 18.0 | 38.0 |
| TAU2-Telecom | **34.0** | 29.8# | 16.9 | 24.6 | 18.4 | 12.3 |
| **Multilingualism** | | | | | | |
| MultiIF | 66.5 | 70.4 | 69.4 | 70.2 | **70.8** | 67.9 |
| MMLU-ProX | 75.8 | 76.2 | **78.3** | 73.2 | 65.1 | 72.0 |
| INCLUDE | 80.1 | 82.1 | **83.8** | 75.6 | 67.8 | 71.9 |
| PolyMATH | 32.2 | 25.5 | 41.9 | 27.0 | 23.3 | **43.1** |

*: For reproducibility, we report the win rates evaluated by GPT-4.1.

\#: Results were generated using GPT-4o-20241120, as access to the native function calling API of GPT-4o-0327 was unavailable.

## Quickstart

The following contains a code snippet illustrating how to use the model to generate content based on given inputs.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-30B-A3B-Instruct-2507"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=16384
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

# decode only the newly generated tokens (no <think> block is produced by this model)
content = tokenizer.decode(output_ids, skip_special_tokens=True)

print("content:", content)
```
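
If you prefer to see tokens as they are produced instead of decoding the completion at the end, one option is the `TextStreamer` utility from `transformers`. This is an optional sketch that reuses the `model`, `tokenizer`, and `model_inputs` objects from the snippet above:

```python
from transformers import TextStreamer

# Stream decoded tokens to stdout as they are generated; prompt tokens are skipped.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

_ = model.generate(
    **model_inputs,
    max_new_tokens=16384,
    streamer=streamer,
)
```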

For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
- SGLang:
```shell
python -m sglang.launch_server --model-path Qwen/Qwen3-30B-A3B-Instruct-2507 --context-length 262144
```
- vLLM:
```shell
vllm serve Qwen/Qwen3-30B-A3B-Instruct-2507 --max-model-len 262144
```

**Note: If you encounter out-of-memory (OOM) issues, consider reducing the context length to a shorter value, such as `32,768`.**
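
Once one of these servers is running, any OpenAI-compatible client can query it. Below is a minimal sketch using the official `openai` Python package; it assumes the vLLM command above with its default port 8000 (adjust `base_url` for SGLang, whose default port differs), and the prompt is only an example:

```python
from openai import OpenAI

# Point the client at the locally deployed OpenAI-compatible endpoint.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen3-30B-A3B-Instruct-2507",
    messages=[{"role": "user", "content": "Give me a short introduction to large language models."}],
    temperature=0.7,
    top_p=0.8,
    max_tokens=16384,
)
print(response.choices[0].message.content)
```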

For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers have also supported Qwen3.

## Agentic Use

Qwen3 excels in tool calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic ability of Qwen3. To define the available tools, you can use the MCP configuration file, use the integrated tools of Qwen-Agent, or integrate other tools by yourself.

```python
from qwen_agent.agents import Assistant

# Define LLM
llm_cfg = {
    'model': 'Qwen3-30B-A3B-Instruct-2507',

    # Use a custom endpoint compatible with OpenAI API:
    'model_server': 'http://localhost:8000/v1',  # api_base
    'api_key': 'EMPTY',
}

# Define Tools
tools = [
    'code_interpreter',  # built-in tool; MCP servers and custom tools can also be listed here
]

# Define Agent and run a query
bot = Assistant(llm=llm_cfg, function_list=tools)

messages = [{'role': 'user', 'content': 'Introduce the latest developments of Qwen.'}]
for responses in bot.run(messages=messages):
    pass
print(responses)
```

## Best Practices

To achieve optimal performance, we recommend the following settings:

1. **Sampling Parameters**:
   - We suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0` (see the example after this list for how these map onto `generate`).
   - For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.

2. **Adequate Output Length**: We recommend using an output length of 16,384 tokens for most queries, which is adequate for instruct models.

3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
   - **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
   - **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
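
A minimal sketch of passing the recommended sampling settings to the Hugging Face `generate` call, reusing the `model` and `model_inputs` objects from the Quickstart above (note that `min_p` requires a reasonably recent `transformers` release):

```python
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=16384,
    do_sample=True,   # sampling must be enabled for the parameters below to take effect
    temperature=0.7,
    top_p=0.8,
    top_k=20,
    min_p=0.0,
)
```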

### Citation

If you find our work helpful, feel free to give us a cite.