littlebird13 committed on
Commit c1919f6 · verified · 1 Parent(s): e66e5a4

Update README.md

Files changed (1): README.md (+9 -7)
README.md CHANGED

@@ -83,16 +83,18 @@ print("thinking content:", thinking_content)
 print("content:", content)
 ```
 
-For deployment, you can use `vllm>=0.8.5` or `sglang>=0.4.5.post2` to create an OpenAI-compatible API endpoint:
-- vLLM:
+For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.4` to create an OpenAI-compatible API endpoint:
+- SGLang:
 ```shell
-vllm serve Qwen/Qwen3-4B-FP8 --enable-reasoning --reasoning-parser deepseek_r1
+python -m sglang.launch_server --model-path Qwen/Qwen3-4B-FP8 --reasoning-parser qwen3
 ```
-- SGLang:
+- vLLM:
 ```shell
-python -m sglang.launch_server --model-path Qwen/Qwen3-4B-FP8 --reasoning-parser deepseek-r1
+vllm serve Qwen/Qwen3-4B-FP8 --enable-reasoning --reasoning-parser deepseek_r1
 ```
 
+For local use, applications such as llama.cpp, Ollama, LMStudio, and MLX-LM also support Qwen3.
+
 ## Note on FP8
 
 For convenience and performance, we have provided an `fp8`-quantized model checkpoint for Qwen3, whose name ends with `-FP8`. The quantization method is fine-grained `fp8` quantization with a block size of 128. You can find more details in the `quantization_config` field in `config.json`.

@@ -128,8 +130,8 @@ However, please pay attention to the following known issues:
 ## Switching Between Thinking and Non-Thinking Mode
 
 > [!TIP]
-> The `enable_thinking` switch is also available in APIs created by vLLM and SGLang.
-> Please refer to our documentation for [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) and [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) users.
+> The `enable_thinking` switch is also available in APIs created by SGLang and vLLM.
+> Please refer to our documentation for [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) and [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) users.
 
 ### `enable_thinking=True`
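
The serve commands in the diff expose an OpenAI-compatible chat endpoint, and the TIP notes that `enable_thinking` can be toggled through those APIs. A minimal client-side sketch of the request body, assuming the toggle is passed via a `chat_template_kwargs` field as described in Qwen's deployment docs (the server URL and exact field names are assumptions, and the payload is only built here, not sent):

```python
import json

def build_chat_request(prompt: str, enable_thinking: bool = True) -> str:
    """Build the JSON body for an OpenAI-compatible /v1/chat/completions call.

    The `chat_template_kwargs` field follows the convention described in
    Qwen's SGLang/vLLM deployment docs; treat the field names as assumptions.
    """
    payload = {
        "model": "Qwen/Qwen3-4B-FP8",
        "messages": [{"role": "user", "content": prompt}],
        # Toggle between thinking and non-thinking mode server-side.
        "chat_template_kwargs": {"enable_thinking": enable_thinking},
    }
    return json.dumps(payload)

body = build_chat_request("Give me a short introduction to LLMs.", enable_thinking=False)
print(body)
```

Sending this body to a hypothetical `http://localhost:8000/v1/chat/completions` started by either serve command above would then return the completion with or without the thinking block, depending on the flag.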
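
The FP8 note in the diff mentions fine-grained `fp8` quantization with a block size of 128. An illustrative sketch of what per-block scaling means, in pure Python: one scale factor is derived per 128-element block from its absolute maximum. This is not Qwen's actual quantization kernel (which casts to the `e4m3` format on hardware); `448.0` is the largest finite value representable in fp8 `e4m3`, and everything else here is a simplified assumption:

```python
FP8_E4M3_MAX = 448.0  # largest finite value in fp8 e4m3
BLOCK = 128           # block size from the FP8 note

def block_scales(weights):
    """Return one scale per 128-element block: max(|w|) / FP8_E4M3_MAX.

    Dividing a block by its scale maps it into the fp8-representable
    range; storing one scale per block is the "fine-grained" part.
    """
    scales = []
    for i in range(0, len(weights), BLOCK):
        block = weights[i:i + BLOCK]
        amax = max(abs(w) for w in block)
        scales.append(amax / FP8_E4M3_MAX if amax > 0 else 1.0)
    return scales

weights = [0.01 * ((i % 7) - 3) for i in range(256)]  # toy 2-block tensor
print(block_scales(weights))  # one scale per 128-weight block
```

Per-block scales keep outliers in one block from degrading the precision of every other block, which is why the checkpoint records them in `quantization_config`.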