Update README.md
README.md CHANGED

@@ -220,9 +220,9 @@ After updating the config, proceed with either **vLLM** or **SGLang** for serving
 To run Qwen with 1M context support:
 
 ```bash
-
-
-
+pip install -U vllm \
+    --torch-backend=auto \
+    --extra-index-url https://wheels.vllm.ai/nightly
 ```
 
 Then launch the server with Dual Chunk Flash Attention enabled:
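The launch command itself lies outside this hunk. As a minimal sketch of what it might look like: the `VLLM_ATTENTION_BACKEND=DUAL_CHUNK_FLASH_ATTN` environment variable, the model name, and all flag values below are assumptions drawn from common vLLM/Qwen long-context conventions, not from this diff:

```shell
# Hypothetical launch sketch -- the backend variable, model checkpoint,
# and context length are assumptions, not taken from this diff.
VLLM_ATTENTION_BACKEND=DUAL_CHUNK_FLASH_ATTN \
vllm serve Qwen/Qwen2.5-7B-Instruct-1M \
    --max-model-len 1010000 \
    --enforce-eager
```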