Update README.md

README.md (CHANGED)
@@ -220,7 +220,7 @@ Tell me the weather in Seoul<|im_end|>
 ```
 
-Note that the prompt ends with `assistant/think\n`(think +
+Note that the prompt ends with `assistant/think\n` (think + `\n`).
 Generation continues until either the <|stop|> or <|endofturn|> token appears immediately after `<|im_end|>`.
 
 To have the assistant respond in non-reasoning mode (i.e., answer directly), you can input the following prompt.
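The stopping rule in the note above can be sketched in plain Python. The token strings `<|im_end|>`, `<|stop|>`, and `<|endofturn|>` come from the README; the helper function itself is illustrative, not part of the model's or tokenizer's API:

```python
# Sketch of the README's stopping rule: generation ends when <|stop|> or
# <|endofturn|> appears immediately after <|im_end|>. String-level handling
# here is illustrative; a real decode loop would compare token ids.
STOP_TOKENS = ("<|stop|>", "<|endofturn|>")
END_TURN = "<|im_end|>"

def truncate_at_stop(generated: str) -> str:
    """Cut `generated` at the first <|im_end|> immediately followed by a stop token."""
    start = 0
    while True:
        i = generated.find(END_TURN, start)
        if i == -1:
            return generated  # stop condition never met
        tail = generated[i + len(END_TURN):]
        if any(tail.startswith(tok) for tok in STOP_TOKENS):
            return generated[:i + len(END_TURN)]
        start = i + 1
```

For example, `truncate_at_stop("answer<|im_end|><|endofturn|>junk")` keeps only `"answer<|im_end|>"`.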
|
@@ -232,7 +232,7 @@ Tell me the weather in Seoul<|im_end|>
 ```
 
-Note that the prompt ends with `assistant\n
+Note that the prompt ends with `assistant\n` (assistant + `\n`).
 Generation continues until either the <|stop|> or <|endofturn|> token appears immediately after `<|im_end|>`.
 
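A minimal builder for the two prompt endings might look like the sketch below. Only the endings `assistant/think\n` (reasoning) and `assistant\n` (direct answer) are taken from the README; the `<|im_start|>` user-turn markup is assumed from the ChatML-style `<|im_end|>` tokens in the diff, and the function is not an official template:

```python
# Sketch of the two prompt endings described in the README.
# Assumption: ChatML-style <|im_start|>/<|im_end|> turn markup; only the
# trailing `assistant/think\n` vs `assistant\n` endings are from the README.
def build_prompt(user_message: str, reasoning: bool) -> str:
    prompt = (
        "<|im_start|>user\n"
        f"{user_message}<|im_end|>\n"
        "<|im_start|>assistant"
    )
    # Reasoning mode ends with `assistant/think\n`; direct answers with `assistant\n`.
    return prompt + ("/think\n" if reasoning else "\n")
```

With `reasoning=True` the prompt ends in `assistant/think\n`, so the model emits its thinking first; with `reasoning=False` it ends in `assistant\n` and the model answers directly.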
|
@@ -545,7 +545,7 @@ print(tokenizer.batch_decode(output_ids))
 
 ## **vLLM Usage Example**
 
-
+The HyperCLOVA X SEED Think model is built on a custom LLM architecture based on LLaMA, incorporating the μP and Peri-LN techniques. For convenient use with vLLM, it is provided as a dedicated vLLM plugin that can be installed and used easily once vLLM is set up.
 
 1. Download vLLM plugin source code