llmat committed
Commit b444d15 · verified · 1 Parent(s): d83e6a5

Update README.md

Files changed (1): README.md (+34, -0)
README.md CHANGED
@@ -18,3 +18,37 @@ NVFP4-quantized version of `Qwen/Qwen3-0.6B` produced with [llmcompressor](https
 - Quantization scheme: NVFP4 (linear layers, `lm_head` excluded)
 - Calibration samples: 512
 - Max sequence length during calibration: 2048
+
+## Deployment
+
+### Use with vLLM
+
+This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.
+
+```python
+from vllm import LLM, SamplingParams
+from transformers import AutoTokenizer
+
+model_id = "llmat/Qwen3-0.6B-NVFP4"
+number_gpus = 1
+
+sampling_params = SamplingParams(temperature=0.6, top_p=0.9, max_tokens=256)
+
+tokenizer = AutoTokenizer.from_pretrained(model_id)
+
+messages = [
+    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
+    {"role": "user", "content": "Who are you?"},
+]
+
+prompts = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
+
+llm = LLM(model=model_id, tensor_parallel_size=number_gpus)
+
+outputs = llm.generate(prompts, sampling_params)
+
+generated_text = outputs[0].outputs[0].text
+print(generated_text)
+```
+
+vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.