Update README with new chat template example (#18)
Update README.md (990cdb320db445f892a68db61e7223b1b3d060a1)
Co-authored-by: Raushan Turganbay <[email protected]>
README.md
CHANGED
print(response)
```

From `transformers>=v4.48`, you can also pass an image URL or a local path in the conversation history and let the chat template handle the rest. The chat template will load the image for you and return the inputs as `torch.Tensor`, which you can pass directly to `model.generate()`.

Here is how to rewrite the above example:

```python
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "http://images.cocodataset.org/val2017/000000039769.jpg"},
            {"type": "text", "text": "what is the image?"},
        ],
    },
]

inputs = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt")
inputs = inputs.to(model.device, torch.bfloat16)

output = model.generate(
    **inputs,
    max_new_tokens=15,
    stop_strings=["<|im_end|>"],
    tokenizer=processor.tokenizer,
    do_sample=True,
    temperature=0.9,
)
output_ids = output[0][inputs["input_ids"].shape[1]:]
response = processor.decode(output_ids, skip_special_tokens=True)
print(response)
```
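The prose above says a local path works as well as a URL, but the example only shows a URL. As a minimal sketch of the local-path variant (the `"path"` key is an assumption about the chat-template image loader, and the file location is a placeholder, not part of the original example):

```python
# Hypothetical sketch: the same message structure with a local image file
# instead of a URL. The "path" key is an assumption, and
# "/tmp/my_image.jpg" is a placeholder path, not a file in this repo.
local_messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "path": "/tmp/my_image.jpg"},
            {"type": "text", "text": "what is the image?"},
        ],
    },
]
```

The rest of the pipeline, `apply_chat_template` through `model.generate()`, stays the same.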

### Advanced Inference and Fine-tuning

We provide a [codebase](https://github.com/rhymes-ai/Aria) for more advanced usage of Aria, including vLLM inference, cookbooks, and fine-tuning on custom datasets.