Update README.md #1
opened by rginsberg-gpg

README.md CHANGED
@@ -21,7 +21,7 @@ We adopted exactly the same architecture and tokenizer as Llama 2. This means Ti
 
 #### This Model
 This is the chat model finetuned on top of [PY007/TinyLlama-1.1B-intermediate-step-480k-1T](https://huggingface.co/PY007/TinyLlama-1.1B-intermediate-step-480k-1T).
-The dataset used is [OpenAssistant/oasst_top1_2023-08-25](https://huggingface.co/datasets/OpenAssistant/oasst_top1_2023-08-25) following the [chatml](https://github.com/openai/openai-python/blob/
+The dataset used is [OpenAssistant/oasst_top1_2023-08-25](https://huggingface.co/datasets/OpenAssistant/oasst_top1_2023-08-25) following the [chatml](https://github.com/openai/openai-python/blob/v0.28.1/chatml.md) format.
 #### How to use
 You will need the transformers>=4.31
 Do check the [TinyLlama](https://github.com/jzhang38/TinyLlama) github page for more information.
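For context (not part of the diff above), here is a minimal sketch of the workflow the README describes: loading a chat checkpoint with transformers >= 4.31 and prompting it in the chatml format the PR links to. The model id and the prompt text below are illustrative placeholders, since the diff itself does not name the chat checkpoint.

```python
# Minimal usage sketch, assuming transformers >= 4.31.
# "PY007/TinyLlama-1.1B-Chat-v0.3" is a placeholder model id, not taken from this diff.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PY007/TinyLlama-1.1B-Chat-v0.3"  # substitute the actual chat checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# chatml wraps each turn in <|im_start|>role ... <|im_end|> markers.
prompt = (
    "<|im_start|>user\nWhat is TinyLlama?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```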