## Qwen2.5-0.5B-200K

| Attribute      | Description                     |
|----------------|---------------------------------|
| **License**    | creativeml-openrail-m           |
| **Datasets**   | HuggingFaceH4/ultrachat_200k    |
| **Language**   | en                              |
| **Base Model** | unsloth/Qwen2.5-0.5B-bnb-4bit   |
| **Tags**       | Qwen, Qwen2.5, 0.5B, Llama-cpp  |

**Developed by:** prithivMLmods
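Since the base model above is a Qwen2.5 chat model, prompts typically follow the ChatML format. A minimal sketch of building such a prompt by hand — the repo id and the helper function below are illustrative assumptions, not part of this repository:

```python
# Assumed repo id for this model card (hypothetical, for illustration only).
REPO_ID = "prithivMLmods/Qwen2.5-0.5B-200K"

def build_chatml_prompt(user_message: str,
                        system: str = "You are a helpful assistant.") -> str:
    """Build a ChatML-style prompt, the format Qwen2.5 chat models expect.

    This is a hand-rolled sketch; in practice the chat template shipped in
    `tokenizer_config.json` does this for you.
    """
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt("What is the capital of France?")
print(prompt)
```

In practice you would pass a prompt like this to the checkpoint via `transformers` (e.g. `AutoModelForCausalLM.from_pretrained(REPO_ID)`), or let `AutoTokenizer.apply_chat_template` construct it from the repo's `tokenizer_config.json`.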
| File Name                 | Size    | Description                                      |
|---------------------------|---------|--------------------------------------------------|
| `.gitattributes`          | 1.52 kB | Git attributes and LFS rules for large files.    |
| `README.md`               | 218 B   | This model card.                                 |
| `added_tokens.json`       | 657 B   | Additional tokens used by the tokenizer.         |
| `config.json`             | 847 B   | Model architecture and parameter configuration.  |
| `generation_config.json`  | 174 B   | Default text-generation settings.                |
| `merges.txt`              | 1.82 MB | BPE merge rules for the tokenizer.               |
| `pytorch_model.bin`       | 988 MB  | PyTorch checkpoint with the model weights.       |
| `special_tokens_map.json` | 647 B   | Mapping of special tokens for the tokenizer.     |
| `tokenizer.json`          | 7.34 MB | Full tokenizer configuration and vocabulary.     |
| `tokenizer_config.json`   | 7.73 kB | Tokenizer parameter configuration.               |
| `vocab.json`              | 2.78 MB | Vocabulary used by the tokenizer.                |
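The `merges.txt` and `vocab.json` files drive the tokenizer's byte-pair encoding: `vocab.json` maps subword strings to ids, while each line of `merges.txt` names a pair of symbols to fuse, ordered by priority. A toy sketch of how such merge rules split a word — the rules here are made up for illustration, and the loop is simplified (real BPE repeatedly merges the lowest-ranked pair present rather than sweeping rules once each):

```python
# Toy merge table: each entry plays the role of one line in merges.txt.
toy_merges = [("l", "o"), ("lo", "w")]

def bpe(word: str, merges: list[tuple[str, str]]) -> list[str]:
    """Apply merge rules in priority order to split a word into subwords."""
    symbols = list(word)  # start from individual characters
    for a, b in merges:   # higher-priority rules come first
        i = 0
        while i < len(symbols) - 1:
            if symbols[i] == a and symbols[i + 1] == b:
                symbols[i:i + 2] = [a + b]  # fuse the pair in place
            else:
                i += 1
    return symbols

print(bpe("low", toy_merges))   # → ['low']
print(bpe("slow", toy_merges))  # → ['s', 'low']
```

The resulting subwords are then looked up in `vocab.json` to produce the token ids the model consumes.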