Update README.md
README.md
CHANGED
@@ -2,14 +2,28 @@
 pipeline_tag: text-generation
 inference: false
 license: apache-2.0
-library_name:
+library_name: exllamav2
 tags:
 - language
 - granite-3.2
 base_model:
+- ibm-granite/granite-3.2-8b-instruct
 ---
+# Granite-3.2-8B-Instruct-exl2
+Original model: [granite-3.2-8b-instruct](https://huggingface.co/ibm-granite/granite-3.2-8b-instruct)
+Created by: [Granite Team, IBM](https://huggingface.co/ibm-granite)
+
+## Quants
+[4bpw h6 (main)](https://huggingface.co/cgus/granite-3.2-8b-instruct-exl2/tree/main)
+[4.5bpw h6](https://huggingface.co/cgus/granite-3.2-8b-instruct-exl2/tree/4.5bpw-h6)
+[5bpw h6](https://huggingface.co/cgus/granite-3.2-8b-instruct-exl2/tree/5bpw-h6)
+[6bpw h6](https://huggingface.co/cgus/granite-3.2-8b-instruct-exl2/tree/6bpw-h6)
+[8bpw h8](https://huggingface.co/cgus/granite-3.2-8b-instruct-exl2/tree/8bpw-h8)
+
+## Quantization notes
+Made with ExLlamaV2 0.2.8. Granite 3 models require ExLlamaV2 0.2.7 or newer.
+Exl2 models don't support RAM offloading, so the model has to fit entirely into GPU VRAM.
+They also require an Nvidia RTX GPU on Windows, or an Nvidia RTX or AMD ROCm GPU on Linux.
 # Granite-3.2-8B-Instruct
 
 **Model Summary:**
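
Each quant in the list above lives on its own branch of the repo, so a specific bpw variant can be pulled by branch name. A minimal sketch with `huggingface_hub` (the `local_dir` path is an arbitrary choice, not something the card specifies):

```python
from huggingface_hub import snapshot_download

# Fetch one quant by its branch name; "4.5bpw-h6" comes from the Quants list above.
snapshot_download(
    repo_id="cgus/granite-3.2-8b-instruct-exl2",
    revision="4.5bpw-h6",
    local_dir="granite-3.2-8b-instruct-exl2",  # arbitrary local path (assumption)
)
```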
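Since the whole model must fit in VRAM, a rough weights-only estimate helps when picking a branch. A back-of-the-envelope sketch, assuming roughly 8.2B parameters for Granite-3.2-8B and ignoring KV cache and framework overhead:

```python
# Weights-only VRAM estimate: params * bits-per-weight / 8 bytes.
# The 8.2e9 parameter count is approximate; cache and overhead come on top.
params = 8.2e9
for bpw in (4.0, 4.5, 5.0, 6.0, 8.0):
    gib = params * bpw / 8 / 1024**3
    print(f"{bpw:>3} bpw -> ~{gib:.1f} GiB of weights")
```

On these numbers even the 8bpw quant stays under ~8 GiB of weights, but the KV cache grows with context length, so some headroom is needed on top.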
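And a minimal generation sketch along the lines of ExLlamaV2's own example code; the model directory is assumed to be the download path from the first sketch:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

# Load the quantized model; the directory is the assumed download path above.
config = ExLlamaV2Config("granite-3.2-8b-instruct-exl2")
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # cache tensors allocated as layers load
model.load_autosplit(cache)               # split weights across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Once upon a time,", max_new_tokens=200, add_bos=True))
```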