Update README.md

README.md CHANGED

@@ -8,4 +8,6 @@ There is no additional fine-tuning. The resulting model seems to not be broken..
 
 You could also try merging this with other longLORA-descended models (like [Aurelian](https://huggingface.co/grimulkan/aurelian-v0.5-70b-rope8-32K-fp16)).
 
-A 6-bit EXL2 quantization is available [here](https://huggingface.co/grimulkan/Euryale-1.3-longLORA-70b-rope8-32k-6bpw_h8_exl2).
+A 6-bit EXL2 quantization is available [here](https://huggingface.co/grimulkan/Euryale-1.3-longLORA-70b-rope8-32k-6bpw_h8_exl2).
+
+See [this discussion](https://huggingface.co/grimulkan/aurelian-v0.5-70b-rope8-32K-fp16/discussions/2) for how to create merges like these.
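For readers who want a starting point before reading the linked discussion, a merge like the one suggested above is commonly expressed as a small config file. The sketch below is a minimal, hypothetical example assuming the mergekit tool with an equal-weight linear merge; the merge method, the 0.5/0.5 weights, and the unquantized base-model name `grimulkan/Euryale-1.3-longLORA-70b-rope8-32k` are all illustrative assumptions, not the author's recipe:

```yaml
# Hypothetical mergekit config (sketch, not the author's method):
# equal-weight linear merge of two longLORA-descended 70B models.
merge_method: linear          # assumption; the discussion may use another method
models:
  - model: grimulkan/Euryale-1.3-longLORA-70b-rope8-32k   # assumed unquantized base
    parameters:
      weight: 0.5             # illustrative weight
  - model: grimulkan/aurelian-v0.5-70b-rope8-32K-fp16
    parameters:
      weight: 0.5             # illustrative weight
dtype: float16
```

With mergekit installed, a config like this is typically run as `mergekit-yaml config.yml ./merged-model`; adjust methods and weights to taste, as the linked discussion explains.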