---
license: llama2
---
This is a merge of [LongAlpaca-70B-lora](https://huggingface.co/Yukang/LongAlpaca-70B-lora) into Sao10K's [Euryale-1.3-L2-70B](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B), replacing the embed and norm layers as described in the [LongLoRA repo](https://github.com/dvlab-research/LongLoRA), and removing the extra row and pad token so that the vocabularies match.
There is no additional fine-tuning. The resulting model appears functional, but you are encouraged to verify that it truly behaves as the original model with 32K context capability (use linear rope scaling with a factor of 8).
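With `transformers`, the rope-scaling setting above corresponds to passing `rope_scaling={"type": "linear", "factor": 8.0}`. A minimal config-level sketch (the small head/layer values here are illustrative, not the 70B shape):

```python
from transformers import LlamaConfig

# Linear rope scaling with factor 8 stretches Llama-2's 4K positions to 32K.
config = LlamaConfig(
    hidden_size=64,
    num_hidden_layers=2,
    num_attention_heads=4,
    max_position_embeddings=32768,
    rope_scaling={"type": "linear", "factor": 8.0},
)
print(config.rope_scaling)
```

The same `rope_scaling` dict can be passed directly to `AutoModelForCausalLM.from_pretrained` when loading the full model.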
You could also try merging this with other models of longLORA descent (such as [Aurelian](https://huggingface.co/grimulkan/aurelian-v0.5-70b-rope8-32K-fp16)).
A 6-bit EXL2 quantization is available [here](https://huggingface.co/grimulkan/Euryale-1.3-longLORA-70b-rope8-32k-6bpw_h8_exl2).
See [this discussion](https://huggingface.co/grimulkan/aurelian-v0.5-70b-rope8-32K-fp16/discussions/2) for how to create merges like these.