---
base_model:
- huihui-ai/Llama-3.2-3B-Instruct-abliterated-finetuned
- prithivMLmods/Primal-Mini-3B-Exp
- MaziyarPanahi/calme-3.3-llamaloi-3b
- prithivMLmods/Llama-Deepsync-3B
- lunahr/thea-3b-25r
library_name: transformers
tags:
- mergekit
- merge
license: llama3.2
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [huihui-ai/Llama-3.2-3B-Instruct-abliterated-finetuned](https://huggingface.co/huihui-ai/Llama-3.2-3B-Instruct-abliterated-finetuned) as the base model.
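
Model Stock averages the fine-tuned checkpoints and then interpolates that average back toward the base model, with the interpolation ratio derived from the angle between the fine-tuned models' weight deltas. The snippet below is a minimal per-tensor sketch of that idea, assuming the formulation from the paper; the function name and structure are illustrative and this is not mergekit's actual implementation.

```python
# Simplified sketch of the Model Stock idea (arXiv:2403.19522).
# NOT mergekit's implementation; shown only to illustrate the method.
import torch
import torch.nn.functional as F

def model_stock_layer(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    """Merge one weight tensor from several fine-tuned models toward the base."""
    n = len(finetuned)
    deltas = [w - base for w in finetuned]
    # Average pairwise cosine similarity between the task vectors (deltas from base).
    sims = [
        F.cosine_similarity(deltas[i].flatten(), deltas[j].flatten(), dim=0)
        for i in range(n) for j in range(i + 1, n)
    ]
    cos_theta = torch.stack(sims).mean().clamp(-1.0, 1.0)
    # Interpolation ratio from the paper: t = n*cos(theta) / (1 + (n-1)*cos(theta)).
    t = n * cos_theta / (1 + (n - 1) * cos_theta)
    w_avg = torch.stack(finetuned).mean(dim=0)
    # Pull the fine-tuned average back toward the base by (1 - t).
    return t * w_avg + (1 - t) * base
```

In the actual merge, this kind of interpolation is applied across every weight tensor of the checkpoints listed in the configuration below.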

### Models Merged

The following models were included in the merge:
* [prithivMLmods/Primal-Mini-3B-Exp](https://huggingface.co/prithivMLmods/Primal-Mini-3B-Exp)
* [MaziyarPanahi/calme-3.3-llamaloi-3b](https://huggingface.co/MaziyarPanahi/calme-3.3-llamaloi-3b)
* [prithivMLmods/Llama-Deepsync-3B](https://huggingface.co/prithivMLmods/Llama-Deepsync-3B)
* [lunahr/thea-3b-25r](https://huggingface.co/lunahr/thea-3b-25r)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: model_stock
models:
  - model: lunahr/thea-3b-25r
    parameters:
      weight: 1.0
  - model: MaziyarPanahi/calme-3.3-llamaloi-3b
    parameters:
      weight: 1.0
  - model: prithivMLmods/Llama-Deepsync-3B
    parameters:
      weight: 1.0
  - model: prithivMLmods/Primal-Mini-3B-Exp
    parameters:
      weight: 1.0
base_model: huihui-ai/Llama-3.2-3B-Instruct-abliterated-finetuned
dtype: float16
normalize: true
```
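
Since the card sets `library_name: transformers`, the merged model can be loaded like any other Llama-3.2-style causal LM. The repository id below is a placeholder, not the published name of this merge; substitute the actual Hugging Face repo.

```python
# Minimal usage sketch with transformers; repo id is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-username/your-merged-model"  # placeholder, replace with the real repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # matches the dtype used for the merge
    device_map="auto",
)

messages = [{"role": "user", "content": "Give a one-sentence summary of model merging."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

`torch.float16` mirrors the `dtype: float16` setting in the merge configuration; switch to `bfloat16` or a quantized load if your hardware calls for it.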