---
license: llama2
base_model:
- unsloth/llama-2-13b
- layoric/llama-2-13b-code-alpaca
- WizardLMTeam/WizardLM-13B-V1.2
tags:
- merge
---
# AIM Paper Checkpoints Uploaded For Replication
This repository contains one of the checkpoints used in the paper "Activation-Informed Merging of Large Language Models". The specifics of this checkpoint are as follows:

- **Merging Method:** dare_linear (see the sketch after this list)
- **Models Used In Merging:**
    - ***Base Model:*** unsloth/llama-2-13b
    - ***Code:*** layoric/llama-2-13b-code-alpaca
    - ***Instruction Tuned:*** WizardLMTeam/WizardLM-13B-V1.2
- **AIM:** False (activation-informed merging was not applied to this checkpoint)
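
For reference, the snippet below is a minimal sketch of how a DARE-linear merge of the models listed above could be expressed for a single weight tensor, assuming the standard DARE formulation (randomly drop entries of each task vector, rescale the surviving entries, then combine them linearly with the base). The `drop_rate` and `weights` values are illustrative placeholders, not the settings used to produce this checkpoint.

```python
import torch

def dare_linear_merge(base, finetuned, weights, drop_rate=0.5):
    """Sketch of DARE-linear merging for one weight tensor.

    base      -- tensor from the base model (e.g. unsloth/llama-2-13b)
    finetuned -- list of tensors from the fine-tuned models being merged
    weights   -- per-model mixing weights (illustrative)
    drop_rate -- fraction of task-vector entries randomly dropped (illustrative)
    """
    merged = base.clone()
    for ft, w in zip(finetuned, weights):
        delta = ft - base                           # task vector
        keep = torch.rand_like(delta) >= drop_rate  # random drop mask
        delta = delta * keep / (1.0 - drop_rate)    # rescale kept entries
        merged = merged + w * delta                 # weighted linear combination
    return merged
```

In practice a merge of this kind is applied parameter-by-parameter across the full model, typically with a merging toolkit; the exact procedure and settings used for this checkpoint are described in the paper and the repository linked below.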

Benchmark results and paper details can be found in the official [GitHub repository](https://github.com/ahnobari/ActivationInformedMerging.git).