# Model Description

Pruned from [`meta-llama/Meta-Llama-3-8B-Instruct`](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) using LLM-Pruner from [LLM-Pruner: On the Structural Pruning of Large Language Models](https://arxiv.org/abs/2305.11627).

This was done to test the viability of LLM-Pruner for task-agnostic, low-resource generative AI for commercial and personal use, compared with out-of-the-box models such as [`meta-llama/Llama-3.2-3B-Instruct`](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct).
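
The core idea behind structural pruning (as opposed to unstructured sparsity) can be illustrated with a toy sketch: whole output channels are removed according to an importance score, so the weight matrix itself shrinks and no sparse kernels are needed. This is a simplified illustration only, not LLM-Pruner's actual algorithm, which estimates gradient-based importance over coupled structures across the whole network; the L2-norm score below is a hypothetical stand-in.

```python
import numpy as np

def prune_rows(weight: np.ndarray, keep_ratio: float) -> np.ndarray:
    """Remove the output channels (rows) with the lowest L2 norm.

    Structural pruning deletes entire rows, so the returned matrix is
    genuinely smaller. The L2-norm importance score here is a toy
    stand-in for LLM-Pruner's gradient-based importance estimation.
    """
    n_keep = max(1, int(round(weight.shape[0] * keep_ratio)))
    importance = np.linalg.norm(weight, axis=1)       # one score per row
    keep = np.sort(np.argsort(importance)[-n_keep:])  # top rows, original order
    return weight[keep]

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 4))          # 8 output channels, 4 inputs
pruned = prune_rows(w, keep_ratio=0.5)
print(pruned.shape)                  # (4, 4): half the channels removed
```

In a transformer, the same operation is applied to coupled groups (e.g., an attention head's query, key, value, and output projections together) so that the pruned layers remain shape-compatible.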

Our presentation slides can be found [here](https://drive.google.com/file/d/1SUGGgOAq-mizqwM_KveBQ2pWdyglPVdM/view?usp=sharing).

# To Replicate

1. First, clone the [official implementation](https://github.com/horseee/LLM-Pruner) and run:
```