RichardErkhov committed on
Commit 62f9b79 · verified · 1 parent: 9c991fe

uploaded readme

Files changed (1): README.md (+350 lines)
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

Llama-3-6B-v0.1 - bnb 4bits
- Model creator: https://huggingface.co/prince-canuma/
- Original model: https://huggingface.co/prince-canuma/Llama-3-6B-v0.1/
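
A minimal, hedged sketch of loading this bnb 4-bit checkpoint with 🤗 Transformers (not part of the original card): the repository id below is a placeholder for this quantized repo's id on the Hub, and `bitsandbytes` plus a CUDA GPU are assumed.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: replace with the actual Hub id of this 4-bit repository.
repo_id = "<this-quantized-repo-id>"

# A checkpoint saved with bitsandbytes 4-bit quantization carries its
# quantization_config in config.json, so from_pretrained applies it
# automatically as long as bitsandbytes is installed and a GPU is available.
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo_id)

inputs = tokenizer("Who created Python?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
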
Original model description:
---
language:
- en
license: llama3
library_name: transformers
datasets:
- prince-canuma/fineweb-CC-MAIN-2024-10-1B-en
- HuggingFaceFW/fineweb
tags:
- Llama-3-6B
- 6B
base_model:
- prince-canuma/Llama-3-6B-v0
---

# Model Summary
<img src="images/llama-3-6B icon.jpeg" width="500" alt="Llama-3-6B"/>

Introducing the world's first Llama-3 base model with 6B parameters. This model is a pretrained version of [prince-canuma/Llama-3-6B-v0](https://huggingface.co/prince-canuma/Llama-3-6B-v0), which was created from Meta-Llama-3-8B using a technique called [downcycling](https://youtube.com/playlist?list=PLDn_JsyofyfTH5_5V1MNb8UYKxMl6IMNy&si=9hcOol4KHIgWThgt).
The model was then continually pretrained on 1 billion tokens of English-only text from FineWeb. It achieves the following result on the evaluation set:
- Loss: 2.4942

## Model Description

<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** [Prince Canuma](https://huggingface.co/prince-canuma)
- **Sponsored by:** [General Catalyst](https://www.generalcatalyst.com/)
- **Model type:** Llama
- **License:** [Llama-3](https://llama.meta.com/llama3/license)
- **Pretrained from model:** prince-canuma/Llama-3-6B-v0

### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** https://github.com/Blaizzy/Coding-LLMs-from-scratch/tree/main/Llama-3
- **Video:** https://youtube.com/playlist?list=PLDn_JsyofyfTH5_5V1MNb8UYKxMl6IMNy&si=5Y4cm-6wrMOD1Abr

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
You can use this model to create instruct and chat versions for various use cases, such as coding assistants, RAG, function calling, and more.

### Limitations

This model inherits some of the base model's limitations, as well as some additional ones from its creation process:
- Limited scope for coding and math: according to benchmarks, this model needs more pretraining/finetuning on code and math data to excel at reasoning tasks.
- Language limitations: this model was continually pretrained on English-only data. If you plan to use it for multilingual use cases, I recommend fine-tuning or continued pretraining.

## How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

# Load the model and tokenizer
model_name = "prince-canuma/Llama-3-6B-v0.1"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

inputs = tokenizer(["Who created Python?"], return_tensors="pt")

# Stream generated tokens to stdout as they are produced
text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=text_streamer, max_new_tokens=200)
```

Output:
```shell
<|begin_of_text|>Who created Python? What is Python used for? What is the difference between Python 2 and Python 3? What is the difference between Python and Python 3?
Python is a programming language that was created by Guido van Rossum in 1991. It is a widely used language for web development, data science, and machine learning. Python is also used for creating software applications and games.
Python is a powerful language that is easy to learn and use. It has a large library of built-in functions and packages that make it easy to write code. Python is also a very popular language for web development, with many popular web frameworks such as Django and Flask being written in Python.
Python is also used for data science and machine learning. It has a large library of packages for data analysis, machine learning, and artificial intelligence. Python is also used for creating software applications and games.
Python 2 and Python 3 are two different versions of the Python language. Python 2 was the original version of the
```

## Training Details

### Downcycling

<img src="images/downcycling.jpeg" width="500" alt="Llama-3-8B-vs-6B-v0"/>
Fig 1. Downcycling workflow as described in [arxiv.org/abs/2404.08634](https://arxiv.org/abs/2404.08634).

Downcycling is a technique that allows you to create new LLMs of diverse sizes from checkpoints of large pretrained models.
You take a reference model (e.g., Llama-3-8B) and copy the weights of 24 of its 32 layers, along with the embedding and prediction heads.
Then you initialize a smaller target model with 24 layers and load those pretrained weights into it.

This new model will most likely still produce legible output, but for it to perform well you need to continue the pretraining.

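The card itself does not include code for this step; the sketch below is a rough illustration of the layer-copying idea under the assumption that the first 24 decoder layers are the ones kept (the exact layer-selection rule is a design choice), using the standard Transformers API.

```python
import torch
from transformers import AutoConfig, AutoModelForCausalLM

# Reference model: Llama-3-8B with 32 decoder layers (gated repo, requires access).
ref = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B", torch_dtype=torch.bfloat16
)

# Target config: identical to the reference except for the number of layers.
config = AutoConfig.from_pretrained("meta-llama/Meta-Llama-3-8B")
config.num_hidden_layers = 24
target = AutoModelForCausalLM.from_config(config).to(torch.bfloat16)

# Keep the embeddings, final norm, LM head, and the first 24 decoder layers.
state = ref.state_dict()
kept = {
    name: tensor
    for name, tensor in state.items()
    if not name.startswith("model.layers.")
    or int(name.split(".")[2]) < config.num_hidden_layers
}
target.load_state_dict(kept, strict=True)
target.save_pretrained("Llama-3-6B-v0")
```

As Fig 2 below shows, the downcycled model is usable but clearly behind its reference until it is continually pretrained.
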
<img src="images/Llama-3-8B-vs-6B-v0.png" width="500" alt="Llama-3-8B-vs-6B-v0"/>
Fig 2. Downcycled model vs. reference model, without continued pretraining.

### Training Data

For continued pretraining, I extracted 1B tokens from the [CC-MAIN-2024-10 slice of Hugging Face's FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb#breakdown-by-dumpcrawl).

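The exact extraction script is not part of the card; a simple way to gather roughly that many tokens is to stream the dump and stop at a token budget. The sketch below is a hedged illustration, assuming the `CC-MAIN-2024-10` subset name from the FineWeb dataset card and the Llama-3 tokenizer for counting.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Llama-3 tokenizer (gated repo); any compatible tokenizer works for counting.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

# Stream the CC-MAIN-2024-10 dump of FineWeb instead of downloading it all.
stream = load_dataset(
    "HuggingFaceFW/fineweb",
    name="CC-MAIN-2024-10",
    split="train",
    streaming=True,
)

token_budget = 1_000_000_000  # roughly 1B tokens
collected, docs = 0, []
for example in stream:
    docs.append(example["text"])
    collected += len(tokenizer(example["text"])["input_ids"])
    if collected >= token_budget:
        break
```
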
#### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 2

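For readers not using axolotl, these hyperparameters roughly map onto 🤗 `TrainingArguments` as in the sketch below; this is an illustration only, since the actual run used the axolotl config shown next.

```python
from transformers import TrainingArguments

# Rough equivalent of the hyperparameters above for a plain Trainer setup
# (a sketch, not the author's setup; the real run was driven by axolotl).
args = TrainingArguments(
    output_dir="./llama-3-6b",
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,  # 2 per device x 4 GPUs x 8 = total batch size 64
    learning_rate=2e-4,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    num_train_epochs=2,
    seed=42,
    bf16=True,
    logging_steps=1,
)
```
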
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.0`
```yaml
base_model: prince-canuma/Llama-3-6B-v0.1
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

load_in_8bit: false
load_in_4bit: true
strict: false

datasets:
  - path: prince-canuma/fineweb-CC-MAIN-2024-10-1B-en
    type: completion
    split: train
dataset_prepared_path: last_run_prepared
val_set_size: 0.001
output_dir: ./llama-3-6b
save_safetensors: true
adapter: qlora
lora_model_dir:

sequence_len: 8192
sample_packing: false
pad_to_sequence_len: false

lora_r: 128
lora_alpha: 128
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:

wandb_project: llama-3-6b
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 8
micro_batch_size: 2
num_epochs: 2
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 2e-4

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 100
evals_per_epoch: 4
eval_table_size:
save_steps: 4000
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
  pad_token: "<|reserved_special_token_0|>"
```

</details><br>

### Training results

There were 3 distinct experiments. In these experiments, QLoRA was used instead of full fine-tuning due to budget constraints.
- v0: a test run of 1K steps to check whether the model would improve with the QLoRA params.
- v1: the QLoRA parameters were tweaked (rank and alpha).
- v2: the main experiment, run for 2 epochs on 1B tokens from FineWeb.

All details can be found on my Wandb dashboard: https://wandb.ai/prince-canuma/llama-3-6b?nw=nwuserprincecanuma

<img src="images/Training Loss.png" width="500" alt="Llama-3-8B-vs-6B-v0"/>
Fig 3. Experiment training loss charts on wandb.

Overall metrics:

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 7.1562        | 0.0   | 1     | 7.1806          |
| 2.7339        | 0.25  | 5867  | 2.6266          |
| 2.6905        | 0.5   | 11734 | 2.5872          |
| 2.6134        | 0.75  | 17601 | 2.5549          |
| 2.532         | 1.0   | 23468 | 2.5235          |
| 2.5319        | 1.25  | 29335 | 2.5067          |
| 2.3336        | 1.5   | 35202 | 2.4968          |
| 2.3486        | 1.75  | 41069 | 2.4942          |

### Framework versions

- PEFT 0.10.0
- Transformers 4.40.0.dev0
- Pytorch 2.2.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0

### Hardware:

- 4x RTX6000 using JarvisLabs (sponsored by [General Catalyst](https://www.generalcatalyst.com/), thanks to Viet)

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

#### Benchmarks

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

- **Hellaswag**: a dataset for studying grounded commonsense inference.
- **ARC**: a multiple-choice question-answering dataset drawn from science exams, grades 3 through 9.
- **MMLU**: a test with 57 tasks to measure a text model's multitask accuracy.
- **TruthfulQA**: a test to measure a model's propensity to reproduce falsehoods commonly found online.
- **Winogrande**: a benchmark for commonsense reasoning.
- **GSM8k**: diverse grade school math word problems that measure a model's ability to solve multi-step mathematical reasoning problems.

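The card does not specify the exact evaluation command. One common way to run this kind of benchmark suite is EleutherAI's `lm-evaluation-harness`; the snippet below is a hedged sketch based on the v0.4-style Python API (task names and arguments are assumptions, not the author's setup).

```python
import lm_eval

# Hedged sketch: evaluate the base model on the benchmarks listed above.
# Task names may need adjusting to your harness version.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=prince-canuma/Llama-3-6B-v0.1",
    tasks=["hellaswag", "arc_challenge", "mmlu", "truthfulqa_mc2", "winogrande", "gsm8k"],
    batch_size=8,
)
print(results["results"])
```
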
### Results

<img src="images/comparison_model_scores_histogram.png" width="500" alt="Llama-3-8B-vs-6B-v0"/>
Fig 4. Performance comparison of Llama-3-8B, Llama-3-6B and Llama-3-6B (w/ continued pretraining).

Pretraining for 2 epochs on 1B tokens had a positive effect across the board. The new base model now performs competitively with its reference model (Llama-3-8B) whilst being 1.3x smaller.

<img src="images/Comparision_of_Model_Scores.png" width="500" alt="All-vs-Llama-3-6B-v0"/>
Fig 5. Performance comparison of Llama-3-8B, Llama-2-13B, Yi-1.5-6B and Llama-3-6B.

Llama-3-6B is competitive with models in its category, and with models up to 2x its size, across 6 diverse benchmarks.

#### Summary and future directions:

This experiment was a success! Using this technique, I'll be able to build many variants. This is the first of many new base models I intend to create.

Next, I plan to explore different data mixtures and perform full fine-tuning, all of which will contribute to developing other small models as well as larger and more robust models.

## Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

### **BibTeX:**

```bibtex
@misc{prince2024downcycling,
  title={Efficient LLM Downcycling: Generating Diverse Model Sizes from Pretrained Giants},
  author={Prince Canuma},
  year={2024},
}
```

# **Thank You!**

I want to extend my heartfelt thanks to the community for the invaluable expertise and unwavering support.

Additionally, I would like to thank Viet from General Catalyst (GC) for providing me with the much needed compute.

This is my most ambitious project yet, and it wouldn't have been possible without the incredible open-source ML community!

Developers, I am eager to see and hear about the innovative fine-tunes and applications you create.

Users, I am excited to learn about your experiences and use cases.

Thank you for your interest and support!

## References:

```bibtex
@misc{komatsuzaki2023sparse,
  title={Sparse Upcycling: Training Mixture-of-Experts from Dense Checkpoints},
  author={Aran Komatsuzaki and Joan Puigcerver and James Lee-Thorp and Carlos Riquelme Ruiz and Basil Mustafa and Joshua Ainslie and Yi Tay and Mostafa Dehghani and Neil Houlsby},
  year={2023},
  eprint={2212.05055},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
```

```bibtex
@misc{sanyal2024pretraining,
  title={Pre-training Small Base LMs with Fewer Tokens},
  author={Sunny Sanyal and Sujay Sanghavi and Alexandros G. Dimakis},
  year={2024},
  eprint={2404.08634},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```