Commit 72558e0 · Update README.md (#1)
Parent: d15f16b
Co-authored-by: Souvik Datta <[email protected]>

README.md CHANGED
---
library_name: peft
tags:
- code
- instruct
- gpt2
datasets:
- HuggingFaceH4/no_robots
base_model: gpt2
license: apache-2.0
---

### Finetuning Overview:

**Model Used:** gpt2

**Dataset:** HuggingFaceH4/no_robots

#### Dataset Insights:

[No Robots](https://huggingface.co/datasets/HuggingFaceH4/no_robots) is a high-quality dataset of 10,000 instructions and demonstrations created by skilled human annotators. This data can be used for supervised fine-tuning (SFT) to make language models follow instructions better.
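
Below is a minimal sketch of loading this dataset with the `datasets` library; the split name `train` is an assumption, so check the dataset card for the exact split names.

```python
from datasets import load_dataset

# Load the instruction-following dataset used for this finetuning.
# Split name "train" is assumed; the dataset card lists the actual splits.
dataset = load_dataset("HuggingFaceH4/no_robots", split="train")
print(dataset[0])
```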

#### Finetuning Details:

Using [MonsterAPI](https://monsterapi.ai)'s [LLM finetuner](https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm), this finetuning:

- Was achieved cost-effectively.
- Completed in 3 minutes 40 seconds for 1 epoch on an A6000 48GB GPU.
- Cost `$0.101` for the entire epoch.

#### Hyperparameters & Additional Details:

- **Epochs:** 1
- **Cost Per Epoch:** $0.101
- **Total Finetuning Cost:** $0.101
- **Model Path:** gpt2
- **Learning Rate:** 0.0002
- **Data Split:** 100% train
- **Gradient Accumulation Steps:** 4
- **LoRA r:** 32
- **LoRA alpha:** 64
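
For illustration, here is a sketch of how these hyperparameters could map onto a `peft`/`transformers` LoRA setup. The card does not state MonsterAPI's internal settings, so `target_modules`, the per-device batch size, and the output directory below are assumptions.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

base_model = AutoModelForCausalLM.from_pretrained("gpt2")

# LoRA values from the card; target_modules is an assumption for GPT-2.
lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    target_modules=["c_attn"],  # GPT-2's fused QKV projection (assumed)
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)

# Training values from the card; per-device batch size is not stated there.
training_args = TrainingArguments(
    output_dir="gpt2-no-robots-lora",  # hypothetical path
    num_train_epochs=1,
    learning_rate=2e-4,
    gradient_accumulation_steps=4,
    per_device_train_batch_size=4,  # assumption
)
```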

#### Prompt Structure

```
<|system|> <|endoftext|> <|user|> [USER PROMPT]<|endoftext|> <|assistant|> [ASSISTANT ANSWER] <|endoftext|>
```
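
As an example, inference with this adapter could assemble a prompt in that structure as sketched below; `ADAPTER_REPO` is a placeholder for this model's Hub id, the role markers are treated as plain text, and the generation settings are arbitrary.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
base = AutoModelForCausalLM.from_pretrained("gpt2")
model = PeftModel.from_pretrained(base, "ADAPTER_REPO")  # placeholder repo id

# Assemble a prompt following the card's structure.
prompt = (
    "<|system|> <|endoftext|> "
    "<|user|> Write a short poem about the sea.<|endoftext|> "
    "<|assistant|>"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0]))
```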

#### Training Loss:

![training loss](https://cdn-uploads.huggingface.co/production/uploads/63ba46aa0a9866b28cb19a14/B0K8Yv9E8HsDm-Bqc0Bl-.png)