chchen committed
Commit a31080e · verified · 1 Parent(s): 610cc00

Model save

Files changed (2)
  1. README.md +73 -0
  2. adapter_model.safetensors +1 -1
README.md ADDED
@@ -0,0 +1,73 @@
+ ---
+ base_model: meta-llama/Llama-3.1-8B-Instruct
+ library_name: peft
+ license: llama3.1
+ tags:
+ - llama-factory
+ - generated_from_trainer
+ model-index:
+ - name: Llama-3.1-8B-Instruct-SFT-900
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # Llama-3.1-8B-Instruct-SFT-900
+
+ This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.1053
+
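+ As a minimal inference sketch (assuming the adapter is published under the repo id `chchen/Llama-3.1-8B-Instruct-SFT-900`, which this card does not confirm), the adapter can be loaded on top of the base model with `peft`:
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ from peft import PeftModel
+
+ base_id = "meta-llama/Llama-3.1-8B-Instruct"
+ adapter_id = "chchen/Llama-3.1-8B-Instruct-SFT-900"  # assumed repo id, not confirmed by this card
+
+ tokenizer = AutoTokenizer.from_pretrained(base_id)
+ base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
+ model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter weights
+
+ messages = [{"role": "user", "content": "Hello!"}]
+ input_ids = tokenizer.apply_chat_template(
+     messages, add_generation_prompt=True, return_tensors="pt"
+ ).to(model.device)
+ output = model.generate(input_ids, max_new_tokens=64)
+ print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
+ ```
+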
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (see the configuration sketch after this list):
+ - learning_rate: 5e-06
+ - train_batch_size: 2
+ - eval_batch_size: 2
+ - seed: 42
+ - gradient_accumulation_steps: 8
+ - total_train_batch_size: 16
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 10.0
+ - mixed_precision_training: Native AMP
+
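+ As a rough illustration only (LLaMA-Factory drives training from its own YAML config, so the argument names below are assumptions, not the actual invocation), the settings above map onto `transformers.TrainingArguments` roughly as:
+
+ ```python
+ from transformers import TrainingArguments
+
+ # Effective batch size: 2 per device x 8 accumulation steps = 16.
+ # Adam betas (0.9, 0.999) and epsilon 1e-08 are the optimizer defaults.
+ args = TrainingArguments(
+     output_dir="Llama-3.1-8B-Instruct-SFT-900",
+     learning_rate=5e-6,
+     per_device_train_batch_size=2,
+     per_device_eval_batch_size=2,
+     seed=42,
+     gradient_accumulation_steps=8,
+     lr_scheduler_type="cosine",
+     warmup_ratio=0.1,
+     num_train_epochs=10.0,
+     fp16=True,  # "Native AMP" mixed precision (bf16 is equally plausible on recent GPUs)
+ )
+ ```
+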
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:------:|:----:|:---------------:|
+ | 1.201 | 0.9877 | 50 | 1.0016 |
+ | 0.1407 | 1.9753 | 100 | 0.1513 |
+ | 0.0885 | 2.9630 | 150 | 0.1082 |
+ | 0.0743 | 3.9506 | 200 | 0.1068 |
+ | 0.0855 | 4.9383 | 250 | 0.1062 |
+ | 0.0571 | 5.9259 | 300 | 0.1058 |
+ | 0.063 | 6.9136 | 350 | 0.1054 |
+ | 0.0597 | 7.9012 | 400 | 0.1057 |
+ | 0.0694 | 8.8889 | 450 | 0.1053 |
+ | 0.0593 | 9.8765 | 500 | 0.1053 |
+
+
+ ### Framework versions
+
+ - PEFT 0.12.0
+ - Transformers 4.45.2
+ - PyTorch 2.3.0
+ - Datasets 2.19.0
+ - Tokenizers 0.20.0
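+
+ These pins correspond to a `pip` requirements set roughly like the following (assuming the standard PyPI package names; the PyTorch wheel you need varies by platform and CUDA build):
+
+ ```text
+ peft==0.12.0
+ transformers==4.45.2
+ torch==2.3.0
+ datasets==2.19.0
+ tokenizers==0.20.0
+ ```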
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:1da246bae573fc2d947ead26b6c4d7018ea6820016449a8f3768441f3f69c893
+ oid sha256:a24a7751d933c419ff20a88671474ce5f1c87cc1568eaae7c47539cec49a0328
  size 83945296