genies-llm committed on
Commit e42dc86 · verified · 1 Parent(s): 749ab75

Model save

Files changed (5):
  1. README.md +58 -0
  2. all_results.json +8 -0
  3. generation_config.json +11 -0
  4. train_results.json +8 -0
  5. trainer_state.json +1411 -0
README.md ADDED
@@ -0,0 +1,58 @@
---
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
library_name: transformers
model_name: text2sql-sft-v6
tags:
- generated_from_trainer
- trl
- sft
licence: license
---

# Model Card for text2sql-sft-v6

This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="genies-llm/text2sql-sft-v6", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
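The quick-start question above is the generic TRL template prompt; for a text-to-SQL model, the input will normally pair a database schema with a natural-language question. The exact prompt format used during fine-tuning is not documented in this card, so the helper below is only a hypothetical sketch of how such an input might be assembled before being passed to the pipeline:

```python
def build_messages(schema: str, question: str) -> list[dict]:
    """Assemble a chat-style text2sql request.

    Hypothetical prompt shape: the training prompt format is not documented
    in this card, so treat this only as an illustration.
    """
    system = (
        "You are a text-to-SQL assistant. Given a database schema, "
        "answer with a single SQL query."
    )
    user = f"Schema:\n{schema}\n\nQuestion: {question}"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

messages = build_messages(
    "CREATE TABLE users(id INT, name TEXT);",
    "How many users are there?",
)
# These messages can then be fed to the pipeline in place of the single
# user turn in the quick-start snippet.
```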
## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/genies-rnd/text2sql-sft/runs/9v6qppxc)

This model was trained with SFT.

### Framework versions

- TRL: 0.18.0
- Transformers: 4.52.3
- Pytorch: 2.6.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
all_results.json ADDED
@@ -0,0 +1,8 @@
{
    "total_flos": 2.3168881532705178e+17,
    "train_loss": 0.25350178002614027,
    "train_runtime": 2712.8956,
    "train_samples": 7296,
    "train_samples_per_second": 8.068,
    "train_steps_per_second": 0.063
}
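These metrics are internally consistent: trainer_state.json below records 3 epochs and 171 optimizer steps, and 7,296 samples over 3 epochs in a 2,712.9 s runtime reproduces the reported throughput figures:

```python
train_samples = 7296          # from all_results.json
epochs = 3                    # "epoch": 3.0 in trainer_state.json
global_steps = 171            # "global_step" in trainer_state.json
train_runtime = 2712.8956     # seconds

samples_per_second = train_samples * epochs / train_runtime
steps_per_second = global_steps / train_runtime

# Both round to the values reported above.
print(round(samples_per_second, 3), round(steps_per_second, 3))
```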
generation_config.json ADDED
@@ -0,0 +1,11 @@
{
    "bos_token_id": 151643,
    "do_sample": true,
    "eos_token_id": 151645,
    "pad_token_id": 151643,
    "repetition_penalty": 1.1,
    "temperature": 0.7,
    "top_k": 20,
    "top_p": 0.8,
    "transformers_version": "4.52.3"
}
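This config enables sampling with temperature 0.7, top-k 20, top-p 0.8, and a 1.1 repetition penalty. As a rough illustration of what the temperature/top-k/top-p settings mean (a toy sketch in pure Python, not transformers' actual implementation), the filter below keeps the top-k most probable tokens and then the smallest prefix of them whose cumulative probability reaches top-p:

```python
import math

def filter_candidates(logits, temperature=0.7, top_k=20, top_p=0.8):
    """Return the indices of tokens that survive temperature/top-k/top-p."""
    # Temperature scaling: T < 1 sharpens the distribution.
    scaled = [l / temperature for l in logits]
    # Softmax (shifted by the max for numerical stability).
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    z = sum(exps)
    probs = [e / z for e in exps]
    # Top-k: keep only the k most probable tokens.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept = order[:top_k]
    # Top-p (nucleus): smallest prefix whose cumulative mass reaches top_p.
    cum, nucleus = 0.0, []
    for i in kept:
        nucleus.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    return nucleus

# Toy 5-token vocabulary: only the two most likely tokens survive top_p=0.8.
print(filter_candidates([2.0, 1.0, 0.5, 0.1, -1.0], top_k=4, top_p=0.8))
```

In the real generation loop, the model then samples from the renormalized probabilities of the surviving tokens, with the repetition penalty down-weighting tokens that already appeared.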
train_results.json ADDED
@@ -0,0 +1,8 @@
{
    "total_flos": 2.3168881532705178e+17,
    "train_loss": 0.25350178002614027,
    "train_runtime": 2712.8956,
    "train_samples": 7296,
    "train_samples_per_second": 8.068,
    "train_steps_per_second": 0.063
}
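The learning rates logged in trainer_state.json below follow a recognizable shape: zero at step 1, linear warmup to 1e-5 by step 7, then cosine decay. None of this is stated in the card, so the schedule sketched here is inferred purely from the logged values (assumed peak 1e-5, floor 1e-6, 6 warmup steps, and a 165-step decay horizon, consistent with global_step 171); it reproduces the logged rates to floating-point precision:

```python
import math

PEAK_LR = 1.0e-5    # inferred peak, reached at step 7
MIN_LR = 1.0e-6     # inferred floor: peak / 10
WARMUP_STEPS = 6    # lr is 0.0 at step 1, peak at step 7
DECAY_STEPS = 165   # inferred cosine horizon: 171 total steps - warmup

def lr_at(step: int) -> float:
    """Learning rate at a 1-indexed optimizer step, per the inferred schedule."""
    if step <= WARMUP_STEPS + 1:
        # Linear warmup from 0 to PEAK_LR.
        return PEAK_LR * (step - 1) / WARMUP_STEPS
    # Cosine decay from PEAK_LR down toward MIN_LR.
    t = (step - WARMUP_STEPS - 1) / DECAY_STEPS
    return MIN_LR + 0.5 * (PEAK_LR - MIN_LR) * (1.0 + math.cos(math.pi * t))

# Spot-check against learning rates in log_history below.
for step, logged in [(2, 1.6666666666666667e-06),
                     (8, 9.999184354855868e-06),
                     (62, 7.75e-06),
                     (117, 3.2500000000000015e-06)]:
    assert abs(lr_at(step) - logged) < 1e-12
```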
trainer_state.json ADDED
@@ -0,0 +1,1411 @@
{
  "best_global_step": null,
  "best_metric": null,
  "best_model_checkpoint": null,
  "epoch": 3.0,
  "eval_steps": 500,
  "global_step": 171,
  "is_hyper_param_search": false,
  "is_local_process_zero": true,
  "is_world_process_zero": true,
  "log_history": [
    {
      "epoch": 0.017543859649122806,
      "grad_norm": 3.9338779838849725,
      "learning_rate": 0.0,
      "loss": 1.2337,
      "num_tokens": 415561.0,
      "step": 1
    },
    {
      "epoch": 0.03508771929824561,
      "grad_norm": 4.061961880981431,
      "learning_rate": 1.6666666666666667e-06,
      "loss": 1.2551,
      "num_tokens": 811930.0,
      "step": 2
    },
    {
      "epoch": 0.05263157894736842,
      "grad_norm": 4.126747331618448,
      "learning_rate": 3.3333333333333333e-06,
      "loss": 1.2772,
      "num_tokens": 1198366.0,
      "step": 3
    },
    {
      "epoch": 0.07017543859649122,
      "grad_norm": 3.410910565563502,
      "learning_rate": 5e-06,
      "loss": 1.1776,
      "num_tokens": 1608933.0,
      "step": 4
    },
    {
      "epoch": 0.08771929824561403,
      "grad_norm": 2.233718748318594,
      "learning_rate": 6.666666666666667e-06,
      "loss": 0.9942,
      "num_tokens": 2068099.0,
      "step": 5
    },
    {
      "epoch": 0.10526315789473684,
      "grad_norm": 1.470021030676499,
      "learning_rate": 8.333333333333334e-06,
      "loss": 0.8448,
      "num_tokens": 2506575.0,
      "step": 6
    },
    {
      "epoch": 0.12280701754385964,
      "grad_norm": 1.4565364805839704,
      "learning_rate": 1e-05,
      "loss": 0.8213,
      "num_tokens": 2932014.0,
      "step": 7
    },
    {
      "epoch": 0.14035087719298245,
      "grad_norm": 2.3516845691230217,
      "learning_rate": 9.999184354855868e-06,
      "loss": 0.6275,
      "num_tokens": 3370254.0,
      "step": 8
    },
    {
      "epoch": 0.15789473684210525,
      "grad_norm": 1.8135724347288096,
      "learning_rate": 9.996737715102133e-06,
      "loss": 0.5931,
      "num_tokens": 3782161.0,
      "step": 9
    },
    {
      "epoch": 0.17543859649122806,
      "grad_norm": 1.4106997469551075,
      "learning_rate": 9.99266096766761e-06,
      "loss": 0.4933,
      "num_tokens": 4211970.0,
      "step": 10
    },
    {
      "epoch": 0.19298245614035087,
      "grad_norm": 0.8131031118317348,
      "learning_rate": 9.98695559040975e-06,
      "loss": 0.4229,
      "num_tokens": 4613367.0,
      "step": 11
    },
    {
      "epoch": 0.21052631578947367,
      "grad_norm": 0.7764134903704204,
      "learning_rate": 9.979623651578881e-06,
      "loss": 0.3888,
      "num_tokens": 5012484.0,
      "step": 12
    },
    {
      "epoch": 0.22807017543859648,
      "grad_norm": 0.37508885186302826,
      "learning_rate": 9.970667809068476e-06,
      "loss": 0.3783,
      "num_tokens": 5420781.0,
      "step": 13
    },
    {
      "epoch": 0.24561403508771928,
      "grad_norm": 0.3209831897748014,
      "learning_rate": 9.960091309451626e-06,
      "loss": 0.3308,
      "num_tokens": 5823019.0,
      "step": 14
    },
    {
      "epoch": 0.2631578947368421,
      "grad_norm": 0.2796386507143645,
      "learning_rate": 9.947897986804131e-06,
      "loss": 0.3456,
      "num_tokens": 6241736.0,
      "step": 15
    },
    {
      "epoch": 0.2807017543859649,
      "grad_norm": 0.25191330470494944,
      "learning_rate": 9.93409226131462e-06,
      "loss": 0.3172,
      "num_tokens": 6680804.0,
      "step": 16
    },
    {
      "epoch": 0.2982456140350877,
      "grad_norm": 0.2415593584049499,
      "learning_rate": 9.91867913768218e-06,
      "loss": 0.3253,
      "num_tokens": 7107939.0,
      "step": 17
    },
    {
      "epoch": 0.3157894736842105,
      "grad_norm": 0.3003770654260172,
      "learning_rate": 9.901664203302126e-06,
      "loss": 0.3213,
      "num_tokens": 7524281.0,
      "step": 18
    },
    {
      "epoch": 0.3333333333333333,
      "grad_norm": 0.3539478460817712,
      "learning_rate": 9.883053626240503e-06,
      "loss": 0.2966,
      "num_tokens": 7939796.0,
      "step": 19
    },
    {
      "epoch": 0.3508771929824561,
      "grad_norm": 0.2280690675113219,
      "learning_rate": 9.862854152998112e-06,
      "loss": 0.3043,
      "num_tokens": 8366406.0,
      "step": 20
    },
    {
      "epoch": 0.3684210526315789,
      "grad_norm": 0.21683554455375648,
      "learning_rate": 9.841073106064852e-06,
      "loss": 0.3057,
      "num_tokens": 8772526.0,
      "step": 21
    },
    {
      "epoch": 0.38596491228070173,
      "grad_norm": 0.19203110899204243,
      "learning_rate": 9.81771838126524e-06,
      "loss": 0.2899,
      "num_tokens": 9193705.0,
      "step": 22
    },
    {
      "epoch": 0.40350877192982454,
      "grad_norm": 0.21290762042544037,
      "learning_rate": 9.792798444896107e-06,
      "loss": 0.3073,
      "num_tokens": 9641419.0,
      "step": 23
    },
    {
      "epoch": 0.42105263157894735,
      "grad_norm": 0.1917960836643891,
      "learning_rate": 9.766322330657499e-06,
      "loss": 0.2921,
      "num_tokens": 10041552.0,
      "step": 24
    },
    {
      "epoch": 0.43859649122807015,
      "grad_norm": 0.18577210434865676,
      "learning_rate": 9.738299636377863e-06,
      "loss": 0.28,
      "num_tokens": 10476244.0,
      "step": 25
    },
    {
      "epoch": 0.45614035087719296,
      "grad_norm": 0.18309063662126532,
      "learning_rate": 9.70874052053476e-06,
      "loss": 0.2785,
      "num_tokens": 10863935.0,
      "step": 26
    },
    {
      "epoch": 0.47368421052631576,
      "grad_norm": 0.18434080604136077,
      "learning_rate": 9.677655698572326e-06,
      "loss": 0.2661,
      "num_tokens": 11259192.0,
      "step": 27
    },
    {
      "epoch": 0.49122807017543857,
      "grad_norm": 0.16956692098933204,
      "learning_rate": 9.645056439016827e-06,
      "loss": 0.273,
      "num_tokens": 11708724.0,
      "step": 28
    },
    {
      "epoch": 0.5087719298245614,
      "grad_norm": 0.16434051233500432,
      "learning_rate": 9.610954559391704e-06,
      "loss": 0.2667,
      "num_tokens": 12137403.0,
      "step": 29
    },
    {
      "epoch": 0.5263157894736842,
      "grad_norm": 0.17669427582556774,
      "learning_rate": 9.57536242193364e-06,
      "loss": 0.2584,
      "num_tokens": 12543239.0,
      "step": 30
    },
    {
      "epoch": 0.543859649122807,
      "grad_norm": 0.1652310917436867,
      "learning_rate": 9.538292929111114e-06,
      "loss": 0.2569,
      "num_tokens": 12940013.0,
      "step": 31
    },
    {
      "epoch": 0.5614035087719298,
      "grad_norm": 0.15646296852545927,
      "learning_rate": 9.499759518947156e-06,
      "loss": 0.2607,
      "num_tokens": 13409215.0,
      "step": 32
    },
    {
      "epoch": 0.5789473684210527,
      "grad_norm": 0.16249861759922887,
      "learning_rate": 9.459776160147941e-06,
      "loss": 0.2461,
      "num_tokens": 13806115.0,
      "step": 33
    },
    {
      "epoch": 0.5964912280701754,
      "grad_norm": 0.15613005334949157,
      "learning_rate": 9.418357347038999e-06,
      "loss": 0.2493,
      "num_tokens": 14248072.0,
      "step": 34
    },
    {
      "epoch": 0.6140350877192983,
      "grad_norm": 0.16063917924130028,
      "learning_rate": 9.375518094310904e-06,
      "loss": 0.255,
      "num_tokens": 14680852.0,
      "step": 35
    },
    {
      "epoch": 0.631578947368421,
      "grad_norm": 0.15840497254274827,
      "learning_rate": 9.331273931576306e-06,
      "loss": 0.2459,
      "num_tokens": 15109781.0,
      "step": 36
    },
    {
      "epoch": 0.6491228070175439,
      "grad_norm": 0.1549313386657812,
      "learning_rate": 9.285640897740316e-06,
      "loss": 0.2461,
      "num_tokens": 15556489.0,
      "step": 37
    },
    {
      "epoch": 0.6666666666666666,
      "grad_norm": 0.15967707273841447,
      "learning_rate": 9.238635535186247e-06,
      "loss": 0.2358,
      "num_tokens": 15975315.0,
      "step": 38
    },
    {
      "epoch": 0.6842105263157895,
      "grad_norm": 0.1664153879515682,
      "learning_rate": 9.19027488377886e-06,
      "loss": 0.2453,
      "num_tokens": 16366263.0,
      "step": 39
    },
    {
      "epoch": 0.7017543859649122,
      "grad_norm": 0.16085937077964602,
      "learning_rate": 9.140576474687263e-06,
      "loss": 0.2339,
      "num_tokens": 16780696.0,
      "step": 40
    },
    {
      "epoch": 0.7192982456140351,
      "grad_norm": 0.1525506357194124,
      "learning_rate": 9.0895583240297e-06,
      "loss": 0.2454,
      "num_tokens": 17226365.0,
      "step": 41
    },
    {
      "epoch": 0.7368421052631579,
      "grad_norm": 0.15414442592489289,
      "learning_rate": 9.037238926342544e-06,
      "loss": 0.2315,
      "num_tokens": 17651462.0,
      "step": 42
    },
    {
      "epoch": 0.7543859649122807,
      "grad_norm": 0.160113618784668,
      "learning_rate": 8.983637247875872e-06,
      "loss": 0.24,
      "num_tokens": 18064676.0,
      "step": 43
    },
    {
      "epoch": 0.7719298245614035,
      "grad_norm": 0.1592020616116995,
      "learning_rate": 8.92877271971802e-06,
      "loss": 0.236,
      "num_tokens": 18474727.0,
      "step": 44
    },
    {
      "epoch": 0.7894736842105263,
      "grad_norm": 0.15245337800135356,
      "learning_rate": 8.872665230751644e-06,
      "loss": 0.2405,
      "num_tokens": 18902046.0,
      "step": 45
    },
    {
      "epoch": 0.8070175438596491,
      "grad_norm": 0.20723449258057428,
      "learning_rate": 8.815335120443822e-06,
      "loss": 0.224,
      "num_tokens": 19319711.0,
      "step": 46
    },
    {
      "epoch": 0.8245614035087719,
      "grad_norm": 0.1474862726874557,
      "learning_rate": 8.756803171472817e-06,
      "loss": 0.2412,
      "num_tokens": 19757131.0,
      "step": 47
    },
    {
      "epoch": 0.8421052631578947,
      "grad_norm": 0.15052042969864995,
      "learning_rate": 8.69709060219416e-06,
      "loss": 0.2327,
      "num_tokens": 20194869.0,
      "step": 48
    },
    {
      "epoch": 0.8596491228070176,
      "grad_norm": 0.15385476085770403,
      "learning_rate": 8.636219058948823e-06,
      "loss": 0.2266,
      "num_tokens": 20597224.0,
      "step": 49
    },
    {
      "epoch": 0.8771929824561403,
      "grad_norm": 0.15228109362462297,
      "learning_rate": 8.574210608216206e-06,
      "loss": 0.239,
      "num_tokens": 21031895.0,
      "step": 50
    },
    {
      "epoch": 0.8947368421052632,
      "grad_norm": 0.14813164351615135,
      "learning_rate": 8.511087728614863e-06,
      "loss": 0.2271,
      "num_tokens": 21445108.0,
      "step": 51
    },
    {
      "epoch": 0.9122807017543859,
      "grad_norm": 0.14876791381975332,
      "learning_rate": 8.446873302753783e-06,
      "loss": 0.2277,
      "num_tokens": 21886020.0,
      "step": 52
    },
    {
      "epoch": 0.9298245614035088,
      "grad_norm": 0.14949287901477407,
      "learning_rate": 8.381590608937251e-06,
      "loss": 0.2395,
      "num_tokens": 22300331.0,
      "step": 53
    },
    {
      "epoch": 0.9473684210526315,
      "grad_norm": 0.1413875737037552,
      "learning_rate": 8.315263312726248e-06,
      "loss": 0.2321,
      "num_tokens": 22731352.0,
      "step": 54
    },
    {
      "epoch": 0.9649122807017544,
      "grad_norm": 0.14923397563157811,
      "learning_rate": 8.247915458359473e-06,
      "loss": 0.2169,
      "num_tokens": 23159887.0,
      "step": 55
    },
    {
      "epoch": 0.9824561403508771,
      "grad_norm": 0.15363902325932008,
      "learning_rate": 8.179571460037096e-06,
      "loss": 0.2357,
      "num_tokens": 23618660.0,
      "step": 56
    },
    {
      "epoch": 1.0,
      "grad_norm": 0.14687085036662337,
      "learning_rate": 8.110256093070393e-06,
      "loss": 0.2334,
      "num_tokens": 24039673.0,
      "step": 57
    },
    {
      "epoch": 1.0175438596491229,
      "grad_norm": 0.1476041876990848,
      "learning_rate": 8.039994484900463e-06,
      "loss": 0.2011,
      "num_tokens": 24434236.0,
      "step": 58
    },
    {
      "epoch": 1.0350877192982457,
      "grad_norm": 0.1419449695352204,
      "learning_rate": 7.968812105989316e-06,
      "loss": 0.2114,
      "num_tokens": 24853504.0,
      "step": 59
    },
    {
      "epoch": 1.0526315789473684,
      "grad_norm": 0.15337353318491537,
      "learning_rate": 7.896734760586599e-06,
      "loss": 0.2052,
      "num_tokens": 25255898.0,
      "step": 60
    },
    {
      "epoch": 1.0701754385964912,
      "grad_norm": 0.14113455068915454,
      "learning_rate": 7.82378857737533e-06,
      "loss": 0.2098,
      "num_tokens": 25667644.0,
      "step": 61
    },
    {
      "epoch": 1.087719298245614,
      "grad_norm": 0.14898616979585055,
      "learning_rate": 7.75e-06,
      "loss": 0.2041,
      "num_tokens": 26083880.0,
      "step": 62
    },
    {
      "epoch": 1.1052631578947367,
      "grad_norm": 0.15200676955309492,
      "learning_rate": 7.675395777480538e-06,
      "loss": 0.1911,
      "num_tokens": 26465737.0,
      "step": 63
    },
    {
      "epoch": 1.1228070175438596,
      "grad_norm": 0.15217717203165435,
      "learning_rate": 7.600002954515532e-06,
      "loss": 0.2113,
      "num_tokens": 26905038.0,
      "step": 64
    },
    {
      "epoch": 1.1403508771929824,
      "grad_norm": 0.1392806691058708,
      "learning_rate": 7.523848861678297e-06,
      "loss": 0.1981,
      "num_tokens": 27327803.0,
      "step": 65
    },
    {
      "epoch": 1.1578947368421053,
      "grad_norm": 0.14823210297550762,
      "learning_rate": 7.446961105509289e-06,
      "loss": 0.199,
      "num_tokens": 27743615.0,
      "step": 66
    },
    {
      "epoch": 1.1754385964912282,
      "grad_norm": 0.15025342094871685,
      "learning_rate": 7.36936755850849e-06,
      "loss": 0.2035,
      "num_tokens": 28159990.0,
      "step": 67
    },
    {
      "epoch": 1.1929824561403508,
      "grad_norm": 0.1424694767142918,
      "learning_rate": 7.2910963490313815e-06,
      "loss": 0.1997,
      "num_tokens": 28556831.0,
      "step": 68
    },
    {
      "epoch": 1.2105263157894737,
      "grad_norm": 0.14322092181407153,
      "learning_rate": 7.212175851092154e-06,
      "loss": 0.1961,
      "num_tokens": 28995194.0,
      "step": 69
    },
    {
      "epoch": 1.2280701754385965,
      "grad_norm": 0.1526511754370368,
      "learning_rate": 7.132634674077884e-06,
      "loss": 0.1921,
      "num_tokens": 29410916.0,
      "step": 70
    },
    {
      "epoch": 1.2456140350877192,
      "grad_norm": 0.139735447301268,
      "learning_rate": 7.052501652377368e-06,
      "loss": 0.2063,
      "num_tokens": 29913120.0,
      "step": 71
    },
    {
      "epoch": 1.263157894736842,
      "grad_norm": 0.14344116097027,
      "learning_rate": 6.971805834928399e-06,
      "loss": 0.2027,
      "num_tokens": 30351179.0,
      "step": 72
    },
    {
      "epoch": 1.280701754385965,
      "grad_norm": 0.1426174151835526,
      "learning_rate": 6.890576474687264e-06,
      "loss": 0.2064,
      "num_tokens": 30791604.0,
      "step": 73
    },
    {
      "epoch": 1.2982456140350878,
      "grad_norm": 0.14994921794042249,
      "learning_rate": 6.808843018024296e-06,
      "loss": 0.2065,
      "num_tokens": 31261315.0,
      "step": 74
    },
    {
      "epoch": 1.3157894736842106,
      "grad_norm": 0.15092392748993957,
      "learning_rate": 6.726635094049291e-06,
      "loss": 0.1908,
      "num_tokens": 31659426.0,
      "step": 75
    },
    {
      "epoch": 1.3333333333333333,
      "grad_norm": 0.14611148416178402,
      "learning_rate": 6.643982503870693e-06,
      "loss": 0.2051,
      "num_tokens": 32057379.0,
      "step": 76
    },
    {
      "epoch": 1.3508771929824561,
      "grad_norm": 0.1427497694651295,
      "learning_rate": 6.560915209792424e-06,
      "loss": 0.1992,
      "num_tokens": 32476948.0,
      "step": 77
    },
    {
      "epoch": 1.368421052631579,
      "grad_norm": 0.14625754427445514,
      "learning_rate": 6.477463324452286e-06,
      "loss": 0.1966,
      "num_tokens": 32880627.0,
      "step": 78
    },
    {
      "epoch": 1.3859649122807016,
      "grad_norm": 0.14428604329114683,
      "learning_rate": 6.393657099905854e-06,
      "loss": 0.2,
      "num_tokens": 33298268.0,
      "step": 79
    },
    {
      "epoch": 1.4035087719298245,
      "grad_norm": 0.14115165460163148,
      "learning_rate": 6.309526916659843e-06,
      "loss": 0.1961,
      "num_tokens": 33735278.0,
      "step": 80
    },
    {
      "epoch": 1.4210526315789473,
      "grad_norm": 0.14007382911380342,
      "learning_rate": 6.225103272658889e-06,
      "loss": 0.199,
      "num_tokens": 34180174.0,
      "step": 81
    },
    {
      "epoch": 1.4385964912280702,
      "grad_norm": 0.1464893959125568,
      "learning_rate": 6.140416772229785e-06,
      "loss": 0.1996,
      "num_tokens": 34600325.0,
      "step": 82
    },
    {
      "epoch": 1.456140350877193,
      "grad_norm": 0.14708924135818197,
      "learning_rate": 6.0554981149871276e-06,
      "loss": 0.1978,
      "num_tokens": 35028867.0,
      "step": 83
    },
    {
      "epoch": 1.4736842105263157,
      "grad_norm": 0.15137327410210355,
      "learning_rate": 5.970378084704441e-06,
      "loss": 0.1971,
      "num_tokens": 35465932.0,
      "step": 84
    },
    {
      "epoch": 1.4912280701754386,
      "grad_norm": 0.14187565136169733,
      "learning_rate": 5.88508753815478e-06,
      "loss": 0.1922,
      "num_tokens": 35857083.0,
      "step": 85
    },
    {
      "epoch": 1.5087719298245614,
      "grad_norm": 0.14750567085422958,
      "learning_rate": 5.799657393924869e-06,
      "loss": 0.1886,
      "num_tokens": 36243816.0,
      "step": 86
    },
    {
      "epoch": 1.526315789473684,
      "grad_norm": 0.13584876516210329,
      "learning_rate": 5.714118621206843e-06,
      "loss": 0.1949,
      "num_tokens": 36682515.0,
      "step": 87
    },
    {
      "epoch": 1.543859649122807,
      "grad_norm": 0.13138592337104565,
      "learning_rate": 5.6285022285716325e-06,
      "loss": 0.1848,
      "num_tokens": 37108482.0,
      "step": 88
    },
    {
      "epoch": 1.5614035087719298,
      "grad_norm": 0.14022318239136972,
      "learning_rate": 5.542839252728096e-06,
      "loss": 0.1986,
      "num_tokens": 37546779.0,
      "step": 89
    },
    {
      "epoch": 1.5789473684210527,
      "grad_norm": 0.14886854943483496,
      "learning_rate": 5.457160747271906e-06,
      "loss": 0.2025,
      "num_tokens": 37965357.0,
      "step": 90
    },
    {
      "epoch": 1.5964912280701755,
      "grad_norm": 0.14038303393485826,
      "learning_rate": 5.371497771428368e-06,
      "loss": 0.1975,
      "num_tokens": 38406113.0,
      "step": 91
    },
    {
      "epoch": 1.6140350877192984,
      "grad_norm": 0.14458491708947513,
      "learning_rate": 5.2858813787931605e-06,
      "loss": 0.1872,
      "num_tokens": 38800413.0,
      "step": 92
    },
    {
      "epoch": 1.631578947368421,
      "grad_norm": 0.1480350806263145,
      "learning_rate": 5.2003426060751324e-06,
      "loss": 0.1968,
      "num_tokens": 39200782.0,
      "step": 93
    },
    {
      "epoch": 1.6491228070175439,
      "grad_norm": 0.13730868120439885,
      "learning_rate": 5.114912461845223e-06,
      "loss": 0.2098,
      "num_tokens": 39666940.0,
      "step": 94
    },
    {
      "epoch": 1.6666666666666665,
      "grad_norm": 0.13357583490843425,
      "learning_rate": 5.02962191529556e-06,
      "loss": 0.2005,
      "num_tokens": 40103554.0,
      "step": 95
    },
    {
      "epoch": 1.6842105263157894,
      "grad_norm": 0.13277577033293314,
      "learning_rate": 4.944501885012875e-06,
      "loss": 0.1894,
      "num_tokens": 40523764.0,
      "step": 96
    },
    {
      "epoch": 1.7017543859649122,
      "grad_norm": 0.1445531104714593,
      "learning_rate": 4.859583227770218e-06,
      "loss": 0.1966,
      "num_tokens": 40923093.0,
      "step": 97
    },
    {
      "epoch": 1.719298245614035,
      "grad_norm": 0.1346892995124459,
      "learning_rate": 4.774896727341113e-06,
      "loss": 0.2085,
      "num_tokens": 41388977.0,
      "step": 98
    },
    {
      "epoch": 1.736842105263158,
      "grad_norm": 0.13950660376280916,
      "learning_rate": 4.6904730833401575e-06,
      "loss": 0.1993,
      "num_tokens": 41804049.0,
      "step": 99
    },
    {
      "epoch": 1.7543859649122808,
      "grad_norm": 0.13419027046897863,
      "learning_rate": 4.606342900094147e-06,
      "loss": 0.197,
      "num_tokens": 42212384.0,
      "step": 100
    },
    {
      "epoch": 1.7719298245614035,
      "grad_norm": 0.13471704895191902,
      "learning_rate": 4.5225366755477165e-06,
      "loss": 0.1991,
      "num_tokens": 42626890.0,
      "step": 101
    },
    {
      "epoch": 1.7894736842105263,
      "grad_norm": 0.1370431857295371,
      "learning_rate": 4.439084790207577e-06,
      "loss": 0.181,
      "num_tokens": 43045185.0,
      "step": 102
    },
    {
      "epoch": 1.807017543859649,
      "grad_norm": 0.13585245130540036,
      "learning_rate": 4.35601749612931e-06,
      "loss": 0.1926,
      "num_tokens": 43469318.0,
      "step": 103
    },
    {
      "epoch": 1.8245614035087718,
      "grad_norm": 0.14177193142263728,
      "learning_rate": 4.273364905950711e-06,
      "loss": 0.1883,
      "num_tokens": 43860735.0,
      "step": 104
    },
    {
      "epoch": 1.8421052631578947,
      "grad_norm": 0.13934124481189936,
      "learning_rate": 4.191156981975704e-06,
      "loss": 0.186,
      "num_tokens": 44259177.0,
      "step": 105
    },
    {
      "epoch": 1.8596491228070176,
      "grad_norm": 0.13757581593440663,
      "learning_rate": 4.109423525312738e-06,
      "loss": 0.1813,
      "num_tokens": 44652826.0,
      "step": 106
    },
    {
      "epoch": 1.8771929824561404,
      "grad_norm": 0.12994085331633362,
      "learning_rate": 4.028194165071603e-06,
      "loss": 0.2007,
      "num_tokens": 45110456.0,
      "step": 107
    },
    {
      "epoch": 1.8947368421052633,
      "grad_norm": 0.1315821088883057,
      "learning_rate": 3.9474983476226335e-06,
      "loss": 0.1984,
      "num_tokens": 45561128.0,
      "step": 108
    },
    {
      "epoch": 1.912280701754386,
      "grad_norm": 0.13466926193354203,
      "learning_rate": 3.867365325922116e-06,
      "loss": 0.195,
      "num_tokens": 45999442.0,
      "step": 109
    },
    {
      "epoch": 1.9298245614035088,
      "grad_norm": 0.1407044279595099,
      "learning_rate": 3.7878241489078473e-06,
      "loss": 0.1957,
      "num_tokens": 46405471.0,
      "step": 110
    },
    {
      "epoch": 1.9473684210526314,
      "grad_norm": 0.13869286010098864,
      "learning_rate": 3.7089036509686216e-06,
      "loss": 0.2096,
      "num_tokens": 46837446.0,
      "step": 111
    },
    {
      "epoch": 1.9649122807017543,
      "grad_norm": 0.13744222256799704,
      "learning_rate": 3.630632441491512e-06,
      "loss": 0.1874,
      "num_tokens": 47238683.0,
      "step": 112
    },
    {
      "epoch": 1.9824561403508771,
      "grad_norm": 0.13584540109090137,
      "learning_rate": 3.5530388944907124e-06,
      "loss": 0.1944,
      "num_tokens": 47661310.0,
      "step": 113
    },
    {
      "epoch": 2.0,
      "grad_norm": 0.14078927259649465,
      "learning_rate": 3.476151138321705e-06,
      "loss": 0.1893,
      "num_tokens": 48079346.0,
      "step": 114
    },
    {
      "epoch": 2.017543859649123,
      "grad_norm": 0.14801647762626233,
      "learning_rate": 3.3999970454844688e-06,
      "loss": 0.1833,
      "num_tokens": 48481444.0,
      "step": 115
    },
    {
      "epoch": 2.0350877192982457,
      "grad_norm": 0.14207211024797958,
      "learning_rate": 3.3246042225194626e-06,
      "loss": 0.1972,
      "num_tokens": 48904379.0,
      "step": 116
    },
    {
      "epoch": 2.0526315789473686,
      "grad_norm": 0.13459996651406178,
      "learning_rate": 3.2500000000000015e-06,
      "loss": 0.1734,
      "num_tokens": 49314280.0,
      "step": 117
    },
    {
      "epoch": 2.0701754385964914,
      "grad_norm": 0.13420885166445595,
      "learning_rate": 3.176211422624672e-06,
      "loss": 0.1748,
      "num_tokens": 49720807.0,
      "step": 118
    },
    {
      "epoch": 2.087719298245614,
      "grad_norm": 0.1396854400821284,
      "learning_rate": 3.103265239413401e-06,
      "loss": 0.1781,
      "num_tokens": 50151950.0,
      "step": 119
    },
    {
      "epoch": 2.1052631578947367,
      "grad_norm": 0.14321980578287785,
      "learning_rate": 3.0311878940106864e-06,
      "loss": 0.182,
      "num_tokens": 50574817.0,
      "step": 120
    },
    {
      "epoch": 2.1228070175438596,
      "grad_norm": 0.1402669344348115,
      "learning_rate": 2.9600055150995397e-06,
      "loss": 0.1804,
      "num_tokens": 50991178.0,
      "step": 121
    },
    {
      "epoch": 2.1403508771929824,
      "grad_norm": 0.14268249286817555,
      "learning_rate": 2.889743906929609e-06,
      "loss": 0.1701,
      "num_tokens": 51370437.0,
      "step": 122
    },
    {
      "epoch": 2.1578947368421053,
      "grad_norm": 0.13769468594260603,
      "learning_rate": 2.820428539962905e-06,
      "loss": 0.1803,
      "num_tokens": 51807382.0,
      "step": 123
    },
    {
      "epoch": 2.175438596491228,
      "grad_norm": 0.1418977810735795,
      "learning_rate": 2.7520845416405285e-06,
      "loss": 0.1867,
      "num_tokens": 52214420.0,
      "step": 124
    },
    {
      "epoch": 2.192982456140351,
      "grad_norm": 0.1357482680971467,
      "learning_rate": 2.6847366872737535e-06,
      "loss": 0.1855,
      "num_tokens": 52648228.0,
      "step": 125
    },
    {
      "epoch": 2.2105263157894735,
      "grad_norm": 0.13758984714319883,
      "learning_rate": 2.618409391062751e-06,
      "loss": 0.1928,
      "num_tokens": 53085345.0,
      "step": 126
    },
    {
      "epoch": 2.2280701754385963,
      "grad_norm": 0.13835249455110674,
      "learning_rate": 2.5531266972462176e-06,
      "loss": 0.1753,
      "num_tokens": 53493140.0,
      "step": 127
    },
    {
      "epoch": 2.245614035087719,
      "grad_norm": 0.13785872746102823,
      "learning_rate": 2.4889122713851397e-06,
      "loss": 0.1734,
      "num_tokens": 53922832.0,
      "step": 128
    },
    {
      "epoch": 2.263157894736842,
      "grad_norm": 0.1369141853241696,
      "learning_rate": 2.425789391783796e-06,
      "loss": 0.1771,
      "num_tokens": 54319035.0,
      "step": 129
    },
    {
      "epoch": 2.280701754385965,
      "grad_norm": 0.13274261857238875,
      "learning_rate": 2.36378094105118e-06,
      "loss": 0.1759,
      "num_tokens": 54754161.0,
      "step": 130
    },
    {
      "epoch": 2.2982456140350878,
      "grad_norm": 0.13374187150373806,
      "learning_rate": 2.302909397805841e-06,
      "loss": 0.1757,
      "num_tokens": 55177972.0,
      "step": 131
    },
    {
      "epoch": 2.3157894736842106,
      "grad_norm": 0.14056010545232225,
      "learning_rate": 2.2431968285271843e-06,
      "loss": 0.1762,
      "num_tokens": 55567532.0,
      "step": 132
    },
    {
      "epoch": 2.3333333333333335,
      "grad_norm": 0.14440093167538778,
      "learning_rate": 2.1846648795561777e-06,
      "loss": 0.1789,
      "num_tokens": 55997257.0,
      "step": 133
    },
1077
+ "epoch": 2.3508771929824563,
1078
+ "grad_norm": 0.13209921359488944,
1079
+ "learning_rate": 2.1273347692483574e-06,
1080
+ "loss": 0.1683,
1081
+ "num_tokens": 56435780.0,
1082
+ "step": 134
1083
+ },
1084
+ {
1085
+ "epoch": 2.3684210526315788,
1086
+ "grad_norm": 0.13075669290172312,
1087
+ "learning_rate": 2.071227280281982e-06,
1088
+ "loss": 0.1766,
1089
+ "num_tokens": 56880449.0,
1090
+ "step": 135
1091
+ },
1092
+ {
1093
+ "epoch": 2.3859649122807016,
1094
+ "grad_norm": 0.15520159951824955,
1095
+ "learning_rate": 2.016362752124129e-06,
1096
+ "loss": 0.1766,
1097
+ "num_tokens": 57298925.0,
1098
+ "step": 136
1099
+ },
1100
+ {
1101
+ "epoch": 2.4035087719298245,
1102
+ "grad_norm": 0.13674994090087955,
1103
+ "learning_rate": 1.9627610736574575e-06,
1104
+ "loss": 0.1689,
1105
+ "num_tokens": 57702294.0,
1106
+ "step": 137
1107
+ },
1108
+ {
1109
+ "epoch": 2.4210526315789473,
1110
+ "grad_norm": 0.13321588166669115,
1111
+ "learning_rate": 1.9104416759703017e-06,
1112
+ "loss": 0.1758,
1113
+ "num_tokens": 58154418.0,
1114
+ "step": 138
1115
+ },
1116
+ {
1117
+ "epoch": 2.43859649122807,
1118
+ "grad_norm": 0.13632315461262803,
1119
+ "learning_rate": 1.8594235253127373e-06,
1120
+ "loss": 0.1767,
1121
+ "num_tokens": 58579683.0,
1122
+ "step": 139
1123
+ },
1124
+ {
1125
+ "epoch": 2.456140350877193,
1126
+ "grad_norm": 0.1288153252214618,
1127
+ "learning_rate": 1.8097251162211405e-06,
1128
+ "loss": 0.1811,
1129
+ "num_tokens": 59051562.0,
1130
+ "step": 140
1131
+ },
1132
+ {
1133
+ "epoch": 2.473684210526316,
1134
+ "grad_norm": 0.12987465966750822,
1135
+ "learning_rate": 1.7613644648137543e-06,
1136
+ "loss": 0.1774,
1137
+ "num_tokens": 59501698.0,
1138
+ "step": 141
1139
+ },
1140
+ {
1141
+ "epoch": 2.4912280701754383,
1142
+ "grad_norm": 0.1374425426257984,
1143
+ "learning_rate": 1.7143591022596846e-06,
1144
+ "loss": 0.1944,
1145
+ "num_tokens": 59955592.0,
1146
+ "step": 142
1147
+ },
1148
+ {
1149
+ "epoch": 2.5087719298245617,
1150
+ "grad_norm": 0.13531966438989285,
1151
+ "learning_rate": 1.6687260684236943e-06,
1152
+ "loss": 0.1805,
1153
+ "num_tokens": 60381274.0,
1154
+ "step": 143
1155
+ },
1156
+ {
1157
+ "epoch": 2.526315789473684,
1158
+ "grad_norm": 0.134925096600037,
1159
+ "learning_rate": 1.6244819056890975e-06,
1160
+ "loss": 0.171,
1161
+ "num_tokens": 60788438.0,
1162
+ "step": 144
1163
+ },
1164
+ {
1165
+ "epoch": 2.543859649122807,
1166
+ "grad_norm": 0.1407719819848176,
1167
+ "learning_rate": 1.5816426529610035e-06,
1168
+ "loss": 0.1714,
1169
+ "num_tokens": 61184551.0,
1170
+ "step": 145
1171
+ },
1172
+ {
1173
+ "epoch": 2.56140350877193,
1174
+ "grad_norm": 0.1359063010745857,
1175
+ "learning_rate": 1.5402238398520614e-06,
1176
+ "loss": 0.1758,
1177
+ "num_tokens": 61619839.0,
1178
+ "step": 146
1179
+ },
1180
+ {
1181
+ "epoch": 2.5789473684210527,
1182
+ "grad_norm": 0.13246835060504009,
1183
+ "learning_rate": 1.5002404810528452e-06,
1184
+ "loss": 0.1739,
1185
+ "num_tokens": 62045889.0,
1186
+ "step": 147
1187
+ },
1188
+ {
1189
+ "epoch": 2.5964912280701755,
1190
+ "grad_norm": 0.13964950522945552,
1191
+ "learning_rate": 1.4617070708888882e-06,
1192
+ "loss": 0.1718,
1193
+ "num_tokens": 62437900.0,
1194
+ "step": 148
1195
+ },
1196
+ {
1197
+ "epoch": 2.6140350877192984,
1198
+ "grad_norm": 0.13372294299780066,
1199
+ "learning_rate": 1.4246375780663613e-06,
1200
+ "loss": 0.1661,
1201
+ "num_tokens": 62843119.0,
1202
+ "step": 149
1203
+ },
1204
+ {
1205
+ "epoch": 2.6315789473684212,
1206
+ "grad_norm": 0.13805400712254826,
1207
+ "learning_rate": 1.389045440608296e-06,
1208
+ "loss": 0.1828,
1209
+ "num_tokens": 63262284.0,
1210
+ "step": 150
1211
+ },
1212
+ {
1213
+ "epoch": 2.6491228070175437,
1214
+ "grad_norm": 0.13808339180910903,
1215
+ "learning_rate": 1.354943560983175e-06,
1216
+ "loss": 0.1803,
1217
+ "num_tokens": 63701914.0,
1218
+ "step": 151
1219
+ },
1220
+ {
1221
+ "epoch": 2.6666666666666665,
1222
+ "grad_norm": 0.13165183261262675,
1223
+ "learning_rate": 1.3223443014276738e-06,
1224
+ "loss": 0.1774,
1225
+ "num_tokens": 64149759.0,
1226
+ "step": 152
1227
+ },
1228
+ {
1229
+ "epoch": 2.6842105263157894,
1230
+ "grad_norm": 0.1319243140393186,
1231
+ "learning_rate": 1.2912594794652406e-06,
1232
+ "loss": 0.1799,
1233
+ "num_tokens": 64602514.0,
1234
+ "step": 153
1235
+ },
1236
+ {
1237
+ "epoch": 2.7017543859649122,
1238
+ "grad_norm": 0.13538494698388223,
1239
+ "learning_rate": 1.2617003636221394e-06,
1240
+ "loss": 0.1694,
1241
+ "num_tokens": 64992832.0,
1242
+ "step": 154
1243
+ },
1244
+ {
1245
+ "epoch": 2.719298245614035,
1246
+ "grad_norm": 0.13498616412912304,
1247
+ "learning_rate": 1.2336776693425028e-06,
1248
+ "loss": 0.1707,
1249
+ "num_tokens": 65389001.0,
1250
+ "step": 155
1251
+ },
1252
+ {
1253
+ "epoch": 2.736842105263158,
1254
+ "grad_norm": 0.12924445948277097,
1255
+ "learning_rate": 1.2072015551038933e-06,
1256
+ "loss": 0.1692,
1257
+ "num_tokens": 65842297.0,
1258
+ "step": 156
1259
+ },
1260
+ {
1261
+ "epoch": 2.754385964912281,
1262
+ "grad_norm": 0.13206490698518994,
1263
+ "learning_rate": 1.1822816187347625e-06,
1264
+ "loss": 0.1719,
1265
+ "num_tokens": 66254756.0,
1266
+ "step": 157
1267
+ },
1268
+ {
1269
+ "epoch": 2.7719298245614032,
1270
+ "grad_norm": 0.13946311479316087,
1271
+ "learning_rate": 1.1589268939351499e-06,
1272
+ "loss": 0.1824,
1273
+ "num_tokens": 66653349.0,
1274
+ "step": 158
1275
+ },
1276
+ {
1277
+ "epoch": 2.7894736842105265,
1278
+ "grad_norm": 0.12958606648427337,
1279
+ "learning_rate": 1.1371458470018896e-06,
1280
+ "loss": 0.1758,
1281
+ "num_tokens": 67089072.0,
1282
+ "step": 159
1283
+ },
1284
+ {
1285
+ "epoch": 2.807017543859649,
1286
+ "grad_norm": 0.12836201287391627,
1287
+ "learning_rate": 1.1169463737594995e-06,
1288
+ "loss": 0.1725,
1289
+ "num_tokens": 67530318.0,
1290
+ "step": 160
1291
+ },
1292
+ {
1293
+ "epoch": 2.824561403508772,
1294
+ "grad_norm": 0.1312877766550337,
1295
+ "learning_rate": 1.0983357966978747e-06,
1296
+ "loss": 0.17,
1297
+ "num_tokens": 67943670.0,
1298
+ "step": 161
1299
+ },
1300
+ {
1301
+ "epoch": 2.8421052631578947,
1302
+ "grad_norm": 0.1292736067143446,
1303
+ "learning_rate": 1.0813208623178199e-06,
1304
+ "loss": 0.1759,
1305
+ "num_tokens": 68380170.0,
1306
+ "step": 162
1307
+ },
1308
+ {
1309
+ "epoch": 2.8596491228070176,
1310
+ "grad_norm": 0.13227684289793154,
1311
+ "learning_rate": 1.0659077386853817e-06,
1312
+ "loss": 0.1719,
1313
+ "num_tokens": 68808114.0,
1314
+ "step": 163
1315
+ },
1316
+ {
1317
+ "epoch": 2.8771929824561404,
1318
+ "grad_norm": 0.13694616326909123,
1319
+ "learning_rate": 1.0521020131958692e-06,
1320
+ "loss": 0.168,
1321
+ "num_tokens": 69191512.0,
1322
+ "step": 164
1323
+ },
1324
+ {
1325
+ "epoch": 2.8947368421052633,
1326
+ "grad_norm": 0.135480809490847,
1327
+ "learning_rate": 1.0399086905483752e-06,
1328
+ "loss": 0.1659,
1329
+ "num_tokens": 69582710.0,
1330
+ "step": 165
1331
+ },
1332
+ {
1333
+ "epoch": 2.912280701754386,
1334
+ "grad_norm": 0.13656650923592858,
1335
+ "learning_rate": 1.0293321909315242e-06,
1336
+ "loss": 0.1764,
1337
+ "num_tokens": 69995390.0,
1338
+ "step": 166
1339
+ },
1340
+ {
1341
+ "epoch": 2.9298245614035086,
1342
+ "grad_norm": 0.13037307085567856,
1343
+ "learning_rate": 1.0203763484211196e-06,
1344
+ "loss": 0.1737,
1345
+ "num_tokens": 70418613.0,
1346
+ "step": 167
1347
+ },
1348
+ {
1349
+ "epoch": 2.9473684210526314,
1350
+ "grad_norm": 0.13386161162103719,
1351
+ "learning_rate": 1.0130444095902514e-06,
1352
+ "loss": 0.1731,
1353
+ "num_tokens": 70848952.0,
1354
+ "step": 168
1355
+ },
1356
+ {
1357
+ "epoch": 2.9649122807017543,
1358
+ "grad_norm": 0.13477603098195323,
1359
+ "learning_rate": 1.0073390323323897e-06,
1360
+ "loss": 0.1859,
1361
+ "num_tokens": 71284648.0,
1362
+ "step": 169
1363
+ },
1364
+ {
1365
+ "epoch": 2.982456140350877,
1366
+ "grad_norm": 0.13335006132129937,
1367
+ "learning_rate": 1.0032622848978689e-06,
1368
+ "loss": 0.1727,
1369
+ "num_tokens": 71712021.0,
1370
+ "step": 170
1371
+ },
1372
+ {
1373
+ "epoch": 3.0,
1374
+ "grad_norm": 0.13952787638079295,
1375
+ "learning_rate": 1.000815645144134e-06,
1376
+ "loss": 0.1803,
1377
+ "num_tokens": 72119019.0,
1378
+ "step": 171
1379
+ },
1380
+ {
1381
+ "epoch": 3.0,
1382
+ "step": 171,
1383
+ "total_flos": 2.3168881532705178e+17,
1384
+ "train_loss": 0.25350178002614027,
1385
+ "train_runtime": 2712.8956,
1386
+ "train_samples_per_second": 8.068,
1387
+ "train_steps_per_second": 0.063
1388
+ }
1389
+ ],
1390
+ "logging_steps": 1,
1391
+ "max_steps": 171,
1392
+ "num_input_tokens_seen": 0,
1393
+ "num_train_epochs": 3,
1394
+ "save_steps": 500,
1395
+ "stateful_callbacks": {
1396
+ "TrainerControl": {
1397
+ "args": {
1398
+ "should_epoch_stop": false,
1399
+ "should_evaluate": false,
1400
+ "should_log": false,
1401
+ "should_save": true,
1402
+ "should_training_stop": true
1403
+ },
1404
+ "attributes": {}
1405
+ }
1406
+ },
1407
+ "total_flos": 2.3168881532705178e+17,
1408
+ "train_batch_size": 8,
1409
+ "trial_name": null,
1410
+ "trial_params": null
1411
+ }