Commit 723e5a1 · verified · 1 Parent(s): 615a5fc
Gabe-Thomp committed: Training in progress, epoch 1

README.md CHANGED
@@ -1,19 +1,17 @@
 ---
 base_model: google/gemma-2-9b-it
-datasets: Gabe-Thomp/gemma-bayesian-training
 library_name: transformers
 model_name: gemma-sft-bayesian-lr2.0e-06_assistant_only
 tags:
 - generated_from_trainer
 - sft
 - trl
-- alignment-handbook
 licence: license
 ---
 
 # Model Card for gemma-sft-bayesian-lr2.0e-06_assistant_only
 
-This model is a fine-tuned version of [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it) on the [Gabe-Thomp/gemma-bayesian-training](https://huggingface.co/datasets/Gabe-Thomp/gemma-bayesian-training) dataset.
+This model is a fine-tuned version of [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it).
 It has been trained using [TRL](https://github.com/huggingface/trl).
 
 ## Quick start
@@ -29,7 +27,7 @@ print(output["generated_text"])
 
 ## Training procedure
 
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/gabe-t-asher-nc-state-university/huggingface/runs/bmwscn6r)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/gabe-t-asher-nc-state-university/huggingface/runs/05ouaw6w)
 
 
 This model was trained with SFT.
chat_template.jinja CHANGED
@@ -1,6 +1,6 @@
 {{ bos_token }}{% if messages[0]['role'] == 'system' %}{{ raise_exception('System role not supported') }}{% endif %}{% for message in messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if (message['role'] == 'assistant') %}{% set role = 'model' %}{% else %}{% set role = message['role'] %}{% endif %}{% if message['role'] == 'assistant' %}{{ '<start_of_turn>' + role + '
-' }}{% generation %}{{ message['content'] | trim }}{% endgeneration %}{{ '<end_of_turn>
-' }}{% else %}{{ '<start_of_turn>' + role + '
+' }}{% generation %}{{ message['content'] | trim }}{{ '<end_of_turn>
+' }}{% endgeneration %}{% else %}{{ '<start_of_turn>' + role + '
 ' + message['content'] | trim + '<end_of_turn>
 ' }}{% endif %}{% endfor %}{% if add_generation_prompt %}{{'<start_of_turn>model
 '}}{% endif %}
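The template change above only moves `{% endgeneration %}` so that the `<end_of_turn>` token falls inside the generation span (i.e. the end-of-turn token is treated as part of the assistant's trainable tokens). The rendered turn format itself is unchanged. As a hedged illustration, the Gemma-2 turn format that this template encodes can be sketched in plain Python; the `{% generation %}` markers are a transformers/TRL extension for building an assistant-token mask and are omitted here, and the function name is illustrative, not part of any library:

```python
def render_gemma_chat(messages, add_generation_prompt=False, bos_token="<bos>"):
    """Sketch of the Gemma-2 chat turn format (generation markers omitted)."""
    if messages and messages[0]["role"] == "system":
        raise ValueError("System role not supported")
    out = bos_token
    for i, message in enumerate(messages):
        # Roles must alternate user/assistant/user/assistant/...
        if (message["role"] == "user") != (i % 2 == 0):
            raise ValueError("Conversation roles must alternate user/assistant/...")
        # Gemma uses 'model' as the assistant role name inside turn markers.
        role = "model" if message["role"] == "assistant" else message["role"]
        out += f"<start_of_turn>{role}\n{message['content'].strip()}<end_of_turn>\n"
    if add_generation_prompt:
        out += "<start_of_turn>model\n"
    return out

prompt = render_gemma_chat(
    [{"role": "user", "content": "Hello!"}], add_generation_prompt=True
)
print(prompt)
# → <bos><start_of_turn>user\nHello!<end_of_turn>\n<start_of_turn>model\n
```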
config.json CHANGED
@@ -72,6 +72,6 @@
   "sliding_window_size": 4096,
   "torch_dtype": "bfloat16",
   "transformers_version": "4.54.0",
-  "use_cache": true,
+  "use_cache": false,
   "vocab_size": 256000
 }
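Flipping `use_cache` to `false` is a common side effect of SFT runs: the KV cache is unused under teacher forcing and conflicts with gradient checkpointing, so trainers typically disable it, and it is usually re-enabled for inference. A minimal sketch of that post-training edit, using an abbreviated config fragment (the rest of the real config is elided):

```python
import json

# Abbreviated stand-in for the saved config.json; only the fields shown in the
# diff above are reproduced here.
config_text = """
{
  "torch_dtype": "bfloat16",
  "transformers_version": "4.54.0",
  "use_cache": false,
  "vocab_size": 256000
}
"""
config = json.loads(config_text)

# Re-enable the KV cache for generation after training disabled it.
config["use_cache"] = True
print(json.dumps(config, indent=2))
```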
model-00001-of-00004.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:6cd0028f7fb45b48035d8f770e881fc8ce44f2ec3960656143a1a1835cb8e01b
+oid sha256:b0e9b6419922c365538b58a07f0e29b98bd2a79936bb7b7a66d5537e54e6a3b1
 size 4903351912
model-00002-of-00004.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:7fd8b44783471716e97ccaa83944c6b44445c5945b7125bf8bb56e57d58b1be6
+oid sha256:091ef4bf5b487a45974945141147a5666709fa6e19d9048586dc44a7ae01669e
 size 4947570872
model-00003-of-00004.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:045ccb7fcdee39a7f752fa42fd2f86d7f6f4545c37c6ac244f5c02215051ce58
+oid sha256:7cf87443f82b686765d3e8d7e903f04ea19bb595d713974a89e4369d0173fe0d
 size 4962221464
model-00004-of-00004.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:840953281537c136b6fc079673c1778b492fbd59a720c16c6ade2aad3e503b44
+oid sha256:0185f460d528dfba56c4f6267bec912091e0beeac8afc1fe0ab58322ec03bcd8
 size 3670322200
runs/Jul30_14-54-28_bobu-l40s-1.csail.mit.edu/events.out.tfevents.1753901697.bobu-l40s-1.csail.mit.edu.2620400.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f39e58af573542ebb208a598f1f97abab1aef5974bbbfa05beb441193d195a46
+size 12151
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:eb31ecbf056fd72b5c5be8c2cb4ff6980cf08edc3272166c60ed7d6d57807e7c
+oid sha256:f0120cad0893b90e3acdafd196974e61b7c85a1eb907777fe7541fda3c1ad83e
 size 8056
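The safetensors, event-log, and training_args entries above are Git LFS pointer files, not the binaries themselves: three "key value" lines giving the spec version, a `sha256:` object id, and the payload size in bytes. A hedged sketch of parsing one such pointer (the helper name is illustrative; real tooling should use the git-lfs client):

```python
def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file into its version, hash, and size fields."""
    # Each non-empty line is "key value"; split on the first space only.
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)
    return {
        "version": fields["version"],
        "hash_algo": algo,
        "digest": digest,
        "size": int(fields["size"]),
    }

# Pointer content taken from the model-00001-of-00004.safetensors diff above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:b0e9b6419922c365538b58a07f0e29b98bd2a79936bb7b7a66d5537e54e6a3b1
size 4903351912
"""
info = parse_lfs_pointer(pointer)
print(info["hash_algo"], len(info["digest"]), info["size"])
# → sha256 64 4903351912
```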