tmd-rahul committed
Commit ccc5da2 · verified · 1 parent: d5499ab

End of training

Files changed (1): README.md (+10 -8)
README.md CHANGED
@@ -1,7 +1,7 @@
 ---
-library_name: transformers
-license: mit
-base_model: tmd-rahul/tmd-chat-bot
+library_name: peft
+license: gemma
+base_model: google/gemma-2b-it
 tags:
 - generated_from_trainer
 model-index:
@@ -14,7 +14,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # tmd-chat-bot
 
-This model is a fine-tuned version of [tmd-rahul/tmd-chat-bot](https://huggingface.co/tmd-rahul/tmd-chat-bot) on the None dataset.
+This model is a fine-tuned version of [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it) on the None dataset.
 
 ## Model description
 
@@ -33,13 +33,14 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 5e-05
-- train_batch_size: 8
+- learning_rate: 0.0002
+- train_batch_size: 2
 - eval_batch_size: 8
 - seed: 42
 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - num_epochs: 2
+- mixed_precision_training: Native AMP
 
 ### Training results
 
@@ -47,7 +48,8 @@ The following hyperparameters were used during training:
 
 ### Framework versions
 
-- Transformers 4.50.3
+- PEFT 0.14.0
+- Transformers 4.50.0
 - Pytorch 2.6.0+cu124
 - Datasets 3.5.0
-- Tokenizers 0.21.1
+- Tokenizers 0.21.1
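
For readers who want to reproduce the setup, the updated hyperparameter list maps onto the standard transformers Trainer configuration roughly as follows. This is a minimal sketch, not the repo's actual training script: output_dir is a placeholder, and fp16=True is an assumed mapping for what the card reports as "Native AMP" mixed precision.

```python
# Sketch of TrainingArguments mirroring the hyperparameters in the updated card.
# Not the repo's script; "tmd-chat-bot-out" is a placeholder path.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="tmd-chat-bot-out",   # placeholder, not from the commit
    learning_rate=2e-4,              # learning_rate: 0.0002
    per_device_train_batch_size=2,   # train_batch_size: 2
    per_device_eval_batch_size=8,    # eval_batch_size: 8
    seed=42,                         # seed: 42
    optim="adamw_torch",             # OptimizerNames.ADAMW_TORCH
    adam_beta1=0.9,                  # betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,               # epsilon=1e-08
    lr_scheduler_type="linear",      # lr_scheduler_type: linear
    num_train_epochs=2,              # num_epochs: 2
    fp16=True,                       # assumed mapping for "Native AMP"
)
```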
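The header change from library_name: transformers to library_name: peft, together with the new base_model: google/gemma-2b-it, suggests the repo now ships a PEFT adapter rather than full model weights. If so, loading would look roughly like the sketch below; it assumes the adapter weights live in tmd-rahul/tmd-chat-bot and that you have access to the gated Gemma base model.

```python
# Sketch: loading a PEFT adapter on top of the Gemma base model.
# Assumes tmd-rahul/tmd-chat-bot stores adapter weights, as library_name: peft implies.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it")
model = PeftModel.from_pretrained(base, "tmd-rahul/tmd-chat-bot")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
```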