# mediconcenoss-v1
This model is a fine-tuned version of openai/gpt-oss-20b on the mediconcen_finetune_train dataset. It achieves the following results on the evaluation set:
- Loss: 0.3587
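
The framework versions listed below include PEFT, so this repository most likely contains a LoRA-style adapter for `openai/gpt-oss-20b` rather than standalone weights. A minimal loading sketch follows; the repo and base-model ids come from this card, while everything else, including the prompt, is illustrative:

```python
# Minimal loading sketch, assuming this repository hosts a PEFT (LoRA)
# adapter on top of openai/gpt-oss-20b (PEFT is listed under framework
# versions). The prompt is a placeholder: intended uses are not
# documented in this card.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b", torch_dtype="auto", device_map="auto"
)
model = PeftModel.from_pretrained(base, "omarakwah/mediconcenoss-v1")
tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")

messages = [{"role": "user", "content": "..."}]  # placeholder prompt
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```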
 
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: adamw_torch_fused with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2.0
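
The training script itself is not part of this card. The sketch below only shows how the hyperparameters above map onto `transformers.TrainingArguments`; the LoRA configuration and dataset wiring are assumptions, since neither is documented here:

```python
# Minimal sketch of the training configuration above, using
# transformers + peft. The actual training script and dataset are not
# published; the LoRA settings and dataset wiring are assumptions.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, Trainer, TrainingArguments

base = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b", torch_dtype="auto", device_map="auto"
)
# LoRA hyperparameters are NOT documented in this card; these are placeholders.
model = get_peft_model(
    base,
    LoraConfig(task_type="CAUSAL_LM", r=16, lora_alpha=32,
               target_modules=["q_proj", "k_proj", "v_proj", "o_proj"]),
)

args = TrainingArguments(
    output_dir="mediconcenoss-v1",
    learning_rate=1e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=4,  # effective train batch size: 1 x 4 = 4
    seed=42,
    optim="adamw_torch_fused",      # AdamW, betas=(0.9, 0.999), eps=1e-08
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=2.0,
)

# The mediconcen_finetune_train dataset is not public, so the trainer is
# left unwired here:
# trainer = Trainer(model=model, args=args,
#                   train_dataset=..., eval_dataset=...)
# trainer.train()
```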
 
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:---:|:---:|:---:|:---:|
| 0.5792 | 0.1379 | 100 | 0.5646 | 
| 0.51 | 0.2759 | 200 | 0.4403 | 
| 0.479 | 0.4138 | 300 | 0.4121 | 
| 0.4308 | 0.5517 | 400 | 0.3971 | 
| 0.4446 | 0.6897 | 500 | 0.3854 | 
| 0.4635 | 0.8276 | 600 | 0.3771 | 
| 0.4238 | 0.9655 | 700 | 0.3751 | 
| 0.4187 | 1.1034 | 800 | 0.3719 | 
| 0.3868 | 1.2414 | 900 | 0.3683 | 
| 0.3985 | 1.3793 | 1000 | 0.3648 | 
| 0.3834 | 1.5172 | 1100 | 0.3614 | 
| 0.4284 | 1.6552 | 1200 | 0.3590 | 
| 0.4227 | 1.7931 | 1300 | 0.3588 | 
| 0.3957 | 1.9310 | 1400 | 0.3581 | 
### Framework versions

- PEFT 0.17.1
- Transformers 4.56.1
- Pytorch 2.8.0+cu128
- Datasets 3.2.0
- Tokenizers 0.22.1
 