t5-small-finetuned-summarizer

This model is a fine-tuned version of google-t5/t5-small on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 1.9791
  • ROUGE-1: 18.7536
  • ROUGE-2: 9.301
  • ROUGE-L: 16.4984
  • ROUGE-Lsum: 16.4631
  • Gen Len: 20.0

Model description

More information needed

Intended uses & limitations

More information needed
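
Pending fuller documentation, here is a minimal inference sketch using the transformers summarization pipeline. The checkpoint ID is this repository's; the example text is a placeholder, and capping max_length near the evaluation Gen Len of 20 is an assumption, not a documented setting.

```python
from transformers import pipeline

# Load this checkpoint through the standard summarization pipeline.
summarizer = pipeline(
    "summarization",
    model="Princy2306/t5-small-finetuned-summarizer",
)

# Placeholder input text.
text = (
    "The tower is 324 metres tall, about the same height as an 81-storey "
    "building, and was the tallest man-made structure in the world for 41 years."
)

# Gen Len was 20.0 for every evaluation epoch, so generations look capped at
# roughly 20 tokens; mirroring that with max_length here is an assumption.
print(summarizer(text, max_length=20, min_length=5)[0]["summary_text"])
```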

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • num_epochs: 10
  • mixed_precision_training: Native AMP
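
A sketch of how these hyperparameters map onto transformers' Seq2SeqTrainingArguments. The fine-tuning dataset is unknown, so train_ds and eval_ds are placeholders; the per-epoch evaluation settings are inferred from the results table below.

```python
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-small")

# Placeholders: the fine-tuning dataset is not documented.
train_ds = None
eval_ds = None

args = Seq2SeqTrainingArguments(
    output_dir="t5-small-finetuned-summarizer",
    learning_rate=5e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch_fused",   # AdamW with betas=(0.9, 0.999), eps=1e-08
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=10,
    fp16=True,                   # native AMP mixed precision
    eval_strategy="epoch",       # inferred: the card reports metrics per epoch
    predict_with_generate=True,  # inferred: needed to compute ROUGE during eval
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
# trainer.train()
```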

Training results

| Training Loss | Epoch | Step | Validation Loss | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-Lsum | Gen Len |
|---------------|-------|------|-----------------|---------|---------|---------|------------|---------|
| 2.0808        | 1.0   | 140  | 2.0266          | 19.8203 | 9.7737  | 16.9637 | 16.955     | 20.0    |
| 2.1085        | 2.0   | 280  | 2.0287          | 20.0127 | 10.2365 | 17.3745 | 17.399     | 20.0    |
| 2.0736        | 3.0   | 420  | 2.0214          | 19.1769 | 9.3315  | 16.6286 | 16.6305    | 20.0    |
| 2.0343        | 4.0   | 560  | 2.0003          | 19.1986 | 9.4173  | 16.9178 | 16.9076    | 20.0    |
| 2.0328        | 5.0   | 700  | 1.9958          | 19.8891 | 10.2545 | 16.9206 | 16.8979    | 20.0    |
| 1.9842        | 6.0   | 840  | 1.9895          | 19.0999 | 9.6965  | 16.3588 | 16.3508    | 20.0    |
| 1.9693        | 7.0   | 980  | 1.9793          | 19.3562 | 9.5736  | 16.3363 | 16.324     | 20.0    |
| 1.9574        | 8.0   | 1120 | 1.9826          | 18.705  | 9.0554  | 16.1865 | 16.1408    | 20.0    |
| 1.94          | 9.0   | 1260 | 1.9814          | 18.9642 | 9.1944  | 16.421  | 16.3972    | 20.0    |
| 1.9123        | 10.0  | 1400 | 1.9791          | 18.7536 | 9.301   | 16.4984 | 16.4631    | 20.0    |
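
The ROUGE columns follow the naming of the evaluate library's rouge metric (rouge1, rouge2, rougeL, rougeLsum). Below is a minimal sketch of computing such scores, assuming evaluate is installed; the predictions and references are placeholders, and evaluate returns fractions in [0, 1], which the table above appears to report scaled by 100.

```python
import evaluate

rouge = evaluate.load("rouge")

# Placeholders: model outputs and reference summaries from the eval set.
predictions = ["the tower is 324 metres tall"]
references = ["the eiffel tower stands 324 metres high"]

scores = rouge.compute(predictions=predictions, references=references)
# Scale to match the 0-100 range reported in the table above.
print({k: round(v * 100, 4) for k, v in scores.items()})
```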

Framework versions

  • Transformers 4.57.1
  • Pytorch 2.8.0+cu126
  • Datasets 4.0.0
  • Tokenizers 0.22.1