b0ef99bd633b80a07ed4b29b0a2c5d91

This model is a fine-tuned version of albert/albert-xlarge-v2 on the dim/tldr_news dataset. It achieves the following results on the evaluation set:

  • Loss: 1.4613
  • Data Size: 1.0
  • Epoch Runtime: 25.1440
  • Accuracy: 0.2756
  • F1 Macro: 0.0864
  • Rouge1: 0.2752
  • Rouge2: 0.0
  • Rougel: 0.2749
  • Rougelsum: 0.2756
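
The card does not state the task head, but the accuracy and macro-F1 numbers above indicate sequence classification over the dim/tldr_news categories rather than text generation. Below is a minimal inference sketch under that assumption; the repo id is the one this card was published under, and the example headline is a placeholder:

```python
# Minimal inference sketch, assuming the checkpoint carries a
# sequence-classification head (not confirmed by the card itself).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = "contemmcm/b0ef99bd633b80a07ed4b29b0a2c5d91"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

headline = "Example headline to classify"  # placeholder input
inputs = tokenizer(headline, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
pred = int(logits.argmax(dim=-1))
print(pred, model.config.id2label.get(pred, str(pred)))
```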

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
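
The split and preprocessing used for this run are not documented. For reference, the base dataset named above can be inspected directly; a small sketch, assuming the public dim/tldr_news dataset on the Hugging Face Hub:

```python
# Inspect the base dataset; this shows the raw data only, not the
# (undocumented) preprocessing applied for this fine-tune.
from datasets import load_dataset

ds = load_dataset("dim/tldr_news")
print(ds)              # available splits and columns
print(ds["train"][0])  # one raw example
```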

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 4
  • total_train_batch_size: 32
  • total_eval_batch_size: 32
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: constant
  • num_epochs: 50
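
For reproducibility, the list above maps directly onto transformers TrainingArguments. A sketch under that assumption; the output directory is hypothetical, and the 4-GPU setup comes from the launcher (e.g. torchrun), not from these arguments:

```python
# Hyperparameters from this card expressed as TrainingArguments.
# output_dir is a hypothetical name; dataset prep and the Trainer are omitted.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="albert-xlarge-v2-tldr-news",  # hypothetical
    learning_rate=5e-5,
    per_device_train_batch_size=8,   # x 4 GPUs = 32 total
    per_device_eval_batch_size=8,    # x 4 GPUs = 32 total
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="constant",
    num_train_epochs=50,
)
```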

Training results

| Training Loss | Epoch | Step | Validation Loss | Data Size | Epoch Runtime | Accuracy | F1 Macro | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------------:|:--------:|:--------:|:------:|:------:|:------:|:---------:|
| No log | 0 | 0 | 1.7195 | 0 | 2.3979 | 0.2173 | 0.0714 | 0.2166 | 0.0 | 0.2173 | 0.2173 |
| No log | 1 | 178 | 1.6290 | 0.0078 | 3.4422 | 0.2472 | 0.0793 | 0.2472 | 0.0 | 0.2472 | 0.2464 |
| No log | 2 | 356 | 1.6889 | 0.0156 | 2.8855 | 0.0987 | 0.0825 | 0.0987 | 0.0 | 0.0987 | 0.0987 |
| No log | 3 | 534 | 1.4780 | 0.0312 | 3.2746 | 0.2479 | 0.1712 | 0.2479 | 0.0 | 0.2472 | 0.2479 |
| No log | 4 | 712 | 1.5387 | 0.0625 | 4.0645 | 0.2756 | 0.0864 | 0.2752 | 0.0 | 0.2749 | 0.2756 |
| No log | 5 | 890 | 1.4859 | 0.125 | 5.4759 | 0.2756 | 0.0864 | 0.2752 | 0.0 | 0.2749 | 0.2756 |
| 0.0972 | 6 | 1068 | 1.4760 | 0.25 | 8.3402 | 0.2173 | 0.0714 | 0.2166 | 0.0 | 0.2173 | 0.2173 |
| 1.461 | 7 | 1246 | 1.4646 | 0.5 | 13.7672 | 0.2486 | 0.0796 | 0.2486 | 0.0 | 0.2486 | 0.2479 |
| 1.4427 | 8.0 | 1424 | 1.4623 | 1.0 | 25.4379 | 0.2756 | 0.0864 | 0.2752 | 0.0 | 0.2749 | 0.2756 |
| 1.4515 | 9.0 | 1602 | 1.4538 | 1.0 | 25.2654 | 0.2756 | 0.0864 | 0.2752 | 0.0 | 0.2749 | 0.2756 |
| 1.4354 | 10.0 | 1780 | 1.4681 | 1.0 | 25.2701 | 0.2486 | 0.0796 | 0.2486 | 0.0 | 0.2486 | 0.2479 |
| 1.4487 | 11.0 | 1958 | 1.4548 | 1.0 | 25.2500 | 0.2401 | 0.0774 | 0.2401 | 0.0 | 0.2408 | 0.2393 |
| 1.4558 | 12.0 | 2136 | 1.4588 | 1.0 | 25.2566 | 0.2401 | 0.0774 | 0.2401 | 0.0 | 0.2408 | 0.2393 |
| 1.4562 | 13.0 | 2314 | 1.4537 | 1.0 | 25.1478 | 0.2706 | 0.1402 | 0.2699 | 0.0 | 0.2692 | 0.2706 |
| 1.4467 | 14.0 | 2492 | 1.4537 | 1.0 | 25.1871 | 0.2173 | 0.0714 | 0.2166 | 0.0 | 0.2173 | 0.2173 |
| 1.4535 | 15.0 | 2670 | 1.4568 | 1.0 | 25.1471 | 0.2486 | 0.0796 | 0.2486 | 0.0 | 0.2486 | 0.2479 |
| 1.4578 | 16.0 | 2848 | 1.4544 | 1.0 | 25.1672 | 0.2756 | 0.0864 | 0.2752 | 0.0 | 0.2749 | 0.2756 |
| 1.4413 | 17.0 | 3026 | 1.4510 | 1.0 | 25.2195 | 0.2756 | 0.0864 | 0.2752 | 0.0 | 0.2749 | 0.2756 |
| 1.4454 | 18.0 | 3204 | 1.4521 | 1.0 | 25.1672 | 0.2486 | 0.0796 | 0.2486 | 0.0 | 0.2486 | 0.2479 |
| 1.4468 | 19.0 | 3382 | 1.4508 | 1.0 | 25.0699 | 0.2756 | 0.0864 | 0.2752 | 0.0 | 0.2749 | 0.2756 |
| 1.4568 | 20.0 | 3560 | 1.4548 | 1.0 | 25.2263 | 0.2756 | 0.0864 | 0.2752 | 0.0 | 0.2749 | 0.2756 |
| 1.4404 | 21.0 | 3738 | 1.4542 | 1.0 | 25.2249 | 0.2401 | 0.0774 | 0.2401 | 0.0 | 0.2408 | 0.2393 |
| 1.4448 | 22.0 | 3916 | 1.4589 | 1.0 | 25.3352 | 0.2486 | 0.0796 | 0.2486 | 0.0 | 0.2486 | 0.2479 |
| 1.4398 | 23.0 | 4094 | 1.4506 | 1.0 | 25.2263 | 0.2756 | 0.0864 | 0.2752 | 0.0 | 0.2749 | 0.2756 |
| 1.4484 | 24.0 | 4272 | 1.4517 | 1.0 | 25.2167 | 0.2486 | 0.0796 | 0.2486 | 0.0 | 0.2486 | 0.2479 |
| 1.4381 | 25.0 | 4450 | 1.4647 | 1.0 | 25.2744 | 0.2756 | 0.0864 | 0.2752 | 0.0 | 0.2749 | 0.2756 |
| 1.4482 | 26.0 | 4628 | 1.4524 | 1.0 | 25.5011 | 0.2756 | 0.0864 | 0.2752 | 0.0 | 0.2749 | 0.2756 |
| 1.4197 | 27.0 | 4806 | 1.4613 | 1.0 | 25.1440 | 0.2756 | 0.0864 | 0.2752 | 0.0 | 0.2749 | 0.2756 |
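
The metrics callback used for this run is not published. Below is a sketch of one that would produce the columns above with the evaluate library; the function name, the id2label argument, and the choice to compute ROUGE over label strings are all assumptions:

```python
# Sketch of a metrics function matching the reported columns (assumed, not
# the actual callback). Trainer expects a one-argument callable, so bind
# id2label with functools.partial before passing it in.
import evaluate
import numpy as np

accuracy = evaluate.load("accuracy")
f1 = evaluate.load("f1")
rouge = evaluate.load("rouge")

def compute_metrics(eval_pred, id2label):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # ROUGE needs strings, so compare label names instead of class ids.
    pred_texts = [id2label[int(p)] for p in preds]
    label_texts = [id2label[int(l)] for l in labels]
    return {
        "accuracy": accuracy.compute(predictions=preds, references=labels)["accuracy"],
        "f1_macro": f1.compute(predictions=preds, references=labels, average="macro")["f1"],
        **rouge.compute(predictions=pred_texts, references=label_texts),
    }
```

If the category labels are single words, the constant 0.0 in the Rouge2 column follows directly: one-word strings contain no bigrams to overlap.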

Framework versions

  • Transformers 4.57.0
  • Pytorch 2.8.0+cu128
  • Datasets 4.0.0
  • Tokenizers 0.22.1