rtdetr-v2-r50-lpph-finetune-improved

This model is a fine-tuned version of PekingU/rtdetr_v2_r50vd on an unknown dataset. It achieves the following results on the evaluation set (a minimal inference sketch follows the metrics):

  • Loss: 6.5265
  • mAP: 0.6198
  • mAP@50: 0.9772
  • mAP@75: 0.7696
  • mAR@100: 0.6965
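
For a quick usage reference, the sketch below runs detection with this checkpoint through the standard transformers object-detection classes. It is a minimal sketch assuming the checkpoint id ranm26/rtdetr-v2-r50-lpph-finetune-improved; the image path and the 0.5 confidence threshold are placeholders.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForObjectDetection

# Checkpoint id assumed from this card; "example.jpg" is a placeholder.
checkpoint = "ranm26/rtdetr-v2-r50-lpph-finetune-improved"
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForObjectDetection.from_pretrained(checkpoint)

image = Image.open("example.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits/boxes into (score, label, xyxy box) above a chosen threshold.
results = processor.post_process_object_detection(
    outputs, target_sizes=torch.tensor([image.size[::-1]]), threshold=0.5
)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(f"{model.config.id2label[label.item()]}: {score:.2f} at {box.tolist()}")
```

Since the fine-tuning dataset is undocumented, the label names printed via model.config.id2label are whatever classes the training run registered.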

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 16
  • optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 60
  • mixed_precision_training: Native AMP
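
These values map onto Hugging Face TrainingArguments roughly as shown below. This is a reconstruction from the list above, not the original training script; output_dir is a placeholder.

```python
from transformers import TrainingArguments

# Reconstructed from the hyperparameter list above; output_dir is a placeholder.
args = TrainingArguments(
    output_dir="rtdetr-v2-r50-lpph-finetune-improved",
    learning_rate=5e-5,
    per_device_train_batch_size=8,   # train_batch_size
    per_device_eval_batch_size=8,    # eval_batch_size
    gradient_accumulation_steps=2,   # total_train_batch_size = 8 * 2 = 16
    seed=42,
    optim="adamw_torch_fused",       # OptimizerNames.ADAMW_TORCH_FUSED
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=60,
    fp16=True,                       # Native AMP mixed precision
)
```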

Training results

| Training Loss | Epoch | Step | Validation Loss | mAP    | mAR@100 | mAP@50 | mAP@75 |
|---------------|-------|------|-----------------|--------|---------|--------|--------|
| 902.5145      | 1.0   | 71   | 335.3991        | 0.0    | 0.0319  | 0.0    | 0.0    |
| 340.8823      | 2.0   | 142  | 36.6198         | 0.0162 | 0.5894  | 0.0292 | 0.0164 |
| 28.8172       | 3.0   | 213  | 11.6351         | 0.2275 | 0.7191  | 0.4021 | 0.2194 |
| 18.9397       | 4.0   | 284  | 8.3844          | 0.5667 | 0.6908  | 0.9434 | 0.6165 |
| 14.3771       | 5.0   | 355  | 7.1939          | 0.5804 | 0.6972  | 0.9557 | 0.6253 |
| 13.7677       | 6.0   | 426  | 6.9469          | 0.5933 | 0.6894  | 0.9891 | 0.6531 |
| 13.7006       | 7.0   | 497  | 6.6969          | 0.592  | 0.7     | 0.9758 | 0.6705 |
| 13.2446       | 8.0   | 568  | 6.9132          | 0.5882 | 0.6851  | 0.9645 | 0.628  |
| 13.2452       | 9.0   | 639  | 6.7584          | 0.5963 | 0.7021  | 0.9711 | 0.6779 |
| 12.911        | 10.0  | 710  | 6.5749          | 0.5921 | 0.6887  | 0.9741 | 0.6289 |
| 12.9885       | 11.0  | 781  | 6.4069          | 0.6136 | 0.695   | 0.9782 | 0.7026 |
| 12.8467       | 12.0  | 852  | 6.4163          | 0.628  | 0.7035  | 0.9786 | 0.7375 |
| 12.6598       | 13.0  | 923  | 6.5697          | 0.6067 | 0.695   | 0.99   | 0.6764 |
| 12.4112       | 14.0  | 994  | 6.3900          | 0.6179 | 0.7043  | 0.9847 | 0.7073 |
| 12.6369       | 15.0  | 1065 | 6.4797          | 0.5973 | 0.6986  | 0.9839 | 0.6563 |
| 12.3894       | 16.0  | 1136 | 6.4122          | 0.6177 | 0.6957  | 0.9794 | 0.6821 |
| 12.4854       | 17.0  | 1207 | 6.3533          | 0.61   | 0.6972  | 0.9924 | 0.7234 |
| 12.3836       | 18.0  | 1278 | 6.3432          | 0.6148 | 0.6972  | 0.9902 | 0.6818 |
| 12.3067       | 19.0  | 1349 | 6.2016          | 0.6311 | 0.7135  | 0.9811 | 0.7726 |
| 12.4459       | 20.0  | 1420 | 6.4450          | 0.6183 | 0.7014  | 0.986  | 0.7104 |
| 12.2424       | 21.0  | 1491 | 6.3718          | 0.6209 | 0.7007  | 0.9886 | 0.7531 |
| 12.1976       | 22.0  | 1562 | 6.4085          | 0.6145 | 0.6936  | 0.9908 | 0.6676 |
| 11.8316       | 23.0  | 1633 | 6.1773          | 0.6341 | 0.7113  | 0.9895 | 0.765  |
| 12.1943       | 24.0  | 1704 | 6.2294          | 0.6331 | 0.7043  | 0.9912 | 0.7454 |
| 12.2331       | 25.0  | 1775 | 6.1884          | 0.6214 | 0.7071  | 0.9856 | 0.7324 |
| 11.7598       | 26.0  | 1846 | 6.2544          | 0.6337 | 0.7184  | 0.9811 | 0.7826 |
| 11.8849       | 27.0  | 1917 | 6.5367          | 0.6086 | 0.6773  | 0.9901 | 0.654  |
| 11.9108       | 28.0  | 1988 | 6.1798          | 0.6265 | 0.7128  | 0.9801 | 0.7468 |
| 11.7802       | 29.0  | 2059 | 6.4903          | 0.6002 | 0.6844  | 0.9854 | 0.6316 |
| 11.6292       | 30.0  | 2130 | 6.3403          | 0.6224 | 0.6922  | 0.9897 | 0.7199 |
| 11.6898       | 31.0  | 2201 | 6.2597          | 0.6264 | 0.7177  | 0.9798 | 0.7778 |
| 11.4146       | 32.0  | 2272 | 6.3491          | 0.6222 | 0.7014  | 0.9868 | 0.725  |
| 11.4249       | 33.0  | 2343 | 6.2945          | 0.6157 | 0.6908  | 0.9844 | 0.7287 |
| 11.4244       | 34.0  | 2414 | 6.3481          | 0.6106 | 0.6844  | 0.9884 | 0.7273 |
| 11.4353       | 35.0  | 2485 | 6.2424          | 0.631  | 0.6986  | 0.9895 | 0.7417 |
| 11.44         | 36.0  | 2556 | 6.4155          | 0.6127 | 0.6879  | 0.9788 | 0.6941 |
| 11.5167       | 37.0  | 2627 | 6.2776          | 0.627  | 0.695   | 0.9869 | 0.7384 |
| 11.3045       | 38.0  | 2698 | 6.3039          | 0.6184 | 0.6986  | 0.9814 | 0.7349 |
| 11.1152       | 39.0  | 2769 | 6.2876          | 0.6129 | 0.6972  | 0.9873 | 0.7217 |
| 11.1722       | 40.0  | 2840 | 6.3043          | 0.6208 | 0.7007  | 0.9797 | 0.7563 |
| 11.1581       | 41.0  | 2911 | 6.2575          | 0.626  | 0.7     | 0.9811 | 0.7226 |
| 11.3624       | 42.0  | 2982 | 6.2631          | 0.6203 | 0.7     | 0.9814 | 0.7321 |
| 11.0642       | 43.0  | 3053 | 6.2792          | 0.6217 | 0.7035  | 0.981  | 0.7281 |
| 11.1232       | 44.0  | 3124 | 6.3211          | 0.6198 | 0.6943  | 0.9844 | 0.7023 |
| 11.0043       | 45.0  | 3195 | 6.3222          | 0.6224 | 0.6929  | 0.9777 | 0.6996 |
| 10.8526       | 46.0  | 3266 | 6.3033          | 0.6279 | 0.6979  | 0.9861 | 0.7566 |
| 10.9216       | 47.0  | 3337 | 6.3102          | 0.6231 | 0.695   | 0.9842 | 0.7186 |
| 10.7362       | 48.0  | 3408 | 6.3306          | 0.6197 | 0.6957  | 0.9838 | 0.7148 |
| 10.9312       | 49.0  | 3479 | 6.2937          | 0.6228 | 0.6979  | 0.9823 | 0.7081 |
| 10.7711       | 50.0  | 3550 | 6.3390          | 0.6212 | 0.7028  | 0.9814 | 0.7207 |
| 10.7214       | 51.0  | 3621 | 6.3430          | 0.6187 | 0.6943  | 0.977  | 0.7095 |
| 10.6512       | 52.0  | 3692 | 6.3411          | 0.6195 | 0.7007  | 0.9753 | 0.7386 |
| 10.5377       | 53.0  | 3763 | 6.3898          | 0.6168 | 0.6908  | 0.9755 | 0.7073 |
| 10.6013       | 54.0  | 3834 | 6.3585          | 0.6159 | 0.6936  | 0.9757 | 0.6863 |
| 10.6208       | 55.0  | 3905 | 6.3540          | 0.6185 | 0.6979  | 0.9763 | 0.696  |
| 10.5331       | 56.0  | 3976 | 6.3733          | 0.619  | 0.6943  | 0.9761 | 0.693  |
| 10.8471       | 57.0  | 4047 | 6.3472          | 0.6195 | 0.695   | 0.9764 | 0.7064 |
| 10.6839       | 58.0  | 4118 | 6.3520          | 0.6163 | 0.6915  | 0.9755 | 0.7053 |
| 10.8091       | 59.0  | 4189 | 6.3618          | 0.6151 | 0.6908  | 0.9758 | 0.6847 |
| 10.4817       | 60.0  | 4260 | 6.3440          | 0.6179 | 0.6936  | 0.9767 | 0.7158 |
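
The metric columns follow COCO conventions: mAP averaged over IoU thresholds 0.50:0.95, mAP at fixed IoU 0.50 and 0.75, and mAR with up to 100 detections per image. As a sketch of how such numbers can be computed outside the Trainer, torchmetrics' MeanAveragePrecision reports the same keys; the toy single-image example below is an illustration, not the card's actual evaluation code.

```python
import torch
from torchmetrics.detection import MeanAveragePrecision

# Toy example: one predicted box vs. one ground-truth box, xyxy pixel coordinates.
preds = [{
    "boxes": torch.tensor([[10.0, 10.0, 50.0, 50.0]]),
    "scores": torch.tensor([0.9]),
    "labels": torch.tensor([0]),
}]
targets = [{
    "boxes": torch.tensor([[12.0, 12.0, 48.0, 48.0]]),
    "labels": torch.tensor([0]),
}]

metric = MeanAveragePrecision(box_format="xyxy")
metric.update(preds, targets)
result = metric.compute()
# Keys correspond to the table columns: map, map_50, map_75, mar_100.
print({k: result[k].item() for k in ("map", "map_50", "map_75", "mar_100")})
```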

Framework versions

  • Transformers 4.57.1
  • PyTorch 2.9.1+cu128
  • Datasets 4.4.1
  • Tokenizers 0.22.1