Preference datasets
- trl-lib/hh-rlhf-helpful-base (updated Jan 8)
- trl-lib/lm-human-preferences-descriptiveness (updated Jan 8)
- trl-lib/lm-human-preferences-sentiment (updated Jan 8)
- trl-lib/rlaif-v (updated Jan 8)
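A preference dataset pairs each prompt with a preferred ("chosen") and a dispreferred ("rejected") completion. A minimal sketch of one row, with field names following TRL's dataset-format conventions and invented example text:

```python
# One row of a standard-format preference dataset: a prompt plus a
# chosen and a rejected completion for that same prompt.
# Field names follow TRL's conventions; the text content is invented.
pref_row = {
    "prompt": "What color is the sky?",
    "chosen": "The sky appears blue on a clear day.",
    "rejected": "Potatoes.",
}

# Preference-based trainers (e.g. DPO) contrast the two completions.
assert set(pref_row) == {"prompt", "chosen", "rejected"}
```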
Prompt-completion datasets
- trl-lib/tldr (updated Jan 8)
- trl-lib/OpenMathReasoning (updated Apr 26)
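A prompt-completion dataset stores a single target completion per prompt, as used for supervised fine-tuning. A minimal sketch of one row (field names per TRL's conventions; the text is invented):

```python
# One row of a prompt-completion dataset: each prompt has exactly one
# target completion. Field names follow TRL's conventions; text is invented.
pc_row = {
    "prompt": "TL;DR of the following post: ...",
    "completion": "The author is asking for advice about moving cities.",
}

assert set(pc_row) == {"prompt", "completion"}
```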
Unpaired preference datasets
- trl-lib/ultrafeedback-gpt-3.5-turbo-helpfulness (updated Jan 8)
- trl-lib/kto-mix-14k (updated Mar 25, 2024)
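An unpaired preference dataset labels each completion as desirable or not on its own, rather than ranking two completions against each other; this is the shape KTO-style training consumes. A minimal sketch of two rows, assuming TRL's field names (the text is invented):

```python
# Two rows of an unpaired preference dataset: each completion carries its
# own boolean desirability label instead of being paired with a rival.
# Field names follow TRL's conventions; the text content is invented.
unpaired_rows = [
    {"prompt": "Name a prime number.", "completion": "7", "label": True},
    {"prompt": "Name a prime number.", "completion": "9", "label": False},
]

assert all(set(r) == {"prompt", "completion", "label"} for r in unpaired_rows)
```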
Online-DPO
- trl-lib/pythia-1b-deduped-tldr-online-dpo (1B, updated Aug 2, 2024)
- trl-lib/pythia-1b-deduped-tldr-sft (1B, updated Aug 2, 2024)
- trl-lib/pythia-6.9b-deduped-tldr-online-dpo (7B, updated Aug 2, 2024)
- trl-lib/pythia-2.8b-deduped-tldr-sft (updated Aug 2, 2024)
Stepwise supervision datasets
- trl-lib/math_shepherd (updated Jan 8)
- trl-lib/prm800k (updated Jan 8)
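A stepwise supervision dataset breaks a completion into intermediate reasoning steps and labels each step individually, the shape used to train process reward models. A minimal sketch of one row, assuming TRL's field names (the math text is invented):

```python
# One row of a stepwise supervision dataset: the completion is split into
# steps, and "labels" marks each step correct (True) or incorrect (False).
# Field names follow TRL's conventions; the content is invented.
step_row = {
    "prompt": "What is 12 * 12?",
    "completions": ["12 * 12 = 12 * 10 + 12 * 2.", "That is 120 + 24 = 144."],
    "labels": [True, True],
}

# Every step needs exactly one label.
assert len(step_row["completions"]) == len(step_row["labels"])
```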
Prompt-only datasets
- trl-lib/ultrafeedback-prompt (updated Jan 8)
- trl-lib/DeepMath-103K (updated 21 days ago)
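A prompt-only dataset carries just the prompts; completions are generated and scored during training (as in online methods). A minimal sketch of one row (field name per TRL's conventions; the text is invented):

```python
# One row of a prompt-only dataset: no reference completion is stored,
# since the model generates completions during training.
# Field name follows TRL's conventions; the text is invented.
prompt_row = {"prompt": "Explain photosynthesis in one sentence."}

assert set(prompt_row) == {"prompt"}
```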
Comparing DPO with IPO and KTO
A collection of chat models exploring the differences between three alignment techniques: DPO, IPO, and KTO.
- teknium/OpenHermes-2.5-Mistral-7B (text generation, 7B, updated Feb 19, 2024)
- Intel/orca_dpo_pairs (updated Nov 29, 2023)
- trl-lib/OpenHermes-2-Mistral-7B-ipo-beta-0.1-steps-200 (updated Dec 20, 2023)
- trl-lib/OpenHermes-2-Mistral-7B-ipo-beta-0.2-steps-200 (updated Dec 20, 2023)