Online RLHF • Collection • Datasets, code, and models for online RLHF (i.e., iterative DPO) • 19 items • Updated Jun 12
Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-Constraint • Paper • arXiv:2312.11456 • Published Dec 18, 2023
LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models • Paper • arXiv:2306.12420 • Published Jun 21, 2023
RAFT: Reward rAnked FineTuning for Generative Foundation Model Alignment • Paper • arXiv:2304.06767 • Published Apr 13, 2023
Mixture-of-preference-reward-modeling • Collection • A mixture of preference datasets used for reward modeling • 2 items • Updated Apr 29
Standard-format-preference-dataset • Collection • Open-source preference datasets collected and processed into a standard format • 14 items • Updated May 8
Awesome reward models • Collection • A curated collection of reward models to use with techniques like rejection sampling and RLHF / RLAIF (see the sketch below) • 4 items • Updated Apr 12
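To illustrate how a reward model from a collection like "Awesome reward models" is typically used for rejection sampling (best-of-n), here is a minimal sketch. The model names are placeholders for illustration, not specific items from these collections, and the (question, answer) scoring convention assumes a DeBERTa-style pairwise reward model.

```python
# Best-of-n (rejection sampling) sketch: generate several candidate responses,
# score each with a reward model, and keep the highest-scoring one.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

generator = pipeline("text-generation", model="gpt2")  # placeholder policy model
rm_name = "OpenAssistant/reward-model-deberta-v3-large-v2"  # example reward model
rm_tokenizer = AutoTokenizer.from_pretrained(rm_name)
reward_model = AutoModelForSequenceClassification.from_pretrained(rm_name)

prompt = "Explain why the sky is blue."
candidates = [
    out["generated_text"]
    for out in generator(prompt, max_new_tokens=64, do_sample=True, num_return_sequences=4)
]

def reward(question: str, answer: str) -> float:
    # This reward model scores a (question, answer) pair; higher is better.
    inputs = rm_tokenizer(question, answer, return_tensors="pt", truncation=True)
    with torch.no_grad():
        return reward_model(**inputs).logits[0].item()

# Keep the candidate the reward model ranks highest.
best = max(candidates, key=lambda ans: reward(prompt, ans))
print(best)
```

The same score-and-select loop underlies reward-ranked finetuning (RAFT, arXiv:2304.06767), where the selected responses become the training set for the next finetuning round.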