{ "url": "http://arxiv.org/abs/2404.16767v1", "title": "REBEL: Reinforcement Learning via Regressing Relative Rewards", "abstract": "While originally developed for continuous control problems, Proximal Policy\nOptimization (PPO) has emerged as the work-horse of a variety of reinforcement\nlearning (RL) applications including the fine-tuning of generative models.\nUnfortunately, PPO requires multiple heuristics to enable stable convergence\n(e.g. value networks, clipping) and is notorious for its sensitivity to the\nprecise implementation of these components. In response, we take a step back\nand ask what a minimalist RL algorithm for the era of generative models would\nlook like. We propose REBEL, an algorithm that cleanly reduces the problem of\npolicy optimization to regressing the relative rewards via a direct policy\nparameterization between two completions to a prompt, enabling strikingly\nlightweight implementation. In theory, we prove that fundamental RL algorithms\nlike Natural Policy Gradient can be seen as variants of REBEL, which allows us\nto match the strongest known theoretical guarantees in terms of convergence and\nsample complexity in the RL literature. REBEL can also cleanly incorporate\noffline data and handle the intransitive preferences we frequently see in\npractice. Empirically, we find that REBEL provides a unified approach to\nlanguage modeling and image generation with stronger or similar performance as\nPPO and DPO, all while being simpler to implement and more computationally\ntractable than PPO.", "authors": "Zhaolin Gao, Jonathan D. Chang, Wenhao Zhan, Owen Oertell, Gokul Swamy, Kiant\u00e9 Brantley, Thorsten Joachims, J. Andrew Bagnell, Jason D. Lee, Wen Sun", "published": "2024-04-25", "updated": "2024-04-25", "primary_cat": "cs.LG", "cats": [ "cs.LG", "cs.CL", "cs.CV" ], "label": "Original Paper", "paper_cat": "Offline AND Reinforcement AND Learning", "gt": "The generality of the reinforcement learning (RL) paradigm is striking: from continuous control problems (Kalashnikov et al., 2018) to, recently, the fine-tuning of generative models (Stiennon et al., 2022; Ouyang et al., 2022), RL has enabled concrete progress across a variety of decision-making tasks. Specifically, when it comes to fine-tuning generative models, Proximal Policy Optimization (PPO, Schulman et al. (2017)) has emerged as the de-facto RL algorithm of choice, from language models (LLMs) (Ziegler et al., 2020; Stiennon et al., 2022; Ouyang et al., 2022; Touvron et al., 2023) to image generative models (Black et al., 2023; Fan et al., 2024; Oertell et al., 2024). If we take a step back however, it is odd that we are using an algorithm designed for optimizing two-layer networks for continuous control tasks from scratch for fine-tuning the billions of parameters \u2217{zg292, jdc396, ojo2, kdb82, ws455}@cornell.edu, tj@cs.cornell.edu \u2020{wenhao.zhan, jasonlee}@princeton.edu \u2021{gswamy,bagnell2}@andrew.cmu.edu 1 arXiv:2404.16767v1 [cs.LG] 25 Apr 2024 Image Generation Language Modeling ( ) RLHF reinforcement learning regression REBEL ( ) ( ) x y x y Figure 1: We present REBEL: a simple and scalable RL algorithm that performs policy optimization via iteratively regressing the difference in rewards directly in terms of the policy. This allows us to eliminate much of the complexity (e.g. value functions, clipping) of algorithms like PPO (Schulman et al., 2017). 
We apply REBEL to problems in both image generation and language modeling and find that despite its conceptual and implementation-level simplicity, REBEL is able to match or sometimes outperform the performance of PPO while out-performing purely offline techniques like DPO (Rafailov et al., 2023). of modern-day generative models. In the continuous control setting, the randomly initialized neural networks and the possible stochasticity in the dynamics necessitate variance reduction through a learned value function as a baseline (Schulman et al., 2015b), while clipping updates is important to limit distribution shift from iteration to iteration (Kakade and Langford, 2002). This means that when applied to generative model fine-tuning, we need to store four models in memory simultaneously (the policy, the reference policy, the critic, and the reward model), each with billions of parameters. Furthermore, we often add a KL regularization to the base model for fine-tuning, making explicit clipping unnecessary nor advisable, as pointed out by Ahmadian et al. (2024). Even outside of the generative modeling context, PPO is notorious for the wide range of performances measured, with differences being attributed to seemingly inconsequential implementation details (Henderson et al., 2019; Engstrom et al., 2020). This begs the question: Are there simpler algorithms that scale to modern RL applications? Our answer is REBEL: an algorithm that reduces the problem of reinforcement learning to solving a sequence of squared loss regression problems on iteratively collected datasets. The regression problems directly use policies to predict the difference in rewards. This allows us to eliminate the complexity of value functions, avoid heuristics like clipping, and scale easily to problems in both language modeling and image generation. Our key insight is that regressing relative rewards via policies directly on a sequence of iteratively collected datasets implicitly enables policy improvement. Rather than being a heuristic, REBEL comes with strong guarantees in theory and can be seen as a strict generalization of classical techniques (e.g., NPG) in reinforcement learning. Furthermore, REBEL cleanly incorporates offline datasets when available, can be extended to robustly handle intransitive preferences (Swamy et al., 2024), and empirically out-performs techniques like PPO 2 and DPO (Rafailov et al., 2023) in language generation and has a faster convergence with a similar asymptotic performance in image generation. More explicitly, our key contributions are four-fold: 1. We propose REBEL, a simple and scalable RL algorithm. REBEL finds a near-optimal policy by solving a sequence of least square regression problems on iteratively collected datasets. Each regression problem involves using a policy-parameterized regressor to predict the difference in rewards across trajectories sampled from the dataset. This dataset can be generated in a purely on-policy fashion or can incorporate offline data, enabling hybrid training. Furthermore, REBEL can be easily extended to handle intransitive preferences. 2. We connect REBEL to classical RL methods. We show that REBEL is a generalization of the foundational Natural Policy Gradient (NPG, Kakade (2001)) algorithm \u2013 applying the Gauss-Newton algorithm to the sequence of regression problems that REBEL solves recovers NPG. 
However, by instead applying simpler first-order optimization techniques, we are able to avoid computing the Fisher Information Matrix and enjoy a variance reduction effect. Thus, REBEL can be understood as a generalization of NPG while being much more scalable. 3. We analyze the convergence properties of REBEL. We prove via a direct reduction-based analysis that as long as we can solve the regression problem well at each iteration, we will be able to compete with any policy covered by the iteratively collected datasets (matching the strongest known results in the agnostic RL). These problems involve predicting the difference in rewards between trajectories in our dataset. We expect this problem to be well-solved in practice because our class of regressors is isomorphic to a class of policies that is highly expressive for the applications we consider (i.e. flexible Transformer models). 4. We evaluate REBEL both on language modeling and image generation tasks. We find that the on-policy version of REBEL outperforms PPO and DPO on language modeling and has similar performance for image generation tasks. On the TL;DR summarization task, we show REBEL scales well by finetuning a 6.9B parameter model. For text-guided image generation, REBEL optimizes a consistency model that converges to a similar performance as PPO. In short, REBEL is a simple and scalable algorithm that enjoys strong theoretical guarantees and empirical performance. We believe it is a suitable answer to the question raised above.", "main_content": "We first outline the notation used throughout the paper. 2.1 Notation We consider the Contextual Bandit formulation (Langford and Zhang, 2007) of RL which has been used to formalize the generation process of models like LLMs (Rafailov et al., 2023; Ramamurthy et al., 2022; Chang et al., 2023) and Diffusion Models (Black et al., 2023; Fan et al., 2024; Oertell et al., 2024) due to the determinism of the transitions. More explicitly, in the deterministic transition setting, explicit states are not required as they can be equivalently represented by a sequence of 3 actions. Furthermore, the entire sequence of actions can be considered as a single \u201carm\u201d in a bandit problem with an exponentially large action space. We denote by (\ud835\udc65, \ud835\udc66) a prompt/response pair with \ud835\udc65\u2208X as a prompt and \ud835\udc66\u2208Y as a response (e.g., a sequence of tokens, or in general a sequence of actions). We assume access to a reward function \ud835\udc5f(\ud835\udc65, \ud835\udc66) from which we can query for reward signals (the exact form of \ud835\udc5fdoes not need to be known). Querying \ud835\udc5fat (\ud835\udc65, \ud835\udc66) will return a scalar \ud835\udc5f(\ud835\udc65, \ud835\udc66) measuring the quality of the response. Such a reward function could be a pre-defined metric (e.g., Rouge score against human responses) or it could be learned from an offline human demonstration or preference data (e.g., the RLHF paradigm (Christiano et al., 2017; Ziegler et al., 2020)), as explored in our experiments. Denote by \ud835\udf0b\u2208X \u21a6\u2192\u0394(\ud835\udc4c), a policy (e.g. LLM) that maps from a prompt \ud835\udc65to a distribution over the response space Y. We use \ud835\udf0cto denote the distribution over prompts (i.e. initial states / contexts) \ud835\udc65. Throughout the paper, we use \ud835\udf0b\ud835\udf03(\ud835\udc66|\ud835\udc65) to denote a parameterized policy with parameter \ud835\udf03(e.g., a neural network policy). 
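Since the regressors REBEL fits below are built from sequence log-probabilities $\ln \pi_\theta(y|x)$ of the parameterized policy, it is worth noting that for an autoregressive policy this quantity is simply the sum of token-level log-probabilities of the response given the prompt. The following is a minimal PyTorch-style sketch of that computation; the function name and masking convention are ours for illustration, assuming next-token logits from any causal LM.

```python
import torch
import torch.nn.functional as F

def sequence_log_prob(logits: torch.Tensor,
                      tokens: torch.Tensor,
                      response_mask: torch.Tensor) -> torch.Tensor:
    """Return ln pi(y|x) per sequence for an autoregressive policy.

    logits:        [batch, seq_len, vocab] next-token logits from the LM
    tokens:        [batch, seq_len] prompt + response token ids
    response_mask: [batch, seq_len] 1.0 on response (y) positions, 0.0 elsewhere
    """
    # Logits at position i predict the token at position i + 1, so shift by one.
    logprobs = F.log_softmax(logits[:, :-1], dim=-1)
    token_logprobs = logprobs.gather(-1, tokens[:, 1:].unsqueeze(-1)).squeeze(-1)
    # ln pi(y|x) = sum of per-token log-probs over the response positions only.
    return (token_logprobs * response_mask[:, 1:]).sum(dim=-1)
```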
At times we interchangeably use \ud835\udf0b\ud835\udc61and \ud835\udf0b\ud835\udf03\ud835\udc61when it is clear from the context. We emphasize that while we focus on the bandit formulation for notation simplicity, the algorithms proposed here can be applied to any deterministic MDP where \ud835\udc65is the initial state and the trajectory \ud835\udc66consists of the sequence of actions. At each iteration of all algorithms, our goal will be to solve the following KL-constrained RL problem: \ud835\udf0b\ud835\udc61+1 = argmax \ud835\udf0b E\ud835\udc65,\ud835\udc66\u223c\ud835\udf0b(\u00b7|\ud835\udc65)\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u22121 \ud835\udf02E\ud835\udc65KL (\ud835\udf0b(\u00b7|\ud835\udc65)||\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65)) . (1) Intuitively, this can be thought of asking for the optimizer to fine-tune the policy \ud835\udf0b\ud835\udc61+1 according to \ud835\udc5f while staying close to some baseline policy \ud835\udf0b\ud835\udc61. 2.2 Deriving REBEL: REgression to RElative REward Based RL From Ziebart et al. (2008), we know that there exists a closed-form solution to the above minimum relative entropy problem (Eq. 1, Gr\u00fcnwald and Dawid (2004)): \u2200\ud835\udc65, \ud835\udc66: \ud835\udf0b\ud835\udc61+1(\ud835\udc66|\ud835\udc65) = \ud835\udf0b\ud835\udc61(\ud835\udc66|\ud835\udc65) exp(\ud835\udf02\ud835\udc5f(\ud835\udc65, \ud835\udc66)) \ud835\udc4d(\ud835\udc65) ; \ud835\udc4d(\ud835\udc65) = \u2211\ufe01 \ud835\udc66 \ud835\udf0b\ud835\udc61(\ud835\udc66|\ud835\udc65) exp(\ud835\udf02\ud835\udc5f(\ud835\udc65, \ud835\udc66)). (2) As first pointed out by Rafailov et al. (2023), observe that we can invert Eq. 2 and write the reward as a function of the policy, i.e. the \u201cDPO Trick\u201d: \u2200\ud835\udc65, \ud835\udc66: \ud835\udc5f(\ud835\udc65, \ud835\udc66) = 1 \ud835\udf02 \u0012 ln(\ud835\udc4d(\ud835\udc65)) + ln \u0012 \ud835\udf0b\ud835\udc61+1(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udc61(\ud835\udc66|\ud835\udc65) \u0013\u0013 . (3) As soon as X and Y become large, we can no longer guarantee the above expression holds exactly at all (\ud835\udc65, \ud835\udc66) and therefore need to turn our attention to choosing a policy such that Eq. 3 is approximately true. We propose using a simple square loss objective between the two sides of Eq. 3 to measure the goodness of a policy, i.e. reducing RL to a regression problem: \u0012 \ud835\udc5f(\ud835\udc65, \ud835\udc66) \u22121 \ud835\udf02 \u0012 ln(\ud835\udc4d(\ud835\udc65)) + ln \u0012 \ud835\udf0b\ud835\udc61+1(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udc61(\ud835\udc66|\ud835\udc65) \u0013\u0013\u00132 . (4) 4 Algorithm 1 REgression to RElative REward Based RL (REBEL) 1: Input: Reward \ud835\udc5f, policy class \u03a0 = {\ud835\udf0b\ud835\udf03}, base distribution \ud835\udf07, learning rate \ud835\udf02 2: Initialize policy \ud835\udf0b\ud835\udf030. 3: for \ud835\udc61= 0 to \ud835\udc47\u22121 do 4: // Base distribution \ud835\udf07can either be an offline dataset or \ud835\udf0b\ud835\udc61. 
5: Collect dataset $\mathcal{D}_t = \{x, y, y'\}$ where $x \sim \rho$, $y \sim \pi_t(\cdot|x)$, $y' \sim \mu(\cdot|x)$
6: Solve the square loss regression problem:
$$\theta_{t+1} = \arg\min_{\theta} \sum_{(x, y, y') \in \mathcal{D}_t} \left( \frac{1}{\eta}\left( \ln \frac{\pi_\theta(y|x)}{\pi_{\theta_t}(y|x)} - \ln \frac{\pi_\theta(y'|x)}{\pi_{\theta_t}(y'|x)} \right) - \left( r(x, y) - r(x, y') \right) \right)^2 \quad (9)$$
7: end for

Unfortunately, this loss function includes the partition function $Z(x)$, which can be challenging to approximate over large input / output domains. However, observe that $Z(x)$ only depends on $x$ and not $y$. Thus, if we have access to paired samples, i.e. $(x, y)$ and $(x, y')$, we can instead regress the difference in rewards to eliminate this term from our objective:
$$\left( \left( r(x, y) - r(x, y') \right) - \frac{1}{\eta}\left( \ln \frac{\pi_{t+1}(y|x)}{\pi_t(y|x)} - \ln \frac{\pi_{t+1}(y'|x)}{\pi_t(y'|x)} \right) \right)^2. \quad (5)$$
Of course, we need to evaluate this loss function on some distribution of samples. In particular, we propose using an on-policy dataset $\mathcal{D}_t = \{x, y, y'\}$ with $x \sim \rho$, $y \sim \pi_t(\cdot|x)$, $y' \sim \mu(\cdot|x)$, where $\mu$ is some base distribution. The base distribution $\mu$ can either be a fixed offline dataset (e.g. the instruction fine-tuning dataset) or $\pi_t$ itself. Thus, the choice of base distribution $\mu$ determines whether REBEL is hybrid or fully online. Putting it all together, we arrive at our core REBEL objective:
$$\sum_{(x, y, y') \in \mathcal{D}_t} \left( \left( r(x, y) - r(x, y') \right) - \frac{1}{\eta}\left( \ln \frac{\pi_{t+1}(y|x)}{\pi_t(y|x)} - \ln \frac{\pi_{t+1}(y'|x)}{\pi_t(y'|x)} \right) \right)^2. \quad (6)$$
To recap, given a pair of completions $y, y'$ to a prompt $x$, REBEL attempts to fit the relative reward $r(x, y) - r(x, y')$ (7) by optimizing over a class of predictors of the form $\frac{1}{\eta}\left( \ln \frac{\pi_\theta(y|x)}{\pi_{\theta_t}(y|x)} - \ln \frac{\pi_\theta(y'|x)}{\pi_{\theta_t}(y'|x)} \right).$
(8) Critically, observe that if we were able to perfectly solve this regression problem, we would indeed recover the optimal solution to the KL-constrained RL problem we outlined in Eq. 1. While the above update might seem somewhat arbitrary at first glance, it has deep connections to prior work in the literature that illuminate its strengths over past techniques. We now discuss some of them. 3 Understanding REBEL as an Adaptive Policy Gradient We begin by recapping the foundational algorithms for policy optimization before situating REBEL within this space of techniques. 5 3.1 Adaptive Gradient Algorithms for Policy Optimization In this section, we give a brief overview of three adaptive gradient algorithms: Mirror Descent (MD), Natural Policy Gradient (NPG), and Proximal Policy Optimization (PPO). We discuss why they are preferable to their non-adaptive counterparts (Gradient Descent (GD) and Policy Gradient (PG)) and the connections between them. Mirror Descent. If X and Y are small discrete spaces (i.e. we are in the tabular setting), we can used the closed-form expression for the minimum relative entropy problem (Eq. 2). This is equivalent to the classic Mirror Descent (MD) algorithm with KL as the Bregman divergence. This update procedure is also sometimes known as soft policy iteration (Ziebart et al., 2008). Note that it does not even involve a parameterized policy and is therefore manifestly covariant. MD ensures a 1/\ud835\udc47convergence rate, i.e., after \ud835\udc47iterations, it must find a policy \u02c6 \ud835\udf0b, such that E\ud835\udc65,\ud835\udc66\u223c\ud835\udf0b\u2605(.|\ud835\udc65)\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2212E\ud835\udc65,\ud835\udc66\u223c\u02c6 \ud835\udf0b(.|\ud835\udc65)\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2264\ud835\udc42(1/\ud835\udc47). In particular, the convergence is almost dimension-free: the convergence rate scales logarithmically with respect to the size of the Y space. Note that gradient ascent will not enjoy such a dimension-free rate when optimizing over the simplex. When sup\ud835\udc65,\ud835\udc66|\ud835\udc5f(\ud835\udc65, \ud835\udc66)| is bounded, we can show that the KL divergence between two policies, i.e., KL(\ud835\udf0b\ud835\udc61+1(\u00b7|\ud835\udc65)||\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65)), is also bounded, ensuring \ud835\udf0b\ud835\udc61+1 stay close to \ud835\udf0b\ud835\udc61. One can also show monotonic policy improvement, i.e., E\ud835\udc65,\ud835\udc66\u223c\ud835\udf0b\ud835\udc61+1\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2265E\ud835\udc65,\ud835\udc66\u223c\ud835\udf0b\ud835\udc61\ud835\udc5f(\ud835\udc65, \ud835\udc66). Foreshadowing a key point we will soon expound upon, both NPG and PPO can be considered approximations of this idealized tabular policy update procedure. Natural Policy Gradient. When Y and X are large, we cannot simply enumerate all \ud835\udc65and \ud835\udc66. Thus, we need to use a function to approximate \ud835\udf0b, which makes it impossible to exactly implement Eq. 2. Let us use \ud835\udf0b\ud835\udf03to denote a parameterized policy with parameter \ud835\udf03(e.g. the weights of a transformer). The Natural Policy Gradient (NPG, Kakade (2001)) approximates the KL in Equation 1 via its second-order Taylor expansion, whose Hessian is known as the Fisher Information Matrix (FIM, Bagnell and Schneider (2003)), i.e. 
E\ud835\udc65KL(\ud835\udf0b\ud835\udf03(\u00b7|\ud835\udc65)||\ud835\udf0b\ud835\udf03\ud835\udc61(\u00b7|\ud835\udc65)) \u2248(\ud835\udf03\u2212\ud835\udf03\ud835\udc61)\u22a4E\ud835\udc65,\ud835\udc66\u223c\ud835\udf0b\ud835\udf03\ud835\udc61(\u00b7|\ud835\udc65) \u0002 \u2207ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65)\u2207ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65)\u22a4\u0003 | {z } Fisher Information Matrix \ud835\udc39\ud835\udc61 (\ud835\udf03\u2212\ud835\udf03\ud835\udc61). The NPG update can be derived by plugging in this approximation to Eq. 1, further approximating the E\ud835\udc65,\ud835\udc66\u223c\ud835\udf0b\ud835\udf03(\u00b7|\ud835\udc65)\ud835\udc5f(\ud835\udc65, \ud835\udc66) by its first order Taylor expansion around \ud835\udf03\ud835\udc61, and finding the root of the resulting quadratic form: \ud835\udf03\ud835\udc61+1 = \ud835\udf03\ud835\udc61+ \ud835\udf02\ud835\udc39\u2020 \ud835\udc61 \u0010 E\ud835\udc65,\ud835\udc66\u223c\ud835\udf0b\ud835\udf03\ud835\udc61(\u00b7|\ud835\udc65)\u2207ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65)\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u0011 (10) where \ud835\udc39\u2020 \ud835\udc61is pseudo-inverse of \ud835\udc39\ud835\udc61, and E\ud835\udc65,\ud835\udc66\u223c\ud835\udf0b\ud835\udf03\ud835\udc61(\u00b7|\ud835\udc65)\u2207ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65)\ud835\udc5f(\ud835\udc65, \ud835\udc66) is the standard policy gradient (i.e. REINFORCE (Williams, 1992)). As mentioned above, this update procedure can be understood as performing gradient updates in the local geometry induced by the Fisher information matrix, which ensures that we are taking small steps in policy space rather than in parameter space. Conversely, unlike regular gradient descent methods (i.e., PG), NPG allows us to make large changes in the parameter space \u0398, as long as the resulting two policies are close to each other in terms of KL divergence. This property allows NPG to make more aggressive and adaptive updates in the parameter space of the policy as well as be invariant to linear transformations of the parameters. Theoretically, Agarwal et al. (2021a) show that NPG with softmax parameterization converges at the 1/\ud835\udc47rate in a dimension-free manner, provably faster than the standard PG under the same setup. Empirically, the 6 superior convergence speed of NPG compared to that of PG was observed in its original exploration (Kakade, 2001; Bagnell and Schneider, 2003), as well as in follow-up work like TRPO (Schulman et al., 2015a). Critically, while elegant in theory, NPG, unfortunately, does not scale to modern generative models due to the need for computing the Fisher matrix inverse either explicitly or implicitly via the Hessian-vector matrix product trick. Proximal Policy Optimization. To address the scalability of NPG, Schulman et al. (2017) proposes Proximal Policy Optimization (PPO). Rather than explicitly computing the KL divergence between policies or approximating it via a Taylor expansion, PPO takes a more direct route and uses clipped updates with the hope of controlling the action probability deviation from \ud835\udf0b\ud835\udf03\ud835\udc61+1 to \ud835\udf0b\ud835\udf03\ud835\udc61, i.e. 
\ud835\udf03\ud835\udc61+1 := argmax \ud835\udf03 E\ud835\udc65,\ud835\udc66\u223c\ud835\udf0b\ud835\udf03\ud835\udc61(\u00b7|\ud835\udc65)clip \u0012 \ud835\udf0b\ud835\udf03(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) ; 1 \u2212\ud835\udf16, 1 + \ud835\udf16 \u0013 \ud835\udc5f(\ud835\udc65, \ud835\udc66). (11) Prima facie, this update follows the underlying intuition of NPG: allow big and adaptive changes in the policy\u2019s parameters \ud835\udf03, as long as the corresponding action probabilities do not change too much. This perhaps explains the superiority of PPO over vanilla REINFORCE in domains like continuous control. Unfortunately, under closer scrutiny, it becomes apparent that PPO-style clipped updates neither guarantee closeness to the prior policy nor have NPG-style adaptivity. While the clipping operator can set the gradient to be zero at samples (\ud835\udc65, \ud835\udc66) where \ud835\udf0b\ud835\udf03\ud835\udc61+1(\ud835\udc66|\ud835\udc65) is much larger or smaller than \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65), it cannot actually guarantee \ud835\udf0b\ud835\udf03\ud835\udc61+1 staying close to \ud835\udf0b\ud835\udf03\ud835\udc61, a phenomenon empirically observed in prior work (Hsu et al., 2020). Furthermore, hard clipping is not adaptive \u2013 it treats all (\ud835\udc65, \ud835\udc66) equally and clips whenever the ratio is outside of a fixed range. In contrast, constraining the KL divergence to the prior policy allows one to vary the ratio \ud835\udf0b(\ud835\udc66|\ud835\udc65)/\ud835\udf0b\ud835\udc61(\ud835\udc66|\ud835\udc65) at different (\ud835\udc65, \ud835\udc66), as long as the total KL divergence across the state space is small. Lastly, clipping reduces the effective size of a batch of training examples and thus wastes training samples. A REBEL With a Cause. Our algorithm REBEL addresses the limitations of NPG (scalability) and PPO (lack of conservativity or adaptivity) from above. First, unlike NPG, it does not rely on the Fisher information matrix at all and can easily scale to modern LLM applications, yet (as we will discuss below) can be interpreted as a generalization of NPG. Second, in contrast to PPO, it doesn\u2019t have unjustified heuristics and thus enjoys strong convergence and regret guarantees just like NPG. 3.2 Connections between REBEL and MD / NPG We now sketch a series of connections between REBEL and the methods outlined above. Exact REBEL is Mirror Descent. First, to build intuition, we interpret our algorithm\u2019s behavior under the assumption that the least square regression optimization returns the exact Bayes Optimal solution (i.e., our learned predictor achieves zero prediction error everywhere): \u2200\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032 : 1 \ud835\udf02 \u0012 ln \ud835\udf0b\ud835\udf03\ud835\udc61+1(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) \u2212ln \ud835\udf0b\ud835\udf03\ud835\udc61+1(\ud835\udc66\u2032|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66\u2032|\ud835\udc65) \u0013 = \ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2212\ud835\udc5f(\ud835\udc65, \ud835\udc66\u2032) (12) Conditioned on Eq. 
12 being true, a few lines of algebraic manipulation reveals that there must exist a function \ud835\udc50(\ud835\udc65) which is independent of \ud835\udc66, such that: \u2200\ud835\udc65, \ud835\udc66: 1 \ud835\udf02ln \ud835\udf0b\ud835\udf03\ud835\udc61+1(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) = \ud835\udc5f(\ud835\udc65, \ud835\udc66) + \ud835\udc50(\ud835\udc65). 7 Taking an exp on both sides and re-arrange terms, we get: \u2200\ud835\udc65, \ud835\udc66: \ud835\udf0b\ud835\udf03\ud835\udc61+1(\ud835\udc66|\ud835\udc65) \u221d\ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) exp (\ud835\udf02\ud835\udc5f(\ud835\udc65, \ud835\udc66)) . In other words, under the strong assumption that least square regression returns a point-wise accurate estimator (i.e., Eq. 12), we see the REBEL recovers the exact MD update, which gives it (a) a fast 1/\ud835\udc47convergence rate (Shani et al., 2020; Agarwal et al., 2021a), (b) conservativity, i.e., max\ud835\udc65KL(\ud835\udf0b\ud835\udc61+1(\u00b7|\ud835\udc65)||\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65)) is bounded as long as max\ud835\udc65,\ud835\udc66|\ud835\udc5f(\ud835\udc65, \ud835\udc66)| is bounded, and (c) monotonic policy improvement via the NPG standard analysis (Agarwal et al., 2021a). NPG is Approximate REBEL with Gauss-Newton Updates. We provide another interpretation of REBEL by showing that NPG (Eq. 10) can be understood as a special case of REBEL where the least square problem in Eq. 9 is approximately solved via a single iteration of the Gauss-Newton algorithm. As for any application of Gauss-Newton, we start by approximating our predictor 1 \ud835\udf02ln \ud835\udf0b\ud835\udf03(\ud835\udc66|\ud835\udc65)/\ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) by its first order Taylor expansion at \ud835\udf03\ud835\udc61: 1 \ud835\udf02 \u0000ln \ud835\udf0b\ud835\udf03(\ud835\udc66|\ud835\udc65) \u2212ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65)\u0001 \u22481 \ud835\udf02\u2207\ud835\udf03ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65)\u22a4(\ud835\udf03\u2212\ud835\udf03\ud835\udc61), where \u2248indicates that we ignore higher order terms in the expansion. If we \ud835\udeff:= \ud835\udf03\u2212\ud835\udf03\ud835\udc61and replace 1 \ud835\udf02 \u0000ln \ud835\udf0b\ud835\udf03(\ud835\udc66|\ud835\udc65) \u2212ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65)\u0001 by its above first order approximation in Eq. 9, we arrive at the following quadratic form: min \ud835\udeffE\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\ud835\udf0b\ud835\udf03\ud835\udc61(\u00b7|\ud835\udc65),\ud835\udc66\u2032\u223c\ud835\udf07(\u00b7|\ud835\udc65) \u00121 \ud835\udf02 \u0000\u2207\ud835\udf03ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) \u2212\u2207\ud835\udf03ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66\u2032|\ud835\udc65)\u0001\u22a4\ud835\udeff\u2212(\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2212\ud835\udc5f(\ud835\udc65, \ud835\udc66\u2032)) \u00132 . 
(13) Further simplifying notation, we denote the uniform mixture of $\pi_t$ and $\mu$ as $\pi_{mix}(\cdot|x) := (\pi_t(\cdot|x) + \mu(\cdot|x))/2$ and the Fisher information matrix $F_t$ averaged under said mixture as:
$$F_t = \mathbb{E}_{x \sim \rho,\, y \sim \pi_{mix}(\cdot|x)}\left[ \nabla_\theta \ln \pi_{\theta_t}(y|x) \left( \nabla_\theta \ln \pi_{\theta_t}(y|x) \right)^\top \right].$$
Solving the above least squares regression to obtain a minimum-norm solution, we have the following claim.

Claim 1. The minimum-norm minimizer $\delta^\star$ of the least squares problem in Eq. 13 recovers an advantage-based variant of the NPG update:
$$\delta^\star := \eta F_t^\dagger \left( \mathbb{E}_{x \sim \rho,\, y \sim \pi_{mix}(\cdot|x)} \nabla_\theta \ln \pi_{\theta_t}(y|x)\, A^{\pi_t}(x, y) \right),$$
where $F_t^\dagger$ is the pseudo-inverse of $F_t$, and the advantage is defined as $A^{\pi_t}(x, y) := r(x, y) - \mathbb{E}_{y' \sim \pi_t(\cdot|x)} r(x, y')$.

The proof of this claim is deferred to Appendix A. Observe that REBEL never explicitly computes the advantage $A^{\pi_t}$. However, applying Gauss-Newton to our objective leads to an advantage-based NPG (rather than the traditional $Q$-function-based NPG, e.g., Q-NPG from Agarwal et al. (2021a, 2019)), which indicates that predicting the reward difference has an implicit variance-reduction effect: by definition, an advantage function includes a value-function baseline.¹

¹Note that the original form of NPG is on-policy (Kakade, 2001; Sutton et al., 1999), i.e., the expectations are taken under $\pi_t$. Our formulation is more general: when we set $\mu = \pi_t$, a Gauss-Newton step recovers the original on-policy form of NPG from Kakade (2001); Sutton et al. (1999). More recent works have extended NPG beyond the on-policy setting (e.g., Agarwal et al. (2021a, 2020)).

3.3 Extending REBEL to General Preferences

In the above discussion, we assumed access to a ground-truth reward function. However, in the generative-model fine-tuning applications of RL, we often need to learn from human preferences rather than rewards. This shift introduces a complication: not all preferences can be rationalized by an underlying utility function. In particular, intransitive preferences, which are well known to result from the aggregation of different sub-populations or from users evaluating different pairs of items on the basis of different features (May, 1954; Tversky, 1969; Gardner, 1970), cannot be accurately captured by a single reward model.
To see this, note that if we have \ud835\udc4e\u227b\ud835\udc4f, \ud835\udc4f\u227b\ud835\udc50, and \ud835\udc50\u227b\ud835\udc4e, it is impossible to have a reward model that simultaneously sets \u02c6 \ud835\udc5f(\ud835\udc4e) > \u02c6 \ud835\udc5f(\ud835\udc4f), \u02c6 \ud835\udc5f(\ud835\udc4f) > \u02c6 \ud835\udc5f(\ud835\udc50), and \u02c6 \ud835\udc5f(\ud835\udc50) > \u02c6 \ud835\udc5f(\ud835\udc4e). As we increase the space of possible choices to that of all possible prompt completions, the probability of such intransitivities sharply increases (Dud\u00edk et al., 2015), as reflected in the high levels of annotator disagreement in LLM fine-tuning datasets (Touvron et al., 2023). Thus, rather than assuming access to a reward model, in such settings, we assume access to a preference model (Munos et al., 2023; Swamy et al., 2024; Rosset et al., 2024; Ye et al., 2024). 3.3.1 A Game-Theoretic Perspective on Learning from Preferences More specifically, for any tuple (\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032), we assume we have access to P(\ud835\udc66\u227b\ud835\udc66\u2032|\ud835\udc65): the probability that \ud835\udc66is preferred to \ud835\udc66\u2032. We then define our preference model \ud835\udc59as \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032) \u225c2 \u00b7 P(\ud835\udc66\u227b\ud835\udc66\u2032|\ud835\udc65) \u22121. (14) Observe that \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032) \u2208[\u22121, 1] is skew-symmetric, i.e., \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66) = 0, \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032) + \ud835\udc59(\ud835\udc65, \ud835\udc66\u2032, \ud835\udc66) = 0 for all \ud835\udc65\u2208X, \ud835\udc66, \ud835\udc66\u2032 \u2208Y. If the learner can only receive a binary feedback \ud835\udc5c\u2208{0, 1} indicating the preference between \ud835\udc66and \ud835\udc66\u2032, we assume \ud835\udc5cis sampled from a Bernoulli distribution with mean P(\ud835\udc66\u227b\ud835\udc66\u2032|\ud835\udc65), where \ud835\udc5c= 1 means that \ud835\udc66is preferred over \ud835\udc66\u2032 and 0 otherwise. Given access to such a preference model, a solution concept to the preference aggregation problem with deep roots in the social choice theory literature (Kreweras, 1965; Fishburn, 1984; Kramer, 1973; Simpson, 1969) and the dueling bandit literature (Yue et al., 2012; Dud\u00edk et al., 2015) is that of a minimax winner (MW) \ud835\udf0bMW: the Nash Equilibrium strategy of the symmetric two-player zero-sum game with \ud835\udc59as a payoff function. In particular, due to the skew-symmetric property of \ud835\udc59, Swamy et al. (2024) proved that there exists a policy \ud835\udf0bMW such that max \ud835\udf0bE\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\ud835\udf0b(\u00b7|\ud835\udc65),\ud835\udc66\u2032\u223c\ud835\udf0bMW (\u00b7|\ud835\udc65) [\ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032)] = min \ud835\udf0bE\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\ud835\udf0bMW (\u00b7|\ud835\udc65),\ud835\udc66\u2032\u223c\ud835\udf0b(\u00b7|\ud835\udc65) [\ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032)] . This implies that (\ud835\udf0bMW, \ud835\udf0bMW) is a Nash Equilibrium (Wang et al., 2023b; Munos et al., 2023; Swamy et al., 2024; Ye et al., 2024). 
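To make the minimax winner concrete on the intransitive example above, the small numerical sketch below uses an illustrative skew-symmetric payoff matrix for three responses $a, b, c$ with $a \succ b$, $b \succ c$, $c \succ a$ (the specific values are ours, not from the paper): no single reward model can rationalize this cycle, yet the uniform policy cannot be beaten in expectation, i.e., it is the minimax winner, while any deterministic policy can be exploited.

```python
import numpy as np

# Skew-symmetric preference payoff l(y, y') = 2 P(y > y') - 1 for three
# responses a, b, c forming the intransitive cycle a > b, b > c, c > a
# (illustrative deterministic preferences).
l = np.array([
    [ 0.0,  1.0, -1.0],   # a vs (a, b, c)
    [-1.0,  0.0,  1.0],   # b vs (a, b, c)
    [ 1.0, -1.0,  0.0],   # c vs (a, b, c)
])

def best_response_value(pi: np.ndarray) -> float:
    """max_y E_{y' ~ pi} l(y, y'): how much an adversary can win against pi.
    (The max over all policies of a linear payoff is attained at a pure response.)"""
    return float((l @ pi).max())

uniform = np.ones(3) / 3
print(best_response_value(uniform))                    # 0.0 -> uniform is the minimax winner
print(best_response_value(np.array([1.0, 0.0, 0.0])))  # 1.0 -> always playing a is beaten by c
```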
As is standard in game solving, our objective is to obtain an $\epsilon$-approximate MW $\hat{\pi}$, as measured by the duality gap (DG):
$$DG(\hat{\pi}) := \max_{\pi} \mathbb{E}_{x \sim \rho,\, y \sim \pi(\cdot|x),\, y' \sim \hat{\pi}(\cdot|x)}\left[ l(x, y, y') \right] - \min_{\pi} \mathbb{E}_{x \sim \rho,\, y \sim \hat{\pi}(\cdot|x),\, y' \sim \pi(\cdot|x)}\left[ l(x, y, y') \right] \le \epsilon.$$
In the following discussion, we will use $l(x, y, \pi)$ to denote $\mathbb{E}_{y' \sim \pi(\cdot|x)}\left[ l(x, y, y') \right]$ and $l(\pi, \pi')$ to denote $\mathbb{E}_{x \sim \rho,\, y \sim \pi(\cdot|x),\, y' \sim \pi'(\cdot|x)}\left[ l(x, y, y') \right]$ for notational convenience.

3.3.2 Self-Play Preference Optimization (SPO) with REBEL as Base Learner

We can straightforwardly extend REBEL to the general preference setting via an instantiation of the Self-Play Preference Optimization (SPO) reduction of Swamy et al. (2024). In short, Swamy et al. (2024) prove that rather than performing adversarial training, we can run a simple and stable self-play procedure while retaining strong theoretical guarantees. Practically, this corresponds to sampling at least two completions from the current policy, querying a learned preference / supervisor model on each pair, and using the win rate for each completion as its reward. We now describe how to adapt REBEL to this mode of feedback.

Assuming that we can query the preference oracle $l(x, y, y')$ at will, we can modify the least squares objective in Eq. (9) to
$$\theta_{t+1} := \arg\min_{\theta} \sum_{(x, y, y', y'') \in \mathcal{D}_t} \left( \frac{1}{\eta}\left( \ln \frac{\pi_\theta(y|x)}{\pi_{\theta_t}(y|x)} - \ln \frac{\pi_\theta(y'|x)}{\pi_{\theta_t}(y'|x)} \right) - \left( l(x, y, y'') - l(x, y', y'') \right) \right)^2$$
where $x \sim \rho$, $y \sim \pi_t(\cdot|x)$, $y'' \sim \pi_t(\cdot|x)$, $y' \sim \mu(\cdot|x)$. When the exact value of $l(x, y, y'')$ is unavailable and only binary preference feedback $o_{y, y''} \in \{0, 1\}$ (sampled from a Bernoulli distribution, as above) is available, we can simply replace $l(x, y, y'') - l(x, y', y'')$ with $o_{y, y''} - o_{y', y''}$.
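To make the data collection above concrete, the sketch below computes, for each sampled completion, its win rate against the other completions and uses win-rate differences as the regression targets, following the SPO description above. The preference-oracle interface is hypothetical, and for illustration we assume all completions are drawn from the current policy.

```python
from typing import Callable, List, Tuple

# preference(x, y, y_prime) -> 1 if y is preferred to y_prime, else 0
# (a query to the learned preference / supervisor model; interface hypothetical)
Preference = Callable[[str, str, str], int]

def spo_regression_targets(x: str,
                           samples: List[str],
                           preference: Preference) -> List[Tuple[str, str, float]]:
    """Win rate of each completion against the other on-policy samples, and
    win-rate differences as regression targets for each ordered pair (y, y')."""
    n = len(samples)
    win_rate = []
    for i in range(n):
        opponents = [samples[j] for j in range(n) if j != i]
        wins = sum(preference(x, samples[i], z) for z in opponents)
        win_rate.append(wins / max(len(opponents), 1))
    targets = []
    for i in range(n):
        for j in range(n):
            if i != j:
                # relative-reward target for the pair (samples[i], samples[j])
                targets.append((samples[i], samples[j], win_rate[i] - win_rate[j]))
    return targets
```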
It is easy to see that the Bayes optimal of the above least square regression problem is equal to: E\ud835\udc66\u2032\u2032\u223c\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65)\ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032\u2032) \u2212E\ud835\udc66\u2032\u2032\u223c\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65)\ud835\udc59(\ud835\udc65, \ud835\udc66\u2032, \ud835\udc66\u2032\u2032) = \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udf0b\ud835\udc61) \u2212\ud835\udc59(\ud835\udc65, \ud835\udc66\u2032, \ud835\udf0b\ud835\udc61). Swamy et al. (2024) define an iteration-dependent reward \ud835\udc5f\ud835\udc61(\ud835\udc65, \ud835\udc66) := E\ud835\udc66\u2032\u2032\u223c\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65)\ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032\u2032) = \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udf0b\ud835\udc61). Thus, the above regression problem can be understood as an extension of REBEL to the setting where the reward function changes at each iteration \ud835\udc61. Swamy et al. (2024) shows that running the exact MD (Eq. 2) with this iteration-dependent reward function \ud835\udc5f\ud835\udc61leads to fast convergence to an approximate Minimax Winner, a property that we will use to provide the regret bound of REBEL in the general preference setting while accounting for nonzero mean squared error. 4 Theoretical Analysis In the previous section, we interpret REBEL as the exact MD and show its convergence by assuming that least square regression always returns a predictor that is accurate everywhere. While such an explanation is simple and has also been used in prior work, point-wise out-of-distribution generalization is an extremely strong condition and is significantly beyond what a standard supervised learning method can promise. In this section, we significantly relax this condition via a reduction-based analysis: As long as we can solve the regression problems well in an in-distribution manner, REBEL can compete against any policy covered by the training data distributions. Formally, we assume the following generalization condition holds on the regressors we find. Assumption 1 (Regression generalization bounds). Over \ud835\udc47iterations, assume that for all \ud835\udc61, we have: E\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65),\ud835\udc66\u2032\u223c\ud835\udf07(\u00b7|\ud835\udc65) \u00121 \ud835\udf02 \u0012 ln \ud835\udf0b\ud835\udf03\ud835\udc61+1(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) \u2212ln \ud835\udf0b\ud835\udf03\ud835\udc61+1(\ud835\udc66\u2032|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66\u2032|\ud835\udc65) \u0013 \u2212(\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2212\ud835\udc5f(\ud835\udc65, \ud835\udc66\u2032)) \u00132 \u2264\ud835\udf16, for some \ud835\udf16. 10 Intuitively, this assumption is saying that there is a function in our class of regressors that is able to accurately fit the difference of rewards. Recall that our class of regressors is isomorphic to our policy class. Therefore, as long as our class of policies is expressive, we would expect this assumption to hold with small \ud835\udf16. For all domains we consider, our policy class is a flexible set of generative models (e.g. Transformer-based LLMs or diffusion models). 
Thus, we believe it is reasonable to believe this assumption holds in practice \u2013 see Figure 6 in Appendix G for empirical evidence of this point and Example 1 for more discussion. More formally, the above assumption bounds the standard in-distribution generalization error (v.s. the point-wise guarantee in Eq. 12) of a well-defined supervised learning problem: least squares regression. The generalization error \ud835\udf16captures the possible errors from the learning process for \ud835\udf03\ud835\udc61+1 and it could depend on the complexity of the policy class and the number of samples used in the dataset D\ud835\udc61. For instance, when the the function ln \ud835\udf0b\u2212ln \ud835\udf0b\u2032 induced by the log-difference of two policies (\ud835\udf0b, \ud835\udf0b\u2032) are rich enough (e.g., policies are deep neural networks) to capture the reward difference, then \ud835\udf16in this assumption converges to zero as we increase the number of training data. Note that while \ud835\udf16can be small, it does not imply that the learned predictor will have a small prediction error in a point-wise manner \u2013 it almost certainly will not. Example 1. One simple example is when \ud835\udf0b(\ud835\udc66|\ud835\udc65) \u221dexp(\ud835\udf03\u22a4\ud835\udf19(\ud835\udc65, \ud835\udc66)) for some features \ud835\udf19(\ud835\udc65, \ud835\udc66). In this case, ln(\ud835\udf0b(\ud835\udc66|\ud835\udc65)/\ud835\udf0b\ud835\udc61(\ud835\udc66|\ud835\udc65)) \u2212ln(\ud835\udf0b(\ud835\udc66\u2032|\ud835\udc65)/\ud835\udf0b\ud835\udc61(\ud835\udc66\u2032|\ud835\udc65)) = (\ud835\udf03\u2212\ud835\udf03\ud835\udc61)\u22a4(\ud835\udf19(\ud835\udc65, \ud835\udc66) \u2212\ud835\udf19(\ud835\udc65, \ud835\udc66\u2032)), which means that our regression problem in Eq. 9 is a classic linear regression problem. When the reward \ud835\udc5f(\ud835\udc65, \ud835\udc66) is also linear in feature \ud835\udf19(\ud835\udc65, \ud835\udc66), then Eq. 9 is a well-specified linear regression problem, and \ud835\udf16typically scales in the rate of \ud835\udc42(\ud835\udc51/|D\ud835\udc61|) with \ud835\udc51being the dimension of feature \ud835\udf19. We can extend the above example to the case where \ud835\udf19is the feature corresponding to some kernel, e.g., RBF kernel or even Neural Tangent Kernel, which allows us to capture the case where \ud835\udf0bis a softmax wide neural network with the least square regression problem solved by gradient flow. The error \ud835\udf16again scales poly(\ud835\udc51/|D\ud835\udc61|), where \ud835\udc51is the effective dimension of the corresponding kernel. We now define the concentrability coefficient (Kakade and Langford, 2002) that quantifies how the training data distribution is covering a comparator policy. Data Coverage. Recall that the base distribution \ud835\udf07can be some behavior policy, which in RLHF can be a human labeler, a supervised fine-tuned policy (SFT), or just the current learned policy (i.e., on-policy). Given a test policy \ud835\udf0b, we denote by \ud835\udc36\ud835\udf07\u2192\ud835\udf0bthe concentrability coefficient, i.e. \ud835\udc36\ud835\udf07\u2192\ud835\udf0b= max \ud835\udc65,\ud835\udc66 \ud835\udf0b(\ud835\udc66|\ud835\udc65) \ud835\udf07(\ud835\udc66|\ud835\udc65) . (15) We say \ud835\udf07covers \ud835\udf0bif \ud835\udc36\ud835\udf07\u2192\ud835\udf0b< +\u221e. Our goal is to bound the regret between our learned policies and an arbitrary comparator \ud835\udf0b\u2217(e.g. 
the optimal policy if it is covered by \ud835\udf07) using \ud835\udf16and the concentrability coefficient defined in Eq. 15. The following theorem formally states the regret bound of our algorithm. Theorem 1. Under Assumption 1, after \ud835\udc47many iterations, with a proper learning rate \ud835\udf02, among the learned policies \ud835\udf0b1, . . . , \ud835\udf0b\ud835\udc47, there must exist a policy \u02c6 \ud835\udf0b, such that: \u2200\ud835\udf0b\u2217: E\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\ud835\udf0b\u2217(\u00b7|\ud835\udc65)\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2212E\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\u02c6 \ud835\udf0b(\u00b7|\ud835\udc65)\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2264\ud835\udc42 \u221a\ufe02 1 \ud835\udc47+ \u221a\ufe01 \ud835\udc36\ud835\udf07\u2192\ud835\udf0b\u2217\ud835\udf16. ! . 11 Here the \ud835\udc42-notation hides problem-dependent constants that are independent of \ud835\udf16, \ud835\udc36\ud835\udf07\u2192\ud835\udf0b\u2217,\ud835\udc47. The above theorem shows a reduction from RL to supervised learning \u2014 as long as supervised learning works (i.e., \ud835\udf16is small), then REBEL can compete against any policy \ud835\udf0b\u2217that is covered by the base data distribution \ud835\udf07. In the regret bound, the 1/ \u221a \ud835\udc47comes from Mirror Descent style update, and \ud835\udc36\ud835\udf07\u2192\ud835\udf0b\u2217\ud835\udf16captures the cost of distribution shift: we train our regressors under distribution \ud835\udf0b\ud835\udc61and \ud835\udf07, but we want the learned regressor to predict well under \ud835\udf0b\u2217. Similar to the NPG analysis from Agarwal et al. (2021a), we now have a slower convergence rate 1/ \u221a \ud835\udc47, which is due to the fact that we have approximation error from learning. Such an agnostic regret bound \u2014 being able compete against any policy that is covered by training distributions \u2013 is the strongest type of agnostic learning results known in the RL literature, matching the best of what has appeared in prior policy optimization work including PSDP (Bagnell et al., 2003), CPI (Kakade and Langford, 2002), NPG (Agarwal et al., 2021a), and PC-PG (Agarwal et al., 2020). While in this work, we use the simplest and most intuitive definition of coverage \u2013 the density ratio-based definition in Eq. 15 \u2013 extension to more general ones such as transfer error (Agarwal et al., 2020, 2021a) or concentrability coefficients that incorporate function class (e.g., Song et al. (2023b)) is straightforward. We defer the proof of the above theorem and the detailed constants that we omitted in the \ud835\udc42notation to Appendix B. 4.1 Extension to General Preferences Extending the above analysis to the general preference case is straightforward except that it requires a stronger coverage condition. This is because we want to find a Nash Equilibrium, which requires a comparison between the learned policy against all the other policies. Results from the Markov Game literature (Cui and Du, 2022b; Zhong et al., 2022; Cui and Du, 2022a; Xiong et al., 2023) and Cui and Du (2022b) have shown that the standard single policy coverage condition used in single-player optimization is provably not sufficient. 
In particular, they propose a notion of unilateral concentrability for efficient learning in the general preference setting, which can be defined as
$$C_{\text{uni},\mu} := \max_{\pi, x, y, y''} \frac{\pi_{\text{MW}}(y|x)\, \pi(y''|x)}{\mu(y|x)\, \mu(y''|x)}.$$
Notably, the unilateral concentrability coefficient $C_{\text{uni},\mu}$ is equivalent to $C_\mu := \max_{\pi, x, y} \frac{\pi(y|x)}{\mu(y|x)}$ up to a square, since $C_\mu \le C_{\text{uni},\mu} \le C_\mu^2$. Therefore, in the following discussion we use $C_\mu$ as the coverage condition. In addition, we assume the generalization error of the regression problem is small:

Assumption 2 (Regression generalization bounds for general preferences). Over $T$ iterations, assume that for all $t$ we have
$$\mathbb{E}_{x \sim \rho,\, y \sim \pi_t(\cdot|x),\, y' \sim \mu(\cdot|x)} \left( \frac{1}{\eta}\left( \ln \frac{\pi_{\theta_{t+1}}(y|x)}{\pi_{\theta_t}(y|x)} - \ln \frac{\pi_{\theta_{t+1}}(y'|x)}{\pi_{\theta_t}(y'|x)} \right) - \left( l(x, y, \pi_t) - l(x, y', \pi_t) \right) \right)^2 \le \epsilon,$$
for some $\epsilon$.

Under the above coverage condition and generalization bound, we can show that REBEL learns an approximate Minimax Winner:

Theorem 2. Under Assumption 2, after $T$ iterations with a proper learning rate $\eta$, the policy $\hat{\pi} = \text{Unif}(\{\pi_t\}_{t=1}^T)$ satisfies
$$DG(\hat{\pi}) \le O\left( \sqrt{\frac{1}{T}} + \sqrt{C_\mu\, \epsilon} \right).$$
Here the $O$-notation hides problem-dependent constants that are independent of $\epsilon$, $C_\mu$, and $T$.

We defer the proof to Appendix C. Note that the coverage condition here is much stronger than the single-policy coverage condition in the RL setting. We conjecture that this is the cost one has to pay for moving to the more general preference setting, and we leave the investigation of the necessary coverage condition to future work.

5 Experiments

The implementation of REBEL follows Algorithm 1. In each iteration, REBEL collects a dataset $\mathcal{D}_t = \{x, y, y'\}$, where $x \sim \rho$, $y \sim \pi_t(\cdot|x)$, $y' \sim \mu(\cdot|x)$. Subsequently, REBEL optimizes the least squares regression problem in Eq. 9 through gradient descent with AdamW (Loshchilov and Hutter, 2017). We choose $\mu = \pi_t$ such that both $y$ and $y'$ are generated by the current policy. We empirically assess REBEL's performance on both natural language generation and text-guided image generation.
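The following is a minimal sketch of the per-iteration update just described (Eq. 9 with $\mu = \pi_t$), assuming a helper that returns sequence log-probabilities (as in the sketch in Section 2.1) and a scalar reward model. The function and variable names, the default $\eta$, and the commented training loop are illustrative rather than the authors' exact implementation.

```python
import torch

def rebel_loss(logp_theta_y, logp_theta_yp,   # ln pi_theta(y|x), ln pi_theta(y'|x)
               logp_old_y, logp_old_yp,       # ln pi_theta_t(y|x), ln pi_theta_t(y'|x), detached
               reward_y, reward_yp,           # r(x, y), r(x, y')
               eta: float = 1.0) -> torch.Tensor:
    """Least-squares REBEL objective (Eq. 9): regress the predicted relative reward
    (1/eta) * (log-ratio of y minus log-ratio of y') onto r(x, y) - r(x, y')."""
    pred = ((logp_theta_y - logp_old_y) - (logp_theta_yp - logp_old_yp)) / eta
    target = reward_y - reward_yp
    return ((pred - target) ** 2).mean()

# One REBEL iteration (sketch): sample two completions per prompt from the
# current policy (mu = pi_t), score them with the reward model, then take
# AdamW steps on the regression loss.
# for prompts in dataloader:
#     y, y_prime = policy.sample(prompts), policy.sample(prompts)
#     loss = rebel_loss(policy.logp(prompts, y), policy.logp(prompts, y_prime),
#                       old_policy.logp(prompts, y).detach(),
#                       old_policy.logp(prompts, y_prime).detach(),
#                       reward_model(prompts, y), reward_model(prompts, y_prime))
#     loss.backward(); optimizer.step(); optimizer.zero_grad()
```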
5.1 Natural Language Generation Baselines: We compare REBEL with baseline RL algorithms, PPO (Schulman et al., 2017), Direct Preference Optimization (DPO) (Rafailov et al., 2023), and REINFORCE (Williams, 1992) and its multi-sample extension, REINFORCE Leave-One-Out (RLOO) (Kool et al., 2019). The REINFORCE method is implemented with a moving average baseline of the reward. We include two variants of RLOO with two (\ud835\udc58= 2) and four (\ud835\udc58= 4) generations per prompt. Dataset: We use the TL;DR summarization dataset (Stiennon et al., 2020)2 to train the model to generate summaries of Reddit posts based on human preference data. The dataset comprises human reference summaries and preference data. Following prior work (Stiennon et al., 2020; Rafailov et al., 2023; Ahmadian et al., 2024), we train the DPO baseline on the preference dataset, while conducting online RL (PPO, RLOO, REBEL) on the human reference dataset. We set the maximum context length to 512 and the maximum generation length to 53 to ensure all references in the dataset can be generated. Additional dataset details are in Appendix D.1. Models: We include results with three different model sizes: 1.4B, 2.8B, and 6.9B. Each model is trained with a supervised fine-tuned (SFT) model and/or a reward model (RM) of the same size. For SFT models, we train a Pythia 1.4B (Biderman et al., 2023)3 model for 1 epoch over the dataset with human references as labels, and use the existing fine-tuned 2.8B4 and 6.9B5 models. For reward models, we train a Pythia 1.4B parameter model for 1 epoch over the preference dataset and 2Dataset available at https://github.com/openai/summarize-from-feedback 3HuggingFace Model Card: EleutherAI/pythia-1.4b-deduped 4HuggingFace Model Card: vwxyzjn/EleutherAI_pythia-2.8b-deduped__sft__tldr 5HuggingFace Model Card: vwxyzjn/EleutherAI_pythia-6.9b-deduped__sft__tldr 13 Model size Algorithm Winrate (\u2191) RM Score (\u2191) KL(\ud835\udf0b||\ud835\udf0b\ud835\udc5f\ud835\udc52\ud835\udc53) (\u2193) 1.4B SFT 24.5% -0.52 DPO 43.8% 0.11 30.9 PPO 51.6% 1.73 29.1 REBEL 55.3% 1.87 32.4 2.8B SFT 28.4% -0.40 DPO 53.5% 2.41 66.5 PPO 67.2% 2.37 27.4 REBEL 70.3% 2.44 29.2 Table 1: Results on TL;DR Summarization for SFT, PPO, DPO, and REBEL using three metrics. The RM Score is computed using the reward model with the respective size and the winrate is evaluated by GPT4. The models are trained with low-rank adapters. The best-performing method for each size and metric is highlighted in bold and the second best is underlined. We note that REBEL outperforms all baselines here in terms of the winrate 6.9B SFT DPO REINFORCE PPO RLOO (\ud835\udc58= 2) RLOO (\ud835\udc58= 4) REBEL Winrate (\u2191) 44.6% 68.2% 70.7%\u2217 77.6%\u2021 74.2%\u2217 77.9%\u2217 78.0% *directly obtained from Ahmadian et al. (2024) \u2021directly obtained from Huang et al. (2024) Table 2: Results on TL;DR Summarization on 6.9B models. We perform full-parameter training for all models. The best-performing method is highlighted in bold and the second best is underlined. use the existing reward models with 2.8B6 and 6.9B7 parameters. For both REBEL and baseline methods using 1.4B and 2.8B parameters, we trained the policy and/or the critic using low-rank adapters (LoRA) (Hu et al., 2022) on top of our SFT and/or reward model respectively. For the 6.9B models, we perform full-parameter training. More details about the hyperparameters are described in Appendix D.2. 
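As a concrete illustration of the low-rank adapter setup described above, the sketch below attaches LoRA adapters to a Pythia policy with the `peft` library. The rank, scaling, and dropout values here are placeholders; the hyperparameters actually used are given in Appendix D.2.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Base SFT policy (Pythia checkpoint, as in the model cards above).
base = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-1.4b-deduped")

# Illustrative LoRA hyperparameters; see Appendix D.2 for the ones actually used.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["query_key_value"],  # GPT-NeoX / Pythia attention projection
    task_type="CAUSAL_LM",
)
policy = get_peft_model(base, lora_config)
policy.print_trainable_parameters()  # only the adapter weights are updated
```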
Evaluation: We evaluate each method by its balance between reward model score and KL-divergence with the reference policy, testing the effectiveness of the algorithm in optimizing the regularized RL objective. To evaluate the quality of the generations, we compute the winrate (Rafailov et al., 2023) against human references using GPT-4 [8] (OpenAI, 2023). The winrate is computed from a randomly sampled subset (10%) of the test set with a total of 600 samples. The prompt used to query GPT-4 as well as an example response is shown in Appendix D.3.

[8] Specific API checkpoint used throughout this section: gpt-4-0613

Figure 2: Plot of reward vs. KL-divergence for 2.8B REBEL and PPO. We evaluate the models across the entire test set every 100 steps for 2,000 steps. Left: each point represents the average reward score and KL-divergence for a specific time step; the ellipse represents the confidence interval with 2 standard deviations. Right: we divide the KL distribution at the 2,000th step into 10 equally sized bins and average the corresponding RM scores in each bin.

5.1.1 Quality Analysis

Table 1 presents a comparison between REBEL and SFT, PPO, and DPO for 1.4B and 2.8B models trained with LoRA. We calculate the KL-divergence (KL(π||π_ref)) using the SFT policy of the corresponding size as the reference for all models. Notably, REBEL outperforms all the baselines on RM score across all model sizes with a slightly larger KL than PPO. In addition, REBEL achieves the highest winrate under GPT-4 when evaluated against human references, indicating the benefit of regressing the relative rewards. Example generations of 2.8B REBEL are included in Appendix E.

We also perform full-parameter training for 6.9B models; the winrates are shown in Table 2. We observe that REBEL again outperforms all of the baselines, with REBEL, PPO, and RLOO (k = 4) having comparable performance (we show in the next section that REBEL is more computationally and memory efficient than PPO and RLOO with k = 4). An ablation analysis of the parameter η is in Appendix F.

The trade-off between the reward model score and KL-divergence is shown in Figure 2. We evaluate the 2.8B REBEL and PPO every 400 gradient updates during training for 8,000 updates. The sample complexity of each update is held constant across both algorithms for a fair comparison. For the left plot, each point represents the average divergence and score over the entire test set, and the ellipse represents the confidence interval with 2 standard deviations. As observed previously, PPO exhibits lower divergence, whereas REBEL shows higher divergence but is capable of achieving larger RM scores. Notably, towards the end of training (going to the right part of the plot), REBEL and PPO have similar KL and RM scores. For the right plot in Figure 2, we analyze a single checkpoint for each algorithm at the end of training.
For each algorithm, we group every generation from the test set by its KL divergence into 10 equally sized bins and calculate the average of the corresponding RM scores for each bin. We can see that REBEL achieves higher RM scores for generations with small divergence while requiring larger divergence for generations with the highest scores.

5.1.2 Runtime & Memory Analysis

Figure 3: Plot of runtime and memory usage for DPO, REINFORCE, RLOO, PPO, and REBEL. The runtime includes both the time for generation and the policy update for each batch. Runtime and memory usage are measured on A6000 GPUs. Baselines on the left-hand side of the dashed line have lower winrates. Methods on the right-hand side of the dashed line have similar winrates to REBEL, but REBEL is noticeably more computationally tractable and memory efficient than PPO and RLOO (k = 4).

We analyze the runtime and peak memory usage for 2.8B models using PPO, DPO, RLOO, and REBEL. The runtime includes both the generation time and the time required for policy updates. Both runtime and peak memory usage are measured on A6000 GPUs using the same hyperparameters detailed in Appendix D.2. The methods in the plots are arranged in ascending order of winrate. To the right of the dashed line, PPO, RLOO (k = 4), and REBEL have the highest winrates, which are comparable among them. While DPO and REINFORCE require less time and memory, their performance does not match that of REBEL, as discussed in Section 5.1.1. RLOO (k = 2) has similar runtime and memory usage as REBEL since we set μ = π_t, making REBEL also generate twice per prompt; however, RLOO (k = 2) has worse performance than REBEL. Compared to PPO and RLOO (k = 4), REBEL demonstrates shorter runtimes and lower peak memory usage. PPO is slow and requires more memory because it needs to update two networks: the policy network and the value network. RLOO (k = 4) requires generating 4 responses per prompt, which makes it slow and less memory efficient. Compared to the two baselines that achieve similar winrates as REBEL (PPO and RLOO with k = 4), we see that REBEL is more computationally tractable. REBEL is also noticeably simpler to implement than PPO since it does not learn a value network or compute advantage estimates.

Figure 4: Learning curves as a function of reward queries to the LAION aesthetic predictor. We report inter-quartile means (IQM) with 95% confidence intervals (CIs) for both REBEL and PPO. The CIs were calculated with percentile bootstrap with stratified sampling over three random seeds.

5.2 Image Generation

We also consider the setting of image generation where, given a consistency model (Song et al., 2023a) and a target reward function, we seek to train the consistency model to output images that garner a higher reward. Specifically, we compare REBEL and PPO under the RLCM framework (Oertell et al., 2024).
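For contrast with REBEL's regression update sketched earlier, the clipped surrogate loss used by the PPO-style baseline described next can be written as below; this is a generic sketch, and the clipping threshold shown is the conventional default rather than necessarily the value used in our experiments.

```python
import torch

def ppo_clipped_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    """Standard PPO clipped surrogate objective (negated for minimization).

    logp_new / logp_old: log-probabilities of the sampled actions under the
    current policy and the policy that collected the data; advantages are
    precomputed (e.g., normalized rewards when no critic/GAE is used).
    """
    ratio = torch.exp(logp_new - logp_old.detach())
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()
```

At the implementation level, swapping this clipped objective for the squared-loss regression above is the main change REBEL makes in this setting.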
Baselines: We compare REBEL to a clipped policy gradient objective (Black et al., 2023; Fan et al., 2024; Oertell et al., 2024), with the aim of optimizing aesthetic quality to obtain high reward from the LAION aesthetic score predictor (Schuhmann, 2022). This baseline does not use critics or GAE for advantage estimates. However, the clipping objective is clearly motivated by PPO, and thus we simply refer to this baseline as PPO in this section.

Dataset: We use 45 common animals as generation prompts, similar to Black et al. (2023); Oertell et al. (2024) [9].

Models: We use the latent consistency model (Luo et al., 2023) distillation of the Dreamshaper v7 model [10], a finetune of Stable Diffusion (Rombach et al., 2021).

Evaluation: We evaluate PPO and REBEL on their reward under the LAION aesthetic reward model for an equal number of reward queries/samples generated and an equal number of gradient updates. The aesthetic predictor is trained to predict human-labeled scores of images on a scale of 1 to 10. Images that tend to receive the highest reward are artwork. Following the recommendations of Agarwal et al. (2021b), we report the inter-quartile mean with 95% confidence intervals across three random seeds.

[9] Dataset available at https://github.com/Owen-Oertell/rlcm
[10] HuggingFace Model Card: SimianLuo/LCM_Dreamshaper_v7

Figure 5: Generated images using PPO and REBEL at an intermediate checkpoint. We note that at the same number of epochs, REBEL obtains a higher reward under the reward model. This can further be seen in the more diverse backgrounds of images generated by REBEL with less training time.

5.3 Quality Analysis

Figure 4 shows that REBEL optimizes the consistency model faster at the beginning of training but eventually achieves performance similar to that of PPO. For our experiments, we tuned both the batch size and the learning rate for each algorithm, testing batch sizes of [4, 8, 16] per GPU and learning rates of [1e-4, 3e-4, 6e-4, 1e-3]. Note that the main difference in implementation between PPO and REBEL is the replacement of the clipped PPO objective with our regression objective. Qualitatively, we observe that both PPO and REBEL eventually start to generate good-looking images but ignore the text prompt entirely. However, from the perspective of purely optimizing the reward function, this behavior is not surprising, since the objective does not encourage maintaining consistency between the text prompt and the generated image. To maximize LAION-predicted aesthetic quality, both REBEL and PPO transform a model that produces plain images into one that produces artistic drawings. We found across multiple seeds that REBEL produced lusher backgrounds than PPO's generations. Please see Appendix E.2 for more examples of generated images.

6 Related Work

Policy Gradients. Policy gradient (PG) methods (Nemirovskij and Yudin, 1983; Williams, 1992; Sutton et al., 1999; Konda and Tsitsiklis, 1999; Kakade, 2001; Schulman et al., 2017) are a prominent class of RL algorithms due to their direct, gradient-based policy optimization, robustness to model mis-specification (Agarwal et al., 2020), and scalability to modern AI applications, from fine-tuning LLMs (Stiennon et al., 2022) to optimizing text-to-image generators (Oertell et al., 2024). Broadly speaking, we can taxonomize PG methods into two families.
The first family is based on REINFORCE (Williams, 1992) and often includes variance reduction techniques (Kool et al., 2019; Richter et al., 2020; Zhu et al., 2023). While prior work by Ahmadian et al. (2024) has shown that REINFORCE-based approaches can outperform more complex RL algorithms like PPO on LLM fine-tuning tasks like TL;DR, we find that a properly optimized version of PPO still out-performs a REINFORCE baseline. The second family consists of adaptive PG techniques that precondition the policy gradient (usually with the inverse of the Fisher Information Matrix) to ensure it is covariant to re-parameterizations of the policy; it includes NPG (Kakade, 2001; Bagnell and Schneider, 2003) and its practical approximations like TRPO (Schulman et al., 2015a) and PPO (Schulman et al., 2017). Intuitively, the preconditioning ensures that we make small changes in terms of action distributions, rather than in terms of the actual policy parameters, leading to faster and more stable convergence. Unfortunately, computing and then inverting the Fisher Information Matrix is computationally intensive, and therefore we often resort to approximations in practice, as done in TRPO. However, these approximations are still difficult to apply to large-scale generative models, necessitating even coarser approximations like PPO. In contrast, REBEL does not need any such approximations to be implemented at scale, giving us a much closer connection between theory and practice.

Reward Regression. The heart of REBEL is a novel reduction from RL to iterative squared loss regression. While using regression to fit either reward (Peters and Schaal, 2007) or value (Peng et al., 2019) targets, which are then used to extract a policy, has previously been explored, our method instead takes a page from DPO (Rafailov et al., 2023) to implicitly parameterize the reward regressor in terms of the policy. This collapses the two-stage procedure of prior methods into a single regression step.

Preference Fine-Tuning (PFT) of Generative Models. RL has attracted renewed interest due to its central role in "aligning" language models, i.e., adapting their distribution of prompt completions towards the set of responses preferred by human raters. One family of techniques for PFT, often referred to as Reinforcement Learning from Human Feedback (RLHF), involves first fitting a reward model (i.e., a classifier) to the human preference data and then using this model to provide reward values to a downstream RL algorithm (often PPO) (Christiano et al., 2017; Ziegler et al., 2020). LLMs fine-tuned by this procedure include GPT-N (OpenAI, 2023), Claude-N (Anthropic, 2024), and Llama-N (Meta, 2024). Similar approaches have proved beneficial for tasks like summarization (Stiennon et al., 2022), question answering (Nakano et al., 2022), text-to-image generation (Lee et al., 2023), and instruction following (Ouyang et al., 2022). Another family of techniques for PFT essentially treats the problem as supervised learning and uses a variety of ranking loss functions; it includes DPO (Rafailov et al., 2023), IPO (Azar et al., 2023), and KTO (Ethayarajh et al., 2023). These techniques are simpler to implement as they remove components like an explicit reward model, value network, and on-policy training from the standard RLHF setup. However, recent work finds their performance to lag behind that of on-policy methods (Lambert et al., 2024; Tajwar et al., 2024), which agrees with our findings.
This is perhaps caused by their lack of interaction during training, leading to the well-known covariate shift/compounding error issue (Ross et al., 2011; Swamy et al., 2021) and the associated lower levels of performance. The third family of PFT techniques combines elements from the previous two: it involves running an offline algorithm iteratively, collecting on-policy preference feedback from either a supervisor model (Rosset et al., 2024; Xiong et al., 2024; Guo et al., 2024) or a preference model fit on human data (Calandriello et al., 2024). All of these approaches can be considered instantiations of the general SPO reduction proposed by Swamy et al. (2024), which itself can be thought of as a preference-based variant of DAgger (Ross et al., 2011). Recent work by Tajwar et al. (2024) confirms the empirical strength of these techniques. Our approach fits best into this family of techniques: we also iteratively update our model by solving a sequence of supervised learning problems over on-policy datasets. However, REBEL differs from the prior work in several key ways. First, we can run REBEL with datasets consisting of a mixture of on-policy and off-policy data with strong guarantees, enabling hybrid training, as previously explored in the RL (Song et al., 2023b; Ball et al., 2023; Zhou et al., 2023) and inverse RL (Ren et al., 2024) literature. Second, unlike all of the aforementioned works, which regularize to the initial policy π_0 during updates, we perform conservative updates by regularizing π_{t+1} to π_t. Thus, for the prior work, it is difficult to prove convergence or monotonic improvement as the current policy can simply bounce around a ball centered at π_0, a well-known issue in the theory of approximate policy iteration (Kakade and Langford, 2002; Munos, 2003). In contrast, by incorporating the prior policy's probabilities into our regression problem, we are able to prove stronger guarantees for REBEL.

7 Summary and Future Work

In summary, we propose REBEL, an RL algorithm that reduces the problem of RL to solving a sequence of relative reward regression problems on iteratively collected datasets. In contrast to policy gradient approaches that require additional networks and heuristics like clipping to ensure optimization stability, REBEL only requires the ability to drive down training error on a least squares problem. This makes it strikingly simple to implement and scale. In theory, REBEL matches the best guarantees we have for RL algorithms in the agnostic setting, while in practice, REBEL is able to match and sometimes outperform methods that are far more complex to implement or expensive to run across both language modeling and guided image generation tasks.

There are several open questions raised by our work. The first is whether using a loss function other than square loss (e.g., log loss or cross-entropy) could lead to better performance in practice (Farebrother et al., 2024) or tighter bounds (e.g., first-order / gap-dependent) in theory (Foster and Krishnamurthy, 2021; Wang et al., 2023a, 2024). The second is whether, in the general (i.e., non-utility-based) preference setting, the coverage condition assumed in our analysis is necessary; we conjecture it is. Relatedly, it would be interesting to explore whether using preference (rather than reward) models to provide supervision for REBEL replicates the performance improvements reported by Swamy et al. (2024); Munos et al.
(2023). Third, while we focus primarily on the bandit setting in the preceding sections, it would be interesting to consider the more general RL setting and explore how offline datasets can be used to improve the efficiency of policy optimization via techniques like resets (Bagnell et al., 2003; Ross and Bagnell, 2014; Swamy et al., 2023; Chang et al., 2023, 2024).