diff --git "a/related_34K/test_related_short_2404.16767v1.json" "b/related_34K/test_related_short_2404.16767v1.json" new file mode 100644--- /dev/null +++ "b/related_34K/test_related_short_2404.16767v1.json" @@ -0,0 +1,1436 @@ +[ + { + "url": "http://arxiv.org/abs/2404.16767v1", + "title": "REBEL: Reinforcement Learning via Regressing Relative Rewards", + "abstract": "While originally developed for continuous control problems, Proximal Policy\nOptimization (PPO) has emerged as the work-horse of a variety of reinforcement\nlearning (RL) applications including the fine-tuning of generative models.\nUnfortunately, PPO requires multiple heuristics to enable stable convergence\n(e.g. value networks, clipping) and is notorious for its sensitivity to the\nprecise implementation of these components. In response, we take a step back\nand ask what a minimalist RL algorithm for the era of generative models would\nlook like. We propose REBEL, an algorithm that cleanly reduces the problem of\npolicy optimization to regressing the relative rewards via a direct policy\nparameterization between two completions to a prompt, enabling strikingly\nlightweight implementation. In theory, we prove that fundamental RL algorithms\nlike Natural Policy Gradient can be seen as variants of REBEL, which allows us\nto match the strongest known theoretical guarantees in terms of convergence and\nsample complexity in the RL literature. REBEL can also cleanly incorporate\noffline data and handle the intransitive preferences we frequently see in\npractice. Empirically, we find that REBEL provides a unified approach to\nlanguage modeling and image generation with stronger or similar performance as\nPPO and DPO, all while being simpler to implement and more computationally\ntractable than PPO.", + "authors": "Zhaolin Gao, Jonathan D. Chang, Wenhao Zhan, Owen Oertell, Gokul Swamy, Kiant\u00e9 Brantley, Thorsten Joachims, J. Andrew Bagnell, Jason D. Lee, Wen Sun", + "published": "2024-04-25", + "updated": "2024-04-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CL", + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Offline AND Reinforcement AND Learning", + "gt": "Policy Gradients. Policy gradient (PG) methods (Nemirovsk\u0133 and Yudin, 1983; Williams, 1992; Sutton et al., 1999; Konda and Tsitsiklis, 1999; Kakade, 2001; Schulman et al., 2017) are a prominent class of RL algorithms due to their direct, gradient-based policy optimization, robustness to model mis-specification (Agarwal et al., 2020), and scalability to modern AI applications from fine-tuning LLMs (Stiennon et al., 2022) to optimizing text-to-image generators (Oertell et al., 2024). 18 Broadly speaking, we can taxonomize PG methods into two families. The first family is based on REINFORCE (Williams, 1992) and often includes variance reduction techniques (Kool et al., 2019; Richter et al., 2020; Zhu et al., 2023). While prior work by Ahmadian et al. (2024) has shown that REINFORCE-based approaches can outperform more complex RL algorithms like PPO on LLM fine-tuning tasks like TL;DR, we find that a properly optimized version of PPO still out-performs a REINFORCE baseline. The second family is adaptive PG techniques that precondition the policy gradient (usually with the inverse of the Fisher Information Matrix) to ensure it is covariant to re-parameterizations of the policy, which include NPG (Kakade, 2001; Bagnell and Schneider, 2003) and its practical approximations like TRPO (Schulman et al., 2015a) and PPO (Schulman et al., 2017). 
Intuitively, the preconditioning ensures that we make small changes in terms of action distributions, rather than in terms of the actual policy parameters, leading to faster and more stable convergence. Unfortunately, computing and then inverting the Fisher Information Matrix is computationally intensive and therefore we often resort to approximations in practice, as done in TRPO. However, these approximations are still difficult to apply to large-scale generative models, necessitating even coarser approximations like PPO. In contrast, REBEL does not need any such approximations to be implemented at scale, giving us a much closer connection between theory and practice. Reward Regression. The heart of REBEL is a novel reduction from RL to iterative squared loss regression. While using regression to fit either the reward (Peters and Schaal, 2007) or the value (Peng et al., 2019) targets which are then used to extract a policy have previously been explored, our method instead takes a page from DPO (Rafailov et al., 2023) to implicitly parameterize the reward regressor in terms of the policy. This collapses the two stage procedure of prior methods into a single regression step. Preference Fine-Tuning (PFT) of Generative Models. RL has attracted renewed interest due to its central role in \u201caligning\u201d language models \u2013 i.e., adapting their distribution of prompt completions towards the set of responses preferred by human raters. One family of techniques for PFT, often referred to as Reinforcement Learning from Human Feedback (RLHF) involves first fitting a reward model (i.e. a classifier) to the human preference data and then using this model to provide reward values to a downstream RL algorithm (often PPO) (Christiano et al., 2017; Ziegler et al., 2020). LLMs fine-tuned by this procedure include GPT-N (OpenAI, 2023), Claude-N (Anthropic, 2024), and Llama-N (Meta, 2024). Similar approaches have proved beneficial for tasks like summarization (Stiennon et al., 2022), question answering (Nakano et al., 2022), text-to-image generation (Lee et al., 2023), and instruction following (Ouyang et al., 2022). Another family of techniques for PFT essentially treats the problem as supervised learning and uses a variety of ranking loss functions. It includes DPO (Rafailov et al., 2023), IPO (Azar et al., 2023), and KTO (Ethayarajh et al., 2023). These techniques are simpler to implement as they remove components like an explicit reward model, value network, and on-policy training from the standard RLHF setup. However, recent work finds their performance to be lesser than that of on-policy methods (Lambert et al., 2024; Tajwar et al., 2024), which agrees with our findings. This is perhaps caused by their lack of interaction during training, leading to the well-known covariate shift/compounding error issue (Ross et al., 2011; Swamy et al., 2021) and the associated lower levels of performance. The third family of PFT techniques combines elements from the previous two: it involves running an offline algorithm iteratively, collecting on-policy preference feedback from either a supervisor model (Rosset et al., 2024; Xiong et al., 2024; Guo et al., 2024) or from a preference model fit on human data 19 (Calandriello et al., 2024). All of these approaches can be considered instantiations of the general SPO reduction proposed by Swamy et al. (2024), which itself can be thought of as a preference-based variant of DAgger (Ross et al., 2011). Recent work by Tajwar et al. 
(2024) confirms the empirical strength of these techniques. Our approach fits best into this family of techniques \u2013 we also iteratively update our model by solving a sequence of supervised learning problems over on-policy datasets. However, REBEL comes with several key differentiating factors from the prior work. First, we can run REBEL with datasets consisting of a mixture of on-policy and off-policy data with strong guarantees, enabling hybrid training, as previously explored in the RL (Song et al., 2023b; Ball et al., 2023; Zhou et al., 2023) and inverse RL (Ren et al., 2024) literature. Second, unlike all of the aforementioned works that regularize to the initial policy \ud835\udf0b0 during updates, we perform conservative updates by regularizing \ud835\udf0b\ud835\udc61+1 to \ud835\udf0b\ud835\udc61. Thus, for the prior work, it is difficult to prove convergence or monotonic improvement as the current policy can just bounce around a ball centered at \ud835\udf0b0, a well-known issue in the theory of approximate policy iteration (Kakade and Langford, 2002; Munos, 2003). In contrast, by incorporating the prior policy\u2019s probabilities into our regression problem, we are able to prove stronger guarantees for REBEL.", + "pre_questions": [], + "main_content": "Introduction The generality of the reinforcement learning (RL) paradigm is striking: from continuous control problems (Kalashnikov et al., 2018) to, recently, the fine-tuning of generative models (Stiennon et al., 2022; Ouyang et al., 2022), RL has enabled concrete progress across a variety of decision-making tasks. Specifically, when it comes to fine-tuning generative models, Proximal Policy Optimization (PPO, Schulman et al. (2017)) has emerged as the de-facto RL algorithm of choice, from language models (LLMs) (Ziegler et al., 2020; Stiennon et al., 2022; Ouyang et al., 2022; Touvron et al., 2023) to image generative models (Black et al., 2023; Fan et al., 2024; Oertell et al., 2024). If we take a step back however, it is odd that we are using an algorithm designed for optimizing two-layer networks for continuous control tasks from scratch for fine-tuning the billions of parameters \u2217{zg292, jdc396, ojo2, kdb82, ws455}@cornell.edu, tj@cs.cornell.edu \u2020{wenhao.zhan, jasonlee}@princeton.edu \u2021{gswamy,bagnell2}@andrew.cmu.edu 1 arXiv:2404.16767v1 [cs.LG] 25 Apr 2024 Image Generation Language Modeling ( ) RLHF reinforcement learning regression REBEL ( ) ( ) x y x y Figure 1: We present REBEL: a simple and scalable RL algorithm that performs policy optimization via iteratively regressing the difference in rewards directly in terms of the policy. This allows us to eliminate much of the complexity (e.g. value functions, clipping) of algorithms like PPO (Schulman et al., 2017). We apply REBEL to problems in both image generation and language modeling and find that despite its conceptual and implementation-level simplicity, REBEL is able to match or sometimes outperform the performance of PPO while out-performing purely offline techniques like DPO (Rafailov et al., 2023). of modern-day generative models. In the continuous control setting, the randomly initialized neural networks and the possible stochasticity in the dynamics necessitate variance reduction through a learned value function as a baseline (Schulman et al., 2015b), while clipping updates is important to limit distribution shift from iteration to iteration (Kakade and Langford, 2002). 
This means that when applied to generative model fine-tuning, we need to store four models in memory simultaneously (the policy, the reference policy, the critic, and the reward model), each with billions of parameters. Furthermore, we often add a KL regularization to the base model for fine-tuning, making explicit clipping unnecessary nor advisable, as pointed out by Ahmadian et al. (2024). Even outside of the generative modeling context, PPO is notorious for the wide range of performances measured, with differences being attributed to seemingly inconsequential implementation details (Henderson et al., 2019; Engstrom et al., 2020). This begs the question: Are there simpler algorithms that scale to modern RL applications? Our answer is REBEL: an algorithm that reduces the problem of reinforcement learning to solving a sequence of squared loss regression problems on iteratively collected datasets. The regression problems directly use policies to predict the difference in rewards. This allows us to eliminate the complexity of value functions, avoid heuristics like clipping, and scale easily to problems in both language modeling and image generation. Our key insight is that regressing relative rewards via policies directly on a sequence of iteratively collected datasets implicitly enables policy improvement. Rather than being a heuristic, REBEL comes with strong guarantees in theory and can be seen as a strict generalization of classical techniques (e.g., NPG) in reinforcement learning. Furthermore, REBEL cleanly incorporates offline datasets when available, can be extended to robustly handle intransitive preferences (Swamy et al., 2024), and empirically out-performs techniques like PPO 2 and DPO (Rafailov et al., 2023) in language generation and has a faster convergence with a similar asymptotic performance in image generation. More explicitly, our key contributions are four-fold: 1. We propose REBEL, a simple and scalable RL algorithm. REBEL finds a near-optimal policy by solving a sequence of least square regression problems on iteratively collected datasets. Each regression problem involves using a policy-parameterized regressor to predict the difference in rewards across trajectories sampled from the dataset. This dataset can be generated in a purely on-policy fashion or can incorporate offline data, enabling hybrid training. Furthermore, REBEL can be easily extended to handle intransitive preferences. 2. We connect REBEL to classical RL methods. We show that REBEL is a generalization of the foundational Natural Policy Gradient (NPG, Kakade (2001)) algorithm \u2013 applying the Gauss-Newton algorithm to the sequence of regression problems that REBEL solves recovers NPG. However, by instead applying simpler first-order optimization techniques, we are able to avoid computing the Fisher Information Matrix and enjoy a variance reduction effect. Thus, REBEL can be understood as a generalization of NPG while being much more scalable. 3. We analyze the convergence properties of REBEL. We prove via a direct reduction-based analysis that as long as we can solve the regression problem well at each iteration, we will be able to compete with any policy covered by the iteratively collected datasets (matching the strongest known results in the agnostic RL). These problems involve predicting the difference in rewards between trajectories in our dataset. 
We expect this problem to be well-solved in practice because our class of regressors is isomorphic to a class of policies that is highly expressive for the applications we consider (i.e. flexible Transformer models). 4. We evaluate REBEL both on language modeling and image generation tasks. We find that the on-policy version of REBEL outperforms PPO and DPO on language modeling and has similar performance for image generation tasks. On the TL;DR summarization task, we show REBEL scales well by finetuning a 6.9B parameter model. For text-guided image generation, REBEL optimizes a consistency model that converges to a similar performance as PPO. In short, REBEL is a simple and scalable algorithm that enjoys strong theoretical guarantees and empirical performance. We believe it is a suitable answer to the question raised above. 2 REBEL: REgression to RElative REward Based RL We first outline the notation used throughout the paper. 2.1 Notation We consider the Contextual Bandit formulation (Langford and Zhang, 2007) of RL which has been used to formalize the generation process of models like LLMs (Rafailov et al., 2023; Ramamurthy et al., 2022; Chang et al., 2023) and Diffusion Models (Black et al., 2023; Fan et al., 2024; Oertell et al., 2024) due to the determinism of the transitions. More explicitly, in the deterministic transition setting, explicit states are not required as they can be equivalently represented by a sequence of 3 actions. Furthermore, the entire sequence of actions can be considered as a single \u201carm\u201d in a bandit problem with an exponentially large action space. We denote by (\ud835\udc65, \ud835\udc66) a prompt/response pair with \ud835\udc65\u2208X as a prompt and \ud835\udc66\u2208Y as a response (e.g., a sequence of tokens, or in general a sequence of actions). We assume access to a reward function \ud835\udc5f(\ud835\udc65, \ud835\udc66) from which we can query for reward signals (the exact form of \ud835\udc5fdoes not need to be known). Querying \ud835\udc5fat (\ud835\udc65, \ud835\udc66) will return a scalar \ud835\udc5f(\ud835\udc65, \ud835\udc66) measuring the quality of the response. Such a reward function could be a pre-defined metric (e.g., Rouge score against human responses) or it could be learned from an offline human demonstration or preference data (e.g., the RLHF paradigm (Christiano et al., 2017; Ziegler et al., 2020)), as explored in our experiments. Denote by \ud835\udf0b\u2208X \u21a6\u2192\u0394(\ud835\udc4c), a policy (e.g. LLM) that maps from a prompt \ud835\udc65to a distribution over the response space Y. We use \ud835\udf0cto denote the distribution over prompts (i.e. initial states / contexts) \ud835\udc65. Throughout the paper, we use \ud835\udf0b\ud835\udf03(\ud835\udc66|\ud835\udc65) to denote a parameterized policy with parameter \ud835\udf03(e.g., a neural network policy). At times we interchangeably use \ud835\udf0b\ud835\udc61and \ud835\udf0b\ud835\udf03\ud835\udc61when it is clear from the context. We emphasize that while we focus on the bandit formulation for notation simplicity, the algorithms proposed here can be applied to any deterministic MDP where \ud835\udc65is the initial state and the trajectory \ud835\udc66consists of the sequence of actions. 
At each iteration of all algorithms, our goal will be to solve the following KL-constrained RL problem: \ud835\udf0b\ud835\udc61+1 = argmax \ud835\udf0b E\ud835\udc65,\ud835\udc66\u223c\ud835\udf0b(\u00b7|\ud835\udc65)\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u22121 \ud835\udf02E\ud835\udc65KL (\ud835\udf0b(\u00b7|\ud835\udc65)||\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65)) . (1) Intuitively, this can be thought of asking for the optimizer to fine-tune the policy \ud835\udf0b\ud835\udc61+1 according to \ud835\udc5f while staying close to some baseline policy \ud835\udf0b\ud835\udc61. 2.2 Deriving REBEL: REgression to RElative REward Based RL From Ziebart et al. (2008), we know that there exists a closed-form solution to the above minimum relative entropy problem (Eq. 1, Gr\u00fcnwald and Dawid (2004)): \u2200\ud835\udc65, \ud835\udc66: \ud835\udf0b\ud835\udc61+1(\ud835\udc66|\ud835\udc65) = \ud835\udf0b\ud835\udc61(\ud835\udc66|\ud835\udc65) exp(\ud835\udf02\ud835\udc5f(\ud835\udc65, \ud835\udc66)) \ud835\udc4d(\ud835\udc65) ; \ud835\udc4d(\ud835\udc65) = \u2211\ufe01 \ud835\udc66 \ud835\udf0b\ud835\udc61(\ud835\udc66|\ud835\udc65) exp(\ud835\udf02\ud835\udc5f(\ud835\udc65, \ud835\udc66)). (2) As first pointed out by Rafailov et al. (2023), observe that we can invert Eq. 2 and write the reward as a function of the policy, i.e. the \u201cDPO Trick\u201d: \u2200\ud835\udc65, \ud835\udc66: \ud835\udc5f(\ud835\udc65, \ud835\udc66) = 1 \ud835\udf02 \u0012 ln(\ud835\udc4d(\ud835\udc65)) + ln \u0012 \ud835\udf0b\ud835\udc61+1(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udc61(\ud835\udc66|\ud835\udc65) \u0013\u0013 . (3) As soon as X and Y become large, we can no longer guarantee the above expression holds exactly at all (\ud835\udc65, \ud835\udc66) and therefore need to turn our attention to choosing a policy such that Eq. 3 is approximately true. We propose using a simple square loss objective between the two sides of Eq. 3 to measure the goodness of a policy, i.e. reducing RL to a regression problem: \u0012 \ud835\udc5f(\ud835\udc65, \ud835\udc66) \u22121 \ud835\udf02 \u0012 ln(\ud835\udc4d(\ud835\udc65)) + ln \u0012 \ud835\udf0b\ud835\udc61+1(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udc61(\ud835\udc66|\ud835\udc65) \u0013\u0013\u00132 . (4) 4 Algorithm 1 REgression to RElative REward Based RL (REBEL) 1: Input: Reward \ud835\udc5f, policy class \u03a0 = {\ud835\udf0b\ud835\udf03}, base distribution \ud835\udf07, learning rate \ud835\udf02 2: Initialize policy \ud835\udf0b\ud835\udf030. 3: for \ud835\udc61= 0 to \ud835\udc47\u22121 do 4: // Base distribution \ud835\udf07can either be an offline dataset or \ud835\udf0b\ud835\udc61. 
5: Collect dataset D\ud835\udc61= {\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032} where \ud835\udc65\u223c\ud835\udf0c, \ud835\udc66\u223c\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65), \ud835\udc66\u2032 \u223c\ud835\udf07(\u00b7|\ud835\udc65) 6: Solve square loss regression problem: \ud835\udf03\ud835\udc61+1 = argmin \ud835\udf03 \u2211\ufe01 (\ud835\udc65,\ud835\udc66,\ud835\udc66\u2032)\u2208D\ud835\udc61 \u00121 \ud835\udf02 \u0012 ln \ud835\udf0b\ud835\udf03(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) \u2212ln \ud835\udf0b\ud835\udf03(\ud835\udc66\u2032|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66\u2032|\ud835\udc65) \u0013 \u2212(\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2212\ud835\udc5f(\ud835\udc65, \ud835\udc66\u2032)) \u00132 (9) 7: end for Unfortunately, this loss function includes the partition function \ud835\udc4d(\ud835\udc65), which can be challenging to approximate over large input / output domains. However, observe that \ud835\udc4d(\ud835\udc65) only depends on \ud835\udc65and not \ud835\udc66. Thus, if we have access to paired samples, i.e. (\ud835\udc65, \ud835\udc66) and (\ud835\udc65, \ud835\udc66\u2032), we can instead regress the difference in rewards to eliminate this term from our objective: \u0012 (\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2212\ud835\udc5f(\ud835\udc65, \ud835\udc66\u2032)) \u22121 \ud835\udf02 \u0012 ln \u0012 \ud835\udf0b\ud835\udc61+1(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udc61(\ud835\udc66|\ud835\udc65) \u0013 \u2212ln \u0012 \ud835\udf0b\ud835\udc61+1(\ud835\udc66\u2032|\ud835\udc65) \ud835\udf0b\ud835\udc61(\ud835\udc66\u2032|\ud835\udc65) \u0013\u0013\u00132 . (5) Of course, we need to evaluate this loss function on some distribution of samples. In particular, we propose using an on-policy dataset D\ud835\udc61= {\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032} with \ud835\udc65\u223c\ud835\udf0c, \ud835\udc66\u223c\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65), \ud835\udc66\u2032 \u223c\ud835\udf07(\u00b7|\ud835\udc65), where \ud835\udf07is some base distribution. The base distribution \ud835\udf07can either be a fixed offline dataset (e.g. the instruction fine-tuning dataset) or \ud835\udf0b\ud835\udc61itself. Thus, the choice of base distribution \ud835\udf07determines whether REBEL is hybrid or fully online. Putting it all together, we arrive at our core REBEL objective: \u2211\ufe01 (\ud835\udc65,\ud835\udc66,\ud835\udc66\u2032)\u2208D\ud835\udc61 \u0012 (\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2212\ud835\udc5f(\ud835\udc65, \ud835\udc66\u2032)) \u22121 \ud835\udf02 \u0012 ln \u0012 \ud835\udf0b\ud835\udc61+1(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udc61(\ud835\udc66|\ud835\udc65) \u0013 \u2212ln \u0012 \ud835\udf0b\ud835\udc61+1(\ud835\udc66\u2032|\ud835\udc65) \ud835\udf0b\ud835\udc61(\ud835\udc66\u2032|\ud835\udc65) \u0013\u0013\u00132 . (6) To recap, given a pair of completions \ud835\udc66, \ud835\udc66\u2032 to a prompt \ud835\udc65, REBEL attempt to fit the relative reward \ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2212\ud835\udc5f(\ud835\udc65, \ud835\udc66\u2032) (7) by optimizing over a class of predictors of the form 1 \ud835\udf02 \u0012 ln \ud835\udf0b\ud835\udf03(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) \u2212ln \ud835\udf0b\ud835\udf03(\ud835\udc66\u2032|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66\u2032|\ud835\udc65) \u0013 . 
(8) Critically, observe that if we were able to perfectly solve this regression problem, we would indeed recover the optimal solution to the KL-constrained RL problem we outlined in Eq. 1. While the above update might seem somewhat arbitrary at first glance, it has deep connections to prior work in the literature that illuminate its strengths over past techniques. We now discuss some of them. 3 Understanding REBEL as an Adaptive Policy Gradient We begin by recapping the foundational algorithms for policy optimization before situating REBEL within this space of techniques. 5 3.1 Adaptive Gradient Algorithms for Policy Optimization In this section, we give a brief overview of three adaptive gradient algorithms: Mirror Descent (MD), Natural Policy Gradient (NPG), and Proximal Policy Optimization (PPO). We discuss why they are preferable to their non-adaptive counterparts (Gradient Descent (GD) and Policy Gradient (PG)) and the connections between them. Mirror Descent. If X and Y are small discrete spaces (i.e. we are in the tabular setting), we can used the closed-form expression for the minimum relative entropy problem (Eq. 2). This is equivalent to the classic Mirror Descent (MD) algorithm with KL as the Bregman divergence. This update procedure is also sometimes known as soft policy iteration (Ziebart et al., 2008). Note that it does not even involve a parameterized policy and is therefore manifestly covariant. MD ensures a 1/\ud835\udc47convergence rate, i.e., after \ud835\udc47iterations, it must find a policy \u02c6 \ud835\udf0b, such that E\ud835\udc65,\ud835\udc66\u223c\ud835\udf0b\u2605(.|\ud835\udc65)\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2212E\ud835\udc65,\ud835\udc66\u223c\u02c6 \ud835\udf0b(.|\ud835\udc65)\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2264\ud835\udc42(1/\ud835\udc47). In particular, the convergence is almost dimension-free: the convergence rate scales logarithmically with respect to the size of the Y space. Note that gradient ascent will not enjoy such a dimension-free rate when optimizing over the simplex. When sup\ud835\udc65,\ud835\udc66|\ud835\udc5f(\ud835\udc65, \ud835\udc66)| is bounded, we can show that the KL divergence between two policies, i.e., KL(\ud835\udf0b\ud835\udc61+1(\u00b7|\ud835\udc65)||\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65)), is also bounded, ensuring \ud835\udf0b\ud835\udc61+1 stay close to \ud835\udf0b\ud835\udc61. One can also show monotonic policy improvement, i.e., E\ud835\udc65,\ud835\udc66\u223c\ud835\udf0b\ud835\udc61+1\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2265E\ud835\udc65,\ud835\udc66\u223c\ud835\udf0b\ud835\udc61\ud835\udc5f(\ud835\udc65, \ud835\udc66). Foreshadowing a key point we will soon expound upon, both NPG and PPO can be considered approximations of this idealized tabular policy update procedure. Natural Policy Gradient. When Y and X are large, we cannot simply enumerate all \ud835\udc65and \ud835\udc66. Thus, we need to use a function to approximate \ud835\udf0b, which makes it impossible to exactly implement Eq. 2. Let us use \ud835\udf0b\ud835\udf03to denote a parameterized policy with parameter \ud835\udf03(e.g. the weights of a transformer). The Natural Policy Gradient (NPG, Kakade (2001)) approximates the KL in Equation 1 via its second-order Taylor expansion, whose Hessian is known as the Fisher Information Matrix (FIM, Bagnell and Schneider (2003)), i.e. 
E\ud835\udc65KL(\ud835\udf0b\ud835\udf03(\u00b7|\ud835\udc65)||\ud835\udf0b\ud835\udf03\ud835\udc61(\u00b7|\ud835\udc65)) \u2248(\ud835\udf03\u2212\ud835\udf03\ud835\udc61)\u22a4E\ud835\udc65,\ud835\udc66\u223c\ud835\udf0b\ud835\udf03\ud835\udc61(\u00b7|\ud835\udc65) \u0002 \u2207ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65)\u2207ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65)\u22a4\u0003 | {z } Fisher Information Matrix \ud835\udc39\ud835\udc61 (\ud835\udf03\u2212\ud835\udf03\ud835\udc61). The NPG update can be derived by plugging in this approximation to Eq. 1, further approximating the E\ud835\udc65,\ud835\udc66\u223c\ud835\udf0b\ud835\udf03(\u00b7|\ud835\udc65)\ud835\udc5f(\ud835\udc65, \ud835\udc66) by its first order Taylor expansion around \ud835\udf03\ud835\udc61, and finding the root of the resulting quadratic form: \ud835\udf03\ud835\udc61+1 = \ud835\udf03\ud835\udc61+ \ud835\udf02\ud835\udc39\u2020 \ud835\udc61 \u0010 E\ud835\udc65,\ud835\udc66\u223c\ud835\udf0b\ud835\udf03\ud835\udc61(\u00b7|\ud835\udc65)\u2207ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65)\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u0011 (10) where \ud835\udc39\u2020 \ud835\udc61is pseudo-inverse of \ud835\udc39\ud835\udc61, and E\ud835\udc65,\ud835\udc66\u223c\ud835\udf0b\ud835\udf03\ud835\udc61(\u00b7|\ud835\udc65)\u2207ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65)\ud835\udc5f(\ud835\udc65, \ud835\udc66) is the standard policy gradient (i.e. REINFORCE (Williams, 1992)). As mentioned above, this update procedure can be understood as performing gradient updates in the local geometry induced by the Fisher information matrix, which ensures that we are taking small steps in policy space rather than in parameter space. Conversely, unlike regular gradient descent methods (i.e., PG), NPG allows us to make large changes in the parameter space \u0398, as long as the resulting two policies are close to each other in terms of KL divergence. This property allows NPG to make more aggressive and adaptive updates in the parameter space of the policy as well as be invariant to linear transformations of the parameters. Theoretically, Agarwal et al. (2021a) show that NPG with softmax parameterization converges at the 1/\ud835\udc47rate in a dimension-free manner, provably faster than the standard PG under the same setup. Empirically, the 6 superior convergence speed of NPG compared to that of PG was observed in its original exploration (Kakade, 2001; Bagnell and Schneider, 2003), as well as in follow-up work like TRPO (Schulman et al., 2015a). Critically, while elegant in theory, NPG, unfortunately, does not scale to modern generative models due to the need for computing the Fisher matrix inverse either explicitly or implicitly via the Hessian-vector matrix product trick. Proximal Policy Optimization. To address the scalability of NPG, Schulman et al. (2017) proposes Proximal Policy Optimization (PPO). Rather than explicitly computing the KL divergence between policies or approximating it via a Taylor expansion, PPO takes a more direct route and uses clipped updates with the hope of controlling the action probability deviation from \ud835\udf0b\ud835\udf03\ud835\udc61+1 to \ud835\udf0b\ud835\udf03\ud835\udc61, i.e. 
\ud835\udf03\ud835\udc61+1 := argmax \ud835\udf03 E\ud835\udc65,\ud835\udc66\u223c\ud835\udf0b\ud835\udf03\ud835\udc61(\u00b7|\ud835\udc65)clip \u0012 \ud835\udf0b\ud835\udf03(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) ; 1 \u2212\ud835\udf16, 1 + \ud835\udf16 \u0013 \ud835\udc5f(\ud835\udc65, \ud835\udc66). (11) Prima facie, this update follows the underlying intuition of NPG: allow big and adaptive changes in the policy\u2019s parameters \ud835\udf03, as long as the corresponding action probabilities do not change too much. This perhaps explains the superiority of PPO over vanilla REINFORCE in domains like continuous control. Unfortunately, under closer scrutiny, it becomes apparent that PPO-style clipped updates neither guarantee closeness to the prior policy nor have NPG-style adaptivity. While the clipping operator can set the gradient to be zero at samples (\ud835\udc65, \ud835\udc66) where \ud835\udf0b\ud835\udf03\ud835\udc61+1(\ud835\udc66|\ud835\udc65) is much larger or smaller than \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65), it cannot actually guarantee \ud835\udf0b\ud835\udf03\ud835\udc61+1 staying close to \ud835\udf0b\ud835\udf03\ud835\udc61, a phenomenon empirically observed in prior work (Hsu et al., 2020). Furthermore, hard clipping is not adaptive \u2013 it treats all (\ud835\udc65, \ud835\udc66) equally and clips whenever the ratio is outside of a fixed range. In contrast, constraining the KL divergence to the prior policy allows one to vary the ratio \ud835\udf0b(\ud835\udc66|\ud835\udc65)/\ud835\udf0b\ud835\udc61(\ud835\udc66|\ud835\udc65) at different (\ud835\udc65, \ud835\udc66), as long as the total KL divergence across the state space is small. Lastly, clipping reduces the effective size of a batch of training examples and thus wastes training samples. A REBEL With a Cause. Our algorithm REBEL addresses the limitations of NPG (scalability) and PPO (lack of conservativity or adaptivity) from above. First, unlike NPG, it does not rely on the Fisher information matrix at all and can easily scale to modern LLM applications, yet (as we will discuss below) can be interpreted as a generalization of NPG. Second, in contrast to PPO, it doesn\u2019t have unjustified heuristics and thus enjoys strong convergence and regret guarantees just like NPG. 3.2 Connections between REBEL and MD / NPG We now sketch a series of connections between REBEL and the methods outlined above. Exact REBEL is Mirror Descent. First, to build intuition, we interpret our algorithm\u2019s behavior under the assumption that the least square regression optimization returns the exact Bayes Optimal solution (i.e., our learned predictor achieves zero prediction error everywhere): \u2200\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032 : 1 \ud835\udf02 \u0012 ln \ud835\udf0b\ud835\udf03\ud835\udc61+1(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) \u2212ln \ud835\udf0b\ud835\udf03\ud835\udc61+1(\ud835\udc66\u2032|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66\u2032|\ud835\udc65) \u0013 = \ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2212\ud835\udc5f(\ud835\udc65, \ud835\udc66\u2032) (12) Conditioned on Eq. 
12 being true, a few lines of algebraic manipulation reveals that there must exist a function \ud835\udc50(\ud835\udc65) which is independent of \ud835\udc66, such that: \u2200\ud835\udc65, \ud835\udc66: 1 \ud835\udf02ln \ud835\udf0b\ud835\udf03\ud835\udc61+1(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) = \ud835\udc5f(\ud835\udc65, \ud835\udc66) + \ud835\udc50(\ud835\udc65). 7 Taking an exp on both sides and re-arrange terms, we get: \u2200\ud835\udc65, \ud835\udc66: \ud835\udf0b\ud835\udf03\ud835\udc61+1(\ud835\udc66|\ud835\udc65) \u221d\ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) exp (\ud835\udf02\ud835\udc5f(\ud835\udc65, \ud835\udc66)) . In other words, under the strong assumption that least square regression returns a point-wise accurate estimator (i.e., Eq. 12), we see the REBEL recovers the exact MD update, which gives it (a) a fast 1/\ud835\udc47convergence rate (Shani et al., 2020; Agarwal et al., 2021a), (b) conservativity, i.e., max\ud835\udc65KL(\ud835\udf0b\ud835\udc61+1(\u00b7|\ud835\udc65)||\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65)) is bounded as long as max\ud835\udc65,\ud835\udc66|\ud835\udc5f(\ud835\udc65, \ud835\udc66)| is bounded, and (c) monotonic policy improvement via the NPG standard analysis (Agarwal et al., 2021a). NPG is Approximate REBEL with Gauss-Newton Updates. We provide another interpretation of REBEL by showing that NPG (Eq. 10) can be understood as a special case of REBEL where the least square problem in Eq. 9 is approximately solved via a single iteration of the Gauss-Newton algorithm. As for any application of Gauss-Newton, we start by approximating our predictor 1 \ud835\udf02ln \ud835\udf0b\ud835\udf03(\ud835\udc66|\ud835\udc65)/\ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) by its first order Taylor expansion at \ud835\udf03\ud835\udc61: 1 \ud835\udf02 \u0000ln \ud835\udf0b\ud835\udf03(\ud835\udc66|\ud835\udc65) \u2212ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65)\u0001 \u22481 \ud835\udf02\u2207\ud835\udf03ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65)\u22a4(\ud835\udf03\u2212\ud835\udf03\ud835\udc61), where \u2248indicates that we ignore higher order terms in the expansion. If we \ud835\udeff:= \ud835\udf03\u2212\ud835\udf03\ud835\udc61and replace 1 \ud835\udf02 \u0000ln \ud835\udf0b\ud835\udf03(\ud835\udc66|\ud835\udc65) \u2212ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65)\u0001 by its above first order approximation in Eq. 9, we arrive at the following quadratic form: min \ud835\udeffE\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\ud835\udf0b\ud835\udf03\ud835\udc61(\u00b7|\ud835\udc65),\ud835\udc66\u2032\u223c\ud835\udf07(\u00b7|\ud835\udc65) \u00121 \ud835\udf02 \u0000\u2207\ud835\udf03ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) \u2212\u2207\ud835\udf03ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66\u2032|\ud835\udc65)\u0001\u22a4\ud835\udeff\u2212(\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2212\ud835\udc5f(\ud835\udc65, \ud835\udc66\u2032)) \u00132 . 
(13) Further simplifying notation, we denote the uniform mixture of \ud835\udf0b\ud835\udc61 and \ud835\udf07 as \ud835\udf0b\ud835\udc5a\ud835\udc56\ud835\udc65(\u00b7|\ud835\udc65) := (\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65) + \ud835\udf07(\u00b7|\ud835\udc65))/2 and the Fisher information matrix \ud835\udc39\ud835\udc61averaged under said mixture as: \ud835\udc39\ud835\udc61= E\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\ud835\udf0b\ud835\udc5a\ud835\udc56\ud835\udc65(\u00b7|\ud835\udc65) h \u2207\ud835\udf03ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) \u0000\u2207\ud835\udf03ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65)\u0001\u22a4i . Solving the above least square regression to obtain a minimum norm solution, we have the following claim. Claim 1. The minimum norm minimizer \ud835\udeff\u2605of the least squares problem in Eq. 13 recovers an advantage-based variant of the NPG update: \ud835\udeff\u2605:= \ud835\udf02\ud835\udc39\u2020 \ud835\udc61 \u0000E\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\ud835\udf0b\ud835\udc5a\ud835\udc56\ud835\udc65(\u00b7|\ud835\udc65)\u2207\ud835\udf03ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65)[\ud835\udc34\ud835\udf0b\ud835\udc61(\ud835\udc65, \ud835\udc66)]\u0001 , where \ud835\udc39\u2020 \ud835\udc61is pseudo-inverse of \ud835\udc39\ud835\udc61, and the advantage is defined as \ud835\udc34\ud835\udf0b\ud835\udc61(\ud835\udc65, \ud835\udc66) := \ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2212 E\ud835\udc66\u2032\u223c\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65)\ud835\udc5f(\ud835\udc65, \ud835\udc66). The proof of this claim is deferred to Appendix A. Observe that in REBEL, we never explicitly compute the advantage \ud835\udc34\ud835\udf0b\ud835\udc61. However, applying Gauss-Newton to our objective leads to an advantage-based NPG (rather than the traditional \ud835\udc44-function based NPG, e.g., Q-NPG from Agarwal et al. (2021a, 2019)) which indicates that predicting reward difference has an implicit variance reduction effect, as by definition, an advantage function includes a value function baseline. 1 1Note that the original form of NPG is on-policy (Kakade, 2001; Sutton et al., 1999), i.e., the expectations under \ud835\udf0b\ud835\udc61. Our formulation is more general: when set \ud835\udf07= \ud835\udf0b\ud835\udc61, a Gauss-Newton step will recover the original on-policy form of NPG from Kakade (2001); Sutton et al. (1999). More recent works have extended NPG beyond on-policy (e.g., Agarwal et al. (2021a, 2020)). 8 3.3 Extending REBEL to General Preferences In the above discussion, we assume we are given access to a ground-truth reward function. However, in the generative model fine-tuning applications of RL, we often need to learn from human preferences, rather than rewards. This shift introduces a complication: not all preferences can be rationalized by an underlying utility function. In particular, intransitive preferences which are well-known to result from aggregation of different sub-populations or users evaluating different pairs of items on the basis of different features (May, 1954; Tversky, 1969; Gardner, 1970) cannot be accurately captured by a single reward model. 
To see this, note that if we have \ud835\udc4e\u227b\ud835\udc4f, \ud835\udc4f\u227b\ud835\udc50, and \ud835\udc50\u227b\ud835\udc4e, it is impossible to have a reward model that simultaneously sets \u02c6 \ud835\udc5f(\ud835\udc4e) > \u02c6 \ud835\udc5f(\ud835\udc4f), \u02c6 \ud835\udc5f(\ud835\udc4f) > \u02c6 \ud835\udc5f(\ud835\udc50), and \u02c6 \ud835\udc5f(\ud835\udc50) > \u02c6 \ud835\udc5f(\ud835\udc4e). As we increase the space of possible choices to that of all possible prompt completions, the probability of such intransitivities sharply increases (Dud\u00edk et al., 2015), as reflected in the high levels of annotator disagreement in LLM fine-tuning datasets (Touvron et al., 2023). Thus, rather than assuming access to a reward model, in such settings, we assume access to a preference model (Munos et al., 2023; Swamy et al., 2024; Rosset et al., 2024; Ye et al., 2024). 3.3.1 A Game-Theoretic Perspective on Learning from Preferences More specifically, for any tuple (\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032), we assume we have access to P(\ud835\udc66\u227b\ud835\udc66\u2032|\ud835\udc65): the probability that \ud835\udc66is preferred to \ud835\udc66\u2032. We then define our preference model \ud835\udc59as \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032) \u225c2 \u00b7 P(\ud835\udc66\u227b\ud835\udc66\u2032|\ud835\udc65) \u22121. (14) Observe that \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032) \u2208[\u22121, 1] is skew-symmetric, i.e., \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66) = 0, \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032) + \ud835\udc59(\ud835\udc65, \ud835\udc66\u2032, \ud835\udc66) = 0 for all \ud835\udc65\u2208X, \ud835\udc66, \ud835\udc66\u2032 \u2208Y. If the learner can only receive a binary feedback \ud835\udc5c\u2208{0, 1} indicating the preference between \ud835\udc66and \ud835\udc66\u2032, we assume \ud835\udc5cis sampled from a Bernoulli distribution with mean P(\ud835\udc66\u227b\ud835\udc66\u2032|\ud835\udc65), where \ud835\udc5c= 1 means that \ud835\udc66is preferred over \ud835\udc66\u2032 and 0 otherwise. Given access to such a preference model, a solution concept to the preference aggregation problem with deep roots in the social choice theory literature (Kreweras, 1965; Fishburn, 1984; Kramer, 1973; Simpson, 1969) and the dueling bandit literature (Yue et al., 2012; Dud\u00edk et al., 2015) is that of a minimax winner (MW) \ud835\udf0bMW: the Nash Equilibrium strategy of the symmetric two-player zero-sum game with \ud835\udc59as a payoff function. In particular, due to the skew-symmetric property of \ud835\udc59, Swamy et al. (2024) proved that there exists a policy \ud835\udf0bMW such that max \ud835\udf0bE\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\ud835\udf0b(\u00b7|\ud835\udc65),\ud835\udc66\u2032\u223c\ud835\udf0bMW (\u00b7|\ud835\udc65) [\ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032)] = min \ud835\udf0bE\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\ud835\udf0bMW (\u00b7|\ud835\udc65),\ud835\udc66\u2032\u223c\ud835\udf0b(\u00b7|\ud835\udc65) [\ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032)] . This implies that (\ud835\udf0bMW, \ud835\udf0bMW) is a Nash Equilibrium (Wang et al., 2023b; Munos et al., 2023; Swamy et al., 2024; Ye et al., 2024). 
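To make the intransitivity point concrete, below is a small numerical sketch (our own toy illustration, not from the paper) of a rock-paper-scissors-style preference cycle over three responses: under the skew-symmetric payoff l, every deterministic policy is beaten by some response, while the uniform mixture, the minimax winner in this toy game, cannot be exploited.

```python
import numpy as np

# Skew-symmetric payoff l(y, y') = 2 P(y > y' | x) - 1 for a single prompt with
# three candidate responses {a, b, c} and cyclic preferences a > b, b > c, c > a.
L = np.array([
    [ 0.0,  1.0, -1.0],   # row a: l(a,a), l(a,b), l(a,c)
    [-1.0,  0.0,  1.0],   # row b
    [ 1.0, -1.0,  0.0],   # row c
])

def exploitability(p):
    """Best-response value against a policy p over the three responses:
    max_y E_{y' ~ p} l(y, y'). Zero means no deviation beats p."""
    return float((L @ p).max())

print(exploitability(np.array([1.0, 0.0, 0.0])))  # 1.0: playing only 'a' is beaten by 'c'
print(exploitability(np.ones(3) / 3))             # 0.0: the uniform mixture is unexploitable
```

No single scalar reward over {a, b, c} can reproduce this cycle, which is why the payoff matrix, rather than a reward model, is the right object here.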
As is standard in game solving, our objective is to obtain an \ud835\udf16-approximate MW b \ud835\udf0bmeasured by the duality gap (DG): DG(b \ud835\udf0b) := max \ud835\udf0bE\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\ud835\udf0b(\u00b7|\ud835\udc65),\ud835\udc66\u2032\u223cb \ud835\udf0b(\u00b7|\ud835\udc65) [\ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032)] \u2212min \ud835\udf0bE\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223cb \ud835\udf0b(\u00b7|\ud835\udc65),\ud835\udc66\u2032\u223c\ud835\udf0b(\u00b7|\ud835\udc65) [\ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032)] \u2264\ud835\udf16. In the following discussion, we will use \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udf0b) to denote E\ud835\udc66\u2032\u223c\ud835\udf0b(\u00b7|\ud835\udc65) [\ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032)] and \ud835\udc59(\ud835\udf0b, \ud835\udf0b\u2032) to denote E\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\ud835\udf0b(\u00b7|\ud835\udc65),\ud835\udc66\u2032\u223c\ud835\udf0b\u2032(\u00b7|\ud835\udc65) [\ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032)] for notational convenience. 9 3.3.2 Self-Play Preference Optimization (SPO) with REBEL as Base Learner We can straightforwardly extend REBEL to the general preference setting via an instantiation of the Self-Play Preference Optimization (SPO) reduction of Swamy et al. (2024). In short, Swamy et al. (2024) prove that rather than performing adversarial training, we are able to perform a simple and stable self-play procedure while retaining strong theoretical guarantees. Practically, this corresponds to sampling at leas two completions from the current policy, querying a learned preference / supervisor model on each pair, and using the win rate for each completion as its reward. We will now describe how we can adapt REBEL to this mode of feedback. Assuming that we can query the preference oracle \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032) at will, we can modify the least square objective Eq. (9) to \ud835\udf03\ud835\udc61+1 := argmin \ud835\udf03 \u2211\ufe01 \ud835\udc65,\ud835\udc66,\ud835\udc66\u2032,\ud835\udc66\u2032\u2032\u2208D\ud835\udc61 \u00121 \ud835\udf02 \u0012 ln \ud835\udf0b\ud835\udf03(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) \u2212ln \ud835\udf0b\ud835\udf03(\ud835\udc66\u2032|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66\u2032|\ud835\udc65) \u0013 \u2212(\ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032\u2032) \u2212\ud835\udc59(\ud835\udc65, \ud835\udc66\u2032, \ud835\udc66\u2032\u2032)) \u00132 where \ud835\udc65\u223c\ud835\udf0c, \ud835\udc66\u223c\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65), \ud835\udc66\u2032\u2032 \u223c\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65), \ud835\udc66\u2032 \u223c\ud835\udf07(\u00b7|\ud835\udc65). When the exact value of \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032) is unavailable but only a binary preference feedback \ud835\udc5c\ud835\udc66,\ud835\udc66\u2032 \u2208{0, 1} sampling from Bernoulli with mean \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032) is available, we can just replace \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032\u2032) \u2212\ud835\udc59(\ud835\udc65, \ud835\udc66\u2032, \ud835\udc66\u2032\u2032) by \ud835\udc5c\ud835\udc66,\ud835\udc66\u2032 \u2212\ud835\udc5c\ud835\udc66\u2032,\ud835\udc66\u2032\u2032. 
It is easy to see that the Bayes optimal of the above least square regression problem is equal to: E\ud835\udc66\u2032\u2032\u223c\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65)\ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032\u2032) \u2212E\ud835\udc66\u2032\u2032\u223c\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65)\ud835\udc59(\ud835\udc65, \ud835\udc66\u2032, \ud835\udc66\u2032\u2032) = \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udf0b\ud835\udc61) \u2212\ud835\udc59(\ud835\udc65, \ud835\udc66\u2032, \ud835\udf0b\ud835\udc61). Swamy et al. (2024) define an iteration-dependent reward \ud835\udc5f\ud835\udc61(\ud835\udc65, \ud835\udc66) := E\ud835\udc66\u2032\u2032\u223c\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65)\ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032\u2032) = \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udf0b\ud835\udc61). Thus, the above regression problem can be understood as an extension of REBEL to the setting where the reward function changes at each iteration \ud835\udc61. Swamy et al. (2024) shows that running the exact MD (Eq. 2) with this iteration-dependent reward function \ud835\udc5f\ud835\udc61leads to fast convergence to an approximate Minimax Winner, a property that we will use to provide the regret bound of REBEL in the general preference setting while accounting for nonzero mean squared error. 4 Theoretical Analysis In the previous section, we interpret REBEL as the exact MD and show its convergence by assuming that least square regression always returns a predictor that is accurate everywhere. While such an explanation is simple and has also been used in prior work, point-wise out-of-distribution generalization is an extremely strong condition and is significantly beyond what a standard supervised learning method can promise. In this section, we significantly relax this condition via a reduction-based analysis: As long as we can solve the regression problems well in an in-distribution manner, REBEL can compete against any policy covered by the training data distributions. Formally, we assume the following generalization condition holds on the regressors we find. Assumption 1 (Regression generalization bounds). Over \ud835\udc47iterations, assume that for all \ud835\udc61, we have: E\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65),\ud835\udc66\u2032\u223c\ud835\udf07(\u00b7|\ud835\udc65) \u00121 \ud835\udf02 \u0012 ln \ud835\udf0b\ud835\udf03\ud835\udc61+1(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) \u2212ln \ud835\udf0b\ud835\udf03\ud835\udc61+1(\ud835\udc66\u2032|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66\u2032|\ud835\udc65) \u0013 \u2212(\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2212\ud835\udc5f(\ud835\udc65, \ud835\udc66\u2032)) \u00132 \u2264\ud835\udf16, for some \ud835\udf16. 10 Intuitively, this assumption is saying that there is a function in our class of regressors that is able to accurately fit the difference of rewards. Recall that our class of regressors is isomorphic to our policy class. Therefore, as long as our class of policies is expressive, we would expect this assumption to hold with small \ud835\udf16. For all domains we consider, our policy class is a flexible set of generative models (e.g. Transformer-based LLMs or diffusion models). 
Thus, we believe it is reasonable to believe this assumption holds in practice \u2013 see Figure 6 in Appendix G for empirical evidence of this point and Example 1 for more discussion. More formally, the above assumption bounds the standard in-distribution generalization error (v.s. the point-wise guarantee in Eq. 12) of a well-defined supervised learning problem: least squares regression. The generalization error \ud835\udf16captures the possible errors from the learning process for \ud835\udf03\ud835\udc61+1 and it could depend on the complexity of the policy class and the number of samples used in the dataset D\ud835\udc61. For instance, when the the function ln \ud835\udf0b\u2212ln \ud835\udf0b\u2032 induced by the log-difference of two policies (\ud835\udf0b, \ud835\udf0b\u2032) are rich enough (e.g., policies are deep neural networks) to capture the reward difference, then \ud835\udf16in this assumption converges to zero as we increase the number of training data. Note that while \ud835\udf16can be small, it does not imply that the learned predictor will have a small prediction error in a point-wise manner \u2013 it almost certainly will not. Example 1. One simple example is when \ud835\udf0b(\ud835\udc66|\ud835\udc65) \u221dexp(\ud835\udf03\u22a4\ud835\udf19(\ud835\udc65, \ud835\udc66)) for some features \ud835\udf19(\ud835\udc65, \ud835\udc66). In this case, ln(\ud835\udf0b(\ud835\udc66|\ud835\udc65)/\ud835\udf0b\ud835\udc61(\ud835\udc66|\ud835\udc65)) \u2212ln(\ud835\udf0b(\ud835\udc66\u2032|\ud835\udc65)/\ud835\udf0b\ud835\udc61(\ud835\udc66\u2032|\ud835\udc65)) = (\ud835\udf03\u2212\ud835\udf03\ud835\udc61)\u22a4(\ud835\udf19(\ud835\udc65, \ud835\udc66) \u2212\ud835\udf19(\ud835\udc65, \ud835\udc66\u2032)), which means that our regression problem in Eq. 9 is a classic linear regression problem. When the reward \ud835\udc5f(\ud835\udc65, \ud835\udc66) is also linear in feature \ud835\udf19(\ud835\udc65, \ud835\udc66), then Eq. 9 is a well-specified linear regression problem, and \ud835\udf16typically scales in the rate of \ud835\udc42(\ud835\udc51/|D\ud835\udc61|) with \ud835\udc51being the dimension of feature \ud835\udf19. We can extend the above example to the case where \ud835\udf19is the feature corresponding to some kernel, e.g., RBF kernel or even Neural Tangent Kernel, which allows us to capture the case where \ud835\udf0bis a softmax wide neural network with the least square regression problem solved by gradient flow. The error \ud835\udf16again scales poly(\ud835\udc51/|D\ud835\udc61|), where \ud835\udc51is the effective dimension of the corresponding kernel. We now define the concentrability coefficient (Kakade and Langford, 2002) that quantifies how the training data distribution is covering a comparator policy. Data Coverage. Recall that the base distribution \ud835\udf07can be some behavior policy, which in RLHF can be a human labeler, a supervised fine-tuned policy (SFT), or just the current learned policy (i.e., on-policy). Given a test policy \ud835\udf0b, we denote by \ud835\udc36\ud835\udf07\u2192\ud835\udf0bthe concentrability coefficient, i.e. \ud835\udc36\ud835\udf07\u2192\ud835\udf0b= max \ud835\udc65,\ud835\udc66 \ud835\udf0b(\ud835\udc66|\ud835\udc65) \ud835\udf07(\ud835\udc66|\ud835\udc65) . (15) We say \ud835\udf07covers \ud835\udf0bif \ud835\udc36\ud835\udf07\u2192\ud835\udf0b< +\u221e. Our goal is to bound the regret between our learned policies and an arbitrary comparator \ud835\udf0b\u2217(e.g. 
the optimal policy if it is covered by \ud835\udf07) using \ud835\udf16and the concentrability coefficient defined in Eq. 15. The following theorem formally states the regret bound of our algorithm. Theorem 1. Under Assumption 1, after \ud835\udc47many iterations, with a proper learning rate \ud835\udf02, among the learned policies \ud835\udf0b1, . . . , \ud835\udf0b\ud835\udc47, there must exist a policy \u02c6 \ud835\udf0b, such that: \u2200\ud835\udf0b\u2217: E\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\ud835\udf0b\u2217(\u00b7|\ud835\udc65)\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2212E\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\u02c6 \ud835\udf0b(\u00b7|\ud835\udc65)\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2264\ud835\udc42 \u221a\ufe02 1 \ud835\udc47+ \u221a\ufe01 \ud835\udc36\ud835\udf07\u2192\ud835\udf0b\u2217\ud835\udf16. ! . 11 Here the \ud835\udc42-notation hides problem-dependent constants that are independent of \ud835\udf16, \ud835\udc36\ud835\udf07\u2192\ud835\udf0b\u2217,\ud835\udc47. The above theorem shows a reduction from RL to supervised learning \u2014 as long as supervised learning works (i.e., \ud835\udf16is small), then REBEL can compete against any policy \ud835\udf0b\u2217that is covered by the base data distribution \ud835\udf07. In the regret bound, the 1/ \u221a \ud835\udc47comes from Mirror Descent style update, and \ud835\udc36\ud835\udf07\u2192\ud835\udf0b\u2217\ud835\udf16captures the cost of distribution shift: we train our regressors under distribution \ud835\udf0b\ud835\udc61and \ud835\udf07, but we want the learned regressor to predict well under \ud835\udf0b\u2217. Similar to the NPG analysis from Agarwal et al. (2021a), we now have a slower convergence rate 1/ \u221a \ud835\udc47, which is due to the fact that we have approximation error from learning. Such an agnostic regret bound \u2014 being able compete against any policy that is covered by training distributions \u2013 is the strongest type of agnostic learning results known in the RL literature, matching the best of what has appeared in prior policy optimization work including PSDP (Bagnell et al., 2003), CPI (Kakade and Langford, 2002), NPG (Agarwal et al., 2021a), and PC-PG (Agarwal et al., 2020). While in this work, we use the simplest and most intuitive definition of coverage \u2013 the density ratio-based definition in Eq. 15 \u2013 extension to more general ones such as transfer error (Agarwal et al., 2020, 2021a) or concentrability coefficients that incorporate function class (e.g., Song et al. (2023b)) is straightforward. We defer the proof of the above theorem and the detailed constants that we omitted in the \ud835\udc42notation to Appendix B. 4.1 Extension to General Preferences Extending the above analysis to the general preference case is straightforward except that it requires a stronger coverage condition. This is because we want to find a Nash Equilibrium, which requires a comparison between the learned policy against all the other policies. Results from the Markov Game literature (Cui and Du, 2022b; Zhong et al., 2022; Cui and Du, 2022a; Xiong et al., 2023) and Cui and Du (2022b) have shown that the standard single policy coverage condition used in single-player optimization is provably not sufficient. 
In particular, they propose using a notion of unilateral concentrability for efficient learning, which can be defined as \ud835\udc36uni,\ud835\udf07:= max \ud835\udf0b,\ud835\udc65,\ud835\udc66,\ud835\udc66\u2032\u2032 \ud835\udf0bMW(\ud835\udc66|\ud835\udc65)\ud835\udf0b(\ud835\udc66\u2032\u2032|\ud835\udc65) \ud835\udf07(\ud835\udc66|\ud835\udc65)\ud835\udf07(\ud835\udc66\u2032\u2032|\ud835\udc65) , in the general preference setting. Notably, the above unilateral concentrability coefficient \ud835\udc36uni,\ud835\udf07is equivalent to \ud835\udc36\ud835\udf07:= max\ud835\udf0b,\ud835\udc65,\ud835\udc66 \ud835\udf0b(\ud835\udc66|\ud835\udc65) \ud835\udf07(\ud835\udc66|\ud835\udc65) since \ud835\udc36\ud835\udf07\u2264\ud835\udc36uni,\ud835\udf07\u2264\ud835\udc362 \ud835\udf07. Therefore in the following discussion, we will use \ud835\udc36\ud835\udf07as the coverage condition. In addition, we also assume the generalization error of the regression problem is small, Assumption 2 (Regression generalization bounds for general preference). Over \ud835\udc47iterations, assume that for all \ud835\udc61, we have: E\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65),\ud835\udc66\u2032\u223c\ud835\udf07(\u00b7|\ud835\udc65) \u00121 \ud835\udf02 \u0012 ln \ud835\udf0b\ud835\udf03\ud835\udc61+1(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) \u2212ln \ud835\udf0b\ud835\udf03\ud835\udc61+1(\ud835\udc66\u2032|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66\u2032|\ud835\udc65) \u0013 \u2212(\ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udf0b\ud835\udc61) \u2212\ud835\udc59(\ud835\udc65, \ud835\udc66\u2032, \ud835\udf0b\ud835\udc61)) \u00132 \u2264\ud835\udf16, for some \ud835\udf16. Under the above coverage condition and generalization bound, we can show that REBEL is able to learn an approximate Minimax Winner: 12 Theorem 2. With assumption 2, after \ud835\udc47many iterations, with a proper learning rate \ud835\udf02, the policy b \ud835\udf0b= Unif({\ud835\udf0b\ud835\udc61}\ud835\udc47 \ud835\udc61=1) satisfies that: DG(b \ud835\udf0b) \u2264\ud835\udc42 \u221a\ufe02 1 \ud835\udc47+ \u221a\ufe01 \ud835\udc36\ud835\udf07\ud835\udf16. ! . Here the \ud835\udc42-notation hides problem-dependent constants that are independent of \ud835\udf16, \ud835\udc36\ud835\udf07,\ud835\udc47. We defer the proof to Appendix C. Note that the coverage condition here is much stronger than the single policy coverage condition in the RL setting. We conjecture that this is the cost one has to pay by moving to the more general preference setting and leaving the investigation of the necessarily coverage condition for future work. 5 Experiments The implementation of REBEL follows Algorithm 1. In each iteration, REBEL collects a dataset D\ud835\udc61= {\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032}, where \ud835\udc65\u223c\ud835\udf0c, \ud835\udc66\u223c\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65), \ud835\udc66\u2032 \u223c\ud835\udf07(\u00b7|\ud835\udc65). Subsequently, REBEL optimizes the least squares regression problem in Eq. 9 through gradient descent with AdamW (Loshchilov and Hutter, 2017). We choose \ud835\udf07= \ud835\udf0b\ud835\udc61such that both \ud835\udc66and \ud835\udc66\u2032 are generated by the current policy. We empirically assess REBEL\u2019s performance on both natural language generation and text-guided image generation. 
5.1 Natural Language Generation Baselines: We compare REBEL with baseline RL algorithms: PPO (Schulman et al., 2017), Direct Preference Optimization (DPO) (Rafailov et al., 2023), and REINFORCE (Williams, 1992) along with its multi-sample extension, REINFORCE Leave-One-Out (RLOO) (Kool et al., 2019). The REINFORCE method is implemented with a moving-average baseline of the reward. We include two variants of RLOO with two (k = 2) and four (k = 4) generations per prompt. Dataset: We use the TL;DR summarization dataset (Stiennon et al., 2020; available at https://github.com/openai/summarize-from-feedback) to train the model to generate summaries of Reddit posts based on human preference data. The dataset comprises human reference summaries and preference data. Following prior work (Stiennon et al., 2020; Rafailov et al., 2023; Ahmadian et al., 2024), we train the DPO baseline on the preference dataset, while conducting online RL (PPO, RLOO, REBEL) on the human reference dataset. We set the maximum context length to 512 and the maximum generation length to 53 to ensure all references in the dataset can be generated. Additional dataset details are in Appendix D.1. Models: We include results with three different model sizes: 1.4B, 2.8B, and 6.9B. Each model is trained with a supervised fine-tuned (SFT) model and/or a reward model (RM) of the same size. For SFT models, we train a Pythia 1.4B (Biderman et al., 2023; HuggingFace model card EleutherAI/pythia-1.4b-deduped) model for 1 epoch over the dataset with human references as labels, and use the existing fine-tuned 2.8B (vwxyzjn/EleutherAI_pythia-2.8b-deduped__sft__tldr) and 6.9B (vwxyzjn/EleutherAI_pythia-6.9b-deduped__sft__tldr) models. For reward models, we train a Pythia 1.4B parameter model for 1 epoch over the preference dataset and use the existing reward models with 2.8B (vwxyzjn/EleutherAI_pythia-2.8b-deduped__reward__tldr) and 6.9B (vwxyzjn/EleutherAI_pythia-6.9b-deduped__reward__tldr) parameters. For both REBEL and the baseline methods at 1.4B and 2.8B parameters, we trained the policy and/or the critic using low-rank adapters (LoRA) (Hu et al., 2022) on top of our SFT and/or reward model, respectively. For the 6.9B models, we perform full-parameter training. More details about the hyperparameters are described in Appendix D.2.
Table 1: Results on TL;DR summarization for SFT, PPO, DPO, and REBEL using three metrics. The RM score is computed using the reward model of the respective size and the winrate is evaluated by GPT4. The models are trained with low-rank adapters. We note that REBEL outperforms all baselines here in terms of the winrate.
Model size | Algorithm | Winrate (↑) | RM Score (↑) | KL(π||π_ref) (↓)
1.4B | SFT | 24.5% | -0.52 | -
1.4B | DPO | 43.8% | 0.11 | 30.9
1.4B | PPO | 51.6% | 1.73 | 29.1
1.4B | REBEL | 55.3% | 1.87 | 32.4
2.8B | SFT | 28.4% | -0.40 | -
2.8B | DPO | 53.5% | 2.41 | 66.5
2.8B | PPO | 67.2% | 2.37 | 27.4
2.8B | REBEL | 70.3% | 2.44 | 29.2
Table 2: Results on TL;DR summarization with 6.9B models. We perform full-parameter training for all models.
Algorithm | SFT | DPO | REINFORCE | PPO | RLOO (k = 2) | RLOO (k = 4) | REBEL
Winrate (↑) | 44.6% | 68.2% | 70.7%* | 77.6%‡ | 74.2%* | 77.9%* | 78.0%
(* directly obtained from Ahmadian et al. (2024); ‡ directly obtained from Huang et al. (2024))
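As an illustration of the LoRA-based training setup described in the Models paragraph above, here is a minimal sketch using the transformers and peft libraries. The base model card is the one referenced in the text; the rank, alpha, dropout, and target-module choices are hypothetical placeholders rather than the paper's actual settings (those are in Appendix D.2).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "EleutherAI/pythia-1.4b-deduped"  # SFT base model card referenced above
tokenizer = AutoTokenizer.from_pretrained(base)
policy = AutoModelForCausalLM.from_pretrained(base)

# Hypothetical adapter configuration; the paper's hyperparameters are in Appendix D.2.
lora_cfg = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["query_key_value"],  # fused attention projection in Pythia (GPT-NeoX)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
policy = get_peft_model(policy, lora_cfg)
policy.print_trainable_parameters()  # only the low-rank adapter weights are trainable
```

The same wrapping would apply to the critic for PPO; for the 6.9B runs described above, the adapters are omitted and all parameters are trained.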
Evaluation: We evaluate each method by its balance between reward model score and KL-divergence with the reference policy, testing the effectiveness of the algorithm in optimizing the regularized RL objective. To evaluate the quality of the generations, we compute the winrate (Rafailov et al., 2023) against human references using GPT4 (OpenAI, 2023; specifically the gpt-4-0613 API checkpoint throughout this section). The winrate is computed from a randomly sampled subset (10%) of the test set with a total of 600 samples. The prompt used to query GPT4, as well as an example response, is shown in Appendix D.3. Figure 2: Plot of reward vs. KL-divergence for 2.8B REBEL and PPO. We evaluate the models across the entire test set every 100 steps for 2,000 steps. Left: each point represents the average reward score and KL-divergence for a specific time step; the ellipse represents the confidence interval with 2 standard deviations. Right: we divide the KL distribution at the 2,000-step checkpoint into 10 equally sized bins and average the corresponding RM scores in each bin. 5.1.1 Quality Analysis Table 1 presents a comparison between REBEL and SFT, PPO, and DPO for 1.4B and 2.8B models trained with LoRA. We calculate the KL-divergence (KL(π||π_ref)) using the SFT policy of the corresponding size as the reference for all models. Notably, REBEL outperforms all the baselines on RM score across all model sizes with a slightly larger KL than PPO. In addition, REBEL achieves the highest winrate under GPT4 when evaluated against human references, indicating the benefit of regressing the relative rewards. Example generations of 2.8B REBEL are included in Appendix E. We also perform full-parameter training for 6.9B models; the winrates are shown in Table 2. We observe that REBEL still outperforms all of the baselines, while REBEL, PPO, and RLOO (k = 4) have comparable performance (we will show in the next section that REBEL is more tractable in computation and memory than PPO and RLOO with k = 4). An ablation analysis on the parameter η is in Appendix F. The trade-off between the reward model score and KL-divergence is shown in Figure 2. We evaluate the 2.8B REBEL and PPO every 400 gradient updates during training for 8,000 updates. The sample complexity of each update is held constant across both algorithms for a fair comparison. For the left plot, each point represents the average divergence and score over the entire test set, and the ellipse represents the confidence interval with 2 standard deviations. As observed previously, PPO exhibits lower divergence, whereas REBEL shows higher divergence but is capable of achieving larger RM scores. Notably, towards the end of training (the right part of the plot), REBEL and PPO have similar KL and RM scores. For the right plot in Figure 2, we analyze a single checkpoint for each algorithm at the end of training.
For each algorithm, we group every generation from the test set by its KL value into 10 equally sized bins and calculate the average of the corresponding RM scores in each bin. We can see that REBEL achieves higher RM scores for generations with small divergence, while requiring larger divergence for the generations with the highest scores. Figure 3: Plot of runtime and memory usage for DPO, REINFORCE, RLOO, PPO, and REBEL. The runtime includes both the time for generation and the policy update for each batch. Runtime and memory usage are measured on A6000 GPUs. Baselines on the left-hand side of the dashed line have lower winrates. Methods on the right-hand side of the dashed line have similar winrates to REBEL, but REBEL is noticeably more computationally tractable and memory efficient than PPO and RLOO (k = 4). 5.1.2 Runtime & Memory Analysis We analyze the runtime and peak memory usage of 2.8B models using PPO, DPO, RLOO, and REBEL. The runtime includes both the generation time and the time required for policy updates. Both runtime and peak memory usage are measured on A6000 GPUs using the same hyperparameters detailed in Appendix D.2. The methods in the plots are arranged in ascending order of winrate. To the right of the dashed line, PPO, RLOO (k = 4), and REBEL have the highest winrates, which are comparable among them. While DPO and REINFORCE require less time and memory, their performance does not match that of REBEL, as discussed in Section 5.1.1. RLOO (k = 2) has similar runtime and memory usage as REBEL since we set μ = π_t, making REBEL also generate twice per prompt; however, RLOO (k = 2) has worse performance than REBEL. Compared to PPO and RLOO (k = 4), REBEL demonstrates shorter runtimes and lower peak memory usage. PPO is slow and requires more memory because it needs to update two networks: the policy network and the value network. RLOO (k = 4) requires generating 4 responses per prompt, which makes it slow and less memory efficient. Compared to the two baselines that achieve similar winrates to REBEL, PPO and RLOO (k = 4), REBEL is more computationally tractable. REBEL is also noticeably simpler to implement than PPO since it does not learn a value network or compute advantage estimates. Figure 4: Learning curves as a function of reward queries to the LAION aesthetic predictor. We report inter-quartile means (IQM) with 95% confidence intervals (CIs) across three seeds for both REBEL and PPO. The CIs were calculated with a percentile bootstrap with stratified sampling over three random seeds. 5.2 Image Generation We also consider the setting of image generation where, given a consistency model (Song et al., 2023a) and a target reward function, we seek to train the consistency model to output images that garner a higher reward. Specifically, we compare REBEL and PPO under the RLCM framework (Oertell et al., 2024).
Baselines: We compare REBEL to a clipped policy gradient objective (Black et al., 2023; Fan et al., 2024; Oertell et al., 2024) with the aim of optimizing aesthetic quality to obtain high reward from the LAION aesthetic score predictor (Schuhmann, 2022). This baseline does not use critics or GAE for advantage estimates. However, the clipping objective is clearly motivated by PPO, and thus we simply refer to this baseline as PPO in this section. Dataset: We use 45 common animals as generation prompts, similar to Black et al. (2023) and Oertell et al. (2024); the dataset is available at https://github.com/Owen-Oertell/rlcm. Models: We use the latent consistency model (Luo et al., 2023) distillation of the Dreamshaper v7 model (Huggingface model card: SimianLuo/LCM_Dreamshaper_v7), a finetune of Stable Diffusion (Rombach et al., 2021). Evaluation: We evaluate PPO and REBEL on their reward under the LAION aesthetic reward model for an equal number of reward queries/samples generated and an equal number of gradient updates. The aesthetic predictor is trained to predict human-labeled scores of images on a scale of 1 to 10. The images that tend to receive the highest reward are artwork. Following the recommendations of Agarwal et al. (2021b), we report the inter-quartile mean with 95% confidence intervals for our reported results across three random seeds. Figure 5: Generated images using PPO and REBEL at an intermediate checkpoint. We note that at the same number of epochs, REBEL obtains a higher reward under the reward model. This can further be seen in the more diverse backgrounds of images generated by REBEL with less training time. 5.3 Quality Analysis Figure 4 shows that REBEL optimizes the consistency model faster at the beginning of training but eventually achieves performance similar to that of PPO. For our experiments, we tuned both batch size and learning rate for our algorithms, testing batch sizes of [4, 8, 16] per GPU and learning rates of [1e-4, 3e-4, 6e-4, 1e-3]. Note that the main difference in implementation between PPO and REBEL is the replacement of the clipped PPO objective with our regression objective. Qualitatively, we observe that both PPO and REBEL eventually start to generate good-looking images but ignore the text prompt entirely. From the perspective of purely optimizing the reward function, this behavior is not surprising since the objective does not encourage maintaining consistency between the text prompt and the generated image. To maximize LAION-predicted aesthetic quality, both REBEL and PPO transform a model that produces plain images into one that produces artistic drawings. We found across multiple seeds that REBEL produced lusher backgrounds compared to PPO's generations. Please see Appendix E.2 for more examples of generated images. In summary, we propose REBEL, an RL algorithm that reduces the problem of RL to solving a sequence of relative reward regression problems on iteratively collected datasets. In contrast to policy gradient approaches that require additional networks and heuristics like clipping to ensure optimization stability, REBEL only requires that we can drive down training error on a least squares problem. This makes it strikingly simple to implement and scale.
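To illustrate the implementation difference noted above, here is a sketch of the standard PPO clipped surrogate that the earlier regression sketch replaces; the advantage estimates and log-probabilities are assumed to be given, and the variable names are illustrative.

```python
import torch

def ppo_clip_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    """Standard PPO clipped surrogate objective (to be minimized)."""
    ratio = torch.exp(logp_new - logp_old)  # importance weight pi_theta / pi_theta_old
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -torch.mean(torch.min(unclipped, clipped))  # pessimistic, clipped estimate
```

Under the description in this section, swapping this clipped loss for the squared-error regression on relative rewards, and dropping the machinery used to produce the advantage estimates, is essentially the change that turns the PPO baseline into REBEL.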
In theory, REBEL matches the best guarantees we have for RL algorithms in the agnostic setting, while in practice, REBEL is able to match and sometimes outperform methods that are far more complex to implement or expensive to run across both language modeling and guided image generation tasks. There are several open questions raised by our work. The first is whether using a loss function other than square loss (e.g. log loss or cross-entropy) could lead to better performance in practice (Farebrother et al., 2024) or tighter bounds (e.g. first-order / gap-dependent) in theory (Foster and Krishnamurthy, 2021; Wang et al., 2023a, 2024). The second is whether, in the general (i.e. non-utility-based) preference setting, the coverage condition assumed in our analysis is necessary \u2013 we conjecture it is. Relatedly, it would be interesting to explore whether using preference (rather than reward) models to provide supervision for REBEL replicates the performance improvements reported by Swamy et al. (2024); Munos et al. (2023). Third, while we focus primarily on the bandit setting in the preceding sections, it would be interesting to consider the more general RL setting and explore how offline datasets can be used to improve the efficiency of policy optimization via techniques like resets (Bagnell et al., 2003; Ross and Bagnell, 2014; Swamy et al., 2023; Chang et al., 2023, 2024). 20" + }, + { + "url": "http://arxiv.org/abs/2305.18290v2", + "title": "Direct Preference Optimization: Your Language Model is Secretly a Reward Model", + "abstract": "While large-scale unsupervised language models (LMs) learn broad world\nknowledge and some reasoning skills, achieving precise control of their\nbehavior is difficult due to the completely unsupervised nature of their\ntraining. Existing methods for gaining such steerability collect human labels\nof the relative quality of model generations and fine-tune the unsupervised LM\nto align with these preferences, often with reinforcement learning from human\nfeedback (RLHF). However, RLHF is a complex and often unstable procedure, first\nfitting a reward model that reflects the human preferences, and then\nfine-tuning the large unsupervised LM using reinforcement learning to maximize\nthis estimated reward without drifting too far from the original model. In this\npaper we introduce a new parameterization of the reward model in RLHF that\nenables extraction of the corresponding optimal policy in closed form, allowing\nus to solve the standard RLHF problem with only a simple classification loss.\nThe resulting algorithm, which we call Direct Preference Optimization (DPO), is\nstable, performant, and computationally lightweight, eliminating the need for\nsampling from the LM during fine-tuning or performing significant\nhyperparameter tuning. Our experiments show that DPO can fine-tune LMs to align\nwith human preferences as well as or better than existing methods. Notably,\nfine-tuning with DPO exceeds PPO-based RLHF in ability to control sentiment of\ngenerations, and matches or improves response quality in summarization and\nsingle-turn dialogue while being substantially simpler to implement and train.", + "authors": "Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D. 
Manning, Chelsea Finn", + "published": "2023-05-29", + "updated": "2023-12-13", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2203.02155v1", + "title": "Training language models to follow instructions with human feedback", + "abstract": "Making language models bigger does not inherently make them better at\nfollowing a user's intent. For example, large language models can generate\noutputs that are untruthful, toxic, or simply not helpful to the user. In other\nwords, these models are not aligned with their users. In this paper, we show an\navenue for aligning language models with user intent on a wide range of tasks\nby fine-tuning with human feedback. Starting with a set of labeler-written\nprompts and prompts submitted through the OpenAI API, we collect a dataset of\nlabeler demonstrations of the desired model behavior, which we use to fine-tune\nGPT-3 using supervised learning. We then collect a dataset of rankings of model\noutputs, which we use to further fine-tune this supervised model using\nreinforcement learning from human feedback. We call the resulting models\nInstructGPT. In human evaluations on our prompt distribution, outputs from the\n1.3B parameter InstructGPT model are preferred to outputs from the 175B GPT-3,\ndespite having 100x fewer parameters. Moreover, InstructGPT models show\nimprovements in truthfulness and reductions in toxic output generation while\nhaving minimal performance regressions on public NLP datasets. Even though\nInstructGPT still makes simple mistakes, our results show that fine-tuning with\nhuman feedback is a promising direction for aligning language models with human\nintent.", + "authors": "Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan Lowe", + "published": "2022-03-04", + "updated": "2022-03-04", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1502.05477v5", + "title": "Trust Region Policy Optimization", + "abstract": "We describe an iterative procedure for optimizing policies, with guaranteed\nmonotonic improvement. By making several approximations to the\ntheoretically-justified procedure, we develop a practical algorithm, called\nTrust Region Policy Optimization (TRPO). This algorithm is similar to natural\npolicy gradient methods and is effective for optimizing large nonlinear\npolicies such as neural networks. Our experiments demonstrate its robust\nperformance on a wide variety of tasks: learning simulated robotic swimming,\nhopping, and walking gaits; and playing Atari games using images of the screen\nas input. Despite its approximations that deviate from the theory, TRPO tends\nto give monotonic improvement, with little tuning of hyperparameters.", + "authors": "John Schulman, Sergey Levine, Philipp Moritz, Michael I. 
Jordan, Pieter Abbeel", + "published": "2015-02-19", + "updated": "2017-04-20", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1707.06347v2", + "title": "Proximal Policy Optimization Algorithms", + "abstract": "We propose a new family of policy gradient methods for reinforcement\nlearning, which alternate between sampling data through interaction with the\nenvironment, and optimizing a \"surrogate\" objective function using stochastic\ngradient ascent. Whereas standard policy gradient methods perform one gradient\nupdate per data sample, we propose a novel objective function that enables\nmultiple epochs of minibatch updates. The new methods, which we call proximal\npolicy optimization (PPO), have some of the benefits of trust region policy\noptimization (TRPO), but they are much simpler to implement, more general, and\nhave better sample complexity (empirically). Our experiments test PPO on a\ncollection of benchmark tasks, including simulated robotic locomotion and Atari\ngame playing, and we show that PPO outperforms other online policy gradient\nmethods, and overall strikes a favorable balance between sample complexity,\nsimplicity, and wall-time.", + "authors": "John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov", + "published": "2017-07-20", + "updated": "2017-08-28", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2404.14367v2", + "title": "Preference Fine-Tuning of LLMs Should Leverage Suboptimal, On-Policy Data", + "abstract": "Learning from preference labels plays a crucial role in fine-tuning large\nlanguage models. There are several distinct approaches for preference\nfine-tuning, including supervised learning, on-policy reinforcement learning\n(RL), and contrastive learning. Different methods come with different\nimplementation tradeoffs and performance differences, and existing empirical\nfindings present different conclusions, for instance, some results show that\nonline RL is quite important to attain good fine-tuning results, while others\nfind (offline) contrastive or even purely supervised methods sufficient. This\nraises a natural question: what kind of approaches are important for\nfine-tuning with preference data and why? In this paper, we answer this\nquestion by performing a rigorous analysis of a number of fine-tuning\ntechniques on didactic and full-scale LLM problems. Our main finding is that,\nin general, approaches that use on-policy sampling or attempt to push down the\nlikelihood on certain responses (i.e., employ a \"negative gradient\") outperform\noffline and maximum likelihood objectives. We conceptualize our insights and\nunify methods that use on-policy sampling or negative gradient under a notion\nof mode-seeking objectives for categorical distributions. Mode-seeking\nobjectives are able to alter probability mass on specific bins of a categorical\ndistribution at a fast rate compared to maximum likelihood, allowing them to\nrelocate masses across bins more effectively. 
Our analysis prescribes\nactionable insights for preference fine-tuning of LLMs and informs how data\nshould be collected for maximal improvement.", + "authors": "Fahim Tajwar, Anikait Singh, Archit Sharma, Rafael Rafailov, Jeff Schneider, Tengyang Xie, Stefano Ermon, Chelsea Finn, Aviral Kumar", + "published": "2024-04-22", + "updated": "2024-04-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2402.08848v1", + "title": "Hybrid Inverse Reinforcement Learning", + "abstract": "The inverse reinforcement learning approach to imitation learning is a\ndouble-edged sword. On the one hand, it can enable learning from a smaller\nnumber of expert demonstrations with more robustness to error compounding than\nbehavioral cloning approaches. On the other hand, it requires that the learner\nrepeatedly solve a computationally expensive reinforcement learning (RL)\nproblem. Often, much of this computation is wasted searching over policies very\ndissimilar to the expert's. In this work, we propose using hybrid RL --\ntraining on a mixture of online and expert data -- to curtail unnecessary\nexploration. Intuitively, the expert data focuses the learner on good states\nduring training, which reduces the amount of exploration required to compute a\nstrong policy. Notably, such an approach doesn't need the ability to reset the\nlearner to arbitrary states in the environment, a requirement of prior work in\nefficient inverse RL. More formally, we derive a reduction from inverse RL to\nexpert-competitive RL (rather than globally optimal RL) that allows us to\ndramatically reduce interaction during the inner policy search loop while\nmaintaining the benefits of the IRL approach. This allows us to derive both\nmodel-free and model-based hybrid inverse RL algorithms with strong policy\nperformance guarantees. Empirically, we find that our approaches are\nsignificantly more sample efficient than standard inverse RL and several other\nbaselines on a suite of continuous control tasks.", + "authors": "Juntao Ren, Gokul Swamy, Zhiwei Steven Wu, J. Andrew Bagnell, Sanjiban Choudhury", + "published": "2024-02-13", + "updated": "2024-02-13", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2103.03236v2", + "title": "Of Moments and Matching: A Game-Theoretic Framework for Closing the Imitation Gap", + "abstract": "We provide a unifying view of a large family of previous imitation learning\nalgorithms through the lens of moment matching. At its core, our classification\nscheme is based on whether the learner attempts to match (1) reward or (2)\naction-value moments of the expert's behavior, with each option leading to\ndiffering algorithmic approaches. By considering adversarially chosen\ndivergences between learner and expert behavior, we are able to derive bounds\non policy performance that apply for all algorithms in each of these classes,\nthe first to our knowledge. We also introduce the notion of moment\nrecoverability, implicit in many previous analyses of imitation learning, which\nallows us to cleanly delineate how well each algorithmic family is able to\nmitigate compounding errors. We derive three novel algorithm templates (AdVIL,\nAdRIL, and DAeQuIL) with strong guarantees, simple implementation, and\ncompetitive empirical performance.", + "authors": "Gokul Swamy, Sanjiban Choudhury, J. 
Andrew Bagnell, Zhiwei Steven Wu", + "published": "2021-03-04", + "updated": "2021-06-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2404.03715v1", + "title": "Direct Nash Optimization: Teaching Language Models to Self-Improve with General Preferences", + "abstract": "This paper studies post-training large language models (LLMs) using\npreference feedback from a powerful oracle to help a model iteratively improve\nover itself. The typical approach for post-training LLMs involves Reinforcement\nLearning from Human Feedback (RLHF), which traditionally separates reward\nlearning and subsequent policy optimization. However, such a reward\nmaximization approach is limited by the nature of \"point-wise\" rewards (such as\nBradley-Terry model), which fails to express complex intransitive or cyclic\npreference relations. While advances on RLHF show reward learning and policy\noptimization can be merged into a single contrastive objective for stability,\nthey yet still remain tethered to the reward maximization framework. Recently,\na new wave of research sidesteps the reward maximization presumptions in favor\nof directly optimizing over \"pair-wise\" or general preferences. In this paper,\nwe introduce Direct Nash Optimization (DNO), a provable and scalable algorithm\nthat marries the simplicity and stability of contrastive learning with\ntheoretical generality from optimizing general preferences. Because DNO is a\nbatched on-policy algorithm using a regression-based objective, its\nimplementation is straightforward and efficient. Moreover, DNO enjoys monotonic\nimprovement across iterations that help it improve even over a strong teacher\n(such as GPT-4). In our experiments, a resulting 7B parameter Orca-2.5 model\naligned by DNO achieves the state-of-the-art win-rate against GPT-4-Turbo of\n33% on AlpacaEval 2.0 (even after controlling for response length), an absolute\ngain of 26% (7% to 33%) over the initializing model. It outperforms models with\nfar more parameters, including Mistral Large, Self-Rewarding LM (70B\nparameters), and older versions of GPT-4.", + "authors": "Corby Rosset, Ching-An Cheng, Arindam Mitra, Michael Santacroce, Ahmed Awadallah, Tengyang Xie", + "published": "2024-04-04", + "updated": "2024-04-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2404.14367v2", + "title": "Preference Fine-Tuning of LLMs Should Leverage Suboptimal, On-Policy Data", + "abstract": "Learning from preference labels plays a crucial role in fine-tuning large\nlanguage models. There are several distinct approaches for preference\nfine-tuning, including supervised learning, on-policy reinforcement learning\n(RL), and contrastive learning. Different methods come with different\nimplementation tradeoffs and performance differences, and existing empirical\nfindings present different conclusions, for instance, some results show that\nonline RL is quite important to attain good fine-tuning results, while others\nfind (offline) contrastive or even purely supervised methods sufficient. This\nraises a natural question: what kind of approaches are important for\nfine-tuning with preference data and why? In this paper, we answer this\nquestion by performing a rigorous analysis of a number of fine-tuning\ntechniques on didactic and full-scale LLM problems. 
Our main finding is that,\nin general, approaches that use on-policy sampling or attempt to push down the\nlikelihood on certain responses (i.e., employ a \"negative gradient\") outperform\noffline and maximum likelihood objectives. We conceptualize our insights and\nunify methods that use on-policy sampling or negative gradient under a notion\nof mode-seeking objectives for categorical distributions. Mode-seeking\nobjectives are able to alter probability mass on specific bins of a categorical\ndistribution at a fast rate compared to maximum likelihood, allowing them to\nrelocate masses across bins more effectively. Our analysis prescribes\nactionable insights for preference fine-tuning of LLMs and informs how data\nshould be collected for maximal improvement.", + "authors": "Fahim Tajwar, Anikait Singh, Archit Sharma, Rafael Rafailov, Jeff Schneider, Tengyang Xie, Stefano Ermon, Chelsea Finn, Aviral Kumar", + "published": "2024-04-22", + "updated": "2024-04-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2007.08459v2", + "title": "PC-PG: Policy Cover Directed Exploration for Provable Policy Gradient Learning", + "abstract": "Direct policy gradient methods for reinforcement learning are a successful\napproach for a variety of reasons: they are model free, they directly optimize\nthe performance metric of interest, and they allow for richly parameterized\npolicies. Their primary drawback is that, by being local in nature, they fail\nto adequately explore the environment. In contrast, while model-based\napproaches and Q-learning directly handle exploration through the use of\noptimism, their ability to handle model misspecification and function\napproximation is far less evident. This work introduces the the Policy\nCover-Policy Gradient (PC-PG) algorithm, which provably balances the\nexploration vs. exploitation tradeoff using an ensemble of learned policies\n(the policy cover). PC-PG enjoys polynomial sample complexity and run time for\nboth tabular MDPs and, more generally, linear MDPs in an infinite dimensional\nRKHS. Furthermore, PC-PG also has strong guarantees under model\nmisspecification that go beyond the standard worst case $\\ell_{\\infty}$\nassumptions; this includes approximation guarantees for state aggregation under\nan average case error assumption, along with guarantees under a more general\nassumption where the approximation error under distribution shift is\ncontrolled. We complement the theory with empirical evaluation across a variety\nof domains in both reward-free and reward-driven settings.", + "authors": "Alekh Agarwal, Mikael Henaff, Sham Kakade, Wen Sun", + "published": "2020-07-16", + "updated": "2020-08-13", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2302.12192v1", + "title": "Aligning Text-to-Image Models using Human Feedback", + "abstract": "Deep generative models have shown impressive results in text-to-image\nsynthesis. However, current text-to-image models often generate images that are\ninadequately aligned with text prompts. We propose a fine-tuning method for\naligning such models using human feedback, comprising three stages. First, we\ncollect human feedback assessing model output alignment from a set of diverse\ntext prompts. We then use the human-labeled image-text dataset to train a\nreward function that predicts human feedback. 
Lastly, the text-to-image model\nis fine-tuned by maximizing reward-weighted likelihood to improve image-text\nalignment. Our method generates objects with specified colors, counts and\nbackgrounds more accurately than the pre-trained model. We also analyze several\ndesign choices and find that careful investigations on such design choices are\nimportant in balancing the alignment-fidelity tradeoffs. Our results\ndemonstrate the potential for learning from human feedback to significantly\nimprove text-to-image models.", + "authors": "Kimin Lee, Hao Liu, Moonkyung Ryu, Olivia Watkins, Yuqing Du, Craig Boutilier, Pieter Abbeel, Mohammad Ghavamzadeh, Shixiang Shane Gu", + "published": "2023-02-23", + "updated": "2023-02-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2312.11456v4", + "title": "Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-Constraint", + "abstract": "This paper studies the alignment process of generative models with\nReinforcement Learning from Human Feedback (RLHF). We first identify the\nprimary challenges of existing popular methods like offline PPO and offline DPO\nas lacking in strategical exploration of the environment. Then, to understand\nthe mathematical principle of RLHF, we consider a standard mathematical\nformulation, the reverse-KL regularized contextual bandit for RLHF. Despite its\nwidespread practical application, a rigorous theoretical analysis of this\nformulation remains open. We investigate its behavior in three distinct\nsettings -- offline, online, and hybrid -- and propose efficient algorithms\nwith finite-sample theoretical guarantees.\n Moving towards practical applications, our framework, with a robust\napproximation of the information-theoretical policy improvement oracle,\nnaturally gives rise to several novel RLHF algorithms. This includes an\niterative version of the Direct Preference Optimization (DPO) algorithm for\nonline settings, and a multi-step rejection sampling strategy for offline\nscenarios. Our empirical evaluations on real-world alignment experiment of\nlarge language model demonstrate that these proposed methods significantly\nsurpass existing strong baselines, such as DPO and Rejection Sampling\nOptimization (RSO), showcasing the connections between solid theoretical\nfoundations and their potent practical implementations.", + "authors": "Wei Xiong, Hanze Dong, Chenlu Ye, Ziqi Wang, Han Zhong, Heng Ji, Nan Jiang, Tong Zhang", + "published": "2023-12-18", + "updated": "2024-05-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2401.04056v1", + "title": "A Minimaximalist Approach to Reinforcement Learning from Human Feedback", + "abstract": "We present Self-Play Preference Optimization (SPO), an algorithm for\nreinforcement learning from human feedback. Our approach is minimalist in that\nit does not require training a reward model nor unstable adversarial training\nand is therefore rather simple to implement. Our approach is maximalist in that\nit provably handles non-Markovian, intransitive, and stochastic preferences\nwhile being robust to the compounding errors that plague offline approaches to\nsequential prediction. 
To achieve the preceding qualities, we build upon the\nconcept of a Minimax Winner (MW), a notion of preference aggregation from the\nsocial choice theory literature that frames learning from preferences as a\nzero-sum game between two policies. By leveraging the symmetry of this game, we\nprove that rather than using the traditional technique of dueling two policies\nto compute the MW, we can simply have a single agent play against itself while\nmaintaining strong convergence guarantees. Practically, this corresponds to\nsampling multiple trajectories from a policy, asking a rater or preference\nmodel to compare them, and then using the proportion of wins as the reward for\na particular trajectory. We demonstrate that on a suite of continuous control\ntasks, we are able to learn significantly more efficiently than reward-model\nbased approaches while maintaining robustness to the intransitive and\nstochastic preferences that frequently occur in practice when aggregating human\njudgments.", + "authors": "Gokul Swamy, Christoph Dann, Rahul Kidambi, Zhiwei Steven Wu, Alekh Agarwal", + "published": "2024-01-08", + "updated": "2024-01-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2311.08384v1", + "title": "Offline Data Enhanced On-Policy Policy Gradient with Provable Guarantees", + "abstract": "Hybrid RL is the setting where an RL agent has access to both offline data\nand online data by interacting with the real-world environment. In this work,\nwe propose a new hybrid RL algorithm that combines an on-policy actor-critic\nmethod with offline data. On-policy methods such as policy gradient and natural\npolicy gradient (NPG) have shown to be more robust to model misspecification,\nthough sometimes it may not be as sample efficient as methods that rely on\noff-policy learning. On the other hand, offline methods that depend on\noff-policy training often require strong assumptions in theory and are less\nstable to train in practice. Our new approach integrates a procedure of\noff-policy training on the offline data into an on-policy NPG framework. We\nshow that our approach, in theory, can obtain a best-of-both-worlds type of\nresult -- it achieves the state-of-art theoretical guarantees of offline RL\nwhen offline RL-specific assumptions hold, while at the same time maintaining\nthe theoretical guarantees of on-policy NPG regardless of the offline RL\nassumptions' validity. Experimentally, in challenging rich-observation\nenvironments, we show that our approach outperforms a state-of-the-art hybrid\nRL baseline which only relies on off-policy policy optimization, demonstrating\nthe empirical benefit of combining on-policy and off-policy learning. Our code\nis publicly available at https://github.com/YifeiZhou02/HNPG.", + "authors": "Yifei Zhou, Ayush Sekhari, Yuda Song, Wen Sun", + "published": "2023-11-14", + "updated": "2023-11-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2010.10436v2", + "title": "VarGrad: A Low-Variance Gradient Estimator for Variational Inference", + "abstract": "We analyse the properties of an unbiased gradient estimator of the ELBO for\nvariational inference, based on the score function method with leave-one-out\ncontrol variates. 
We show that this gradient estimator can be obtained using a\nnew loss, defined as the variance of the log-ratio between the exact posterior\nand the variational approximation, which we call the $\\textit{log-variance\nloss}$. Under certain conditions, the gradient of the log-variance loss equals\nthe gradient of the (negative) ELBO. We show theoretically that this gradient\nestimator, which we call $\\textit{VarGrad}$ due to its connection to the\nlog-variance loss, exhibits lower variance than the score function method in\ncertain settings, and that the leave-one-out control variate coefficients are\nclose to the optimal ones. We empirically demonstrate that VarGrad offers a\nfavourable variance versus computation trade-off compared to other\nstate-of-the-art estimators on a discrete VAE.", + "authors": "Lorenz Richter, Ayman Boustati, Nikolas N\u00fcsken, Francisco J. R. Ruiz, \u00d6mer Deniz Akyildiz", + "published": "2020-10-20", + "updated": "2020-10-29", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG", + "math.ST", + "stat.TH" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2210.06718v3", + "title": "Hybrid RL: Using Both Offline and Online Data Can Make RL Efficient", + "abstract": "We consider a hybrid reinforcement learning setting (Hybrid RL), in which an\nagent has access to an offline dataset and the ability to collect experience\nvia real-world online interaction. The framework mitigates the challenges that\narise in both pure offline and online RL settings, allowing for the design of\nsimple and highly effective algorithms, in both theory and practice. We\ndemonstrate these advantages by adapting the classical Q learning/iteration\nalgorithm to the hybrid setting, which we call Hybrid Q-Learning or Hy-Q. In\nour theoretical results, we prove that the algorithm is both computationally\nand statistically efficient whenever the offline dataset supports a\nhigh-quality policy and the environment has bounded bilinear rank. Notably, we\nrequire no assumptions on the coverage provided by the initial distribution, in\ncontrast with guarantees for policy gradient/iteration methods. In our\nexperimental results, we show that Hy-Q with neural network function\napproximation outperforms state-of-the-art online, offline, and hybrid RL\nbaselines on challenging benchmarks, including Montezuma's Revenge.", + "authors": "Yuda Song, Yifei Zhou, Ayush Sekhari, J. Andrew Bagnell, Akshay Krishnamurthy, Wen Sun", + "published": "2022-10-13", + "updated": "2023-03-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2104.06294v1", + "title": "Online and Offline Reinforcement Learning by Planning with a Learned Model", + "abstract": "Learning efficiently from small amounts of data has long been the focus of\nmodel-based reinforcement learning, both for the online case when interacting\nwith the environment and the offline case when learning from a fixed dataset.\nHowever, to date no single unified algorithm could demonstrate state-of-the-art\nresults in both settings. In this work, we describe the Reanalyse algorithm\nwhich uses model-based policy and value improvement operators to compute new\nimproved training targets on existing data points, allowing efficient learning\nfor data budgets varying by several orders of magnitude. 
We further show that\nReanalyse can also be used to learn entirely from demonstrations without any\nenvironment interactions, as in the case of offline Reinforcement Learning\n(offline RL). Combining Reanalyse with the MuZero algorithm, we introduce\nMuZero Unplugged, a single unified algorithm for any data budget, including\noffline RL. In contrast to previous work, our algorithm does not require any\nspecial adaptations for the off-policy or offline RL settings. MuZero Unplugged\nsets new state-of-the-art results in the RL Unplugged offline RL benchmark as\nwell as in the online RL benchmark of Atari in the standard 200 million frame\nsetting.", + "authors": "Julian Schrittwieser, Thomas Hubert, Amol Mandhane, Mohammadamin Barekatain, Ioannis Antonoglou, David Silver", + "published": "2021-04-13", + "updated": "2021-04-13", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.05440v1", + "title": "Dealing with the Unknown: Pessimistic Offline Reinforcement Learning", + "abstract": "Reinforcement Learning (RL) has been shown effective in domains where the\nagent can learn policies by actively interacting with its operating\nenvironment. However, if we change the RL scheme to offline setting where the\nagent can only update its policy via static datasets, one of the major issues\nin offline reinforcement learning emerges, i.e. distributional shift. We\npropose a Pessimistic Offline Reinforcement Learning (PessORL) algorithm to\nactively lead the agent back to the area where it is familiar by manipulating\nthe value function. We focus on problems caused by out-of-distribution (OOD)\nstates, and deliberately penalize high values at states that are absent in the\ntraining dataset, so that the learned pessimistic value function lower bounds\nthe true value anywhere within the state space. We evaluate the PessORL\nalgorithm on various benchmark tasks, where we show that our method gains\nbetter performance by explicitly handling OOD states, when compared to those\nmethods merely considering OOD actions.", + "authors": "Jinning Li, Chen Tang, Masayoshi Tomizuka, Wei Zhan", + "published": "2021-11-09", + "updated": "2021-11-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2210.00750v2", + "title": "Offline Reinforcement Learning with Differentiable Function Approximation is Provably Efficient", + "abstract": "Offline reinforcement learning, which aims at optimizing sequential\ndecision-making strategies with historical data, has been extensively applied\nin real-life applications. State-Of-The-Art algorithms usually leverage\npowerful function approximators (e.g. neural networks) to alleviate the sample\ncomplexity hurdle for better empirical performances. Despite the successes, a\nmore systematic understanding of the statistical complexity for function\napproximation remains lacking. Towards bridging the gap, we take a step by\nconsidering offline reinforcement learning with differentiable function class\napproximation (DFA). This function class naturally incorporates a wide range of\nmodels with nonlinear/nonconvex structures. 
Most importantly, we show offline\nRL with differentiable function approximation is provably efficient by\nanalyzing the pessimistic fitted Q-learning (PFQL) algorithm, and our results\nprovide the theoretical basis for understanding a variety of practical\nheuristics that rely on Fitted Q-Iteration style design. In addition, we\nfurther improve our guarantee with a tighter instance-dependent\ncharacterization. We hope our work could draw interest in studying\nreinforcement learning with differentiable function approximation beyond the\nscope of current research.", + "authors": "Ming Yin, Mengdi Wang, Yu-Xiang Wang", + "published": "2022-10-03", + "updated": "2022-11-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2212.08232v1", + "title": "Offline Robot Reinforcement Learning with Uncertainty-Guided Human Expert Sampling", + "abstract": "Recent advances in batch (offline) reinforcement learning have shown\npromising results in learning from available offline data and proved offline\nreinforcement learning to be an essential toolkit in learning control policies\nin a model-free setting. An offline reinforcement learning algorithm applied to\na dataset collected by a suboptimal non-learning-based algorithm can result in\na policy that outperforms the behavior agent used to collect the data. Such a\nscenario is frequent in robotics, where existing automation is collecting\noperational data. Although offline learning techniques can learn from data\ngenerated by a sub-optimal behavior agent, there is still an opportunity to\nimprove the sample complexity of existing offline reinforcement learning\nalgorithms by strategically introducing human demonstration data into the\ntraining process. To this end, we propose a novel approach that uses\nuncertainty estimation to trigger the injection of human demonstration data and\nguide policy training towards optimal behavior while reducing overall sample\ncomplexity. Our experiments show that this approach is more sample efficient\nwhen compared to a naive way of combining expert data with data collected from\na sub-optimal agent. We augmented an existing offline reinforcement learning\nalgorithm Conservative Q-Learning with our approach and performed experiments\non data collected from MuJoCo and OffWorld Gym learning environments.", + "authors": "Ashish Kumar, Ilya Kuzovkin", + "published": "2022-12-16", + "updated": "2022-12-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2403.11574v1", + "title": "Offline Multitask Representation Learning for Reinforcement Learning", + "abstract": "We study offline multitask representation learning in reinforcement learning\n(RL), where a learner is provided with an offline dataset from different tasks\nthat share a common representation and is asked to learn the shared\nrepresentation. We theoretically investigate offline multitask low-rank RL, and\npropose a new algorithm called MORL for offline multitask representation\nlearning. Furthermore, we examine downstream RL in reward-free, offline and\nonline scenarios, where a new task is introduced to the agent that shares the\nsame representation as the upstream offline tasks. 
Our theoretical results\ndemonstrate the benefits of using the learned representation from the upstream\noffline task instead of directly learning the representation of the low-rank\nmodel.", + "authors": "Haque Ishfaq, Thanh Nguyen-Tang, Songtao Feng, Raman Arora, Mengdi Wang, Ming Yin, Doina Precup", + "published": "2024-03-18", + "updated": "2024-03-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.02000v1", + "title": "Hybrid Value Estimation for Off-policy Evaluation and Offline Reinforcement Learning", + "abstract": "Value function estimation is an indispensable subroutine in reinforcement\nlearning, which becomes more challenging in the offline setting. In this paper,\nwe propose Hybrid Value Estimation (HVE) to reduce value estimation error,\nwhich trades off bias and variance by balancing between the value estimation\nfrom offline data and the learned model. Theoretical analysis discloses that\nHVE enjoys a better error bound than the direct methods. HVE can be leveraged\nin both off-policy evaluation and offline reinforcement learning settings. We,\ntherefore, provide two concrete algorithms Off-policy HVE (OPHVE) and\nModel-based Offline HVE (MOHVE), respectively. Empirical evaluations on MuJoCo\ntasks corroborate the theoretical claim. OPHVE outperforms other off-policy\nevaluation methods in all three metrics measuring the estimation effectiveness,\nwhile MOHVE achieves better or comparable performance with state-of-the-art\noffline reinforcement learning algorithms. We hope that HVE could shed some\nlight on further research on reinforcement learning from fixed data.", + "authors": "Xue-Kun Jin, Xu-Hui Liu, Shengyi Jiang, Yang Yu", + "published": "2022-06-04", + "updated": "2022-06-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2307.11620v2", + "title": "Offline Multi-Agent Reinforcement Learning with Implicit Global-to-Local Value Regularization", + "abstract": "Offline reinforcement learning (RL) has received considerable attention in\nrecent years due to its attractive capability of learning policies from offline\ndatasets without environmental interactions. Despite some success in the\nsingle-agent setting, offline multi-agent RL (MARL) remains to be a challenge.\nThe large joint state-action space and the coupled multi-agent behaviors pose\nextra complexities for offline policy optimization. Most existing offline MARL\nstudies simply apply offline data-related regularizations on individual agents,\nwithout fully considering the multi-agent system at the global level. In this\nwork, we present OMIGA, a new offline m ulti-agent RL algorithm with implicit\nglobal-to-local v alue regularization. OMIGA provides a principled framework to\nconvert global-level value regularization into equivalent implicit local value\nregularizations and simultaneously enables in-sample learning, thus elegantly\nbridging multi-agent value decomposition and policy learning with offline\nregularizations. 
Based on comprehensive experiments on the offline multi-agent\nMuJoCo and StarCraft II micro-management tasks, we show that OMIGA achieves\nsuperior performance over the state-of-the-art offline MARL methods in almost\nall tasks.", + "authors": "Xiangsen Wang, Haoran Xu, Yinan Zheng, Xianyuan Zhan", + "published": "2023-07-21", + "updated": "2023-11-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.MA" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2404.10393v1", + "title": "Offline Trajectory Generalization for Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) aims to learn policies from static\ndatasets of previously collected trajectories. Existing methods for offline RL\neither constrain the learned policy to the support of offline data or utilize\nmodel-based virtual environments to generate simulated rollouts. However, these\nmethods suffer from (i) poor generalization to unseen states; and (ii) trivial\nimprovement from low-qualified rollout simulation. In this paper, we propose\noffline trajectory generalization through world transformers for offline\nreinforcement learning (OTTO). Specifically, we use casual Transformers, a.k.a.\nWorld Transformers, to predict state dynamics and the immediate reward. Then we\npropose four strategies to use World Transformers to generate high-rewarded\ntrajectory simulation by perturbing the offline data. Finally, we jointly use\noffline data with simulated data to train an offline RL algorithm. OTTO serves\nas a plug-in module and can be integrated with existing offline RL methods to\nenhance them with better generalization capability of transformers and\nhigh-rewarded data augmentation. Conducting extensive experiments on D4RL\nbenchmark datasets, we verify that OTTO significantly outperforms\nstate-of-the-art offline RL methods.", + "authors": "Ziqi Zhao, Zhaochun Ren, Liu Yang, Fajie Yuan, Pengjie Ren, Zhumin Chen, jun Ma, Xin Xin", + "published": "2024-04-16", + "updated": "2024-04-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2303.07693v1", + "title": "Adaptive Policy Learning for Offline-to-Online Reinforcement Learning", + "abstract": "Conventional reinforcement learning (RL) needs an environment to collect\nfresh data, which is impractical when online interactions are costly. Offline\nRL provides an alternative solution by directly learning from the previously\ncollected dataset. However, it will yield unsatisfactory performance if the\nquality of the offline datasets is poor. In this paper, we consider an\noffline-to-online setting where the agent is first learned from the offline\ndataset and then trained online, and propose a framework called Adaptive Policy\nLearning for effectively taking advantage of offline and online data.\nSpecifically, we explicitly consider the difference between the online and\noffline data and apply an adaptive update scheme accordingly, that is, a\npessimistic update strategy for the offline dataset and an optimistic/greedy\nupdate scheme for the online dataset. Such a simple and effective method\nprovides a way to mix the offline and online RL and achieve the best of both\nworlds. 
We further provide two detailed algorithms for implementing the\nframework through embedding value or policy-based RL algorithms into it.\nFinally, we conduct extensive experiments on popular continuous control tasks,\nand results show that our algorithm can learn the expert policy with high\nsample efficiency even when the quality of offline dataset is poor, e.g.,\nrandom dataset.", + "authors": "Han Zheng, Xufang Luo, Pengfei Wei, Xuan Song, Dongsheng Li, Jing Jiang", + "published": "2023-03-14", + "updated": "2023-03-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.08569v2", + "title": "Bootstrapped Transformer for Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) aims at learning policies from previously\ncollected static trajectory data without interacting with the real environment.\nRecent works provide a novel perspective by viewing offline RL as a generic\nsequence generation problem, adopting sequence models such as Transformer\narchitecture to model distributions over trajectories, and repurposing beam\nsearch as a planning algorithm. However, the training datasets utilized in\ngeneral offline RL tasks are quite limited and often suffer from insufficient\ndistribution coverage, which could be harmful to training sequence generation\nmodels yet has not drawn enough attention in the previous works. In this paper,\nwe propose a novel algorithm named Bootstrapped Transformer, which incorporates\nthe idea of bootstrapping and leverages the learned model to self-generate more\noffline data to further boost the sequence model training. We conduct extensive\nexperiments on two offline RL benchmarks and demonstrate that our model can\nlargely remedy the existing offline RL training limitations and beat other\nstrong baseline methods. We also analyze the generated pseudo data and the\nrevealed characteristics may shed some light on offline RL training. The codes\nare available at https://seqml.github.io/bootorl.", + "authors": "Kerong Wang, Hanye Zhao, Xufang Luo, Kan Ren, Weinan Zhang, Dongsheng Li", + "published": "2022-06-17", + "updated": "2022-10-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2305.03360v1", + "title": "A Survey on Offline Model-Based Reinforcement Learning", + "abstract": "Model-based approaches are becoming increasingly popular in the field of\noffline reinforcement learning, with high potential in real-world applications\ndue to the model's capability of thoroughly utilizing the large historical\ndatasets available with supervised learning techniques. This paper presents a\nliterature review of recent work in offline model-based reinforcement learning,\na field that utilizes model-based approaches in offline reinforcement learning.\nThe survey provides a brief overview of the concepts and recent developments in\nboth offline reinforcement learning and model-based reinforcement learning, and\ndiscuss the intersection of the two fields. We then presents key relevant\npapers in the field of offline model-based reinforcement learning and discuss\ntheir methods, particularly their approaches in solving the issue of\ndistributional shift, the main problem faced by all current offline model-based\nreinforcement learning methods. 
We further discuss key challenges faced by the\nfield, and suggest possible directions for future work.", + "authors": "Haoyang He", + "published": "2023-05-05", + "updated": "2023-05-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.SY", + "eess.SY", + "I.2.6; I.2.8" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2211.08016v1", + "title": "Contextual Transformer for Offline Meta Reinforcement Learning", + "abstract": "The pretrain-finetuning paradigm in large-scale sequence models has made\nsignificant progress in natural language processing and computer vision tasks.\nHowever, such a paradigm is still hindered by several challenges in\nReinforcement Learning (RL), including the lack of self-supervised pretraining\nalgorithms based on offline data and efficient fine-tuning/prompt-tuning over\nunseen downstream tasks. In this work, we explore how prompts can improve\nsequence modeling-based offline reinforcement learning (offline-RL) algorithms.\nFirstly, we propose prompt tuning for offline RL, where a context vector\nsequence is concatenated with the input to guide the conditional policy\ngeneration. As such, we can pretrain a model on the offline dataset with\nself-supervised loss and learn a prompt to guide the policy towards desired\nactions. Secondly, we extend our framework to Meta-RL settings and propose\nContextual Meta Transformer (CMT); CMT leverages the context among different\ntasks as the prompt to improve generalization on unseen tasks. We conduct\nextensive experiments across three different offline-RL settings: offline\nsingle-agent RL on the D4RL dataset, offline Meta-RL on the MuJoCo benchmark,\nand offline MARL on the SMAC benchmark. Superior results validate the strong\nperformance, and generality of our methods.", + "authors": "Runji Lin, Ye Li, Xidong Feng, Zhaowei Zhang, Xian Hong Wu Fung, Haifeng Zhang, Jun Wang, Yali Du, Yaodong Yang", + "published": "2022-11-15", + "updated": "2022-11-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.03383v2", + "title": "On the Role of Discount Factor in Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) enables effective learning from\npreviously collected data without exploration, which shows great promise in\nreal-world applications when exploration is expensive or even infeasible. The\ndiscount factor, $\\gamma$, plays a vital role in improving online RL sample\nefficiency and estimation accuracy, but the role of the discount factor in\noffline RL is not well explored. This paper examines two distinct effects of\n$\\gamma$ in offline RL with theoretical analysis, namely the regularization\neffect and the pessimism effect. On the one hand, $\\gamma$ is a regulator to\ntrade-off optimality with sample efficiency upon existing offline techniques.\nOn the other hand, lower guidance $\\gamma$ can also be seen as a way of\npessimism where we optimize the policy's performance in the worst possible\nmodels. We empirically verify the above theoretical observation with tabular\nMDPs and standard D4RL tasks. 
The results show that the discount factor plays\nan essential role in the performance of offline RL algorithms, both in small\ndata regimes on top of existing offline methods and in large data regimes without\nother conservative methods.", + "authors": "Hao Hu, Yiqin Yang, Qianchuan Zhao, Chongjie Zhang", + "published": "2022-06-07", + "updated": "2022-06-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.03788v2", + "title": "d3rlpy: An Offline Deep Reinforcement Learning Library", + "abstract": "In this paper, we introduce d3rlpy, an open-source offline deep\nreinforcement learning (RL) library for Python. d3rlpy supports a set of\noffline deep RL algorithms as well as off-policy online algorithms via a fully\ndocumented plug-and-play API. To address a reproducibility issue, we conduct a\nlarge-scale benchmark with the D4RL and Atari 2600 datasets to ensure implementation\nquality and provide experimental scripts and full tables of results. The d3rlpy\nsource code can be found on GitHub: \\url{https://github.com/takuseno/d3rlpy}.", + "authors": "Takuma Seno, Michita Imai", + "published": "2021-11-06", + "updated": "2022-12-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.13630v1", + "title": "Offline Skill Graph (OSG): A Framework for Learning and Planning using Offline Reinforcement Learning Skills", + "abstract": "Reinforcement Learning has received wide interest due to its success in\ncompetitive games. Yet, its adoption in everyday applications is limited (e.g.\nindustrial, home, healthcare, etc.). In this paper, we address this limitation\nby presenting a framework for planning over offline skills and solving complex\ntasks in real-world environments. Our framework is comprised of three modules\nthat together enable the agent to learn from previously collected data and\ngeneralize over it to solve long-horizon tasks. We demonstrate our approach by\ntesting it on a robotic arm that is required to solve complex tasks.", + "authors": "Ben-ya Halevy, Yehudit Aperstein, Dotan Di Castro", + "published": "2023-06-23", + "updated": "2023-06-23", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.AI", + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2302.03086v1", + "title": "DITTO: Offline Imitation Learning with World Models", + "abstract": "We propose DITTO, an offline imitation learning algorithm which uses world\nmodels and on-policy reinforcement learning to address the problem of\ncovariate shift, without access to an oracle or any additional online\ninteractions. We discuss how world models enable offline, on-policy imitation\nlearning, and propose a simple intrinsic reward defined in the world model\nlatent space that induces imitation learning by reinforcement learning.\nTheoretically, we show that our formulation induces a divergence bound between\nexpert and learner, in turn bounding the difference in reward.
We test our\nmethod on difficult Atari environments from pixels alone, and achieve\nstate-of-the-art performance in the offline setting.", + "authors": "Branton DeMoss, Paul Duckworth, Nick Hawes, Ingmar Posner", + "published": "2023-02-06", + "updated": "2023-02-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2303.04268v1", + "title": "On the Sample Complexity of Vanilla Model-Based Offline Reinforcement Learning with Dependent Samples", + "abstract": "Offline reinforcement learning (offline RL) considers problems where learning\nis performed using only previously collected samples and is helpful for the\nsettings in which collecting new data is costly or risky. In model-based\noffline RL, the learner performs estimation (or optimization) using a model\nconstructed according to the empirical transition frequencies. We analyze the\nsample complexity of vanilla model-based offline RL with dependent samples in\nthe infinite-horizon discounted-reward setting. In our setting, the samples\nobey the dynamics of the Markov decision process and, consequently, may have\ninterdependencies. Under no assumption of independent samples, we provide a\nhigh-probability, polynomial sample complexity bound for vanilla model-based\noff-policy evaluation that requires partial or uniform coverage. We extend this\nresult to the off-policy optimization under uniform coverage. As a comparison\nto the model-based approach, we analyze the sample complexity of off-policy\nevaluation with vanilla importance sampling in the infinite-horizon setting.\nFinally, we provide an estimator that outperforms the sample-mean estimator for\nalmost deterministic dynamics that are prevalent in reinforcement learning.", + "authors": "Mustafa O. Karabag, Ufuk Topcu", + "published": "2023-03-07", + "updated": "2023-03-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.13464v3", + "title": "When to Trust Your Simulator: Dynamics-Aware Hybrid Offline-and-Online Reinforcement Learning", + "abstract": "Learning effective reinforcement learning (RL) policies to solve real-world\ncomplex tasks can be quite challenging without a high-fidelity simulation\nenvironment. In most cases, we are only given imperfect simulators with\nsimplified dynamics, which inevitably lead to severe sim-to-real gaps in RL\npolicy learning. The recently emerged field of offline RL provides another\npossibility to learn policies directly from pre-collected historical data.\nHowever, to achieve reasonable performance, existing offline RL algorithms need\nimpractically large offline data with sufficient state-action space coverage\nfor training. This brings up a new question: is it possible to combine learning\nfrom limited real data in offline RL and unrestricted exploration through\nimperfect simulators in online RL to address the drawbacks of both approaches?\nIn this study, we propose the Dynamics-Aware Hybrid Offline-and-Online\nReinforcement Learning (H2O) framework to provide an affirmative answer to this\nquestion. H2O introduces a dynamics-aware policy evaluation scheme, which\nadaptively penalizes the Q function learning on simulated state-action pairs\nwith large dynamics gaps, while also simultaneously allowing learning from a\nfixed real-world dataset. 
Through extensive simulation and real-world tasks, as\nwell as theoretical analysis, we demonstrate the superior performance of H2O\nagainst other cross-domain online and offline RL algorithms. H2O provides a\nbrand new hybrid offline-and-online RL paradigm, which can potentially shed\nlight on future RL algorithm design for solving practical real-world tasks.", + "authors": "Haoyi Niu, Shubham Sharma, Yiwen Qiu, Ming Li, Guyue Zhou, Jianming Hu, Xianyuan Zhan", + "published": "2022-06-27", + "updated": "2023-01-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.08331v1", + "title": "Accelerating Offline Reinforcement Learning Application in Real-Time Bidding and Recommendation: Potential Use of Simulation", + "abstract": "In recommender systems (RecSys) and real-time bidding (RTB) for online\nadvertisements, we often try to optimize sequential decision making using\nbandit and reinforcement learning (RL) techniques. In these applications,\noffline reinforcement learning (offline RL) and off-policy evaluation (OPE) are\nbeneficial because they enable safe policy optimization using only logged data\nwithout any risky online interaction. In this position paper, we explore the\npotential of using simulation to accelerate practical research of offline RL\nand OPE, particularly in RecSys and RTB. Specifically, we discuss how\nsimulation can help us conduct empirical research of offline RL and OPE. We\ntake a position to argue that we should effectively use simulations in the\nempirical research of offline RL and OPE. To refute the counterclaim that\nexperiments using only real-world data are preferable, we first point out the\nunderlying risks and reproducibility issue in real-world experiments. Then, we\ndescribe how these issues can be addressed by using simulations. Moreover, we\nshow how to incorporate the benefits of both real-world and simulation-based\nexperiments to defend our position. Finally, we also present an open challenge\nto further facilitate practical research of offline RL and OPE in RecSys and\nRTB, with respect to public simulation platforms. As a possible solution for\nthe issue, we show our ongoing open source project and its potential use case.\nWe believe that building and utilizing simulation-based evaluation platforms\nfor offline RL and OPE will be of great interest and relevance for the RecSys\nand RTB community.", + "authors": "Haruka Kiyohara, Kosuke Kawakami, Yuta Saito", + "published": "2021-09-17", + "updated": "2021-09-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2308.11336v1", + "title": "On the Opportunities and Challenges of Offline Reinforcement Learning for Recommender Systems", + "abstract": "Reinforcement learning serves as a potent tool for modeling dynamic user\ninterests within recommender systems, garnering increasing research attention\nof late. However, a significant drawback persists: its poor data efficiency,\nstemming from its interactive nature. 
The training of reinforcement\nlearning-based recommender systems demands expensive online interactions to\namass adequate trajectories, essential for agents to learn user preferences.\nThis inefficiency renders reinforcement learning-based recommender systems a\nformidable undertaking, necessitating the exploration of potential solutions.\nRecent strides in offline reinforcement learning present a new perspective.\nOffline reinforcement learning empowers agents to glean insights from offline\ndatasets and deploy learned policies in online settings. Given that recommender\nsystems possess extensive offline datasets, the framework of offline\nreinforcement learning aligns seamlessly. Despite being a burgeoning field,\nworks centered on recommender systems utilizing offline reinforcement learning\nremain limited. This survey aims to introduce and delve into offline\nreinforcement learning within recommender systems, offering an inclusive review\nof existing literature in this domain. Furthermore, we strive to underscore\nprevalent challenges, opportunities, and future pathways, poised to propel\nresearch in this evolving field.", + "authors": "Xiaocong Chen, Siyu Wang, Julian McAuley, Dietmar Jannach, Lina Yao", + "published": "2023-08-22", + "updated": "2023-08-22", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2302.00935v3", + "title": "Policy Expansion for Bridging Offline-to-Online Reinforcement Learning", + "abstract": "Pre-training with offline data and online fine-tuning using reinforcement\nlearning is a promising strategy for learning control policies by leveraging\nthe best of both worlds in terms of sample efficiency and performance. One\nnatural approach is to initialize the policy for online learning with the one\ntrained offline. In this work, we introduce a policy expansion scheme for this\ntask. After learning the offline policy, we use it as one candidate policy in a\npolicy set. We then expand the policy set with another policy which will be\nresponsible for further learning. The two policies will be composed in an\nadaptive manner for interacting with the environment. With this approach, the\npolicy previously learned offline is fully retained during online learning,\nthus mitigating potential issues such as destroying the useful behaviors of\nthe offline policy in the initial stage of online learning, while allowing the\noffline policy to participate in exploration naturally in an adaptive manner.\nMoreover, new useful behaviors can potentially be captured by the newly added\npolicy through learning. Experiments are conducted on a number of tasks and the\nresults demonstrate the effectiveness of the proposed approach.", + "authors": "Haichao Zhang, We Xu, Haonan Yu", + "published": "2023-02-02", + "updated": "2023-04-15", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2310.05422v1", + "title": "Reward-Consistent Dynamics Models are Strongly Generalizable for Offline Reinforcement Learning", + "abstract": "Learning a precise dynamics model can be crucial for offline reinforcement\nlearning, which, unfortunately, has been found to be quite challenging.\nDynamics models that are learned by fitting historical transitions often\nstruggle to generalize to unseen transitions.
In this study, we identify a\nhidden but pivotal factor termed dynamics reward that remains consistent across\ntransitions, offering a pathway to better generalization. Therefore, we propose\nthe idea of reward-consistent dynamics models: any trajectory generated by the\ndynamics model should maximize the dynamics reward derived from the data. We\nimplement this idea as the MOREC (Model-based Offline reinforcement learning\nwith Reward Consistency) method, which can be seamlessly integrated into\nprevious offline model-based reinforcement learning (MBRL) methods. MOREC\nlearns a generalizable dynamics reward function from offline data, which is\nsubsequently employed as a transition filter in any offline MBRL method: when\ngenerating transitions, the dynamics model generates a batch of transitions and\nselects the one with the highest dynamics reward value. On a synthetic task, we\nvisualize that MOREC has a strong generalization ability and can surprisingly\nrecover some distant unseen transitions. On 21 offline tasks in D4RL and NeoRL\nbenchmarks, MOREC improves the previous state-of-the-art performance by a\nsignificant margin, i.e., 4.6% on D4RL tasks and 25.9% on NeoRL tasks. Notably,\nMOREC is the first method that can achieve above 95% online RL performance in 6\nout of 12 D4RL tasks and 3 out of 9 NeoRL tasks.", + "authors": "Fan-Ming Luo, Tian Xu, Xingchen Cao, Yang Yu", + "published": "2023-10-09", + "updated": "2023-10-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2202.11566v1", + "title": "Pessimistic Bootstrapping for Uncertainty-Driven Offline Reinforcement Learning", + "abstract": "Offline Reinforcement Learning (RL) aims to learn policies from previously\ncollected datasets without exploring the environment. Directly applying\noff-policy algorithms to offline RL usually fails due to the extrapolation\nerror caused by the out-of-distribution (OOD) actions. Previous methods tackle\nsuch problem by penalizing the Q-values of OOD actions or constraining the\ntrained policy to be close to the behavior policy. Nevertheless, such methods\ntypically prevent the generalization of value functions beyond the offline data\nand also lack precise characterization of OOD data. In this paper, we propose\nPessimistic Bootstrapping for offline RL (PBRL), a purely uncertainty-driven\noffline algorithm without explicit policy constraints. Specifically, PBRL\nconducts uncertainty quantification via the disagreement of bootstrapped\nQ-functions, and performs pessimistic updates by penalizing the value function\nbased on the estimated uncertainty. To tackle the extrapolating error, we\nfurther propose a novel OOD sampling method. We show that such OOD sampling and\npessimistic bootstrapping yields provable uncertainty quantifier in linear\nMDPs, thus providing the theoretical underpinning for PBRL. 
Extensive\nexperiments on D4RL benchmark show that PBRL has better performance compared to\nthe state-of-the-art algorithms.", + "authors": "Chenjia Bai, Lingxiao Wang, Zhuoran Yang, Zhihong Deng, Animesh Garg, Peng Liu, Zhaoran Wang", + "published": "2022-02-23", + "updated": "2022-02-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2211.08251v1", + "title": "Offline Reinforcement Learning with Adaptive Behavior Regularization", + "abstract": "Offline reinforcement learning (RL) defines a sample-efficient learning\nparadigm, where a policy is learned from static and previously collected\ndatasets without additional interaction with the environment. The major\nobstacle to offline RL is the estimation error arising from evaluating the\nvalue of out-of-distribution actions. To tackle this problem, most existing\noffline RL methods attempt to acquire a policy both ``close\" to the behaviors\ncontained in the dataset and sufficiently improved over them, which requires a\ntrade-off between two possibly conflicting targets. In this paper, we propose a\nnovel approach, which we refer to as adaptive behavior regularization (ABR), to\nbalance this critical trade-off. By simply utilizing a sample-based\nregularization, ABR enables the policy to adaptively adjust its optimization\nobjective between cloning and improving over the policy used to generate the\ndataset. In the evaluation on D4RL datasets, a widely adopted benchmark for\noffline reinforcement learning, ABR can achieve improved or competitive\nperformance compared to existing state-of-the-art algorithms.", + "authors": "Yunfan Zhou, Xijun Li, Qingyu Qu", + "published": "2022-11-15", + "updated": "2022-11-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2312.09844v2", + "title": "Small Dataset, Big Gains: Enhancing Reinforcement Learning by Offline Pre-Training with Model Based Augmentation", + "abstract": "Offline reinforcement learning leverages pre-collected datasets of\ntransitions to train policies. It can serve as effective initialization for\nonline algorithms, enhancing sample efficiency and speeding up convergence.\nHowever, when such datasets are limited in size and quality, offline\npre-training can produce sub-optimal policies and lead to degraded online\nreinforcement learning performance. In this paper we propose a model-based data\naugmentation strategy to maximize the benefits of offline reinforcement\nlearning pre-training and reduce the scale of data needed to be effective. Our\napproach leverages a world model of the environment trained on the offline\ndataset to augment states during offline pre-training. We evaluate our approach\non a variety of MuJoCo robotic tasks and our results show it can jump-start\nonline fine-tuning and substantially reduce - in some cases by an order of\nmagnitude - the required number of environment interactions.", + "authors": "Girolamo Macaluso, Alessandro Sestini, Andrew D. 
Bagdanov", + "published": "2023-12-15", + "updated": "2023-12-19", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.07344v1", + "title": "Measurement Scheduling for ICU Patients with Offline Reinforcement Learning", + "abstract": "Scheduling laboratory tests for ICU patients presents a significant\nchallenge. Studies show that 20-40% of lab tests ordered in the ICU are\nredundant and could be eliminated without compromising patient safety. Prior\nwork has leveraged offline reinforcement learning (Offline-RL) to find optimal\npolicies for ordering lab tests based on patient information. However, new ICU\npatient datasets have since been released, and various advancements have been\nmade in Offline-RL methods. In this study, we first introduce a preprocessing\npipeline for the newly-released MIMIC-IV dataset geared toward time-series\ntasks. We then explore the efficacy of state-of-the-art Offline-RL methods in\nidentifying better policies for ICU patient lab test scheduling. Besides\nassessing methodological performance, we also discuss the overall suitability\nand practicality of using Offline-RL frameworks for scheduling laboratory tests\nin ICU settings.", + "authors": "Zongliang Ji, Anna Goldenberg, Rahul G. Krishnan", + "published": "2024-02-12", + "updated": "2024-02-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2106.06860v2", + "title": "A Minimalist Approach to Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) defines the task of learning from a fixed\nbatch of data. Due to errors in value estimation from out-of-distribution\nactions, most offline RL algorithms take the approach of constraining or\nregularizing the policy with the actions contained in the dataset. Built on\npre-existing RL algorithms, modifications to make an RL algorithm work offline\ncomes at the cost of additional complexity. Offline RL algorithms introduce new\nhyperparameters and often leverage secondary components such as generative\nmodels, while adjusting the underlying RL algorithm. In this paper we aim to\nmake a deep RL algorithm work while making minimal changes. We find that we can\nmatch the performance of state-of-the-art offline RL algorithms by simply\nadding a behavior cloning term to the policy update of an online RL algorithm\nand normalizing the data. The resulting algorithm is a simple to implement and\ntune baseline, while more than halving the overall run time by removing the\nadditional computational overhead of previous methods.", + "authors": "Scott Fujimoto, Shixiang Shane Gu", + "published": "2021-06-12", + "updated": "2021-12-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.10442v1", + "title": "Robust Task Representations for Offline Meta-Reinforcement Learning via Contrastive Learning", + "abstract": "We study offline meta-reinforcement learning, a practical reinforcement\nlearning paradigm that learns from offline data to adapt to new tasks. The\ndistribution of offline data is determined jointly by the behavior policy and\nthe task. 
Existing offline meta-reinforcement learning algorithms cannot\ndistinguish these factors, making task representations unstable to the change\nof behavior policies. To address this problem, we propose a contrastive\nlearning framework for task representations that are robust to the distribution\nmismatch of behavior policies in training and test. We design a bi-level\nencoder structure, use mutual information maximization to formalize task\nrepresentation learning, derive a contrastive learning objective, and introduce\nseveral approaches to approximate the true distribution of negative pairs.\nExperiments on a variety of offline meta-reinforcement learning benchmarks\ndemonstrate the advantages of our method over prior methods, especially on the\ngeneralization to out-of-distribution behavior policies. The code is available\nat https://github.com/PKU-AI-Edge/CORRO.", + "authors": "Haoqi Yuan, Zongqing Lu", + "published": "2022-06-21", + "updated": "2022-06-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2310.08566v1", + "title": "Transformers as Decision Makers: Provable In-Context Reinforcement Learning via Supervised Pretraining", + "abstract": "Large transformer models pretrained on offline reinforcement learning\ndatasets have demonstrated remarkable in-context reinforcement learning (ICRL)\ncapabilities, where they can make good decisions when prompted with interaction\ntrajectories from unseen environments. However, when and how transformers can\nbe trained to perform ICRL have not been theoretically well-understood. In\nparticular, it is unclear which reinforcement-learning algorithms transformers\ncan perform in context, and how distribution mismatch in offline training data\naffects the learned algorithms. This paper provides a theoretical framework\nthat analyzes supervised pretraining for ICRL. This includes two recently\nproposed training methods -- algorithm distillation and decision-pretrained\ntransformers. First, assuming model realizability, we prove the\nsupervised-pretrained transformer will imitate the conditional expectation of\nthe expert algorithm given the observed trajectory. The generalization error\nwill scale with model capacity and a distribution divergence factor between the\nexpert and offline algorithms. Second, we show transformers with ReLU attention\ncan efficiently approximate near-optimal online reinforcement learning\nalgorithms like LinUCB and Thompson sampling for stochastic linear bandits, and\nUCB-VI for tabular Markov decision processes. This provides the first\nquantitative analysis of the ICRL capabilities of transformers pretrained from\noffline trajectories.", + "authors": "Licong Lin, Yu Bai, Song Mei", + "published": "2023-10-12", + "updated": "2023-10-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL", + "math.ST", + "stat.ML", + "stat.TH" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2404.02429v1", + "title": "AD4RL: Autonomous Driving Benchmarks for Offline Reinforcement Learning with Value-based Dataset", + "abstract": "Offline reinforcement learning has emerged as a promising technology by\nenhancing its practicality through the use of pre-collected large datasets.\nDespite its practical benefits, most algorithm development research in offline\nreinforcement learning still relies on game tasks with synthetic datasets. 
To\naddress such limitations, this paper provides autonomous driving datasets and\nbenchmarks for offline reinforcement learning research. We provide 19 datasets,\nincluding real-world human drivers' datasets, and seven popular offline\nreinforcement learning algorithms in three realistic driving scenarios. We also\nprovide a unified decision-making process model that can operate effectively\nacross different scenarios, serving as a reference framework in algorithm\ndesign. Our research lays the groundwork for further collaborations in the\ncommunity to explore practical aspects of existing reinforcement learning\nmethods. Datasets and code can be found at https://sites.google.com/view/ad4rl.", + "authors": "Dongsu Lee, Chanin Eom, Minhae Kwon", + "published": "2024-04-03", + "updated": "2024-04-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2210.13846v1", + "title": "Adaptive Behavior Cloning Regularization for Stable Offline-to-Online Reinforcement Learning", + "abstract": "Offline reinforcement learning, by learning from a fixed dataset, makes it\npossible to learn agent behaviors without interacting with the environment.\nHowever, depending on the quality of the offline dataset, such pre-trained\nagents may have limited performance and would further need to be fine-tuned\nonline by interacting with the environment. During online fine-tuning, the\nperformance of the pre-trained agent may collapse quickly due to the sudden\ndistribution shift from offline to online data. While constraints enforced by\noffline RL methods such as a behaviour cloning loss prevent this to an extent,\nthese constraints also significantly slow down online fine-tuning by forcing\nthe agent to stay close to the behavior policy. We propose to adaptively weigh\nthe behavior cloning loss during online fine-tuning based on the agent's\nperformance and training stability. Moreover, we use a randomized ensemble of Q\nfunctions to further increase the sample efficiency of online fine-tuning by\nperforming a large number of learning updates. Experiments show that the\nproposed method yields state-of-the-art offline-to-online reinforcement\nlearning performance on the popular D4RL benchmark. Code is available:\n\\url{https://github.com/zhaoyi11/adaptive_bc}.", + "authors": "Yi Zhao, Rinu Boney, Alexander Ilin, Juho Kannala, Joni Pajarinen", + "published": "2022-10-25", + "updated": "2022-10-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2011.14379v1", + "title": "Offline Reinforcement Learning Hands-On", + "abstract": "Offline Reinforcement Learning (RL) aims to turn large datasets into powerful\ndecision-making engines without any online interactions with the environment.\nThis great promise has motivated a large amount of research that hopes to\nreplicate the success RL has experienced in simulation settings. This work\naims to reflect upon these efforts from a practitioner's viewpoint. We start\nby discussing the dataset properties that we hypothesise can characterise the\ntype of offline methods that will be the most successful. We then verify these\nclaims through a set of experiments and designed datasets generated from\nenvironments with both discrete and continuous action spaces.
We experimentally\nvalidate that diversity and high-return examples in the data are crucial to the\nsuccess of offline RL and show that behavioural cloning remains a strong\ncontender compared to its contemporaries. Overall, this work stands as a\ntutorial to help people build their intuition on today's offline RL methods and\ntheir applicability.", + "authors": "Louis Monier, Jakub Kmec, Alexandre Laterre, Thomas Pierrot, Valentin Courgeau, Olivier Sigaud, Karim Beguir", + "published": "2020-11-29", + "updated": "2020-11-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.04779v3", + "title": "Challenges and Opportunities in Offline Reinforcement Learning from Visual Observations", + "abstract": "Offline reinforcement learning has shown great promise in leveraging large\npre-collected datasets for policy learning, allowing agents to forgo\noften-expensive online data collection. However, offline reinforcement learning\nfrom visual observations with continuous action spaces remains under-explored,\nwith a limited understanding of the key challenges in this complex domain. In\nthis paper, we establish simple baselines for continuous control in the visual\ndomain and introduce a suite of benchmarking tasks for offline reinforcement\nlearning from visual observations designed to better represent the data\ndistributions present in real-world offline RL problems and guided by a set of\ndesiderata for offline RL from visual observations, including robustness to\nvisual distractions and visually identifiable changes in dynamics. Using this\nsuite of benchmarking tasks, we show that simple modifications to two popular\nvision-based online reinforcement learning algorithms, DreamerV2 and DrQ-v2,\nsuffice to outperform existing offline RL methods and establish competitive\nbaselines for continuous control in the visual domain. We rigorously evaluate\nthese algorithms and perform an empirical evaluation of the differences between\nstate-of-the-art model-based and model-free offline RL methods for continuous\ncontrol from visual observations. All code and data used in this evaluation are\nopen-sourced to facilitate progress in this domain.", + "authors": "Cong Lu, Philip J. Ball, Tim G. J. Rudner, Jack Parker-Holder, Michael A. Osborne, Yee Whye Teh", + "published": "2022-06-09", + "updated": "2023-07-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2201.05433v1", + "title": "Comparing Model-free and Model-based Algorithms for Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) Algorithms are often designed with\nenvironments such as MuJoCo in mind, in which the planning horizon is extremely\nlong and no noise exists. We compare model-free, model-based, as well as hybrid\noffline RL approaches on various industrial benchmark (IB) datasets to test the\nalgorithms in settings closer to real world problems, including complex noise\nand partially observable states. 
We find that on the IB, hybrid approaches face\nsevere difficulties and that simpler algorithms, such as rollout based\nalgorithms or model-free algorithms with simpler regularizers perform best on\nthe datasets.", + "authors": "Phillip Swazinna, Steffen Udluft, Daniel Hein, Thomas Runkler", + "published": "2022-01-14", + "updated": "2022-01-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2210.05922v1", + "title": "A Unified Framework for Alternating Offline Model Training and Policy Learning", + "abstract": "In offline model-based reinforcement learning (offline MBRL), we learn a\ndynamic model from historically collected data, and subsequently utilize the\nlearned model and fixed datasets for policy learning, without further\ninteracting with the environment. Offline MBRL algorithms can improve the\nefficiency and stability of policy learning over the model-free algorithms.\nHowever, in most of the existing offline MBRL algorithms, the learning\nobjectives for the dynamic models and the policies are isolated from each\nother. Such an objective mismatch may lead to inferior performance of the\nlearned agents. In this paper, we address this issue by developing an iterative\noffline MBRL framework, where we maximize a lower bound of the true expected\nreturn, by alternating between dynamic-model training and policy learning. With\nthe proposed unified model-policy learning framework, we achieve competitive\nperformance on a wide range of continuous-control offline reinforcement\nlearning datasets. Source code is publicly released.", + "authors": "Shentao Yang, Shujian Zhang, Yihao Feng, Mingyuan Zhou", + "published": "2022-10-12", + "updated": "2022-10-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.09712v1", + "title": "Semi-Offline Reinforcement Learning for Optimized Text Generation", + "abstract": "In reinforcement learning (RL), there are two major settings for interacting\nwith the environment: online and offline. Online methods explore the\nenvironment at significant time cost, and offline methods efficiently obtain\nreward signals by sacrificing exploration capability. We propose semi-offline\nRL, a novel paradigm that smoothly transits from offline to online settings,\nbalances exploration capability and training cost, and provides a theoretical\nfoundation for comparing different RL settings. Based on the semi-offline\nformulation, we present the RL setting that is optimal in terms of optimization\ncost, asymptotic error, and overfitting error bound. Extensive experiments show\nthat our semi-offline approach is efficient and yields comparable or often\nbetter performance compared with state-of-the-art methods.", + "authors": "Changyu Chen, Xiting Wang, Yiqiao Jin, Victor Ye Dong, Li Dong, Jie Cao, Yi Liu, Rui Yan", + "published": "2023-06-16", + "updated": "2023-06-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2404.12639v1", + "title": "Single-Task Continual Offline Reinforcement Learning", + "abstract": "In this paper, we study the continual learning problem of single-task offline\nreinforcement learning. 
In the past, continual reinforcement learning usually\nonly dealt with multitasking, that is, learning multiple related or unrelated\ntasks in a row; once a task was learned, it was not relearned,\nbut only reused in subsequent processes. However, offline reinforcement learning\ntasks require continual learning from multiple different datasets for the\nsame task. Existing algorithms try to achieve the best results\non each offline dataset they are trained on, so the skills acquired from earlier\nhigh-quality datasets are overwritten after the network learns from\nsubsequent poor datasets. On the other hand, if too much emphasis is placed on\nstability, the network struggles to learn from a subsequent better dataset after learning\na poor offline dataset, and the problem of insufficient plasticity and\nnon-learning occurs. How to design a strategy that can always preserve the\nbest performance for each state in the data that has been learned is a new\nchallenge and the focus of this study. Therefore, this study proposes a new\nalgorithm, called Ensemble Offline Reinforcement Learning Based on Experience\nReplay, which introduces multiple value networks to learn the same dataset and\njudges whether a strategy has been learned by the dispersion of the value\nnetworks, to improve the performance of the network in single-task offline\nreinforcement learning.", + "authors": "Sibo Gai, Donglin Wang", + "published": "2024-04-19", + "updated": "2024-04-19", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "I.2.6" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.08900v1", + "title": "Offline Multi-Agent Reinforcement Learning with Coupled Value Factorization", + "abstract": "Offline reinforcement learning (RL) that learns policies from offline\ndatasets without environment interaction has received considerable attention in\nrecent years. Compared with the rich literature in the single-agent case,\noffline multi-agent RL is still a relatively underexplored area. Most existing\nmethods directly apply offline RL ingredients in the multi-agent setting\nwithout fully leveraging the decomposable problem structure, leading to less\nsatisfactory performance in complex tasks. We present OMAC, a new offline\nmulti-agent RL algorithm with coupled value factorization. OMAC adopts a\ncoupled value factorization scheme that decomposes the global value function\ninto local and shared components, and also maintains the credit assignment\nconsistency between the state-value and Q-value functions. Moreover, OMAC\nperforms in-sample learning on the decomposed local state-value functions,\nwhich implicitly conducts max-Q operation at the local level while avoiding\ndistributional shift caused by evaluating out-of-distribution actions.
Based on\nthe comprehensive evaluations of the offline multi-agent StarCraft II\nmicro-management tasks, we demonstrate the superior performance of OMAC over\nthe state-of-the-art offline multi-agent RL methods.", + "authors": "Xiangsen Wang, Xianyuan Zhan", + "published": "2023-06-15", + "updated": "2023-06-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.MA" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.07166v1", + "title": "Regularizing a Model-based Policy Stationary Distribution to Stabilize Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) extends the paradigm of classical RL\nalgorithms to purely learning from static datasets, without interacting with\nthe underlying environment during the learning process. A key challenge of\noffline RL is the instability of policy training, caused by the mismatch\nbetween the distribution of the offline data and the undiscounted stationary\nstate-action distribution of the learned policy. To avoid the detrimental\nimpact of distribution mismatch, we regularize the undiscounted stationary\ndistribution of the current policy towards the offline data during the policy\noptimization process. Further, we train a dynamics model to both implement this\nregularization and better estimate the stationary distribution of the current\npolicy, reducing the error induced by distribution mismatch. On a wide range of\ncontinuous-control offline RL datasets, our method indicates competitive\nperformance, which validates our algorithm. The code is publicly available.", + "authors": "Shentao Yang, Yihao Feng, Shujian Zhang, Mingyuan Zhou", + "published": "2022-06-14", + "updated": "2022-06-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2110.00188v2", + "title": "Offline Reinforcement Learning with Reverse Model-based Imagination", + "abstract": "In offline reinforcement learning (offline RL), one of the main challenges is\nto deal with the distributional shift between the learning policy and the given\ndataset. To address this problem, recent offline RL methods attempt to\nintroduce conservatism bias to encourage learning in high-confidence areas.\nModel-free approaches directly encode such bias into policy or value function\nlearning using conservative regularizations or special network structures, but\ntheir constrained policy search limits the generalization beyond the offline\ndataset. Model-based approaches learn forward dynamics models with conservatism\nquantifications and then generate imaginary trajectories to extend the offline\ndatasets. However, due to limited samples in offline datasets, conservatism\nquantifications often suffer from overgeneralization in out-of-support regions.\nThe unreliable conservative measures will mislead forward model-based\nimaginations to undesired areas, leading to overaggressive behaviors. To\nencourage more conservatism, we propose a novel model-based offline RL\nframework, called Reverse Offline Model-based Imagination (ROMI). We learn a\nreverse dynamics model in conjunction with a novel reverse policy, which can\ngenerate rollouts leading to the target goal states within the offline dataset.\nThese reverse imaginations provide informed data augmentation for model-free\npolicy learning and enable conservative generalization beyond the offline\ndataset. 
ROMI can effectively combine with off-the-shelf model-free algorithms\nto enable model-based generalization with proper conservatism. Empirical\nresults show that our method can generate more conservative behaviors and\nachieve state-of-the-art performance on offline RL benchmark tasks.", + "authors": "Jianhao Wang, Wenzhe Li, Haozhe Jiang, Guangxiang Zhu, Siyuan Li, Chongjie Zhang", + "published": "2021-10-01", + "updated": "2021-11-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.13412v2", + "title": "CLUE: Calibrated Latent Guidance for Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) aims to learn an optimal policy from\npre-collected and labeled datasets, which eliminates the time-consuming data\ncollection in online RL. However, offline RL still bears a large burden of\nspecifying/handcrafting extrinsic rewards for each transition in the offline\ndata. As a remedy for the labor-intensive labeling, we propose to endow offline\nRL tasks with a small amount of expert data and utilize the limited expert data to drive\nintrinsic rewards, thus eliminating the need for extrinsic rewards. To achieve\nthat, we introduce Calibrated Latent\ngUidancE (CLUE), which utilizes a conditional variational\nauto-encoder to learn a latent space such that intrinsic rewards can be\ndirectly quantified over the latent space. CLUE's key idea is to align the\nintrinsic rewards with the expert intention by enforcing the\nembeddings of expert data onto a calibrated contextual representation. We\ninstantiate the expert-driven intrinsic rewards in sparse-reward offline RL\ntasks, offline imitation learning (IL) tasks, and unsupervised offline RL\ntasks. Empirically, we find that CLUE can effectively improve the sparse-reward\noffline RL performance, outperform the state-of-the-art offline IL baselines,\nand discover diverse skills from static reward-free offline data.", + "authors": "Jinxin Liu, Lipeng Zu, Li He, Donglin Wang", + "published": "2023-06-23", + "updated": "2023-10-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.10813v2", + "title": "A Workflow for Offline Model-Free Robotic Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) enables learning control policies by\nutilizing only prior experience, without any online interaction. This can allow\nrobots to acquire generalizable skills from large and diverse datasets, without\nany costly or unsafe online data collection. Despite recent algorithmic\nadvances in offline RL, applying these methods to real-world problems has\nproven challenging. Although offline RL methods can learn from prior data,\nthere is no clear and well-understood process for making various design\nchoices, from model architecture to algorithm hyperparameters, without actually\nevaluating the learned policies online. In this paper, our aim is to develop a\npractical workflow for using offline RL analogous to the relatively\nwell-understood workflows for supervised learning problems. To this end, we\ndevise a set of metrics and conditions that can be tracked over the course of\noffline training, and can inform the practitioner about how the algorithm and\nmodel architecture should be adjusted to improve final performance.
Our\nworkflow is derived from a conceptual understanding of the behavior of\nconservative offline RL algorithms and cross-validation in supervised learning.\nWe demonstrate the efficacy of this workflow in producing effective policies\nwithout any online tuning, both in several simulated robotic learning scenarios\nand for three tasks on two distinct real robots, focusing on learning\nmanipulation skills with raw image observations with sparse binary rewards.\nExplanatory video and additional results can be found at\nsites.google.com/view/offline-rl-workflow", + "authors": "Aviral Kumar, Anikait Singh, Stephen Tian, Chelsea Finn, Sergey Levine", + "published": "2021-09-22", + "updated": "2021-09-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2202.02929v2", + "title": "Model-Based Offline Meta-Reinforcement Learning with Regularization", + "abstract": "Existing offline reinforcement learning (RL) methods face a few major\nchallenges, particularly the distributional shift between the learned policy\nand the behavior policy. Offline Meta-RL is emerging as a promising approach to\naddress these challenges, aiming to learn an informative meta-policy from a\ncollection of tasks. Nevertheless, as shown in our empirical studies, offline\nMeta-RL could be outperformed by offline single-task RL methods on tasks with\ngood quality of datasets, indicating that a right balance has to be delicately\ncalibrated between \"exploring\" the out-of-distribution state-actions by\nfollowing the meta-policy and \"exploiting\" the offline dataset by staying close\nto the behavior policy. Motivated by such empirical analysis, we explore\nmodel-based offline Meta-RL with regularized Policy Optimization (MerPO), which\nlearns a meta-model for efficient task structure inference and an informative\nmeta-policy for safe exploration of out-of-distribution state-actions. In\nparticular, we devise a new meta-Regularized model-based Actor-Critic (RAC)\nmethod for within-task policy optimization, as a key building block of MerPO,\nusing conservative policy evaluation and regularized policy improvement; and\nthe intrinsic tradeoff therein is achieved via striking the right balance\nbetween two regularizers, one based on the behavior policy and the other on the\nmeta-policy. We theoretically show that the learnt policy offers guaranteed\nimprovement over both the behavior policy and the meta-policy, thus ensuring\nthe performance improvement on new tasks via offline Meta-RL. Experiments\ncorroborate the superior performance of MerPO over existing offline Meta-RL\nmethods.", + "authors": "Sen Lin, Jialin Wan, Tengyu Xu, Yingbin Liang, Junshan Zhang", + "published": "2022-02-07", + "updated": "2022-07-13", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2106.09119v2", + "title": "Behavioral Priors and Dynamics Models: Improving Performance and Domain Transfer in Offline RL", + "abstract": "Offline Reinforcement Learning (RL) aims to extract near-optimal policies\nfrom imperfect offline data without additional environment interactions.\nExtracting policies from diverse offline datasets has the potential to expand\nthe range of applicability of RL by making the training process safer, faster,\nand more streamlined. 
We investigate how to improve the performance of offline\nRL algorithms, their robustness to the quality of offline data, as well as their\ngeneralization capabilities. To this end, we introduce Offline Model-based RL\nwith Adaptive Behavioral Priors (MABE). Our algorithm is based on the finding\nthat dynamics models, which support within-domain generalization, and\nbehavioral priors, which support cross-domain generalization, are\ncomplementary. When combined together, they substantially improve the\nperformance and generalization of offline RL policies. In the widely studied\nD4RL offline RL benchmark, we find that MABE achieves higher average\nperformance compared to prior model-free and model-based algorithms. In\nexperiments that require cross-domain generalization, we find that MABE\noutperforms prior methods. Our website is available at\nhttps://sites.google.com/berkeley.edu/mabe.", + "authors": "Catherine Cang, Aravind Rajeswaran, Pieter Abbeel, Michael Laskin", + "published": "2021-06-16", + "updated": "2021-06-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.08128v1", + "title": "Conservative Data Sharing for Multi-Task Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) algorithms have shown promising results\nin domains where abundant pre-collected data is available. However, prior\nmethods focus on solving individual problems from scratch with an offline\ndataset without considering how an offline RL agent can acquire multiple\nskills. We argue that a natural use case of offline RL is in settings where we\ncan pool large amounts of data collected in various scenarios for solving\ndifferent tasks, and utilize all of this data to learn behaviors for all the\ntasks more effectively rather than training each one in isolation. However,\nsharing data across all tasks in multi-task offline RL performs surprisingly\npoorly in practice. Through empirical analysis, we find that sharing data can\nactually exacerbate the distributional shift between the learned policy and the\ndataset, which in turn can lead to divergence of the learned policy and poor\nperformance. To address this challenge, we develop a simple technique for\ndata-sharing in multi-task offline RL that routes data based on the improvement\nover the task-specific data. We call this approach conservative data sharing\n(CDS), and it can be applied with multiple single-task offline RL methods. On a\nrange of challenging multi-task locomotion, navigation, and vision-based\nrobotic manipulation problems, CDS achieves the best or comparable performance\ncompared to prior offline multi-task RL methods and previous data sharing\napproaches.", + "authors": "Tianhe Yu, Aviral Kumar, Yevgen Chebotar, Karol Hausman, Sergey Levine, Chelsea Finn", + "published": "2021-09-16", + "updated": "2021-09-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2106.10411v2", + "title": "Boosting Offline Reinforcement Learning with Residual Generative Modeling", + "abstract": "Offline reinforcement learning (RL) tries to learn the near-optimal policy\nwith recorded offline experience without online exploration. Current offline RL\nresearch includes: 1) generative modeling, i.e., approximating a policy using\nfixed data; and 2) learning the state-action value function.
While most\nresearch focuses on the state-action function part through reducing the\nbootstrapping error in value function approximation induced by the distribution\nshift of training data, the effects of error propagation in generative modeling\nhave been neglected. In this paper, we analyze the error in generative\nmodeling. We propose AQL (action-conditioned Q-learning), a residual generative\nmodel to reduce policy approximation error for offline RL. We show that our\nmethod can learn more accurate policy approximations in different benchmark\ndatasets. In addition, we show that the proposed offline RL method can learn\nmore competitive AI agents in complex control tasks under the multiplayer\nonline battle arena (MOBA) game Honor of Kings.", + "authors": "Hua Wei, Deheng Ye, Zhao Liu, Hao Wu, Bo Yuan, Qiang Fu, Wei Yang, Zhenhui Li", + "published": "2021-06-19", + "updated": "2021-06-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "68T01", + "I.2.8; I.2.1" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.06871v3", + "title": "Improving Offline-to-Online Reinforcement Learning with Q-Ensembles", + "abstract": "Offline reinforcement learning (RL) is a learning paradigm where an agent\nlearns from a fixed dataset of experience. However, learning solely from a\nstatic dataset can limit the performance due to the lack of exploration. To\novercome it, offline-to-online RL combines offline pre-training with online\nfine-tuning, which enables the agent to further refine its policy by\ninteracting with the environment in real-time. Despite its benefits, existing\noffline-to-online RL methods suffer from performance degradation and slow\nimprovement during the online phase. To tackle these challenges, we propose a\nnovel framework called Ensemble-based Offline-to-Online (E2O) RL. By increasing\nthe number of Q-networks, we seamlessly bridge offline pre-training and online\nfine-tuning without degrading performance. Moreover, to expedite online\nperformance enhancement, we appropriately loosen the pessimism of Q-value\nestimation and incorporate ensemble-based exploration mechanisms into our\nframework. Experimental results demonstrate that E2O can substantially improve\nthe training stability, learning efficiency, and final performance of existing\noffline RL methods during online fine-tuning on a range of locomotion and\nnavigation tasks, significantly outperforming existing offline-to-online RL\nmethods.", + "authors": "Kai Zhao, Yi Ma, Jianye Hao, Jinyi Liu, Yan Zheng, Zhaopeng Meng", + "published": "2023-06-12", + "updated": "2023-12-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.06734v1", + "title": "Corruption Robust Offline Reinforcement Learning with Human Feedback", + "abstract": "We study data corruption robustness for reinforcement learning with human\nfeedback (RLHF) in an offline setting. Given an offline dataset of pairs of\ntrajectories along with feedback about human preferences, an\n$\\varepsilon$-fraction of the pairs is corrupted (e.g., feedback flipped or\ntrajectory features manipulated), capturing an adversarial attack or noisy\nhuman preferences. We aim to design algorithms that identify a near-optimal\npolicy from the corrupted data, with provable guarantees. 
Existing theoretical\nworks have separately studied the settings of corruption robust RL (learning\nfrom scalar rewards directly under corruption) and offline RLHF (learning from\nhuman feedback without corruption); however, they are inapplicable to our\nproblem of dealing with corrupted data in offline RLHF setting. To this end, we\ndesign novel corruption robust offline RLHF methods under various assumptions\non the coverage of the data-generating distributions. At a high level, our\nmethodology robustifies an offline RLHF framework by first learning a reward\nmodel along with confidence sets and then learning a pessimistic optimal policy\nover the confidence set. Our key insight is that learning optimal policy can be\ndone by leveraging an offline corruption-robust RL oracle in different ways\n(e.g., zero-order oracle or first-order oracle), depending on the data coverage\nassumptions. To our knowledge, ours is the first work that provides provable\ncorruption robust offline RLHF methods.", + "authors": "Debmalya Mandal, Andi Nika, Parameswaran Kamalaruban, Adish Singla, Goran Radanovi\u0107", + "published": "2024-02-09", + "updated": "2024-02-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2005.05951v3", + "title": "MOReL : Model-Based Offline Reinforcement Learning", + "abstract": "In offline reinforcement learning (RL), the goal is to learn a highly\nrewarding policy based solely on a dataset of historical interactions with the\nenvironment. The ability to train RL policies offline can greatly expand the\napplicability of RL, its data efficiency, and its experimental velocity. Prior\nwork in offline RL has been confined almost exclusively to model-free RL\napproaches. In this work, we present MOReL, an algorithmic framework for\nmodel-based offline RL. This framework consists of two steps: (a) learning a\npessimistic MDP (P-MDP) using the offline dataset; and (b) learning a\nnear-optimal policy in this P-MDP. The learned P-MDP has the property that for\nany policy, the performance in the real environment is approximately\nlower-bounded by the performance in the P-MDP. This enables it to serve as a\ngood surrogate for purposes of policy evaluation and learning, and overcome\ncommon pitfalls of model-based RL like model exploitation. Theoretically, we\nshow that MOReL is minimax optimal (up to log factors) for offline RL. Through\nexperiments, we show that MOReL matches or exceeds state-of-the-art results in\nwidely studied offline RL benchmarks. Moreover, the modular design of MOReL\nenables future advances in its components (e.g. generative modeling,\nuncertainty estimation, planning etc.) to directly translate into advances for\noffline RL.", + "authors": "Rahul Kidambi, Aravind Rajeswaran, Praneeth Netrapalli, Thorsten Joachims", + "published": "2020-05-12", + "updated": "2021-03-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2107.06106v2", + "title": "Conservative Offline Distributional Reinforcement Learning", + "abstract": "Many reinforcement learning (RL) problems in practice are offline, learning\npurely from observational data. A key challenge is how to ensure the learned\npolicy is safe, which requires quantifying the risk associated with different\nactions. 
In the online setting, distributional RL algorithms do so by learning\nthe distribution over returns (i.e., cumulative rewards) instead of the\nexpected return; beyond quantifying risk, they have also been shown to learn\nbetter representations for planning. We propose Conservative Offline\nDistributional Actor Critic (CODAC), an offline RL algorithm suitable for both\nrisk-neutral and risk-averse domains. CODAC adapts distributional RL to the\noffline setting by penalizing the predicted quantiles of the return for\nout-of-distribution actions. We prove that CODAC learns a conservative return\ndistribution -- in particular, for finite MDPs, CODAC converges to a uniform\nlower bound on the quantiles of the return distribution; our proof relies on a\nnovel analysis of the distributional Bellman operator. In our experiments, on\ntwo challenging robot navigation tasks, CODAC successfully learns risk-averse\npolicies using offline data collected purely from risk-neutral agents.\nFurthermore, CODAC is state-of-the-art on the D4RL MuJoCo benchmark in terms of\nboth expected and risk-sensitive performance.", + "authors": "Yecheng Jason Ma, Dinesh Jayaraman, Osbert Bastani", + "published": "2021-07-12", + "updated": "2021-10-26", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2112.15578v1", + "title": "Importance of Empirical Sample Complexity Analysis for Offline Reinforcement Learning", + "abstract": "We hypothesize that empirically studying the sample complexity of offline\nreinforcement learning (RL) is crucial for the practical applications of RL in\nthe real world. Several recent works have demonstrated the ability to learn\npolicies directly from offline data. In this work, we ask the question of the\ndependency on the number of samples for learning from offline data. Our\nobjective is to emphasize that studying sample complexity for offline RL is\nimportant, and is an indicator of the usefulness of existing offline\nalgorithms. We propose an evaluation approach for sample complexity analysis of\noffline RL.", + "authors": "Samin Yeasar Arnob, Riashat Islam, Doina Precup", + "published": "2021-12-31", + "updated": "2021-12-31", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2205.09550v1", + "title": "Data Valuation for Offline Reinforcement Learning", + "abstract": "The success of deep reinforcement learning (DRL) hinges on the availability\nof training data, which is typically obtained via a large number of environment\ninteractions. In many real-world scenarios, costs and risks are associated with\ngathering these data. The field of offline reinforcement learning addresses\nthese issues through outsourcing the collection of data to a domain expert or a\ncarefully monitored program and subsequently searching for a batch-constrained\noptimal policy. With the emergence of data markets, an alternative to\nconstructing a dataset in-house is to purchase external data. However, while\nstate-of-the-art offline reinforcement learning approaches have shown a lot of\npromise, they currently rely on carefully constructed datasets that are well\naligned with the intended target domains. This raises questions regarding the\ntransferability and robustness of an offline reinforcement learning agent\ntrained on externally acquired data.
In this paper, we empirically evaluate the\nability of the current state-of-the-art offline reinforcement learning\napproaches to cope with the source-target domain mismatch within two MuJoCo\nenvironments, finding that current state-of-the-art offline reinforcement\nlearning algorithms underperform in the target domain. To address this, we\npropose data valuation for offline reinforcement learning (DVORL), which allows\nus to identify relevant and high-quality transitions, improving the performance\nand transferability of policies learned by offline reinforcement learning\nalgorithms. The results show that our method outperforms offline reinforcement\nlearning baselines on two MuJoCo environments.", + "authors": "Amir Abolfazli, Gregory Palmer, Daniel Kudenko", + "published": "2022-05-19", + "updated": "2022-05-19", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2302.13493v1", + "title": "The Provable Benefits of Unsupervised Data Sharing for Offline Reinforcement Learning", + "abstract": "Self-supervised methods have become crucial for advancing deep learning by\nleveraging data itself to reduce the need for expensive annotations. However,\nthe question of how to conduct self-supervised offline reinforcement learning\n(RL) in a principled way remains unclear. In this paper, we address this issue\nby investigating the theoretical benefits of utilizing reward-free data in\nlinear Markov Decision Processes (MDPs) within a semi-supervised setting.\n Further, we propose a novel, Provable Data Sharing algorithm (PDS) to utilize\nsuch reward-free data for offline RL. PDS uses additional penalties on the\nreward function learned from labeled data to prevent overestimation, ensuring a\nconservative algorithm. Our results on various offline RL tasks demonstrate\nthat PDS significantly improves the performance of offline RL algorithms with\nreward-free data. Overall, our work provides a promising approach to leveraging\nthe benefits of unlabeled data in offline RL while maintaining theoretical\nguarantees. We believe our findings will contribute to developing more robust\nself-supervised RL methods.", + "authors": "Hao Hu, Yiqin Yang, Qianchuan Zhao, Chongjie Zhang", + "published": "2023-02-27", + "updated": "2023-02-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2011.13885v1", + "title": "Offline Learning from Demonstrations and Unlabeled Experience", + "abstract": "Behavior cloning (BC) is often practical for robot learning because it allows\na policy to be trained offline without rewards, by supervised learning on\nexpert demonstrations. However, BC does not effectively leverage what we will\nrefer to as unlabeled experience: data of mixed and unknown quality without\nreward annotations. This unlabeled data can be generated by a variety of\nsources such as human teleoperation, scripted policies and other agents on the\nsame robot. Towards data-driven offline robot learning that can use this\nunlabeled experience, we introduce Offline Reinforced Imitation Learning\n(ORIL).
ORIL first learns a reward function by contrasting observations from\ndemonstrator and unlabeled trajectories, then annotates all data with the\nlearned reward, and finally trains an agent via offline reinforcement learning.\nAcross a diverse set of continuous control and simulated robotic manipulation\ntasks, we show that ORIL consistently outperforms comparable BC agents by\neffectively leveraging unlabeled experience.", + "authors": "Konrad Zolna, Alexander Novikov, Ksenia Konyushkova, Caglar Gulcehre, Ziyu Wang, Yusuf Aytar, Misha Denil, Nando de Freitas, Scott Reed", + "published": "2020-11-27", + "updated": "2020-11-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2212.08302v1", + "title": "Safe Evaluation For Offline Learning: Are We Ready To Deploy?", + "abstract": "The world currently offers an abundance of data in multiple domains, from\nwhich we can learn reinforcement learning (RL) policies without further\ninteraction with the environment. RL agents learning offline from such data is\npossible but deploying them while learning might be dangerous in domains where\nsafety is critical. Therefore, it is essential to find a way to estimate how a\nnewly-learned agent will perform if deployed in the target environment before\nactually deploying it and without the risk of overestimating its true\nperformance. To achieve this, we introduce a framework for safe evaluation of\noffline learning using approximate high-confidence off-policy evaluation\n(HCOPE) to estimate the performance of offline policies during learning. In our\nsetting, we assume a source of data, which we split into a train-set, to learn\nan offline policy, and a test-set, to estimate a lower-bound on the offline\npolicy using off-policy evaluation with bootstrapping. A lower-bound estimate\ntells us how good a newly-learned target policy would perform before it is\ndeployed in the real environment, and therefore allows us to decide when to\ndeploy our learned policy.", + "authors": "Hager Radi, Josiah P. Hanna, Peter Stone, Matthew E. Taylor", + "published": "2022-12-16", + "updated": "2022-12-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.13777v4", + "title": "Deep Generative Models for Offline Policy Learning: Tutorial, Survey, and Perspectives on Future Directions", + "abstract": "Deep generative models (DGMs) have demonstrated great success across various\ndomains, particularly in generating texts, images, and videos using models\ntrained from offline data. Similarly, data-driven decision-making and robotic\ncontrol also necessitate learning a generator function from the offline data to\nserve as the strategy or policy. In this case, applying deep generative models\nin offline policy learning exhibits great potential, and numerous studies have\nexplored in this direction. However, this field still lacks a comprehensive\nreview and so developments of different branches are relatively independent.\nThus, we provide the first systematic review on the applications of deep\ngenerative models for offline policy learning. 
In particular, we cover five\nmainstream deep generative models, including Variational Auto-Encoders,\nGenerative Adversarial Networks, Normalizing Flows, Transformers, and Diffusion\nModels, and their applications in both offline reinforcement learning (offline\nRL) and imitation learning (IL). Offline RL and IL are two main branches of\noffline policy learning and are widely-adopted techniques for sequential\ndecision-making. Specifically, for each type of DGM-based offline policy\nlearning, we distill its fundamental scheme, categorize related works based on\nthe usage of the DGM, and sort out the development process of algorithms in\nthat field. Subsequent to the main content, we provide in-depth discussions on\ndeep generative models and offline policy learning as a summary, based on which\nwe present our perspectives on future research directions. This work offers a\nhands-on reference for the research progress in deep generative models for\noffline policy learning, and aims to inspire improved DGM-based offline RL or\nIL algorithms. For convenience, we maintain a paper list on\nhttps://github.com/LucasCJYSDL/DGMs-for-Offline-Policy-Learning.", + "authors": "Jiayu Chen, Bhargav Ganguly, Yang Xu, Yongsheng Mei, Tian Lan, Vaneet Aggarwal", + "published": "2024-02-21", + "updated": "2024-02-26", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2311.03351v4", + "title": "Uni-O4: Unifying Online and Offline Deep Reinforcement Learning with Multi-Step On-Policy Optimization", + "abstract": "Combining offline and online reinforcement learning (RL) is crucial for\nefficient and safe learning. However, previous approaches treat offline and\nonline learning as separate procedures, resulting in redundant designs and\nlimited performance. We ask: Can we achieve straightforward yet effective\noffline and online learning without introducing extra conservatism or\nregularization? In this study, we propose Uni-o4, which utilizes an on-policy\nobjective for both offline and online learning. Owing to the alignment of\nobjectives in two phases, the RL agent can transfer between offline and online\nlearning seamlessly. This property enhances the flexibility of the learning\nparadigm, allowing for arbitrary combinations of pretraining, fine-tuning,\noffline, and online learning. In the offline phase, specifically, Uni-o4\nleverages diverse ensemble policies to address the mismatch issues between the\nestimated behavior policy and the offline dataset. Through a simple offline\npolicy evaluation (OPE) approach, Uni-o4 can achieve multi-step policy\nimprovement safely. We demonstrate that by employing the method above, the\nfusion of these two paradigms can yield superior offline initialization as well\nas stable and rapid online fine-tuning capabilities. Through real-world robot\ntasks, we highlight the benefits of this paradigm for rapid deployment in\nchallenging, previously unseen real-world environments. Additionally, through\ncomprehensive evaluations using numerous simulated benchmarks, we substantiate\nthat our method achieves state-of-the-art performance in both offline and\noffline-to-online fine-tuning learning.
Our website:\nhttps://lei-kun.github.io/uni-o4/ .", + "authors": "Kun Lei, Zhengmao He, Chenhao Lu, Kaizhe Hu, Yang Gao, Huazhe Xu", + "published": "2023-11-06", + "updated": "2024-03-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2211.04974v2", + "title": "Leveraging Offline Data in Online Reinforcement Learning", + "abstract": "Two central paradigms have emerged in the reinforcement learning (RL)\ncommunity: online RL and offline RL. In the online RL setting, the agent has no\nprior knowledge of the environment, and must interact with it in order to find\nan $\\epsilon$-optimal policy. In the offline RL setting, the learner instead\nhas access to a fixed dataset to learn from, but is unable to otherwise\ninteract with the environment, and must obtain the best policy it can from this\noffline data. Practical scenarios often motivate an intermediate setting: if we\nhave some set of offline data and, in addition, may also interact with the\nenvironment, how can we best use the offline data to minimize the number of\nonline interactions necessary to learn an $\\epsilon$-optimal policy?\n In this work, we consider this setting, which we call the \\textsf{FineTuneRL}\nsetting, for MDPs with linear structure. We characterize the necessary number\nof online samples needed in this setting given access to some offline dataset,\nand develop an algorithm, \\textsc{FTPedel}, which is provably optimal, up to\n$H$ factors. We show through an explicit example that combining offline data\nwith online interactions can lead to a provable improvement over either purely\noffline or purely online RL. Finally, our results illustrate the distinction\nbetween \\emph{verifiable} learning, the typical setting considered in online\nRL, and \\emph{unverifiable} learning, the setting often considered in offline\nRL, and show that there is a formal separation between these regimes.", + "authors": "Andrew Wagenmaker, Aldo Pacchiano", + "published": "2022-11-09", + "updated": "2023-07-20", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2008.06043v3", + "title": "Offline Meta-Reinforcement Learning with Advantage Weighting", + "abstract": "This paper introduces the offline meta-reinforcement learning (offline\nmeta-RL) problem setting and proposes an algorithm that performs well in this\nsetting. Offline meta-RL is analogous to the widely successful supervised\nlearning strategy of pre-training a model on a large batch of fixed,\npre-collected data (possibly from various tasks) and fine-tuning the model to a\nnew task with relatively little data. That is, in offline meta-RL, we\nmeta-train on fixed, pre-collected data from several tasks in order to adapt to\na new task with a very small amount (less than 5 trajectories) of data from the\nnew task. By nature of being offline, algorithms for offline meta-RL can\nutilize the largest possible pool of training data available and eliminate\npotentially unsafe or costly data collection during meta-training. This setting\ninherits the challenges of offline RL, but it differs significantly because\noffline RL does not generally consider a) transfer to new tasks or b) limited\ndata from the test task, both of which we face in offline meta-RL. 
Targeting\nthe offline meta-RL setting, we propose Meta-Actor Critic with Advantage\nWeighting (MACAW), an optimization-based meta-learning algorithm that uses\nsimple, supervised regression objectives for both the inner and outer loop of\nmeta-training. On offline variants of common meta-RL benchmarks, we empirically\nfind that this approach enables fully offline meta-reinforcement learning and\nachieves notable gains over prior methods.", + "authors": "Eric Mitchell, Rafael Rafailov, Xue Bin Peng, Sergey Levine, Chelsea Finn", + "published": "2020-08-13", + "updated": "2021-07-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2005.11142v1", + "title": "Two-stage Deep Reinforcement Learning for Inverter-based Volt-VAR Control in Active Distribution Networks", + "abstract": "Model-based Vol/VAR optimization method is widely used to eliminate voltage\nviolations and reduce network losses. However, the parameters of active\ndistribution networks(ADNs) are not onsite identified, so significant errors\nmay be involved in the model and make the model-based method infeasible. To\ncope with this critical issue, we propose a novel two-stage deep reinforcement\nlearning (DRL) method to improve the voltage profile by regulating\ninverter-based energy resources, which consists of offline stage and online\nstage. In the offline stage, a highly efficient adversarial reinforcement\nlearning algorithm is developed to train an offline agent robust to the model\nmismatch. In the sequential online stage, we transfer the offline agent safely\nas the online agent to perform continuous learning and controlling online with\nsignificantly improved safety and efficiency. Numerical simulations on IEEE\ntest cases not only demonstrate that the proposed adversarial reinforcement\nlearning algorithm outperforms the state-of-art algorithm, but also show that\nour proposed two-stage method achieves much better performance than the\nexisting DRL based methods in the online application.", + "authors": "Haotian Liu, Wenchuan Wu", + "published": "2020-05-20", + "updated": "2020-05-20", + "primary_cat": "eess.SY", + "cats": [ + "eess.SY", + "cs.LG", + "cs.SY", + "J.7; C.3" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.14629v1", + "title": "Improving Zero-shot Generalization in Offline Reinforcement Learning using Generalized Similarity Functions", + "abstract": "Reinforcement learning (RL) agents are widely used for solving complex\nsequential decision making tasks, but still exhibit difficulty in generalizing\nto scenarios not seen during training. While prior online approaches\ndemonstrated that using additional signals beyond the reward function can lead\nto better generalization capabilities in RL agents, i.e. using self-supervised\nlearning (SSL), they struggle in the offline RL setting, i.e. learning from a\nstatic dataset. We show that performance of online algorithms for\ngeneralization in RL can be hindered in the offline setting due to poor\nestimation of similarity between observations. We propose a new\ntheoretically-motivated framework called Generalized Similarity Functions\n(GSF), which uses contrastive learning to train an offline RL agent to\naggregate observations based on the similarity of their expected future\nbehavior, where we quantify this similarity using \\emph{generalized value\nfunctions}. 
We show that GSF is general enough to recover existing SSL\nobjectives while also improving zero-shot generalization performance on a\ncomplex offline RL benchmark, offline Procgen.", + "authors": "Bogdan Mazoure, Ilya Kostrikov, Ofir Nachum, Jonathan Tompson", + "published": "2021-11-29", + "updated": "2021-11-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2307.02752v2", + "title": "Offline Reinforcement Learning with Imbalanced Datasets", + "abstract": "The prevalent use of benchmarks in current offline reinforcement learning\n(RL) research has led to a neglect of the imbalance of real-world dataset\ndistributions in the development of models. The real-world offline RL dataset\nis often imbalanced over the state space due to the challenge of exploration or\nsafety considerations. In this paper, we specify properties of imbalanced\ndatasets in offline RL, where the state coverage follows a power law\ndistribution characterized by skewed policies. Theoretically and empirically,\nwe show that typical offline RL methods based on distributional constraints,\nsuch as conservative Q-learning (CQL), are ineffective in extracting policies\nunder the imbalanced dataset. Inspired by natural intelligence, we propose a\nnovel offline RL method that utilizes the augmentation of CQL with a retrieval\nprocess to recall past related experiences, effectively alleviating the\nchallenges posed by imbalanced datasets. We evaluate our method on several\ntasks in the context of imbalanced datasets with varying levels of imbalance,\nutilizing the variant of D4RL. Empirical results demonstrate the superiority of\nour method over other baselines.", + "authors": "Li Jiang, Sijie Chen, Jielin Qiu, Haoran Xu, Wai Kin Chan, Zhao Ding", + "published": "2023-07-06", + "updated": "2023-07-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.01474v1", + "title": "Offline Reinforcement Learning with Causal Structured World Models", + "abstract": "Model-based methods have recently shown promise for offline reinforcement\nlearning (RL), aiming to learn good policies from historical data without\ninteracting with the environment. Previous model-based offline RL methods learn\nfully connected nets as world-models that map the states and actions to the\nnext-step states. However, it is sensible that a world-model should adhere to\nthe underlying causal effect such that it will support learning an effective\npolicy generalizing well in unseen states. In this paper, we first provide\ntheoretical results that causal world-models can outperform plain world-models\nfor offline RL by incorporating the causal structure into the generalization\nerror bound. We then propose a practical algorithm, oFfline mOdel-based\nreinforcement learning with CaUsal Structure (FOCUS), to illustrate the\nfeasibility of learning and leveraging causal structure in offline RL.\nExperimental results on two benchmarks show that FOCUS reconstructs the\nunderlying causal structure accurately and robustly.
Consequently, it performs\nbetter than the plain model-based offline RL algorithms and other causal\nmodel-based RL algorithms.", + "authors": "Zheng-Mao Zhu, Xiong-Hui Chen, Hong-Long Tian, Kun Zhang, Yang Yu", + "published": "2022-06-03", + "updated": "2022-06-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2305.16217v2", + "title": "Beyond Reward: Offline Preference-guided Policy Optimization", + "abstract": "This study focuses on the topic of offline preference-based reinforcement\nlearning (PbRL), a variant of conventional reinforcement learning that\ndispenses with the need for online interaction or specification of reward\nfunctions. Instead, the agent is provided with fixed offline trajectories and\nhuman preferences between pairs of trajectories to extract the dynamics and\ntask information, respectively. Since the dynamics and task information are\northogonal, a naive approach would involve using preference-based reward\nlearning followed by an off-the-shelf offline RL algorithm. However, this\nrequires the separate learning of a scalar reward function, which is assumed to\nbe an information bottleneck of the learning process. To address this issue, we\npropose the offline preference-guided policy optimization (OPPO) paradigm,\nwhich models offline trajectories and preferences in a one-step process,\neliminating the need for separately learning a reward function. OPPO achieves\nthis by introducing an offline hindsight information matching objective for\noptimizing a contextual policy and a preference modeling objective for finding\nthe optimal context. OPPO further integrates a well-performing decision policy\nby optimizing the two objectives iteratively. Our empirical results demonstrate\nthat OPPO effectively models offline preferences and outperforms prior\ncompeting baselines, including offline RL algorithms performed over either true\nor pseudo reward function specifications. Our code is available on the project\nwebsite: https://sites.google.com/view/oppo-icml-2023 .", + "authors": "Yachen Kang, Diyuan Shi, Jinxin Liu, Li He, Donglin Wang", + "published": "2023-05-25", + "updated": "2023-06-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.12755v1", + "title": "Beyond OOD State Actions: Supported Cross-Domain Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) aims to learn a policy using only\npre-collected and fixed data. Although avoiding the time-consuming online\ninteractions in RL, it poses challenges for out-of-distribution (OOD) state\nactions and often suffers from data inefficiency for training. Despite many\nefforts being devoted to addressing OOD state actions, the latter (data\ninefficiency) receives little attention in offline RL. To address this, this\npaper proposes the cross-domain offline RL, which assumes offline data\nincorporate additional source-domain data from varying transition dynamics\n(environments), and expects it to contribute to the offline data efficiency. To\ndo so, we identify a new challenge of OOD transition dynamics, beyond the\ncommon OOD state actions issue, when utilizing cross-domain offline data. Then,\nwe propose our method BOSA, which employs two support-constrained objectives to\naddress the above OOD issues. 
Through extensive experiments in the cross-domain\noffline RL setting, we demonstrate BOSA can greatly improve offline data\nefficiency: using only 10\\% of the target data, BOSA could achieve {74.4\\%} of\nthe SOTA offline RL performance that uses 100\\% of the target data.\nAdditionally, we also show BOSA can be effortlessly plugged into model-based\noffline RL and noising data augmentation techniques (used for generating\nsource-domain data), which naturally avoids the potential dynamics mismatch\nbetween target-domain data and newly generated source-domain data.", + "authors": "Jinxin Liu, Ziqi Zhang, Zhenyu Wei, Zifeng Zhuang, Yachen Kang, Sibo Gai, Donglin Wang", + "published": "2023-06-22", + "updated": "2023-06-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2305.03097v1", + "title": "Federated Ensemble-Directed Offline Reinforcement Learning", + "abstract": "We consider the problem of federated offline reinforcement learning (RL), a\nscenario under which distributed learning agents must collaboratively learn a\nhigh-quality control policy only using small pre-collected datasets generated\naccording to different unknown behavior policies. Naively combining a standard\noffline RL approach with a standard federated learning approach to solve this\nproblem can lead to poorly performing policies. In response, we develop the\nFederated Ensemble-Directed Offline Reinforcement Learning Algorithm (FEDORA),\nwhich distills the collective wisdom of the clients using an ensemble learning\napproach. We develop the FEDORA codebase to utilize distributed compute\nresources on a federated learning platform. We show that FEDORA significantly\noutperforms other approaches, including offline RL over the combined data pool,\nin various complex continuous control environments and real world datasets.\nFinally, we demonstrate the performance of FEDORA in the real-world on a mobile\nrobot.", + "authors": "Desik Rengarajan, Nitin Ragothaman, Dileep Kalathil, Srinivas Shakkottai", + "published": "2023-05-04", + "updated": "2023-05-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.02439v2", + "title": "DiffStitch: Boosting Offline Reinforcement Learning with Diffusion-based Trajectory Stitching", + "abstract": "In offline reinforcement learning (RL), the performance of the learned policy\nhighly depends on the quality of offline datasets. However, in many cases, the\noffline dataset contains very limited optimal trajectories, which poses a\nchallenge for offline RL algorithms as agents must acquire the ability to\ntransit to high-reward regions. To address this issue, we introduce\nDiffusion-based Trajectory Stitching (DiffStitch), a novel diffusion-based data\naugmentation pipeline that systematically generates stitching transitions\nbetween trajectories. DiffStitch effectively connects low-reward trajectories\nwith high-reward trajectories, forming globally optimal trajectories to address\nthe challenges faced by offline RL algorithms. Empirical experiments conducted\non D4RL datasets demonstrate the effectiveness of DiffStitch across RL\nmethodologies. 
Notably, DiffStitch demonstrates substantial enhancements in the\nperformance of one-step methods (IQL), imitation learning methods (TD3+BC), and\ntrajectory optimization methods (DT).", + "authors": "Guanghe Li, Yixiang Shan, Zhengbang Zhu, Ting Long, Weinan Zhang", + "published": "2024-02-04", + "updated": "2024-02-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2303.17396v1", + "title": "Finetuning from Offline Reinforcement Learning: Challenges, Trade-offs and Practical Solutions", + "abstract": "Offline reinforcement learning (RL) allows for the training of competent\nagents from offline datasets without any interaction with the environment.\nOnline finetuning of such offline models can further improve performance. But\nhow should we ideally finetune agents obtained from offline RL training? While\noffline RL algorithms can in principle be used for finetuning, in practice,\ntheir online performance improves slowly. In contrast, we show that it is\npossible to use standard online off-policy algorithms for faster improvement.\nHowever, we find this approach may suffer from policy collapse, where the\npolicy undergoes severe performance deterioration during initial online\nlearning. We investigate the issue of policy collapse and how it relates to\ndata diversity, algorithm choices and online replay distribution. Based on\nthese insights, we propose a conservative policy optimization procedure that\ncan achieve stable and sample-efficient online learning from offline\npretraining.", + "authors": "Yicheng Luo, Jackie Kay, Edward Grefenstette, Marc Peter Deisenroth", + "published": "2023-03-30", + "updated": "2023-03-30", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.05546v1", + "title": "Offline Actor-Critic Reinforcement Learning Scales to Large Models", + "abstract": "We show that offline actor-critic reinforcement learning can scale to large\nmodels - such as transformers - and follows similar scaling laws as supervised\nlearning. We find that offline actor-critic algorithms can outperform strong,\nsupervised, behavioral cloning baselines for multi-task training on a large\ndataset containing both sub-optimal and expert behavior on 132 continuous\ncontrol tasks. We introduce a Perceiver-based actor-critic model and elucidate\nthe key model features needed to make offline RL work with self- and\ncross-attention modules. 
Overall, we find that: i) simple offline actor critic\nalgorithms are a natural choice for gradually moving away from the currently\npredominant paradigm of behavioral cloning, and ii) via offline RL it is\npossible to learn multi-task policies that master many domains simultaneously,\nincluding real robotics tasks, from sub-optimal demonstrations or\nself-generated data.", + "authors": "Jost Tobias Springenberg, Abbas Abdolmaleki, Jingwei Zhang, Oliver Groth, Michael Bloesch, Thomas Lampe, Philemon Brakel, Sarah Bechtle, Steven Kapturowski, Roland Hafner, Nicolas Heess, Martin Riedmiller", + "published": "2024-02-08", + "updated": "2024-02-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2303.17156v2", + "title": "MAHALO: Unifying Offline Reinforcement Learning and Imitation Learning from Observations", + "abstract": "We study a new paradigm for sequential decision making, called offline policy\nlearning from observations (PLfO). Offline PLfO aims to learn policies using\ndatasets with substandard qualities: 1) only a subset of trajectories is\nlabeled with rewards, 2) labeled trajectories may not contain actions, 3)\nlabeled trajectories may not be of high quality, and 4) the data may not have\nfull coverage. Such imperfection is common in real-world learning scenarios,\nand offline PLfO encompasses many existing offline learning setups, including\noffline imitation learning (IL), offline IL from observations (ILfO), and\noffline reinforcement learning (RL). In this work, we present a generic\napproach to offline PLfO, called $\\textbf{M}$odality-agnostic\n$\\textbf{A}$dversarial $\\textbf{H}$ypothesis $\\textbf{A}$daptation for\n$\\textbf{L}$earning from $\\textbf{O}$bservations (MAHALO). Built upon the\npessimism concept in offline RL, MAHALO optimizes the policy using a\nperformance lower bound that accounts for uncertainty due to the dataset's\ninsufficient coverage. We implement this idea by adversarially training\ndata-consistent critic and reward functions, which forces the learned policy to\nbe robust to data deficiency. We show that MAHALO consistently outperforms or\nmatches specialized algorithms across a variety of offline PLfO tasks in theory\nand experiments. Our code is available at https://github.com/AnqiLi/mahalo.", + "authors": "Anqi Li, Byron Boots, Ching-An Cheng", + "published": "2023-03-30", + "updated": "2023-08-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2304.07920v2", + "title": "Causal Decision Transformer for Recommender Systems via Offline Reinforcement Learning", + "abstract": "Reinforcement learning-based recommender systems have recently gained\npopularity. However, the design of the reward function, on which the agent\nrelies to optimize its recommendation policy, is often not straightforward.\nExploring the causality underlying users' behavior can take the place of the\nreward function in guiding the agent to capture the dynamic interests of users.\nMoreover, due to the typical limitations of simulation environments (e.g., data\ninefficiency), most of the work cannot be broadly applied in large-scale\nsituations. Although some works attempt to convert the offline dataset into a\nsimulator, data inefficiency makes the learning process even slower. 
Because of\nthe nature of reinforcement learning (i.e., learning by interaction), it cannot\ncollect enough data to train during a single interaction. Furthermore,\ntraditional reinforcement learning algorithms do not have a solid capability\nlike supervised learning methods to learn from offline datasets directly. In\nthis paper, we propose a new model named the causal decision transformer for\nrecommender systems (CDT4Rec). CDT4Rec is an offline reinforcement learning\nsystem that can learn from a dataset rather than from online interaction.\nMoreover, CDT4Rec employs the transformer architecture, which is capable of\nprocessing large offline datasets and capturing both short-term and long-term\ndependencies within the data to estimate the causal relationship between\naction, state, and reward. To demonstrate the feasibility and superiority of\nour model, we have conducted experiments on six real-world offline datasets and\none online simulator.", + "authors": "Siyu Wang, Xiaocong Chen, Dietmar Jannach, Lina Yao", + "published": "2023-04-17", + "updated": "2023-08-27", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2309.12716v1", + "title": "H2O+: An Improved Framework for Hybrid Offline-and-Online RL with Dynamics Gaps", + "abstract": "Solving real-world complex tasks using reinforcement learning (RL) without\nhigh-fidelity simulation environments or large amounts of offline data can be\nquite challenging. Online RL agents trained in imperfect simulation\nenvironments can suffer from severe sim-to-real issues. Offline RL approaches\nalthough bypass the need for simulators, often pose demanding requirements on\nthe size and quality of the offline datasets. The recently emerged hybrid\noffline-and-online RL provides an attractive framework that enables joint use\nof limited offline data and imperfect simulator for transferable policy\nlearning. In this paper, we develop a new algorithm, called H2O+, which offers\ngreat flexibility to bridge various choices of offline and online learning\nmethods, while also accounting for dynamics gaps between the real and\nsimulation environment. Through extensive simulation and real-world robotics\nexperiments, we demonstrate superior performance and flexibility over advanced\ncross-domain online and offline RL algorithms.", + "authors": "Haoyi Niu, Tianying Ji, Bingqi Liu, Haocheng Zhao, Xiangyu Zhu, Jianying Zheng, Pengfei Huang, Guyue Zhou, Jianming Hu, Xianyuan Zhan", + "published": "2023-09-22", + "updated": "2023-09-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2201.13425v3", + "title": "Don't Change the Algorithm, Change the Data: Exploratory Data for Offline Reinforcement Learning", + "abstract": "Recent progress in deep learning has relied on access to large and diverse\ndatasets. Such data-driven progress has been less evident in offline\nreinforcement learning (RL), because offline RL data is usually collected to\noptimize specific target tasks limiting the data's diversity. In this work, we\npropose Exploratory data for Offline RL (ExORL), a data-centric approach to\noffline RL. ExORL first generates data with unsupervised reward-free\nexploration, then relabels this data with a downstream reward before training a\npolicy with offline RL. 
We find that exploratory data allows vanilla off-policy\nRL algorithms, without any offline-specific modifications, to outperform or\nmatch state-of-the-art offline RL algorithms on downstream tasks. Our findings\nsuggest that data generation is as important as algorithmic advances for\noffline RL and hence requires careful consideration from the community. Code\nand data can be found at https://github.com/denisyarats/exorl .", + "authors": "Denis Yarats, David Brandfonbrener, Hao Liu, Michael Laskin, Pieter Abbeel, Alessandro Lazaric, Lerrel Pinto", + "published": "2022-01-31", + "updated": "2022-04-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.02829v3", + "title": "RORL: Robust Offline Reinforcement Learning via Conservative Smoothing", + "abstract": "Offline reinforcement learning (RL) provides a promising direction to exploit\nmassive amount of offline data for complex decision-making tasks. Due to the\ndistribution shift issue, current offline RL algorithms are generally designed\nto be conservative in value estimation and action selection. However, such\nconservatism can impair the robustness of learned policies when encountering\nobservation deviation under realistic conditions, such as sensor errors and\nadversarial attacks. To trade off robustness and conservatism, we propose\nRobust Offline Reinforcement Learning (RORL) with a novel conservative\nsmoothing technique. In RORL, we explicitly introduce regularization on the\npolicy and the value function for states near the dataset, as well as\nadditional conservative value estimation on these states. Theoretically, we\nshow RORL enjoys a tighter suboptimality bound than recent theoretical results\nin linear MDPs. We demonstrate that RORL can achieve state-of-the-art\nperformance on the general offline RL benchmark and is considerably robust to\nadversarial observation perturbations.", + "authors": "Rui Yang, Chenjia Bai, Xiaoteng Ma, Zhaoran Wang, Chongjie Zhang, Lei Han", + "published": "2022-06-06", + "updated": "2022-10-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2010.11895v1", + "title": "What are the Statistical Limits of Offline RL with Linear Function Approximation?", + "abstract": "Offline reinforcement learning seeks to utilize offline (observational) data\nto guide the learning of (causal) sequential decision making strategies. The\nhope is that offline reinforcement learning coupled with function approximation\nmethods (to deal with the curse of dimensionality) can provide a means to help\nalleviate the excessive sample complexity burden in modern sequential decision\nmaking problems. However, the extent to which this broader approach can be\neffective is not well understood, where the literature largely consists of\nsufficient conditions.\n This work focuses on the basic question of what are necessary\nrepresentational and distributional conditions that permit provable\nsample-efficient offline reinforcement learning. 
Perhaps surprisingly, our main\nresult shows that even if: i) we have realizability in that the true value\nfunction of \\emph{every} policy is linear in a given set of features and 2) our\noff-policy data has good coverage over all features (under a strong spectral\ncondition), then any algorithm still (information-theoretically) requires a\nnumber of offline samples that is exponential in the problem horizon in order\nto non-trivially estimate the value of \\emph{any} given policy. Our results\nhighlight that sample-efficient offline policy evaluation is simply not\npossible unless significantly stronger conditions hold; such conditions include\neither having low distribution shift (where the offline data distribution is\nclose to the distribution of the policy to be evaluated) or significantly\nstronger representational conditions (beyond realizability).", + "authors": "Ruosong Wang, Dean P. Foster, Sham M. Kakade", + "published": "2020-10-22", + "updated": "2020-10-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "math.OC", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2010.13611v3", + "title": "OPAL: Offline Primitive Discovery for Accelerating Offline Reinforcement Learning", + "abstract": "Reinforcement learning (RL) has achieved impressive performance in a variety\nof online settings in which an agent's ability to query the environment for\ntransitions and rewards is effectively unlimited. However, in many practical\napplications, the situation is reversed: an agent may have access to large\namounts of undirected offline experience data, while access to the online\nenvironment is severely limited. In this work, we focus on this offline\nsetting. Our main insight is that, when presented with offline data composed of\na variety of behaviors, an effective way to leverage this data is to extract a\ncontinuous space of recurring and temporally extended primitive behaviors\nbefore using these primitives for downstream task learning. Primitives\nextracted in this way serve two purposes: they delineate the behaviors that are\nsupported by the data from those that are not, making them useful for avoiding\ndistributional shift in offline RL; and they provide a degree of temporal\nabstraction, which reduces the effective horizon yielding better learning in\ntheory, and improved offline RL in practice. In addition to benefiting offline\npolicy optimization, we show that performing offline primitive learning in this\nway can also be leveraged for improving few-shot imitation learning as well as\nexploration and transfer in online RL on a variety of benchmark domains.\nVisualizations are available at https://sites.google.com/view/opal-iclr", + "authors": "Anurag Ajay, Aviral Kumar, Pulkit Agrawal, Sergey Levine, Ofir Nachum", + "published": "2020-10-26", + "updated": "2021-05-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2301.12876v2", + "title": "Guiding Online Reinforcement Learning with Action-Free Offline Pretraining", + "abstract": "Offline RL methods have been shown to reduce the need for environment\ninteraction by training agents using offline collected episodes. 
However, these\nmethods typically require action information to be logged during data\ncollection, which can be difficult or even impossible in some practical cases.\nIn this paper, we investigate the potential of using action-free offline\ndatasets to improve online reinforcement learning, name this problem\nReinforcement Learning with Action-Free Offline Pretraining (AFP-RL). We\nintroduce Action-Free Guide (AF-Guide), a method that guides online training by\nextracting knowledge from action-free offline datasets. AF-Guide consists of an\nAction-Free Decision Transformer (AFDT) implementing a variant of Upside-Down\nReinforcement Learning. It learns to plan the next states from the offline\ndataset, and a Guided Soft Actor-Critic (Guided SAC) that learns online with\nguidance from AFDT. Experimental results show that AF-Guide can improve sample\nefficiency and performance in online training thanks to the knowledge from the\naction-free offline dataset. Code is available at\nhttps://github.com/Vision-CAIR/AF-Guide.", + "authors": "Deyao Zhu, Yuhui Wang, J\u00fcrgen Schmidhuber, Mohamed Elhoseiny", + "published": "2023-01-30", + "updated": "2023-03-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.08066v5", + "title": "Exploiting Action Impact Regularity and Exogenous State Variables for Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning -- learning a policy from a batch of data --\nis known to be hard for general MDPs. These results motivate the need to look\nat specific classes of MDPs where offline reinforcement learning might be\nfeasible. In this work, we explore a restricted class of MDPs to obtain\nguarantees for offline reinforcement learning. The key property, which we call\nAction Impact Regularity (AIR), is that actions primarily impact a part of the\nstate (an endogenous component) and have limited impact on the remaining part\nof the state (an exogenous component). AIR is a strong assumption, but it\nnonetheless holds in a number of real-world domains including financial\nmarkets. We discuss algorithms that exploit the AIR property, and provide a\ntheoretical analysis for an algorithm based on Fitted-Q Iteration. Finally, we\ndemonstrate that the algorithm outperforms existing offline reinforcement\nlearning algorithms across different data collection policies in simulated and\nreal world environments where the regularity holds.", + "authors": "Vincent Liu, James R. Wright, Martha White", + "published": "2021-11-15", + "updated": "2023-05-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2203.05804v1", + "title": "Near-optimal Offline Reinforcement Learning with Linear Representation: Leveraging Variance Information with Pessimism", + "abstract": "Offline reinforcement learning, which seeks to utilize offline/historical\ndata to optimize sequential decision-making strategies, has gained surging\nprominence in recent studies. Due to the advantage that appropriate function\napproximators can help mitigate the sample complexity burden in modern\nreinforcement learning problems, existing endeavors usually enforce powerful\nfunction representation models (e.g. neural networks) to learn the optimal\npolicies. 
However, a precise understanding of the statistical limits with\nfunction representations, remains elusive, even when such a representation is\nlinear.\n Towards this goal, we study the statistical limits of offline reinforcement\nlearning with linear model representations. To derive the tight offline\nlearning bound, we design the variance-aware pessimistic value iteration\n(VAPVI), which adopts the conditional variance information of the value\nfunction for time-inhomogeneous episodic linear Markov decision processes\n(MDPs). VAPVI leverages estimated variances of the value functions to reweight\nthe Bellman residuals in the least-square pessimistic value iteration and\nprovides improved offline learning bounds over the best-known existing results\n(whereas the Bellman residuals are equally weighted by design). More\nimportantly, our learning bounds are expressed in terms of system quantities,\nwhich provide natural instance-dependent characterizations that previous\nresults are short of. We hope our results draw a clearer picture of what\noffline learning should look like when linear representations are provided.", + "authors": "Ming Yin, Yaqi Duan, Mengdi Wang, Yu-Xiang Wang", + "published": "2022-03-11", + "updated": "2022-03-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.18617v1", + "title": "ELA: Exploited Level Augmentation for Offline Learning in Zero-Sum Games", + "abstract": "Offline learning has become widely used due to its ability to derive\neffective policies from offline datasets gathered by expert demonstrators\nwithout interacting with the environment directly. Recent research has explored\nvarious ways to enhance offline learning efficiency by considering the\ncharacteristics (e.g., expertise level or multiple demonstrators) of the\ndataset. However, a different approach is necessary in the context of zero-sum\ngames, where outcomes vary significantly based on the strategy of the opponent.\nIn this study, we introduce a novel approach that uses unsupervised learning\ntechniques to estimate the exploited level of each trajectory from the offline\ndataset of zero-sum games made by diverse demonstrators. Subsequently, we\nincorporate the estimated exploited level into the offline learning to maximize\nthe influence of the dominant strategy. Our method enables interpretable\nexploited level estimation in multiple zero-sum games and effectively\nidentifies dominant strategy data. Also, our exploited level augmented offline\nlearning significantly enhances the original offline learning algorithms\nincluding imitation learning and offline reinforcement learning for zero-sum\ngames.", + "authors": "Shiqi Lei, Kanghoon Lee, Linjing Li, Jinkyoo Park, Jiachen Li", + "published": "2024-02-28", + "updated": "2024-02-28", + "primary_cat": "cs.GT", + "cats": [ + "cs.GT", + "cs.AI", + "cs.LG", + "cs.MA" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2110.09796v1", + "title": "Offline Reinforcement Learning with Value-based Episodic Memory", + "abstract": "Offline reinforcement learning (RL) shows promise of applying RL to\nreal-world problems by effectively utilizing previously collected data. Most\nexisting offline RL algorithms use regularization or constraints to suppress\nextrapolation error for actions outside the dataset. 
In this paper, we adopt a\ndifferent framework, which learns the V-function instead of the Q-function to\nnaturally keep the learning procedure within the support of an offline dataset.\nTo enable effective generalization while maintaining proper conservatism in\noffline learning, we propose Expectile V-Learning (EVL), which smoothly\ninterpolates between the optimal value learning and behavior cloning. Further,\nwe introduce implicit planning along offline trajectories to enhance learned\nV-values and accelerate convergence. Together, we present a new offline method\ncalled Value-based Episodic Memory (VEM). We provide theoretical analysis for\nthe convergence properties of our proposed VEM method, and empirical results in\nthe D4RL benchmark show that our method achieves superior performance in most\ntasks, particularly in sparse-reward tasks.", + "authors": "Xiaoteng Ma, Yiqin Yang, Hao Hu, Qihan Liu, Jun Yang, Chongjie Zhang, Qianchuan Zhao, Bin Liang", + "published": "2021-10-19", + "updated": "2021-10-19", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1907.04543v4", + "title": "An Optimistic Perspective on Offline Reinforcement Learning", + "abstract": "Off-policy reinforcement learning (RL) using a fixed offline dataset of\nlogged interactions is an important consideration in real world applications.\nThis paper studies offline RL using the DQN replay dataset comprising the\nentire replay experience of a DQN agent on 60 Atari 2600 games. We demonstrate\nthat recent off-policy deep RL algorithms, even when trained solely on this\nfixed dataset, outperform the fully trained DQN agent. To enhance\ngeneralization in the offline setting, we present Random Ensemble Mixture\n(REM), a robust Q-learning algorithm that enforces optimal Bellman consistency\non random convex combinations of multiple Q-value estimates. Offline REM\ntrained on the DQN replay dataset surpasses strong RL baselines. Ablation\nstudies highlight the role of offline dataset size and diversity as well as the\nalgorithm choice in our positive results. Overall, the results here present an\noptimistic view that robust RL algorithms trained on sufficiently large and\ndiverse offline datasets can lead to high quality policies. The DQN replay\ndataset can serve as an offline RL benchmark and is open-sourced.", + "authors": "Rishabh Agarwal, Dale Schuurmans, Mohammad Norouzi", + "published": "2019-07-10", + "updated": "2020-06-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2110.10905v1", + "title": "Efficient Robotic Manipulation Through Offline-to-Online Reinforcement Learning and Goal-Aware State Information", + "abstract": "End-to-end learning robotic manipulation with high data efficiency is one of\nthe key challenges in robotics. The latest methods that utilize human\ndemonstration data and unsupervised representation learning have proven to be a\npromising direction to improve RL learning efficiency.
The use of demonstration\ndata also allows \"warming-up\" the RL policies using offline data with imitation\nlearning or the recently emerged offline reinforcement learning algorithms.\nHowever, existing works often treat offline policy learning and online\nexploration as two separate processes, which are often accompanied by severe\nperformance drop during the offline-to-online transition. Furthermore, many\nrobotic manipulation tasks involve complex sub-task structures, which are very\nchallenging to be solved in RL with sparse reward. In this work, we propose a\nunified offline-to-online RL framework that resolves the transition performance\ndrop issue. Additionally, we introduce goal-aware state information to the RL\nagent, which can greatly reduce task complexity and accelerate policy learning.\nCombined with an advanced unsupervised representation learning module, our\nframework achieves great training efficiency and performance compared with the\nstate-of-the-art methods in multiple robotic manipulation tasks.", + "authors": "Jin Li, Xianyuan Zhan, Zixu Xiao, Guyue Zhou", + "published": "2021-10-21", + "updated": "2021-10-21", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2307.15690v1", + "title": "Benchmarking Offline Reinforcement Learning on Real-Robot Hardware", + "abstract": "Learning policies from previously recorded data is a promising direction for\nreal-world robotics tasks, as online learning is often infeasible. Dexterous\nmanipulation in particular remains an open problem in its general form. The\ncombination of offline reinforcement learning with large diverse datasets,\nhowever, has the potential to lead to a breakthrough in this challenging domain\nanalogously to the rapid progress made in supervised learning in recent years.\nTo coordinate the efforts of the research community toward tackling this\nproblem, we propose a benchmark including: i) a large collection of data for\noffline learning from a dexterous manipulation platform on two tasks, obtained\nwith capable RL agents trained in simulation; ii) the option to execute learned\npolicies on a real-world robotic system and a simulation for efficient\ndebugging. We evaluate prominent open-sourced offline reinforcement learning\nalgorithms on the datasets and provide a reproducible experimental setup for\noffline reinforcement learning on real systems.", + "authors": "Nico G\u00fcrtler, Sebastian Blaes, Pavel Kolev, Felix Widmaier, Manuel W\u00fcthrich, Stefan Bauer, Bernhard Sch\u00f6lkopf, Georg Martius", + "published": "2023-07-28", + "updated": "2023-07-28", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + } +] \ No newline at end of file