diff --git "a/related_53K/test_related_long_2404.16767v1.json" "b/related_53K/test_related_long_2404.16767v1.json" new file mode 100644--- /dev/null +++ "b/related_53K/test_related_long_2404.16767v1.json" @@ -0,0 +1,8636 @@ +[ + { + "url": "http://arxiv.org/abs/2404.16767v1", + "title": "REBEL: Reinforcement Learning via Regressing Relative Rewards", + "abstract": "While originally developed for continuous control problems, Proximal Policy\nOptimization (PPO) has emerged as the work-horse of a variety of reinforcement\nlearning (RL) applications including the fine-tuning of generative models.\nUnfortunately, PPO requires multiple heuristics to enable stable convergence\n(e.g. value networks, clipping) and is notorious for its sensitivity to the\nprecise implementation of these components. In response, we take a step back\nand ask what a minimalist RL algorithm for the era of generative models would\nlook like. We propose REBEL, an algorithm that cleanly reduces the problem of\npolicy optimization to regressing the relative rewards via a direct policy\nparameterization between two completions to a prompt, enabling strikingly\nlightweight implementation. In theory, we prove that fundamental RL algorithms\nlike Natural Policy Gradient can be seen as variants of REBEL, which allows us\nto match the strongest known theoretical guarantees in terms of convergence and\nsample complexity in the RL literature. REBEL can also cleanly incorporate\noffline data and handle the intransitive preferences we frequently see in\npractice. Empirically, we find that REBEL provides a unified approach to\nlanguage modeling and image generation with stronger or similar performance as\nPPO and DPO, all while being simpler to implement and more computationally\ntractable than PPO.", + "authors": "Zhaolin Gao, Jonathan D. Chang, Wenhao Zhan, Owen Oertell, Gokul Swamy, Kiant\u00e9 Brantley, Thorsten Joachims, J. Andrew Bagnell, Jason D. Lee, Wen Sun", + "published": "2024-04-25", + "updated": "2024-04-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CL", + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Offline AND Reinforcement AND Learning", + "gt": "Policy Gradients. Policy gradient (PG) methods (Nemirovsk\u0133 and Yudin, 1983; Williams, 1992; Sutton et al., 1999; Konda and Tsitsiklis, 1999; Kakade, 2001; Schulman et al., 2017) are a prominent class of RL algorithms due to their direct, gradient-based policy optimization, robustness to model mis-specification (Agarwal et al., 2020), and scalability to modern AI applications from fine-tuning LLMs (Stiennon et al., 2022) to optimizing text-to-image generators (Oertell et al., 2024). 18 Broadly speaking, we can taxonomize PG methods into two families. The first family is based on REINFORCE (Williams, 1992) and often includes variance reduction techniques (Kool et al., 2019; Richter et al., 2020; Zhu et al., 2023). While prior work by Ahmadian et al. (2024) has shown that REINFORCE-based approaches can outperform more complex RL algorithms like PPO on LLM fine-tuning tasks like TL;DR, we find that a properly optimized version of PPO still out-performs a REINFORCE baseline. The second family is adaptive PG techniques that precondition the policy gradient (usually with the inverse of the Fisher Information Matrix) to ensure it is covariant to re-parameterizations of the policy, which include NPG (Kakade, 2001; Bagnell and Schneider, 2003) and its practical approximations like TRPO (Schulman et al., 2015a) and PPO (Schulman et al., 2017). 
Intuitively, the preconditioning ensures that we make small changes in terms of action distributions, rather than in terms of the actual policy parameters, leading to faster and more stable convergence. Unfortunately, computing and then inverting the Fisher Information Matrix is computationally intensive and therefore we often resort to approximations in practice, as done in TRPO. However, these approximations are still difficult to apply to large-scale generative models, necessitating even coarser approximations like PPO. In contrast, REBEL does not need any such approximations to be implemented at scale, giving us a much closer connection between theory and practice. Reward Regression. The heart of REBEL is a novel reduction from RL to iterative squared loss regression. While using regression to fit either the reward (Peters and Schaal, 2007) or the value (Peng et al., 2019) targets which are then used to extract a policy have previously been explored, our method instead takes a page from DPO (Rafailov et al., 2023) to implicitly parameterize the reward regressor in terms of the policy. This collapses the two stage procedure of prior methods into a single regression step. Preference Fine-Tuning (PFT) of Generative Models. RL has attracted renewed interest due to its central role in \u201caligning\u201d language models \u2013 i.e., adapting their distribution of prompt completions towards the set of responses preferred by human raters. One family of techniques for PFT, often referred to as Reinforcement Learning from Human Feedback (RLHF) involves first fitting a reward model (i.e. a classifier) to the human preference data and then using this model to provide reward values to a downstream RL algorithm (often PPO) (Christiano et al., 2017; Ziegler et al., 2020). LLMs fine-tuned by this procedure include GPT-N (OpenAI, 2023), Claude-N (Anthropic, 2024), and Llama-N (Meta, 2024). Similar approaches have proved beneficial for tasks like summarization (Stiennon et al., 2022), question answering (Nakano et al., 2022), text-to-image generation (Lee et al., 2023), and instruction following (Ouyang et al., 2022). Another family of techniques for PFT essentially treats the problem as supervised learning and uses a variety of ranking loss functions. It includes DPO (Rafailov et al., 2023), IPO (Azar et al., 2023), and KTO (Ethayarajh et al., 2023). These techniques are simpler to implement as they remove components like an explicit reward model, value network, and on-policy training from the standard RLHF setup. However, recent work finds their performance to be lesser than that of on-policy methods (Lambert et al., 2024; Tajwar et al., 2024), which agrees with our findings. This is perhaps caused by their lack of interaction during training, leading to the well-known covariate shift/compounding error issue (Ross et al., 2011; Swamy et al., 2021) and the associated lower levels of performance. The third family of PFT techniques combines elements from the previous two: it involves running an offline algorithm iteratively, collecting on-policy preference feedback from either a supervisor model (Rosset et al., 2024; Xiong et al., 2024; Guo et al., 2024) or from a preference model fit on human data 19 (Calandriello et al., 2024). All of these approaches can be considered instantiations of the general SPO reduction proposed by Swamy et al. (2024), which itself can be thought of as a preference-based variant of DAgger (Ross et al., 2011). Recent work by Tajwar et al. 
(2024) confirms the empirical strength of these techniques. Our approach fits best into this family of techniques \u2013 we also iteratively update our model by solving a sequence of supervised learning problems over on-policy datasets. However, REBEL comes with several key differentiating factors from the prior work. First, we can run REBEL with datasets consisting of a mixture of on-policy and off-policy data with strong guarantees, enabling hybrid training, as previously explored in the RL (Song et al., 2023b; Ball et al., 2023; Zhou et al., 2023) and inverse RL (Ren et al., 2024) literature. Second, unlike all of the aforementioned works that regularize to the initial policy \ud835\udf0b0 during updates, we perform conservative updates by regularizing \ud835\udf0b\ud835\udc61+1 to \ud835\udf0b\ud835\udc61. Thus, for the prior work, it is difficult to prove convergence or monotonic improvement as the current policy can just bounce around a ball centered at \ud835\udf0b0, a well-known issue in the theory of approximate policy iteration (Kakade and Langford, 2002; Munos, 2003). In contrast, by incorporating the prior policy\u2019s probabilities into our regression problem, we are able to prove stronger guarantees for REBEL.", + "pre_questions": [], + "main_content": "Introduction The generality of the reinforcement learning (RL) paradigm is striking: from continuous control problems (Kalashnikov et al., 2018) to, recently, the fine-tuning of generative models (Stiennon et al., 2022; Ouyang et al., 2022), RL has enabled concrete progress across a variety of decision-making tasks. Specifically, when it comes to fine-tuning generative models, Proximal Policy Optimization (PPO, Schulman et al. (2017)) has emerged as the de-facto RL algorithm of choice, from language models (LLMs) (Ziegler et al., 2020; Stiennon et al., 2022; Ouyang et al., 2022; Touvron et al., 2023) to image generative models (Black et al., 2023; Fan et al., 2024; Oertell et al., 2024). If we take a step back however, it is odd that we are using an algorithm designed for optimizing two-layer networks for continuous control tasks from scratch for fine-tuning the billions of parameters \u2217{zg292, jdc396, ojo2, kdb82, ws455}@cornell.edu, tj@cs.cornell.edu \u2020{wenhao.zhan, jasonlee}@princeton.edu \u2021{gswamy,bagnell2}@andrew.cmu.edu 1 arXiv:2404.16767v1 [cs.LG] 25 Apr 2024 Image Generation Language Modeling ( ) RLHF reinforcement learning regression REBEL ( ) ( ) x y x y Figure 1: We present REBEL: a simple and scalable RL algorithm that performs policy optimization via iteratively regressing the difference in rewards directly in terms of the policy. This allows us to eliminate much of the complexity (e.g. value functions, clipping) of algorithms like PPO (Schulman et al., 2017). We apply REBEL to problems in both image generation and language modeling and find that despite its conceptual and implementation-level simplicity, REBEL is able to match or sometimes outperform the performance of PPO while out-performing purely offline techniques like DPO (Rafailov et al., 2023). of modern-day generative models. In the continuous control setting, the randomly initialized neural networks and the possible stochasticity in the dynamics necessitate variance reduction through a learned value function as a baseline (Schulman et al., 2015b), while clipping updates is important to limit distribution shift from iteration to iteration (Kakade and Langford, 2002). 
This means that when applied to generative model fine-tuning, we need to store four models in memory simultaneously (the policy, the reference policy, the critic, and the reward model), each with billions of parameters. Furthermore, we often add a KL regularization to the base model for fine-tuning, making explicit clipping unnecessary nor advisable, as pointed out by Ahmadian et al. (2024). Even outside of the generative modeling context, PPO is notorious for the wide range of performances measured, with differences being attributed to seemingly inconsequential implementation details (Henderson et al., 2019; Engstrom et al., 2020). This begs the question: Are there simpler algorithms that scale to modern RL applications? Our answer is REBEL: an algorithm that reduces the problem of reinforcement learning to solving a sequence of squared loss regression problems on iteratively collected datasets. The regression problems directly use policies to predict the difference in rewards. This allows us to eliminate the complexity of value functions, avoid heuristics like clipping, and scale easily to problems in both language modeling and image generation. Our key insight is that regressing relative rewards via policies directly on a sequence of iteratively collected datasets implicitly enables policy improvement. Rather than being a heuristic, REBEL comes with strong guarantees in theory and can be seen as a strict generalization of classical techniques (e.g., NPG) in reinforcement learning. Furthermore, REBEL cleanly incorporates offline datasets when available, can be extended to robustly handle intransitive preferences (Swamy et al., 2024), and empirically out-performs techniques like PPO 2 and DPO (Rafailov et al., 2023) in language generation and has a faster convergence with a similar asymptotic performance in image generation. More explicitly, our key contributions are four-fold: 1. We propose REBEL, a simple and scalable RL algorithm. REBEL finds a near-optimal policy by solving a sequence of least square regression problems on iteratively collected datasets. Each regression problem involves using a policy-parameterized regressor to predict the difference in rewards across trajectories sampled from the dataset. This dataset can be generated in a purely on-policy fashion or can incorporate offline data, enabling hybrid training. Furthermore, REBEL can be easily extended to handle intransitive preferences. 2. We connect REBEL to classical RL methods. We show that REBEL is a generalization of the foundational Natural Policy Gradient (NPG, Kakade (2001)) algorithm \u2013 applying the Gauss-Newton algorithm to the sequence of regression problems that REBEL solves recovers NPG. However, by instead applying simpler first-order optimization techniques, we are able to avoid computing the Fisher Information Matrix and enjoy a variance reduction effect. Thus, REBEL can be understood as a generalization of NPG while being much more scalable. 3. We analyze the convergence properties of REBEL. We prove via a direct reduction-based analysis that as long as we can solve the regression problem well at each iteration, we will be able to compete with any policy covered by the iteratively collected datasets (matching the strongest known results in the agnostic RL). These problems involve predicting the difference in rewards between trajectories in our dataset. 
We expect this problem to be well-solved in practice because our class of regressors is isomorphic to a class of policies that is highly expressive for the applications we consider (i.e. flexible Transformer models). 4. We evaluate REBEL both on language modeling and image generation tasks. We find that the on-policy version of REBEL outperforms PPO and DPO on language modeling and has similar performance for image generation tasks. On the TL;DR summarization task, we show REBEL scales well by finetuning a 6.9B parameter model. For text-guided image generation, REBEL optimizes a consistency model that converges to a similar performance as PPO. In short, REBEL is a simple and scalable algorithm that enjoys strong theoretical guarantees and empirical performance. We believe it is a suitable answer to the question raised above. 2 REBEL: REgression to RElative REward Based RL We first outline the notation used throughout the paper. 2.1 Notation We consider the Contextual Bandit formulation (Langford and Zhang, 2007) of RL which has been used to formalize the generation process of models like LLMs (Rafailov et al., 2023; Ramamurthy et al., 2022; Chang et al., 2023) and Diffusion Models (Black et al., 2023; Fan et al., 2024; Oertell et al., 2024) due to the determinism of the transitions. More explicitly, in the deterministic transition setting, explicit states are not required as they can be equivalently represented by a sequence of 3 actions. Furthermore, the entire sequence of actions can be considered as a single \u201carm\u201d in a bandit problem with an exponentially large action space. We denote by (\ud835\udc65, \ud835\udc66) a prompt/response pair with \ud835\udc65\u2208X as a prompt and \ud835\udc66\u2208Y as a response (e.g., a sequence of tokens, or in general a sequence of actions). We assume access to a reward function \ud835\udc5f(\ud835\udc65, \ud835\udc66) from which we can query for reward signals (the exact form of \ud835\udc5fdoes not need to be known). Querying \ud835\udc5fat (\ud835\udc65, \ud835\udc66) will return a scalar \ud835\udc5f(\ud835\udc65, \ud835\udc66) measuring the quality of the response. Such a reward function could be a pre-defined metric (e.g., Rouge score against human responses) or it could be learned from an offline human demonstration or preference data (e.g., the RLHF paradigm (Christiano et al., 2017; Ziegler et al., 2020)), as explored in our experiments. Denote by \ud835\udf0b\u2208X \u21a6\u2192\u0394(\ud835\udc4c), a policy (e.g. LLM) that maps from a prompt \ud835\udc65to a distribution over the response space Y. We use \ud835\udf0cto denote the distribution over prompts (i.e. initial states / contexts) \ud835\udc65. Throughout the paper, we use \ud835\udf0b\ud835\udf03(\ud835\udc66|\ud835\udc65) to denote a parameterized policy with parameter \ud835\udf03(e.g., a neural network policy). At times we interchangeably use \ud835\udf0b\ud835\udc61and \ud835\udf0b\ud835\udf03\ud835\udc61when it is clear from the context. We emphasize that while we focus on the bandit formulation for notation simplicity, the algorithms proposed here can be applied to any deterministic MDP where \ud835\udc65is the initial state and the trajectory \ud835\udc66consists of the sequence of actions. 
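To make the notation concrete, the following is a minimal Python sketch of the contextual-bandit interaction assumed above (sample a prompt x ~ ρ, a completion y ~ π(·|x), then query r(x, y)); `prompt_dataset`, `policy.generate`, and `reward_fn` are hypothetical stand-ins, not part of the released code:

```python
# Minimal sketch of the contextual-bandit interface from Section 2.1.
# `prompt_dataset`, `policy`, and `reward_fn` are hypothetical stand-ins for a
# prompt distribution rho, a generative policy pi, and a black-box reward r.
import random

def rollout(prompt_dataset, policy, reward_fn, batch_size=4):
    """Collect (x, y, r(x, y)) tuples with x ~ rho and y ~ pi(.|x)."""
    batch = []
    for _ in range(batch_size):
        x = random.choice(prompt_dataset)   # prompt x ~ rho
        y = policy.generate(x)              # full completion y ~ pi(.|x)
        r = reward_fn(x, y)                 # scalar reward; the exact form of r is not needed
        batch.append((x, y, r))
    return batch
```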
At each iteration of all algorithms, our goal will be to solve the following KL-constrained RL problem: \ud835\udf0b\ud835\udc61+1 = argmax \ud835\udf0b E\ud835\udc65,\ud835\udc66\u223c\ud835\udf0b(\u00b7|\ud835\udc65)\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u22121 \ud835\udf02E\ud835\udc65KL (\ud835\udf0b(\u00b7|\ud835\udc65)||\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65)) . (1) Intuitively, this can be thought of asking for the optimizer to fine-tune the policy \ud835\udf0b\ud835\udc61+1 according to \ud835\udc5f while staying close to some baseline policy \ud835\udf0b\ud835\udc61. 2.2 Deriving REBEL: REgression to RElative REward Based RL From Ziebart et al. (2008), we know that there exists a closed-form solution to the above minimum relative entropy problem (Eq. 1, Gr\u00fcnwald and Dawid (2004)): \u2200\ud835\udc65, \ud835\udc66: \ud835\udf0b\ud835\udc61+1(\ud835\udc66|\ud835\udc65) = \ud835\udf0b\ud835\udc61(\ud835\udc66|\ud835\udc65) exp(\ud835\udf02\ud835\udc5f(\ud835\udc65, \ud835\udc66)) \ud835\udc4d(\ud835\udc65) ; \ud835\udc4d(\ud835\udc65) = \u2211\ufe01 \ud835\udc66 \ud835\udf0b\ud835\udc61(\ud835\udc66|\ud835\udc65) exp(\ud835\udf02\ud835\udc5f(\ud835\udc65, \ud835\udc66)). (2) As first pointed out by Rafailov et al. (2023), observe that we can invert Eq. 2 and write the reward as a function of the policy, i.e. the \u201cDPO Trick\u201d: \u2200\ud835\udc65, \ud835\udc66: \ud835\udc5f(\ud835\udc65, \ud835\udc66) = 1 \ud835\udf02 \u0012 ln(\ud835\udc4d(\ud835\udc65)) + ln \u0012 \ud835\udf0b\ud835\udc61+1(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udc61(\ud835\udc66|\ud835\udc65) \u0013\u0013 . (3) As soon as X and Y become large, we can no longer guarantee the above expression holds exactly at all (\ud835\udc65, \ud835\udc66) and therefore need to turn our attention to choosing a policy such that Eq. 3 is approximately true. We propose using a simple square loss objective between the two sides of Eq. 3 to measure the goodness of a policy, i.e. reducing RL to a regression problem: \u0012 \ud835\udc5f(\ud835\udc65, \ud835\udc66) \u22121 \ud835\udf02 \u0012 ln(\ud835\udc4d(\ud835\udc65)) + ln \u0012 \ud835\udf0b\ud835\udc61+1(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udc61(\ud835\udc66|\ud835\udc65) \u0013\u0013\u00132 . (4) 4 Algorithm 1 REgression to RElative REward Based RL (REBEL) 1: Input: Reward \ud835\udc5f, policy class \u03a0 = {\ud835\udf0b\ud835\udf03}, base distribution \ud835\udf07, learning rate \ud835\udf02 2: Initialize policy \ud835\udf0b\ud835\udf030. 3: for \ud835\udc61= 0 to \ud835\udc47\u22121 do 4: // Base distribution \ud835\udf07can either be an offline dataset or \ud835\udf0b\ud835\udc61. 
5: Collect dataset D\ud835\udc61= {\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032} where \ud835\udc65\u223c\ud835\udf0c, \ud835\udc66\u223c\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65), \ud835\udc66\u2032 \u223c\ud835\udf07(\u00b7|\ud835\udc65) 6: Solve square loss regression problem: \ud835\udf03\ud835\udc61+1 = argmin \ud835\udf03 \u2211\ufe01 (\ud835\udc65,\ud835\udc66,\ud835\udc66\u2032)\u2208D\ud835\udc61 \u00121 \ud835\udf02 \u0012 ln \ud835\udf0b\ud835\udf03(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) \u2212ln \ud835\udf0b\ud835\udf03(\ud835\udc66\u2032|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66\u2032|\ud835\udc65) \u0013 \u2212(\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2212\ud835\udc5f(\ud835\udc65, \ud835\udc66\u2032)) \u00132 (9) 7: end for Unfortunately, this loss function includes the partition function \ud835\udc4d(\ud835\udc65), which can be challenging to approximate over large input / output domains. However, observe that \ud835\udc4d(\ud835\udc65) only depends on \ud835\udc65and not \ud835\udc66. Thus, if we have access to paired samples, i.e. (\ud835\udc65, \ud835\udc66) and (\ud835\udc65, \ud835\udc66\u2032), we can instead regress the difference in rewards to eliminate this term from our objective: \u0012 (\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2212\ud835\udc5f(\ud835\udc65, \ud835\udc66\u2032)) \u22121 \ud835\udf02 \u0012 ln \u0012 \ud835\udf0b\ud835\udc61+1(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udc61(\ud835\udc66|\ud835\udc65) \u0013 \u2212ln \u0012 \ud835\udf0b\ud835\udc61+1(\ud835\udc66\u2032|\ud835\udc65) \ud835\udf0b\ud835\udc61(\ud835\udc66\u2032|\ud835\udc65) \u0013\u0013\u00132 . (5) Of course, we need to evaluate this loss function on some distribution of samples. In particular, we propose using an on-policy dataset D\ud835\udc61= {\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032} with \ud835\udc65\u223c\ud835\udf0c, \ud835\udc66\u223c\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65), \ud835\udc66\u2032 \u223c\ud835\udf07(\u00b7|\ud835\udc65), where \ud835\udf07is some base distribution. The base distribution \ud835\udf07can either be a fixed offline dataset (e.g. the instruction fine-tuning dataset) or \ud835\udf0b\ud835\udc61itself. Thus, the choice of base distribution \ud835\udf07determines whether REBEL is hybrid or fully online. Putting it all together, we arrive at our core REBEL objective: \u2211\ufe01 (\ud835\udc65,\ud835\udc66,\ud835\udc66\u2032)\u2208D\ud835\udc61 \u0012 (\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2212\ud835\udc5f(\ud835\udc65, \ud835\udc66\u2032)) \u22121 \ud835\udf02 \u0012 ln \u0012 \ud835\udf0b\ud835\udc61+1(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udc61(\ud835\udc66|\ud835\udc65) \u0013 \u2212ln \u0012 \ud835\udf0b\ud835\udc61+1(\ud835\udc66\u2032|\ud835\udc65) \ud835\udf0b\ud835\udc61(\ud835\udc66\u2032|\ud835\udc65) \u0013\u0013\u00132 . (6) To recap, given a pair of completions \ud835\udc66, \ud835\udc66\u2032 to a prompt \ud835\udc65, REBEL attempt to fit the relative reward \ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2212\ud835\udc5f(\ud835\udc65, \ud835\udc66\u2032) (7) by optimizing over a class of predictors of the form 1 \ud835\udf02 \u0012 ln \ud835\udf0b\ud835\udf03(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) \u2212ln \ud835\udf0b\ud835\udf03(\ud835\udc66\u2032|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66\u2032|\ud835\udc65) \u0013 . 
(8) Critically, observe that if we were able to perfectly solve this regression problem, we would indeed recover the optimal solution to the KL-constrained RL problem we outlined in Eq. 1. While the above update might seem somewhat arbitrary at first glance, it has deep connections to prior work in the literature that illuminate its strengths over past techniques. We now discuss some of them. 3 Understanding REBEL as an Adaptive Policy Gradient We begin by recapping the foundational algorithms for policy optimization before situating REBEL within this space of techniques. 5 3.1 Adaptive Gradient Algorithms for Policy Optimization In this section, we give a brief overview of three adaptive gradient algorithms: Mirror Descent (MD), Natural Policy Gradient (NPG), and Proximal Policy Optimization (PPO). We discuss why they are preferable to their non-adaptive counterparts (Gradient Descent (GD) and Policy Gradient (PG)) and the connections between them. Mirror Descent. If X and Y are small discrete spaces (i.e. we are in the tabular setting), we can used the closed-form expression for the minimum relative entropy problem (Eq. 2). This is equivalent to the classic Mirror Descent (MD) algorithm with KL as the Bregman divergence. This update procedure is also sometimes known as soft policy iteration (Ziebart et al., 2008). Note that it does not even involve a parameterized policy and is therefore manifestly covariant. MD ensures a 1/\ud835\udc47convergence rate, i.e., after \ud835\udc47iterations, it must find a policy \u02c6 \ud835\udf0b, such that E\ud835\udc65,\ud835\udc66\u223c\ud835\udf0b\u2605(.|\ud835\udc65)\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2212E\ud835\udc65,\ud835\udc66\u223c\u02c6 \ud835\udf0b(.|\ud835\udc65)\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2264\ud835\udc42(1/\ud835\udc47). In particular, the convergence is almost dimension-free: the convergence rate scales logarithmically with respect to the size of the Y space. Note that gradient ascent will not enjoy such a dimension-free rate when optimizing over the simplex. When sup\ud835\udc65,\ud835\udc66|\ud835\udc5f(\ud835\udc65, \ud835\udc66)| is bounded, we can show that the KL divergence between two policies, i.e., KL(\ud835\udf0b\ud835\udc61+1(\u00b7|\ud835\udc65)||\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65)), is also bounded, ensuring \ud835\udf0b\ud835\udc61+1 stay close to \ud835\udf0b\ud835\udc61. One can also show monotonic policy improvement, i.e., E\ud835\udc65,\ud835\udc66\u223c\ud835\udf0b\ud835\udc61+1\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2265E\ud835\udc65,\ud835\udc66\u223c\ud835\udf0b\ud835\udc61\ud835\udc5f(\ud835\udc65, \ud835\udc66). Foreshadowing a key point we will soon expound upon, both NPG and PPO can be considered approximations of this idealized tabular policy update procedure. Natural Policy Gradient. When Y and X are large, we cannot simply enumerate all \ud835\udc65and \ud835\udc66. Thus, we need to use a function to approximate \ud835\udf0b, which makes it impossible to exactly implement Eq. 2. Let us use \ud835\udf0b\ud835\udf03to denote a parameterized policy with parameter \ud835\udf03(e.g. the weights of a transformer). The Natural Policy Gradient (NPG, Kakade (2001)) approximates the KL in Equation 1 via its second-order Taylor expansion, whose Hessian is known as the Fisher Information Matrix (FIM, Bagnell and Schneider (2003)), i.e. 
E\ud835\udc65KL(\ud835\udf0b\ud835\udf03(\u00b7|\ud835\udc65)||\ud835\udf0b\ud835\udf03\ud835\udc61(\u00b7|\ud835\udc65)) \u2248(\ud835\udf03\u2212\ud835\udf03\ud835\udc61)\u22a4E\ud835\udc65,\ud835\udc66\u223c\ud835\udf0b\ud835\udf03\ud835\udc61(\u00b7|\ud835\udc65) \u0002 \u2207ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65)\u2207ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65)\u22a4\u0003 | {z } Fisher Information Matrix \ud835\udc39\ud835\udc61 (\ud835\udf03\u2212\ud835\udf03\ud835\udc61). The NPG update can be derived by plugging in this approximation to Eq. 1, further approximating the E\ud835\udc65,\ud835\udc66\u223c\ud835\udf0b\ud835\udf03(\u00b7|\ud835\udc65)\ud835\udc5f(\ud835\udc65, \ud835\udc66) by its first order Taylor expansion around \ud835\udf03\ud835\udc61, and finding the root of the resulting quadratic form: \ud835\udf03\ud835\udc61+1 = \ud835\udf03\ud835\udc61+ \ud835\udf02\ud835\udc39\u2020 \ud835\udc61 \u0010 E\ud835\udc65,\ud835\udc66\u223c\ud835\udf0b\ud835\udf03\ud835\udc61(\u00b7|\ud835\udc65)\u2207ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65)\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u0011 (10) where \ud835\udc39\u2020 \ud835\udc61is pseudo-inverse of \ud835\udc39\ud835\udc61, and E\ud835\udc65,\ud835\udc66\u223c\ud835\udf0b\ud835\udf03\ud835\udc61(\u00b7|\ud835\udc65)\u2207ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65)\ud835\udc5f(\ud835\udc65, \ud835\udc66) is the standard policy gradient (i.e. REINFORCE (Williams, 1992)). As mentioned above, this update procedure can be understood as performing gradient updates in the local geometry induced by the Fisher information matrix, which ensures that we are taking small steps in policy space rather than in parameter space. Conversely, unlike regular gradient descent methods (i.e., PG), NPG allows us to make large changes in the parameter space \u0398, as long as the resulting two policies are close to each other in terms of KL divergence. This property allows NPG to make more aggressive and adaptive updates in the parameter space of the policy as well as be invariant to linear transformations of the parameters. Theoretically, Agarwal et al. (2021a) show that NPG with softmax parameterization converges at the 1/\ud835\udc47rate in a dimension-free manner, provably faster than the standard PG under the same setup. Empirically, the 6 superior convergence speed of NPG compared to that of PG was observed in its original exploration (Kakade, 2001; Bagnell and Schneider, 2003), as well as in follow-up work like TRPO (Schulman et al., 2015a). Critically, while elegant in theory, NPG, unfortunately, does not scale to modern generative models due to the need for computing the Fisher matrix inverse either explicitly or implicitly via the Hessian-vector matrix product trick. Proximal Policy Optimization. To address the scalability of NPG, Schulman et al. (2017) proposes Proximal Policy Optimization (PPO). Rather than explicitly computing the KL divergence between policies or approximating it via a Taylor expansion, PPO takes a more direct route and uses clipped updates with the hope of controlling the action probability deviation from \ud835\udf0b\ud835\udf03\ud835\udc61+1 to \ud835\udf0b\ud835\udf03\ud835\udc61, i.e. 
\ud835\udf03\ud835\udc61+1 := argmax \ud835\udf03 E\ud835\udc65,\ud835\udc66\u223c\ud835\udf0b\ud835\udf03\ud835\udc61(\u00b7|\ud835\udc65)clip \u0012 \ud835\udf0b\ud835\udf03(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) ; 1 \u2212\ud835\udf16, 1 + \ud835\udf16 \u0013 \ud835\udc5f(\ud835\udc65, \ud835\udc66). (11) Prima facie, this update follows the underlying intuition of NPG: allow big and adaptive changes in the policy\u2019s parameters \ud835\udf03, as long as the corresponding action probabilities do not change too much. This perhaps explains the superiority of PPO over vanilla REINFORCE in domains like continuous control. Unfortunately, under closer scrutiny, it becomes apparent that PPO-style clipped updates neither guarantee closeness to the prior policy nor have NPG-style adaptivity. While the clipping operator can set the gradient to be zero at samples (\ud835\udc65, \ud835\udc66) where \ud835\udf0b\ud835\udf03\ud835\udc61+1(\ud835\udc66|\ud835\udc65) is much larger or smaller than \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65), it cannot actually guarantee \ud835\udf0b\ud835\udf03\ud835\udc61+1 staying close to \ud835\udf0b\ud835\udf03\ud835\udc61, a phenomenon empirically observed in prior work (Hsu et al., 2020). Furthermore, hard clipping is not adaptive \u2013 it treats all (\ud835\udc65, \ud835\udc66) equally and clips whenever the ratio is outside of a fixed range. In contrast, constraining the KL divergence to the prior policy allows one to vary the ratio \ud835\udf0b(\ud835\udc66|\ud835\udc65)/\ud835\udf0b\ud835\udc61(\ud835\udc66|\ud835\udc65) at different (\ud835\udc65, \ud835\udc66), as long as the total KL divergence across the state space is small. Lastly, clipping reduces the effective size of a batch of training examples and thus wastes training samples. A REBEL With a Cause. Our algorithm REBEL addresses the limitations of NPG (scalability) and PPO (lack of conservativity or adaptivity) from above. First, unlike NPG, it does not rely on the Fisher information matrix at all and can easily scale to modern LLM applications, yet (as we will discuss below) can be interpreted as a generalization of NPG. Second, in contrast to PPO, it doesn\u2019t have unjustified heuristics and thus enjoys strong convergence and regret guarantees just like NPG. 3.2 Connections between REBEL and MD / NPG We now sketch a series of connections between REBEL and the methods outlined above. Exact REBEL is Mirror Descent. First, to build intuition, we interpret our algorithm\u2019s behavior under the assumption that the least square regression optimization returns the exact Bayes Optimal solution (i.e., our learned predictor achieves zero prediction error everywhere): \u2200\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032 : 1 \ud835\udf02 \u0012 ln \ud835\udf0b\ud835\udf03\ud835\udc61+1(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) \u2212ln \ud835\udf0b\ud835\udf03\ud835\udc61+1(\ud835\udc66\u2032|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66\u2032|\ud835\udc65) \u0013 = \ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2212\ud835\udc5f(\ud835\udc65, \ud835\udc66\u2032) (12) Conditioned on Eq. 
12 being true, a few lines of algebraic manipulation reveals that there must exist a function \ud835\udc50(\ud835\udc65) which is independent of \ud835\udc66, such that: \u2200\ud835\udc65, \ud835\udc66: 1 \ud835\udf02ln \ud835\udf0b\ud835\udf03\ud835\udc61+1(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) = \ud835\udc5f(\ud835\udc65, \ud835\udc66) + \ud835\udc50(\ud835\udc65). 7 Taking an exp on both sides and re-arrange terms, we get: \u2200\ud835\udc65, \ud835\udc66: \ud835\udf0b\ud835\udf03\ud835\udc61+1(\ud835\udc66|\ud835\udc65) \u221d\ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) exp (\ud835\udf02\ud835\udc5f(\ud835\udc65, \ud835\udc66)) . In other words, under the strong assumption that least square regression returns a point-wise accurate estimator (i.e., Eq. 12), we see the REBEL recovers the exact MD update, which gives it (a) a fast 1/\ud835\udc47convergence rate (Shani et al., 2020; Agarwal et al., 2021a), (b) conservativity, i.e., max\ud835\udc65KL(\ud835\udf0b\ud835\udc61+1(\u00b7|\ud835\udc65)||\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65)) is bounded as long as max\ud835\udc65,\ud835\udc66|\ud835\udc5f(\ud835\udc65, \ud835\udc66)| is bounded, and (c) monotonic policy improvement via the NPG standard analysis (Agarwal et al., 2021a). NPG is Approximate REBEL with Gauss-Newton Updates. We provide another interpretation of REBEL by showing that NPG (Eq. 10) can be understood as a special case of REBEL where the least square problem in Eq. 9 is approximately solved via a single iteration of the Gauss-Newton algorithm. As for any application of Gauss-Newton, we start by approximating our predictor 1 \ud835\udf02ln \ud835\udf0b\ud835\udf03(\ud835\udc66|\ud835\udc65)/\ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) by its first order Taylor expansion at \ud835\udf03\ud835\udc61: 1 \ud835\udf02 \u0000ln \ud835\udf0b\ud835\udf03(\ud835\udc66|\ud835\udc65) \u2212ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65)\u0001 \u22481 \ud835\udf02\u2207\ud835\udf03ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65)\u22a4(\ud835\udf03\u2212\ud835\udf03\ud835\udc61), where \u2248indicates that we ignore higher order terms in the expansion. If we \ud835\udeff:= \ud835\udf03\u2212\ud835\udf03\ud835\udc61and replace 1 \ud835\udf02 \u0000ln \ud835\udf0b\ud835\udf03(\ud835\udc66|\ud835\udc65) \u2212ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65)\u0001 by its above first order approximation in Eq. 9, we arrive at the following quadratic form: min \ud835\udeffE\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\ud835\udf0b\ud835\udf03\ud835\udc61(\u00b7|\ud835\udc65),\ud835\udc66\u2032\u223c\ud835\udf07(\u00b7|\ud835\udc65) \u00121 \ud835\udf02 \u0000\u2207\ud835\udf03ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) \u2212\u2207\ud835\udf03ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66\u2032|\ud835\udc65)\u0001\u22a4\ud835\udeff\u2212(\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2212\ud835\udc5f(\ud835\udc65, \ud835\udc66\u2032)) \u00132 . 
(13) Further simplifying notation, we denote the uniform mixture of \ud835\udf0b\ud835\udc61 and \ud835\udf07 as \ud835\udf0b\ud835\udc5a\ud835\udc56\ud835\udc65(\u00b7|\ud835\udc65) := (\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65) + \ud835\udf07(\u00b7|\ud835\udc65))/2 and the Fisher information matrix \ud835\udc39\ud835\udc61averaged under said mixture as: \ud835\udc39\ud835\udc61= E\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\ud835\udf0b\ud835\udc5a\ud835\udc56\ud835\udc65(\u00b7|\ud835\udc65) h \u2207\ud835\udf03ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) \u0000\u2207\ud835\udf03ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65)\u0001\u22a4i . Solving the above least square regression to obtain a minimum norm solution, we have the following claim. Claim 1. The minimum norm minimizer \ud835\udeff\u2605of the least squares problem in Eq. 13 recovers an advantage-based variant of the NPG update: \ud835\udeff\u2605:= \ud835\udf02\ud835\udc39\u2020 \ud835\udc61 \u0000E\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\ud835\udf0b\ud835\udc5a\ud835\udc56\ud835\udc65(\u00b7|\ud835\udc65)\u2207\ud835\udf03ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65)[\ud835\udc34\ud835\udf0b\ud835\udc61(\ud835\udc65, \ud835\udc66)]\u0001 , where \ud835\udc39\u2020 \ud835\udc61is pseudo-inverse of \ud835\udc39\ud835\udc61, and the advantage is defined as \ud835\udc34\ud835\udf0b\ud835\udc61(\ud835\udc65, \ud835\udc66) := \ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2212 E\ud835\udc66\u2032\u223c\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65)\ud835\udc5f(\ud835\udc65, \ud835\udc66). The proof of this claim is deferred to Appendix A. Observe that in REBEL, we never explicitly compute the advantage \ud835\udc34\ud835\udf0b\ud835\udc61. However, applying Gauss-Newton to our objective leads to an advantage-based NPG (rather than the traditional \ud835\udc44-function based NPG, e.g., Q-NPG from Agarwal et al. (2021a, 2019)) which indicates that predicting reward difference has an implicit variance reduction effect, as by definition, an advantage function includes a value function baseline. 1 1Note that the original form of NPG is on-policy (Kakade, 2001; Sutton et al., 1999), i.e., the expectations under \ud835\udf0b\ud835\udc61. Our formulation is more general: when set \ud835\udf07= \ud835\udf0b\ud835\udc61, a Gauss-Newton step will recover the original on-policy form of NPG from Kakade (2001); Sutton et al. (1999). More recent works have extended NPG beyond on-policy (e.g., Agarwal et al. (2021a, 2020)). 8 3.3 Extending REBEL to General Preferences In the above discussion, we assume we are given access to a ground-truth reward function. However, in the generative model fine-tuning applications of RL, we often need to learn from human preferences, rather than rewards. This shift introduces a complication: not all preferences can be rationalized by an underlying utility function. In particular, intransitive preferences which are well-known to result from aggregation of different sub-populations or users evaluating different pairs of items on the basis of different features (May, 1954; Tversky, 1969; Gardner, 1970) cannot be accurately captured by a single reward model. 
To see this, note that if we have \ud835\udc4e\u227b\ud835\udc4f, \ud835\udc4f\u227b\ud835\udc50, and \ud835\udc50\u227b\ud835\udc4e, it is impossible to have a reward model that simultaneously sets \u02c6 \ud835\udc5f(\ud835\udc4e) > \u02c6 \ud835\udc5f(\ud835\udc4f), \u02c6 \ud835\udc5f(\ud835\udc4f) > \u02c6 \ud835\udc5f(\ud835\udc50), and \u02c6 \ud835\udc5f(\ud835\udc50) > \u02c6 \ud835\udc5f(\ud835\udc4e). As we increase the space of possible choices to that of all possible prompt completions, the probability of such intransitivities sharply increases (Dud\u00edk et al., 2015), as reflected in the high levels of annotator disagreement in LLM fine-tuning datasets (Touvron et al., 2023). Thus, rather than assuming access to a reward model, in such settings, we assume access to a preference model (Munos et al., 2023; Swamy et al., 2024; Rosset et al., 2024; Ye et al., 2024). 3.3.1 A Game-Theoretic Perspective on Learning from Preferences More specifically, for any tuple (\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032), we assume we have access to P(\ud835\udc66\u227b\ud835\udc66\u2032|\ud835\udc65): the probability that \ud835\udc66is preferred to \ud835\udc66\u2032. We then define our preference model \ud835\udc59as \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032) \u225c2 \u00b7 P(\ud835\udc66\u227b\ud835\udc66\u2032|\ud835\udc65) \u22121. (14) Observe that \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032) \u2208[\u22121, 1] is skew-symmetric, i.e., \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66) = 0, \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032) + \ud835\udc59(\ud835\udc65, \ud835\udc66\u2032, \ud835\udc66) = 0 for all \ud835\udc65\u2208X, \ud835\udc66, \ud835\udc66\u2032 \u2208Y. If the learner can only receive a binary feedback \ud835\udc5c\u2208{0, 1} indicating the preference between \ud835\udc66and \ud835\udc66\u2032, we assume \ud835\udc5cis sampled from a Bernoulli distribution with mean P(\ud835\udc66\u227b\ud835\udc66\u2032|\ud835\udc65), where \ud835\udc5c= 1 means that \ud835\udc66is preferred over \ud835\udc66\u2032 and 0 otherwise. Given access to such a preference model, a solution concept to the preference aggregation problem with deep roots in the social choice theory literature (Kreweras, 1965; Fishburn, 1984; Kramer, 1973; Simpson, 1969) and the dueling bandit literature (Yue et al., 2012; Dud\u00edk et al., 2015) is that of a minimax winner (MW) \ud835\udf0bMW: the Nash Equilibrium strategy of the symmetric two-player zero-sum game with \ud835\udc59as a payoff function. In particular, due to the skew-symmetric property of \ud835\udc59, Swamy et al. (2024) proved that there exists a policy \ud835\udf0bMW such that max \ud835\udf0bE\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\ud835\udf0b(\u00b7|\ud835\udc65),\ud835\udc66\u2032\u223c\ud835\udf0bMW (\u00b7|\ud835\udc65) [\ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032)] = min \ud835\udf0bE\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\ud835\udf0bMW (\u00b7|\ud835\udc65),\ud835\udc66\u2032\u223c\ud835\udf0b(\u00b7|\ud835\udc65) [\ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032)] . This implies that (\ud835\udf0bMW, \ud835\udf0bMW) is a Nash Equilibrium (Wang et al., 2023b; Munos et al., 2023; Swamy et al., 2024; Ye et al., 2024). 
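As a concrete illustration of the payoff in Eq. 14, the sketch below (with a hypothetical `pref_prob` returning P(y ≻ y′|x) and hypothetical policies exposing a `generate` method) computes the skew-symmetric payoff and a Monte-Carlo estimate of the expected payoff of one policy against another:

```python
# Sketch of the payoff l(x, y, y') = 2 * P(y > y' | x) - 1 (Eq. 14) and a
# Monte-Carlo estimate of l(pi, pi'). `pref_prob`, `policy`, and `opponent`
# are hypothetical stand-ins for a learned preference model and two policies.

def payoff(pref_prob, x, y, y_prime):
    # Skew-symmetric by construction: payoff(x, y, y') + payoff(x, y', y) = 0
    # whenever pref_prob(x, y, y') + pref_prob(x, y', y) = 1.
    return 2.0 * pref_prob(x, y, y_prime) - 1.0

def estimate_payoff(pref_prob, prompts, policy, opponent, n_samples=16):
    """Estimate l(pi, pi') = E_{x, y ~ pi, y' ~ pi'} [l(x, y, y')]."""
    total, count = 0.0, 0
    for x in prompts:
        for _ in range(n_samples):
            total += payoff(pref_prob, x, policy.generate(x), opponent.generate(x))
            count += 1
    return total / count
```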
As is standard in game solving, our objective is to obtain an \ud835\udf16-approximate MW b \ud835\udf0bmeasured by the duality gap (DG): DG(b \ud835\udf0b) := max \ud835\udf0bE\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\ud835\udf0b(\u00b7|\ud835\udc65),\ud835\udc66\u2032\u223cb \ud835\udf0b(\u00b7|\ud835\udc65) [\ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032)] \u2212min \ud835\udf0bE\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223cb \ud835\udf0b(\u00b7|\ud835\udc65),\ud835\udc66\u2032\u223c\ud835\udf0b(\u00b7|\ud835\udc65) [\ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032)] \u2264\ud835\udf16. In the following discussion, we will use \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udf0b) to denote E\ud835\udc66\u2032\u223c\ud835\udf0b(\u00b7|\ud835\udc65) [\ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032)] and \ud835\udc59(\ud835\udf0b, \ud835\udf0b\u2032) to denote E\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\ud835\udf0b(\u00b7|\ud835\udc65),\ud835\udc66\u2032\u223c\ud835\udf0b\u2032(\u00b7|\ud835\udc65) [\ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032)] for notational convenience. 9 3.3.2 Self-Play Preference Optimization (SPO) with REBEL as Base Learner We can straightforwardly extend REBEL to the general preference setting via an instantiation of the Self-Play Preference Optimization (SPO) reduction of Swamy et al. (2024). In short, Swamy et al. (2024) prove that rather than performing adversarial training, we are able to perform a simple and stable self-play procedure while retaining strong theoretical guarantees. Practically, this corresponds to sampling at leas two completions from the current policy, querying a learned preference / supervisor model on each pair, and using the win rate for each completion as its reward. We will now describe how we can adapt REBEL to this mode of feedback. Assuming that we can query the preference oracle \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032) at will, we can modify the least square objective Eq. (9) to \ud835\udf03\ud835\udc61+1 := argmin \ud835\udf03 \u2211\ufe01 \ud835\udc65,\ud835\udc66,\ud835\udc66\u2032,\ud835\udc66\u2032\u2032\u2208D\ud835\udc61 \u00121 \ud835\udf02 \u0012 ln \ud835\udf0b\ud835\udf03(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) \u2212ln \ud835\udf0b\ud835\udf03(\ud835\udc66\u2032|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66\u2032|\ud835\udc65) \u0013 \u2212(\ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032\u2032) \u2212\ud835\udc59(\ud835\udc65, \ud835\udc66\u2032, \ud835\udc66\u2032\u2032)) \u00132 where \ud835\udc65\u223c\ud835\udf0c, \ud835\udc66\u223c\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65), \ud835\udc66\u2032\u2032 \u223c\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65), \ud835\udc66\u2032 \u223c\ud835\udf07(\u00b7|\ud835\udc65). When the exact value of \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032) is unavailable but only a binary preference feedback \ud835\udc5c\ud835\udc66,\ud835\udc66\u2032 \u2208{0, 1} sampling from Bernoulli with mean \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032) is available, we can just replace \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032\u2032) \u2212\ud835\udc59(\ud835\udc65, \ud835\udc66\u2032, \ud835\udc66\u2032\u2032) by \ud835\udc5c\ud835\udc66,\ud835\udc66\u2032 \u2212\ud835\udc5c\ud835\udc66\u2032,\ud835\udc66\u2032\u2032. 
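A hedged sketch of the self-play reward described above: for each prompt, sample several completions from the current policy, query binary preference feedback on pairs, and use each completion's average win rate as the iteration-dependent reward r_t(x, y) ≈ l(x, y, π_t); differences of these rewards are then used as the regression target. The name `pref_oracle` below is illustrative, not from the released implementation:

```python
def spo_rewards(pref_oracle, x, completions):
    """Win-rate rewards for completions y_1, ..., y_k ~ pi_t(.|x).

    Each reward approximates r_t(x, y_i) = E_{y'' ~ pi_t} l(x, y_i, y''),
    estimated from sampled binary preferences o in {0, 1}.
    """
    rewards = []
    for i, y in enumerate(completions):
        payoffs = []
        for j, y_opp in enumerate(completions):
            if i == j:
                continue
            o = pref_oracle(x, y, y_opp)     # 1 if y preferred over y_opp, else 0
            payoffs.append(2.0 * o - 1.0)    # map to the [-1, 1] payoff scale
        rewards.append(sum(payoffs) / len(payoffs))
    return rewards  # use r_t(x, y) - r_t(x, y') as the regression target in Eq. 9
```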
It is easy to see that the Bayes optimal of the above least square regression problem is equal to: E\ud835\udc66\u2032\u2032\u223c\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65)\ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032\u2032) \u2212E\ud835\udc66\u2032\u2032\u223c\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65)\ud835\udc59(\ud835\udc65, \ud835\udc66\u2032, \ud835\udc66\u2032\u2032) = \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udf0b\ud835\udc61) \u2212\ud835\udc59(\ud835\udc65, \ud835\udc66\u2032, \ud835\udf0b\ud835\udc61). Swamy et al. (2024) define an iteration-dependent reward \ud835\udc5f\ud835\udc61(\ud835\udc65, \ud835\udc66) := E\ud835\udc66\u2032\u2032\u223c\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65)\ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032\u2032) = \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udf0b\ud835\udc61). Thus, the above regression problem can be understood as an extension of REBEL to the setting where the reward function changes at each iteration \ud835\udc61. Swamy et al. (2024) shows that running the exact MD (Eq. 2) with this iteration-dependent reward function \ud835\udc5f\ud835\udc61leads to fast convergence to an approximate Minimax Winner, a property that we will use to provide the regret bound of REBEL in the general preference setting while accounting for nonzero mean squared error. 4 Theoretical Analysis In the previous section, we interpret REBEL as the exact MD and show its convergence by assuming that least square regression always returns a predictor that is accurate everywhere. While such an explanation is simple and has also been used in prior work, point-wise out-of-distribution generalization is an extremely strong condition and is significantly beyond what a standard supervised learning method can promise. In this section, we significantly relax this condition via a reduction-based analysis: As long as we can solve the regression problems well in an in-distribution manner, REBEL can compete against any policy covered by the training data distributions. Formally, we assume the following generalization condition holds on the regressors we find. Assumption 1 (Regression generalization bounds). Over \ud835\udc47iterations, assume that for all \ud835\udc61, we have: E\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65),\ud835\udc66\u2032\u223c\ud835\udf07(\u00b7|\ud835\udc65) \u00121 \ud835\udf02 \u0012 ln \ud835\udf0b\ud835\udf03\ud835\udc61+1(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) \u2212ln \ud835\udf0b\ud835\udf03\ud835\udc61+1(\ud835\udc66\u2032|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66\u2032|\ud835\udc65) \u0013 \u2212(\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2212\ud835\udc5f(\ud835\udc65, \ud835\udc66\u2032)) \u00132 \u2264\ud835\udf16, for some \ud835\udf16. 10 Intuitively, this assumption is saying that there is a function in our class of regressors that is able to accurately fit the difference of rewards. Recall that our class of regressors is isomorphic to our policy class. Therefore, as long as our class of policies is expressive, we would expect this assumption to hold with small \ud835\udf16. For all domains we consider, our policy class is a flexible set of generative models (e.g. Transformer-based LLMs or diffusion models). 
Thus, we believe it is reasonable to believe this assumption holds in practice \u2013 see Figure 6 in Appendix G for empirical evidence of this point and Example 1 for more discussion. More formally, the above assumption bounds the standard in-distribution generalization error (v.s. the point-wise guarantee in Eq. 12) of a well-defined supervised learning problem: least squares regression. The generalization error \ud835\udf16captures the possible errors from the learning process for \ud835\udf03\ud835\udc61+1 and it could depend on the complexity of the policy class and the number of samples used in the dataset D\ud835\udc61. For instance, when the the function ln \ud835\udf0b\u2212ln \ud835\udf0b\u2032 induced by the log-difference of two policies (\ud835\udf0b, \ud835\udf0b\u2032) are rich enough (e.g., policies are deep neural networks) to capture the reward difference, then \ud835\udf16in this assumption converges to zero as we increase the number of training data. Note that while \ud835\udf16can be small, it does not imply that the learned predictor will have a small prediction error in a point-wise manner \u2013 it almost certainly will not. Example 1. One simple example is when \ud835\udf0b(\ud835\udc66|\ud835\udc65) \u221dexp(\ud835\udf03\u22a4\ud835\udf19(\ud835\udc65, \ud835\udc66)) for some features \ud835\udf19(\ud835\udc65, \ud835\udc66). In this case, ln(\ud835\udf0b(\ud835\udc66|\ud835\udc65)/\ud835\udf0b\ud835\udc61(\ud835\udc66|\ud835\udc65)) \u2212ln(\ud835\udf0b(\ud835\udc66\u2032|\ud835\udc65)/\ud835\udf0b\ud835\udc61(\ud835\udc66\u2032|\ud835\udc65)) = (\ud835\udf03\u2212\ud835\udf03\ud835\udc61)\u22a4(\ud835\udf19(\ud835\udc65, \ud835\udc66) \u2212\ud835\udf19(\ud835\udc65, \ud835\udc66\u2032)), which means that our regression problem in Eq. 9 is a classic linear regression problem. When the reward \ud835\udc5f(\ud835\udc65, \ud835\udc66) is also linear in feature \ud835\udf19(\ud835\udc65, \ud835\udc66), then Eq. 9 is a well-specified linear regression problem, and \ud835\udf16typically scales in the rate of \ud835\udc42(\ud835\udc51/|D\ud835\udc61|) with \ud835\udc51being the dimension of feature \ud835\udf19. We can extend the above example to the case where \ud835\udf19is the feature corresponding to some kernel, e.g., RBF kernel or even Neural Tangent Kernel, which allows us to capture the case where \ud835\udf0bis a softmax wide neural network with the least square regression problem solved by gradient flow. The error \ud835\udf16again scales poly(\ud835\udc51/|D\ud835\udc61|), where \ud835\udc51is the effective dimension of the corresponding kernel. We now define the concentrability coefficient (Kakade and Langford, 2002) that quantifies how the training data distribution is covering a comparator policy. Data Coverage. Recall that the base distribution \ud835\udf07can be some behavior policy, which in RLHF can be a human labeler, a supervised fine-tuned policy (SFT), or just the current learned policy (i.e., on-policy). Given a test policy \ud835\udf0b, we denote by \ud835\udc36\ud835\udf07\u2192\ud835\udf0bthe concentrability coefficient, i.e. \ud835\udc36\ud835\udf07\u2192\ud835\udf0b= max \ud835\udc65,\ud835\udc66 \ud835\udf0b(\ud835\udc66|\ud835\udc65) \ud835\udf07(\ud835\udc66|\ud835\udc65) . (15) We say \ud835\udf07covers \ud835\udf0bif \ud835\udc36\ud835\udf07\u2192\ud835\udf0b< +\u221e. Our goal is to bound the regret between our learned policies and an arbitrary comparator \ud835\udf0b\u2217(e.g. 
the optimal policy if it is covered by \ud835\udf07) using \ud835\udf16and the concentrability coefficient defined in Eq. 15. The following theorem formally states the regret bound of our algorithm. Theorem 1. Under Assumption 1, after \ud835\udc47many iterations, with a proper learning rate \ud835\udf02, among the learned policies \ud835\udf0b1, . . . , \ud835\udf0b\ud835\udc47, there must exist a policy \u02c6 \ud835\udf0b, such that: \u2200\ud835\udf0b\u2217: E\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\ud835\udf0b\u2217(\u00b7|\ud835\udc65)\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2212E\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\u02c6 \ud835\udf0b(\u00b7|\ud835\udc65)\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2264\ud835\udc42 \u221a\ufe02 1 \ud835\udc47+ \u221a\ufe01 \ud835\udc36\ud835\udf07\u2192\ud835\udf0b\u2217\ud835\udf16. ! . 11 Here the \ud835\udc42-notation hides problem-dependent constants that are independent of \ud835\udf16, \ud835\udc36\ud835\udf07\u2192\ud835\udf0b\u2217,\ud835\udc47. The above theorem shows a reduction from RL to supervised learning \u2014 as long as supervised learning works (i.e., \ud835\udf16is small), then REBEL can compete against any policy \ud835\udf0b\u2217that is covered by the base data distribution \ud835\udf07. In the regret bound, the 1/ \u221a \ud835\udc47comes from Mirror Descent style update, and \ud835\udc36\ud835\udf07\u2192\ud835\udf0b\u2217\ud835\udf16captures the cost of distribution shift: we train our regressors under distribution \ud835\udf0b\ud835\udc61and \ud835\udf07, but we want the learned regressor to predict well under \ud835\udf0b\u2217. Similar to the NPG analysis from Agarwal et al. (2021a), we now have a slower convergence rate 1/ \u221a \ud835\udc47, which is due to the fact that we have approximation error from learning. Such an agnostic regret bound \u2014 being able compete against any policy that is covered by training distributions \u2013 is the strongest type of agnostic learning results known in the RL literature, matching the best of what has appeared in prior policy optimization work including PSDP (Bagnell et al., 2003), CPI (Kakade and Langford, 2002), NPG (Agarwal et al., 2021a), and PC-PG (Agarwal et al., 2020). While in this work, we use the simplest and most intuitive definition of coverage \u2013 the density ratio-based definition in Eq. 15 \u2013 extension to more general ones such as transfer error (Agarwal et al., 2020, 2021a) or concentrability coefficients that incorporate function class (e.g., Song et al. (2023b)) is straightforward. We defer the proof of the above theorem and the detailed constants that we omitted in the \ud835\udc42notation to Appendix B. 4.1 Extension to General Preferences Extending the above analysis to the general preference case is straightforward except that it requires a stronger coverage condition. This is because we want to find a Nash Equilibrium, which requires a comparison between the learned policy against all the other policies. Results from the Markov Game literature (Cui and Du, 2022b; Zhong et al., 2022; Cui and Du, 2022a; Xiong et al., 2023) and Cui and Du (2022b) have shown that the standard single policy coverage condition used in single-player optimization is provably not sufficient. 
In particular, they propose using a notion of unilateral concentrability for efficient learning, which can be defined as \ud835\udc36uni,\ud835\udf07:= max \ud835\udf0b,\ud835\udc65,\ud835\udc66,\ud835\udc66\u2032\u2032 \ud835\udf0bMW(\ud835\udc66|\ud835\udc65)\ud835\udf0b(\ud835\udc66\u2032\u2032|\ud835\udc65) \ud835\udf07(\ud835\udc66|\ud835\udc65)\ud835\udf07(\ud835\udc66\u2032\u2032|\ud835\udc65) , in the general preference setting. Notably, the above unilateral concentrability coefficient \ud835\udc36uni,\ud835\udf07is equivalent to \ud835\udc36\ud835\udf07:= max\ud835\udf0b,\ud835\udc65,\ud835\udc66 \ud835\udf0b(\ud835\udc66|\ud835\udc65) \ud835\udf07(\ud835\udc66|\ud835\udc65) since \ud835\udc36\ud835\udf07\u2264\ud835\udc36uni,\ud835\udf07\u2264\ud835\udc362 \ud835\udf07. Therefore in the following discussion, we will use \ud835\udc36\ud835\udf07as the coverage condition. In addition, we also assume the generalization error of the regression problem is small, Assumption 2 (Regression generalization bounds for general preference). Over \ud835\udc47iterations, assume that for all \ud835\udc61, we have: E\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65),\ud835\udc66\u2032\u223c\ud835\udf07(\u00b7|\ud835\udc65) \u00121 \ud835\udf02 \u0012 ln \ud835\udf0b\ud835\udf03\ud835\udc61+1(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) \u2212ln \ud835\udf0b\ud835\udf03\ud835\udc61+1(\ud835\udc66\u2032|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66\u2032|\ud835\udc65) \u0013 \u2212(\ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udf0b\ud835\udc61) \u2212\ud835\udc59(\ud835\udc65, \ud835\udc66\u2032, \ud835\udf0b\ud835\udc61)) \u00132 \u2264\ud835\udf16, for some \ud835\udf16. Under the above coverage condition and generalization bound, we can show that REBEL is able to learn an approximate Minimax Winner: 12 Theorem 2. With assumption 2, after \ud835\udc47many iterations, with a proper learning rate \ud835\udf02, the policy b \ud835\udf0b= Unif({\ud835\udf0b\ud835\udc61}\ud835\udc47 \ud835\udc61=1) satisfies that: DG(b \ud835\udf0b) \u2264\ud835\udc42 \u221a\ufe02 1 \ud835\udc47+ \u221a\ufe01 \ud835\udc36\ud835\udf07\ud835\udf16. ! . Here the \ud835\udc42-notation hides problem-dependent constants that are independent of \ud835\udf16, \ud835\udc36\ud835\udf07,\ud835\udc47. We defer the proof to Appendix C. Note that the coverage condition here is much stronger than the single policy coverage condition in the RL setting. We conjecture that this is the cost one has to pay by moving to the more general preference setting and leaving the investigation of the necessarily coverage condition for future work. 5 Experiments The implementation of REBEL follows Algorithm 1. In each iteration, REBEL collects a dataset D\ud835\udc61= {\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032}, where \ud835\udc65\u223c\ud835\udf0c, \ud835\udc66\u223c\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65), \ud835\udc66\u2032 \u223c\ud835\udf07(\u00b7|\ud835\udc65). Subsequently, REBEL optimizes the least squares regression problem in Eq. 9 through gradient descent with AdamW (Loshchilov and Hutter, 2017). We choose \ud835\udf07= \ud835\udf0b\ud835\udc61such that both \ud835\udc66and \ud835\udc66\u2032 are generated by the current policy. We empirically assess REBEL\u2019s performance on both natural language generation and text-guided image generation. 
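A minimal PyTorch sketch of the regression step in Eq. 9, assuming the sequence-level log-probabilities of each completion under the current policy π_θ and under the frozen snapshot π_{θ_t} have already been computed; the tensor names and the AdamW learning rate shown are illustrative and not taken from the released code:

```python
import torch

def rebel_loss(logp_y, logp_yp, logp_y_old, logp_yp_old, reward_y, reward_yp, eta):
    """Least-squares REBEL objective (Eq. 9) over a batch of (x, y, y') triples.

    logp_y, logp_yp         : sequence log-probs of y and y' under pi_theta
    logp_y_old, logp_yp_old : the same quantities under the frozen pi_{theta_t}
    reward_y, reward_yp     : scalar rewards r(x, y) and r(x, y')
    eta                     : regularization parameter from Eq. 1
    """
    pred = ((logp_y - logp_y_old) - (logp_yp - logp_yp_old)) / eta
    target = reward_y - reward_yp               # relative reward (Eq. 7)
    return torch.mean((pred - target) ** 2)

# One (hypothetical) inner update with AdamW on the policy parameters:
# optimizer = torch.optim.AdamW(policy.parameters(), lr=3e-6)
# loss = rebel_loss(logp_y, logp_yp, logp_y_old, logp_yp_old, r_y, r_yp, eta=1.0)
# loss.backward(); optimizer.step(); optimizer.zero_grad()
```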
5.1 Natural Language Generation Baselines: We compare REBEL with baseline RL algorithms: PPO (Schulman et al., 2017), Direct Preference Optimization (DPO) (Rafailov et al., 2023), and REINFORCE (Williams, 1992) together with its multi-sample extension, REINFORCE Leave-One-Out (RLOO) (Kool et al., 2019). The REINFORCE method is implemented with a moving average baseline of the reward. We include two variants of RLOO, with two ($k = 2$) and four ($k = 4$) generations per prompt. Dataset: We use the TL;DR summarization dataset (Stiennon et al., 2020) to train the model to generate summaries of Reddit posts based on human preference data. The dataset comprises human reference summaries and preference data. Following prior work (Stiennon et al., 2020; Rafailov et al., 2023; Ahmadian et al., 2024), we train the DPO baseline on the preference dataset, while conducting online RL (PPO, RLOO, REBEL) on the human reference dataset. We set the maximum context length to 512 and the maximum generation length to 53 to ensure that all references in the dataset can be generated. Additional dataset details are in Appendix D.1. Models: We include results with three different model sizes: 1.4B, 2.8B, and 6.9B. Each method is trained with a supervised fine-tuned (SFT) model and/or a reward model (RM) of the same size. For SFT models, we train a Pythia 1.4B (Biderman et al., 2023) model for 1 epoch over the dataset with human references as labels, and use the existing fine-tuned 2.8B and 6.9B models. For reward models, we train a Pythia 1.4B parameter model for 1 epoch over the preference dataset and use the existing reward models with 2.8B and 6.9B parameters. For both REBEL and the baseline methods at 1.4B and 2.8B parameters, we train the policy and/or the critic using low-rank adapters (LoRA) (Hu et al., 2022) on top of our SFT and/or reward model, respectively. For the 6.9B models, we perform full-parameter training. More details about the hyperparameters are described in Appendix D.2. (Footnotes: 2 Dataset available at https://github.com/openai/summarize-from-feedback; 3 HuggingFace Model Card: EleutherAI/pythia-1.4b-deduped; 4 HuggingFace Model Card: vwxyzjn/EleutherAI_pythia-2.8b-deduped__sft__tldr; 5 HuggingFace Model Card: vwxyzjn/EleutherAI_pythia-6.9b-deduped__sft__tldr.)
Table 1: Results on TL;DR summarization for SFT, PPO, DPO, and REBEL using three metrics. The RM score is computed using the reward model of the respective size and the winrate is evaluated by GPT-4. The models are trained with low-rank adapters. The best-performing method for each size and metric is highlighted in bold and the second best is underlined. We note that REBEL outperforms all baselines here in terms of the winrate.
Model size | Algorithm | Winrate (higher is better) | RM Score (higher is better) | KL to reference policy (lower is better)
1.4B | SFT | 24.5% | -0.52 | -
1.4B | DPO | 43.8% | 0.11 | 30.9
1.4B | PPO | 51.6% | 1.73 | 29.1
1.4B | REBEL | 55.3% | 1.87 | 32.4
2.8B | SFT | 28.4% | -0.40 | -
2.8B | DPO | 53.5% | 2.41 | 66.5
2.8B | PPO | 67.2% | 2.37 | 27.4
2.8B | REBEL | 70.3% | 2.44 | 29.2
Table 2: Results on TL;DR summarization with 6.9B models. We perform full-parameter training for all models. The best-performing method is highlighted in bold and the second best is underlined.
6.9B winrate | SFT | DPO | REINFORCE | PPO | RLOO (k = 2) | RLOO (k = 4) | REBEL
Winrate | 44.6% | 68.2% | 70.7%* | 77.6%‡ | 74.2%* | 77.9%* | 78.0%
(* directly obtained from Ahmadian et al. (2024); ‡ directly obtained from Huang et al. (2024))
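For concreteness, here is a minimal sketch of how the REINFORCE moving-average baseline and the RLOO leave-one-out baseline described above are typically computed. This is a generic illustration of the standard definitions of these estimators, not code from the paper, and all names (and the momentum value) are illustrative assumptions.

import torch

def rloo_advantages(rewards):
    # rewards: tensor of shape [num_prompts, k], one reward per sampled completion.
    # Each completion is baselined by the mean reward of the other k-1 completions
    # for the same prompt (leave-one-out).
    k = rewards.shape[1]
    baseline = (rewards.sum(dim=1, keepdim=True) - rewards) / (k - 1)
    return rewards - baseline

def reinforce_advantages(rewards, running_mean, momentum=0.99):
    # REINFORCE with a moving-average baseline of the reward.
    running_mean = momentum * running_mean + (1.0 - momentum) * rewards.mean()
    return rewards - running_mean, running_mean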
Evaluation: We evaluate each method by the balance it strikes between reward model score and KL-divergence with the reference policy, testing the effectiveness of the algorithm in optimizing the regularized RL objective. To evaluate the quality of the generations, we compute the winrate (Rafailov et al., 2023) against human references using GPT-4 (OpenAI, 2023; see footnote 8 below). The winrate is computed on a randomly sampled subset (10%) of the test set with a total of 600 samples. The prompt used to query GPT-4, as well as an example response, is shown in Appendix D.3. Figure 2: Plot of reward vs. KL-divergence for 2.8B REBEL and PPO (axes: RM score and KL divergence to the reference policy; series: REBEL, PPO). We evaluate the models across the entire test set every 100 steps for 2,000 steps. Left: each point represents the average reward score and KL-divergence for a specific time step; the ellipse represents the confidence interval with 2 standard deviations. Right: we divide the KL distribution at the 2,000-step checkpoint into 10 equally sized bins and average the corresponding RM scores in each bin. 5.1.1 Quality Analysis Table 1 presents a comparison between REBEL and SFT, PPO, and DPO for 1.4B and 2.8B models trained with LoRA. We calculate the KL-divergence $\mathrm{KL}(\pi\|\pi_{\text{ref}})$ using the SFT policy of the corresponding size as the reference for all models. Notably, REBEL outperforms all the baselines on RM score across all model sizes, with a slightly larger KL than PPO. In addition, REBEL achieves the highest winrate under GPT-4 when evaluated against human references, indicating the benefit of regressing the relative rewards. Example generations of 2.8B REBEL are included in Appendix E. We also perform full-parameter training for 6.9B models; the winrates are shown in Table 2. We observe that REBEL still outperforms all of the baselines, while REBEL, PPO, and RLOO ($k = 4$) have comparable performance (we show in the next section that REBEL is more tractable in computation and memory than PPO and RLOO with $k = 4$). An ablation analysis on the parameter $\eta$ is in Appendix F. The trade-off between the reward model score and KL-divergence is shown in Figure 2. We evaluate the 2.8B REBEL and PPO every 400 gradient updates during training for 8,000 updates. The sample complexity of each update is held constant across both algorithms for a fair comparison. For the left plot, each point represents the average divergence and score over the entire test set, and the ellipse represents the confidence interval with 2 standard deviations. As observed previously, PPO exhibits lower divergence, whereas REBEL shows higher divergence but is capable of achieving larger RM scores. Notably, towards the end of training (the right part of the plot), REBEL and PPO have similar KL and RM scores. For the right plot in Figure 2, we analyze a single checkpoint for each algorithm at the end of training.
For each algorithm, we group every generation from the test set by its KL value into 10 equally sized bins and calculate the average of the corresponding RM scores for each bin. We can see that REBEL achieves higher RM scores for generations with small divergence, while requiring larger divergence for the generations with the highest scores. (Footnotes: 6 HuggingFace Model Card: vwxyzjn/EleutherAI_pythia-2.8b-deduped__reward__tldr; 7 HuggingFace Model Card: vwxyzjn/EleutherAI_pythia-6.9b-deduped__reward__tldr; 8 the specific GPT-4 API checkpoint used throughout this section is gpt-4-0613.) 5.1.2 Runtime & Memory Analysis We analyze the runtime and peak memory usage of 2.8B models trained with PPO, DPO, RLOO, and REBEL. Figure 3: Plot of runtime and memory usage for DPO, REINFORCE, RLOO, PPO, and REBEL (two panels by method: time per batch in seconds, split into generation and policy update, and peak memory usage in GB). The runtime includes both the time for generation and the policy update for each batch. Runtime and memory usage are measured on A6000 GPUs. Baselines on the left-hand side of the dashed line have lower winrates. Methods on the right-hand side of the dashed line have winrates similar to REBEL, but REBEL is noticeably more computationally tractable and memory efficient than PPO and RLOO ($k = 4$). The runtime includes both the generation time and the time required for policy updates. Both runtime and peak memory usage are measured on A6000 GPUs using the same hyperparameters detailed in Appendix D.2. The methods in the plots are arranged in ascending order of winrate. To the right of the dashed line, PPO, RLOO ($k = 4$), and REBEL have the highest winrates, which are comparable among them. While DPO and REINFORCE require less time and memory, their performance does not match that of REBEL, as discussed in Section 5.1.1. RLOO ($k = 2$) has runtime and memory usage similar to REBEL since we set $\mu = \pi_t$, which makes REBEL also generate twice per prompt; however, RLOO ($k = 2$) performs worse than REBEL. Compared to PPO and RLOO ($k = 4$), REBEL demonstrates shorter runtimes and lower peak memory usage. PPO is slow and requires more memory because it needs to update two networks: the policy network and the value network. RLOO ($k = 4$) requires generating four responses per prompt, which makes it slow and less memory efficient. Compared to the two baselines that achieve winrates similar to REBEL, namely PPO and RLOO ($k = 4$), REBEL is more computationally tractable. REBEL is also noticeably simpler to implement than PPO since it does not learn a value network or compute advantage estimates. Figure 4: Learning curves as a function of reward queries to the LAION aesthetic predictor (axes: reward queries vs. LAION aesthetic score; series: REBEL, PPO). We report inter-quartile means (IQM) with 95% confidence intervals (CIs) across three seeds for both REBEL and PPO. The CIs were calculated with a percentile bootstrap with stratified sampling over three random seeds. 5.2 Image Generation We also consider the setting of image generation where, given a consistency model (Song et al., 2023a) and a target reward function, we seek to train the consistency model to output images that garner a higher reward. Specifically, we compare REBEL and PPO under the RLCM framework (Oertell et al., 2024).
Baselines: We compare REBEL to a clipped policy gradient objective (Black et al., 2023; Fan et al., 2024; Oertell et al., 2024) with the aim of optimizing aesthetic quality to obtain high reward from the LAION aesthetic score predictor (Schuhmann, 2022). This baseline does not use critics or GAE for advantage estimates. However, the clipping objective is clearly motivated by PPO, and thus we simply name this baseline PPO in this section. Dataset: We use 45 common animals as generation prompts, similar to Black et al. (2023) and Oertell et al. (2024). Models: We use the latent consistency model (Luo et al., 2023) distillation of the Dreamshaper v7 model, a finetune of Stable Diffusion (Rombach et al., 2021). Evaluation: We evaluate PPO and REBEL on their reward under the LAION aesthetic reward model for an equal number of reward queries/samples generated and an equal number of gradient updates. The aesthetic predictor is trained to predict human-labeled scores of images on a scale of 1 to 10. The images that tend to receive the highest reward are artwork. Following the recommendations of Agarwal et al. (2021b), we report the inter-quartile mean with 95% confidence intervals for our reported results across three random seeds. (Footnotes: 9 Dataset available at https://github.com/Owen-Oertell/rlcm; 10 Huggingface model card: SimianLuo/LCM_Dreamshaper_v7.) Figure 5: Generated images using PPO and REBEL at an intermediate checkpoint (per-image aesthetic scores shown in the figure: 7.29, 7.38, 7.37, 7.27, 7.14, 6.85, 6.17, 6.00, 6.29, 7.06). We note that at the same number of epochs, REBEL obtains a higher reward under the reward model. This can further be seen from the more diverse backgrounds of images generated by REBEL with less training time. 5.3 Quality Analysis Figure 4 shows that REBEL optimizes the consistency model faster at the beginning of training but eventually achieves performance similar to that of PPO. For our experiments, we tuned both batch size and learning rate for our algorithms, testing batch sizes of [4, 8, 16] per GPU and learning rates [1e-4, 3e-4, 6e-4, 1e-3]. Note that the main difference in implementation between PPO and REBEL is the replacement of the clipped PPO objective with our regression objective. Qualitatively, we observe that both PPO and REBEL eventually start to generate good-looking images but ignore the text prompt entirely. However, from the perspective of purely optimizing the reward function, this behavior is not surprising, since the objective does not encourage maintaining consistency between the text prompt and the generated image. To maximize LAION-predicted aesthetic quality, both REBEL and PPO transform a model that produces plain images into one that produces artistic drawings. We found across multiple seeds that REBEL produced lush backgrounds when compared to PPO's generations. Please see Appendix E.2 for more examples of generated images. In summary, we propose REBEL, an RL algorithm that reduces the problem of RL to solving a sequence of relative reward regression problems on iteratively collected datasets. In contrast to policy gradient approaches that require additional networks and heuristics like clipping to ensure optimization stability, REBEL only requires that we can drive down training error on a least squares problem. This makes it strikingly simple to implement and scale.
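As a note on the evaluation protocol above (Agarwal et al., 2021b), here is a minimal numpy sketch of the inter-quartile mean with a percentile-bootstrap confidence interval. The stratified-sampling detail is simplified here to resampling seeds with replacement, and all names are illustrative, not the paper's evaluation code.

import numpy as np

def iqm(scores):
    # Inter-quartile mean: mean of the middle 50% of a 1-D numpy array of scores.
    q1, q3 = np.percentile(scores, [25, 75])
    mid = scores[(scores >= q1) & (scores <= q3)]
    return mid.mean()

def iqm_with_ci(per_seed_scores, n_boot=2000, alpha=0.05, seed=0):
    # per_seed_scores: list of 1-D arrays, one array of evaluation scores per
    # random seed (a simplified stand-in for stratified bootstrap sampling).
    rng = np.random.default_rng(seed)
    point = iqm(np.concatenate(per_seed_scores))
    boots = []
    for _ in range(n_boot):
        picked = rng.integers(len(per_seed_scores), size=len(per_seed_scores))
        boots.append(iqm(np.concatenate([per_seed_scores[i] for i in picked])))
    lo, hi = np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return point, (lo, hi)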
In theory, REBEL matches the best guarantees we have for RL algorithms in the agnostic setting, while in practice, REBEL is able to match and sometimes outperform methods that are far more complex to implement or expensive to run across both language modeling and guided image generation tasks. There are several open questions raised by our work. The first is whether using a loss function other than square loss (e.g. log loss or cross-entropy) could lead to better performance in practice (Farebrother et al., 2024) or tighter bounds (e.g. first-order / gap-dependent) in theory (Foster and Krishnamurthy, 2021; Wang et al., 2023a, 2024). The second is whether, in the general (i.e. non-utility-based) preference setting, the coverage condition assumed in our analysis is necessary \u2013 we conjecture it is. Relatedly, it would be interesting to explore whether using preference (rather than reward) models to provide supervision for REBEL replicates the performance improvements reported by Swamy et al. (2024); Munos et al. (2023). Third, while we focus primarily on the bandit setting in the preceding sections, it would be interesting to consider the more general RL setting and explore how offline datasets can be used to improve the efficiency of policy optimization via techniques like resets (Bagnell et al., 2003; Ross and Bagnell, 2014; Swamy et al., 2023; Chang et al., 2023, 2024). 20", + "additional_info": [ + [ + { + "url": "http://arxiv.org/abs/2307.06328v1", + "title": "Budgeting Counterfactual for Offline RL", + "abstract": "The main challenge of offline reinforcement learning, where data is limited,\narises from a sequence of counterfactual reasoning dilemmas within the realm of\npotential actions: What if we were to choose a different course of action?\nThese circumstances frequently give rise to extrapolation errors, which tend to\naccumulate exponentially with the problem horizon. Hence, it becomes crucial to\nacknowledge that not all decision steps are equally important to the final\noutcome, and to budget the number of counterfactual decisions a policy make in\norder to control the extrapolation. Contrary to existing approaches that use\nregularization on either the policy or value function, we propose an approach\nto explicitly bound the amount of out-of-distribution actions during training.\nSpecifically, our method utilizes dynamic programming to decide where to\nextrapolate and where not to, with an upper bound on the decisions different\nfrom behavior policy. It balances between the potential for improvement from\ntaking out-of-distribution actions and the risk of making errors due to\nextrapolation. Theoretically, we justify our method by the constrained\noptimality of the fixed point solution to our $Q$ updating rules. Empirically,\nwe show that the overall performance of our method is better than the\nstate-of-the-art offline RL methods on tasks in the widely-used D4RL\nbenchmarks.", + "authors": "Yao Liu, Pratik Chaudhari, Rasool Fakoor", + "published": "2023-07-12", + "updated": "2023-07-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "label": "Original Paper", + "paper_cat": "Offline AND Reinforcement AND Learning", + "gt": "Most recently proposed offline reinforcement methods rely on some mechanism to force the learned counterfactual decision policy to stay close to the data support. 
One approach to this end is regularizing the policy by its divergence with data, either by parameterization [13, 15], constraints and projection [28, 33, 42], divergence regularization [10, 11, 23], or implicit regularization by weighted regression [10, 37\u201339, 47]. The other similar idea, which is often applied together, is regularizing the value estimate of policy in an actor-critic or Q learning architecture [5, 7, 8, 21, 24, 27, 31, 32, 48]. Similar regularization can also be realized by the ensemble of Q functions [1, 2, 14]. Some recent batch RL methods focus on more adaptive policy, similarly to this paper, but conditioning on history [16] or confidence level [19]. A similar idea to us of making a few key decisions in one trajectory was studied [18, 43], but they focus on stitching the logged trajectories in data. In contrast to batch RL, imitation learning often does not involve dynamic programming on extrapolated actions. It has been used as a regularization in online RL [30, 36] and offline RL [11, 37]. Recently imitation learning methods using transformers and conditional training also achieves good performance on offline RL benchmarks [4, 51]. One-step RL [3] can also be viewed as an extension of imitation learning with one-step policy extraction [46]. Although utilizing the behavior policy as well, Our key idea is different from these as we train a policy with awareness of behavior policy, rather than regularized by behavior cloning or imitation learning loss.", + "pre_questions": [], + "main_content": "Introduction One of the primary hurdles in reinforcement learning (RL), or online RL, is its reliance on interacting with an environment in order to learn [44]. This can be a significant barrier to applying RL to realworld problems, where it may not be feasible or safe to interact with the environment directly [45]. In contrast, batch or offline RL [6, 26, 29] provides a more suitable framework for effectively learning policies by leveraging previously collected data to learn a policy. Notably, offline RL relies on a fixed but limited dataset comprising previously collected data from an unknown policy(s) and lacks the ability to continue interacting with the environment to gather additional samples. These limitations in offline RL give rise to various challenges, with one notable issue being the occurrence of extrapolation problems resulting from the scarcity of training data [8, 13]. In order to overcome this challenge, previous approaches to offline RL primarily focus on constraining the gap between the behavioral policy and the learned policy [4, 11, 13, 23, 39, 48], or limiting the disparity between the state-action values of logged actions in the data and extrapolated actions [8, 21, 24, 47, 49]. Alternatively, other approaches utilize a learned model that avoids making predictions outside the distribution of the dataset [20, 34, 50]. All these offline RL methods involve a sequence of counterfactual reasoning problems within the realm of potential actions, explicitly or implicitly. The question of \u201dwhat if\u201d we were to choose a different course of action than the behavior policy often arises, leading to extrapolation errors. Hence the difficulty of the problem increases as we plan for a longer horizon and involves more counterfactual decisions. For offline RL algorithms, it is difficult to find a balance between the potential for improvement from taking out-of-distribution actions and the risk of making errors due to extrapolation. Preprint. Under review. 
arXiv:2307.06328v1 [cs.LG] 12 Jul 2023 A critical aspect of offline RL which is relatively unexplored pertains to the varying importance of different actions at each step in determining the final outcome. Not all actions are created equal, and some have more influence on the outcome than others. Thus, it is not always essential to consider alternative actions or counterfactuals at every step than the one dictated by the behavior policy. Rather, the emphasis should be placed on the actions that wield the most impact, warranting careful decision-making at those steps. This limits the number of counterfactual decisions we make and assigns them to the most needed states/steps, to increase the return on the \u201cextrapolation investment\u201d. Unlike existing approaches that employ explicit or implicit regularization on either the policy, value functions, or both, we introduce a novel and effective algorithm, called Budgeting Counterfactual for Offline RL (BCOL), that explicitly constrains the level of extrapolation during training. Specifically, our approach leverages dynamic programming to determine when and where extrapolation should occur, with a budget on the deviation from the behavior policy. This enables us to strike a balance between the potential for improvement achieved by exploring out-of-distribution actions and the inherent risks associated with extrapolation errors. Conceptually, such a method is different from regularized offline RL by allowing non-uniform constraints while keeping strict bounds on counterfactual decisions. Our contribution. We propose a novel algorithm, BCOL, making only a few but important counterfactual decisions in offline RL. This idea extends the space of current offline RL methods that extensively focus on regularized methods. We demonstrate that the fixed point resulting from our Q-value updating rule corresponds to the optimal Q-value function, under the constraints that the policy deviates from the behavior policy within a budget. We conduct thorough evaluations on a diverse array of tasks from the widely-utilized D4RL benchmarks [9]. In terms of overall performance, our approach exhibited a favorable comparison to existing state-of-the-art methods. 2 Problem Settings We study the RL problem in the context of the infinite horizon, discounted Markov Decision Process (MDP) [40]. An MDP is defined as a tuple M =< S, A, r, P0, P, \u03b3 > where S is a state space, A is an action space, and both of these spaces can be infinite or continuous in nature. r : S \u00d7 A \u2192R is the reward function. P0 is a distribution over S that the initial states are drawn from. P : S \u00d7 A \u2192\u2206(S) maps a state-action pair to a distribution over state space. \u03b3 is the discount factor over future reward. The goal in MDP is to maximize the expectation of discounted future reward v\u03c0 = E [P\u221e t=0 \u03b3trt | at \u223c\u03c0]. An important function in MDP is the state-action value function, known as the Q function, which represents the expected future reward, given an initial state or state-action pair. Q\u03c0(s, a) := E [P\u221e t=0 \u03b3trt | s0 = s, a0 = a]. The primary emphasis of this paper is on model-free learning algorithms within the context of offline RL. The goal is to learn a target policy \u03c0 that maximizes the expected future reward using a fixed dataset D. 
The dataset $\mathcal{D}$ consists of transitions $(s, a, r, s', a')$ where $s$ can be drawn from any fixed distribution, $a \sim \mu(\cdot|s)$, $r = r(s, a)$, $s' \sim P(\cdot|s, a)$, and $a' \sim \mu(\cdot|s')$. Here $\mu$ denotes an unknown behavior policy that was used in the past to collect the dataset $\mathcal{D}$. In offline RL, the task of learning an optimal policy poses a greater challenge than in online RL. This is primarily because the target policy can differ from the behavior policy, introducing out-of-distribution actions. The challenge is further amplified by the use of function approximation and by the problem horizon, as extrapolation errors from the fitted Q function tend to accumulate over time-steps [8, 13]. In particular, this results in the $Q(s, a)$ values increasing dramatically for out-of-distribution state-action pairs. Consequently, the learned policy is likely to perform poorly and behave riskily at deployment time. Many previous works on offline RL aim to address these issues through some form of regularization, which can hurt the flexibility of making certain key decisions at each step (we will return to this later). However, we take a different approach in this paper: we emphasize caution in making counterfactual decisions and hence propose a method that only deviates from the behavior policy for a few steps. 3 Method As mentioned earlier, prior studies in offline RL mainly address the issue of extrapolation through two types of regularization terms: policy regularization or value regularization. Policy regularization
Thus intuitively it maximizes the potential improvement from counterfactual decisions while keeping the extrapolation error under control2. We refer to this number by the budget of counterfactual, denoted as B. In order to spend the budget of counterfactual strategically, a dynamic programming algorithm is utilized to plan and asses the potential policy improvement resulting from actions taken outside of the data distribution. Thus, the choice of policy on each step needs to balance between the Q value gain from the current step and the potential benefit from future counterfactual decisions. Naturally, the policy also depends on how many counterfactual decisions it can take before it exceeds the upper bound B. We define the budget of counterfactual decisions at a time step t as follows: bt+1 = bt \u22121{\u03c0(\u00b7|st, bt) \u0338= \u00b5(\u00b7|st)}, b0 = B, (1) where 1 is an indicator function. The policies studied in this paper take the current budget bt as an input, which bt is the initial budget B subtracted by the number of counterfactual steps taken before time step t as shown in (1). Given that, the goal of our method is to solve the following constrained policy optimization: max \u03c0 E \" \u221e X t=0 \u03b3trt | s0 \u223cP0, \u03c0 # s.t. bt \u22650, \u2200t \u22650, \u2200{(st, bt, at)}\u221e t=0 \u2208EM(\u03c0) (2) EM(\u03c0) is the support set of trajectory distribution introduced by the MDP and policy \u03c0. Without additional statements, later we only consider state-action-budget trajectories in EM(\u03c0) and drop this requirement for the ease of notation. Note that bt in (2) is a function of the initial budget b0 = B, policies, and states before step t. To provide more clarity, we expand the constraint in (2) as follows: \u221e X t=0 1{\u03c0(\u00b7|st, bt) \u0338= \u00b5(\u00b7|st)} \u2264B (3) In order to develop our offline RL algorithm that maximizes (2), we introduce a new Bellman operator that plans on backup future values as well as the number of counterfactual decisions. Definition 1 (Counterfactual-Budgeting Bellman Operator). TCBQ(s, b, a) := r(s, a) + \u03b3 E s\u2032 [VQ(s\u2032, b)] (4) where VQ(s\u2032, b) := \u001a max{maxa\u2032 Q(s\u2032, b \u22121, a\u2032), Ea\u2032\u223c\u00b5 Q(s\u2032, b, a\u2032)} b > 0 Ea\u2032\u223c\u00b5 Q(s\u2032, b, a\u2032) b = 0 1There are different approaches to obtaining \u02c6 a [5, 8]. Generally, it approximates argmaxa Q(s, \u00b7) or is sampled from the current policy. 2Technically, extrapolation refers to out-of-distribution actions. A counterfactual decision policy may still overlap with the behavior policy and thus not truly extrapolate. Whether or not actions from counterfactual policy lead to extrapolation is unknown when we only observe logged action rather than \u00b5. In this paper, we refer to this \u201cpossible extrapolation\u201d as extrapolation as well, for the simplicity of discussion. 3 The Bellman operator TCB updates the Q values by taking the maximum value between two terms. The first term refers to the case where a counterfactual decision is made by maximizing Q values over all possible actions in the next state. This leads to a decrease in the counterfactual budget. The second term involves following the behavior policy in the next state (similar to SARSA [41]) while keeping the counterfactual budget intact. By selecting the optimal backup from the two cases, the Q value for a given budget b strikes a balance between the benefits of counterfactual decisions (i.e. 
$\max_{a'} Q(s', b-1, a')$) in the next step and in the further future. Since $b \le B$, there are at most $B$ backup steps taking the maximum $\max_{a'} Q(s', b-1, a')$ along the backup path. This intuitively upper bounds the amount of extrapolation that the Q function can take. We can recover the standard Bellman operator on state-action Q functions by considering $b \equiv \infty$ in $Q(s, b, a)$ and the rule $\infty - 1 = \infty$: since $b = b - 1 = \infty$, the first max operator in $V_Q$ always prefers the former term, which gives us the standard Bellman operator. It is straightforward to show that the counterfactual-budgeting Bellman operator is a $\gamma$-contraction. Based on that, we prove that its fixed point is the optimal Q function constrained by a counterfactual budget. Theorem 2. There exists a unique fixed point of $\mathcal{T}_{\mathrm{CB}}$, and it is
$$Q^\star(s, b, a) := \max_\pi \; \mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^t r_t \,\middle|\, s_0 = s, a_0 = a, b_1 = b, \pi\right], \quad \text{s.t.} \;\; b_t \ge 0, \;\forall t \ge 1. \qquad (5)$$
This theorem (proved in the appendix) indicates that fixed point iteration of $\mathcal{T}_{\mathrm{CB}}$ leads to the optimal value function under the upper bound on counterfactual decisions. It thus motivates us to minimize a temporal difference (TD) error of the counterfactual-budgeting Bellman operator, towards the goal of improving on the behavior policy over only a limited number of important decision steps. 3.1 Algorithm In this section, we derive a practical offline algorithm for our approach when the counterfactual-budgeting Bellman operator (4) is estimated via function approximation. First, we replace the expectations in (4) by one-sample estimates in order to evaluate the Bellman backup with the fixed dataset $\mathcal{D}$. Next, since the action space is continuous, we approximate the $\max_a$ operator in (4) by maximizing over actions sampled from a policy network that is trained to maximize the current Q function. Note that these design choices are commonly employed in previous offline RL algorithms [8, 13, 23, 24]. As a result, we obtain a sample-based counterfactual-budgeting Bellman operator denoted $\hat{\mathcal{T}}_{\mathrm{CB}}$. Definition 3 (Approximate Counterfactual-Budgeting Bellman Operator). For all $(s, a)$,
$$\hat{\mathcal{T}}_{\mathrm{CB}} Q_\theta(s, b, a) := r(s, a) + \gamma \cdot \begin{cases} \max\big\{\max_{\bar{a}\in\{a_k\}_{k=1}^{m}} Q_\theta(s', b-1, \bar{a}),\; Q_\theta(s', b, a')\big\} & b > 0, \\ Q_\theta(s', b, a') & b = 0, \end{cases}$$
where $(s', a')$ are sampled from $\mathcal{D}$ and $\{a_k\}_{k=1}^{m}$ are sampled from $\pi$. Although this operator requires the additional input $b$ to the Q function compared with the standard Bellman operator, it does not add any extra data requirement: we can augment each $(s, a, s', a')$ tuple from an offline dataset with all $0 \le b \le B$, where $B$ is the maximum number of counterfactual decisions that we consider. To learn the Q function, we minimize the least squares TD error introduced by $\hat{\mathcal{T}}_{\mathrm{CB}}$, using the dataset $\mathcal{D}$ consisting of $(s, a, s', a')$ sampled from the behavior policy $\mu$. Subsequently, we update the policy by making it more likely to choose actions with higher Q-values.
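To make the sampled backup in Definition 3 concrete, here is a minimal PyTorch-style sketch of the target computation. The budget-conditioned Q-network interface, the discount value, and all names are illustrative assumptions rather than the paper's implementation.

import torch

def cb_backup_target(q_target, reward, next_state, next_action, b, policy, m,
                     gamma=0.99):
    # q_target(s, b, a): target Q network, conditioned on the remaining budget b.
    # next_action: the logged action a' ~ mu(.|s') from the dataset.
    # policy(s, b): samples an action from the proposal policy for budget b.
    with torch.no_grad():
        # Value of following the behavior policy at s' (budget unchanged).
        behavior_value = q_target(next_state, b, next_action)
        if b > 0:
            # Value of a counterfactual step: max over m sampled actions,
            # paying one unit of budget.
            sampled = [q_target(next_state, b - 1, policy(next_state, b - 1))
                       for _ in range(m)]
            counterfactual_value = torch.stack(sampled, dim=0).max(dim=0).values
            next_value = torch.maximum(counterfactual_value, behavior_value)
        else:
            next_value = behavior_value
    return reward + gamma * next_value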
The objective functions for the Q function and the policy $\pi$ are formally defined as follows, with $\theta$ denoting the parameters of the Q function and $\phi$ the parameters of the policy network:
$$L_Q(\theta, \bar{\theta}; \phi, B, \mathcal{D}) := \sum_{b=0}^{B} \mathbb{E}_{(s,a,s',a')\sim\mathcal{D}}\left[\big(Q_\theta(s, b, a) - \hat{\mathcal{T}}_{\mathrm{CB}} Q_{\bar{\theta}}(s, b, a)\big)^2\right], \qquad (6)$$
$$L_\pi(\phi; \theta, B, \mathcal{D}) := -\sum_{b=0}^{B} \mathbb{E}_{s\sim\mathcal{D},\, a\sim\pi_\phi(\cdot|s,b)}\, Q_\theta(s, b, a), \qquad (7)$$
where $\bar{\theta}$ denotes a delayed target network. (Footnote: Here $Q^\star(s, b, a)$ is defined given $b_1 = b$ rather than $b_0$ because the action $a$ is already given, and the future (optimal) value should be independent of which distribution $a$ is drawn from.)
Algorithm 1 (BCOL Training):
1: Input: $\theta_0, \phi_0, T, B, \mathcal{D}$
2: for $t = 0$ to $T-1$ do
3: $\theta_{t+1} \leftarrow \theta_t - \alpha_q \nabla_\theta L_Q(\theta, \theta_t; \phi_t, B, \mathcal{D})$
4: $\phi_{t+1} \leftarrow \phi_t - \alpha_p \nabla_\phi L_\pi(\phi; \theta_t, B, \mathcal{D})$
5: end for
6: Return $\theta_T, \phi_T$
Algorithm 2 (BCOL Inference):
1: Input: $\theta, \phi, B, \hat{\mu}, M$
2: $b_0 \leftarrow B$, $s_0 \sim M$
3: for $t = 0$ until the trajectory ends do
4: $\hat{\pi}, b_{t+1} \leftarrow \mathrm{Select}(\pi_\phi, \hat{\mu}; s_t, b_t, Q_\theta)$
5: $a_t \sim \hat{\pi}(\cdot)$, $r_t, s_{t+1} \sim M$
6: end for
Algorithm 1 describes the training process of our method (see the footnote on $L_\pi$ below). The counterfactual-budgeting Bellman operator has an interesting property: $Q$ is monotonically increasing in $b$ for any $Q$ such that $Q = \hat{\mathcal{T}}_{\mathrm{CB}} Q$, i.e.,
$$Q(s, b, a) \ge Q(s, b', a), \quad \forall s, a, \; b > b'. \qquad (8)$$
The proof is straightforward, since $Q$ with a larger budget maximizes the value over more action sequences. This property holds asymptotically; however, with a limited number of iterations and function approximation, the gap is not always positive. Intuitively, enforcing the monotonic gap might reduce the search space and potentially accelerate convergence. It is important to note that this modification does not alter the fixed point solution, since $Q^\star$ always has a monotonic gap. To implement it, we introduce the following penalty term and add it to $L_Q$, resulting in the revised objective
$$L_Q(\theta, \bar{\theta}; \phi, B, \mathcal{D}) + \omega \sum_{b=0}^{B-1} \mathbb{E}_{s\sim\mathcal{D},\, a\sim\pi_\phi(\cdot|s,b)}\left[\big(\max\{Q_\theta(s, b, a) - Q_\theta(s, b+1, a),\, 0\}\big)^2\right]. \qquad (9)$$
This penalty term minimizes the gap for the actions sampled from $\pi_\phi(\cdot|s, b)$. Since $\pi_\phi(\cdot|s, b)$ maximizes $Q_\theta(s, b, \cdot)$, which is supposed to be smaller than $Q_\theta(s, b+1, \cdot)$, it can be viewed as a heuristic for sampling actions efficiently in the action space without adding any new module to the algorithm. Putting these together, we have Algorithm 1: Budgeting Counterfactual for Offline reinforcement Learning (BCOL). It uses an offline actor-critic style algorithm to learn the Q function and a policy maximizing the learned Q values. However, since the problem is to find the best policy under the constraints, the greedy policy $\pi_\phi$ by itself is not enough for inference. The action selection method used during inference is implicitly included in the definition of $\mathcal{T}_{\mathrm{CB}}$: at test time, the policy needs to look ahead, based on the current budget $b_t$, at the Q values of taking a counterfactual action versus following the behavior policy, and update the remaining budget based on that decision. We define this step as an operator below.
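Before turning to that operator, here is a minimal PyTorch-style sketch of how the objectives in Eq. (6), (7), and the penalty in Eq. (9) could be assembled for a single budget level. The backup target is assumed to be computed as in the sketch after Definition 3, and all names and interfaces are illustrative assumptions, not the paper's implementation.

import torch

def bcol_losses(q_net, policy, s, a, b, B, td_target, omega):
    # td_target: sampled counterfactual-budgeting backup target for (s, b, a),
    # computed from the target network and detached from the graph.
    # Critic loss (Eq. 6) for one budget level b.
    critic_loss = ((q_net(s, b, a) - td_target) ** 2).mean()
    # Monotonicity penalty (Eq. 9): discourage Q(s, b, a_pi) exceeding Q(s, b+1, a_pi).
    if b < B:
        a_pi = policy(s, b)
        gap = torch.relu(q_net(s, b, a_pi) - q_net(s, b + 1, a_pi))
        critic_loss = critic_loss + omega * (gap ** 2).mean()
    # Actor loss (Eq. 7): push the budget-conditioned policy towards high-Q actions.
    # (In practice this needs a reparameterized sample or a policy-gradient
    # estimator, and the actor optimizer should update only the policy parameters.)
    actor_loss = -q_net(s, b, policy(s, b)).mean()
    return critic_loss, actor_loss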
$$\mathrm{Select}(\pi, \mu; s, b, Q) := \begin{cases} (\mu(\cdot|s),\; b) & \text{if } \mathbb{E}_{\bar{a}\sim\pi(s,b)}\, Q(s, b-1, \bar{a}) \le \mathbb{E}_{\bar{a}\sim\mu(s)}\, Q(s, b, \bar{a}) \text{ or } b = 0, \\ (\pi(\cdot|s, b),\; b-1) & \text{otherwise.} \end{cases}$$
The complete inference-time procedure is described in Algorithm 2. It takes the learned policy and Q-function parameters as well as an approximate behavior policy $\hat{\mu}$. If the behavior policy is unknown, $\hat{\mu}$ can be learned via behavior cloning or any other imitation learning algorithm. Algorithm 2 starts with an initial counterfactual budget $B$, takes an action at each step according to the condition in Select, and updates the budget $b_t$ if the action is not drawn from $\hat{\mu}$. 3.2 Comparison to regularized and one-step offline RL One of the most common approaches in offline RL is to add policy or value regularization or constraint terms on top of a vanilla off-policy RL algorithm [5, 8, 13, 21-24, 47, 48], referred to as regularized methods in this paper. Our method can be viewed as an alternative to using regularized losses: instead, we enforce a non-Markov budget constraint on the policy class, and we argue this provides several unique advantages. First, compared with the coefficients of regularization terms, the budget parameter has a clearer physical interpretation in most applications, making it easier to tune the hyper-parameters and to explain the decisions of the policy in practice. Second, the budget constraint is never violated at test time, which provides an additional level of safety; while regularization terms penalize the divergence during training, they cannot provide a guarantee on the divergence or distance between the test policy and the behavior policy. (Footnote: $L_\pi$ above is not exact but abstract, in the sense that the gradient with respect to $\phi$ as a distribution parameter cannot be computed directly and requires a policy gradient estimator; we clarify our implementation in Section 3.3.) Another type of offline RL related to our core idea is one-step RL [3, 46]. We propose and leverage the concept of limiting the number of counterfactual/off-policy steps; however, the "step" here refers to decision steps, whereas in one-step RL it refers to (training) iteration steps. Although one-step RL applies only one training step, the resulting policy can still be far from the behavior policy in many different states and give different decisions at test time. From another perspective, our method, although limited to a small number (or even one) of counterfactual decision steps, still applies dynamic programming to find the best allocation of counterfactual steps. 3.3 Important implementation details Algorithm 1 provides a general actor-critic framework to optimize policy value constrained by counterfactual decisions. To implement a practical offline deep RL algorithm, the first design choice is how to model the policy and which policy gradient estimator to use for $\nabla_\phi L_\pi$. Algorithm 1 is compatible with any off-policy policy gradient method. We implement Algorithm 1 using a SAC-style [17] policy gradient for a stochastic policy and a TD3-style [12] policy gradient for a deterministic policy, to provide a generic algorithm for both stochastic and deterministic policy models. As TD3 uses a deterministic policy, the value of $m$ is 1 in $\hat{\mathcal{T}}_{\mathrm{CB}}$ and the actor loss becomes $Q_\theta(s, b, \pi_\phi(s, b))$. In both cases, we implement target updates following SAC/TD3. We defer more details to the appendix.
Both SAC and TD3 are originally proposed for online, off-policy RL. To better fit into the offline setting, previous offline RL methods based on these algorithms equip them with several key adaptations or implementation tricks. To eliminate orthogonal factors and focus on the budgeting idea in algorithm comparison, we follow these adaptations and describe them below. For SAC, prior work use twin Q functions [24], and a linear combination of the two Q values [8]. Prior work [8, 25] also sample m actions from the actor and take the maximum Q values in backup, instead of a standard actor-critic backup, which is a trick firstly introduced in [13, 15]. The entropy term in SAC is dropped in [8] as this term is mostly for online exploration. Previous TD3-based work [11] normalizes the state features and Q losses in TD3. All these adaptations are commonly used in state-of-the-art offline RL and are mostly considered as minor implementation details rather than major algorithmic designs. We refer to SAC and TD3 with all these adaptations as offline SAC and offline TD3 in this paper and refer to the two families of algorithms SAC-style and TD3-style. One-step RL[3] and IQL [22] for offline RL is built on top of estimating the behavior value Q\u00b5 by SARSA. Unlike SAC or TD3, it is not applicable to apply the counterfactual budgeting idea on top of SARSA, since it is a native on-policy method and does not consider counterfactual decisions. Thus we focus on the implementation of BCOL with SAC and TD3. Looping over all budget values in the loss is not efficient in practice. We implement the Q function and policy with B output heads and tensorize the loop over b. One can further reduce the computation cost by sampling budget values instead of updating all B heads over every sample. 4 Experiments We evaluate our BCOL algorithm against prior offline RL methods on the OpenAI gym MuJoCo tasks and AntMaze tasks in the D4RL benchmark [9]. We compare the SAC-style and TD3-style implementation of BCOL with state-of-the-art offline RL algorithms, studying the effectiveness of budgeting counterfactual in offline RL. Our experiment results also reveal that behavior cloning with only one strategic counterfactual decision still work surprisingly well on MuJoCo tasks. Finally, we study the value of our dynamic programming methods on where to spend the counterfactual budget. Baselines. We compared BCOL with regularized methods (policy and/or value regularization) based on SAC and TD3: CQL [24], CDC [8], TD3+BC [11]. Besides regularized methods, we also compare with one-step RL method [3] and IQL [22] as other state-of-the-art algorithms, and behavior cloning as imitation learning baseline. This covers a set of strong offline RL baselines. We are interested in both studying the effectiveness of counterfactual budget ideas in contrast to the closest regularized methods, as well as other state-of-the-art offline RL methods. 
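Returning to the inference procedure of Section 3.1 above (the Select operator and Algorithm 2), the following is a minimal sketch of one budget-aware action-selection step. The Monte Carlo estimation of the two expectations, the sample count, and all names are illustrative assumptions rather than the paper's implementation.

import torch

def select_action(policy, behavior_policy, q_net, state, b, n_samples=8):
    # Budget-aware action selection in the spirit of the Select operator:
    # compare the value of spending one unit of budget on the learned policy
    # against following the (estimated) behavior policy.
    with torch.no_grad():
        if b > 0:
            q_pi = torch.stack([q_net(state, b - 1, policy(state, b - 1))
                                for _ in range(n_samples)]).mean()
            q_mu = torch.stack([q_net(state, b, behavior_policy(state))
                                for _ in range(n_samples)]).mean()
            if q_pi > q_mu:
                return policy(state, b), b - 1  # counterfactual step: spend budget
    return behavior_policy(state), b            # follow behavior policy: keep budget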
Table 1: Average normalized scores on D4RL tasks. Task names for MuJoCo: m = medium, mr = medium replay, me = medium expert. Task names for AntMaze: u = umaze, m = medium, l = large, p = play, d = diverse. Bold numbers indicate the globally best method and underlined numbers the best within the base-method (SAC or TD3) group.
Task Name | BC | IQL | Onestep | TD3+BC | BCOL (TD3) | CQL | CDC | BCOL (SAC)
halfcheetah-m | 42.6 | 47.4 | 55.6 | 48.4 | 45.0 | 46.1 | 62.5 | 50.1
hopper-m | 52.9 | 66.3 | 83.3 | 59.4 | 85.8 | 64.6 | 84.9 | 83.2
walker2d-m | 75.3 | 78.3 | 85.6 | 84.5 | 76.7 | 74.5 | 70.7 | 84.1
halfcheetah-mr | 36.6 | 44.2 | 42.5 | 44.4 | 40.9 | 45.4 | 52.3 | 46.2
hopper-mr | 18.1 | 94.7 | 71.0 | 50.1 | 83.4 | 92.3 | 87.4 | 99.8
walker2d-mr | 26.0 | 73.9 | 71.6 | 80.2 | 49.7 | 83.7 | 87.8 | 86.0
halfcheetah-me | 55.2 | 86.7 | 93.5 | 91.5 | 88.7 | 87.3 | 66.3 | 86.9
hopper-me | 52.5 | 91.5 | 102.1 | 100.5 | 106.8 | 109.2 | 83.2 | 99.0
walker2d-me | 107.5 | 109.6 | 110.9 | 110.1 | 108.5 | 109.9 | 103.9 | 110.9
antmaze-u | 54.6 | 87.5 | 64.3 | 96.3 | 93.3 | 94.0 | 93.6 | 90.3
antmaze-u-d | 45.6 | 62.2 | 60.7 | 71.7 | 68.0 | 47.3 | 57.3 | 90.0
antmaze-m-p | 0.0 | 71.2 | 0.3 | 1.7 | 12.3 | 62.4 | 59.5 | 70.0
antmaze-m-d | 0.0 | 70.0 | 0.0 | 0.3 | 14.0 | 74.3 | 64.6 | 72.3
antmaze-l-p | 0.0 | 39.6 | 0.0 | 0.0 | 0.0 | 34.2 | 33.0 | 35.6
antmaze-l-d | 0.0 | 47.5 | 0.0 | 0.3 | 0.0 | 40.7 | 25.3 | 37.6
mujoco total | 466.7 | 692.4 | 716.0 | 669.2 | 685.6 | 713.0 | 699.0 | 746.0
antmaze total | 100.2 | 378.0 | 125.3 | 171.3 | 187.7 | 352.9 | 333.5 | 396.0
Total | 566.9 | 1070.4 | 841.3 | 840.2 | 873.3 | 1065.9 | 1032.5 | 1142.0
Benchmark. We report the results of BCOL together with the baselines on 9 OpenAI gym MuJoCo tasks and 6 AntMaze tasks in D4RL. The gym MuJoCo tasks consist of the v2 version of the medium, medium-replay, and medium-expert datasets for halfcheetah, walker2d, and hopper. MuJoCo tasks generally favor imitation-type algorithms, as they contain a large portion of near-optimal trajectories. The 6 AntMaze tasks are considered harder tasks for offline RL as they contain very few or no near-optimal trajectories, and many previous methods lack performance reports on these tasks. We exclude the random datasets in the MuJoCo tasks as they are less discriminating and none of the offline RL algorithms learns a meaningful policy there. We exclude the expert datasets in the MuJoCo tasks since they can be solved simply by behavior cloning and do not match the considerations behind most offline RL algorithm designs. How we report the results. Due to inconsistency in the version of D4RL used in the literature, it is necessary to clarify the source of the baseline results here. We retrieve the results of baseline methods from the original papers where applicable (IQL, CQL, one-step RL). We report the Rev. KL Reg variant of one-step RL as it shows the best performance on MuJoCo tasks [3], and for AntMaze tasks we report one-step RL results from the IQL paper as they are not reported in the original paper. We report the scores of behavior cloning and CQL from an updated version of the CQL paper (footnote 1) for D4RL v2 environments. We report the scores of TD3+BC and CDC based on our implementation, since the original results are for D4RL v0 environments. The v2 results are generally better than the original v0 scores in the paper and are consistent with TD3+BC's v2 scores from the IQL paper. Table 1 shows the results of the baseline algorithms as well as two variants of BCOL (TD3-style and SAC-style) on 15 D4RL tasks. For the scores from our experiments (BCOL, CDC, TD3+BC), we report the test episodic reward averaged over 300 test episodes: 10 test episodes per evaluation, over the last 10 evaluations, across 3 runs with different random seeds. All algorithms run for 1M steps following the literature.
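As a side note on how these scores are computed: the normalized scores in Table 1 presumably follow the standard D4RL convention (an assumption, since the paper does not restate the formula), and the total rows are simple sums of the per-task normalized scores. A minimal sketch:

def d4rl_normalized_score(raw_return, random_return, expert_return):
    # Standard D4RL convention (assumed here): 0 corresponds to a random policy
    # and 100 to an expert policy on the given task.
    return 100.0 * (raw_return - random_return) / (expert_return - random_return)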
We defer details like learning curves and standard deviations in the appendix. Main results. As Table 1 shows, BCOL-SAC outperforms prior offline RL methods for both MuJoCo and AntMaze total scores. BCOL never falls behind the best methods by a large margin in any task. This indicates the effectiveness and robustness of the counterfactual budget in different scenarios. Especially for the total score of AntMaze, the harder tasks, BCOL-SAC outperforms most prior approaches by a large margin except IQL. We also find for both TD3-style and SAC-style algorithms, BCOL outperform the regularized method based on the same architecture, ablating orthogonal factors other than the key idea about the counterfactual budget. 1https://sites.google.com/view/cql-offline-rl 7 Figure 1: Total normalized score with different values of B and \u03c9 in BCOL. The left two plots show MuJoCo average scores and the right two plots show AntMaze average scores. Figure 2: Percent difference of the performance on different budgeting methods compared with the full BCOL Algorithm (hc = HalfCheetah, hop = Hopper, w = Walker2d, am=AntMaze). The top row shows SAC-based experiments and the bottom row shows TD3-based experiments. TD3 plots do not include AntMaze-large tasks since the performances of BCOL are zero. No budgeting stands for offline SAC/TD3 without the budgeting constraints (equivalent to B \u2192\u221e). Budgeting without planning stands for randomly selecting B steps to follow from \u03c0 and the rest from \u02c6 \u00b5 during the test, where \u03c0 is learned by offline SAC/TD3. Budgeting without test-time planning stands for randomly selecting B steps (uniformly within the max horizon) to follow from \u03c0 and the rest from \u02c6 \u00b5 during the test, where \u03c0 is learned by Algorithm 1. In all settings, B is the same value as selected by BCOL . Hyper-parameters: B and \u03c9. Our algorithm only adds two hyper-parameters on top of SAC/TD3: budget B and \u03c9. We searched the value of B in {1, 10, 50} and the value of \u03c9 in {0, 1, 10, 100}. We select one set of hyper-parameters for MuJoCo (SAC-style: B = 10, \u03c9 = 10, TD3-style: B = 50, \u03c9 = 10) and one set for AntMaze (SAC-style and TD3-style: B = 50, \u03c9 = 0) based on the overall performance. This provides a fair comparison with baseline methods as all of them select their hyper-parameters either as this, or per task.1 Figure 1 shows how the total scores changes with different values of B and \u03c9. Full results with scores on each task are in the appendix. Figure 1 also shows another surprising finding from our experiments on the MuJoCo tasks. With the budget being only one B = 1, BCOL (SAC) shows comparable performance to some recent offline RL work including CDC and IQL. With 10 steps out of 1000 steps, it is able to outperform most prior methods. This highlights how strategically adding a few important actions on top of behavior policy can provide strong performance in this benchmark. It also indicates that the MuJoCo tasks in D4RL, even the more mixed datasets, considerably prefer the imitation-type approach. 1TD3+BC paper does not include results and hyperparameters in AntMaze. We do the hyper-parameter search of their \u03b1, within the range provided in the paper, and report the highest total score with \u03b1 = 3. 8 Ablation on how we realize the counterfactual budgeting constraints. 
The idea of budgeting counterfactual decisions naturally requires both planning on budgeting in Algorithm 1 during training and planning on budgeting in Algorithm 2 during testing. An ablation of the roles of budgeting itself and of the two planning parts is nonetheless helpful for understanding how BCOL works. In Figure 2, we report the results of three ablation methods. The first column shows the performance without budgeting constraints. This corresponds to the offline TD3 or SAC algorithm with all implementation adaptations used by BCOL and others [11, 13, 24]. As expected, vanilla offline RL without budgeting does not work well. The second column shows the performance with budgeting but without any planning on budgeting during either training or testing. This method realizes the budgeting by randomly assigning B decisions to the policy learned by offline SAC/TD3, and the rest to the estimated behavior policy. Randomly stitching the counterfactual policy and the behavior policy fails to fully realize the benefit of budgeting, since neither training nor inference is budget-aware. The experiment in the third column studies whether it is sufficient to be budget-aware during training only. This ablation randomly selects B actions from the policy $\pi$ trained by BCOL and the rest from the estimated behavior policy. This setting is closest to BCOL, but the lack of planning on how to spend the budget during testing still hurts performance. The results for the SAC-style and TD3-style implementations are consistent. This ablation shows that the effectiveness of BCOL relies on the collaboration of all three parts of our core idea: budgeting, planning on budgeting during training, and planning on budgeting during testing. The form of budget considered in this work counts the number of decision steps with differing action distributions. A natural alternative is a divergence between $\pi$ and $\mu$, which would provide a soft count of counterfactual decisions; however, it requires good calibration of both distributions and may be more suitable for discrete-action settings. We leave this as future work. On the theory side, a natural question about the value of budgeting counterfactuals is how the benefit of fewer counterfactual decisions is reflected in the theoretical properties of offline RL. We leave this investigation for future theoretical work. In conclusion, this paper studies offline reinforcement learning, which aims to learn a good counterfactual decision policy from a fixed dataset. We propose the novel idea of budgeting the number of counterfactual decisions and solving the allocation problem by dynamic programming. We provide strong empirical performance on offline RL benchmarks and the optimality of the fixed point solution as theoretical justification.
In this work, we closely\ninvestigate an important simplification of BCQ -- a prior approach for offline\nRL -- which removes a heuristic design choice and naturally restricts extracted\npolicies to remain exactly within the support of a given behavior policy.\nImportantly, in contrast to their original theoretical considerations, we\nderive this simplified algorithm through the introduction of a novel backup\noperator, Expected-Max Q-Learning (EMaQ), which is more closely related to the\nresulting practical algorithm. Specifically, in addition to the distribution\nsupport, EMaQ explicitly considers the number of samples and the proposal\ndistribution, allowing us to derive new sub-optimality bounds which can serve\nas a novel measure of complexity for offline RL problems. In the offline RL\nsetting -- the main focus of this work -- EMaQ matches and outperforms prior\nstate-of-the-art in the D4RL benchmarks. In the online RL setting, we\ndemonstrate that EMaQ is competitive with Soft Actor Critic. The key\ncontributions of our empirical findings are demonstrating the importance of\ncareful generative model design for estimating behavior policies, and an\nintuitive notion of complexity for offline RL problems. With its simple\ninterpretation and fewer moving parts, such as no explicit function\napproximator representing the policy, EMaQ serves as a strong yet easy to\nimplement baseline for future work.", + "authors": "Seyed Kamyar Seyed Ghasemipour, Dale Schuurmans, Shixiang Shane Gu", + "published": "2020-07-21", + "updated": "2021-01-13", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2205.13703v1", + "title": "Why So Pessimistic? Estimating Uncertainties for Offline RL through Ensembles, and Why Their Independence Matters", + "abstract": "Motivated by the success of ensembles for uncertainty estimation in\nsupervised learning, we take a renewed look at how ensembles of $Q$-functions\ncan be leveraged as the primary source of pessimism for offline reinforcement\nlearning (RL). We begin by identifying a critical flaw in a popular algorithmic\nchoice used by many ensemble-based RL algorithms, namely the use of shared\npessimistic target values when computing each ensemble member's Bellman error.\nThrough theoretical analyses and construction of examples in toy MDPs, we\ndemonstrate that shared pessimistic targets can paradoxically lead to value\nestimates that are effectively optimistic. Given this result, we propose MSG, a\npractical offline RL algorithm that trains an ensemble of $Q$-functions with\nindependently computed targets based on completely separate networks, and\noptimizes a policy with respect to the lower confidence bound of predicted\naction values. Our experiments on the popular D4RL and RL Unplugged offline RL\nbenchmarks demonstrate that on challenging domains such as antmazes, MSG with\ndeep ensembles surpasses highly well-tuned state-of-the-art methods by a wide\nmargin. Additionally, through ablations on benchmarks domains, we verify the\ncritical significance of using independently trained $Q$-functions, and study\nthe role of ensemble size. 
Finally, as using separate networks per ensemble\nmember can become computationally costly with larger neural network\narchitectures, we investigate whether efficient ensemble approximations\ndeveloped for supervised learning can be similarly effective, and demonstrate\nthat they do not match the performance and robustness of MSG with separate\nnetworks, highlighting the need for new efforts into efficient uncertainty\nestimation directed at RL.", + "authors": "Seyed Kamyar Seyed Ghasemipour, Shixiang Shane Gu, Ofir Nachum", + "published": "2022-05-27", + "updated": "2022-05-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1903.08738v1", + "title": "Batch Policy Learning under Constraints", + "abstract": "When learning policies for real-world domains, two important questions arise:\n(i) how to efficiently use pre-collected off-policy, non-optimal behavior data;\nand (ii) how to mediate among different competing objectives and constraints.\nWe thus study the problem of batch policy learning under multiple constraints,\nand offer a systematic solution. We first propose a flexible meta-algorithm\nthat admits any batch reinforcement learning and online learning procedure as\nsubroutines. We then present a specific algorithmic instantiation and provide\nperformance guarantees for the main objective and all constraints. To certify\nconstraint satisfaction, we propose a new and simple method for off-policy\npolicy evaluation (OPE) and derive PAC-style bounds. Our algorithm achieves\nstrong empirical results in different domains, including in a challenging\nproblem of simulated car driving subject to multiple constraints such as lane\nkeeping and smooth driving. We also show experimentally that our OPE method\noutperforms other popular OPE techniques on a standalone basis, especially in a\nhigh-dimensional setting.", + "authors": "Hoang M. Le, Cameron Voloshin, Yisong Yue", + "published": "2019-03-20", + "updated": "2019-03-20", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "math.OC", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2207.00632v1", + "title": "Offline Policy Optimization with Eligible Actions", + "abstract": "Offline policy optimization could have a large impact on many real-world\ndecision-making problems, as online learning may be infeasible in many\napplications. Importance sampling and its variants are a commonly used type of\nestimator in offline policy evaluation, and such estimators typically do not\nrequire assumptions on the properties and representational capabilities of\nvalue function or decision process model function classes. In this paper, we\nidentify an important overfitting phenomenon in optimizing the importance\nweighted return, in which it may be possible for the learned policy to\nessentially avoid making aligned decisions for part of the initial state space.\nWe propose an algorithm to avoid this overfitting through a new\nper-state-neighborhood normalization constraint, and provide a theoretical\njustification of the proposed algorithm. We also show the limitations of\nprevious attempts to this approach. We test our algorithm in a\nhealthcare-inspired simulator, a logged dataset collected from real hospitals\nand continuous control tasks. 
These experiments show the proposed method yields\nless overfitting and better test performance compared to state-of-the-art batch\nreinforcement learning algorithms.", + "authors": "Yao Liu, Yannis Flet-Berliac, Emma Brunskill", + "published": "2022-07-01", + "updated": "2022-07-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2202.02446v2", + "title": "Adversarially Trained Actor Critic for Offline Reinforcement Learning", + "abstract": "We propose Adversarially Trained Actor Critic (ATAC), a new model-free\nalgorithm for offline reinforcement learning (RL) under insufficient data\ncoverage, based on the concept of relative pessimism. ATAC is designed as a\ntwo-player Stackelberg game: A policy actor competes against an adversarially\ntrained value critic, who finds data-consistent scenarios where the actor is\ninferior to the data-collection behavior policy. We prove that, when the actor\nattains no regret in the two-player game, running ATAC produces a policy that\nprovably 1) outperforms the behavior policy over a wide range of\nhyperparameters that control the degree of pessimism, and 2) competes with the\nbest policy covered by data with appropriately chosen hyperparameters. Compared\nwith existing works, notably our framework offers both theoretical guarantees\nfor general function approximation and a deep RL implementation scalable to\ncomplex environments and large datasets. In the D4RL benchmark, ATAC\nconsistently outperforms state-of-the-art offline RL algorithms on a range of\ncontinuous control tasks.", + "authors": "Ching-An Cheng, Tengyang Xie, Nan Jiang, Alekh Agarwal", + "published": "2022-02-05", + "updated": "2022-07-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1812.02900v3", + "title": "Off-Policy Deep Reinforcement Learning without Exploration", + "abstract": "Many practical applications of reinforcement learning constrain agents to\nlearn from a fixed batch of data which has already been gathered, without\noffering further possibility for data collection. In this paper, we demonstrate\nthat due to errors introduced by extrapolation, standard off-policy deep\nreinforcement learning algorithms, such as DQN and DDPG, are incapable of\nlearning with data uncorrelated to the distribution under the current policy,\nmaking them ineffective for this fixed batch setting. We introduce a novel\nclass of off-policy algorithms, batch-constrained reinforcement learning, which\nrestricts the action space in order to force the agent towards behaving close\nto on-policy with respect to a subset of the given data. We present the first\ncontinuous control deep reinforcement learning algorithm which can learn\neffectively from arbitrary, fixed batch data, and empirically demonstrate the\nquality of its behavior in several tasks.", + "authors": "Scott Fujimoto, David Meger, Doina Precup", + "published": "2018-12-07", + "updated": "2019-08-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2106.01345v2", + "title": "Decision Transformer: Reinforcement Learning via Sequence Modeling", + "abstract": "We introduce a framework that abstracts Reinforcement Learning (RL) as a\nsequence modeling problem. 
This allows us to draw upon the simplicity and\nscalability of the Transformer architecture, and associated advances in\nlanguage modeling such as GPT-x and BERT. In particular, we present Decision\nTransformer, an architecture that casts the problem of RL as conditional\nsequence modeling. Unlike prior approaches to RL that fit value functions or\ncompute policy gradients, Decision Transformer simply outputs the optimal\nactions by leveraging a causally masked Transformer. By conditioning an\nautoregressive model on the desired return (reward), past states, and actions,\nour Decision Transformer model can generate future actions that achieve the\ndesired return. Despite its simplicity, Decision Transformer matches or exceeds\nthe performance of state-of-the-art model-free offline RL baselines on Atari,\nOpenAI Gym, and Key-to-Door tasks.", + "authors": "Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch", + "published": "2021-06-02", + "updated": "2021-06-24", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1911.11361v1", + "title": "Behavior Regularized Offline Reinforcement Learning", + "abstract": "In reinforcement learning (RL) research, it is common to assume access to\ndirect online interactions with the environment. However in many real-world\napplications, access to the environment is limited to a fixed offline dataset\nof logged experience. In such settings, standard RL algorithms have been shown\nto diverge or otherwise yield poor performance. Accordingly, recent work has\nsuggested a number of remedies to these issues. In this work, we introduce a\ngeneral framework, behavior regularized actor critic (BRAC), to empirically\nevaluate recently proposed methods as well as a number of simple baselines\nacross a variety of offline continuous control tasks. Surprisingly, we find\nthat many of the technical complexities introduced in recent methods are\nunnecessary to achieve strong performance. Additional ablations provide\ninsights into which design choices matter most in the offline RL setting.", + "authors": "Yifan Wu, George Tucker, Ofir Nachum", + "published": "2019-11-26", + "updated": "2019-11-26", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2206.04745v3", + "title": "Mildly Conservative Q-Learning for Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) defines the task of learning from a\nstatic logged dataset without continually interacting with the environment. The\ndistribution shift between the learned policy and the behavior policy makes it\nnecessary for the value function to stay conservative such that\nout-of-distribution (OOD) actions will not be severely overestimated. However,\nexisting approaches, penalizing the unseen actions or regularizing with the\nbehavior policy, are too pessimistic, which suppresses the generalization of\nthe value function and hinders the performance improvement. This paper explores\nmild but enough conservatism for offline learning while not harming\ngeneralization. We propose Mildly Conservative Q-learning (MCQ), where OOD\nactions are actively trained by assigning them proper pseudo Q values. 
We\ntheoretically show that MCQ induces a policy that behaves at least as well as\nthe behavior policy and no erroneous overestimation will occur for OOD actions.\nExperimental results on the D4RL benchmarks demonstrate that MCQ achieves\nremarkable performance compared with prior work. Furthermore, MCQ shows\nsuperior generalization ability when transferring from offline to online, and\nsignificantly outperforms baselines. Our code is publicly available at\nhttps://github.com/dmksjfl/MCQ.", + "authors": "Jiafei Lyu, Xiaoteng Ma, Xiu Li, Zongqing Lu", + "published": "2022-06-09", + "updated": "2024-02-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1709.10089v2", + "title": "Overcoming Exploration in Reinforcement Learning with Demonstrations", + "abstract": "Exploration in environments with sparse rewards has been a persistent problem\nin reinforcement learning (RL). Many tasks are natural to specify with a sparse\nreward, and manually shaping a reward function can result in suboptimal\nperformance. However, finding a non-zero reward is exponentially more difficult\nwith increasing task horizon or action dimensionality. This puts many\nreal-world tasks out of practical reach of RL methods. In this work, we use\ndemonstrations to overcome the exploration problem and successfully learn to\nperform long-horizon, multi-step robotics tasks with continuous control such as\nstacking blocks with a robot arm. Our method, which builds on top of Deep\nDeterministic Policy Gradients and Hindsight Experience Replay, provides an\norder of magnitude of speedup over RL on simulated robotics tasks. It is simple\nto implement and makes only the additional assumption that we can collect a\nsmall set of demonstrations. Furthermore, our method is able to solve tasks not\nsolvable by either RL or behavior cloning alone, and often ends up\noutperforming the demonstrator policy.", + "authors": "Ashvin Nair, Bob McGrew, Marcin Andrychowicz, Wojciech Zaremba, Pieter Abbeel", + "published": "2017-09-28", + "updated": "2018-02-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.NE", + "cs.RO" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2212.04607v2", + "title": "Confidence-Conditioned Value Functions for Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) promises the ability to learn effective\npolicies solely using existing, static datasets, without any costly online\ninteraction. To do so, offline RL methods must handle distributional shift\nbetween the dataset and the learned policy. The most common approach is to\nlearn conservative, or lower-bound, value functions, which underestimate the\nreturn of out-of-distribution (OOD) actions. However, such methods exhibit one\nnotable drawback: policies optimized on such value functions can only behave\naccording to a fixed, possibly suboptimal, degree of conservatism. However,\nthis can be alleviated if we instead are able to learn policies for varying\ndegrees of conservatism at training time and devise a method to dynamically\nchoose one of them during evaluation. To do so, in this work, we propose\nlearning value functions that additionally condition on the degree of\nconservatism, which we dub confidence-conditioned value functions. We derive a\nnew form of a Bellman backup that simultaneously learns Q-values for any degree\nof confidence with high probability. 
By conditioning on confidence, our value\nfunctions enable adaptive strategies during online evaluation by controlling\nfor confidence level using the history of observations thus far. This approach\ncan be implemented in practice by conditioning the Q-function from existing\nconservative algorithms on the confidence.We theoretically show that our\nlearned value functions produce conservative estimates of the true value at any\ndesired confidence. Finally, we empirically show that our algorithm outperforms\nexisting conservative offline RL algorithms on multiple discrete control\ndomains.", + "authors": "Joey Hong, Aviral Kumar, Sergey Levine", + "published": "2022-12-08", + "updated": "2023-10-30", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2006.09438v1", + "title": "Off-policy Bandits with Deficient Support", + "abstract": "Learning effective contextual-bandit policies from past actions of a deployed\nsystem is highly desirable in many settings (e.g. voice assistants,\nrecommendation, search), since it enables the reuse of large amounts of log\ndata. State-of-the-art methods for such off-policy learning, however, are based\non inverse propensity score (IPS) weighting. A key theoretical requirement of\nIPS weighting is that the policy that logged the data has \"full support\", which\ntypically translates into requiring non-zero probability for any action in any\ncontext. Unfortunately, many real-world systems produce support deficient data,\nespecially when the action space is large, and we show how existing methods can\nfail catastrophically. To overcome this gap between theory and applications, we\nidentify three approaches that provide various guarantees for IPS-based\nlearning despite the inherent limitations of support-deficient data:\nrestricting the action space, reward extrapolation, and restricting the policy\nspace. We systematically analyze the statistical and computational properties\nof these three approaches, and we empirically evaluate their effectiveness. In\naddition to providing the first systematic analysis of support-deficiency in\ncontextual-bandit learning, we conclude with recommendations that provide\npractical guidance.", + "authors": "Noveen Sachdeva, Yi Su, Thorsten Joachims", + "published": "2020-06-16", + "updated": "2020-06-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.IR", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1906.00949v2", + "title": "Stabilizing Off-Policy Q-Learning via Bootstrapping Error Reduction", + "abstract": "Off-policy reinforcement learning aims to leverage experience collected from\nprior policies for sample-efficient learning. However, in practice, commonly\nused off-policy approximate dynamic programming methods based on Q-learning and\nactor-critic methods are highly sensitive to the data distribution, and can\nmake only limited progress without collecting additional on-policy data. As a\nstep towards more robust off-policy algorithms, we study the setting where the\noff-policy experience is fixed and there is no further interaction with the\nenvironment. We identify bootstrapping error as a key source of instability in\ncurrent methods. Bootstrapping error is due to bootstrapping from actions that\nlie outside of the training data distribution, and it accumulates via the\nBellman backup operator. 
We theoretically analyze bootstrapping error, and\ndemonstrate how carefully constraining action selection in the backup can\nmitigate it. Based on our analysis, we propose a practical algorithm,\nbootstrapping error accumulation reduction (BEAR). We demonstrate that BEAR is\nable to learn robustly from different off-policy distributions, including\nrandom and suboptimal demonstrations, on a range of continuous control tasks.", + "authors": "Aviral Kumar, Justin Fu, George Tucker, Sergey Levine", + "published": "2019-06-03", + "updated": "2019-11-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1907.04543v4", + "title": "An Optimistic Perspective on Offline Reinforcement Learning", + "abstract": "Off-policy reinforcement learning (RL) using a fixed offline dataset of\nlogged interactions is an important consideration in real world applications.\nThis paper studies offline RL using the DQN replay dataset comprising the\nentire replay experience of a DQN agent on 60 Atari 2600 games. We demonstrate\nthat recent off-policy deep RL algorithms, even when trained solely on this\nfixed dataset, outperform the fully trained DQN agent. To enhance\ngeneralization in the offline setting, we present Random Ensemble Mixture\n(REM), a robust Q-learning algorithm that enforces optimal Bellman consistency\non random convex combinations of multiple Q-value estimates. Offline REM\ntrained on the DQN replay dataset surpasses strong RL baselines. Ablation\nstudies highlight the role of offline dataset size and diversity as well as the\nalgorithm choice in our positive results. Overall, the results here present an\noptimistic view that robust RL algorithms trained on sufficiently large and\ndiverse offline datasets can lead to high quality policies. The DQN replay\ndataset can serve as an offline RL benchmark and is open-sourced.", + "authors": "Rishabh Agarwal, Dale Schuurmans, Mohammad Norouzi", + "published": "2019-07-10", + "updated": "2020-06-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1904.08473v2", + "title": "Off-Policy Policy Gradient with State Distribution Correction", + "abstract": "We study the problem of off-policy policy optimization in Markov decision\nprocesses, and develop a novel off-policy policy gradient method. Prior\noff-policy policy gradient approaches have generally ignored the mismatch\nbetween the distribution of states visited under the behavior policy used to\ncollect data, and what would be the distribution of states under the learned\npolicy. Here we build on recent progress for estimating the ratio of the state\ndistributions under behavior and evaluation policies for policy evaluation, and\npresent an off-policy policy gradient optimization technique that can account\nfor this mismatch in distributions. 
We present an illustrative example of why\nthis is important and a theoretical convergence guarantee for our approach.\nEmpirically, we compare our method in simulations to several strong baselines\nwhich do not correct for this mismatch, significantly improving in the quality\nof the policy discovered.", + "authors": "Yao Liu, Adith Swaminathan, Alekh Agarwal, Emma Brunskill", + "published": "2019-04-17", + "updated": "2019-07-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2106.06860v2", + "title": "A Minimalist Approach to Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) defines the task of learning from a fixed\nbatch of data. Due to errors in value estimation from out-of-distribution\nactions, most offline RL algorithms take the approach of constraining or\nregularizing the policy with the actions contained in the dataset. Built on\npre-existing RL algorithms, modifications to make an RL algorithm work offline\ncomes at the cost of additional complexity. Offline RL algorithms introduce new\nhyperparameters and often leverage secondary components such as generative\nmodels, while adjusting the underlying RL algorithm. In this paper we aim to\nmake a deep RL algorithm work while making minimal changes. We find that we can\nmatch the performance of state-of-the-art offline RL algorithms by simply\nadding a behavior cloning term to the policy update of an online RL algorithm\nand normalizing the data. The resulting algorithm is a simple to implement and\ntune baseline, while more than halving the overall run time by removing the\nadditional computational overhead of previous methods.", + "authors": "Scott Fujimoto, Shixiang Shane Gu", + "published": "2021-06-12", + "updated": "2021-12-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2110.01548v2", + "title": "Uncertainty-Based Offline Reinforcement Learning with Diversified Q-Ensemble", + "abstract": "Offline reinforcement learning (offline RL), which aims to find an optimal\npolicy from a previously collected static dataset, bears algorithmic\ndifficulties due to function approximation errors from out-of-distribution\n(OOD) data points. To this end, offline RL algorithms adopt either a constraint\nor a penalty term that explicitly guides the policy to stay close to the given\ndataset. However, prior methods typically require accurate estimation of the\nbehavior policy or sampling from OOD data points, which themselves can be a\nnon-trivial problem. Moreover, these methods under-utilize the generalization\nability of deep neural networks and often fall into suboptimal solutions too\nclose to the given dataset. In this work, we propose an uncertainty-based\noffline RL method that takes into account the confidence of the Q-value\nprediction and does not require any estimation or sampling of the data\ndistribution. We show that the clipped Q-learning, a technique widely used in\nonline RL, can be leveraged to successfully penalize OOD data points with high\nprediction uncertainties. Surprisingly, we find that it is possible to\nsubstantially outperform existing offline RL methods on various tasks by simply\nincreasing the number of Q-networks along with the clipped Q-learning. 
Based on\nthis observation, we propose an ensemble-diversified actor-critic algorithm\nthat reduces the number of required ensemble networks down to a tenth compared\nto the naive ensemble while achieving state-of-the-art performance on most of\nthe D4RL benchmarks considered.", + "authors": "Gaon An, Seungyong Moon, Jang-Hyun Kim, Hyun Oh Song", + "published": "2021-10-04", + "updated": "2021-10-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2102.09225v4", + "title": "Continuous Doubly Constrained Batch Reinforcement Learning", + "abstract": "Reliant on too many experiments to learn good actions, current Reinforcement\nLearning (RL) algorithms have limited applicability in real-world settings,\nwhich can be too expensive to allow exploration. We propose an algorithm for\nbatch RL, where effective policies are learned using only a fixed offline\ndataset instead of online interactions with the environment. The limited data\nin batch RL produces inherent uncertainty in value estimates of states/actions\nthat were insufficiently represented in the training data. This leads to\nparticularly severe extrapolation when our candidate policies diverge from one\nthat generated the data. We propose to mitigate this issue via two\nstraightforward penalties: a policy-constraint to reduce this divergence and a\nvalue-constraint that discourages overly optimistic estimates. Over a\ncomprehensive set of 32 continuous-action batch RL benchmarks, our approach\ncompares favorably to state-of-the-art methods, regardless of how the offline\ndata were collected.", + "authors": "Rasool Fakoor, Jonas Mueller, Kavosh Asadi, Pratik Chaudhari, Alexander J. Smola", + "published": "2021-02-18", + "updated": "2021-12-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1907.05079v1", + "title": "Safe Policy Improvement with Soft Baseline Bootstrapping", + "abstract": "Batch Reinforcement Learning (Batch RL) consists in training a policy using\ntrajectories collected with another policy, called the behavioural policy. Safe\npolicy improvement (SPI) provides guarantees with high probability that the\ntrained policy performs better than the behavioural policy, also called\nbaseline in this setting. Previous work shows that the SPI objective improves\nmean performance as compared to using the basic RL objective, which boils down\nto solving the MDP with maximum likelihood. Here, we build on that work and\nimprove more precisely the SPI with Baseline Bootstrapping algorithm (SPIBB) by\nallowing the policy search over a wider set of policies. Instead of binarily\nclassifying the state-action pairs into two sets (the \\textit{uncertain} and\nthe \\textit{safe-to-train-on} ones), we adopt a softer strategy that controls\nthe error in the value estimates by constraining the policy change according to\nthe local model uncertainty. The method can take more risks on uncertain\nactions all the while remaining provably-safe, and is therefore less\nconservative than the state-of-the-art methods. 
We propose two algorithms (one\noptimal and one approximate) to solve this constrained optimization problem and\nempirically show a significant improvement over existing SPI algorithms both on\nfinite MDPs and on infinite MDPs with a neural network function approximation.", + "authors": "Kimia Nadjahi, Romain Laroche, R\u00e9mi Tachet des Combes", + "published": "2019-07-11", + "updated": "2019-07-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2202.05607v2", + "title": "Online Decision Transformer", + "abstract": "Recent work has shown that offline reinforcement learning (RL) can be\nformulated as a sequence modeling problem (Chen et al., 2021; Janner et al.,\n2021) and solved via approaches similar to large-scale language modeling.\nHowever, any practical instantiation of RL also involves an online component,\nwhere policies pretrained on passive offline datasets are finetuned via\ntaskspecific interactions with the environment. We propose Online Decision\nTransformers (ODT), an RL algorithm based on sequence modeling that blends\noffline pretraining with online finetuning in a unified framework. Our\nframework uses sequence-level entropy regularizers in conjunction with\nautoregressive modeling objectives for sample-efficient exploration and\nfinetuning. Empirically, we show that ODT is competitive with the\nstate-of-the-art in absolute performance on the D4RL benchmark but shows much\nmore significant gains during the finetuning procedure.", + "authors": "Qinqing Zheng, Amy Zhang, Aditya Grover", + "published": "2022-02-11", + "updated": "2022-07-13", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2207.02200v1", + "title": "Offline RL Policies Should be Trained to be Adaptive", + "abstract": "Offline RL algorithms must account for the fact that the dataset they are\nprovided may leave many facets of the environment unknown. The most common way\nto approach this challenge is to employ pessimistic or conservative methods,\nwhich avoid behaviors that are too dissimilar from those in the training\ndataset. However, relying exclusively on conservatism has drawbacks:\nperformance is sensitive to the exact degree of conservatism, and conservative\nobjectives can recover highly suboptimal policies. In this work, we propose\nthat offline RL methods should instead be adaptive in the presence of\nuncertainty. We show that acting optimally in offline RL in a Bayesian sense\ninvolves solving an implicit POMDP. As a result, optimal policies for offline\nRL must be adaptive, depending not just on the current state but rather all the\ntransitions seen so far during evaluation.We present a model-free algorithm for\napproximating this optimal adaptive policy, and demonstrate the efficacy of\nlearning such adaptive policies in offline RL benchmarks.", + "authors": "Dibya Ghosh, Anurag Ajay, Pulkit Agrawal, Sergey Levine", + "published": "2022-07-05", + "updated": "2022-07-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1905.01756v2", + "title": "P3O: Policy-on Policy-off Policy Optimization", + "abstract": "On-policy reinforcement learning (RL) algorithms have high sample complexity\nwhile off-policy algorithms are difficult to tune. 
Merging the two holds the\npromise to develop efficient algorithms that generalize across diverse\nenvironments. It is however challenging in practice to find suitable\nhyper-parameters that govern this trade off. This paper develops a simple\nalgorithm named P3O that interleaves off-policy updates with on-policy updates.\nP3O uses the effective sample size between the behavior policy and the target\npolicy to control how far they can be from each other and does not introduce\nany additional hyper-parameters. Extensive experiments on the Atari-2600 and\nMuJoCo benchmark suites show that this simple technique is effective in\nreducing the sample complexity of state-of-the-art algorithms. Code to\nreproduce experiments in this paper is at https://github.com/rasoolfa/P3O.", + "authors": "Rasool Fakoor, Pratik Chaudhari, Alexander J. Smola", + "published": "2019-05-05", + "updated": "2019-07-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2007.08202v2", + "title": "Provably Good Batch Reinforcement Learning Without Great Exploration", + "abstract": "Batch reinforcement learning (RL) is important to apply RL algorithms to many\nhigh stakes tasks. Doing batch RL in a way that yields a reliable new policy in\nlarge domains is challenging: a new decision policy may visit states and\nactions outside the support of the batch data, and function approximation and\noptimization with limited samples can further increase the potential of\nlearning policies with overly optimistic estimates of their future performance.\nRecent algorithms have shown promise but can still be overly optimistic in\ntheir expected outcomes. Theoretical work that provides strong guarantees on\nthe performance of the output policy relies on a strong concentrability\nassumption, that makes it unsuitable for cases where the ratio between\nstate-action distributions of behavior policy and some candidate policies is\nlarge. This is because in the traditional analysis, the error bound scales up\nwith this ratio. We show that a small modification to Bellman optimality and\nevaluation back-up to take a more conservative update can have much stronger\nguarantees. In certain settings, they can find the approximately best policy\nwithin the state-action space explored by the batch data, without requiring a\npriori assumptions of concentrability. We highlight the necessity of our\nconservative update and the limitations of previous algorithms and analyses by\nillustrative MDP examples, and demonstrate an empirical comparison of our\nalgorithm and other state-of-the-art batch RL baselines in standard benchmarks.", + "authors": "Yao Liu, Adith Swaminathan, Alekh Agarwal, Emma Brunskill", + "published": "2020-07-16", + "updated": "2020-07-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2211.11603v1", + "title": "Model-based Trajectory Stitching for Improved Offline Reinforcement Learning", + "abstract": "In many real-world applications, collecting large and high-quality datasets\nmay be too costly or impractical. Offline reinforcement learning (RL) aims to\ninfer an optimal decision-making policy from a fixed set of data. Getting the\nmost information from historical data is then vital for good performance once\nthe policy is deployed. 
We propose a model-based data augmentation strategy,\nTrajectory Stitching (TS), to improve the quality of sub-optimal historical\ntrajectories. TS introduces unseen actions joining previously disconnected\nstates: using a probabilistic notion of state reachability, it effectively\n`stitches' together parts of the historical demonstrations to generate new,\nhigher quality ones. A stitching event consists of a transition between a pair\nof observed states through a synthetic and highly probable action. New actions\nare introduced only when they are expected to be beneficial, according to an\nestimated state-value function. We show that using this data augmentation\nstrategy jointly with behavioural cloning (BC) leads to improvements over the\nbehaviour-cloned policy from the original dataset. Improving over the BC policy\ncould then be used as a launchpad for online RL through planning and\ndemonstration-guided RL.", + "authors": "Charles A. Hepburn, Giovanni Montana", + "published": "2022-11-21", + "updated": "2022-11-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2103.08050v1", + "title": "Offline Reinforcement Learning with Fisher Divergence Critic Regularization", + "abstract": "Many modern approaches to offline Reinforcement Learning (RL) utilize\nbehavior regularization, typically augmenting a model-free actor critic\nalgorithm with a penalty measuring divergence of the policy from the offline\ndata. In this work, we propose an alternative approach to encouraging the\nlearned policy to stay close to the data, namely parameterizing the critic as\nthe log-behavior-policy, which generated the offline data, plus a state-action\nvalue offset term, which can be learned using a neural network. Behavior\nregularization then corresponds to an appropriate regularizer on the offset\nterm. We propose using a gradient penalty regularizer for the offset term and\ndemonstrate its equivalence to Fisher divergence regularization, suggesting\nconnections to the score matching and generative energy-based model literature.\nWe thus term our resulting algorithm Fisher-BRC (Behavior Regularized Critic).\nOn standard offline RL benchmarks, Fisher-BRC achieves both improved\nperformance and faster convergence over existing state-of-the-art methods.", + "authors": "Ilya Kostrikov, Jonathan Tompson, Rob Fergus, Ofir Nachum", + "published": "2021-03-14", + "updated": "2021-03-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2106.08909v3", + "title": "Offline RL Without Off-Policy Evaluation", + "abstract": "Most prior approaches to offline reinforcement learning (RL) have taken an\niterative actor-critic approach involving off-policy evaluation. In this paper\nwe show that simply doing one step of constrained/regularized policy\nimprovement using an on-policy Q estimate of the behavior policy performs\nsurprisingly well. This one-step algorithm beats the previously reported\nresults of iterative algorithms on a large portion of the D4RL benchmark. The\none-step baseline achieves this strong performance while being notably simpler\nand more robust to hyperparameters than previously proposed iterative\nalgorithms. We argue that the relatively poor performance of iterative\napproaches is a result of the high variance inherent in doing off-policy\nevaluation and magnified by the repeated optimization of policies against those\nestimates. 
In addition, we hypothesize that the strong performance of the\none-step algorithm is due to a combination of favorable structure in the\nenvironment and behavior policy.", + "authors": "David Brandfonbrener, William F. Whitney, Rajesh Ranganath, Joan Bruna", + "published": "2021-06-16", + "updated": "2021-12-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2006.09359v6", + "title": "AWAC: Accelerating Online Reinforcement Learning with Offline Datasets", + "abstract": "Reinforcement learning (RL) provides an appealing formalism for learning\ncontrol policies from experience. However, the classic active formulation of RL\nnecessitates a lengthy active exploration process for each behavior, making it\ndifficult to apply in real-world settings such as robotic control. If we can\ninstead allow RL algorithms to effectively use previously collected data to aid\nthe online learning process, such applications could be made substantially more\npractical: the prior data would provide a starting point that mitigates\nchallenges due to exploration and sample complexity, while the online training\nenables the agent to perfect the desired skill. Such prior data could either\nconstitute expert demonstrations or sub-optimal prior data that illustrates\npotentially useful transitions. While a number of prior methods have either\nused optimal demonstrations to bootstrap RL, or have used sub-optimal data to\ntrain purely offline, it remains exceptionally difficult to train a policy with\noffline data and actually continue to improve it further with online RL. In\nthis paper we analyze why this problem is so challenging, and propose an\nalgorithm that combines sample efficient dynamic programming with maximum\nlikelihood policy updates, providing a simple and effective framework that is\nable to leverage large amounts of offline data and then quickly perform online\nfine-tuning of RL policies. We show that our method, advantage weighted actor\ncritic (AWAC), enables rapid learning of skills with a combination of prior\ndemonstration data and online experience. We demonstrate these benefits on\nsimulated and real-world robotics domains, including dexterous manipulation\nwith a real multi-fingered hand, drawer opening with a robotic arm, and\nrotating a valve. Our results show that incorporating prior data can reduce the\ntime required to learn a range of robotic skills to practical time-scales.", + "authors": "Ashvin Nair, Abhishek Gupta, Murtaza Dalal, Sergey Levine", + "published": "2020-06-16", + "updated": "2021-04-24", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2111.08066v5", + "title": "Exploiting Action Impact Regularity and Exogenous State Variables for Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning -- learning a policy from a batch of data --\nis known to be hard for general MDPs. These results motivate the need to look\nat specific classes of MDPs where offline reinforcement learning might be\nfeasible. In this work, we explore a restricted class of MDPs to obtain\nguarantees for offline reinforcement learning. The key property, which we call\nAction Impact Regularity (AIR), is that actions primarily impact a part of the\nstate (an endogenous component) and have limited impact on the remaining part\nof the state (an exogenous component). 
AIR is a strong assumption, but it\nnonetheless holds in a number of real-world domains including financial\nmarkets. We discuss algorithms that exploit the AIR property, and provide a\ntheoretical analysis for an algorithm based on Fitted-Q Iteration. Finally, we\ndemonstrate that the algorithm outperforms existing offline reinforcement\nlearning algorithms across different data collection policies in simulated and\nreal world environments where the regularity holds.", + "authors": "Vincent Liu, James R. Wright, Martha White", + "published": "2021-11-15", + "updated": "2023-05-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2106.10411v2", + "title": "Boosting Offline Reinforcement Learning with Residual Generative Modeling", + "abstract": "Offline reinforcement learning (RL) tries to learn the near-optimal policy\nwith recorded offline experience without online exploration. Current offline RL\nresearch includes: 1) generative modeling, i.e., approximating a policy using\nfixed data; and 2) learning the state-action value function. While most\nresearch focuses on the state-action function part through reducing the\nbootstrapping error in value function approximation induced by the distribution\nshift of training data, the effects of error propagation in generative modeling\nhave been neglected. In this paper, we analyze the error in generative\nmodeling. We propose AQL (action-conditioned Q-learning), a residual generative\nmodel to reduce policy approximation error for offline RL. We show that our\nmethod can learn more accurate policy approximations in different benchmark\ndatasets. In addition, we show that the proposed offline RL method can learn\nmore competitive AI agents in complex control tasks under the multiplayer\nonline battle arena (MOBA) game Honor of Kings.", + "authors": "Hua Wei, Deheng Ye, Zhao Liu, Hao Wu, Bo Yuan, Qiang Fu, Wei Yang, Zhenhui Li", + "published": "2021-06-19", + "updated": "2021-06-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "68T01", + "I.2.8; I.2.1" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2010.13611v3", + "title": "OPAL: Offline Primitive Discovery for Accelerating Offline Reinforcement Learning", + "abstract": "Reinforcement learning (RL) has achieved impressive performance in a variety\nof online settings in which an agent's ability to query the environment for\ntransitions and rewards is effectively unlimited. However, in many practical\napplications, the situation is reversed: an agent may have access to large\namounts of undirected offline experience data, while access to the online\nenvironment is severely limited. In this work, we focus on this offline\nsetting. Our main insight is that, when presented with offline data composed of\na variety of behaviors, an effective way to leverage this data is to extract a\ncontinuous space of recurring and temporally extended primitive behaviors\nbefore using these primitives for downstream task learning. Primitives\nextracted in this way serve two purposes: they delineate the behaviors that are\nsupported by the data from those that are not, making them useful for avoiding\ndistributional shift in offline RL; and they provide a degree of temporal\nabstraction, which reduces the effective horizon yielding better learning in\ntheory, and improved offline RL in practice. 
In addition to benefiting offline\npolicy optimization, we show that performing offline primitive learning in this\nway can also be leveraged for improving few-shot imitation learning as well as\nexploration and transfer in online RL on a variety of benchmark domains.\nVisualizations are available at https://sites.google.com/view/opal-iclr", + "authors": "Anurag Ajay, Aviral Kumar, Pulkit Agrawal, Sergey Levine, Ofir Nachum", + "published": "2020-10-26", + "updated": "2021-05-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.05546v1", + "title": "Offline Actor-Critic Reinforcement Learning Scales to Large Models", + "abstract": "We show that offline actor-critic reinforcement learning can scale to large\nmodels - such as transformers - and follows similar scaling laws as supervised\nlearning. We find that offline actor-critic algorithms can outperform strong,\nsupervised, behavioral cloning baselines for multi-task training on a large\ndataset containing both sub-optimal and expert behavior on 132 continuous\ncontrol tasks. We introduce a Perceiver-based actor-critic model and elucidate\nthe key model features needed to make offline RL work with self- and\ncross-attention modules. Overall, we find that: i) simple offline actor critic\nalgorithms are a natural choice for gradually moving away from the currently\npredominant paradigm of behavioral cloning, and ii) via offline RL it is\npossible to learn multi-task policies that master many domains simultaneously,\nincluding real robotics tasks, from sub-optimal demonstrations or\nself-generated data.", + "authors": "Jost Tobias Springenberg, Abbas Abdolmaleki, Jingwei Zhang, Oliver Groth, Michael Bloesch, Thomas Lampe, Philemon Brakel, Sarah Bechtle, Steven Kapturowski, Roland Hafner, Nicolas Heess, Martin Riedmiller", + "published": "2024-02-08", + "updated": "2024-02-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2211.07614v1", + "title": "Towards Data-Driven Offline Simulations for Online Reinforcement Learning", + "abstract": "Modern decision-making systems, from robots to web recommendation engines,\nare expected to adapt: to user preferences, changing circumstances or even new\ntasks. Yet, it is still uncommon to deploy a dynamically learning agent (rather\nthan a fixed policy) to a production system, as it's perceived as unsafe. Using\nhistorical data to reason about learning algorithms, similar to offline policy\nevaluation (OPE) applied to fixed policies, could help practitioners evaluate\nand ultimately deploy such adaptive agents to production. In this work, we\nformalize offline learner simulation (OLS) for reinforcement learning (RL) and\npropose a novel evaluation protocol that measures both fidelity and efficiency\nof the simulation. For environments with complex high-dimensional observations,\nwe propose a semi-parametric approach that leverages recent advances in latent\nstate discovery in order to achieve accurate and efficient offline simulations.\nIn preliminary experiments, we show the advantage of our approach compared to\nfully non-parametric baselines. 
The code to reproduce these experiments will be\nmade available at https://github.com/microsoft/rl-offline-simulation.", + "authors": "Shengpu Tang, Felipe Vieira Frujeri, Dipendra Misra, Alex Lamb, John Langford, Paul Mineiro, Sebastian Kochman", + "published": "2022-11-14", + "updated": "2022-11-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.07166v1", + "title": "Regularizing a Model-based Policy Stationary Distribution to Stabilize Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) extends the paradigm of classical RL\nalgorithms to purely learning from static datasets, without interacting with\nthe underlying environment during the learning process. A key challenge of\noffline RL is the instability of policy training, caused by the mismatch\nbetween the distribution of the offline data and the undiscounted stationary\nstate-action distribution of the learned policy. To avoid the detrimental\nimpact of distribution mismatch, we regularize the undiscounted stationary\ndistribution of the current policy towards the offline data during the policy\noptimization process. Further, we train a dynamics model to both implement this\nregularization and better estimate the stationary distribution of the current\npolicy, reducing the error induced by distribution mismatch. On a wide range of\ncontinuous-control offline RL datasets, our method indicates competitive\nperformance, which validates our algorithm. The code is publicly available.", + "authors": "Shentao Yang, Yihao Feng, Shujian Zhang, Mingyuan Zhou", + "published": "2022-06-14", + "updated": "2022-06-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2203.06662v1", + "title": "DARA: Dynamics-Aware Reward Augmentation in Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning algorithms promise to be applicable in\nsettings where a fixed dataset is available and no new experience can be\nacquired. However, such formulation is inevitably offline-data-hungry and, in\npractice, collecting a large offline dataset for one specific task over one\nspecific environment is also costly and laborious. In this paper, we thus 1)\nformulate the offline dynamics adaptation by using (source) offline data\ncollected from another dynamics to relax the requirement for the extensive\n(target) offline data, 2) characterize the dynamics shift problem in which\nprior offline methods do not scale well, and 3) derive a simple dynamics-aware\nreward augmentation (DARA) framework from both model-free and model-based\noffline settings. Specifically, DARA emphasizes learning from those source\ntransition pairs that are adaptive for the target environment and mitigates the\noffline dynamics shift by characterizing state-action-next-state pairs instead\nof the typical state-action distribution sketched by prior offline RL methods.\nThe experimental evaluation demonstrates that DARA, by augmenting rewards in\nthe source offline dataset, can acquire an adaptive policy for the target\nenvironment and yet significantly reduce the requirement of target offline\ndata. 
With only modest amounts of target offline data, our performance\nconsistently outperforms the prior offline RL methods in both simulated and\nreal-world tasks.", + "authors": "Jinxin Liu, Hongyin Zhang, Donglin Wang", + "published": "2022-03-13", + "updated": "2022-03-13", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.08569v2", + "title": "Bootstrapped Transformer for Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) aims at learning policies from previously\ncollected static trajectory data without interacting with the real environment.\nRecent works provide a novel perspective by viewing offline RL as a generic\nsequence generation problem, adopting sequence models such as Transformer\narchitecture to model distributions over trajectories, and repurposing beam\nsearch as a planning algorithm. However, the training datasets utilized in\ngeneral offline RL tasks are quite limited and often suffer from insufficient\ndistribution coverage, which could be harmful to training sequence generation\nmodels yet has not drawn enough attention in the previous works. In this paper,\nwe propose a novel algorithm named Bootstrapped Transformer, which incorporates\nthe idea of bootstrapping and leverages the learned model to self-generate more\noffline data to further boost the sequence model training. We conduct extensive\nexperiments on two offline RL benchmarks and demonstrate that our model can\nlargely remedy the existing offline RL training limitations and beat other\nstrong baseline methods. We also analyze the generated pseudo data and the\nrevealed characteristics may shed some light on offline RL training. The codes\nare available at https://seqml.github.io/bootorl.", + "authors": "Kerong Wang, Hanye Zhao, Xufang Luo, Kan Ren, Weinan Zhang, Dongsheng Li", + "published": "2022-06-17", + "updated": "2022-10-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2110.09796v1", + "title": "Offline Reinforcement Learning with Value-based Episodic Memory", + "abstract": "Offline reinforcement learning (RL) shows promise of applying RL to\nreal-world problems by effectively utilizing previously collected data. Most\nexisting offline RL algorithms use regularization or constraints to suppress\nextrapolation error for actions outside the dataset. In this paper, we adopt a\ndifferent framework, which learns the V-function instead of the Q-function to\nnaturally keep the learning procedure within the support of an offline dataset.\nTo enable effective generalization while maintaining proper conservatism in\noffline learning, we propose Expectile V-Learning (EVL), which smoothly\ninterpolates between the optimal value learning and behavior cloning. Further,\nwe introduce implicit planning along offline trajectories to enhance learned\nV-values and accelerate convergence. Together, we present a new offline method\ncalled Value-based Episodic Memory (VEM). 
We provide theoretical analysis for\nthe convergence properties of our proposed VEM method, and empirical results in\nthe D4RL benchmark show that our method achieves superior performance in most\ntasks, particularly in sparse-reward tasks.", + "authors": "Xiaoteng Ma, Yiqin Yang, Hao Hu, Qihan Liu, Jun Yang, Chongjie Zhang, Qianchuan Zhao, Bin Liang", + "published": "2021-10-19", + "updated": "2021-10-19", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2308.11336v1", + "title": "On the Opportunities and Challenges of Offline Reinforcement Learning for Recommender Systems", + "abstract": "Reinforcement learning serves as a potent tool for modeling dynamic user\ninterests within recommender systems, garnering increasing research attention\nof late. However, a significant drawback persists: its poor data efficiency,\nstemming from its interactive nature. The training of reinforcement\nlearning-based recommender systems demands expensive online interactions to\namass adequate trajectories, essential for agents to learn user preferences.\nThis inefficiency renders reinforcement learning-based recommender systems a\nformidable undertaking, necessitating the exploration of potential solutions.\nRecent strides in offline reinforcement learning present a new perspective.\nOffline reinforcement learning empowers agents to glean insights from offline\ndatasets and deploy learned policies in online settings. Given that recommender\nsystems possess extensive offline datasets, the framework of offline\nreinforcement learning aligns seamlessly. Despite being a burgeoning field,\nworks centered on recommender systems utilizing offline reinforcement learning\nremain limited. This survey aims to introduce and delve into offline\nreinforcement learning within recommender systems, offering an inclusive review\nof existing literature in this domain. Furthermore, we strive to underscore\nprevalent challenges, opportunities, and future pathways, poised to propel\nresearch in this evolving field.", + "authors": "Xiaocong Chen, Siyu Wang, Julian McAuley, Dietmar Jannach, Lina Yao", + "published": "2023-08-22", + "updated": "2023-08-22", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2211.08016v1", + "title": "Contextual Transformer for Offline Meta Reinforcement Learning", + "abstract": "The pretrain-finetuning paradigm in large-scale sequence models has made\nsignificant progress in natural language processing and computer vision tasks.\nHowever, such a paradigm is still hindered by several challenges in\nReinforcement Learning (RL), including the lack of self-supervised pretraining\nalgorithms based on offline data and efficient fine-tuning/prompt-tuning over\nunseen downstream tasks. In this work, we explore how prompts can improve\nsequence modeling-based offline reinforcement learning (offline-RL) algorithms.\nFirstly, we propose prompt tuning for offline RL, where a context vector\nsequence is concatenated with the input to guide the conditional policy\ngeneration. As such, we can pretrain a model on the offline dataset with\nself-supervised loss and learn a prompt to guide the policy towards desired\nactions. 
Secondly, we extend our framework to Meta-RL settings and propose\nContextual Meta Transformer (CMT); CMT leverages the context among different\ntasks as the prompt to improve generalization on unseen tasks. We conduct\nextensive experiments across three different offline-RL settings: offline\nsingle-agent RL on the D4RL dataset, offline Meta-RL on the MuJoCo benchmark,\nand offline MARL on the SMAC benchmark. Superior results validate the strong\nperformance and generality of our methods.", + "authors": "Runji Lin, Ye Li, Xidong Feng, Zhaowei Zhang, Xian Hong Wu Fung, Haifeng Zhang, Jun Wang, Yali Du, Yaodong Yang", + "published": "2022-11-15", + "updated": "2022-11-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2112.02845v3", + "title": "Offline Pre-trained Multi-Agent Decision Transformer: One Big Sequence Model Tackles All SMAC Tasks", + "abstract": "Offline reinforcement learning leverages previously-collected offline\ndatasets to learn optimal policies with no necessity to access the real\nenvironment. Such a paradigm is also desirable for multi-agent reinforcement\nlearning (MARL) tasks, given the increased interactions among agents and with\nthe environment. Yet, in MARL, the paradigm of offline pre-training with online\nfine-tuning has not been studied, nor are datasets or benchmarks for offline MARL\nresearch available. In this paper, we facilitate the research by providing\nlarge-scale datasets, and use them to examine the usage of the Decision\nTransformer in the context of MARL. We investigate the generalisation of MARL\noffline pre-training in the following three aspects: 1) between single agents\nand multiple agents, 2) from offline pretraining to online fine-tuning, and\n3) to multiple downstream tasks with few-shot and zero-shot\ncapabilities. We start by introducing the first offline MARL dataset with\ndiverse quality levels based on the StarCraftII environment, and then propose\nthe novel architecture of multi-agent decision transformer (MADT) for effective\noffline learning. MADT leverages the Transformer's sequence modelling ability and\nintegrates it seamlessly with both offline and online MARL tasks.\nA crucial benefit of MADT is that it learns generalisable policies that can\ntransfer between different types of agents under different task scenarios. On\nthe StarCraft II offline dataset, MADT outperforms the state-of-the-art offline RL\nbaselines. When applied to online tasks, the pre-trained MADT significantly\nimproves sample efficiency, and enjoys strong performance in both few-shot and\nzero-shot cases. To the best of our knowledge, this is the first work that studies and\ndemonstrates the effectiveness of offline pre-trained models in terms of sample\nefficiency and generalisability enhancements in MARL.", + "authors": "Linghui Meng, Muning Wen, Yaodong Yang, Chenyang Le, Xiyun Li, Weinan Zhang, Ying Wen, Haifeng Zhang, Jun Wang, Bo Xu", + "published": "2021-12-06", + "updated": "2022-06-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.MA" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2303.17156v2", + "title": "MAHALO: Unifying Offline Reinforcement Learning and Imitation Learning from Observations", + "abstract": "We study a new paradigm for sequential decision making, called offline policy\nlearning from observations (PLfO). 
Offline PLfO aims to learn policies using\ndatasets with substandard qualities: 1) only a subset of trajectories is\nlabeled with rewards, 2) labeled trajectories may not contain actions, 3)\nlabeled trajectories may not be of high quality, and 4) the data may not have\nfull coverage. Such imperfection is common in real-world learning scenarios,\nand offline PLfO encompasses many existing offline learning setups, including\noffline imitation learning (IL), offline IL from observations (ILfO), and\noffline reinforcement learning (RL). In this work, we present a generic\napproach to offline PLfO, called $\\textbf{M}$odality-agnostic\n$\\textbf{A}$dversarial $\\textbf{H}$ypothesis $\\textbf{A}$daptation for\n$\\textbf{L}$earning from $\\textbf{O}$bservations (MAHALO). Built upon the\npessimism concept in offline RL, MAHALO optimizes the policy using a\nperformance lower bound that accounts for uncertainty due to the dataset's\ninsufficient coverage. We implement this idea by adversarially training\ndata-consistent critic and reward functions, which forces the learned policy to\nbe robust to data deficiency. We show that MAHALO consistently outperforms or\nmatches specialized algorithms across a variety of offline PLfO tasks in theory\nand experiments. Our code is available at https://github.com/AnqiLi/mahalo.", + "authors": "Anqi Li, Byron Boots, Ching-An Cheng", + "published": "2023-03-30", + "updated": "2023-08-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.10813v2", + "title": "A Workflow for Offline Model-Free Robotic Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) enables learning control policies by\nutilizing only prior experience, without any online interaction. This can allow\nrobots to acquire generalizable skills from large and diverse datasets, without\nany costly or unsafe online data collection. Despite recent algorithmic\nadvances in offline RL, applying these methods to real-world problems has\nproven challenging. Although offline RL methods can learn from prior data,\nthere is no clear and well-understood process for making various design\nchoices, from model architecture to algorithm hyperparameters, without actually\nevaluating the learned policies online. In this paper, our aim is to develop a\npractical workflow for using offline RL analogous to the relatively\nwell-understood workflows for supervised learning problems. To this end, we\ndevise a set of metrics and conditions that can be tracked over the course of\noffline training, and can inform the practitioner about how the algorithm and\nmodel architecture should be adjusted to improve final performance. 
Our\nworkflow is derived from a conceptual understanding of the behavior of\nconservative offline RL algorithms and cross-validation in supervised learning.\nWe demonstrate the efficacy of this workflow in producing effective policies\nwithout any online tuning, both in several simulated robotic learning scenarios\nand for three tasks on two distinct real robots, focusing on learning\nmanipulation skills with raw image observations with sparse binary rewards.\nExplanatory video and additional results can be found at\nsites.google.com/view/offline-rl-workflow", + "authors": "Aviral Kumar, Anikait Singh, Stephen Tian, Chelsea Finn, Sergey Levine", + "published": "2021-09-22", + "updated": "2021-09-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.04779v3", + "title": "Challenges and Opportunities in Offline Reinforcement Learning from Visual Observations", + "abstract": "Offline reinforcement learning has shown great promise in leveraging large\npre-collected datasets for policy learning, allowing agents to forgo\noften-expensive online data collection. However, offline reinforcement learning\nfrom visual observations with continuous action spaces remains under-explored,\nwith a limited understanding of the key challenges in this complex domain. In\nthis paper, we establish simple baselines for continuous control in the visual\ndomain and introduce a suite of benchmarking tasks for offline reinforcement\nlearning from visual observations designed to better represent the data\ndistributions present in real-world offline RL problems and guided by a set of\ndesiderata for offline RL from visual observations, including robustness to\nvisual distractions and visually identifiable changes in dynamics. Using this\nsuite of benchmarking tasks, we show that simple modifications to two popular\nvision-based online reinforcement learning algorithms, DreamerV2 and DrQ-v2,\nsuffice to outperform existing offline RL methods and establish competitive\nbaselines for continuous control in the visual domain. We rigorously evaluate\nthese algorithms and perform an empirical evaluation of the differences between\nstate-of-the-art model-based and model-free offline RL methods for continuous\ncontrol from visual observations. All code and data used in this evaluation are\nopen-sourced to facilitate progress in this domain.", + "authors": "Cong Lu, Philip J. Ball, Tim G. J. Rudner, Jack Parker-Holder, Michael A. Osborne, Yee Whye Teh", + "published": "2022-06-09", + "updated": "2023-07-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2102.05612v1", + "title": "Personalization for Web-based Services using Offline Reinforcement Learning", + "abstract": "Large-scale Web-based services present opportunities for improving UI\npolicies based on observed user interactions. We address challenges of learning\nsuch policies through model-free offline Reinforcement Learning (RL) with\noff-policy training. Deployed in a production system for user authentication in\na major social network, it significantly improves long-term objectives. 
We\narticulate practical challenges, compare several ML techniques, provide\ninsights on training and evaluation of RL models, and discuss generalizations.", + "authors": "Pavlos Athanasios Apostolopoulos, Zehui Wang, Hanson Wang, Chad Zhou, Kittipat Virochsiri, Norm Zhou, Igor L. Markov", + "published": "2021-02-10", + "updated": "2021-02-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.HC", + "cs.SE" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.06871v3", + "title": "Improving Offline-to-Online Reinforcement Learning with Q-Ensembles", + "abstract": "Offline reinforcement learning (RL) is a learning paradigm where an agent\nlearns from a fixed dataset of experience. However, learning solely from a\nstatic dataset can limit the performance due to the lack of exploration. To\novercome it, offline-to-online RL combines offline pre-training with online\nfine-tuning, which enables the agent to further refine its policy by\ninteracting with the environment in real-time. Despite its benefits, existing\noffline-to-online RL methods suffer from performance degradation and slow\nimprovement during the online phase. To tackle these challenges, we propose a\nnovel framework called Ensemble-based Offline-to-Online (E2O) RL. By increasing\nthe number of Q-networks, we seamlessly bridge offline pre-training and online\nfine-tuning without degrading performance. Moreover, to expedite online\nperformance enhancement, we appropriately loosen the pessimism of Q-value\nestimation and incorporate ensemble-based exploration mechanisms into our\nframework. Experimental results demonstrate that E2O can substantially improve\nthe training stability, learning efficiency, and final performance of existing\noffline RL methods during online fine-tuning on a range of locomotion and\nnavigation tasks, significantly outperforming existing offline-to-online RL\nmethods.", + "authors": "Kai Zhao, Yi Ma, Jianye Hao, Jinyi Liu, Yan Zheng, Zhaopeng Meng", + "published": "2023-06-12", + "updated": "2023-12-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2202.11566v1", + "title": "Pessimistic Bootstrapping for Uncertainty-Driven Offline Reinforcement Learning", + "abstract": "Offline Reinforcement Learning (RL) aims to learn policies from previously\ncollected datasets without exploring the environment. Directly applying\noff-policy algorithms to offline RL usually fails due to the extrapolation\nerror caused by the out-of-distribution (OOD) actions. Previous methods tackle\nsuch problem by penalizing the Q-values of OOD actions or constraining the\ntrained policy to be close to the behavior policy. Nevertheless, such methods\ntypically prevent the generalization of value functions beyond the offline data\nand also lack precise characterization of OOD data. In this paper, we propose\nPessimistic Bootstrapping for offline RL (PBRL), a purely uncertainty-driven\noffline algorithm without explicit policy constraints. Specifically, PBRL\nconducts uncertainty quantification via the disagreement of bootstrapped\nQ-functions, and performs pessimistic updates by penalizing the value function\nbased on the estimated uncertainty. To tackle the extrapolating error, we\nfurther propose a novel OOD sampling method. 
We show that such OOD sampling and\npessimistic bootstrapping yields provable uncertainty quantifier in linear\nMDPs, thus providing the theoretical underpinning for PBRL. Extensive\nexperiments on D4RL benchmark show that PBRL has better performance compared to\nthe state-of-the-art algorithms.", + "authors": "Chenjia Bai, Lingxiao Wang, Zhuoran Yang, Zhihong Deng, Animesh Garg, Peng Liu, Zhaoran Wang", + "published": "2022-02-23", + "updated": "2022-02-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.08331v1", + "title": "Accelerating Offline Reinforcement Learning Application in Real-Time Bidding and Recommendation: Potential Use of Simulation", + "abstract": "In recommender systems (RecSys) and real-time bidding (RTB) for online\nadvertisements, we often try to optimize sequential decision making using\nbandit and reinforcement learning (RL) techniques. In these applications,\noffline reinforcement learning (offline RL) and off-policy evaluation (OPE) are\nbeneficial because they enable safe policy optimization using only logged data\nwithout any risky online interaction. In this position paper, we explore the\npotential of using simulation to accelerate practical research of offline RL\nand OPE, particularly in RecSys and RTB. Specifically, we discuss how\nsimulation can help us conduct empirical research of offline RL and OPE. We\ntake a position to argue that we should effectively use simulations in the\nempirical research of offline RL and OPE. To refute the counterclaim that\nexperiments using only real-world data are preferable, we first point out the\nunderlying risks and reproducibility issue in real-world experiments. Then, we\ndescribe how these issues can be addressed by using simulations. Moreover, we\nshow how to incorporate the benefits of both real-world and simulation-based\nexperiments to defend our position. Finally, we also present an open challenge\nto further facilitate practical research of offline RL and OPE in RecSys and\nRTB, with respect to public simulation platforms. As a possible solution for\nthe issue, we show our ongoing open source project and its potential use case.\nWe believe that building and utilizing simulation-based evaluation platforms\nfor offline RL and OPE will be of great interest and relevance for the RecSys\nand RTB community.", + "authors": "Haruka Kiyohara, Kosuke Kawakami, Yuta Saito", + "published": "2021-09-17", + "updated": "2021-09-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.18617v1", + "title": "ELA: Exploited Level Augmentation for Offline Learning in Zero-Sum Games", + "abstract": "Offline learning has become widely used due to its ability to derive\neffective policies from offline datasets gathered by expert demonstrators\nwithout interacting with the environment directly. Recent research has explored\nvarious ways to enhance offline learning efficiency by considering the\ncharacteristics (e.g., expertise level or multiple demonstrators) of the\ndataset. 
However, a different approach is necessary in the context of zero-sum\ngames, where outcomes vary significantly based on the strategy of the opponent.\nIn this study, we introduce a novel approach that uses unsupervised learning\ntechniques to estimate the exploited level of each trajectory from the offline\ndataset of zero-sum games made by diverse demonstrators. Subsequently, we\nincorporate the estimated exploited level into the offline learning to maximize\nthe influence of the dominant strategy. Our method enables interpretable\nexploited level estimation in multiple zero-sum games and effectively\nidentifies dominant strategy data. Also, our exploited level augmented offline\nlearning significantly enhances the original offline learning algorithms,\nincluding imitation learning and offline reinforcement learning for zero-sum\ngames.", + "authors": "Shiqi Lei, Kanghoon Lee, Linjing Li, Jinkyoo Park, Jiachen Li", + "published": "2024-02-28", + "updated": "2024-02-28", + "primary_cat": "cs.GT", + "cats": [ + "cs.GT", + "cs.AI", + "cs.LG", + "cs.MA" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2311.03351v4", + "title": "Uni-O4: Unifying Online and Offline Deep Reinforcement Learning with Multi-Step On-Policy Optimization", + "abstract": "Combining offline and online reinforcement learning (RL) is crucial for\nefficient and safe learning. However, previous approaches treat offline and\nonline learning as separate procedures, resulting in redundant designs and\nlimited performance. We ask: Can we achieve straightforward yet effective\noffline and online learning without introducing extra conservatism or\nregularization? In this study, we propose Uni-o4, which utilizes an on-policy\nobjective for both offline and online learning. Owing to the alignment of\nobjectives in the two phases, the RL agent can transfer between offline and online\nlearning seamlessly. This property enhances the flexibility of the learning\nparadigm, allowing for arbitrary combinations of pretraining, fine-tuning,\noffline, and online learning. In the offline phase, specifically, Uni-o4\nleverages diverse ensemble policies to address the mismatch issues between the\nestimated behavior policy and the offline dataset. Through a simple offline\npolicy evaluation (OPE) approach, Uni-o4 can achieve multi-step policy\nimprovement safely. We demonstrate that by employing the method above, the\nfusion of these two paradigms can yield superior offline initialization as well\nas stable and rapid online fine-tuning capabilities. Through real-world robot\ntasks, we highlight the benefits of this paradigm for rapid deployment in\nchallenging, previously unseen real-world environments. Additionally, through\ncomprehensive evaluations using numerous simulated benchmarks, we substantiate\nthat our method achieves state-of-the-art performance in both offline and\noffline-to-online fine-tuning learning. 
Our website:\nhttps://lei-kun.github.io/uni-o4/ .", + "authors": "Kun Lei, Zhengmao He, Chenhao Lu, Kaizhe Hu, Yang Gao, Huazhe Xu", + "published": "2023-11-06", + "updated": "2024-03-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2309.12716v1", + "title": "H2O+: An Improved Framework for Hybrid Offline-and-Online RL with Dynamics Gaps", + "abstract": "Solving real-world complex tasks using reinforcement learning (RL) without\nhigh-fidelity simulation environments or large amounts of offline data can be\nquite challenging. Online RL agents trained in imperfect simulation\nenvironments can suffer from severe sim-to-real issues. Offline RL approaches\nalthough bypass the need for simulators, often pose demanding requirements on\nthe size and quality of the offline datasets. The recently emerged hybrid\noffline-and-online RL provides an attractive framework that enables joint use\nof limited offline data and imperfect simulator for transferable policy\nlearning. In this paper, we develop a new algorithm, called H2O+, which offers\ngreat flexibility to bridge various choices of offline and online learning\nmethods, while also accounting for dynamics gaps between the real and\nsimulation environment. Through extensive simulation and real-world robotics\nexperiments, we demonstrate superior performance and flexibility over advanced\ncross-domain online and offline RL algorithms.", + "authors": "Haoyi Niu, Tianying Ji, Bingqi Liu, Haocheng Zhao, Xiangyu Zhu, Jianying Zheng, Pengfei Huang, Guyue Zhou, Jianming Hu, Xianyuan Zhan", + "published": "2023-09-22", + "updated": "2023-09-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2005.11142v1", + "title": "Two-stage Deep Reinforcement Learning for Inverter-based Volt-VAR Control in Active Distribution Networks", + "abstract": "Model-based Vol/VAR optimization method is widely used to eliminate voltage\nviolations and reduce network losses. However, the parameters of active\ndistribution networks(ADNs) are not onsite identified, so significant errors\nmay be involved in the model and make the model-based method infeasible. To\ncope with this critical issue, we propose a novel two-stage deep reinforcement\nlearning (DRL) method to improve the voltage profile by regulating\ninverter-based energy resources, which consists of offline stage and online\nstage. In the offline stage, a highly efficient adversarial reinforcement\nlearning algorithm is developed to train an offline agent robust to the model\nmismatch. In the sequential online stage, we transfer the offline agent safely\nas the online agent to perform continuous learning and controlling online with\nsignificantly improved safety and efficiency. 
Numerical simulations on IEEE\ntest cases not only demonstrate that the proposed adversarial reinforcement\nlearning algorithm outperforms the state-of-art algorithm, but also show that\nour proposed two-stage method achieves much better performance than the\nexisting DRL based methods in the online application.", + "authors": "Haotian Liu, Wenchuan Wu", + "published": "2020-05-20", + "updated": "2020-05-20", + "primary_cat": "eess.SY", + "cats": [ + "eess.SY", + "cs.LG", + "cs.SY", + "J.7; C.3" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2305.16217v2", + "title": "Beyond Reward: Offline Preference-guided Policy Optimization", + "abstract": "This study focuses on the topic of offline preference-based reinforcement\nlearning (PbRL), a variant of conventional reinforcement learning that\ndispenses with the need for online interaction or specification of reward\nfunctions. Instead, the agent is provided with fixed offline trajectories and\nhuman preferences between pairs of trajectories to extract the dynamics and\ntask information, respectively. Since the dynamics and task information are\northogonal, a naive approach would involve using preference-based reward\nlearning followed by an off-the-shelf offline RL algorithm. However, this\nrequires the separate learning of a scalar reward function, which is assumed to\nbe an information bottleneck of the learning process. To address this issue, we\npropose the offline preference-guided policy optimization (OPPO) paradigm,\nwhich models offline trajectories and preferences in a one-step process,\neliminating the need for separately learning a reward function. OPPO achieves\nthis by introducing an offline hindsight information matching objective for\noptimizing a contextual policy and a preference modeling objective for finding\nthe optimal context. OPPO further integrates a well-performing decision policy\nby optimizing the two objectives iteratively. Our empirical results demonstrate\nthat OPPO effectively models offline preferences and outperforms prior\ncompeting baselines, including offline RL algorithms performed over either true\nor pseudo reward function specifications. Our code is available on the project\nwebsite: https://sites.google.com/view/oppo-icml-2023 .", + "authors": "Yachen Kang, Diyuan Shi, Jinxin Liu, Li He, Donglin Wang", + "published": "2023-05-25", + "updated": "2023-06-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.02000v1", + "title": "Hybrid Value Estimation for Off-policy Evaluation and Offline Reinforcement Learning", + "abstract": "Value function estimation is an indispensable subroutine in reinforcement\nlearning, which becomes more challenging in the offline setting. In this paper,\nwe propose Hybrid Value Estimation (HVE) to reduce value estimation error,\nwhich trades off bias and variance by balancing between the value estimation\nfrom offline data and the learned model. Theoretical analysis discloses that\nHVE enjoys a better error bound than the direct methods. HVE can be leveraged\nin both off-policy evaluation and offline reinforcement learning settings. We,\ntherefore, provide two concrete algorithms Off-policy HVE (OPHVE) and\nModel-based Offline HVE (MOHVE), respectively. Empirical evaluations on MuJoCo\ntasks corroborate the theoretical claim. 
OPHVE outperforms other off-policy\nevaluation methods in all three metrics measuring the estimation effectiveness,\nwhile MOHVE achieves better or comparable performance with state-of-the-art\noffline reinforcement learning algorithms. We hope that HVE could shed some\nlight on further research on reinforcement learning from fixed data.", + "authors": "Xue-Kun Jin, Xu-Hui Liu, Shengyi Jiang, Yang Yu", + "published": "2022-06-04", + "updated": "2022-06-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2312.09844v2", + "title": "Small Dataset, Big Gains: Enhancing Reinforcement Learning by Offline Pre-Training with Model Based Augmentation", + "abstract": "Offline reinforcement learning leverages pre-collected datasets of\ntransitions to train policies. It can serve as effective initialization for\nonline algorithms, enhancing sample efficiency and speeding up convergence.\nHowever, when such datasets are limited in size and quality, offline\npre-training can produce sub-optimal policies and lead to degraded online\nreinforcement learning performance. In this paper we propose a model-based data\naugmentation strategy to maximize the benefits of offline reinforcement\nlearning pre-training and reduce the scale of data needed to be effective. Our\napproach leverages a world model of the environment trained on the offline\ndataset to augment states during offline pre-training. We evaluate our approach\non a variety of MuJoCo robotic tasks and our results show it can jump-start\nonline fine-tuning and substantially reduce - in some cases by an order of\nmagnitude - the required number of environment interactions.", + "authors": "Girolamo Macaluso, Alessandro Sestini, Andrew D. Bagdanov", + "published": "2023-12-15", + "updated": "2023-12-19", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2201.10070v1", + "title": "MOORe: Model-based Offline-to-Online Reinforcement Learning", + "abstract": "With the success of offline reinforcement learning (RL), offline trained RL\npolicies have the potential to be further improved when deployed online. A\nsmooth transfer of the policy matters in safe real-world deployment. Besides,\nfast adaptation of the policy plays a vital role in practical online\nperformance improvement. To tackle these challenges, we propose a simple yet\nefficient algorithm, Model-based Offline-to-Online Reinforcement learning\n(MOORe), which employs a prioritized sampling scheme that can dynamically\nadjust the offline and online data for smooth and efficient online adaptation\nof the policy. 
We provide a theoretical foundation for our algorithms design.\nExperiment results on the D4RL benchmark show that our algorithm smoothly\ntransfers from offline to online stages while enabling sample-efficient online\nadaption, and also significantly outperforms existing methods.", + "authors": "Yihuan Mao, Chao Wang, Bin Wang, Chongjie Zhang", + "published": "2022-01-25", + "updated": "2022-01-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2011.13885v1", + "title": "Offline Learning from Demonstrations and Unlabeled Experience", + "abstract": "Behavior cloning (BC) is often practical for robot learning because it allows\na policy to be trained offline without rewards, by supervised learning on\nexpert demonstrations. However, BC does not effectively leverage what we will\nrefer to as unlabeled experience: data of mixed and unknown quality without\nreward annotations. This unlabeled data can be generated by a variety of\nsources such as human teleoperation, scripted policies and other agents on the\nsame robot. Towards data-driven offline robot learning that can use this\nunlabeled experience, we introduce Offline Reinforced Imitation Learning\n(ORIL). ORIL first learns a reward function by contrasting observations from\ndemonstrator and unlabeled trajectories, then annotates all data with the\nlearned reward, and finally trains an agent via offline reinforcement learning.\nAcross a diverse set of continuous control and simulated robotic manipulation\ntasks, we show that ORIL consistently outperforms comparable BC agents by\neffectively leveraging unlabeled experience.", + "authors": "Konrad Zolna, Alexander Novikov, Ksenia Konyushkova, Caglar Gulcehre, Ziyu Wang, Yusuf Aytar, Misha Denil, Nando de Freitas, Scott Reed", + "published": "2020-11-27", + "updated": "2020-11-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.10442v1", + "title": "Robust Task Representations for Offline Meta-Reinforcement Learning via Contrastive Learning", + "abstract": "We study offline meta-reinforcement learning, a practical reinforcement\nlearning paradigm that learns from offline data to adapt to new tasks. The\ndistribution of offline data is determined jointly by the behavior policy and\nthe task. Existing offline meta-reinforcement learning algorithms cannot\ndistinguish these factors, making task representations unstable to the change\nof behavior policies. To address this problem, we propose a contrastive\nlearning framework for task representations that are robust to the distribution\nmismatch of behavior policies in training and test. We design a bi-level\nencoder structure, use mutual information maximization to formalize task\nrepresentation learning, derive a contrastive learning objective, and introduce\nseveral approaches to approximate the true distribution of negative pairs.\nExperiments on a variety of offline meta-reinforcement learning benchmarks\ndemonstrate the advantages of our method over prior methods, especially on the\ngeneralization to out-of-distribution behavior policies. 
The code is available\nat https://github.com/PKU-AI-Edge/CORRO.", + "authors": "Haoqi Yuan, Zongqing Lu", + "published": "2022-06-21", + "updated": "2022-06-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2404.10393v1", + "title": "Offline Trajectory Generalization for Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) aims to learn policies from static\ndatasets of previously collected trajectories. Existing methods for offline RL\neither constrain the learned policy to the support of offline data or utilize\nmodel-based virtual environments to generate simulated rollouts. However, these\nmethods suffer from (i) poor generalization to unseen states; and (ii) trivial\nimprovement from low-quality rollout simulation. In this paper, we propose\noffline trajectory generalization through world transformers for offline\nreinforcement learning (OTTO). Specifically, we use causal Transformers, a.k.a.\nWorld Transformers, to predict state dynamics and the immediate reward. Then we\npropose four strategies to use World Transformers to generate high-rewarded\ntrajectory simulation by perturbing the offline data. Finally, we jointly use\noffline data with simulated data to train an offline RL algorithm. OTTO serves\nas a plug-in module and can be integrated with existing offline RL methods to\nenhance them with better generalization capability of transformers and\nhigh-rewarded data augmentation. Conducting extensive experiments on D4RL\nbenchmark datasets, we verify that OTTO significantly outperforms\nstate-of-the-art offline RL methods.", + "authors": "Ziqi Zhao, Zhaochun Ren, Liu Yang, Fajie Yuan, Pengjie Ren, Zhumin Chen, jun Ma, Xin Xin", + "published": "2024-04-16", + "updated": "2024-04-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2107.01757v1", + "title": "The Least Restriction for Offline Reinforcement Learning", + "abstract": "Many practical applications of reinforcement learning (RL) constrain the\nagent to learn from a fixed offline dataset of logged interactions, which has\nalready been gathered, without offering further possibility for data\ncollection. However, commonly used off-policy RL algorithms, such as the Deep Q\nNetwork and the Deep Deterministic Policy Gradient, are incapable of learning\nwithout data correlated to the distribution under the current policy, making\nthem ineffective for this offline setting. As the first step towards useful\noffline RL algorithms, we analyze the reason for the instability of standard\noff-policy RL algorithms: it is due to the bootstrapping error. The key to\navoiding this error is ensuring that the agent's action space does not go out\nof the fixed offline dataset. Based on this consideration, a creative offline RL\nframework, the Least Restriction (LR), is proposed in this paper. The LR\nregards selecting an action as taking a sample from the probability\ndistribution. It merely sets a small limit on action selection, which not only\navoids the action being out of the offline dataset but also removes all the\nunreasonable restrictions in earlier approaches (e.g. Batch-Constrained Deep\nQ-Learning). 
Furthermore, we demonstrate that the LR is able to learn\nrobustly from different offline datasets, including random and suboptimal\ndemonstrations, on a range of practical control tasks.", + "authors": "Zizhou Su", + "published": "2021-07-05", + "updated": "2021-07-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.03788v2", + "title": "d3rlpy: An Offline Deep Reinforcement Learning Library", + "abstract": "In this paper, we introduce d3rlpy, an open-sourced offline deep\nreinforcement learning (RL) library for Python. d3rlpy supports a set of\noffline deep RL algorithms as well as off-policy online algorithms via a fully\ndocumented plug-and-play API. To address a reproducibility issue, we conduct a\nlarge-scale benchmark with the D4RL and Atari 2600 datasets to ensure implementation\nquality and provide experimental scripts and full tables of results. The d3rlpy\nsource code can be found on GitHub: \\url{https://github.com/takuseno/d3rlpy}.", + "authors": "Takuma Seno, Michita Imai", + "published": "2021-11-06", + "updated": "2022-12-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2202.02929v2", + "title": "Model-Based Offline Meta-Reinforcement Learning with Regularization", + "abstract": "Existing offline reinforcement learning (RL) methods face a few major\nchallenges, particularly the distributional shift between the learned policy\nand the behavior policy. Offline Meta-RL is emerging as a promising approach to\naddress these challenges, aiming to learn an informative meta-policy from a\ncollection of tasks. Nevertheless, as shown in our empirical studies, offline\nMeta-RL could be outperformed by offline single-task RL methods on tasks with\ngood-quality datasets, indicating that a right balance has to be delicately\ncalibrated between \"exploring\" the out-of-distribution state-actions by\nfollowing the meta-policy and \"exploiting\" the offline dataset by staying close\nto the behavior policy. Motivated by such empirical analysis, we explore\nmodel-based offline Meta-RL with regularized Policy Optimization (MerPO), which\nlearns a meta-model for efficient task structure inference and an informative\nmeta-policy for safe exploration of out-of-distribution state-actions. In\nparticular, we devise a new meta-Regularized model-based Actor-Critic (RAC)\nmethod for within-task policy optimization, as a key building block of MerPO,\nusing conservative policy evaluation and regularized policy improvement; and\nthe intrinsic tradeoff therein is achieved via striking the right balance\nbetween two regularizers, one based on the behavior policy and the other on the\nmeta-policy. We theoretically show that the learnt policy offers guaranteed\nimprovement over both the behavior policy and the meta-policy, thus ensuring\nthe performance improvement on new tasks via offline Meta-RL. 
Experiments\ncorroborate the superior performance of MerPO over existing offline Meta-RL\nmethods.", + "authors": "Sen Lin, Jialin Wan, Tengyu Xu, Yingbin Liang, Junshan Zhang", + "published": "2022-02-07", + "updated": "2022-07-13", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2106.09119v2", + "title": "Behavioral Priors and Dynamics Models: Improving Performance and Domain Transfer in Offline RL", + "abstract": "Offline Reinforcement Learning (RL) aims to extract near-optimal policies\nfrom imperfect offline data without additional environment interactions.\nExtracting policies from diverse offline datasets has the potential to expand\nthe range of applicability of RL by making the training process safer, faster,\nand more streamlined. We investigate how to improve the performance of offline\nRL algorithms, its robustness to the quality of offline data, as well as its\ngeneralization capabilities. To this end, we introduce Offline Model-based RL\nwith Adaptive Behavioral Priors (MABE). Our algorithm is based on the finding\nthat dynamics models, which support within-domain generalization, and\nbehavioral priors, which support cross-domain generalization, are\ncomplementary. When combined together, they substantially improve the\nperformance and generalization of offline RL policies. In the widely studied\nD4RL offline RL benchmark, we find that MABE achieves higher average\nperformance compared to prior model-free and model-based algorithms. In\nexperiments that require cross-domain generalization, we find that MABE\noutperforms prior methods. Our website is available at\nhttps://sites.google.com/berkeley.edu/mabe .", + "authors": "Catherine Cang, Aravind Rajeswaran, Pieter Abbeel, Michael Laskin", + "published": "2021-06-16", + "updated": "2021-06-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2104.06294v1", + "title": "Online and Offline Reinforcement Learning by Planning with a Learned Model", + "abstract": "Learning efficiently from small amounts of data has long been the focus of\nmodel-based reinforcement learning, both for the online case when interacting\nwith the environment and the offline case when learning from a fixed dataset.\nHowever, to date no single unified algorithm could demonstrate state-of-the-art\nresults in both settings. In this work, we describe the Reanalyse algorithm\nwhich uses model-based policy and value improvement operators to compute new\nimproved training targets on existing data points, allowing efficient learning\nfor data budgets varying by several orders of magnitude. We further show that\nReanalyse can also be used to learn entirely from demonstrations without any\nenvironment interactions, as in the case of offline Reinforcement Learning\n(offline RL). Combining Reanalyse with the MuZero algorithm, we introduce\nMuZero Unplugged, a single unified algorithm for any data budget, including\noffline RL. In contrast to previous work, our algorithm does not require any\nspecial adaptations for the off-policy or offline RL settings. 
MuZero Unplugged\nsets new state-of-the-art results in the RL Unplugged offline RL benchmark as\nwell as in the online RL benchmark of Atari in the standard 200 million frame\nsetting.", + "authors": "Julian Schrittwieser, Thomas Hubert, Amol Mandhane, Mohammadamin Barekatain, Ioannis Antonoglou, David Silver", + "published": "2021-04-13", + "updated": "2021-04-13", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.09712v1", + "title": "Semi-Offline Reinforcement Learning for Optimized Text Generation", + "abstract": "In reinforcement learning (RL), there are two major settings for interacting\nwith the environment: online and offline. Online methods explore the\nenvironment at significant time cost, and offline methods efficiently obtain\nreward signals by sacrificing exploration capability. We propose semi-offline\nRL, a novel paradigm that smoothly transits from offline to online settings,\nbalances exploration capability and training cost, and provides a theoretical\nfoundation for comparing different RL settings. Based on the semi-offline\nformulation, we present the RL setting that is optimal in terms of optimization\ncost, asymptotic error, and overfitting error bound. Extensive experiments show\nthat our semi-offline approach is efficient and yields comparable or often\nbetter performance compared with state-of-the-art methods.", + "authors": "Changyu Chen, Xiting Wang, Yiqiao Jin, Victor Ye Dong, Li Dong, Jie Cao, Yi Liu, Rui Yan", + "published": "2023-06-16", + "updated": "2023-06-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2203.05804v1", + "title": "Near-optimal Offline Reinforcement Learning with Linear Representation: Leveraging Variance Information with Pessimism", + "abstract": "Offline reinforcement learning, which seeks to utilize offline/historical\ndata to optimize sequential decision-making strategies, has gained surging\nprominence in recent studies. Due to the advantage that appropriate function\napproximators can help mitigate the sample complexity burden in modern\nreinforcement learning problems, existing endeavors usually enforce powerful\nfunction representation models (e.g. neural networks) to learn the optimal\npolicies. However, a precise understanding of the statistical limits with\nfunction representations, remains elusive, even when such a representation is\nlinear.\n Towards this goal, we study the statistical limits of offline reinforcement\nlearning with linear model representations. To derive the tight offline\nlearning bound, we design the variance-aware pessimistic value iteration\n(VAPVI), which adopts the conditional variance information of the value\nfunction for time-inhomogeneous episodic linear Markov decision processes\n(MDPs). VAPVI leverages estimated variances of the value functions to reweight\nthe Bellman residuals in the least-square pessimistic value iteration and\nprovides improved offline learning bounds over the best-known existing results\n(whereas the Bellman residuals are equally weighted by design). More\nimportantly, our learning bounds are expressed in terms of system quantities,\nwhich provide natural instance-dependent characterizations that previous\nresults are short of. 
We hope our results draw a clearer picture of what\noffline learning should look like when linear representations are provided.", + "authors": "Ming Yin, Yaqi Duan, Mengdi Wang, Yu-Xiang Wang", + "published": "2022-03-11", + "updated": "2022-03-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2404.12639v1", + "title": "Single-Task Continual Offline Reinforcement Learning", + "abstract": "In this paper, we study the continual learning problem of single-task offline\nreinforcement learning. In the past, continual reinforcement learning usually\nonly dealt with multitasking, that is, learning multiple related or unrelated\ntasks in a row, but once each learned task was learned, it was not relearned,\nbut only used in subsequent processes. However, offline reinforcement learning\ntasks require the continuously learning of multiple different datasets for the\nsame task. Existing algorithms will try their best to achieve the best results\nin each offline dataset they have learned and the skills of the network will\noverwrite the high-quality datasets that have been learned after learning the\nsubsequent poor datasets. On the other hand, if too much emphasis is placed on\nstability, the network will learn the subsequent better dataset after learning\nthe poor offline dataset, and the problem of insufficient plasticity and\nnon-learning will occur. How to design a strategy that can always preserve the\nbest performance for each state in the data that has been learned is a new\nchallenge and the focus of this study. Therefore, this study proposes a new\nalgorithm, called Ensemble Offline Reinforcement Learning Based on Experience\nReplay, which introduces multiple value networks to learn the same dataset and\njudge whether the strategy has been learned by the discrete degree of the value\nnetwork, to improve the performance of the network in single-task offline\nreinforcement learning.", + "authors": "Sibo Gai, Donglin Wang", + "published": "2024-04-19", + "updated": "2024-04-19", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "I.2.6" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.08128v1", + "title": "Conservative Data Sharing for Multi-Task Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) algorithms have shown promising results\nin domains where abundant pre-collected data is available. However, prior\nmethods focus on solving individual problems from scratch with an offline\ndataset without considering how an offline RL agent can acquire multiple\nskills. We argue that a natural use case of offline RL is in settings where we\ncan pool large amounts of data collected in various scenarios for solving\ndifferent tasks, and utilize all of this data to learn behaviors for all the\ntasks more effectively rather than training each one in isolation. However,\nsharing data across all tasks in multi-task offline RL performs surprisingly\npoorly in practice. Thorough empirical analysis, we find that sharing data can\nactually exacerbate the distributional shift between the learned policy and the\ndataset, which in turn can lead to divergence of the learned policy and poor\nperformance. To address this challenge, we develop a simple technique for\ndata-sharing in multi-task offline RL that routes data based on the improvement\nover the task-specific data. 
We call this approach conservative data sharing\n(CDS), and it can be applied with multiple single-task offline RL methods. On a\nrange of challenging multi-task locomotion, navigation, and vision-based\nrobotic manipulation problems, CDS achieves the best or comparable performance\ncompared to prior offline multi-task RL methods and previous data sharing\napproaches.", + "authors": "Tianhe Yu, Aviral Kumar, Yevgen Chebotar, Karol Hausman, Sergey Levine, Chelsea Finn", + "published": "2021-09-16", + "updated": "2021-09-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2201.05433v1", + "title": "Comparing Model-free and Model-based Algorithms for Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) Algorithms are often designed with\nenvironments such as MuJoCo in mind, in which the planning horizon is extremely\nlong and no noise exists. We compare model-free, model-based, as well as hybrid\noffline RL approaches on various industrial benchmark (IB) datasets to test the\nalgorithms in settings closer to real world problems, including complex noise\nand partially observable states. We find that on the IB, hybrid approaches face\nsevere difficulties and that simpler algorithms, such as rollout based\nalgorithms or model-free algorithms with simpler regularizers perform best on\nthe datasets.", + "authors": "Phillip Swazinna, Steffen Udluft, Daniel Hein, Thomas Runkler", + "published": "2022-01-14", + "updated": "2022-01-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2301.12876v2", + "title": "Guiding Online Reinforcement Learning with Action-Free Offline Pretraining", + "abstract": "Offline RL methods have been shown to reduce the need for environment\ninteraction by training agents using offline collected episodes. However, these\nmethods typically require action information to be logged during data\ncollection, which can be difficult or even impossible in some practical cases.\nIn this paper, we investigate the potential of using action-free offline\ndatasets to improve online reinforcement learning, name this problem\nReinforcement Learning with Action-Free Offline Pretraining (AFP-RL). We\nintroduce Action-Free Guide (AF-Guide), a method that guides online training by\nextracting knowledge from action-free offline datasets. AF-Guide consists of an\nAction-Free Decision Transformer (AFDT) implementing a variant of Upside-Down\nReinforcement Learning. It learns to plan the next states from the offline\ndataset, and a Guided Soft Actor-Critic (Guided SAC) that learns online with\nguidance from AFDT. Experimental results show that AF-Guide can improve sample\nefficiency and performance in online training thanks to the knowledge from the\naction-free offline dataset. 
Code is available at\nhttps://github.com/Vision-CAIR/AF-Guide.", + "authors": "Deyao Zhu, Yuhui Wang, J\u00fcrgen Schmidhuber, Mohamed Elhoseiny", + "published": "2023-01-30", + "updated": "2023-03-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2404.02429v1", + "title": "AD4RL: Autonomous Driving Benchmarks for Offline Reinforcement Learning with Value-based Dataset", + "abstract": "Offline reinforcement learning has emerged as a promising technology by\nenhancing its practicality through the use of pre-collected large datasets.\nDespite its practical benefits, most algorithm development research in offline\nreinforcement learning still relies on game tasks with synthetic datasets. To\naddress such limitations, this paper provides autonomous driving datasets and\nbenchmarks for offline reinforcement learning research. We provide 19 datasets,\nincluding real-world human driver's datasets, and seven popular offline\nreinforcement learning algorithms in three realistic driving scenarios. We also\nprovide a unified decision-making process model that can operate effectively\nacross different scenarios, serving as a reference framework in algorithm\ndesign. Our research lays the groundwork for further collaborations in the\ncommunity to explore practical aspects of existing reinforcement learning\nmethods. Dataset and codes can be found in https://sites.google.com/view/ad4rl.", + "authors": "Dongsu Lee, Chanin Eom, Minhae Kwon", + "published": "2024-04-03", + "updated": "2024-04-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2212.08302v1", + "title": "Safe Evaluation For Offline Learning: Are We Ready To Deploy?", + "abstract": "The world currently offers an abundance of data in multiple domains, from\nwhich we can learn reinforcement learning (RL) policies without further\ninteraction with the environment. RL agents learning offline from such data is\npossible but deploying them while learning might be dangerous in domains where\nsafety is critical. Therefore, it is essential to find a way to estimate how a\nnewly-learned agent will perform if deployed in the target environment before\nactually deploying it and without the risk of overestimating its true\nperformance. To achieve this, we introduce a framework for safe evaluation of\noffline learning using approximate high-confidence off-policy evaluation\n(HCOPE) to estimate the performance of offline policies during learning. In our\nsetting, we assume a source of data, which we split into a train-set, to learn\nan offline policy, and a test-set, to estimate a lower-bound on the offline\npolicy using off-policy evaluation with bootstrapping. A lower-bound estimate\ntells us how good a newly-learned target policy would perform before it is\ndeployed in the real environment, and therefore allows us to decide when to\ndeploy our learned policy.", + "authors": "Hager Radi, Josiah P. Hanna, Peter Stone, Matthew E. 
Taylor", + "published": "2022-12-16", + "updated": "2022-12-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2312.05742v2", + "title": "The Generalization Gap in Offline Reinforcement Learning", + "abstract": "Despite recent progress in offline learning, these methods are still trained\nand tested on the same environment. In this paper, we compare the\ngeneralization abilities of widely used online and offline learning methods\nsuch as online reinforcement learning (RL), offline RL, sequence modeling, and\nbehavioral cloning. Our experiments show that offline learning algorithms\nperform worse on new environments than online learning ones. We also introduce\nthe first benchmark for evaluating generalization in offline learning,\ncollecting datasets of varying sizes and skill-levels from Procgen (2D video\ngames) and WebShop (e-commerce websites). The datasets contain trajectories for\na limited number of game levels or natural language instructions and at test\ntime, the agent has to generalize to new levels or instructions. Our\nexperiments reveal that existing offline learning algorithms struggle to match\nthe performance of online RL on both train and test environments. Behavioral\ncloning is a strong baseline, outperforming state-of-the-art offline RL and\nsequence modeling approaches when trained on data from multiple environments\nand tested on new ones. Finally, we find that increasing the diversity of the\ndata, rather than its size, improves performance on new environments for all\noffline learning algorithms. Our study demonstrates the limited generalization\nof current offline learning algorithms highlighting the need for more research\nin this area.", + "authors": "Ishita Mediratta, Qingfei You, Minqi Jiang, Roberta Raileanu", + "published": "2023-12-10", + "updated": "2024-03-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2310.08566v1", + "title": "Transformers as Decision Makers: Provable In-Context Reinforcement Learning via Supervised Pretraining", + "abstract": "Large transformer models pretrained on offline reinforcement learning\ndatasets have demonstrated remarkable in-context reinforcement learning (ICRL)\ncapabilities, where they can make good decisions when prompted with interaction\ntrajectories from unseen environments. However, when and how transformers can\nbe trained to perform ICRL have not been theoretically well-understood. In\nparticular, it is unclear which reinforcement-learning algorithms transformers\ncan perform in context, and how distribution mismatch in offline training data\naffects the learned algorithms. This paper provides a theoretical framework\nthat analyzes supervised pretraining for ICRL. This includes two recently\nproposed training methods -- algorithm distillation and decision-pretrained\ntransformers. First, assuming model realizability, we prove the\nsupervised-pretrained transformer will imitate the conditional expectation of\nthe expert algorithm given the observed trajectory. The generalization error\nwill scale with model capacity and a distribution divergence factor between the\nexpert and offline algorithms. 
Second, we show transformers with ReLU attention\ncan efficiently approximate near-optimal online reinforcement learning\nalgorithms like LinUCB and Thompson sampling for stochastic linear bandits, and\nUCB-VI for tabular Markov decision processes. This provides the first\nquantitative analysis of the ICRL capabilities of transformers pretrained from\noffline trajectories.", + "authors": "Licong Lin, Yu Bai, Song Mei", + "published": "2023-10-12", + "updated": "2023-10-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL", + "math.ST", + "stat.ML", + "stat.TH" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.12755v1", + "title": "Beyond OOD State Actions: Supported Cross-Domain Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) aims to learn a policy using only\npre-collected and fixed data. Although avoiding the time-consuming online\ninteractions in RL, it poses challenges for out-of-distribution (OOD) state\nactions and often suffers from data inefficiency for training. Despite many\nefforts being devoted to addressing OOD state actions, the latter (data\ninefficiency) receives little attention in offline RL. To address this, this\npaper proposes the cross-domain offline RL, which assumes offline data\nincorporate additional source-domain data from varying transition dynamics\n(environments), and expects it to contribute to the offline data efficiency. To\ndo so, we identify a new challenge of OOD transition dynamics, beyond the\ncommon OOD state actions issue, when utilizing cross-domain offline data. Then,\nwe propose our method BOSA, which employs two support-constrained objectives to\naddress the above OOD issues. Through extensive experiments in the cross-domain\noffline RL setting, we demonstrate BOSA can greatly improve offline data\nefficiency: using only 10\\% of the target data, BOSA could achieve {74.4\\%} of\nthe SOTA offline RL performance that uses 100\\% of the target data.\nAdditionally, we also show BOSA can be effortlessly plugged into model-based\noffline RL and noising data augmentation techniques (used for generating\nsource-domain data), which naturally avoids the potential dynamics mismatch\nbetween target-domain data and newly generated source-domain data.", + "authors": "Jinxin Liu, Ziqi Zhang, Zhenyu Wei, Zifeng Zhuang, Yachen Kang, Sibo Gai, Donglin Wang", + "published": "2023-06-22", + "updated": "2023-06-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.01474v1", + "title": "Offline Reinforcement Learning with Causal Structured World Models", + "abstract": "Model-based methods have recently shown promising for offline reinforcement\nlearning (RL), aiming to learn good policies from historical data without\ninteracting with the environment. Previous model-based offline RL methods learn\nfully connected nets as world-models that map the states and actions to the\nnext-step states. However, it is sensible that a world-model should adhere to\nthe underlying causal effect such that it will support learning an effective\npolicy generalizing well in unseen states. In this paper, We first provide\ntheoretical results that causal world-models can outperform plain world-models\nfor offline RL by incorporating the causal structure into the generalization\nerror bound. 
We then propose a practical algorithm, oFfline mOdel-based\nreinforcement learning with CaUsal Structure (FOCUS), to illustrate the\nfeasibility of learning and leveraging causal structure in offline RL.\nExperimental results on two benchmarks show that FOCUS reconstructs the\nunderlying causal structure accurately and robustly. Consequently, it performs\nbetter than the plain model-based offline RL algorithms and other causal\nmodel-based RL algorithms.", + "authors": "Zheng-Mao Zhu, Xiong-Hui Chen, Hong-Long Tian, Kun Zhang, Yang Yu", + "published": "2022-06-03", + "updated": "2022-06-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2403.01734v1", + "title": "Offline Goal-Conditioned Reinforcement Learning for Safety-Critical Tasks with Recovery Policy", + "abstract": "Offline goal-conditioned reinforcement learning (GCRL) aims at solving\ngoal-reaching tasks with sparse rewards from an offline dataset. While prior\nwork has demonstrated various approaches for agents to learn near-optimal\npolicies, these methods encounter limitations when dealing with diverse\nconstraints in complex environments, such as safety constraints. Some of these\napproaches prioritize goal attainment without considering safety, while others\nexcessively focus on safety at the expense of training efficiency. In this\npaper, we study the problem of constrained offline GCRL and propose a new\nmethod called Recovery-based Supervised Learning (RbSL) to accomplish\nsafety-critical tasks with various goals. To evaluate the method performance,\nwe build a benchmark based on the robot-fetching environment with a randomly\npositioned obstacle and use expert or random policies to generate an offline\ndataset. We compare RbSL with three offline GCRL algorithms and one offline\nsafe RL algorithm. As a result, our method outperforms the existing\nstate-of-the-art methods to a large extent. Furthermore, we validate the\npracticality and effectiveness of RbSL by deploying it on a real Panda\nmanipulator. Code is available at https://github.com/Sunlighted/RbSL.git.", + "authors": "Chenyang Cao, Zichen Yan, Renhao Lu, Junbo Tan, Xueqian Wang", + "published": "2024-03-04", + "updated": "2024-03-04", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.AI", + "cs.LG", + "68T40" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.03383v2", + "title": "On the Role of Discount Factor in Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) enables effective learning from\npreviously collected data without exploration, which shows great promise in\nreal-world applications when exploration is expensive or even infeasible. The\ndiscount factor, $\\gamma$, plays a vital role in improving online RL sample\nefficiency and estimation accuracy, but the role of the discount factor in\noffline RL is not well explored. This paper examines two distinct effects of\n$\\gamma$ in offline RL with theoretical analysis, namely the regularization\neffect and the pessimism effect. On the one hand, $\\gamma$ is a regulator to\ntrade-off optimality with sample efficiency upon existing offline techniques.\nOn the other hand, lower guidance $\\gamma$ can also be seen as a way of\npessimism where we optimize the policy's performance in the worst possible\nmodels. 
We empirically verify the above theoretical observation with tabular\nMDPs and standard D4RL tasks. The results show that the discount factor plays\nan essential role in the performance of offline RL algorithms, both under small\ndata regimes upon existing offline methods and in large data regimes without\nother conservative methods.", + "authors": "Hao Hu, Yiqin Yang, Qianchuan Zhao, Chongjie Zhang", + "published": "2022-06-07", + "updated": "2022-06-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2205.09550v1", + "title": "Data Valuation for Offline Reinforcement Learning", + "abstract": "The success of deep reinforcement learning (DRL) hinges on the availability\nof training data, which is typically obtained via a large number of environment\ninteractions. In many real-world scenarios, costs and risks are associated with\ngathering these data. The field of offline reinforcement learning addresses\nthese issues through outsourcing the collection of data to a domain expert or a\ncarefully monitored program and subsequently searching for a batch-constrained\noptimal policy. With the emergence of data markets, an alternative to\nconstructing a dataset in-house is to purchase external data. However, while\nstate-of-the-art offline reinforcement learning approaches have shown a lot of\npromise, they currently rely on carefully constructed datasets that are well\naligned with the intended target domains. This raises questions regarding the\ntransferability and robustness of an offline reinforcement learning agent\ntrained on externally acquired data. In this paper, we empirically evaluate the\nability of the current state-of-the-art offline reinforcement learning\napproaches to coping with the source-target domain mismatch within two MuJoCo\nenvironments, finding that current state-of-the-art offline reinforcement\nlearning algorithms underperform in the target domain. To address this, we\npropose data valuation for offline reinforcement learning (DVORL), which allows\nus to identify relevant and high-quality transitions, improving the performance\nand transferability of policies learned by offline reinforcement learning\nalgorithms. The results show that our method outperforms offline reinforcement\nlearning baselines on two MuJoCo environments.", + "authors": "Amir Abolfazli, Gregory Palmer, Daniel Kudenko", + "published": "2022-05-19", + "updated": "2022-05-19", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2310.05422v1", + "title": "Reward-Consistent Dynamics Models are Strongly Generalizable for Offline Reinforcement Learning", + "abstract": "Learning a precise dynamics model can be crucial for offline reinforcement\nlearning, which, unfortunately, has been found to be quite challenging.\nDynamics models that are learned by fitting historical transitions often\nstruggle to generalize to unseen transitions. In this study, we identify a\nhidden but pivotal factor termed dynamics reward that remains consistent across\ntransitions, offering a pathway to better generalization. Therefore, we propose\nthe idea of reward-consistent dynamics models: any trajectory generated by the\ndynamics model should maximize the dynamics reward derived from the data. 
We\nimplement this idea as the MOREC (Model-based Offline reinforcement learning\nwith Reward Consistency) method, which can be seamlessly integrated into\nprevious offline model-based reinforcement learning (MBRL) methods. MOREC\nlearns a generalizable dynamics reward function from offline data, which is\nsubsequently employed as a transition filter in any offline MBRL method: when\ngenerating transitions, the dynamics model generates a batch of transitions and\nselects the one with the highest dynamics reward value. On a synthetic task, we\nvisualize that MOREC has a strong generalization ability and can surprisingly\nrecover some distant unseen transitions. On 21 offline tasks in D4RL and NeoRL\nbenchmarks, MOREC improves the previous state-of-the-art performance by a\nsignificant margin, i.e., 4.6% on D4RL tasks and 25.9% on NeoRL tasks. Notably,\nMOREC is the first method that can achieve above 95% online RL performance in 6\nout of 12 D4RL tasks and 3 out of 9 NeoRL tasks.", + "authors": "Fan-Ming Luo, Tian Xu, Xingchen Cao, Yang Yu", + "published": "2023-10-09", + "updated": "2023-10-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.14629v1", + "title": "Improving Zero-shot Generalization in Offline Reinforcement Learning using Generalized Similarity Functions", + "abstract": "Reinforcement learning (RL) agents are widely used for solving complex\nsequential decision making tasks, but still exhibit difficulty in generalizing\nto scenarios not seen during training. While prior online approaches\ndemonstrated that using additional signals beyond the reward function can lead\nto better generalization capabilities in RL agents, i.e. using self-supervised\nlearning (SSL), they struggle in the offline RL setting, i.e. learning from a\nstatic dataset. We show that performance of online algorithms for\ngeneralization in RL can be hindered in the offline setting due to poor\nestimation of similarity between observations. We propose a new\ntheoretically-motivated framework called Generalized Similarity Functions\n(GSF), which uses contrastive learning to train an offline RL agent to\naggregate observations based on the similarity of their expected future\nbehavior, where we quantify this similarity using \\emph{generalized value\nfunctions}. We show that GSF is general enough to recover existing SSL\nobjectives while also improving zero-shot generalization performance on a\ncomplex offline RL benchmark, offline Procgen.", + "authors": "Bogdan Mazoure, Ilya Kostrikov, Ofir Nachum, Jonathan Tompson", + "published": "2021-11-29", + "updated": "2021-11-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2307.11620v2", + "title": "Offline Multi-Agent Reinforcement Learning with Implicit Global-to-Local Value Regularization", + "abstract": "Offline reinforcement learning (RL) has received considerable attention in\nrecent years due to its attractive capability of learning policies from offline\ndatasets without environmental interactions. Despite some success in the\nsingle-agent setting, offline multi-agent RL (MARL) remains to be a challenge.\nThe large joint state-action space and the coupled multi-agent behaviors pose\nextra complexities for offline policy optimization. 
Most existing offline MARL\nstudies simply apply offline data-related regularizations on individual agents,\nwithout fully considering the multi-agent system at the global level. In this\nwork, we present OMIGA, a new offline m ulti-agent RL algorithm with implicit\nglobal-to-local v alue regularization. OMIGA provides a principled framework to\nconvert global-level value regularization into equivalent implicit local value\nregularizations and simultaneously enables in-sample learning, thus elegantly\nbridging multi-agent value decomposition and policy learning with offline\nregularizations. Based on comprehensive experiments on the offline multi-agent\nMuJoCo and StarCraft II micro-management tasks, we show that OMIGA achieves\nsuperior performance over the state-of-the-art offline MARL methods in almost\nall tasks.", + "authors": "Xiangsen Wang, Haoran Xu, Yinan Zheng, Xianyuan Zhan", + "published": "2023-07-21", + "updated": "2023-11-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.MA" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.02439v2", + "title": "DiffStitch: Boosting Offline Reinforcement Learning with Diffusion-based Trajectory Stitching", + "abstract": "In offline reinforcement learning (RL), the performance of the learned policy\nhighly depends on the quality of offline datasets. However, in many cases, the\noffline dataset contains very limited optimal trajectories, which poses a\nchallenge for offline RL algorithms as agents must acquire the ability to\ntransit to high-reward regions. To address this issue, we introduce\nDiffusion-based Trajectory Stitching (DiffStitch), a novel diffusion-based data\naugmentation pipeline that systematically generates stitching transitions\nbetween trajectories. DiffStitch effectively connects low-reward trajectories\nwith high-reward trajectories, forming globally optimal trajectories to address\nthe challenges faced by offline RL algorithms. Empirical experiments conducted\non D4RL datasets demonstrate the effectiveness of DiffStitch across RL\nmethodologies. Notably, DiffStitch demonstrates substantial enhancements in the\nperformance of one-step methods (IQL), imitation learning methods (TD3+BC), and\ntrajectory optimization methods (DT).", + "authors": "Guanghe Li, Yixiang Shan, Zhengbang Zhu, Ting Long, Weinan Zhang", + "published": "2024-02-04", + "updated": "2024-02-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2008.06043v3", + "title": "Offline Meta-Reinforcement Learning with Advantage Weighting", + "abstract": "This paper introduces the offline meta-reinforcement learning (offline\nmeta-RL) problem setting and proposes an algorithm that performs well in this\nsetting. Offline meta-RL is analogous to the widely successful supervised\nlearning strategy of pre-training a model on a large batch of fixed,\npre-collected data (possibly from various tasks) and fine-tuning the model to a\nnew task with relatively little data. That is, in offline meta-RL, we\nmeta-train on fixed, pre-collected data from several tasks in order to adapt to\na new task with a very small amount (less than 5 trajectories) of data from the\nnew task. By nature of being offline, algorithms for offline meta-RL can\nutilize the largest possible pool of training data available and eliminate\npotentially unsafe or costly data collection during meta-training. 
This setting\ninherits the challenges of offline RL, but it differs significantly because\noffline RL does not generally consider a) transfer to new tasks or b) limited\ndata from the test task, both of which we face in offline meta-RL. Targeting\nthe offline meta-RL setting, we propose Meta-Actor Critic with Advantage\nWeighting (MACAW), an optimization-based meta-learning algorithm that uses\nsimple, supervised regression objectives for both the inner and outer loop of\nmeta-training. On offline variants of common meta-RL benchmarks, we empirically\nfind that this approach enables fully offline meta-reinforcement learning and\nachieves notable gains over prior methods.", + "authors": "Eric Mitchell, Rafael Rafailov, Xue Bin Peng, Sergey Levine, Chelsea Finn", + "published": "2020-08-13", + "updated": "2021-07-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.06734v1", + "title": "Corruption Robust Offline Reinforcement Learning with Human Feedback", + "abstract": "We study data corruption robustness for reinforcement learning with human\nfeedback (RLHF) in an offline setting. Given an offline dataset of pairs of\ntrajectories along with feedback about human preferences, an\n$\\varepsilon$-fraction of the pairs is corrupted (e.g., feedback flipped or\ntrajectory features manipulated), capturing an adversarial attack or noisy\nhuman preferences. We aim to design algorithms that identify a near-optimal\npolicy from the corrupted data, with provable guarantees. Existing theoretical\nworks have separately studied the settings of corruption robust RL (learning\nfrom scalar rewards directly under corruption) and offline RLHF (learning from\nhuman feedback without corruption); however, they are inapplicable to our\nproblem of dealing with corrupted data in offline RLHF setting. To this end, we\ndesign novel corruption robust offline RLHF methods under various assumptions\non the coverage of the data-generating distributions. At a high level, our\nmethodology robustifies an offline RLHF framework by first learning a reward\nmodel along with confidence sets and then learning a pessimistic optimal policy\nover the confidence set. Our key insight is that learning optimal policy can be\ndone by leveraging an offline corruption-robust RL oracle in different ways\n(e.g., zero-order oracle or first-order oracle), depending on the data coverage\nassumptions. To our knowledge, ours is the first work that provides provable\ncorruption robust offline RLHF methods.", + "authors": "Debmalya Mandal, Andi Nika, Parameswaran Kamalaruban, Adish Singla, Goran Radanovi\u0107", + "published": "2024-02-09", + "updated": "2024-02-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2302.00935v3", + "title": "Policy Expansion for Bridging Offline-to-Online Reinforcement Learning", + "abstract": "Pre-training with offline data and online fine-tuning using reinforcement\nlearning is a promising strategy for learning control policies by leveraging\nthe best of both worlds in terms of sample efficiency and performance. One\nnatural approach is to initialize the policy for online learning with the one\ntrained offline. In this work, we introduce a policy expansion scheme for this\ntask. After learning the offline policy, we use it as one candidate policy in a\npolicy set. 
We then expand the policy set with another policy which will be\nresponsible for further learning. The two policies will be composed in an\nadaptive manner for interacting with the environment. With this approach, the\npolicy previously learned offline is fully retained during online learning,\nthus mitigating the potential issues such as destroying the useful behaviors of\nthe offline policy in the initial stage of online learning while allowing the\noffline policy participate in the exploration naturally in an adaptive manner.\nMoreover, new useful behaviors can potentially be captured by the newly added\npolicy through learning. Experiments are conducted on a number of tasks and the\nresults demonstrate the effectiveness of the proposed approach.", + "authors": "Haichao Zhang, We Xu, Haonan Yu", + "published": "2023-02-02", + "updated": "2023-04-15", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.05440v1", + "title": "Dealing with the Unknown: Pessimistic Offline Reinforcement Learning", + "abstract": "Reinforcement Learning (RL) has been shown effective in domains where the\nagent can learn policies by actively interacting with its operating\nenvironment. However, if we change the RL scheme to offline setting where the\nagent can only update its policy via static datasets, one of the major issues\nin offline reinforcement learning emerges, i.e. distributional shift. We\npropose a Pessimistic Offline Reinforcement Learning (PessORL) algorithm to\nactively lead the agent back to the area where it is familiar by manipulating\nthe value function. We focus on problems caused by out-of-distribution (OOD)\nstates, and deliberately penalize high values at states that are absent in the\ntraining dataset, so that the learned pessimistic value function lower bounds\nthe true value anywhere within the state space. We evaluate the PessORL\nalgorithm on various benchmark tasks, where we show that our method gains\nbetter performance by explicitly handling OOD states, when compared to those\nmethods merely considering OOD actions.", + "authors": "Jinning Li, Chen Tang, Masayoshi Tomizuka, Wei Zhan", + "published": "2021-11-09", + "updated": "2021-11-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2307.15690v1", + "title": "Benchmarking Offline Reinforcement Learning on Real-Robot Hardware", + "abstract": "Learning policies from previously recorded data is a promising direction for\nreal-world robotics tasks, as online learning is often infeasible. Dexterous\nmanipulation in particular remains an open problem in its general form. The\ncombination of offline reinforcement learning with large diverse datasets,\nhowever, has the potential to lead to a breakthrough in this challenging domain\nanalogously to the rapid progress made in supervised learning in recent years.\nTo coordinate the efforts of the research community toward tackling this\nproblem, we propose a benchmark including: i) a large collection of data for\noffline learning from a dexterous manipulation platform on two tasks, obtained\nwith capable RL agents trained in simulation; ii) the option to execute learned\npolicies on a real-world robotic system and a simulation for efficient\ndebugging. 
We evaluate prominent open-sourced offline reinforcement learning\nalgorithms on the datasets and provide a reproducible experimental setup for\noffline reinforcement learning on real systems.", + "authors": "Nico G\u00fcrtler, Sebastian Blaes, Pavel Kolev, Felix Widmaier, Manuel W\u00fcthrich, Stefan Bauer, Bernhard Sch\u00f6lkopf, Georg Martius", + "published": "2023-07-28", + "updated": "2023-07-28", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2303.07693v1", + "title": "Adaptive Policy Learning for Offline-to-Online Reinforcement Learning", + "abstract": "Conventional reinforcement learning (RL) needs an environment to collect\nfresh data, which is impractical when online interactions are costly. Offline\nRL provides an alternative solution by directly learning from the previously\ncollected dataset. However, it will yield unsatisfactory performance if the\nquality of the offline datasets is poor. In this paper, we consider an\noffline-to-online setting where the agent is first learned from the offline\ndataset and then trained online, and propose a framework called Adaptive Policy\nLearning for effectively taking advantage of offline and online data.\nSpecifically, we explicitly consider the difference between the online and\noffline data and apply an adaptive update scheme accordingly, that is, a\npessimistic update strategy for the offline dataset and an optimistic/greedy\nupdate scheme for the online dataset. Such a simple and effective method\nprovides a way to mix the offline and online RL and achieve the best of both\nworlds. We further provide two detailed algorithms for implementing the\nframework through embedding value or policy-based RL algorithms into it.\nFinally, we conduct extensive experiments on popular continuous control tasks,\nand results show that our algorithm can learn the expert policy with high\nsample efficiency even when the quality of offline dataset is poor, e.g.,\nrandom dataset.", + "authors": "Han Zheng, Xufang Luo, Pengfei Wei, Xuan Song, Dongsheng Li, Jing Jiang", + "published": "2023-03-14", + "updated": "2023-03-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2211.08251v1", + "title": "Offline Reinforcement Learning with Adaptive Behavior Regularization", + "abstract": "Offline reinforcement learning (RL) defines a sample-efficient learning\nparadigm, where a policy is learned from static and previously collected\ndatasets without additional interaction with the environment. The major\nobstacle to offline RL is the estimation error arising from evaluating the\nvalue of out-of-distribution actions. To tackle this problem, most existing\noffline RL methods attempt to acquire a policy both ``close\" to the behaviors\ncontained in the dataset and sufficiently improved over them, which requires a\ntrade-off between two possibly conflicting targets. In this paper, we propose a\nnovel approach, which we refer to as adaptive behavior regularization (ABR), to\nbalance this critical trade-off. By simply utilizing a sample-based\nregularization, ABR enables the policy to adaptively adjust its optimization\nobjective between cloning and improving over the policy used to generate the\ndataset. 
In the evaluation on D4RL datasets, a widely adopted benchmark for\noffline reinforcement learning, ABR can achieve improved or competitive\nperformance compared to existing state-of-the-art algorithms.", + "authors": "Yunfan Zhou, Xijun Li, Qingyu Qu", + "published": "2022-11-15", + "updated": "2022-11-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2010.11895v1", + "title": "What are the Statistical Limits of Offline RL with Linear Function Approximation?", + "abstract": "Offline reinforcement learning seeks to utilize offline (observational) data\nto guide the learning of (causal) sequential decision making strategies. The\nhope is that offline reinforcement learning coupled with function approximation\nmethods (to deal with the curse of dimensionality) can provide a means to help\nalleviate the excessive sample complexity burden in modern sequential decision\nmaking problems. However, the extent to which this broader approach can be\neffective is not well understood, where the literature largely consists of\nsufficient conditions.\n This work focuses on the basic question of what are necessary\nrepresentational and distributional conditions that permit provable\nsample-efficient offline reinforcement learning. Perhaps surprisingly, our main\nresult shows that even if: i) we have realizability in that the true value\nfunction of \\emph{every} policy is linear in a given set of features and 2) our\noff-policy data has good coverage over all features (under a strong spectral\ncondition), then any algorithm still (information-theoretically) requires a\nnumber of offline samples that is exponential in the problem horizon in order\nto non-trivially estimate the value of \\emph{any} given policy. Our results\nhighlight that sample-efficient offline policy evaluation is simply not\npossible unless significantly stronger conditions hold; such conditions include\neither having low distribution shift (where the offline data distribution is\nclose to the distribution of the policy to be evaluated) or significantly\nstronger representational conditions (beyond realizability).", + "authors": "Ruosong Wang, Dean P. Foster, Sham M. Kakade", + "published": "2020-10-22", + "updated": "2020-10-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "math.OC", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2210.13846v1", + "title": "Adaptive Behavior Cloning Regularization for Stable Offline-to-Online Reinforcement Learning", + "abstract": "Offline reinforcement learning, by learning from a fixed dataset, makes it\npossible to learn agent behaviors without interacting with the environment.\nHowever, depending on the quality of the offline dataset, such pre-trained\nagents may have limited performance and would further need to be fine-tuned\nonline by interacting with the environment. During online fine-tuning, the\nperformance of the pre-trained agent may collapse quickly due to the sudden\ndistribution shift from offline to online data. While constraints enforced by\noffline RL methods such as a behaviour cloning loss prevent this to an extent,\nthese constraints also significantly slow down online fine-tuning by forcing\nthe agent to stay close to the behavior policy. We propose to adaptively weigh\nthe behavior cloning loss during online fine-tuning based on the agent's\nperformance and training stability. 
Moreover, we use a randomized ensemble of Q\nfunctions to further increase the sample efficiency of online fine-tuning by\nperforming a large number of learning updates. Experiments show that the\nproposed method yields state-of-the-art offline-to-online reinforcement\nlearning performance on the popular D4RL benchmark. Code is available:\n\\url{https://github.com/zhaoyi11/adaptive_bc}.", + "authors": "Yi Zhao, Rinu Boney, Alexander Ilin, Juho Kannala, Joni Pajarinen", + "published": "2022-10-25", + "updated": "2022-10-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.13630v1", + "title": "Offline Skill Graph (OSG): A Framework for Learning and Planning using Offline Reinforcement Learning Skills", + "abstract": "Reinforcement Learning has received wide interest due to its success in\ncompetitive games. Yet, its adoption in everyday applications is limited (e.g.\nindustrial, home, healthcare, etc.). In this paper, we address this limitation\nby presenting a framework for planning over offline skills and solving complex\ntasks in real-world environments. Our framework is comprised of three modules\nthat together enable the agent to learn from previously collected data and\ngeneralize over it to solve long-horizon tasks. We demonstrate our approach by\ntesting it on a robotic arm that is required to solve complex tasks.", + "authors": "Ben-ya Halevy, Yehudit Aperstein, Dotan Di Castro", + "published": "2023-06-23", + "updated": "2023-06-23", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.AI", + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2005.01643v3", + "title": "Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems", + "abstract": "In this tutorial article, we aim to provide the reader with the conceptual\ntools needed to get started on research on offline reinforcement learning\nalgorithms: reinforcement learning algorithms that utilize previously collected\ndata, without additional online data collection. Offline reinforcement learning\nalgorithms hold tremendous promise for making it possible to turn large\ndatasets into powerful decision making engines. Effective offline reinforcement\nlearning methods would be able to extract policies with the maximum possible\nutility out of the available data, thereby allowing automation of a wide range\nof decision-making domains, from healthcare and education to robotics. However,\nthe limitations of current algorithms make this difficult. 
We will aim to\nprovide the reader with an understanding of these challenges, particularly in\nthe context of modern deep reinforcement learning methods, and describe some\npotential solutions that have been explored in recent work to mitigate these\nchallenges, along with recent applications, and a discussion of perspectives on\nopen problems in the field.", + "authors": "Sergey Levine, Aviral Kumar, George Tucker, Justin Fu", + "published": "2020-05-04", + "updated": "2020-11-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2305.03097v1", + "title": "Federated Ensemble-Directed Offline Reinforcement Learning", + "abstract": "We consider the problem of federated offline reinforcement learning (RL), a\nscenario under which distributed learning agents must collaboratively learn a\nhigh-quality control policy only using small pre-collected datasets generated\naccording to different unknown behavior policies. Naively combining a standard\noffline RL approach with a standard federated learning approach to solve this\nproblem can lead to poorly performing policies. In response, we develop the\nFederated Ensemble-Directed Offline Reinforcement Learning Algorithm (FEDORA),\nwhich distills the collective wisdom of the clients using an ensemble learning\napproach. We develop the FEDORA codebase to utilize distributed compute\nresources on a federated learning platform. We show that FEDORA significantly\noutperforms other approaches, including offline RL over the combined data pool,\nin various complex continuous control environments and real world datasets.\nFinally, we demonstrate the performance of FEDORA in the real-world on a mobile\nrobot.", + "authors": "Desik Rengarajan, Nitin Ragothaman, Dileep Kalathil, Srinivas Shakkottai", + "published": "2023-05-04", + "updated": "2023-05-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2210.00750v2", + "title": "Offline Reinforcement Learning with Differentiable Function Approximation is Provably Efficient", + "abstract": "Offline reinforcement learning, which aims at optimizing sequential\ndecision-making strategies with historical data, has been extensively applied\nin real-life applications. State-Of-The-Art algorithms usually leverage\npowerful function approximators (e.g. neural networks) to alleviate the sample\ncomplexity hurdle for better empirical performances. Despite the successes, a\nmore systematic understanding of the statistical complexity for function\napproximation remains lacking. Towards bridging the gap, we take a step by\nconsidering offline reinforcement learning with differentiable function class\napproximation (DFA). This function class naturally incorporates a wide range of\nmodels with nonlinear/nonconvex structures. Most importantly, we show offline\nRL with differentiable function approximation is provably efficient by\nanalyzing the pessimistic fitted Q-learning (PFQL) algorithm, and our results\nprovide the theoretical basis for understanding a variety of practical\nheuristics that rely on Fitted Q-Iteration style design. In addition, we\nfurther improve our guarantee with a tighter instance-dependent\ncharacterization. 
We hope our work could draw interest in studying\nreinforcement learning with differentiable function approximation beyond the\nscope of current research.", + "authors": "Ming Yin, Mengdi Wang, Yu-Xiang Wang", + "published": "2022-10-03", + "updated": "2022-11-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2302.13493v1", + "title": "The Provable Benefits of Unsupervised Data Sharing for Offline Reinforcement Learning", + "abstract": "Self-supervised methods have become crucial for advancing deep learning by\nleveraging data itself to reduce the need for expensive annotations. However,\nthe question of how to conduct self-supervised offline reinforcement learning\n(RL) in a principled way remains unclear. In this paper, we address this issue\nby investigating the theoretical benefits of utilizing reward-free data in\nlinear Markov Decision Processes (MDPs) within a semi-supervised setting.\n Further, we propose a novel, Provable Data Sharing algorithm (PDS) to utilize\nsuch reward-free data for offline RL. PDS uses additional penalties on the\nreward function learned from labeled data to prevent overestimation, ensuring a\nconservative algorithm. Our results on various offline RL tasks demonstrate\nthat PDS significantly improves the performance of offline RL algorithms with\nreward-free data. Overall, our work provides a promising approach to leveraging\nthe benefits of unlabeled data in offline RL while maintaining theoretical\nguarantees. We believe our findings will contribute to developing more robust\nself-supervised RL methods.", + "authors": "Hao Hu, Yiqin Yang, Qianchuan Zhao, Chongjie Zhang", + "published": "2023-02-27", + "updated": "2023-02-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2308.14897v1", + "title": "Statistically Efficient Variance Reduction with Double Policy Estimation for Off-Policy Evaluation in Sequence-Modeled Reinforcement Learning", + "abstract": "Offline reinforcement learning aims to utilize datasets of previously\ngathered environment-action interaction records to learn a policy without\naccess to the real environment. Recent work has shown that offline\nreinforcement learning can be formulated as a sequence modeling problem and\nsolved via supervised learning with approaches such as decision transformer.\nWhile these sequence-based methods achieve competitive results over\nreturn-to-go methods, especially on tasks that require longer episodes or with\nscarce rewards, importance sampling is not considered to correct the policy\nbias when dealing with off-policy data, mainly due to the absence of behavior\npolicy and the use of deterministic evaluation policies. To this end, we\npropose DPE: an RL algorithm that blends offline sequence modeling and offline\nreinforcement learning with Double Policy Estimation (DPE) in a unified\nframework with statistically proven properties on variance reduction. We\nvalidate our method in multiple tasks of OpenAI Gym with D4RL benchmarks. 
Our\nmethod brings a performance improvements on selected methods which outperforms\nSOTA baselines in several tasks, demonstrating the advantages of enabling\ndouble policy estimation for sequence-modeled reinforcement learning.", + "authors": "Hanhan Zhou, Tian Lan, Vaneet Aggarwal", + "published": "2023-08-28", + "updated": "2023-08-28", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.DC" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2302.03086v1", + "title": "DITTO: Offline Imitation Learning with World Models", + "abstract": "We propose DITTO, an offline imitation learning algorithm which uses world\nmodels and on-policy reinforcement learning to addresses the problem of\ncovariate shift, without access to an oracle or any additional online\ninteractions. We discuss how world models enable offline, on-policy imitation\nlearning, and propose a simple intrinsic reward defined in the world model\nlatent space that induces imitation learning by reinforcement learning.\nTheoretically, we show that our formulation induces a divergence bound between\nexpert and learner, in turn bounding the difference in reward. We test our\nmethod on difficult Atari environments from pixels alone, and achieve\nstate-of-the-art performance in the offline setting.", + "authors": "Branton DeMoss, Paul Duckworth, Nick Hawes, Ingmar Posner", + "published": "2023-02-06", + "updated": "2023-02-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2403.11574v1", + "title": "Offline Multitask Representation Learning for Reinforcement Learning", + "abstract": "We study offline multitask representation learning in reinforcement learning\n(RL), where a learner is provided with an offline dataset from different tasks\nthat share a common representation and is asked to learn the shared\nrepresentation. We theoretically investigate offline multitask low-rank RL, and\npropose a new algorithm called MORL for offline multitask representation\nlearning. Furthermore, we examine downstream RL in reward-free, offline and\nonline scenarios, where a new task is introduced to the agent that shares the\nsame representation as the upstream offline tasks. Our theoretical results\ndemonstrate the benefits of using the learned representation from the upstream\noffline task instead of directly learning the representation of the low-rank\nmodel.", + "authors": "Haque Ishfaq, Thanh Nguyen-Tang, Songtao Feng, Raman Arora, Mengdi Wang, Ming Yin, Doina Precup", + "published": "2024-03-18", + "updated": "2024-03-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.13412v2", + "title": "CLUE: Calibrated Latent Guidance for Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) aims to learn an optimal policy from\npre-collected and labeled datasets, which eliminates the time-consuming data\ncollection in online RL. However, offline RL still bears a large burden of\nspecifying/handcrafting extrinsic rewards for each transition in the offline\ndata. As a remedy for the labor-intensive labeling, we propose to endow offline\nRL tasks with a few expert data and utilize the limited expert data to drive\nintrinsic rewards, thus eliminating the need for extrinsic rewards. 
To achieve\nthat, we introduce \\textbf{C}alibrated \\textbf{L}atent\ng\\textbf{U}idanc\\textbf{E} (CLUE), which utilizes a conditional variational\nauto-encoder to learn a latent space such that intrinsic rewards can be\ndirectly qualified over the latent space. CLUE's key idea is to align the\nintrinsic rewards consistent with the expert intention via enforcing the\nembeddings of expert data to a calibrated contextual representation. We\ninstantiate the expert-driven intrinsic rewards in sparse-reward offline RL\ntasks, offline imitation learning (IL) tasks, and unsupervised offline RL\ntasks. Empirically, we find that CLUE can effectively improve the sparse-reward\noffline RL performance, outperform the state-of-the-art offline IL baselines,\nand discover diverse skills from static reward-free offline data.", + "authors": "Jinxin Liu, Lipeng Zu, Li He, Donglin Wang", + "published": "2023-06-23", + "updated": "2023-10-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2303.04268v1", + "title": "On the Sample Complexity of Vanilla Model-Based Offline Reinforcement Learning with Dependent Samples", + "abstract": "Offline reinforcement learning (offline RL) considers problems where learning\nis performed using only previously collected samples and is helpful for the\nsettings in which collecting new data is costly or risky. In model-based\noffline RL, the learner performs estimation (or optimization) using a model\nconstructed according to the empirical transition frequencies. We analyze the\nsample complexity of vanilla model-based offline RL with dependent samples in\nthe infinite-horizon discounted-reward setting. In our setting, the samples\nobey the dynamics of the Markov decision process and, consequently, may have\ninterdependencies. Under no assumption of independent samples, we provide a\nhigh-probability, polynomial sample complexity bound for vanilla model-based\noff-policy evaluation that requires partial or uniform coverage. We extend this\nresult to the off-policy optimization under uniform coverage. As a comparison\nto the model-based approach, we analyze the sample complexity of off-policy\nevaluation with vanilla importance sampling in the infinite-horizon setting.\nFinally, we provide an estimator that outperforms the sample-mean estimator for\nalmost deterministic dynamics that are prevalent in reinforcement learning.", + "authors": "Mustafa O. Karabag, Ufuk Topcu", + "published": "2023-03-07", + "updated": "2023-03-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + } + ], + [ + { + "url": "http://arxiv.org/abs/2302.09605v1", + "title": "Efficient Communication via Self-supervised Information Aggregation for Online and Offline Multi-agent Reinforcement Learning", + "abstract": "Utilizing messages from teammates can improve coordination in cooperative\nMulti-agent Reinforcement Learning (MARL). Previous works typically combine raw\nmessages of teammates with local information as inputs for policy. However,\nneglecting message aggregation poses significant inefficiency for policy\nlearning. Motivated by recent advances in representation learning, we argue\nthat efficient message aggregation is essential for good coordination in\ncooperative MARL. 
In this paper, we propose Multi-Agent communication via\nSelf-supervised Information Aggregation (MASIA), where agents can aggregate the\nreceived messages into compact representations with high relevance to augment\nthe local policy. Specifically, we design a permutation invariant message\nencoder to generate common information-aggregated representation from messages\nand optimize it via reconstructing and shooting future information in a\nself-supervised manner. Hence, each agent would utilize the most relevant parts\nof the aggregated representation for decision-making by a novel message\nextraction mechanism. Furthermore, considering the potential of offline\nlearning for real-world applications, we build offline benchmarks for\nmulti-agent communication, which is the first as we know. Empirical results\ndemonstrate the superiority of our method in both online and offline settings.\nWe also release the built offline benchmarks in this paper as a testbed for\ncommunication ability validation to facilitate further future research.", + "authors": "Cong Guan, Feng Chen, Lei Yuan, Zongzhang Zhang, Yang Yu", + "published": "2023-02-19", + "updated": "2023-02-19", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.MA" + ], + "label": "Original Paper", + "paper_cat": "Offline AND Reinforcement AND Learning", + "gt": "Multi-Agent Reinforcement Learning (MARL). MARL has made prominent progress in recent years. Having emerged under the CTDE paradigm, many methods are designed to relieve the non-stationarity issue and have made noticeable progress. Most of them can be roughly divided into policy-based and value-based methods. Typical policy-gradient methods involve MADDPG [38], COMA [16], MAAC [25], SQDDPG [66], FOP [83], and HAPPO [29], which explore the optimization of multi-agent policy gradient methods, while value-based methods mainly focus on the factorization of the global value function [63, 9]. VDN [57] applies a simple additive factorization to decompose the joint value function into agent-wise value functions. QMIX [52] structurally enforces the learned joint value function to be monotonic in the agent\u2019s utilities, which can represent a more affluent class of value functions. QPLEX [64] further takes a duplex dueling network architecture to factorize the joint value function, achieving the full expressiveness power of Individual Global Maximization (IGM) [55]. Multi-agent Communication. Communication plays a promising role in multi-agent coordination under partial observability [13, 84]. Extensive research has been conducted on learning communication protocols to improve performance on cooperative tasks [20, 15, 34, 73, 12, 37, 77, 71, 21]. Previous works can be divided into two categories. One focuses on generating a meaningful message for the message senders. The simplest way is to treat the raw local observation, or the local information history, as the message [15, 56]. 
VBC [81] and TMC [82] apply techniques such as variance-based control and temporal smoothing on the sender end to make the generated messages meaningful and valuable for policy learning. NDQ [67] generates minimized messages for different teammates to learn nearly decomposable value functions, and optimizes the message generator based on two different information-theory-based regularizers to achieve expressive communication. On the contrary, other works try to efficiently extract the most useful messages on the receiver end and design mechanisms to differentiate the importance of messages. I2C [8] and ACML [40] employ a gate mechanism to be selective about received messages. There are also works inspired by the broad application of the attention mechanism [3, 7]. TarMAC [6] achieves targeted communication via a simple signature-based soft-attention mechanism, where the sender broadcasts a key encoding the properties of the agents, and the receiver attends to all received messages to form a weighted sum of messages for decision making. SARNet [51] and MAGIC [44] further remove the signature in TarMAC and leverage attention-based networks to learn efficient and interpretable relations between entities, deciding when and with whom to communicate. Offline MARL. Offline reinforcement learning [35] has attracted tremendous attention for its data-driven training paradigm without interactions with the environment [50]. Previous work [19] discusses the distribution shift issue in offline learning and considers learning behavior-constrained policies to relieve extrapolation error from unseen data estimations [70, 30, 32]. Offline MARL is a promising research direction [80] that trains policies from a static dataset. Following online MARL methods that either extend policy gradient algorithms to multi-agent cases [38, 16, 68] or adopt Q-learning paradigms with value decomposition [57, 52, 55, 64], existing offline MARL methods try to exploit offline data with policy constraints. ICQ [74] effectively alleviates the extrapolation error by only trusting offline data. MABCQ [26] introduces a fully decentralized offline MARL setting and utilizes techniques of value deviation and transition normalization for efficient learning. OMAR [46] combines first-order policy gradients and zeroth-order optimization methods to avoid uncoordinated local optima. MADT [42] leverages the transformer\u2019s sequence modelling ability and integrates it seamlessly with both offline and online MARL tasks. [58] investigates offline MARL with explicit consideration of the diversity of agent-wise trajectories and proposes a novel framework called Shared Individual Trajectories (SIT) to address this problem. [59] proposes to first train a teacher policy that has the privilege of accessing every agent\u2019s observations, actions, and rewards. After the teacher policy has identified and recombined the \"good\" behavior in the dataset, they create separate student policies and distill not only the teacher policy\u2019s features but also its structural relations among different agents\u2019 features to the student policies. ODIS [79] proposes a novel Offline MARL algorithm to Discover coordInation Skills (ODIS) from multi-task data. 
[5] recently released Off-the-Grid MARL (OG-MARL), a framework for generating offline MARL datasets and algorithms without communication, which provides an initial set of datasets and baselines for cooperative offline MARL, along with a standardised evaluation protocol. To the best of our knowledge, none of the existing MARL communication methods explicitly consider how the multiple received messages can be optimized for efficient policy learning. Agents may be confused by redundant information from teammates, and simply augmenting the local policy with the raw messages may burden the learning. Meanwhile, there is no testbed for the offline multi-agent communication setting. Our proposed method applies a message aggregation module to learn a compact information representation and extracts the most relevant part for decision-making in online and offline settings.", + "pre_questions": [], + "main_content": "Introduction Multi-Agent Reinforcement Learning (MARL) [24] has attracted widespread attention [24, 10] recently, achieving remarkable success in many complex domains [2], such as traffic signal control [11], droplet control [36], active voltage control [65], and dynamic algorithm configuration [72]. For better coordination in further applications, issues such as non-stationarity [47] and scalability [4] remain to be solved. To address the non-stationarity caused by the concurrent learning of multiple policies and the scalability challenge as the agent number increases, most recent works on MARL adopt the Centralized Training and Decentralized Execution (CTDE) [28, 39] paradigm, which includes both value-based methods [57, 52, 64, 76] and policy gradient methods [16, 38, 68, 75], or other techniques like transformers [69]. Under the CTDE paradigm, however, the coordination ability of the learned policies can be fragile due to the partial observability in the multi-agent environment, which is a common challenge in many multi-agent tasks [41]. While recurrent neural networks could in principle relieve this issue by conditioning the policy on the action-observation history [23], the uncertainty of other agents (e.g., states and actions) at execution time can result in catastrophic miscoordination and even sub-optimality [67, 8]. Communication shows great potential in solving these problems [78, 84], as agents can share information such as observations, intentions, or experiences to stabilize the learning process, leading to a better understanding of the environment (or the other agents) and better coordination as a result. Previous communication methods either focus on generating meaningful information [67, 27, 82] on the message sender side, or design techniques such as attention mechanisms [6, 44] and message gates [40, 8] to filter the most relevant information from raw received messages. These approaches treat the received information as a black box and tacitly assume that policy networks can automatically extract the most critical information from multiple raw messages during policy learning. In this case, with the only learning signal given by reinforcement learning, the extraction process may be rather inefficient, especially in complex scenarios. 
Motivated by recent advances in state representation learning [54, 33], which reveal that auxiliary representation objectives can facilitate policy learning [14], we aim at efficiently aggregating information into compact representations for the policy by designing a novel communication framework, Multi-Agent communication via Self-supervised Information Aggregation (MASIA). Specifically, representations are optimized through self-supervised objectives, which encourage the representations to be both abstractions of the true states and predictive of future information. Since agents are guided towards higher cumulative rewards during policy learning, correlating representations with true states and future information can intensify the learning signals in policy learning. In this way, the efficiency of policy learning is encouraged. Also, considering that permutation invariance of representations can further promote efficiency, we design a self-attention mechanism to maintain the invariance of the obtained representations. We also design a network that weighs the aggregated representation for individual agents to derive a unique and highly relevant representation that augments local policies for efficient coordination. On the other hand, the application of Reinforcement Learning (RL) in real-world scenarios faces significant challenges as the interaction with environments is typically costly or even impossible [35]. Offline RL has recently been proposed to address this concern, learning the agent policy only from a fixed offline dataset without interaction with the environment. Despite the significance and popularity of this topic, current works primarily focus on the single-agent setting [35, 49, 18] or on multi-agent coordination without communication [74, 46, 58, 5, 79]. Even though communication plays a crucial role in multi-agent coordination, especially in partially observable scenarios [84], there are neither efficient approaches for this issue nor testbeds to validate this setting. Based on this situation, we construct an offline testbed based on several popular scenarios where communication is indeed necessary, to test the communication ability of different communication approaches. We hope these benchmarks can be utilized to benchmark the performance of various multi-agent communication algorithms and trigger more research on offline reinforcement learning with multi-agent communication. To evaluate our method, we conduct extensive experiments on various cooperative multi-agent benchmarks, including Hallway [67], Level-Based Foraging [48], Traffic Junction [6], and two maps from the StarCraft Multi-Agent Challenge (SMAC) [67], to validate the communication effectiveness in online and offline settings. The online experimental results show that MASIA outperforms previous approaches, strong baselines, and ablations of our method, demonstrating the effectiveness of MASIA for online learning. We also build the offline dataset based on the four environments above. Further experiments concerning offline learning show that MASIA also performs well in the offline setting. Our main contributions are:
• We propose a novel framework that uses a message aggregation network to extract information from multiple messages generated by various teammates, with which we acquire a permutation-invariant information aggregation representation. Agents can then use a novel focusing network to extract the most relevant information for decision-making.
• Two representation objectives are introduced to make the information representation compact and sufficient, including state reconstruction and multi-step future-state prediction.
• We construct an offline dataset for multi-agent communication, which considers multiple environments and various dataset settings. This dataset is set up to benchmark different communication algorithms under offline learning and to encourage more research on offline multi-agent communication learning.
• Sufficient online results on various benchmarks and communication conditions demonstrate that our proposed approach significantly improves the communication performance, and visualization results further reveal why it works. Additional experimental results on the offline dataset justify the effectiveness of our approach in the offline setting, inspiring further research in this field.

2 Problem Formulation
This paper considers a fully cooperative MARL communication problem, which can be modeled as a Decentralised Partially Observable Markov Decision Process under Communication (Dec-POMDP-Com) [45] and formulated as a tuple $\langle N, S, A, P, \Omega, O, R, \gamma, M \rangle$, where $N = \{1, \ldots, n\}$ is the set of agents, $S$ is the set of global states, $A$ is the set of actions, $\Omega$ is the set of observations, $O$ is the observation function, $R$ represents the reward function, $\gamma \in [0, 1)$ is the discount factor, and $M$ indicates the set of messages. At each time step, due to partial observability, each agent $i \in N$ can only acquire the observation $o_i \in \Omega$ drawn from the observation function $O(s, i)$ with $s \in S$. Each agent holds an individual policy $\pi(a_i \mid \tau_i, m_i)$, where $\tau_i$ represents the history $(o_i^1, a_i^1, \ldots, o_i^{t-1}, a_i^{t-1}, o_i^t)$ of agent $i$ at the current timestep $t$, and $m_i \in M$ is the message received by agent $i$. The joint action $\boldsymbol{a} = \langle a_1, \ldots, a_n \rangle$ leads to the next state $s' \sim P(s' \mid s, \boldsymbol{a})$ and the global reward $R(s, \boldsymbol{a})$. The formal objective is to find a joint policy $\boldsymbol{\pi}(\boldsymbol{\tau}, \boldsymbol{a})$ that maximizes the global value function $Q^{\boldsymbol{\pi}}_{tot}(\boldsymbol{\tau}, \boldsymbol{a}) = \mathbb{E}_{s, \boldsymbol{a}}\left[\sum_{t=0}^{\infty} \gamma^t R(s, \boldsymbol{a}) \mid s_0 = s, \boldsymbol{a}_0 = \boldsymbol{a}, \boldsymbol{\pi}\right]$, with $\boldsymbol{\tau} = \langle \tau_1, \ldots, \tau_n \rangle$. As each agent can behave as a message sender as well as a message receiver, this paper considers learning useful message representations on the receiver end, and agents only use their local information $o_i$ as messages to share within the team. We optimize the policy by value-based MARL, where deep Q-learning [43] implements the action-value function $Q(s, a)$ with a deep neural network $Q(\tau, a; \theta)$ parameterized by $\theta$. This paper follows the CTDE paradigm. In the centralized training phase, deep Q-learning uses a replay memory $\mathcal{D}$ to store the transition tuple $\langle \boldsymbol{\tau}, \boldsymbol{a}, r, \boldsymbol{\tau}' \rangle$. We use $Q(\boldsymbol{\tau}, \boldsymbol{a}; \theta)$ to approximate $Q(s, \boldsymbol{a}; \theta)$ to relieve the partial observability. Thus, the parameters $\theta$ are learnt by minimizing the expected Temporal Difference (TD) error
$\mathcal{L}(\theta) = \mathbb{E}_{(\boldsymbol{\tau}, \boldsymbol{a}, r, \boldsymbol{\tau}') \in \mathcal{D}}\left[\left(r + \gamma V(\boldsymbol{\tau}'; \theta^{-}) - Q(\boldsymbol{\tau}, \boldsymbol{a}; \theta)\right)^2\right],$
where $V(\boldsymbol{\tau}'; \theta^{-}) = \max_{\boldsymbol{a}'} Q(\boldsymbol{\tau}', \boldsymbol{a}'; \theta^{-})$ is the expected future return of the TD target and $\theta^{-}$ are the parameters of the target network, periodically updated with $\theta$.
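To make the TD objective above concrete, the following is a minimal PyTorch-style sketch of the centralized TD update with a periodically synchronized target network. The module interfaces (an agent Q-network returning per-agent action values and a mixing network producing $Q_{tot}$), the batch layout, and the hyper-parameters are illustrative assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def td_loss(q_net, target_q_net, mixer, target_mixer, batch, gamma=0.99):
    """One TD step for value-based CTDE training (illustrative sketch).

    Assumed batch layout:
      obs     [B, T, n_agents, obs_dim]  per-agent inputs (histories already encoded)
      actions [B, T, n_agents]           discrete actions taken
      reward  [B, T]                     shared team reward
      state   [B, T, state_dim]          global state fed to the mixing network
      done    [B, T]                     episode-termination flags
    """
    q_all = q_net(batch["obs"])                                   # [B, T, n_agents, n_actions]
    q_taken = q_all.gather(-1, batch["actions"].unsqueeze(-1)).squeeze(-1)
    q_tot = mixer(q_taken, batch["state"])                        # Q_tot(tau, a; theta)

    with torch.no_grad():                                         # frozen parameters theta^-
        q_next = target_q_net(batch["obs"]).max(dim=-1).values
        target_tot = target_mixer(q_next, batch["state"])
        # r + gamma * max_a' Q_tot(tau', a'; theta^-), bootstrapping only on non-terminal steps
        y = batch["reward"][:, :-1] + gamma * (1.0 - batch["done"][:, :-1].float()) * target_tot[:, 1:]

    return F.mse_loss(q_tot[:, :-1], y)
```

The target network's parameters are copied from the online network every fixed number of updates, matching the periodic update of $\theta^{-}$ described above.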
When considering the offline setting, we have an offline dataset denoted as $\mathcal{B}$. The dataset is collected by a specific behavior policy and is kept fixed during the whole training process. For Dec-POMDP problems, the offline dataset $\mathcal{B}$ typically consists of a number of trajectories, that is, $\mathcal{B} := \{(s^t, \{o_i^t\}_{i \in N}, \{a_i^t\}_{i \in N}, r^t)\}_{t=1}^{T}$, where $T$ denotes the length of the trajectory and $r^t$ is the reward obtained at the $t$-th timestep. When further considering multi-agent communication, we additionally record the receiver ids of the messages sent by each agent. This information portrays the communication channels allowed by the current problem.

3 Method
In this paper, we propose efficient Multi-Agent communication via Self-supervised Information Aggregation (MASIA), a novel multi-agent communication mechanism for promoting cooperation performance. Redundant communication could increase the burden of information processing for each agent when making decisions and poses new challenges for information extraction, since plenty of irrelevant information is contained in the raw messages. To design an efficient communication mechanism, we believe two properties are of vital importance: sufficiency and compactness, where sufficiency means a rich amount of information and compactness calls for a higher information density. To meet the standard of sufficiency, a global encoder, which we call the Information Aggregation Encoder (IAE), is shared among agents to aggregate the information broadcasted by agents into a common representation. With proper training, this representation could reflect the global observation so that each agent could obtain sufficient information from it to make decisions. As for compactness, we first design an auxiliary loss on the global representation to correlate it with the policy learning process, and make each agent focus only on the part of the representation related to its performance and coordination through the designed focusing network, which excludes the unrelated parts. The entire framework of our method is shown in Figure 1. Furthermore, a description of the training and execution processes can be found in Appendix B.2.
Figure 1: Structure of MASIA. (a) The overall architecture. (b) Information aggregation and extraction. (c) Information aggregation optimization. (d) Transition model learning.

3.1 Information Aggregation and Extraction
Information Aggregation.
Believing that the true state should be reflected by the combined messages, we design the aggregation encoder to be capable of subsuming all the messages sent by agents. Also, the communication system in multi-agent settings is flexible and permutation invariant in nature, which calls for a permutation-invariant structure for the aggregation encoder. Based on these beliefs, we apply a self-attention mechanism to aggregate multiple messages from different teammates:
$Q, K, V = \text{MLP}_{Q,K,V}([o_1^t, \ldots, o_i^t, \ldots, o_n^t]), \quad (1)$
$H = \text{softmax}\!\left(\frac{QK^{\mathsf{T}}}{\sqrt{d_k}}\right)V, \quad (2)$
where the learnable projections $\text{MLP}_Q$, $\text{MLP}_K$, and $\text{MLP}_V$ transform the perceptions of all agents into the corresponding query $Q$, key $K$, and value $V$, the concepts defined in the attention mechanism [61]. Specifically, each row vector of $H$ can be seen as the querying result of one agent over all available information, and the hidden state $H$ is fed into the subsequent integration network to finally obtain the output aggregated representation $z^t$. A detailed discussion about the design of the integration network can be found in Appendix B.1. In the centralized training phase, we use the aggregated representation $z^t$ as an extra input, in addition to the individual observation, to the value function. Since $z^t$ contains the information required to determine the true state, taking $z^t$ as extra input could reduce the uncertainty about the environment states and produce better Q-value estimates under any value-based policy learning algorithm.
Information Extraction. Similar to the decision process of human beings, global messages are usually redundant for an individual agent to achieve good coordination in communication systems. For example, in the task of Traffic Junction [6], one natural idea is that the information about neighboring cars is more important for agents to perceive than that about distant ones, and the unrelated information in the global message may sometimes even confuse the agents and impede the learning when the map is large. A toy experiment shown in Figure 2 supports this idea. We apply the QMIX algorithm to Traffic Junction tasks with different sight settings. The results show that the agents learn better policies in a small-sight setting (sight-1) than both in a super-limited-sight setting (sight-0) and in a full-sight setting (sight-full), motivating the demand for efficient message extraction.
Figure 2: A toy experiment for information redundancy on the task of Traffic Junction (legend: ego car, distant car, nearby car, visible area).
To make each agent capable of deciding its own perceptive area, we employ the focusing network to weigh the aggregated representation for each agent. The focusing network is designed as a Multi-Layer Perceptron (MLP) with a Sigmoid output activation function to ensure that each dimension of $w_i^t$ is bounded between 0 and 1. By taking an element-wise multiplication with $z^t$, a unique representation can be distilled for each individual agent. In this way, if the focusing network produces higher weights on specific dimensions, changes in the aggregated representation on these dimensions would be more significant and, thus, the agent would be more sensitive to the aggregated representation on these parts. On the contrary, if near-zero weights are output on some dimensions, the information on those dimensions is filtered out.
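The following is a minimal PyTorch-style sketch of the permutation-invariant aggregation in Eqs. (1)-(2) and of the focusing network described above. The layer sizes, the mean-pooling step that collapses the attention output into a single $z^t$, and the module names are illustrative assumptions; the paper's actual integration network is described in its Appendix B.1.

```python
import torch
import torch.nn as nn

class InformationAggregationEncoder(nn.Module):
    """Self-attention over the n agents' messages, followed by an integration MLP."""
    def __init__(self, obs_dim, d_k=64, z_dim=64):
        super().__init__()
        self.q_proj = nn.Linear(obs_dim, d_k)
        self.k_proj = nn.Linear(obs_dim, d_k)
        self.v_proj = nn.Linear(obs_dim, d_k)
        self.integration = nn.Sequential(nn.Linear(d_k, z_dim), nn.ReLU(),
                                         nn.Linear(z_dim, z_dim))
        self.d_k = d_k

    def forward(self, obs):                       # obs: [batch, n_agents, obs_dim]
        Q, K, V = self.q_proj(obs), self.k_proj(obs), self.v_proj(obs)
        attn = torch.softmax(Q @ K.transpose(-2, -1) / self.d_k ** 0.5, dim=-1)
        H = attn @ V                              # Eq. (2): one row per querying agent
        return self.integration(H.mean(dim=1))    # pooled over agents -> z_t (pooling is our simplification)

class FocusingNetwork(nn.Module):
    """Produces per-agent weights w_i^t in (0, 1) that gate the aggregated representation."""
    def __init__(self, obs_dim, z_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, z_dim), nn.ReLU(),
                                 nn.Linear(z_dim, z_dim), nn.Sigmoid())

    def forward(self, own_obs, z):                # own_obs: [batch, obs_dim], z: [batch, z_dim]
        return self.net(own_obs) * z              # element-wise gating of z_t
```

Because self-attention followed by mean pooling treats the incoming messages as a set, permuting the order of teammates' messages leaves the aggregated representation $z^t$ unchanged, which is the property motivated above.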
In particular, although the information extraction process can, to some extent, reflect the specificity of each agent, we stress the local information by feeding it into the subsequent network together with the extracted representation.

3.2 Information Representation Optimization
As for the learning process of the aggregated representation, we consider two typical objectives in global encoder training: reconstruction and multi-step prediction, which constrain the representations produced by the global encoder to be sufficient and compact, respectively. For the reconstruction objective, we employ an additional decoder, which aims to reconstruct the global state from the aggregated representation to allow self-supervision on the global encoder. Specifically, the decoder is optimized together with the aggregation encoder by reconstructing the global state $s^t$ from the multiple received messages $o^t$:
$\mathcal{L}_{ae}(\theta, \eta) = \mathbb{E}_{o^t, s^t}\left[\left\| g_\eta(z^t) - s^t \right\|_2^2\right], \quad z^t = f_\theta(o^t), \quad (3)$
where $f_\theta$ and $g_\eta$ denote the encoder network parameterized by $\theta$ and the decoder network parameterized by $\eta$, respectively. This loss term resembles a classical auto-encoder loss, but the decoder here does not reconstruct the input; it recovers the global state from the representation instead. By utilizing this loss, we guide the encoder to extract observational features that help infer the global state and let $z^t = f_\theta(o^t)$ be a sufficient representation. As for the multi-step prediction objective, we constrain the produced representation to be predictive of future information. Specifically, we design a transition model $h_\psi: \mathcal{Z} \times \mathcal{A}^n \to \mathcal{Z}$ parameterized by $\psi$ as an auxiliary model, which predicts the aggregated representation $z^{t+1}$ at the next step $t+1$ from the aggregated representation $z^t$ and the joint action $a^t$ at the current step $t$. We regress the predicted aggregated representation after a $k$-step rollout onto the actual aggregated representation of the future messages $o^{t+k}$, updating both the aggregation encoder and the auxiliary model via the multi-step prediction loss:
$\mathcal{L}_{m}(\theta, \psi) = \mathbb{E}_{o^t, s^t, a^t, \ldots, a^{t+K-1}, o^{t+K}, s^{t+K}}\left[\sum_{k=1}^{K} \left\| \hat{z}^{t+k} - \tilde{z}^{t+k} \right\|_2^2\right], \quad (4)$
$\hat{z}^{t+1} = h_\psi(\tilde{z}^t, a^t), \quad \hat{z}^{t+k} = h_\psi(\hat{z}^{t+k-1}, a^{t+k-1}), \; k = 2, \ldots, K, \quad \tilde{z}^{t+k} = f_\theta(o^{t+k}), \; k = 0, \ldots, K.$
To further stabilize the learning process, we apply the double-network technique, which employs two networks with the same architecture but different update frequencies, for the aggregation encoder. The target network is updated via an Exponential Moving Average (EMA), as in SPR [54]. By forcing the aggregated representation to be predictive of its future states, the aggregated representation becomes more correlated with the information required for decision-making, which meets the compactness standard. Combining these two objectives allows the aggregation encoder to extract more helpful information for agents to coordinate better. To improve the capability of information extraction for individual agents, we also enhance the learning process of these components with an RL objective. Specifically, we consider minimizing the TD loss:
$\mathcal{L}_{rl}(\theta, \phi) = \mathbb{E}_{(\boldsymbol{\tau}, \boldsymbol{a}, r, \boldsymbol{\tau}') \in \mathcal{D}}\left[\left(r + \gamma \max_{\boldsymbol{a}'} Q_{tot}(\boldsymbol{\tau}', \boldsymbol{a}'; \theta^{-}, \phi^{-}) - Q_{tot}(\boldsymbol{\tau}, \boldsymbol{a}; \theta, \phi)\right)^2\right], \quad (5)$
where $Q_{tot}$ is computed from the individual Q-values.
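As a concrete illustration of the two representation objectives in Eqs. (3) and (4), the sketch below computes the state-reconstruction loss and the $K$-step latent prediction loss with an EMA target encoder in the spirit of SPR. The module interfaces (encoder, ema_encoder, decoder, transition_model) and the batch fields are assumptions made for illustration, not the authors' exact code.

```python
import torch
import torch.nn.functional as F

def representation_losses(encoder, ema_encoder, decoder, transition_model,
                          obs_seq, act_seq, state, K=3):
    """obs_seq: [B, K+1, n_agents, obs_dim], act_seq: [B, K, act_dim] (joint actions),
    state: [B, state_dim]. Returns (L_ae, L_m) as in Eqs. (3)-(4)."""
    z_t = encoder(obs_seq[:, 0])                        # z^t = f_theta(o^t)
    l_ae = F.mse_loss(decoder(z_t), state)              # Eq. (3): reconstruct the global state s^t

    with torch.no_grad():                               # targets come from the EMA (slow) encoder
        z_targets = [ema_encoder(obs_seq[:, k]) for k in range(1, K + 1)]

    l_m, z_hat = 0.0, z_t
    for k in range(K):                                  # K-step latent rollout with h_psi
        z_hat = transition_model(z_hat, act_seq[:, k])
        l_m = l_m + F.mse_loss(z_hat, z_targets[k])     # Eq. (4): match predicted and observed latents

    return l_ae, l_m

def ema_update(ema_encoder, encoder, tau=0.995):
    """Exponential moving average update of the target encoder's parameters."""
    for p_ema, p in zip(ema_encoder.parameters(), encoder.parameters()):
        p_ema.data.mul_(tau).add_(p.data, alpha=1 - tau)
```

Only the online encoder, decoder, and transition model receive gradients here; the EMA encoder provides slowly moving targets, which is the double-network stabilization described above.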
The computation of Q-values is dependent on the specific value-based learning algorithm. We apply our approach to prevalent methods, including VDN [57], QMIX [52], and QPLEX [64]. Moreover, the updating of the focusing network is coupled to the RL objective, making the weights produced by the focusing network task-sensitive, which also facilitates policy learning.

3.3 Representation Pre-training for Offline Learning
Considering the significance of offline learning for real-world applications, we also cover the offline setting in this paper. When applying our approach to offline communication learning, we can only access a fixed offline dataset without interacting with the environment. Thus, the difficulty lies in how to fully utilize the limited offline data to learn multi-agent communication policies efficiently. On the other hand, as the aggregation encoder is updated together with the Q-networks, the representation space $\mathcal{Z}$ it derives is, to some extent, dynamic. Usually the encoder requires a certain amount of training data to obtain a relatively stable representation space. Motivated by these analyses, we propose performing representation pre-training before optimizing the agent policy with the offline dataset. The core idea is to first obtain a relatively good aggregation encoder whose derived representation space is reasonably stable, and then to update the Q-networks with the offline dataset while jointly fine-tuning the aggregation encoder. The total workflow is depicted in Figure 3.
Figure 3: The total training workflow of MASIA for offline learning, which includes two training stages, pre-training and joint training. During pre-training, we only optimize the unsupervised representation loss with the offline data, while during joint training we optimize the representation loss and the RL loss together. Besides, we use agent($o_1^t, \ldots, o_n^t$) to denote the network inference of the agent policy, which outputs actions or Q-values.
In fact, this practice brings some advantages: (1) the representation pre-training process optimizes only the unsupervised learning objectives and updates only the aggregation encoder network, so it does not require considering the Q-divergence problem; (2) pre-training the aggregation encoder network yields a more stable aggregation representation space, which facilitates the subsequent optimization of the RL objective. With this practice, we aim to better utilize the offline dataset by considering the property of our approach. In some sense, it can also be considered a benefit of our approach's mechanism, and we adopt this practice in all offline experiments. Further ablation studies in Section 4.2.2 justify the effectiveness of this practice.
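A minimal sketch of the two-stage offline procedure is given below, reusing the td_loss, representation_losses, and ema_update sketches above: stage 1 optimizes only the unsupervised representation losses, and stage 2 jointly optimizes the TD objective on the fixed dataset while fine-tuning the encoder. The dataset interface, batch fields, step counts, loss weights, and optimizers are assumptions for illustration; in the offline experiments reported later, the plain TD loss would be replaced by a conservative objective such as ICQ's, and the aggregated representation would additionally feed the Q-networks.

```python
import itertools
import torch

def offline_train(encoder, ema_encoder, decoder, transition_model, q_net, mixer,
                  target_q_net, target_mixer, dataset,
                  pretrain_steps=20_000, joint_steps=100_000,
                  lambda_ae=1.0, lambda_m=1.0, lr=5e-4):
    # Stage 1: representation pre-training on the fixed dataset (encoder side only, no RL update).
    repr_opt = torch.optim.Adam(itertools.chain(encoder.parameters(), decoder.parameters(),
                                                transition_model.parameters()), lr=lr)
    for _ in range(pretrain_steps):
        batch = dataset.sample()                        # assumed interface: batch carries obs_seq/act_seq/state
        l_ae, l_m = representation_losses(encoder, ema_encoder, decoder, transition_model,
                                          batch["obs_seq"], batch["act_seq"], batch["state"])
        repr_opt.zero_grad()
        (lambda_ae * l_ae + lambda_m * l_m).backward()
        repr_opt.step()
        ema_update(ema_encoder, encoder)

    # Stage 2: joint training, adding the TD objective and fine-tuning everything together.
    joint_opt = torch.optim.Adam(itertools.chain(encoder.parameters(), decoder.parameters(),
                                                 transition_model.parameters(),
                                                 q_net.parameters(), mixer.parameters()), lr=lr)
    for _ in range(joint_steps):
        batch = dataset.sample()
        l_rl = td_loss(q_net, target_q_net, mixer, target_mixer, batch)   # Eq. (5) on offline data;
        # a conservative offline loss (e.g. ICQ-style) would be used in practice, and z_t would feed q_net.
        l_ae, l_m = representation_losses(encoder, ema_encoder, decoder, transition_model,
                                          batch["obs_seq"], batch["act_seq"], batch["state"])
        joint_opt.zero_grad()
        (l_rl + lambda_ae * l_ae + lambda_m * l_m).backward()
        joint_opt.step()
        ema_update(ema_encoder, encoder)                # target networks are synchronized periodically (omitted)
```

Freezing the policy-side updates during stage 1 is what lets the encoder settle into a stable representation space before the Q-networks start bootstrapping from it.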
4 Experiment
To evaluate the effectiveness of our approach in both the online and offline problem settings, we first conduct online communication learning on four benchmarks to validate the effectiveness of MASIA. Further, we build an offline dataset to support offline communication learning. Its purpose is to check the convergence performance of various multi-agent communication learning algorithms in the offline setting and to justify whether MASIA can still perform well with only a fixed offline dataset.

4.1 Online Multi-Agent Communication Learning
We conduct experiments on various benchmarks with different levels of communication requirements (the code is available at https://github.com/chenf-ai/MASIA). Specifically, we aim to answer the following questions in this section: 1) How does our method perform when compared with multiple baselines in various scenarios (Section 4.1.1)? 2) What kind of knowledge has been learned by the information aggregation encoder (Section 4.1.2)? 3) How can the information extraction module extract the most relevant information for the individual agent from the learned embedding space (Section 4.1.3)? 4) Can MASIA be applied to different value decomposition baselines to improve their coordination ability and robustness under various communication conditions (Section 4.1.4)? We compare MASIA against a variety of baselines, including communication-free methods and some state-of-the-art communication approaches. QMIX [52] is a strong communication-free baseline, and we use the implementation from PyMARL for comparison, which has shown excellent performance on diverse multi-agent benchmarks [53]. TarMAC utilizes an attention mechanism to select messages according to their relative importance; the implementation we use is provided by [67], denoted as TarMAC + QMIX. NDQ [67] aims at learning nearly decomposable Q functions via generating meaningful messages and communication minimization. TMC [82] applies a temporal smoothing technique at the message sender end to drastically reduce the amount of information exchanged between agents. For the ablation study, we design a baseline that differs only in the communication protocol and adopts a full communication paradigm, where each agent gets messages from all other teammates at each timestep, denoted as Full-Comm. We evaluate our proposed method on the multiple benchmarks shown in Figure 4.
Figure 4: Multiple benchmarks used in our experiments: (a) Hallway, (b) LBF, (c) Traffic Junction (TJ), (d) SMAC.
Figure 5: Performance comparison with baselines (MASIA (Ours), Full-Comm, NDQ, TarMAC + QMIX, TMC, QMIX) on multiple benchmarks: (a) Hallway: 4x6x10, (b) LBF: 11x11-6p-4f-s1, (c) TJ: medium, (d) SMAC: 1o10b_vs_1r, (e) Hallway: 3x5-4x6x10, (f) LBF: 20x20-10p-6f-s1, (g) TJ: hard, (h) SMAC: 1o2r_vs_4r; x-axis: timesteps, y-axis: median test win rate (%).
Hallway [67] is a cooperative environment under partial observability, where m agents are randomly initialized at different positions and required to arrive at the goal g simultaneously. We consider two scenarios with varying numbers of agents and groups, where different groups have to arrive at different times. Level-Based Foraging (LBF) [48] is another cooperative, partially observable grid-world game, where agents coordinate to collect food concurrently. Traffic Junction (TJ) [6] is a popular benchmark used to test communication ability, where many cars move along two-way roads with one or more road junctions following predefined routes; we test on the medium and hard maps. Two maps named 1o2r_vs_4r and 1o10b_vs_1r from SMAC [67] require the agents to cooperate and communicate to obtain the positions of the enemies. Our experiments are all based on the PyMARL framework, which uses SC2.4.6.2.6923. For evaluation, all results are reported as the median performance with 95% confidence intervals over 5 random seeds. Details about the benchmarks, the network architecture, and the hyper-parameter choices of our method are presented in Appendices A and B.1, respectively.
Figure 6: (a) Information aggregation visualization. Each plotted dot represents an aggregated representation. The three different colors respectively represent three different initialization situations for the enemy entities; for example, type 0 shows the case where the enemies are initialized at the lower right corner. To distinguish aggregated representations at different timesteps, we mark larger timesteps with darker shades of the dots. (b) Visualization of the variations of selected agents' focus weights $w_i^t$ in a single episode. The horizontal axis shows timesteps within a single episode and the vertical axis shows the dimensions of the aggregated representation. The weight is reflected through luminance: the darker the cell, the larger the weight.

4.1.1 Communication Performance
We first compare MASIA against multiple baselines to investigate the communication efficiency on various benchmarks. As illustrated in Figure 5, MASIA achieves the best performance with low variance on all benchmarks, indicating MASIA's strong applicability in scenarios of various difficulties. In Hallway (Figure 5a & Figure 5e), where frequent communication is required for good performance (methods without communication such as QMIX fail), other communication methods such as TarMAC, NDQ, and TMC achieve low performance or even fail in this environment. This indicates that inappropriate message generation or message selection can harm the learning process. We believe the reason why Full-Comm succeeds is that there is hardly any redundancy in agents' observations in Hallway. Our MASIA also succeeds in this environment, showing superiority over the others. The dominating performance of MASIA is even more significant in extended Hallway (Figure 5e), where agents are separated into different groups. In this environment, MASIA can help agents extract information about their teammates and learn a coordination pattern more efficiently.
In LBF (Figure 5b & Figure 5f), existing communication-based MARL methods like NDQ, Full-Comm, and TarMAC struggle due to the sparsity of rewards, especially when the food is more sparsely distributed (Figure 5f). In contrast to its performance in Hallway, QMIX performs well in LBF, which is attributed to the fact that agents can observe the grids near them, and the mixing network of QMIX can help improve the coordination ability of the fixed group of agents in the training phase. Our method achieves performance comparable to QMIX and TarMAC, showing its strong coordination ability even in sparse-reward scenarios. In Traffic Junction (Figure 5c & Figure 5g), TarMAC and NDQ have high variance due to the instability of their messages, while MASIA attains high sample efficiency and can generate steady messages since it aims to reconstruct the state. On the SMAC benchmarks (Figure 5d & Figure 5h), we test on two complex scenarios requiring communication to succeed, where one overseer is in active service to gather information about the enemies. Messages are necessary since the agents have limited sight, so the other teammates need the overseer's messages to identify the enemies' positions. Our method MASIA maintains high learning efficiency and consistently attains competitive converged performance, superior to the other baselines.

4.1.2 Insights into the Information Aggregation Encoder
To determine what kind of knowledge the encoder has learned through training, we conduct a visualization analysis on the map 1o2r_vs_4r from SMAC to demonstrate the information contained in the aggregated representation $z^t$. We project the aggregated representation vectors onto a two-dimensional plane with t-SNE [60] in Figure 6a. We take trajectories from 3 scenarios with different types of initialization under a task where agents have to seek the enemy at the start, discover the enemy, and finally battle with it for better performance. It can be observed that (1) the aggregated representations can be well distinguished by phase: projected representations in the seeking phase are far from those in the battle phase and the closing phase, which implies that our learned representations can well reflect the true states; and (2) the aggregated representations are at first divergent in the seeking phase, when the enemies have just been initialized, but become increasingly interlaced later, up to the closing phase, when the enemies have been wiped out after a fierce battle. Since enemies are highly related to decision making, this result verifies that the aggregated representations also exploit reward information. To sum up, the visualization results show that MASIA can extract valuable global information with these representations.

4.1.3 Study of Individual Information Extraction
To demonstrate the effectiveness of our information extraction module, we analyze the weights computed by the focusing network. Specifically, we select the TJ (medium) task for evaluation and compare the weights produced by two different agents. In this environment, agents can dynamically enter or leave the plane, so the set of agents present in the environment changes over time. It can be observed that agent 3 and agent 6 focus on similar areas of the aggregated representation. Especially after timestep 15, when agents 3 and 6 are in similar situations and distant from the intersection, their focuses are nearly the same.
This verifies that the global state information has been successfully extracted for individual agents. Also, at the top of Figure 6b, we plot the cosine similarity between the two agents' weight vectors against timesteps. It reveals that the similarity of their focus rises after these two agents begin to proceed in the same lane (indicated by the rendered images shown in the lower part of Figure 6b). This also conforms to the intuition that similar messages should be extracted for similar observations.

4.1.4 Generality of Our Method
We aim to verify that the proposed approach is agnostic to the sight range and to the underlying value-based MARL method. We first conduct experiments on the map 1o10b_vs_1r to show that MASIA also generalizes well to agents with limited observations. The results in Section 4.1.1 show the performance of MASIA when the agents have a sight range of 9. When we narrow the agents' sight ranges, as shown in Figure 7a, the performance of MASIA does not suffer a significant drop, because agents still receive and aggregate messages from teammates. Our information aggregation and extraction modules prevent the agent from forfeiting knowledge about the state when the sight range is further limited.
Table 1: The format of our offline dataset for multi-agent communication. Each trajectory stores, for every step t = 0, ..., T: the global state $s^t$, the team reward $r^t$, and, for every agent i = 1, ..., n, the observation $o_i^t$, the action $a_i^t$, the termination flag done$_i^t$, and receiver id$_i^t$ (the ids of the agents that receive agent i's message at step t).
Figure 7: (a) Performance comparison with varying sights on 1o10b_vs_1r (MASIA vs. QMIX with sight ranges s9, s6, s3), where sn means the sight range is n; x-axis: timesteps, y-axis: median test win rate (%). (b) The increase in average test win rate brought by MASIA (w/ vs. w/o MASIA) for QMIX, QPLEX, and VDN on the map 1o10b_vs_1r.
To show the generality of the MASIA framework, we also carry out experiments that integrate MASIA with current baselines, including VDN, QMIX, and QPLEX. As illustrated in Figure 7b, when integrated with MASIA, the performance of these baselines is vastly improved on the map 1o10b_vs_1r from SMAC. In this scenario, one overseer is in service to monitor the enemies. Without communication, the other agents have to search the map for the enemies exhaustively; with reliable communication, they can coordinate with each other and with the overseer. The results demonstrate that MASIA can efficiently aggregate the messages and improve the agents' coordination ability for these value-based MARL methods.

4.1.5 Ablation Studies for Online Learning
In our work, we propose two representation objectives to make the aggregated information representation compact and sufficient. To further justify the effectiveness of these two objectives, we conduct ablation studies for the online experiments.
Specifically, we design three ablations: (1) λ1 = 1, λ2 = 0; (2) λ1 = 0, λ2 = 1; (3) λ1 = 0, λ2 = 0, which respectively correspond to (1) using only the encoder-decoder reconstruction loss, (2) using only the latent-model prediction loss, and (3) using neither loss, where λ1 and λ2 are the coefficients of $\mathcal{L}_{ae}$ and $\mathcal{L}_{m}$ in the overall training objective. Our method (MASIA) corresponds to λ1 = 1, λ2 = 1. The experimental results for the Hallway tasks are illustrated in Figure 8.
Figure 8: Ablation experiments for online learning in Hallway: (a) 4x6x10, (b) 3x5-4x6x10, comparing λ1 = 1, λ2 = 1 against λ1 = 1, λ2 = 0; λ1 = 0, λ2 = 1; and λ1 = 0, λ2 = 0.
From the experimental results, we can see that MASIA with the two representation objectives outperforms the ablations. The two proposed representation objectives help MASIA learn to solve the task faster in both the 4x6x10 and 3x5-4x6x10 settings. In particular, some random seeds of the third ablation even fail to solve the task 3x5-4x6x10 within 2M samples, which shows the indispensable role of these two representation objectives: they offer useful guidance and accelerate task learning.

4.2 Offline Multi-Agent Communication Learning
Currently, in the field of multi-agent communicative reinforcement learning, there is still a lack of appropriate evaluation criteria for offline learning. Thus, to explore the possibility of learning multi-agent communication policies from offline datasets and to test the effectiveness of various multi-agent communication algorithms in the offline setting, we construct a set of offline datasets and conduct experiments on them. Specifically, to make our constructed dataset as close to real-world data as possible, we follow D4RL [17], which offers a benchmark for single-agent offline learning, and collect data in a similar way to build our dataset. Different from D4RL, our dataset considers the properties of Dec-POMDPs, which means that it also contains observations for each agent. Besides, as we focus on the multi-agent communication setting, the information about message receivers at each timestep is also included in the dataset. We provide a description of the whole dataset structure in Table 1. From this table, we can see that each trajectory in the dataset consists of the individual observations and individual actions of each agent. Besides, the variable receiver id$_i^t$ indicates who can receive the messages sent by agent i at timestep t. Specifically, the data are stored as vectors; for example, receiver id$_i^t$ may be stored as [0, 1], which indicates that agent 0 and agent 1 can receive the messages sent by agent i at timestep t. Considering that real-world data is typically of greatly varying quality, we design three different dataset settings: 1) expert, 2) noisy, and 3) replay. These data-collection schemes aim to systematically cover different real-world settings. More details are listed below:
• Expert Dataset. We train an online policy until convergence and greedily sample data with the final expert policy. This practice is also adopted in Fu et al. [17], Gulcehre et al. [22], and Kumar et al. [31].
• Noisy Dataset. The noisy dataset is generated with an expert policy that selects actions via the ε-greedy method with ε = 0.2. Creating a dataset from a fixed noisy policy is similar to the dataset collection process in Fujimoto et al. [18], Kumar et al. [31], and Gulcehre et al. [22].
This scheme aims to simulate real-world scenarios where experts may occasionally make mistakes.
• Replay Dataset. This dataset contains samples that represent different periods of the online policy learning; a similar scheme can be found in Agarwal et al. [1] and Fujimoto et al. [18]. To consider offline data of different qualities, we include three forms of the Replay dataset. Concretely, we save some intermediate policies during online training and divide these saved policies into three levels (Poor, Medium, and Good) based on their evaluation return values, as shown in Table 2. Then we collect three forms of datasets with different mixtures of behavior policies: 1) Replay (Poor) consists entirely of data collected by Poor-level behavior policies (Poor-level data); 2) Replay (Medium) consists of 80% Medium-level data and 20% Poor-level data; 3) Replay (Good) consists of 60% Good-level data, 20% Medium-level data, and 20% Poor-level data.
Table 2: The range of evaluation return values for the three different levels of behavior policies.
Environment | Good | Medium | Poor
Hallway: 4x6x10 | 0.75 to 1.0 | 0.5 to 0.75 | 0.0 to 0.5
Hallway: 3x5-4x6x10 | 1.5 to 2.0 | 1.0 to 1.5 | 0.0 to 1.0
LBF: 11x11-6p-4f-s1 | 0.75 to 1.0 | 0.5 to 0.75 | 0.0 to 0.5
LBF: 20x20-10p-6f-s1 | 0.75 to 1.0 | 0.5 to 0.75 | 0.0 to 0.5
SMAC: 1o2r_vs_4r | 15 to 20 | 10 to 15 | 0 to 10
SMAC: 1o10b_vs_1r | 15 to 20 | 10 to 15 | 0 to 10
TJ: easy | (-7) to 0 | (-17) to (-7) | (-600) to (-17)
TJ: medium | (-7) to 0 | (-17) to (-7) | (-600) to (-17)

4.2.1 Communication Performance
Considering the Q-divergence problem, which is well studied [35] in offline learning, we propose to combine these communication algorithms with an offline MARL algorithm, ICQ [74], which adopts a conservative learning paradigm to alleviate the Q-divergence issue. Specifically, we integrate the design of the different communication algorithms into the actor network structure, thus letting the agents learn message-based policies; that is, agents make decisions based on communication messages. We also add the experimental results of the ICQ algorithm itself as an ablation, to validate the effectiveness of agent communication. The overall results are listed in Table 3. Although Off-Policy Evaluation (OPE) [62] is commonly applied in offline RL to test the final policy performance, in order to provide a more accurate evaluation, we directly evaluate the policy in the environment and report the evaluation results. It is encouraging to find that MASIA obtains the best communication performance on most tasks under the different offline dataset settings. For example, when learning with the Expert offline dataset, MASIA achieves the best performance in all scenarios except for SMAC: 1o2r_vs_4r, on which MASIA achieves as good an average performance as Full-Comm with a slightly larger variance. Taking the task Hallway: 3x5-4x6x10 as an example, MASIA attains the highest success rates on all modes of the offline dataset, with an especially large performance advantage over all other baselines on the Replay datasets. The good performance of MASIA on the Replay (Poor) and Replay (Medium) datasets demonstrates the robustness of MASIA, as it can still learn good communication policies from low-quality data. To further compare the learning trends of MASIA and the other baselines on different tasks, we also selectively show the learning curves of the offline experiments on part of the datasets of Hallway: 3x5-4x6x10 and SMAC: 1o2r_vs_4r in Figure 9.
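Before turning to these learning curves, the following is a small, illustrative sketch of how the Expert, Noisy, and Replay datasets described above could be assembled from saved behavior policies. The environment and policy interfaces (env.reset, env.get_state, env.step, env.random_action, env.receiver_ids, env.n_agents, policy.act) and the episode counts are assumptions for illustration and do not correspond to the released collection code.

```python
import random

def collect_episode(env, policy, epsilon=0.0):
    """Roll out one episode; epsilon > 0 adds the ε-greedy noise used for the Noisy dataset."""
    episode, obs, done = [], env.reset(), False
    while not done:
        actions = [env.random_action(i) if random.random() < epsilon else policy.act(i, obs[i])
                   for i in range(env.n_agents)]
        step = {"state": env.get_state(), "obs": obs, "actions": actions,
                "receiver_ids": env.receiver_ids()}      # which agents may receive each message
        obs, reward, done = env.step(actions)
        step["reward"] = reward
        episode.append(step)
    return episode

def build_replay_dataset(env, policies_by_level, level_mix, n_episodes=1000):
    """Replay datasets mix episodes from saved behavior policies of different quality levels,
    e.g. level_mix = {"Good": 0.6, "Medium": 0.2, "Poor": 0.2} for Replay (Good)."""
    dataset = []
    for level, fraction in level_mix.items():
        for _ in range(int(fraction * n_episodes)):
            policy = random.choice(policies_by_level[level])
            dataset.append(collect_episode(env, policy))
    return dataset
```

Under this sketch, the Expert dataset corresponds to collecting greedily from the converged policy (epsilon = 0), the Noisy dataset to epsilon = 0.2, and the three Replay variants to the 100% Poor, 80%/20% Medium/Poor, and 60%/20%/20% Good/Medium/Poor mixtures described above.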
As we can see from Figure 9a and 9b, MASIA exhibits better learning speed and convergence in the learning curves of Hallway: 3x5-4x6x10. MASIA is the only method that achieves over 60% median test success rates within 4M learning samples on Replay (Poor) dataset. Similarly, on Replay (Medium) dataset of Hallway: 3x5-4x6x10, only MASIA steadily converges to a success rate of over 80%, while all other methods fail. Again on SMAC: 1o2r_vs_4r, MASIA shows faster convergence rate and better convergence performance, on par with Full-Comm algorithm. 13 Table 3: Experimental results of different methods under different benchmarks. Environment/Dataset Method ICQ+MASIA ICQ+Fullcomm ICQ+NDQ ICQ+TarMAC ICQ+TMC ICQ Hallway: 4x6x10 Expert 100.00 \u00b1 0.00 100.00 \u00b1 0.00 85.10 \u00b1 5.55 97.80 \u00b1 4.28 0.04 \u00b1 0.24 0.25 \u00b1 0.64 Noisy 99.90 \u00b1 0.07 99.60 \u00b1 0.95 99.50 \u00b1 0.74 99.80 \u00b1 0.51 14.10 \u00b1 11.62 1.55 \u00b1 2.82 Replay (Poor) 94.50 \u00b1 3.59 87.90 \u00b1 8.95 62.20 \u00b1 9.21 83.60 \u00b1 7.92 0.52 \u00b1 0.94 0.11 \u00b1 0.35 Replay (Medium) 97.40 \u00b1 3.60 94.40 \u00b1 5.07 57.40 \u00b1 8.24 92.80 \u00b1 5.51 1.39 \u00b1 1.69 0.12 \u00b1 0.40 Replay (Good) 99.70 \u00b1 0.72 91.10 \u00b1 8.46 11.30 \u00b1 13.95 97.70 \u00b1 2.59 0.33 \u00b1 1.01 0.02 \u00b1 0.17 Hallway: 3x5-4x6x10 Expert 99.70 \u00b1 0.57 98.90 \u00b1 0.48 86.50 \u00b1 5.17 98.50 \u00b1 2.26 1.18 \u00b1 1.48 0.36 \u00b1 0.73 Noisy 99.90 \u00b1 0.10 99.90 \u00b1 0.20 99.10 \u00b1 1.17 99.80 \u00b1 1.84 7.67 \u00b1 4.43 5.97 \u00b1 3.71 Replay (Poor) 73.40 \u00b1 10.63 40.80 \u00b1 20.46 10.20 \u00b1 16.61 53.40 \u00b1 26.28 0.00 \u00b1 0.00 0.00 \u00b1 0.00 Replay (Medium) 81.90 \u00b1 11.54 38.60 \u00b1 15.92 2.78 \u00b1 4.02 35.60 \u00b1 24.91 0.00 \u00b1 0.00 0.00 \u00b1 0.00 Replay (Good) 95.20 \u00b1 4.02 77.50 \u00b1 13.98 76.50 \u00b1 12.65 88.30 \u00b1 8.35 0.01 \u00b1 0.10 0.06 \u00b1 0.46 LBF: 11x11-6p-4f-s1 Expert 88.70 \u00b1 13.09 76.50 \u00b1 15.61 82.60 \u00b1 4.33 88.30 \u00b1 11.37 80.50 \u00b1 14.13 82.90 \u00b1 12.53 Noisy 89.60 \u00b1 9.42 78.40 \u00b1 14.59 80.60 \u00b1 4.93 88.80 \u00b1 10.09 82.10 \u00b1 10.94 80.10 \u00b1 11.59 Replay (Poor) 77.20 \u00b1 15.14 66.40 \u00b1 15.53 70.10 \u00b1 5.38 78.10 \u00b1 13.81 70.90 \u00b1 15.14 71.20 \u00b1 12.55 Replay (Medium) 82.70 \u00b1 10.92 70.70 \u00b1 15.93 74.80 \u00b1 5.74 80.50 \u00b1 11.75 73.60 \u00b1 13.45 75.70 \u00b1 13.13 Replay (Good) 89.10 \u00b1 11.41 76.90 \u00b1 13.43 79.30 \u00b1 4.60 87.40 \u00b1 8.96 78.20 \u00b1 12.09 78.30 \u00b1 11.92 LBF: 20x20-10p-6f-s1 Expert 54.70 \u00b1 10.39 42.90 \u00b1 10.53 45.80 \u00b1 4.27 52.50 \u00b1 10.83 46.40 \u00b1 10.26 44.90 \u00b1 10.53 Noisy 54.10 \u00b1 10.66 43.30 \u00b1 11.89 47.30 \u00b1 5.30 54.60 \u00b1 12.71 48.70 \u00b1 11.72 46.60 \u00b1 11.92 Replay (Poor) 48.90 \u00b1 12.75 36.80 \u00b1 11.49 44.50 \u00b1 4.85 44.78 \u00b1 10.23 45.10 \u00b1 12.29 45.10 \u00b1 11.72 Replay (Medium) 50.20 \u00b1 10.92 39.90 \u00b1 11.04 46.40 \u00b1 5.13 48.10 \u00b1 11.85 45.70 \u00b1 10.45 44.70 \u00b1 10.56 Replay (Good) 51.50 \u00b1 12.78 41.70 \u00b1 10.38 45.60 \u00b1 5.50 48.90 \u00b1 11.81 46.30 \u00b1 10.54 45.20 \u00b1 10.19 SMAC: 1o2r_vs_4r Expert 83.60 \u00b1 17.68 83.60 \u00b1 15.04 79.30 \u00b1 12.71 80.70 \u00b1 19.69 45.60 \u00b1 25.81 43.70 \u00b1 25.58 Noisy 81.10 \u00b1 15.08 76.70 \u00b1 21.21 77.40 \u00b1 10.59 79.90 \u00b1 18.75 57.90 \u00b1 26.88 41.50 \u00b1 25.26 Replay (Poor) 85.70 \u00b1 15.61 83.60 \u00b1 19.86 82.10 \u00b1 10.76 84.70 
\u00b1 15.56 40.30 \u00b1 25.85 45.40 \u00b1 25.34 Replay (Medium) 79.60 \u00b1 18.17 80.50 \u00b1 20.29 77.70 \u00b1 10.86 79.20 \u00b1 18.55 50.20 \u00b1 26.39 45.20 \u00b1 25.71 Replay (Good) 80.10 \u00b1 15.26 75.90 \u00b1 23.41 77.90 \u00b1 10.78 19.60 \u00b1 21.23 48.80 \u00b1 26.75 43.80 \u00b1 26.68 SMAC: 1o10b_vs_1r Expert 84.10 \u00b1 21.36 77.30 \u00b1 21.48 82.80 \u00b1 12.41 74.30 \u00b1 23.08 11.90 \u00b1 16.75 9.34 \u00b1 18.37 Noisy 84.60 \u00b1 17.77 82.40 \u00b1 18.89 78.70 \u00b1 10.19 58.20 \u00b1 26.63 13.30 \u00b1 19.76 15.60 \u00b1 18.11 Replay (Poor) 82.90 \u00b1 17.19 78.70 \u00b1 21.93 77.70 \u00b1 10.05 80.90 \u00b1 20.91 7.70 \u00b1 11.57 9.79 \u00b1 18.18 Replay (Medium) 82.90 \u00b1 21.26 82.80 \u00b1 18.54 81.70 \u00b1 8.90 85.40 \u00b1 16.14 9.19 \u00b1 14.65 7.77 \u00b1 13.97 Replay (Good) 85.60 \u00b1 17.79 80.90 \u00b1 20.71 81.10 \u00b1 10.19 80.30 \u00b1 23.97 10.30 \u00b1 17.21 6.44 \u00b1 12.16 TJ: easy Expert 96.70 \u00b1 10.04 68.60 \u00b1 29.36 89.70 \u00b1 7.51 96.70 \u00b1 12.93 92.00 \u00b1 16.31 91.20 \u00b1 18.64 Noisy 97.90 \u00b1 9.93 93.60 \u00b1 12.65 96.20 \u00b1 4.39 96.40 \u00b1 11.47 95.80 \u00b1 10.24 95.10 \u00b1 16.29 Replay (Poor) 91.10 \u00b1 19.53 75.60 \u00b1 27.95 95.30 \u00b1 5.22 92.20 \u00b1 16.53 90.20 \u00b1 20.76 95.50 \u00b1 13.71 Replay (Medium) 87.30 \u00b1 21.93 71.90 \u00b1 28.52 93.60 \u00b1 7.89 89.80 \u00b1 18.95 98.10 \u00b1 9.69 96.80 \u00b1 12.41 Replay (Good) 90.80 \u00b1 20.92 75.60 \u00b1 27.01 89.20 \u00b1 12.01 96.70 \u00b1 10.23 94.40 \u00b1 17.71 96.30 \u00b1 13.93 TJ: medium Expert 88.60 \u00b1 19.98 15.80 \u00b1 24.56 78.70 \u00b1 14.59 75.50 \u00b1 31.17 81.50 \u00b1 28.94 87.90 \u00b1 22.49 Noisy 72.70 \u00b1 29.63 48.20 \u00b1 32.49 71.30 \u00b1 17.58 56.40 \u00b1 34.31 76.10 \u00b1 28.35 78.10 \u00b1 26.14 Replay (Poor) 99.60 \u00b1 3.04 69.90 \u00b1 30.91 96.30 \u00b1 5.30 98.10 \u00b1 6.89 97.50 \u00b1 9.80 98.10 \u00b1 9.24 Replay (Medium) 89.90 \u00b1 19.21 49.20 \u00b1 33.36 83.70 \u00b1 11.69 87.80 \u00b1 21.61 91.20 \u00b1 17.81 80.10 \u00b1 27.11 Replay (Good) 92.60 \u00b1 14.81 72.20 \u00b1 30.12 90.30 \u00b1 6.89 85.90 \u00b1 23.69 96.10 \u00b1 12.88 91.10 \u00b1 20.63 4.2.2 Ablation Studies for Of\ufb02ine Learning In the previous experiments, we have proved the importance of our proposed two unsupervised representation objectives under online setting. To study how these two objectives work in the of\ufb02ine setting, we further conduct ablation studies for the of\ufb02ine experiments. The ablation results on task of Hallway: 3x5-4x6x10 with dataset Replay (Medium) and Replay (Poor) are depicted in Figure 10. Similar to the ablation studies for online experiments, here we ablate the two unsupervised objectives, respectively. From the results we can see that, ablating any of these two objectives will result in a drop in the \ufb01nal communication performance, indicating that the designed two unsupervised objectives play an indispensable role in the of\ufb02ine setting. Actually, it can be seem that the impact of ablating the unsupervised learning objectives in the of\ufb02ine setting is relatively larger than that in the online experiments. The main reason for this phenomenon is that it is more challenging to learn a good communication policy in the of\ufb02ine setting as no exploration in the environment is allowed. Besides, the problem of the dataset quality, especially for dataset Replay (Poor), strengthens the problem. 
Moreover, we also add ablation experiments for the representation pre-training practice in the offline experiments. From the results we can see that ablating the representation pre-training also harms the performance considerably, which justifies the effectiveness of this practice. More ablation experiments and analyses can be found in Appendix C.2.
Figure 9: Offline learning curves (legend: ICQ + MASIA (Ours), ICQ + NDQ, ICQ + TarMAC, ICQ + TMC, ICQ + Full-Comm, ICQ; x-axis: timesteps, y-axis: median test return): (a) Replay (Poor), (b) Replay (Medium), (c) Replay (Medium), (d) Replay (Good).
In this paper, we investigate the information representation for multi-agent communication. Previous works either focus on generating meaningful messages or design mechanisms to select the most relevant messages in their raw form, ignoring the aggregation of messages and resulting in low sample efficiency in complex scenarios. Our approach improves communication efficiency by learning a compact information representation that grounds the true state and optimizing it in a self-supervised way. Also, we apply a focusing network to extract the most relevant part for decision-making. We conduct extensive experiments on various benchmarks to verify the efficiency of the proposed method, in both online and offline settings, and further visualization results reveal why our approach works. We also release the newly built offline benchmark for multi-agent communication, hoping to facilitate the real-world application of MARL communication. For future work, more results on image inputs, and addressing the scalability issue in environments with hundreds or thousands of agents via techniques like agent grouping, would be of great interest. Also, how to obtain a communication policy that is robust to distribution shift during online policy deployment is an urgent topic.
Acknowledgments
This work is supported by the National Key Research and Development Program of China (2020AAA0107200), the National Science Foundation of China (61921006, 61876119, 62276126), the Natural Science Foundation of Jiangsu (BK20221442), and the program B for Outstanding Ph.D. candidate of Nanjing University. We thank Lichao Zhang and Chuneng Fan for their useful support, suggestions, and discussions." + }, + { + "url": "http://arxiv.org/abs/1605.06676v2", + "title": "Learning to Communicate with Deep Multi-Agent Reinforcement Learning", + "abstract": "We consider the problem of multiple agents sensing and acting in environments\nwith the goal of maximising their shared utility. In these environments, agents\nmust learn communication protocols in order to share information that is needed\nto solve the tasks. By embracing deep neural networks, we are able to\ndemonstrate end-to-end learning of protocols in complex environments inspired\nby communication riddles and multi-agent computer vision problems with partial\nobservability. We propose two approaches for learning in these domains:\nReinforced Inter-Agent Learning (RIAL) and Differentiable Inter-Agent Learning\n(DIAL).
The former uses deep Q-learning, while the latter exploits the fact\nthat, during learning, agents can backpropagate error derivatives through\n(noisy) communication channels. Hence, this approach uses centralised learning\nbut decentralised execution. Our experiments introduce new environments for\nstudying the learning of communication protocols and present a set of\nengineering innovations that are essential for success in these domains.", + "authors": "Jakob N. Foerster, Yannis M. Assael, Nando de Freitas, Shimon Whiteson", + "published": "2016-05-21", + "updated": "2016-05-24", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG", + "cs.MA" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2203.08975v1", + "title": "A Survey of Multi-Agent Reinforcement Learning with Communication", + "abstract": "Communication is an effective mechanism for coordinating the behavior of\nmultiple agents. In the field of multi-agent reinforcement learning, agents can\nimprove the overall learning performance and achieve their objectives by\ncommunication. Moreover, agents can communicate various types of messages,\neither to all agents or to specific agent groups, and through specific\nchannels. With the growing body of research work in MARL with communication\n(Comm-MARL), there is lack of a systematic and structural approach to\ndistinguish and classify existing Comm-MARL systems. In this paper, we survey\nrecent works in the Comm-MARL field and consider various aspects of\ncommunication that can play a role in the design and development of multi-agent\nreinforcement learning systems. With these aspects in mind, we propose several\ndimensions along which Comm-MARL systems can be analyzed, developed, and\ncompared.", + "authors": "Changxi Zhu, Mehdi Dastani, Shihan Wang", + "published": "2022-03-16", + "updated": "2022-03-16", + "primary_cat": "cs.MA", + "cats": [ + "cs.MA", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1810.02912v2", + "title": "Actor-Attention-Critic for Multi-Agent Reinforcement Learning", + "abstract": "Reinforcement learning in multi-agent scenarios is important for real-world\napplications but presents challenges beyond those seen in single-agent\nsettings. We present an actor-critic algorithm that trains decentralized\npolicies in multi-agent settings, using centrally computed critics that share\nan attention mechanism which selects relevant information for each agent at\nevery timestep. This attention mechanism enables more effective and scalable\nlearning in complex multi-agent environments, when compared to recent\napproaches. Our approach is applicable not only to cooperative settings with\nshared rewards, but also individualized reward settings, including adversarial\nsettings, as well as settings that do not provide global states, and it makes\nno assumptions about the action spaces of the agents. 
As such, it is flexible\nenough to be applied to most multi-agent learning problems.", + "authors": "Shariq Iqbal, Fei Sha", + "published": "2018-10-05", + "updated": "2019-05-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.MA", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2203.01387v3", + "title": "A Survey on Offline Reinforcement Learning: Taxonomy, Review, and Open Problems", + "abstract": "With the widespread adoption of deep learning, reinforcement learning (RL)\nhas experienced a dramatic increase in popularity, scaling to previously\nintractable problems, such as playing complex games from pixel observations,\nsustaining conversations with humans, and controlling robotic agents. However,\nthere is still a wide range of domains inaccessible to RL due to the high cost\nand danger of interacting with the environment. Offline RL is a paradigm that\nlearns exclusively from static datasets of previously collected interactions,\nmaking it feasible to extract policies from large and diverse training\ndatasets. Effective offline RL algorithms have a much wider range of\napplications than online RL, being particularly appealing for real-world\napplications, such as education, healthcare, and robotics. In this work, we\ncontribute with a unifying taxonomy to classify offline RL methods.\nFurthermore, we provide a comprehensive review of the latest algorithmic\nbreakthroughs in the field using a unified notation as well as a review of\nexisting benchmarks' properties and shortcomings. Additionally, we provide a\nfigure that summarizes the performance of each method and class of methods on\ndifferent dataset properties, equipping researchers with the tools to decide\nwhich type of algorithm is best suited for the problem at hand and identify\nwhich classes of algorithms look the most promising. Finally, we provide our\nperspective on open problems and propose future research directions for this\nrapidly growing field.", + "authors": "Rafael Figueiredo Prudencio, Marcos R. O. A. Maximo, Esther Luna Colombini", + "published": "2022-03-02", + "updated": "2023-04-19", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1906.00949v2", + "title": "Stabilizing Off-Policy Q-Learning via Bootstrapping Error Reduction", + "abstract": "Off-policy reinforcement learning aims to leverage experience collected from\nprior policies for sample-efficient learning. However, in practice, commonly\nused off-policy approximate dynamic programming methods based on Q-learning and\nactor-critic methods are highly sensitive to the data distribution, and can\nmake only limited progress without collecting additional on-policy data. As a\nstep towards more robust off-policy algorithms, we study the setting where the\noff-policy experience is fixed and there is no further interaction with the\nenvironment. We identify bootstrapping error as a key source of instability in\ncurrent methods. Bootstrapping error is due to bootstrapping from actions that\nlie outside of the training data distribution, and it accumulates via the\nBellman backup operator. We theoretically analyze bootstrapping error, and\ndemonstrate how carefully constraining action selection in the backup can\nmitigate it. Based on our analysis, we propose a practical algorithm,\nbootstrapping error accumulation reduction (BEAR). 
We demonstrate that BEAR is\nable to learn robustly from different off-policy distributions, including\nrandom and suboptimal demonstrations, on a range of continuous control tasks.", + "authors": "Aviral Kumar, Justin Fu, George Tucker, Sergey Levine", + "published": "2019-06-03", + "updated": "2019-11-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2006.00587v5", + "title": "Towards Understanding Cooperative Multi-Agent Q-Learning with Value Factorization", + "abstract": "Value factorization is a popular and promising approach to scaling up\nmulti-agent reinforcement learning in cooperative settings, which balances the\nlearning scalability and the representational capacity of value functions.\nHowever, the theoretical understanding of such methods is limited. In this\npaper, we formalize a multi-agent fitted Q-iteration framework for analyzing\nfactorized multi-agent Q-learning. Based on this framework, we investigate\nlinear value factorization and reveal that multi-agent Q-learning with this\nsimple decomposition implicitly realizes a powerful counterfactual credit\nassignment, but may not converge in some settings. Through further analysis, we\nfind that on-policy training or richer joint value function classes can improve\nits local or global convergence properties, respectively. Finally, to support\nour theoretical implications in practical realization, we conduct an empirical\nanalysis of state-of-the-art deep multi-agent Q-learning algorithms on didactic\nexamples and a broad set of StarCraft II unit micromanagement tasks.", + "authors": "Jianhao Wang, Zhizhou Ren, Beining Han, Jianing Ye, Chongjie Zhang", + "published": "2020-05-31", + "updated": "2021-10-31", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.MA", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2202.04868v2", + "title": "Understanding Value Decomposition Algorithms in Deep Cooperative Multi-Agent Reinforcement Learning", + "abstract": "Value function decomposition is becoming a popular rule of thumb for scaling\nup multi-agent reinforcement learning (MARL) in cooperative games. For such a\ndecomposition rule to hold, the assumption of the individual-global max (IGM)\nprinciple must be made; that is, the local maxima on the decomposed value\nfunction per every agent must amount to the global maximum on the joint value\nfunction. This principle, however, does not have to hold in general. As a\nresult, the applicability of value decomposition algorithms is concealed and\ntheir corresponding convergence properties remain unknown. In this paper, we\nmake the first effort to answer these questions. Specifically, we introduce the\nset of cooperative games in which the value decomposition methods find their\nvalidity, which is referred as decomposable games. In decomposable games, we\ntheoretically prove that applying the multi-agent fitted Q-Iteration algorithm\n(MA-FQI) will lead to an optimal Q-function. In non-decomposable games, the\nestimated Q-function by MA-FQI can still converge to the optimum under the\ncircumstance that the Q-function needs projecting into the decomposable\nfunction space at each iteration. In both settings, we consider value function\nrepresentations by practical deep neural networks and derive their\ncorresponding convergence rates. 
To summarize, our results, for the first time,\noffer theoretical insights for MARL practitioners in terms of when value\ndecomposition algorithms converge and why they perform well.", + "authors": "Zehao Dou, Jakub Grudzien Kuba, Yaodong Yang", + "published": "2022-02-10", + "updated": "2022-02-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1803.11485v2", + "title": "QMIX: Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning", + "abstract": "In many real-world settings, a team of agents must coordinate their behaviour\nwhile acting in a decentralised way. At the same time, it is often possible to\ntrain the agents in a centralised fashion in a simulated or laboratory setting,\nwhere global state information is available and communication constraints are\nlifted. Learning joint action-values conditioned on extra state information is\nan attractive way to exploit centralised learning, but the best strategy for\nthen extracting decentralised policies is unclear. Our solution is QMIX, a\nnovel value-based method that can train decentralised policies in a centralised\nend-to-end fashion. QMIX employs a network that estimates joint action-values\nas a complex non-linear combination of per-agent values that condition only on\nlocal observations. We structurally enforce that the joint-action value is\nmonotonic in the per-agent values, which allows tractable maximisation of the\njoint action-value in off-policy learning, and guarantees consistency between\nthe centralised and decentralised policies. We evaluate QMIX on a challenging\nset of StarCraft II micromanagement tasks, and show that QMIX significantly\noutperforms existing value-based multi-agent reinforcement learning methods.", + "authors": "Tabish Rashid, Mikayel Samvelyan, Christian Schroeder de Witt, Gregory Farquhar, Jakob Foerster, Shimon Whiteson", + "published": "2018-03-30", + "updated": "2018-06-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.MA", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1810.11187v2", + "title": "TarMAC: Targeted Multi-Agent Communication", + "abstract": "We propose a targeted communication architecture for multi-agent\nreinforcement learning, where agents learn both what messages to send and whom\nto address them to while performing cooperative tasks in partially-observable\nenvironments. This targeting behavior is learnt solely from downstream\ntask-specific reward without any communication supervision. We additionally\naugment this with a multi-round communication approach where agents coordinate\nvia multiple rounds of communication before taking actions in the environment.\nWe evaluate our approach on a diverse set of cooperative multi-agent tasks, of\nvarying difficulties, with varying number of agents, in a variety of\nenvironments ranging from 2D grid layouts of shapes and simulated traffic\njunctions to 3D indoor environments, and demonstrate the benefits of targeted\nand multi-round communication. 
Moreover, we show that the targeted\ncommunication strategies learned by agents are interpretable and intuitive.\nFinally, we show that our architecture can be easily extended to mixed and\ncompetitive environments, leading to improved performance and sample complexity\nover recent state-of-the-art approaches.", + "authors": "Abhishek Das, Th\u00e9ophile Gervet, Joshua Romoff, Dhruv Batra, Devi Parikh, Michael Rabbat, Joelle Pineau", + "published": "2018-10-26", + "updated": "2020-02-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.MA", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1907.05707v6", + "title": "Shapley Q-value: A Local Reward Approach to Solve Global Reward Games", + "abstract": "Cooperative game is a critical research area in the multi-agent reinforcement\nlearning (MARL). Global reward game is a subclass of cooperative games, where\nall agents aim to maximize the global reward. Credit assignment is an important\nproblem studied in the global reward game. Most of previous works stood by the\nview of non-cooperative-game theoretical framework with the shared reward\napproach, i.e., each agent being assigned a shared global reward directly.\nThis, however, may give each agent an inaccurate reward on its contribution to\nthe group, which could cause inefficient learning. To deal with this problem,\nwe i) introduce a cooperative-game theoretical framework called extended convex\ngame (ECG) that is a superset of global reward game, and ii) propose a local\nreward approach called Shapley Q-value. Shapley Q-value is able to distribute\nthe global reward, reflecting each agent's own contribution in contrast to the\nshared reward approach. Moreover, we derive an MARL algorithm called Shapley\nQ-value deep deterministic policy gradient (SQDDPG), using Shapley Q-value as\nthe critic for each agent. We evaluate SQDDPG on Cooperative Navigation,\nPrey-and-Predator and Traffic Junction, compared with the state-of-the-art\nalgorithms, e.g., MADDPG, COMA, Independent DDPG and Independent A2C. In the\nexperiments, SQDDPG shows a significant improvement on the convergence rate.\nFinally, we plot Shapley Q-value and validate the property of fair credit\nassignment.", + "authors": "Jianhong Wang, Yuan Zhang, Tae-Kyun Kim, Yunjie Gu", + "published": "2019-07-11", + "updated": "2022-10-13", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.MA" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1912.05676v1", + "title": "Biases for Emergent Communication in Multi-agent Reinforcement Learning", + "abstract": "We study the problem of emergent communication, in which language arises\nbecause speakers and listeners must communicate information in order to solve\ntasks. In temporally extended reinforcement learning domains, it has proved\nhard to learn such communication without centralized training of agents, due in\npart to a difficult joint exploration problem. We introduce inductive biases\nfor positive signalling and positive listening, which ease this problem. In a\nsimple one-step environment, we demonstrate how these biases ease the learning\nproblem. 
We also apply our methods to a more extended environment, showing that\nagents with these inductive biases achieve better performance, and analyse the\nresulting communication protocols.", + "authors": "Tom Eccles, Yoram Bachrach, Guy Lever, Angeliki Lazaridou, Thore Graepel", + "published": "2019-12-11", + "updated": "2019-12-11", + "primary_cat": "cs.MA", + "cats": [ + "cs.MA", + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1812.02783v8", + "title": "Finite-Sample Analysis For Decentralized Batch Multi-Agent Reinforcement Learning With Networked Agents", + "abstract": "Despite the increasing interest in multi-agent reinforcement learning (MARL)\nin multiple communities, understanding its theoretical foundation has long been\nrecognized as a challenging problem. In this work, we address this problem by\nproviding a finite-sample analysis for decentralized batch MARL with networked\nagents. Specifically, we consider two decentralized MARL settings, where teams\nof agents are connected by time-varying communication networks, and either\ncollaborate or compete in a zero-sum game setting, without any central\ncontroller. These settings cover many conventional MARL settings in the\nliterature. For both settings, we develop batch MARL algorithms that can be\nimplemented in a decentralized fashion, and quantify the finite-sample errors\nof the estimated action-value functions. Our error analysis captures how the\nfunction class, the number of samples within each iteration, and the number of\niterations determine the statistical accuracy of the proposed algorithms. Our\nresults, compared to the finite-sample bounds for single-agent RL, involve\nadditional error terms caused by decentralized computation, which is inherent\nin our decentralized MARL setting. This work appears to be the first\nfinite-sample analysis for batch MARL, a step towards rigorous theoretical\nunderstanding of general MARL algorithms in the finite-sample regime.", + "authors": "Kaiqing Zhang, Zhuoran Yang, Han Liu, Tong Zhang, Tamer Ba\u015far", + "published": "2018-12-06", + "updated": "2020-12-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.MA", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2109.11251v2", + "title": "Trust Region Policy Optimisation in Multi-Agent Reinforcement Learning", + "abstract": "Trust region methods rigorously enabled reinforcement learning (RL) agents to\nlearn monotonically improving policies, leading to superior performance on a\nvariety of tasks. Unfortunately, when it comes to multi-agent reinforcement\nlearning (MARL), the property of monotonic improvement may not simply apply;\nthis is because agents, even in cooperative games, could have conflicting\ndirections of policy updates. As a result, achieving a guaranteed improvement\non the joint policy where each agent acts individually remains an open\nchallenge. In this paper, we extend the theory of trust region learning to\nMARL. Central to our findings are the multi-agent advantage decomposition lemma\nand the sequential policy update scheme. Based on these, we develop\nHeterogeneous-Agent Trust Region Policy Optimisation (HATPRO) and\nHeterogeneous-Agent Proximal Policy Optimisation (HAPPO) algorithms. Unlike\nmany existing MARL algorithms, HATRPO/HAPPO do not need agents to share\nparameters, nor do they need any restrictive assumptions on decomposibility of\nthe joint value function. 
Most importantly, we justify in theory the monotonic\nimprovement property of HATRPO/HAPPO. We evaluate the proposed methods on a\nseries of Multi-Agent MuJoCo and StarCraftII tasks. Results show that HATRPO\nand HAPPO significantly outperform strong baselines such as IPPO, MAPPO and\nMADDPG on all tested tasks, therefore establishing a new state of the art.", + "authors": "Jakub Grudzien Kuba, Ruiqing Chen, Muning Wen, Ying Wen, Fanglei Sun, Jun Wang, Yaodong Yang", + "published": "2021-09-23", + "updated": "2022-04-04", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.MA" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1905.05408v1", + "title": "QTRAN: Learning to Factorize with Transformation for Cooperative Multi-Agent Reinforcement Learning", + "abstract": "We explore value-based solutions for multi-agent reinforcement learning\n(MARL) tasks in the centralized training with decentralized execution (CTDE)\nregime popularized recently. However, VDN and QMIX are representative examples\nthat use the idea of factorization of the joint action-value function into\nindividual ones for decentralized execution. VDN and QMIX address only a\nfraction of factorizable MARL tasks due to their structural constraint in\nfactorization such as additivity and monotonicity. In this paper, we propose a\nnew factorization method for MARL, QTRAN, which is free from such structural\nconstraints and takes on a new approach to transforming the original joint\naction-value function into an easily factorizable one, with the same optimal\nactions. QTRAN guarantees more general factorization than VDN or QMIX, thus\ncovering a much wider class of MARL tasks than does previous methods. Our\nexperiments for the tasks of multi-domain Gaussian-squeeze and modified\npredator-prey demonstrate QTRAN's superior performance with especially larger\nmargins in games whose payoffs penalize non-cooperative behavior more\naggressively.", + "authors": "Kyunghwan Son, Daewoo Kim, Wan Ju Kang, David Earl Hostallero, Yung Yi", + "published": "2019-05-14", + "updated": "2019-05-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.MA", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2010.14391v2", + "title": "Succinct and Robust Multi-Agent Communication With Temporal Message Control", + "abstract": "Recent studies have shown that introducing communication between agents can\nsignificantly improve overall performance in cooperative Multi-agent\nreinforcement learning (MARL). However, existing communication schemes often\nrequire agents to exchange an excessive number of messages at run-time under a\nreliable communication channel, which hinders its practicality in many\nreal-world situations. In this paper, we present \\textit{Temporal Message\nControl} (TMC), a simple yet effective approach for achieving succinct and\nrobust communication in MARL. TMC applies a temporal smoothing technique to\ndrastically reduce the amount of information exchanged between agents.\nExperiments show that TMC can significantly reduce inter-agent communication\noverhead without impacting accuracy. 
Furthermore, TMC demonstrates much better\nrobustness against transmission loss than existing approaches in lossy\nnetworking environments.", + "authors": "Sai Qian Zhang, Jieyu Lin, Qi Zhang", + "published": "2020-10-27", + "updated": "2020-12-24", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG", + "cs.MA" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2007.12322v2", + "title": "Off-Policy Multi-Agent Decomposed Policy Gradients", + "abstract": "Multi-agent policy gradient (MAPG) methods recently witness vigorous\nprogress. However, there is a significant performance discrepancy between MAPG\nmethods and state-of-the-art multi-agent value-based approaches. In this paper,\nwe investigate causes that hinder the performance of MAPG algorithms and\npresent a multi-agent decomposed policy gradient method (DOP). This method\nintroduces the idea of value function decomposition into the multi-agent\nactor-critic framework. Based on this idea, DOP supports efficient off-policy\nlearning and addresses the issue of centralized-decentralized mismatch and\ncredit assignment in both discrete and continuous action spaces. We formally\nshow that DOP critics have sufficient representational capability to guarantee\nconvergence. In addition, empirical evaluations on the StarCraft II\nmicromanagement benchmark and multi-agent particle environments demonstrate\nthat DOP significantly outperforms both state-of-the-art value-based and\npolicy-based multi-agent reinforcement learning algorithms. Demonstrative\nvideos are available at https://sites.google.com/view/dop-mapg/.", + "authors": "Yihan Wang, Beining Han, Tonghan Wang, Heng Dong, Chongjie Zhang", + "published": "2020-07-24", + "updated": "2020-10-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.MA", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2005.01643v3", + "title": "Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems", + "abstract": "In this tutorial article, we aim to provide the reader with the conceptual\ntools needed to get started on research on offline reinforcement learning\nalgorithms: reinforcement learning algorithms that utilize previously collected\ndata, without additional online data collection. Offline reinforcement learning\nalgorithms hold tremendous promise for making it possible to turn large\ndatasets into powerful decision making engines. Effective offline reinforcement\nlearning methods would be able to extract policies with the maximum possible\nutility out of the available data, thereby allowing automation of a wide range\nof decision-making domains, from healthcare and education to robotics. However,\nthe limitations of current algorithms make this difficult. 
We will aim to\nprovide the reader with an understanding of these challenges, particularly in\nthe context of modern deep reinforcement learning methods, and describe some\npotential solutions that have been explored in recent work to mitigate these\nchallenges, along with recent applications, and a discussion of perspectives on\nopen problems in the field.", + "authors": "Sergey Levine, Aviral Kumar, George Tucker, Justin Fu", + "published": "2020-05-04", + "updated": "2020-11-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1812.02900v3", + "title": "Off-Policy Deep Reinforcement Learning without Exploration", + "abstract": "Many practical applications of reinforcement learning constrain agents to\nlearn from a fixed batch of data which has already been gathered, without\noffering further possibility for data collection. In this paper, we demonstrate\nthat due to errors introduced by extrapolation, standard off-policy deep\nreinforcement learning algorithms, such as DQN and DDPG, are incapable of\nlearning with data uncorrelated to the distribution under the current policy,\nmaking them ineffective for this fixed batch setting. We introduce a novel\nclass of off-policy algorithms, batch-constrained reinforcement learning, which\nrestricts the action space in order to force the agent towards behaving close\nto on-policy with respect to a subset of the given data. We present the first\ncontinuous control deep reinforcement learning algorithm which can learn\neffectively from arbitrary, fixed batch data, and empirically demonstrate the\nquality of its behavior in several tasks.", + "authors": "Scott Fujimoto, David Meger, Doina Precup", + "published": "2018-12-07", + "updated": "2019-08-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2006.02419v2", + "title": "Emergent Multi-Agent Communication in the Deep Learning Era", + "abstract": "The ability to cooperate through language is a defining feature of humans. As\nthe perceptual, motory and planning capabilities of deep artificial networks\nincrease, researchers are studying whether they also can develop a shared\nlanguage to interact. From a scientific perspective, understanding the\nconditions under which language evolves in communities of deep agents and its\nemergent features can shed light on human language evolution. From an applied\nperspective, endowing deep networks with the ability to solve problems\ninteractively by communicating with each other and with us should make them\nmore flexible and useful in everyday life.\n This article surveys representative recent language emergence studies from\n both of these two angles.", + "authors": "Angeliki Lazaridou, Marco Baroni", + "published": "2020-06-03", + "updated": "2020-07-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1605.07736v2", + "title": "Learning Multiagent Communication with Backpropagation", + "abstract": "Many tasks in AI require the collaboration of multiple agents. Typically, the\ncommunication protocol between agents is manually specified and not altered\nduring training. In this paper we explore a simple neural model, called\nCommNet, that uses continuous communication for fully cooperative tasks. 
The\nmodel consists of multiple agents and the communication between them is learned\nalongside their policy. We apply this model to a diverse set of tasks,\ndemonstrating the ability of the agents to learn to communicate amongst\nthemselves, yielding improved performance over non-communicative agents and\nbaselines. In some cases, it is possible to interpret the language devised by\nthe agents, revealing simple but effective strategies for solving the task at\nhand.", + "authors": "Sainbayar Sukhbaatar, Arthur Szlam, Rob Fergus", + "published": "2016-05-25", + "updated": "2016-10-31", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2108.03803v2", + "title": "Mis-spoke or mis-lead: Achieving Robustness in Multi-Agent Communicative Reinforcement Learning", + "abstract": "Recent studies in multi-agent communicative reinforcement learning (MACRL)\nhave demonstrated that multi-agent coordination can be greatly improved by\nallowing communication between agents. Meanwhile, adversarial machine learning\n(ML) has shown that ML models are vulnerable to attacks. Despite the increasing\nconcern about the robustness of ML algorithms, how to achieve robust\ncommunication in multi-agent reinforcement learning has been largely neglected.\nIn this paper, we systematically explore the problem of adversarial\ncommunication in MACRL. Our main contributions are threefold. First, we propose\nan effective method to perform attacks in MACRL, by learning a model to\ngenerate optimal malicious messages. Second, we develop a defence method based\non message reconstruction, to maintain multi-agent coordination under message\nattacks. Third, we formulate the adversarial communication problem as a\ntwo-player zero-sum game and propose a game-theoretical method R-MACRL to\nimprove the worst-case defending performance. Empirical results demonstrate\nthat many state-of-the-art MACRL methods are vulnerable to message attacks, and\nour method can significantly improve their robustness.", + "authors": "Wanqi Xue, Wei Qiu, Bo An, Zinovi Rabinovich, Svetlana Obraztsova, Chai Kiat Yeo", + "published": "2021-08-09", + "updated": "2022-01-26", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.MA" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1706.02275v4", + "title": "Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments", + "abstract": "We explore deep reinforcement learning methods for multi-agent domains. We\nbegin by analyzing the difficulty of traditional algorithms in the multi-agent\ncase: Q-learning is challenged by an inherent non-stationarity of the\nenvironment, while policy gradient suffers from a variance that increases as\nthe number of agents grows. We then present an adaptation of actor-critic\nmethods that considers action policies of other agents and is able to\nsuccessfully learn policies that require complex multi-agent coordination.\nAdditionally, we introduce a training regimen utilizing an ensemble of policies\nfor each agent that leads to more robust multi-agent policies. 
We show the\nstrength of our approach compared to existing methods in cooperative as well as\ncompetitive scenarios, where agent populations are able to discover various\nphysical and informational coordination strategies.", + "authors": "Ryan Lowe, Yi Wu, Aviv Tamar, Jean Harb, Pieter Abbeel, Igor Mordatch", + "published": "2017-06-07", + "updated": "2020-03-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.NE" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2008.01062v3", + "title": "QPLEX: Duplex Dueling Multi-Agent Q-Learning", + "abstract": "We explore value-based multi-agent reinforcement learning (MARL) in the\npopular paradigm of centralized training with decentralized execution (CTDE).\nCTDE has an important concept, Individual-Global-Max (IGM) principle, which\nrequires the consistency between joint and local action selections to support\nefficient local decision-making. However, in order to achieve scalability,\nexisting MARL methods either limit representation expressiveness of their value\nfunction classes or relax the IGM consistency, which may suffer from\ninstability risk or may not perform well in complex domains. This paper\npresents a novel MARL approach, called duPLEX dueling multi-agent Q-learning\n(QPLEX), which takes a duplex dueling network architecture to factorize the\njoint value function. This duplex dueling structure encodes the IGM principle\ninto the neural network architecture and thus enables efficient value function\nlearning. Theoretical analysis shows that QPLEX achieves a complete IGM\nfunction class. Empirical experiments on StarCraft II micromanagement tasks\ndemonstrate that QPLEX significantly outperforms state-of-the-art baselines in\nboth online and offline data collection settings, and also reveal that QPLEX\nachieves high sample efficiency and can benefit from offline datasets without\nadditional online exploration.", + "authors": "Jianhao Wang, Zhizhou Ren, Terry Liu, Yang Yu, Chongjie Zhang", + "published": "2020-08-03", + "updated": "2021-10-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.MA", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1912.05304v1", + "title": "Learning Agent Communication under Limited Bandwidth by Message Pruning", + "abstract": "Communication is a crucial factor for the big multi-agent world to stay\norganized and productive. Recently, Deep Reinforcement Learning (DRL) has been\napplied to learn the communication strategy and the control policy for multiple\nagents. However, the practical \\emph{\\textbf{limited bandwidth}} in multi-agent\ncommunication has been largely ignored by the existing DRL methods.\nSpecifically, many methods keep sending messages incessantly, which consumes\ntoo much bandwidth. As a result, they are inapplicable to multi-agent systems\nwith limited bandwidth. To handle this problem, we propose a gating mechanism\nto adaptively prune less beneficial messages. We evaluate the gating mechanism\non several tasks. Experiments demonstrate that it can prune a lot of messages\nwith little impact on performance. In fact, the performance may be greatly\nimproved by pruning redundant messages. 
Moreover, the proposed gating mechanism\nis applicable to several previous methods, equipping them the ability to\naddress bandwidth restricted settings.", + "authors": "Hangyu Mao, Zhengchao Zhang, Zhen Xiao, Zhibo Gong, Yan Ni", + "published": "2019-12-03", + "updated": "2019-12-03", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG", + "cs.MA" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2006.06455v2", + "title": "Learning Individually Inferred Communication for Multi-Agent Cooperation", + "abstract": "Communication lays the foundation for human cooperation. It is also crucial\nfor multi-agent cooperation. However, existing work focuses on broadcast\ncommunication, which is not only impractical but also leads to information\nredundancy that could even impair the learning process. To tackle these\ndifficulties, we propose Individually Inferred Communication (I2C), a simple\nyet effective model to enable agents to learn a prior for agent-agent\ncommunication. The prior knowledge is learned via causal inference and realized\nby a feed-forward neural network that maps the agent's local observation to a\nbelief about who to communicate with. The influence of one agent on another is\ninferred via the joint action-value function in multi-agent reinforcement\nlearning and quantified to label the necessity of agent-agent communication.\nFurthermore, the agent policy is regularized to better exploit communicated\nmessages. Empirically, we show that I2C can not only reduce communication\noverhead but also improve the performance in a variety of multi-agent\ncooperative scenarios, comparing to existing methods. The code is available at\nhttps://github.com/PKU-AI-Edge/I2C.", + "authors": "Ziluo Ding, Tiejun Huang, Zongqing Lu", + "published": "2020-06-11", + "updated": "2021-04-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.MA", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2211.15612v2", + "title": "Learning from Good Trajectories in Offline Multi-Agent Reinforcement Learning", + "abstract": "Offline multi-agent reinforcement learning (MARL) aims to learn effective\nmulti-agent policies from pre-collected datasets, which is an important step\ntoward the deployment of multi-agent systems in real-world applications.\nHowever, in practice, each individual behavior policy that generates\nmulti-agent joint trajectories usually has a different level of how well it\nperforms. e.g., an agent is a random policy while other agents are medium\npolicies. In the cooperative game with global reward, one agent learned by\nexisting offline MARL often inherits this random policy, jeopardizing the\nperformance of the entire team. In this paper, we investigate offline MARL with\nexplicit consideration on the diversity of agent-wise trajectories and propose\na novel framework called Shared Individual Trajectories (SIT) to address this\nproblem. Specifically, an attention-based reward decomposition network assigns\nthe credit to each agent through a differentiable key-value memory mechanism in\nan offline manner. These decomposed credits are then used to reconstruct the\njoint offline datasets into prioritized experience replay with individual\ntrajectories, thereafter agents can share their good trajectories and\nconservatively train their policies with a graph attention network (GAT) based\ncritic. 
We evaluate our method in both discrete control (i.e., StarCraft II and\nmulti-agent particle environment) and continuous control (i.e, multi-agent\nmujoco). The results indicate that our method achieves significantly better\nresults in complex and mixed offline multi-agent datasets, especially when the\ndifference of data quality between individual trajectories is large.", + "authors": "Qi Tian, Kun Kuang, Furui Liu, Baoxiang Wang", + "published": "2022-11-28", + "updated": "2023-03-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2106.03400v2", + "title": "Believe What You See: Implicit Constraint Approach for Offline Multi-Agent Reinforcement Learning", + "abstract": "Learning from datasets without interaction with environments (Offline\nLearning) is an essential step to apply Reinforcement Learning (RL) algorithms\nin real-world scenarios. However, compared with the single-agent counterpart,\noffline multi-agent RL introduces more agents with the larger state and action\nspace, which is more challenging but attracts little attention. We demonstrate\ncurrent offline RL algorithms are ineffective in multi-agent systems due to the\naccumulated extrapolation error. In this paper, we propose a novel offline RL\nalgorithm, named Implicit Constraint Q-learning (ICQ), which effectively\nalleviates the extrapolation error by only trusting the state-action pairs\ngiven in the dataset for value estimation. Moreover, we extend ICQ to\nmulti-agent tasks by decomposing the joint-policy under the implicit\nconstraint. Experimental results demonstrate that the extrapolation error is\nsuccessfully controlled within a reasonable range and insensitive to the number\nof agents. We further show that ICQ achieves the state-of-the-art performance\nin the challenging multi-agent offline tasks (StarCraft II). Our code is public\nonline at https://github.com/YiqinYang/ICQ.", + "authors": "Yiqin Yang, Xiaoteng Ma, Chenghao Li, Zewu Zheng, Qiyuan Zhang, Gao Huang, Jun Yang, Qianchuan Zhao", + "published": "2021-06-07", + "updated": "2021-10-26", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1705.08926v2", + "title": "Counterfactual Multi-Agent Policy Gradients", + "abstract": "Cooperative multi-agent systems can be naturally used to model many real\nworld problems, such as network packet routing and the coordination of\nautonomous vehicles. There is a great need for new reinforcement learning\nmethods that can efficiently learn decentralised policies for such systems. To\nthis end, we propose a new multi-agent actor-critic method called\ncounterfactual multi-agent (COMA) policy gradients. COMA uses a centralised\ncritic to estimate the Q-function and decentralised actors to optimise the\nagents' policies. In addition, to address the challenges of multi-agent credit\nassignment, it uses a counterfactual baseline that marginalises out a single\nagent's action, while keeping the other agents' actions fixed. COMA also uses a\ncritic representation that allows the counterfactual baseline to be computed\nefficiently in a single forward pass. We evaluate COMA in the testbed of\nStarCraft unit micromanagement, using a decentralised variant with significant\npartial observability. 
COMA significantly improves average performance over\nother multi-agent actor-critic methods in this setting, and the best performing\nagents are competitive with state-of-the-art centralised controllers that get\naccess to the full state.", + "authors": "Jakob Foerster, Gregory Farquhar, Triantafyllos Afouras, Nantas Nardelli, Shimon Whiteson", + "published": "2017-05-24", + "updated": "2017-12-14", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.MA" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1910.05366v2", + "title": "Learning Nearly Decomposable Value Functions Via Communication Minimization", + "abstract": "Reinforcement learning encounters major challenges in multi-agent settings,\nsuch as scalability and non-stationarity. Recently, value function\nfactorization learning emerges as a promising way to address these challenges\nin collaborative multi-agent systems. However, existing methods have been\nfocusing on learning fully decentralized value functions, which are not\nefficient for tasks requiring communication. To address this limitation, this\npaper presents a novel framework for learning nearly decomposable Q-functions\n(NDQ) via communication minimization, with which agents act on their own most\nof the time but occasionally send messages to other agents in order for\neffective coordination. This framework hybridizes value function factorization\nlearning and communication learning by introducing two information-theoretic\nregularizers. These regularizers are maximizing mutual information between\nagents' action selection and communication messages while minimizing the\nentropy of messages between agents. We show how to optimize these regularizers\nin a way that is easily integrated with existing value function factorization\nmethods such as QMIX. Finally, we demonstrate that, on the StarCraft unit\nmicromanagement benchmark, our framework significantly outperforms baseline\nmethods and allows us to cut off more than $80\\%$ of communication without\nsacrificing the performance. The videos of our experiments are available at\nhttps://sites.google.com/view/ndq.", + "authors": "Tonghan Wang, Jianhao Wang, Chongyi Zheng, Chongjie Zhang", + "published": "2019-10-11", + "updated": "2020-07-19", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2111.11188v3", + "title": "Plan Better Amid Conservatism: Offline Multi-Agent Reinforcement Learning with Actor Rectification", + "abstract": "Conservatism has led to significant progress in offline reinforcement\nlearning (RL) where an agent learns from pre-collected datasets. However, as\nmany real-world scenarios involve interaction among multiple agents, it is\nimportant to resolve offline RL in the multi-agent setting. Given the recent\nsuccess of transferring online RL algorithms to the multi-agent setting, one\nmay expect that offline RL algorithms will also transfer to multi-agent\nsettings directly. Surprisingly, we empirically observe that conservative\noffline RL algorithms do not work well in the multi-agent setting -- the\nperformance degrades significantly with an increasing number of agents. Towards\nmitigating the degradation, we identify a key issue that non-concavity of the\nvalue function makes the policy gradient improvements prone to local optima.\nMultiple agents exacerbate the problem severely, since the suboptimal policy by\nany agent can lead to uncoordinated global failure. 
Following this intuition,\nwe propose a simple yet effective method, Offline Multi-Agent RL with Actor\nRectification (OMAR), which combines the first-order policy gradients and\nzeroth-order optimization methods to better optimize the conservative value\nfunctions over the actor parameters. Despite the simplicity, OMAR achieves\nstate-of-the-art results in a variety of multi-agent control tasks.", + "authors": "Ling Pan, Longbo Huang, Tengyu Ma, Huazhe Xu", + "published": "2021-11-22", + "updated": "2022-04-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2206.04745v3", + "title": "Mildly Conservative Q-Learning for Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) defines the task of learning from a\nstatic logged dataset without continually interacting with the environment. The\ndistribution shift between the learned policy and the behavior policy makes it\nnecessary for the value function to stay conservative such that\nout-of-distribution (OOD) actions will not be severely overestimated. However,\nexisting approaches, penalizing the unseen actions or regularizing with the\nbehavior policy, are too pessimistic, which suppresses the generalization of\nthe value function and hinders the performance improvement. This paper explores\nmild but enough conservatism for offline learning while not harming\ngeneralization. We propose Mildly Conservative Q-learning (MCQ), where OOD\nactions are actively trained by assigning them proper pseudo Q values. We\ntheoretically show that MCQ induces a policy that behaves at least as well as\nthe behavior policy and no erroneous overestimation will occur for OOD actions.\nExperimental results on the D4RL benchmarks demonstrate that MCQ achieves\nremarkable performance compared with prior work. Furthermore, MCQ shows\nsuperior generalization ability when transferring from offline to online, and\nsignificantly outperforms baselines. Our code is publicly available at\nhttps://github.com/dmksjfl/MCQ.", + "authors": "Jiafei Lyu, Xiaoteng Ma, Xiu Li, Zongqing Lu", + "published": "2022-06-09", + "updated": "2024-02-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1904.02874v3", + "title": "An Attentive Survey of Attention Models", + "abstract": "Attention Model has now become an important concept in neural networks that\nhas been researched within diverse application domains. This survey provides a\nstructured and comprehensive overview of the developments in modeling\nattention. In particular, we propose a taxonomy which groups existing\ntechniques into coherent categories. We review salient neural architectures in\nwhich attention has been incorporated, and discuss applications in which\nmodeling attention has shown a significant impact. We also describe how\nattention has been used to improve the interpretability of neural networks.\nFinally, we discuss some future research directions in attention. 
We hope this\nsurvey will provide a succinct introduction to attention models and guide\npractitioners while developing approaches for their applications.", + "authors": "Sneha Chaudhari, Varun Mithal, Gungor Polatkan, Rohan Ramanath", + "published": "2019-04-05", + "updated": "2021-07-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1909.02682v2", + "title": "Efficient Communication in Multi-Agent Reinforcement Learning via Variance Based Control", + "abstract": "Multi-agent reinforcement learning (MARL) has recently received considerable\nattention due to its applicability to a wide range of real-world applications.\nHowever, achieving efficient communication among agents has always been an\noverarching problem in MARL. In this work, we propose Variance Based Control\n(VBC), a simple yet efficient technique to improve communication efficiency in\nMARL. By limiting the variance of the exchanged messages between agents during\nthe training phase, the noisy component in the messages can be eliminated\neffectively, while the useful part can be preserved and utilized by the agents\nfor better performance. Our evaluation using a challenging set of StarCraft II\nbenchmarks indicates that our method achieves $2-10\\times$ lower in\ncommunication overhead than state-of-the-art MARL algorithms, while allowing\nagents to better collaborate by developing sophisticated strategies.", + "authors": "Sai Qian Zhang, Qi Zhang, Jieyu Lin", + "published": "2019-09-06", + "updated": "2019-11-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1911.11361v1", + "title": "Behavior Regularized Offline Reinforcement Learning", + "abstract": "In reinforcement learning (RL) research, it is common to assume access to\ndirect online interactions with the environment. However in many real-world\napplications, access to the environment is limited to a fixed offline dataset\nof logged experience. In such settings, standard RL algorithms have been shown\nto diverge or otherwise yield poor performance. Accordingly, recent work has\nsuggested a number of remedies to these issues. In this work, we introduce a\ngeneral framework, behavior regularized actor critic (BRAC), to empirically\nevaluate recently proposed methods as well as a number of simple baselines\nacross a variety of offline continuous control tasks. Surprisingly, we find\nthat many of the technical complexities introduced in recent methods are\nunnecessary to achieve strong performance. Additional ablations provide\ninsights into which design choices matter most in the offline RL setting.", + "authors": "Yifan Wu, George Tucker, Ofir Nachum", + "published": "2019-11-26", + "updated": "2019-11-26", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2110.15349v1", + "title": "Learning to Ground Multi-Agent Communication with Autoencoders", + "abstract": "Communication requires having a common language, a lingua franca, between\nagents. This language could emerge via a consensus process, but it may require\nmany generations of trial and error. Alternatively, the lingua franca can be\ngiven by the environment, where agents ground their language in representations\nof the observed world. 
We demonstrate a simple way to ground language in\nlearned representations, which facilitates decentralized multi-agent\ncommunication and coordination. We find that a standard representation learning\nalgorithm -- autoencoding -- is sufficient for arriving at a grounded common\nlanguage. When agents broadcast these representations, they learn to understand\nand respond to each other's utterances and achieve surprisingly strong task\nperformance across a variety of multi-agent communication environments.", + "authors": "Toru Lin, Minyoung Huh, Chris Stauffer, Ser-Nam Lim, Phillip Isola", + "published": "2021-10-28", + "updated": "2021-10-28", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL", + "cs.MA" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2111.08066v5", + "title": "Exploiting Action Impact Regularity and Exogenous State Variables for Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning -- learning a policy from a batch of data --\nis known to be hard for general MDPs. These results motivate the need to look\nat specific classes of MDPs where offline reinforcement learning might be\nfeasible. In this work, we explore a restricted class of MDPs to obtain\nguarantees for offline reinforcement learning. The key property, which we call\nAction Impact Regularity (AIR), is that actions primarily impact a part of the\nstate (an endogenous component) and have limited impact on the remaining part\nof the state (an exogenous component). AIR is a strong assumption, but it\nnonetheless holds in a number of real-world domains including financial\nmarkets. We discuss algorithms that exploit the AIR property, and provide a\ntheoretical analysis for an algorithm based on Fitted-Q Iteration. Finally, we\ndemonstrate that the algorithm outperforms existing offline reinforcement\nlearning algorithms across different data collection policies in simulated and\nreal world environments where the regularity holds.", + "authors": "Vincent Liu, James R. Wright, Martha White", + "published": "2021-11-15", + "updated": "2023-05-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1907.04543v4", + "title": "An Optimistic Perspective on Offline Reinforcement Learning", + "abstract": "Off-policy reinforcement learning (RL) using a fixed offline dataset of\nlogged interactions is an important consideration in real world applications.\nThis paper studies offline RL using the DQN replay dataset comprising the\nentire replay experience of a DQN agent on 60 Atari 2600 games. We demonstrate\nthat recent off-policy deep RL algorithms, even when trained solely on this\nfixed dataset, outperform the fully trained DQN agent. To enhance\ngeneralization in the offline setting, we present Random Ensemble Mixture\n(REM), a robust Q-learning algorithm that enforces optimal Bellman consistency\non random convex combinations of multiple Q-value estimates. Offline REM\ntrained on the DQN replay dataset surpasses strong RL baselines. Ablation\nstudies highlight the role of offline dataset size and diversity as well as the\nalgorithm choice in our positive results. Overall, the results here present an\noptimistic view that robust RL algorithms trained on sufficiently large and\ndiverse offline datasets can lead to high quality policies. 
The DQN replay\ndataset can serve as an offline RL benchmark and is open-sourced.", + "authors": "Rishabh Agarwal, Dale Schuurmans, Mohammad Norouzi", + "published": "2019-07-10", + "updated": "2020-06-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.10813v2", + "title": "A Workflow for Offline Model-Free Robotic Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) enables learning control policies by\nutilizing only prior experience, without any online interaction. This can allow\nrobots to acquire generalizable skills from large and diverse datasets, without\nany costly or unsafe online data collection. Despite recent algorithmic\nadvances in offline RL, applying these methods to real-world problems has\nproven challenging. Although offline RL methods can learn from prior data,\nthere is no clear and well-understood process for making various design\nchoices, from model architecture to algorithm hyperparameters, without actually\nevaluating the learned policies online. In this paper, our aim is to develop a\npractical workflow for using offline RL analogous to the relatively\nwell-understood workflows for supervised learning problems. To this end, we\ndevise a set of metrics and conditions that can be tracked over the course of\noffline training, and can inform the practitioner about how the algorithm and\nmodel architecture should be adjusted to improve final performance. Our\nworkflow is derived from a conceptual understanding of the behavior of\nconservative offline RL algorithms and cross-validation in supervised learning.\nWe demonstrate the efficacy of this workflow in producing effective policies\nwithout any online tuning, both in several simulated robotic learning scenarios\nand for three tasks on two distinct real robots, focusing on learning\nmanipulation skills with raw image observations with sparse binary rewards.\nExplanatory video and additional results can be found at\nsites.google.com/view/offline-rl-workflow", + "authors": "Aviral Kumar, Anikait Singh, Stephen Tian, Chelsea Finn, Sergey Levine", + "published": "2021-09-22", + "updated": "2021-09-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.01474v1", + "title": "Offline Reinforcement Learning with Causal Structured World Models", + "abstract": "Model-based methods have recently shown promising for offline reinforcement\nlearning (RL), aiming to learn good policies from historical data without\ninteracting with the environment. Previous model-based offline RL methods learn\nfully connected nets as world-models that map the states and actions to the\nnext-step states. However, it is sensible that a world-model should adhere to\nthe underlying causal effect such that it will support learning an effective\npolicy generalizing well in unseen states. In this paper, We first provide\ntheoretical results that causal world-models can outperform plain world-models\nfor offline RL by incorporating the causal structure into the generalization\nerror bound. 
We then propose a practical algorithm, oFfline mOdel-based\nreinforcement learning with CaUsal Structure (FOCUS), to illustrate the\nfeasibility of learning and leveraging causal structure in offline RL.\nExperimental results on two benchmarks show that FOCUS reconstructs the\nunderlying causal structure accurately and robustly. Consequently, it performs\nbetter than the plain model-based offline RL algorithms and other causal\nmodel-based RL algorithms.", + "authors": "Zheng-Mao Zhu, Xiong-Hui Chen, Hong-Long Tian, Kun Zhang, Yang Yu", + "published": "2022-06-03", + "updated": "2022-06-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2110.00188v2", + "title": "Offline Reinforcement Learning with Reverse Model-based Imagination", + "abstract": "In offline reinforcement learning (offline RL), one of the main challenges is\nto deal with the distributional shift between the learning policy and the given\ndataset. To address this problem, recent offline RL methods attempt to\nintroduce conservatism bias to encourage learning in high-confidence areas.\nModel-free approaches directly encode such bias into policy or value function\nlearning using conservative regularizations or special network structures, but\ntheir constrained policy search limits the generalization beyond the offline\ndataset. Model-based approaches learn forward dynamics models with conservatism\nquantifications and then generate imaginary trajectories to extend the offline\ndatasets. However, due to limited samples in offline datasets, conservatism\nquantifications often suffer from overgeneralization in out-of-support regions.\nThe unreliable conservative measures will mislead forward model-based\nimaginations to undesired areas, leading to overaggressive behaviors. To\nencourage more conservatism, we propose a novel model-based offline RL\nframework, called Reverse Offline Model-based Imagination (ROMI). We learn a\nreverse dynamics model in conjunction with a novel reverse policy, which can\ngenerate rollouts leading to the target goal states within the offline dataset.\nThese reverse imaginations provide informed data augmentation for model-free\npolicy learning and enable conservative generalization beyond the offline\ndataset. ROMI can effectively combine with off-the-shelf model-free algorithms\nto enable model-based generalization with proper conservatism. Empirical\nresults show that our method can generate more conservative behaviors and\nachieve state-of-the-art performance on offline RL benchmark tasks.", + "authors": "Jianhao Wang, Wenzhe Li, Haozhe Jiang, Guangxiang Zhu, Siyuan Li, Chongjie Zhang", + "published": "2021-10-01", + "updated": "2021-11-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2302.03086v1", + "title": "DITTO: Offline Imitation Learning with World Models", + "abstract": "We propose DITTO, an offline imitation learning algorithm which uses world\nmodels and on-policy reinforcement learning to addresses the problem of\ncovariate shift, without access to an oracle or any additional online\ninteractions. 
We discuss how world models enable offline, on-policy imitation\nlearning, and propose a simple intrinsic reward defined in the world model\nlatent space that induces imitation learning by reinforcement learning.\nTheoretically, we show that our formulation induces a divergence bound between\nexpert and learner, in turn bounding the difference in reward. We test our\nmethod on difficult Atari environments from pixels alone, and achieve\nstate-of-the-art performance in the offline setting.", + "authors": "Branton DeMoss, Paul Duckworth, Nick Hawes, Ingmar Posner", + "published": "2023-02-06", + "updated": "2023-02-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2102.05815v1", + "title": "Representation Matters: Offline Pretraining for Sequential Decision Making", + "abstract": "The recent success of supervised learning methods on ever larger offline\ndatasets has spurred interest in the reinforcement learning (RL) field to\ninvestigate whether the same paradigms can be translated to RL algorithms. This\nresearch area, known as offline RL, has largely focused on offline policy\noptimization, aiming to find a return-maximizing policy exclusively from\noffline data. In this paper, we consider a slightly different approach to\nincorporating offline data into sequential decision-making. We aim to answer\nthe question, what unsupervised objectives applied to offline datasets are able\nto learn state representations which elevate performance on downstream tasks,\nwhether those downstream tasks be online RL, imitation learning from expert\ndemonstrations, or even offline policy optimization based on the same offline\ndataset? Through a variety of experiments utilizing standard offline RL\ndatasets, we find that the use of pretraining with unsupervised learning\nobjectives can dramatically improve the performance of policy learning\nalgorithms that otherwise yield mediocre performance on their own. Extensive\nablations further provide insights into what components of these unsupervised\nobjectives -- e.g., reward prediction, continuous or discrete representations,\npretraining or finetuning -- are most important and in which settings.", + "authors": "Mengjiao Yang, Ofir Nachum", + "published": "2021-02-11", + "updated": "2021-02-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2311.03351v4", + "title": "Uni-O4: Unifying Online and Offline Deep Reinforcement Learning with Multi-Step On-Policy Optimization", + "abstract": "Combining offline and online reinforcement learning (RL) is crucial for\nefficient and safe learning. However, previous approaches treat offline and\nonline learning as separate procedures, resulting in redundant designs and\nlimited performance. We ask: Can we achieve straightforward yet effective\noffline and online learning without introducing extra conservatism or\nregularization? In this study, we propose Uni-o4, which utilizes an on-policy\nobjective for both offline and online learning. Owning to the alignment of\nobjectives in two phases, the RL agent can transfer between offline and online\nlearning seamlessly. This property enhances the flexibility of the learning\nparadigm, allowing for arbitrary combinations of pretraining, fine-tuning,\noffline, and online learning. 
In the offline phase, specifically, Uni-o4\nleverages diverse ensemble policies to address the mismatch issues between the\nestimated behavior policy and the offline dataset. Through a simple offline\npolicy evaluation (OPE) approach, Uni-o4 can achieve multi-step policy\nimprovement safely. We demonstrate that by employing the method above, the\nfusion of these two paradigms can yield superior offline initialization as well\nas stable and rapid online fine-tuning capabilities. Through real-world robot\ntasks, we highlight the benefits of this paradigm for rapid deployment in\nchallenging, previously unseen real-world environments. Additionally, through\ncomprehensive evaluations using numerous simulated benchmarks, we substantiate\nthat our method achieves state-of-the-art performance in both offline and\noffline-to-online fine-tuning learning. Our website:\nhttps://lei-kun.github.io/uni-o4/ .", + "authors": "Kun Lei, Zhengmao He, Chenhao Lu, Kaizhe Hu, Yang Gao, Huazhe Xu", + "published": "2023-11-06", + "updated": "2024-03-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2403.11574v1", + "title": "Offline Multitask Representation Learning for Reinforcement Learning", + "abstract": "We study offline multitask representation learning in reinforcement learning\n(RL), where a learner is provided with an offline dataset from different tasks\nthat share a common representation and is asked to learn the shared\nrepresentation. We theoretically investigate offline multitask low-rank RL, and\npropose a new algorithm called MORL for offline multitask representation\nlearning. Furthermore, we examine downstream RL in reward-free, offline and\nonline scenarios, where a new task is introduced to the agent that shares the\nsame representation as the upstream offline tasks. Our theoretical results\ndemonstrate the benefits of using the learned representation from the upstream\noffline task instead of directly learning the representation of the low-rank\nmodel.", + "authors": "Haque Ishfaq, Thanh Nguyen-Tang, Songtao Feng, Raman Arora, Mengdi Wang, Ming Yin, Doina Precup", + "published": "2024-03-18", + "updated": "2024-03-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.06871v3", + "title": "Improving Offline-to-Online Reinforcement Learning with Q-Ensembles", + "abstract": "Offline reinforcement learning (RL) is a learning paradigm where an agent\nlearns from a fixed dataset of experience. However, learning solely from a\nstatic dataset can limit the performance due to the lack of exploration. To\novercome it, offline-to-online RL combines offline pre-training with online\nfine-tuning, which enables the agent to further refine its policy by\ninteracting with the environment in real-time. Despite its benefits, existing\noffline-to-online RL methods suffer from performance degradation and slow\nimprovement during the online phase. To tackle these challenges, we propose a\nnovel framework called Ensemble-based Offline-to-Online (E2O) RL. By increasing\nthe number of Q-networks, we seamlessly bridge offline pre-training and online\nfine-tuning without degrading performance. Moreover, to expedite online\nperformance enhancement, we appropriately loosen the pessimism of Q-value\nestimation and incorporate ensemble-based exploration mechanisms into our\nframework. 
Experimental results demonstrate that E2O can substantially improve\nthe training stability, learning efficiency, and final performance of existing\noffline RL methods during online fine-tuning on a range of locomotion and\nnavigation tasks, significantly outperforming existing offline-to-online RL\nmethods.", + "authors": "Kai Zhao, Yi Ma, Jianye Hao, Jinyi Liu, Yan Zheng, Zhaopeng Meng", + "published": "2023-06-12", + "updated": "2023-12-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2202.02929v2", + "title": "Model-Based Offline Meta-Reinforcement Learning with Regularization", + "abstract": "Existing offline reinforcement learning (RL) methods face a few major\nchallenges, particularly the distributional shift between the learned policy\nand the behavior policy. Offline Meta-RL is emerging as a promising approach to\naddress these challenges, aiming to learn an informative meta-policy from a\ncollection of tasks. Nevertheless, as shown in our empirical studies, offline\nMeta-RL could be outperformed by offline single-task RL methods on tasks with\ngood quality of datasets, indicating that a right balance has to be delicately\ncalibrated between \"exploring\" the out-of-distribution state-actions by\nfollowing the meta-policy and \"exploiting\" the offline dataset by staying close\nto the behavior policy. Motivated by such empirical analysis, we explore\nmodel-based offline Meta-RL with regularized Policy Optimization (MerPO), which\nlearns a meta-model for efficient task structure inference and an informative\nmeta-policy for safe exploration of out-of-distribution state-actions. In\nparticular, we devise a new meta-Regularized model-based Actor-Critic (RAC)\nmethod for within-task policy optimization, as a key building block of MerPO,\nusing conservative policy evaluation and regularized policy improvement; and\nthe intrinsic tradeoff therein is achieved via striking the right balance\nbetween two regularizers, one based on the behavior policy and the other on the\nmeta-policy. We theoretically show that the learnt policy offers guaranteed\nimprovement over both the behavior policy and the meta-policy, thus ensuring\nthe performance improvement on new tasks via offline Meta-RL. Experiments\ncorroborate the superior performance of MerPO over existing offline Meta-RL\nmethods.", + "authors": "Sen Lin, Jialin Wan, Tengyu Xu, Yingbin Liang, Junshan Zhang", + "published": "2022-02-07", + "updated": "2022-07-13", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.13412v2", + "title": "CLUE: Calibrated Latent Guidance for Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) aims to learn an optimal policy from\npre-collected and labeled datasets, which eliminates the time-consuming data\ncollection in online RL. However, offline RL still bears a large burden of\nspecifying/handcrafting extrinsic rewards for each transition in the offline\ndata. As a remedy for the labor-intensive labeling, we propose to endow offline\nRL tasks with a few expert data and utilize the limited expert data to drive\nintrinsic rewards, thus eliminating the need for extrinsic rewards. 
To achieve\nthat, we introduce \\textbf{C}alibrated \\textbf{L}atent\ng\\textbf{U}idanc\\textbf{E} (CLUE), which utilizes a conditional variational\nauto-encoder to learn a latent space such that intrinsic rewards can be\ndirectly qualified over the latent space. CLUE's key idea is to align the\nintrinsic rewards consistent with the expert intention via enforcing the\nembeddings of expert data to a calibrated contextual representation. We\ninstantiate the expert-driven intrinsic rewards in sparse-reward offline RL\ntasks, offline imitation learning (IL) tasks, and unsupervised offline RL\ntasks. Empirically, we find that CLUE can effectively improve the sparse-reward\noffline RL performance, outperform the state-of-the-art offline IL baselines,\nand discover diverse skills from static reward-free offline data.", + "authors": "Jinxin Liu, Lipeng Zu, Li He, Donglin Wang", + "published": "2023-06-23", + "updated": "2023-10-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2303.17156v2", + "title": "MAHALO: Unifying Offline Reinforcement Learning and Imitation Learning from Observations", + "abstract": "We study a new paradigm for sequential decision making, called offline policy\nlearning from observations (PLfO). Offline PLfO aims to learn policies using\ndatasets with substandard qualities: 1) only a subset of trajectories is\nlabeled with rewards, 2) labeled trajectories may not contain actions, 3)\nlabeled trajectories may not be of high quality, and 4) the data may not have\nfull coverage. Such imperfection is common in real-world learning scenarios,\nand offline PLfO encompasses many existing offline learning setups, including\noffline imitation learning (IL), offline IL from observations (ILfO), and\noffline reinforcement learning (RL). In this work, we present a generic\napproach to offline PLfO, called $\\textbf{M}$odality-agnostic\n$\\textbf{A}$dversarial $\\textbf{H}$ypothesis $\\textbf{A}$daptation for\n$\\textbf{L}$earning from $\\textbf{O}$bservations (MAHALO). Built upon the\npessimism concept in offline RL, MAHALO optimizes the policy using a\nperformance lower bound that accounts for uncertainty due to the dataset's\ninsufficient coverage. We implement this idea by adversarially training\ndata-consistent critic and reward functions, which forces the learned policy to\nbe robust to data deficiency. We show that MAHALO consistently outperforms or\nmatches specialized algorithms across a variety of offline PLfO tasks in theory\nand experiments. Our code is available at https://github.com/AnqiLi/mahalo.", + "authors": "Anqi Li, Byron Boots, Ching-An Cheng", + "published": "2023-03-30", + "updated": "2023-08-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2104.06294v1", + "title": "Online and Offline Reinforcement Learning by Planning with a Learned Model", + "abstract": "Learning efficiently from small amounts of data has long been the focus of\nmodel-based reinforcement learning, both for the online case when interacting\nwith the environment and the offline case when learning from a fixed dataset.\nHowever, to date no single unified algorithm could demonstrate state-of-the-art\nresults in both settings. 
In this work, we describe the Reanalyse algorithm\nwhich uses model-based policy and value improvement operators to compute new\nimproved training targets on existing data points, allowing efficient learning\nfor data budgets varying by several orders of magnitude. We further show that\nReanalyse can also be used to learn entirely from demonstrations without any\nenvironment interactions, as in the case of offline Reinforcement Learning\n(offline RL). Combining Reanalyse with the MuZero algorithm, we introduce\nMuZero Unplugged, a single unified algorithm for any data budget, including\noffline RL. In contrast to previous work, our algorithm does not require any\nspecial adaptations for the off-policy or offline RL settings. MuZero Unplugged\nsets new state-of-the-art results in the RL Unplugged offline RL benchmark as\nwell as in the online RL benchmark of Atari in the standard 200 million frame\nsetting.", + "authors": "Julian Schrittwieser, Thomas Hubert, Amol Mandhane, Mohammadamin Barekatain, Ioannis Antonoglou, David Silver", + "published": "2021-04-13", + "updated": "2021-04-13", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2201.13425v3", + "title": "Don't Change the Algorithm, Change the Data: Exploratory Data for Offline Reinforcement Learning", + "abstract": "Recent progress in deep learning has relied on access to large and diverse\ndatasets. Such data-driven progress has been less evident in offline\nreinforcement learning (RL), because offline RL data is usually collected to\noptimize specific target tasks limiting the data's diversity. In this work, we\npropose Exploratory data for Offline RL (ExORL), a data-centric approach to\noffline RL. ExORL first generates data with unsupervised reward-free\nexploration, then relabels this data with a downstream reward before training a\npolicy with offline RL. We find that exploratory data allows vanilla off-policy\nRL algorithms, without any offline-specific modifications, to outperform or\nmatch state-of-the-art offline RL algorithms on downstream tasks. Our findings\nsuggest that data generation is as important as algorithmic advances for\noffline RL and hence requires careful consideration from the community. Code\nand data can be found at https://github.com/denisyarats/exorl .", + "authors": "Denis Yarats, David Brandfonbrener, Hao Liu, Michael Laskin, Pieter Abbeel, Alessandro Lazaric, Lerrel Pinto", + "published": "2022-01-31", + "updated": "2022-04-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2011.13885v1", + "title": "Offline Learning from Demonstrations and Unlabeled Experience", + "abstract": "Behavior cloning (BC) is often practical for robot learning because it allows\na policy to be trained offline without rewards, by supervised learning on\nexpert demonstrations. However, BC does not effectively leverage what we will\nrefer to as unlabeled experience: data of mixed and unknown quality without\nreward annotations. This unlabeled data can be generated by a variety of\nsources such as human teleoperation, scripted policies and other agents on the\nsame robot. Towards data-driven offline robot learning that can use this\nunlabeled experience, we introduce Offline Reinforced Imitation Learning\n(ORIL). 
ORIL first learns a reward function by contrasting observations from\ndemonstrator and unlabeled trajectories, then annotates all data with the\nlearned reward, and finally trains an agent via offline reinforcement learning.\nAcross a diverse set of continuous control and simulated robotic manipulation\ntasks, we show that ORIL consistently outperforms comparable BC agents by\neffectively leveraging unlabeled experience.", + "authors": "Konrad Zolna, Alexander Novikov, Ksenia Konyushkova, Caglar Gulcehre, Ziyu Wang, Yusuf Aytar, Misha Denil, Nando de Freitas, Scott Reed", + "published": "2020-11-27", + "updated": "2020-11-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.07166v1", + "title": "Regularizing a Model-based Policy Stationary Distribution to Stabilize Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) extends the paradigm of classical RL\nalgorithms to purely learning from static datasets, without interacting with\nthe underlying environment during the learning process. A key challenge of\noffline RL is the instability of policy training, caused by the mismatch\nbetween the distribution of the offline data and the undiscounted stationary\nstate-action distribution of the learned policy. To avoid the detrimental\nimpact of distribution mismatch, we regularize the undiscounted stationary\ndistribution of the current policy towards the offline data during the policy\noptimization process. Further, we train a dynamics model to both implement this\nregularization and better estimate the stationary distribution of the current\npolicy, reducing the error induced by distribution mismatch. On a wide range of\ncontinuous-control offline RL datasets, our method indicates competitive\nperformance, which validates our algorithm. The code is publicly available.", + "authors": "Shentao Yang, Yihao Feng, Shujian Zhang, Mingyuan Zhou", + "published": "2022-06-14", + "updated": "2022-06-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2307.02752v2", + "title": "Offline Reinforcement Learning with Imbalanced Datasets", + "abstract": "The prevalent use of benchmarks in current offline reinforcement learning\n(RL) research has led to a neglect of the imbalance of real-world dataset\ndistributions in the development of models. The real-world offline RL dataset\nis often imbalanced over the state space due to the challenge of exploration or\nsafety considerations. In this paper, we specify properties of imbalanced\ndatasets in offline RL, where the state coverage follows a power law\ndistribution characterized by skewed policies. Theoretically and empirically,\nwe show that typically offline RL methods based on distributional constraints,\nsuch as conservative Q-learning (CQL), are ineffective in extracting policies\nunder the imbalanced dataset. Inspired by natural intelligence, we propose a\nnovel offline RL method that utilizes the augmentation of CQL with a retrieval\nprocess to recall past related experiences, effectively alleviating the\nchallenges posed by imbalanced datasets. We evaluate our method on several\ntasks in the context of imbalanced datasets with varying levels of imbalance,\nutilizing the variant of D4RL. 
Empirical results demonstrate the superiority of\nour method over other baselines.", + "authors": "Li Jiang, Sijie Chen, Jielin Qiu, Haoran Xu, Wai Kin Chan, Zhao Ding", + "published": "2023-07-06", + "updated": "2023-07-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2310.05422v1", + "title": "Reward-Consistent Dynamics Models are Strongly Generalizable for Offline Reinforcement Learning", + "abstract": "Learning a precise dynamics model can be crucial for offline reinforcement\nlearning, which, unfortunately, has been found to be quite challenging.\nDynamics models that are learned by fitting historical transitions often\nstruggle to generalize to unseen transitions. In this study, we identify a\nhidden but pivotal factor termed dynamics reward that remains consistent across\ntransitions, offering a pathway to better generalization. Therefore, we propose\nthe idea of reward-consistent dynamics models: any trajectory generated by the\ndynamics model should maximize the dynamics reward derived from the data. We\nimplement this idea as the MOREC (Model-based Offline reinforcement learning\nwith Reward Consistency) method, which can be seamlessly integrated into\nprevious offline model-based reinforcement learning (MBRL) methods. MOREC\nlearns a generalizable dynamics reward function from offline data, which is\nsubsequently employed as a transition filter in any offline MBRL method: when\ngenerating transitions, the dynamics model generates a batch of transitions and\nselects the one with the highest dynamics reward value. On a synthetic task, we\nvisualize that MOREC has a strong generalization ability and can surprisingly\nrecover some distant unseen transitions. On 21 offline tasks in D4RL and NeoRL\nbenchmarks, MOREC improves the previous state-of-the-art performance by a\nsignificant margin, i.e., 4.6% on D4RL tasks and 25.9% on NeoRL tasks. Notably,\nMOREC is the first method that can achieve above 95% online RL performance in 6\nout of 12 D4RL tasks and 3 out of 9 NeoRL tasks.", + "authors": "Fan-Ming Luo, Tian Xu, Xingchen Cao, Yang Yu", + "published": "2023-10-09", + "updated": "2023-10-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2107.06106v2", + "title": "Conservative Offline Distributional Reinforcement Learning", + "abstract": "Many reinforcement learning (RL) problems in practice are offline, learning\npurely from observational data. A key challenge is how to ensure the learned\npolicy is safe, which requires quantifying the risk associated with different\nactions. In the online setting, distributional RL algorithms do so by learning\nthe distribution over returns (i.e., cumulative rewards) instead of the\nexpected return; beyond quantifying risk, they have also been shown to learn\nbetter representations for planning. We propose Conservative Offline\nDistributional Actor Critic (CODAC), an offline RL algorithm suitable for both\nrisk-neutral and risk-averse domains. CODAC adapts distributional RL to the\noffline setting by penalizing the predicted quantiles of the return for\nout-of-distribution actions. 
We prove that CODAC learns a conservative return\ndistribution -- in particular, for finite MDPs, CODAC converges to an uniform\nlower bound on the quantiles of the return distribution; our proof relies on a\nnovel analysis of the distributional Bellman operator. In our experiments, on\ntwo challenging robot navigation tasks, CODAC successfully learns risk-averse\npolicies using offline data collected purely from risk-neutral agents.\nFurthermore, CODAC is state-of-the-art on the D4RL MuJoCo benchmark in terms of\nboth expected and risk-sensitive performance.", + "authors": "Yecheng Jason Ma, Dinesh Jayaraman, Osbert Bastani", + "published": "2021-07-12", + "updated": "2021-10-26", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2302.13493v1", + "title": "The Provable Benefits of Unsupervised Data Sharing for Offline Reinforcement Learning", + "abstract": "Self-supervised methods have become crucial for advancing deep learning by\nleveraging data itself to reduce the need for expensive annotations. However,\nthe question of how to conduct self-supervised offline reinforcement learning\n(RL) in a principled way remains unclear. In this paper, we address this issue\nby investigating the theoretical benefits of utilizing reward-free data in\nlinear Markov Decision Processes (MDPs) within a semi-supervised setting.\n Further, we propose a novel, Provable Data Sharing algorithm (PDS) to utilize\nsuch reward-free data for offline RL. PDS uses additional penalties on the\nreward function learned from labeled data to prevent overestimation, ensuring a\nconservative algorithm. Our results on various offline RL tasks demonstrate\nthat PDS significantly improves the performance of offline RL algorithms with\nreward-free data. Overall, our work provides a promising approach to leveraging\nthe benefits of unlabeled data in offline RL while maintaining theoretical\nguarantees. We believe our findings will contribute to developing more robust\nself-supervised RL methods.", + "authors": "Hao Hu, Yiqin Yang, Qianchuan Zhao, Chongjie Zhang", + "published": "2023-02-27", + "updated": "2023-02-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2112.15578v1", + "title": "Importance of Empirical Sample Complexity Analysis for Offline Reinforcement Learning", + "abstract": "We hypothesize that empirically studying the sample complexity of offline\nreinforcement learning (RL) is crucial for the practical applications of RL in\nthe real world. Several recent works have demonstrated the ability to learn\npolicies directly from offline data. In this work, we ask the question of the\ndependency on the number of samples for learning from offline data. Our\nobjective is to emphasize that studying sample complexity for offline RL is\nimportant, and is an indicator of the usefulness of existing offline\nalgorithms. 
We propose an evaluation approach for sample complexity analysis of\noffline RL.", + "authors": "Samin Yeasar Arnob, Riashat Islam, Doina Precup", + "published": "2021-12-31", + "updated": "2021-12-31", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2404.12639v1", + "title": "Single-Task Continual Offline Reinforcement Learning", + "abstract": "In this paper, we study the continual learning problem of single-task offline\nreinforcement learning. In the past, continual reinforcement learning usually\nonly dealt with multitasking, that is, learning multiple related or unrelated\ntasks in a row, but once each learned task was learned, it was not relearned,\nbut only used in subsequent processes. However, offline reinforcement learning\ntasks require the continuously learning of multiple different datasets for the\nsame task. Existing algorithms will try their best to achieve the best results\nin each offline dataset they have learned and the skills of the network will\noverwrite the high-quality datasets that have been learned after learning the\nsubsequent poor datasets. On the other hand, if too much emphasis is placed on\nstability, the network will learn the subsequent better dataset after learning\nthe poor offline dataset, and the problem of insufficient plasticity and\nnon-learning will occur. How to design a strategy that can always preserve the\nbest performance for each state in the data that has been learned is a new\nchallenge and the focus of this study. Therefore, this study proposes a new\nalgorithm, called Ensemble Offline Reinforcement Learning Based on Experience\nReplay, which introduces multiple value networks to learn the same dataset and\njudge whether the strategy has been learned by the discrete degree of the value\nnetwork, to improve the performance of the network in single-task offline\nreinforcement learning.", + "authors": "Sibo Gai, Donglin Wang", + "published": "2024-04-19", + "updated": "2024-04-19", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "I.2.6" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.03383v2", + "title": "On the Role of Discount Factor in Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) enables effective learning from\npreviously collected data without exploration, which shows great promise in\nreal-world applications when exploration is expensive or even infeasible. The\ndiscount factor, $\\gamma$, plays a vital role in improving online RL sample\nefficiency and estimation accuracy, but the role of the discount factor in\noffline RL is not well explored. This paper examines two distinct effects of\n$\\gamma$ in offline RL with theoretical analysis, namely the regularization\neffect and the pessimism effect. On the one hand, $\\gamma$ is a regulator to\ntrade-off optimality with sample efficiency upon existing offline techniques.\nOn the other hand, lower guidance $\\gamma$ can also be seen as a way of\npessimism where we optimize the policy's performance in the worst possible\nmodels. We empirically verify the above theoretical observation with tabular\nMDPs and standard D4RL tasks. 
The results show that the discount factor plays\nan essential role in the performance of offline RL algorithms, both under small\ndata regimes upon existing offline methods and in large data regimes without\nother conservative methods.", + "authors": "Hao Hu, Yiqin Yang, Qianchuan Zhao, Chongjie Zhang", + "published": "2022-06-07", + "updated": "2022-06-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.02439v2", + "title": "DiffStitch: Boosting Offline Reinforcement Learning with Diffusion-based Trajectory Stitching", + "abstract": "In offline reinforcement learning (RL), the performance of the learned policy\nhighly depends on the quality of offline datasets. However, in many cases, the\noffline dataset contains very limited optimal trajectories, which poses a\nchallenge for offline RL algorithms as agents must acquire the ability to\ntransit to high-reward regions. To address this issue, we introduce\nDiffusion-based Trajectory Stitching (DiffStitch), a novel diffusion-based data\naugmentation pipeline that systematically generates stitching transitions\nbetween trajectories. DiffStitch effectively connects low-reward trajectories\nwith high-reward trajectories, forming globally optimal trajectories to address\nthe challenges faced by offline RL algorithms. Empirical experiments conducted\non D4RL datasets demonstrate the effectiveness of DiffStitch across RL\nmethodologies. Notably, DiffStitch demonstrates substantial enhancements in the\nperformance of one-step methods (IQL), imitation learning methods (TD3+BC), and\ntrajectory optimization methods (DT).", + "authors": "Guanghe Li, Yixiang Shan, Zhengbang Zhu, Ting Long, Weinan Zhang", + "published": "2024-02-04", + "updated": "2024-02-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2011.14379v1", + "title": "Offline Reinforcement Learning Hands-On", + "abstract": "Offline Reinforcement Learning (RL) aims to turn large datasets into powerful\ndecision-making engines without any online interactions with the environment.\nThis great promise has motivated a large amount of research that hopes to\nreplicate the success RL has experienced in simulation settings. This work\nambitions to reflect upon these efforts from a practitioner viewpoint. We start\nby discussing the dataset properties that we hypothesise can characterise the\ntype of offline methods that will be the most successful. We then verify these\nclaims through a set of experiments and designed datasets generated from\nenvironments with both discrete and continuous action spaces. We experimentally\nvalidate that diversity and high-return examples in the data are crucial to the\nsuccess of offline RL and show that behavioural cloning remains a strong\ncontender compared to its contemporaries. 
Overall, this work stands as a\ntutorial to help people build their intuition on today's offline RL methods and\ntheir applicability.", + "authors": "Louis Monier, Jakub Kmec, Alexandre Laterre, Thomas Pierrot, Valentin Courgeau, Olivier Sigaud, Karim Beguir", + "published": "2020-11-29", + "updated": "2020-11-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.03788v2", + "title": "d3rlpy: An Offline Deep Reinforcement Learning Library", + "abstract": "In this paper, we introduce d3rlpy, an open-sourced offline deep\nreinforcement learning (RL) library for Python. d3rlpy supports a set of\noffline deep RL algorithms as well as off-policy online algorithms via a fully\ndocumented plug-and-play API. To address a reproducibility issue, we conduct a\nlarge-scale benchmark with D4RL and Atari 2600 dataset to ensure implementation\nquality and provide experimental scripts and full tables of results. The d3rlpy\nsource code can be found on GitHub: \\url{https://github.com/takuseno/d3rlpy}.", + "authors": "Takuma Seno, Michita Imai", + "published": "2021-11-06", + "updated": "2022-12-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.04779v3", + "title": "Challenges and Opportunities in Offline Reinforcement Learning from Visual Observations", + "abstract": "Offline reinforcement learning has shown great promise in leveraging large\npre-collected datasets for policy learning, allowing agents to forgo\noften-expensive online data collection. However, offline reinforcement learning\nfrom visual observations with continuous action spaces remains under-explored,\nwith a limited understanding of the key challenges in this complex domain. In\nthis paper, we establish simple baselines for continuous control in the visual\ndomain and introduce a suite of benchmarking tasks for offline reinforcement\nlearning from visual observations designed to better represent the data\ndistributions present in real-world offline RL problems and guided by a set of\ndesiderata for offline RL from visual observations, including robustness to\nvisual distractions and visually identifiable changes in dynamics. Using this\nsuite of benchmarking tasks, we show that simple modifications to two popular\nvision-based online reinforcement learning algorithms, DreamerV2 and DrQ-v2,\nsuffice to outperform existing offline RL methods and establish competitive\nbaselines for continuous control in the visual domain. We rigorously evaluate\nthese algorithms and perform an empirical evaluation of the differences between\nstate-of-the-art model-based and model-free offline RL methods for continuous\ncontrol from visual observations. All code and data used in this evaluation are\nopen-sourced to facilitate progress in this domain.", + "authors": "Cong Lu, Philip J. Ball, Tim G. J. Rudner, Jack Parker-Holder, Michael A. 
Osborne, Yee Whye Teh", + "published": "2022-06-09", + "updated": "2023-07-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2010.13611v3", + "title": "OPAL: Offline Primitive Discovery for Accelerating Offline Reinforcement Learning", + "abstract": "Reinforcement learning (RL) has achieved impressive performance in a variety\nof online settings in which an agent's ability to query the environment for\ntransitions and rewards is effectively unlimited. However, in many practical\napplications, the situation is reversed: an agent may have access to large\namounts of undirected offline experience data, while access to the online\nenvironment is severely limited. In this work, we focus on this offline\nsetting. Our main insight is that, when presented with offline data composed of\na variety of behaviors, an effective way to leverage this data is to extract a\ncontinuous space of recurring and temporally extended primitive behaviors\nbefore using these primitives for downstream task learning. Primitives\nextracted in this way serve two purposes: they delineate the behaviors that are\nsupported by the data from those that are not, making them useful for avoiding\ndistributional shift in offline RL; and they provide a degree of temporal\nabstraction, which reduces the effective horizon yielding better learning in\ntheory, and improved offline RL in practice. In addition to benefiting offline\npolicy optimization, we show that performing offline primitive learning in this\nway can also be leveraged for improving few-shot imitation learning as well as\nexploration and transfer in online RL on a variety of benchmark domains.\nVisualizations are available at https://sites.google.com/view/opal-iclr", + "authors": "Anurag Ajay, Aviral Kumar, Pulkit Agrawal, Sergey Levine, Ofir Nachum", + "published": "2020-10-26", + "updated": "2021-05-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2210.13846v1", + "title": "Adaptive Behavior Cloning Regularization for Stable Offline-to-Online Reinforcement Learning", + "abstract": "Offline reinforcement learning, by learning from a fixed dataset, makes it\npossible to learn agent behaviors without interacting with the environment.\nHowever, depending on the quality of the offline dataset, such pre-trained\nagents may have limited performance and would further need to be fine-tuned\nonline by interacting with the environment. During online fine-tuning, the\nperformance of the pre-trained agent may collapse quickly due to the sudden\ndistribution shift from offline to online data. While constraints enforced by\noffline RL methods such as a behaviour cloning loss prevent this to an extent,\nthese constraints also significantly slow down online fine-tuning by forcing\nthe agent to stay close to the behavior policy. We propose to adaptively weigh\nthe behavior cloning loss during online fine-tuning based on the agent's\nperformance and training stability. Moreover, we use a randomized ensemble of Q\nfunctions to further increase the sample efficiency of online fine-tuning by\nperforming a large number of learning updates. Experiments show that the\nproposed method yields state-of-the-art offline-to-online reinforcement\nlearning performance on the popular D4RL benchmark. 
Code is available:\n\\url{https://github.com/zhaoyi11/adaptive_bc}.", + "authors": "Yi Zhao, Rinu Boney, Alexander Ilin, Juho Kannala, Joni Pajarinen", + "published": "2022-10-25", + "updated": "2022-10-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2312.05742v2", + "title": "The Generalization Gap in Offline Reinforcement Learning", + "abstract": "Despite recent progress in offline learning, these methods are still trained\nand tested on the same environment. In this paper, we compare the\ngeneralization abilities of widely used online and offline learning methods\nsuch as online reinforcement learning (RL), offline RL, sequence modeling, and\nbehavioral cloning. Our experiments show that offline learning algorithms\nperform worse on new environments than online learning ones. We also introduce\nthe first benchmark for evaluating generalization in offline learning,\ncollecting datasets of varying sizes and skill-levels from Procgen (2D video\ngames) and WebShop (e-commerce websites). The datasets contain trajectories for\na limited number of game levels or natural language instructions and at test\ntime, the agent has to generalize to new levels or instructions. Our\nexperiments reveal that existing offline learning algorithms struggle to match\nthe performance of online RL on both train and test environments. Behavioral\ncloning is a strong baseline, outperforming state-of-the-art offline RL and\nsequence modeling approaches when trained on data from multiple environments\nand tested on new ones. Finally, we find that increasing the diversity of the\ndata, rather than its size, improves performance on new environments for all\noffline learning algorithms. Our study demonstrates the limited generalization\nof current offline learning algorithms highlighting the need for more research\nin this area.", + "authors": "Ishita Mediratta, Qingfei You, Minqi Jiang, Roberta Raileanu", + "published": "2023-12-10", + "updated": "2024-03-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2307.11620v2", + "title": "Offline Multi-Agent Reinforcement Learning with Implicit Global-to-Local Value Regularization", + "abstract": "Offline reinforcement learning (RL) has received considerable attention in\nrecent years due to its attractive capability of learning policies from offline\ndatasets without environmental interactions. Despite some success in the\nsingle-agent setting, offline multi-agent RL (MARL) remains to be a challenge.\nThe large joint state-action space and the coupled multi-agent behaviors pose\nextra complexities for offline policy optimization. Most existing offline MARL\nstudies simply apply offline data-related regularizations on individual agents,\nwithout fully considering the multi-agent system at the global level. In this\nwork, we present OMIGA, a new offline multi-agent RL algorithm with implicit\nglobal-to-local value regularization. OMIGA provides a principled framework to\nconvert global-level value regularization into equivalent implicit local value\nregularizations and simultaneously enables in-sample learning, thus elegantly\nbridging multi-agent value decomposition and policy learning with offline\nregularizations.
Based on comprehensive experiments on the offline multi-agent\nMuJoCo and StarCraft II micro-management tasks, we show that OMIGA achieves\nsuperior performance over the state-of-the-art offline MARL methods in almost\nall tasks.", + "authors": "Xiangsen Wang, Haoran Xu, Yinan Zheng, Xianyuan Zhan", + "published": "2023-07-21", + "updated": "2023-11-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.MA" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2106.09119v2", + "title": "Behavioral Priors and Dynamics Models: Improving Performance and Domain Transfer in Offline RL", + "abstract": "Offline Reinforcement Learning (RL) aims to extract near-optimal policies\nfrom imperfect offline data without additional environment interactions.\nExtracting policies from diverse offline datasets has the potential to expand\nthe range of applicability of RL by making the training process safer, faster,\nand more streamlined. We investigate how to improve the performance of offline\nRL algorithms, its robustness to the quality of offline data, as well as its\ngeneralization capabilities. To this end, we introduce Offline Model-based RL\nwith Adaptive Behavioral Priors (MABE). Our algorithm is based on the finding\nthat dynamics models, which support within-domain generalization, and\nbehavioral priors, which support cross-domain generalization, are\ncomplementary. When combined together, they substantially improve the\nperformance and generalization of offline RL policies. In the widely studied\nD4RL offline RL benchmark, we find that MABE achieves higher average\nperformance compared to prior model-free and model-based algorithms. In\nexperiments that require cross-domain generalization, we find that MABE\noutperforms prior methods. Our website is available at\nhttps://sites.google.com/berkeley.edu/mabe .", + "authors": "Catherine Cang, Aravind Rajeswaran, Pieter Abbeel, Michael Laskin", + "published": "2021-06-16", + "updated": "2021-06-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2106.10411v2", + "title": "Boosting Offline Reinforcement Learning with Residual Generative Modeling", + "abstract": "Offline reinforcement learning (RL) tries to learn the near-optimal policy\nwith recorded offline experience without online exploration. Current offline RL\nresearch includes: 1) generative modeling, i.e., approximating a policy using\nfixed data; and 2) learning the state-action value function. While most\nresearch focuses on the state-action function part through reducing the\nbootstrapping error in value function approximation induced by the distribution\nshift of training data, the effects of error propagation in generative modeling\nhave been neglected. In this paper, we analyze the error in generative\nmodeling. We propose AQL (action-conditioned Q-learning), a residual generative\nmodel to reduce policy approximation error for offline RL. We show that our\nmethod can learn more accurate policy approximations in different benchmark\ndatasets. 
In addition, we show that the proposed offline RL method can learn\nmore competitive AI agents in complex control tasks under the multiplayer\nonline battle arena (MOBA) game Honor of Kings.", + "authors": "Hua Wei, Deheng Ye, Zhao Liu, Hao Wu, Bo Yuan, Qiang Fu, Wei Yang, Zhenhui Li", + "published": "2021-06-19", + "updated": "2021-06-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "68T01", + "I.2.8; I.2.1" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.08569v2", + "title": "Bootstrapped Transformer for Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) aims at learning policies from previously\ncollected static trajectory data without interacting with the real environment.\nRecent works provide a novel perspective by viewing offline RL as a generic\nsequence generation problem, adopting sequence models such as Transformer\narchitecture to model distributions over trajectories, and repurposing beam\nsearch as a planning algorithm. However, the training datasets utilized in\ngeneral offline RL tasks are quite limited and often suffer from insufficient\ndistribution coverage, which could be harmful to training sequence generation\nmodels yet has not drawn enough attention in the previous works. In this paper,\nwe propose a novel algorithm named Bootstrapped Transformer, which incorporates\nthe idea of bootstrapping and leverages the learned model to self-generate more\noffline data to further boost the sequence model training. We conduct extensive\nexperiments on two offline RL benchmarks and demonstrate that our model can\nlargely remedy the existing offline RL training limitations and beat other\nstrong baseline methods. We also analyze the generated pseudo data and the\nrevealed characteristics may shed some light on offline RL training. The codes\nare available at https://seqml.github.io/bootorl.", + "authors": "Kerong Wang, Hanye Zhao, Xufang Luo, Kan Ren, Weinan Zhang, Dongsheng Li", + "published": "2022-06-17", + "updated": "2022-10-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2302.00935v3", + "title": "Policy Expansion for Bridging Offline-to-Online Reinforcement Learning", + "abstract": "Pre-training with offline data and online fine-tuning using reinforcement\nlearning is a promising strategy for learning control policies by leveraging\nthe best of both worlds in terms of sample efficiency and performance. One\nnatural approach is to initialize the policy for online learning with the one\ntrained offline. In this work, we introduce a policy expansion scheme for this\ntask. After learning the offline policy, we use it as one candidate policy in a\npolicy set. We then expand the policy set with another policy which will be\nresponsible for further learning. The two policies will be composed in an\nadaptive manner for interacting with the environment. With this approach, the\npolicy previously learned offline is fully retained during online learning,\nthus mitigating the potential issues such as destroying the useful behaviors of\nthe offline policy in the initial stage of online learning while allowing the\noffline policy participate in the exploration naturally in an adaptive manner.\nMoreover, new useful behaviors can potentially be captured by the newly added\npolicy through learning. 
Experiments are conducted on a number of tasks and the\nresults demonstrate the effectiveness of the proposed approach.", + "authors": "Haichao Zhang, We Xu, Haonan Yu", + "published": "2023-02-02", + "updated": "2023-04-15", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2112.02845v3", + "title": "Offline Pre-trained Multi-Agent Decision Transformer: One Big Sequence Model Tackles All SMAC Tasks", + "abstract": "Offline reinforcement learning leverages previously-collected offline\ndatasets to learn optimal policies with no necessity to access the real\nenvironment. Such a paradigm is also desirable for multi-agent reinforcement\nlearning (MARL) tasks, given the increased interactions among agents and with\nthe environment. Yet, in MARL, the paradigm of offline pre-training with online\nfine-tuning has not been studied, nor datasets or benchmarks for offline MARL\nresearch are available. In this paper, we facilitate the research by providing\nlarge-scale datasets, and use them to examine the usage of the Decision\nTransformer in the context of MARL. We investigate the generalisation of MARL\noffline pre-training in the following three aspects: 1) between single agents\nand multiple agents, 2) from offline pretraining to the online fine-tuning, and\n3) to that of multiple downstream tasks with few-shot and zero-shot\ncapabilities. We start by introducing the first offline MARL dataset with\ndiverse quality levels based on the StarCraftII environment, and then propose\nthe novel architecture of multi-agent decision transformer (MADT) for effective\noffline learning. MADT leverages transformer's modelling ability of sequence\nmodelling and integrates it seamlessly with both offline and online MARL tasks.\nA crucial benefit of MADT is that it learns generalisable policies that can\ntransfer between different types of agents under different task scenarios. On\nStarCraft II offline dataset, MADT outperforms the state-of-the-art offline RL\nbaselines. When applied to online tasks, the pre-trained MADT significantly\nimproves sample efficiency, and enjoys strong performance in both few-shot and\nzero-shot cases. To our best knowledge, this is the first work that studies and\ndemonstrates the effectiveness of offline pre-trained models in terms of sample\nefficiency and generalisability enhancements in MARL.", + "authors": "Linghui Meng, Muning Wen, Yaodong Yang, Chenyang Le, Xiyun Li, Weinan Zhang, Ying Wen, Haifeng Zhang, Jun Wang, Bo Xu", + "published": "2021-12-06", + "updated": "2022-06-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.MA" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.13464v3", + "title": "When to Trust Your Simulator: Dynamics-Aware Hybrid Offline-and-Online Reinforcement Learning", + "abstract": "Learning effective reinforcement learning (RL) policies to solve real-world\ncomplex tasks can be quite challenging without a high-fidelity simulation\nenvironment. In most cases, we are only given imperfect simulators with\nsimplified dynamics, which inevitably lead to severe sim-to-real gaps in RL\npolicy learning.
The recently emerged field of offline RL provides another\npossibility to learn policies directly from pre-collected historical data.\nHowever, to achieve reasonable performance, existing offline RL algorithms need\nimpractically large offline data with sufficient state-action space coverage\nfor training. This brings up a new question: is it possible to combine learning\nfrom limited real data in offline RL and unrestricted exploration through\nimperfect simulators in online RL to address the drawbacks of both approaches?\nIn this study, we propose the Dynamics-Aware Hybrid Offline-and-Online\nReinforcement Learning (H2O) framework to provide an affirmative answer to this\nquestion. H2O introduces a dynamics-aware policy evaluation scheme, which\nadaptively penalizes the Q function learning on simulated state-action pairs\nwith large dynamics gaps, while also simultaneously allowing learning from a\nfixed real-world dataset. Through extensive simulation and real-world tasks, as\nwell as theoretical analysis, we demonstrate the superior performance of H2O\nagainst other cross-domain online and offline RL algorithms. H2O provides a\nbrand new hybrid offline-and-online RL paradigm, which can potentially shed\nlight on future RL algorithm design for solving practical real-world tasks.", + "authors": "Haoyi Niu, Shubham Sharma, Yiwen Qiu, Ming Li, Guyue Zhou, Jianming Hu, Xianyuan Zhan", + "published": "2022-06-27", + "updated": "2023-01-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.18617v1", + "title": "ELA: Exploited Level Augmentation for Offline Learning in Zero-Sum Games", + "abstract": "Offline learning has become widely used due to its ability to derive\neffective policies from offline datasets gathered by expert demonstrators\nwithout interacting with the environment directly. Recent research has explored\nvarious ways to enhance offline learning efficiency by considering the\ncharacteristics (e.g., expertise level or multiple demonstrators) of the\ndataset. However, a different approach is necessary in the context of zero-sum\ngames, where outcomes vary significantly based on the strategy of the opponent.\nIn this study, we introduce a novel approach that uses unsupervised learning\ntechniques to estimate the exploited level of each trajectory from the offline\ndataset of zero-sum games made by diverse demonstrators. Subsequently, we\nincorporate the estimated exploited level into the offline learning to maximize\nthe influence of the dominant strategy. Our method enables interpretable\nexploited level estimation in multiple zero-sum games and effectively\nidentifies dominant strategy data. 
Also, our exploited level augmented offline\nlearning significantly enhances the original offline learning algorithms\nincluding imitation learning and offline reinforcement learning for zero-sum\ngames.", + "authors": "Shiqi Lei, Kanghoon Lee, Linjing Li, Jinkyoo Park, Jiachen Li", + "published": "2024-02-28", + "updated": "2024-02-28", + "primary_cat": "cs.GT", + "cats": [ + "cs.GT", + "cs.AI", + "cs.LG", + "cs.MA" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2305.16217v2", + "title": "Beyond Reward: Offline Preference-guided Policy Optimization", + "abstract": "This study focuses on the topic of offline preference-based reinforcement\nlearning (PbRL), a variant of conventional reinforcement learning that\ndispenses with the need for online interaction or specification of reward\nfunctions. Instead, the agent is provided with fixed offline trajectories and\nhuman preferences between pairs of trajectories to extract the dynamics and\ntask information, respectively. Since the dynamics and task information are\northogonal, a naive approach would involve using preference-based reward\nlearning followed by an off-the-shelf offline RL algorithm. However, this\nrequires the separate learning of a scalar reward function, which is assumed to\nbe an information bottleneck of the learning process. To address this issue, we\npropose the offline preference-guided policy optimization (OPPO) paradigm,\nwhich models offline trajectories and preferences in a one-step process,\neliminating the need for separately learning a reward function. OPPO achieves\nthis by introducing an offline hindsight information matching objective for\noptimizing a contextual policy and a preference modeling objective for finding\nthe optimal context. OPPO further integrates a well-performing decision policy\nby optimizing the two objectives iteratively. Our empirical results demonstrate\nthat OPPO effectively models offline preferences and outperforms prior\ncompeting baselines, including offline RL algorithms performed over either true\nor pseudo reward function specifications. Our code is available on the project\nwebsite: https://sites.google.com/view/oppo-icml-2023 .", + "authors": "Yachen Kang, Diyuan Shi, Jinxin Liu, Li He, Donglin Wang", + "published": "2023-05-25", + "updated": "2023-06-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.13777v4", + "title": "Deep Generative Models for Offline Policy Learning: Tutorial, Survey, and Perspectives on Future Directions", + "abstract": "Deep generative models (DGMs) have demonstrated great success across various\ndomains, particularly in generating texts, images, and videos using models\ntrained from offline data. Similarly, data-driven decision-making and robotic\ncontrol also necessitate learning a generator function from the offline data to\nserve as the strategy or policy. In this case, applying deep generative models\nin offline policy learning exhibits great potential, and numerous studies have\nexplored in this direction. However, this field still lacks a comprehensive\nreview and so developments of different branches are relatively independent.\nThus, we provide the first systematic review on the applications of deep\ngenerative models for offline policy learning. 
In particular, we cover five\nmainstream deep generative models, including Variational Auto-Encoders,\nGenerative Adversarial Networks, Normalizing Flows, Transformers, and Diffusion\nModels, and their applications in both offline reinforcement learning (offline\nRL) and imitation learning (IL). Offline RL and IL are two main branches of\noffline policy learning and are widely-adopted techniques for sequential\ndecision-making. Specifically, for each type of DGM-based offline policy\nlearning, we distill its fundamental scheme, categorize related works based on\nthe usage of the DGM, and sort out the development process of algorithms in\nthat field. Subsequent to the main content, we provide in-depth discussions on\ndeep generative models and offline policy learning as a summary, based on which\nwe present our perspectives on future research directions. This work offers a\nhands-on reference for the research progress in deep generative models for\noffline policy learning, and aims to inspire improved DGM-based offline RL or\nIL algorithms. For convenience, we maintain a paper list on\nhttps://github.com/LucasCJYSDL/DGMs-for-Offline-Policy-Learning.", + "authors": "Jiayu Chen, Bhargav Ganguly, Yang Xu, Yongsheng Mei, Tian Lan, Vaneet Aggarwal", + "published": "2024-02-21", + "updated": "2024-02-26", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2404.02429v1", + "title": "AD4RL: Autonomous Driving Benchmarks for Offline Reinforcement Learning with Value-based Dataset", + "abstract": "Offline reinforcement learning has emerged as a promising technology by\nenhancing its practicality through the use of pre-collected large datasets.\nDespite its practical benefits, most algorithm development research in offline\nreinforcement learning still relies on game tasks with synthetic datasets. To\naddress such limitations, this paper provides autonomous driving datasets and\nbenchmarks for offline reinforcement learning research. We provide 19 datasets,\nincluding real-world human driver's datasets, and seven popular offline\nreinforcement learning algorithms in three realistic driving scenarios. We also\nprovide a unified decision-making process model that can operate effectively\nacross different scenarios, serving as a reference framework in algorithm\ndesign. Our research lays the groundwork for further collaborations in the\ncommunity to explore practical aspects of existing reinforcement learning\nmethods. Dataset and codes can be found in https://sites.google.com/view/ad4rl.", + "authors": "Dongsu Lee, Chanin Eom, Minhae Kwon", + "published": "2024-04-03", + "updated": "2024-04-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2312.09844v2", + "title": "Small Dataset, Big Gains: Enhancing Reinforcement Learning by Offline Pre-Training with Model Based Augmentation", + "abstract": "Offline reinforcement learning leverages pre-collected datasets of\ntransitions to train policies. It can serve as effective initialization for\nonline algorithms, enhancing sample efficiency and speeding up convergence.\nHowever, when such datasets are limited in size and quality, offline\npre-training can produce sub-optimal policies and lead to degraded online\nreinforcement learning performance. 
In this paper we propose a model-based data\naugmentation strategy to maximize the benefits of offline reinforcement\nlearning pre-training and reduce the scale of data needed to be effective. Our\napproach leverages a world model of the environment trained on the offline\ndataset to augment states during offline pre-training. We evaluate our approach\non a variety of MuJoCo robotic tasks and our results show it can jump-start\nonline fine-tuning and substantially reduce - in some cases by an order of\nmagnitude - the required number of environment interactions.", + "authors": "Girolamo Macaluso, Alessandro Sestini, Andrew D. Bagdanov", + "published": "2023-12-15", + "updated": "2023-12-19", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2203.05804v1", + "title": "Near-optimal Offline Reinforcement Learning with Linear Representation: Leveraging Variance Information with Pessimism", + "abstract": "Offline reinforcement learning, which seeks to utilize offline/historical\ndata to optimize sequential decision-making strategies, has gained surging\nprominence in recent studies. Due to the advantage that appropriate function\napproximators can help mitigate the sample complexity burden in modern\nreinforcement learning problems, existing endeavors usually enforce powerful\nfunction representation models (e.g. neural networks) to learn the optimal\npolicies. However, a precise understanding of the statistical limits with\nfunction representations, remains elusive, even when such a representation is\nlinear.\n Towards this goal, we study the statistical limits of offline reinforcement\nlearning with linear model representations. To derive the tight offline\nlearning bound, we design the variance-aware pessimistic value iteration\n(VAPVI), which adopts the conditional variance information of the value\nfunction for time-inhomogeneous episodic linear Markov decision processes\n(MDPs). VAPVI leverages estimated variances of the value functions to reweight\nthe Bellman residuals in the least-square pessimistic value iteration and\nprovides improved offline learning bounds over the best-known existing results\n(whereas the Bellman residuals are equally weighted by design). More\nimportantly, our learning bounds are expressed in terms of system quantities,\nwhich provide natural instance-dependent characterizations that previous\nresults are short of. We hope our results draw a clearer picture of what\noffline learning should look like when linear representations are provided.", + "authors": "Ming Yin, Yaqi Duan, Mengdi Wang, Yu-Xiang Wang", + "published": "2022-03-11", + "updated": "2022-03-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2301.12876v2", + "title": "Guiding Online Reinforcement Learning with Action-Free Offline Pretraining", + "abstract": "Offline RL methods have been shown to reduce the need for environment\ninteraction by training agents using offline collected episodes. However, these\nmethods typically require action information to be logged during data\ncollection, which can be difficult or even impossible in some practical cases.\nIn this paper, we investigate the potential of using action-free offline\ndatasets to improve online reinforcement learning, name this problem\nReinforcement Learning with Action-Free Offline Pretraining (AFP-RL). 
We\nintroduce Action-Free Guide (AF-Guide), a method that guides online training by\nextracting knowledge from action-free offline datasets. AF-Guide consists of an\nAction-Free Decision Transformer (AFDT) implementing a variant of Upside-Down\nReinforcement Learning. It learns to plan the next states from the offline\ndataset, and a Guided Soft Actor-Critic (Guided SAC) that learns online with\nguidance from AFDT. Experimental results show that AF-Guide can improve sample\nefficiency and performance in online training thanks to the knowledge from the\naction-free offline dataset. Code is available at\nhttps://github.com/Vision-CAIR/AF-Guide.", + "authors": "Deyao Zhu, Yuhui Wang, J\u00fcrgen Schmidhuber, Mohamed Elhoseiny", + "published": "2023-01-30", + "updated": "2023-03-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2404.10393v1", + "title": "Offline Trajectory Generalization for Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) aims to learn policies from static\ndatasets of previously collected trajectories. Existing methods for offline RL\neither constrain the learned policy to the support of offline data or utilize\nmodel-based virtual environments to generate simulated rollouts. However, these\nmethods suffer from (i) poor generalization to unseen states; and (ii) trivial\nimprovement from low-qualified rollout simulation. In this paper, we propose\noffline trajectory generalization through world transformers for offline\nreinforcement learning (OTTO). Specifically, we use casual Transformers, a.k.a.\nWorld Transformers, to predict state dynamics and the immediate reward. Then we\npropose four strategies to use World Transformers to generate high-rewarded\ntrajectory simulation by perturbing the offline data. Finally, we jointly use\noffline data with simulated data to train an offline RL algorithm. OTTO serves\nas a plug-in module and can be integrated with existing offline RL methods to\nenhance them with better generalization capability of transformers and\nhigh-rewarded data augmentation. Conducting extensive experiments on D4RL\nbenchmark datasets, we verify that OTTO significantly outperforms\nstate-of-the-art offline RL methods.", + "authors": "Ziqi Zhao, Zhaochun Ren, Liu Yang, Fajie Yuan, Pengjie Ren, Zhumin Chen, jun Ma, Xin Xin", + "published": "2024-04-16", + "updated": "2024-04-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2305.03360v1", + "title": "A Survey on Offline Model-Based Reinforcement Learning", + "abstract": "Model-based approaches are becoming increasingly popular in the field of\noffline reinforcement learning, with high potential in real-world applications\ndue to the model's capability of thoroughly utilizing the large historical\ndatasets available with supervised learning techniques. This paper presents a\nliterature review of recent work in offline model-based reinforcement learning,\na field that utilizes model-based approaches in offline reinforcement learning.\nThe survey provides a brief overview of the concepts and recent developments in\nboth offline reinforcement learning and model-based reinforcement learning, and\ndiscuss the intersection of the two fields. 
We then presents key relevant\npapers in the field of offline model-based reinforcement learning and discuss\ntheir methods, particularly their approaches in solving the issue of\ndistributional shift, the main problem faced by all current offline model-based\nreinforcement learning methods. We further discuss key challenges faced by the\nfield, and suggest possible directions for future work.", + "authors": "Haoyang He", + "published": "2023-05-05", + "updated": "2023-05-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.SY", + "eess.SY", + "I.2.6; I.2.8" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2102.05612v1", + "title": "Personalization for Web-based Services using Offline Reinforcement Learning", + "abstract": "Large-scale Web-based services present opportunities for improving UI\npolicies based on observed user interactions. We address challenges of learning\nsuch policies through model-free offline Reinforcement Learning (RL) with\noff-policy training. Deployed in a production system for user authentication in\na major social network, it significantly improves long-term objectives. We\narticulate practical challenges, compare several ML techniques, provide\ninsights on training and evaluation of RL models, and discuss generalizations.", + "authors": "Pavlos Athanasios Apostolopoulos, Zehui Wang, Hanson Wang, Chad Zhou, Kittipat Virochsiri, Norm Zhou, Igor L. Markov", + "published": "2021-02-10", + "updated": "2021-02-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.HC", + "cs.SE" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2310.08566v1", + "title": "Transformers as Decision Makers: Provable In-Context Reinforcement Learning via Supervised Pretraining", + "abstract": "Large transformer models pretrained on offline reinforcement learning\ndatasets have demonstrated remarkable in-context reinforcement learning (ICRL)\ncapabilities, where they can make good decisions when prompted with interaction\ntrajectories from unseen environments. However, when and how transformers can\nbe trained to perform ICRL have not been theoretically well-understood. In\nparticular, it is unclear which reinforcement-learning algorithms transformers\ncan perform in context, and how distribution mismatch in offline training data\naffects the learned algorithms. This paper provides a theoretical framework\nthat analyzes supervised pretraining for ICRL. This includes two recently\nproposed training methods -- algorithm distillation and decision-pretrained\ntransformers. First, assuming model realizability, we prove the\nsupervised-pretrained transformer will imitate the conditional expectation of\nthe expert algorithm given the observed trajectory. The generalization error\nwill scale with model capacity and a distribution divergence factor between the\nexpert and offline algorithms. Second, we show transformers with ReLU attention\ncan efficiently approximate near-optimal online reinforcement learning\nalgorithms like LinUCB and Thompson sampling for stochastic linear bandits, and\nUCB-VI for tabular Markov decision processes. 
This provides the first\nquantitative analysis of the ICRL capabilities of transformers pretrained from\noffline trajectories.", + "authors": "Licong Lin, Yu Bai, Song Mei", + "published": "2023-10-12", + "updated": "2023-10-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL", + "math.ST", + "stat.ML", + "stat.TH" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.13630v1", + "title": "Offline Skill Graph (OSG): A Framework for Learning and Planning using Offline Reinforcement Learning Skills", + "abstract": "Reinforcement Learning has received wide interest due to its success in\ncompetitive games. Yet, its adoption in everyday applications is limited (e.g.\nindustrial, home, healthcare, etc.). In this paper, we address this limitation\nby presenting a framework for planning over offline skills and solving complex\ntasks in real-world environments. Our framework is comprised of three modules\nthat together enable the agent to learn from previously collected data and\ngeneralize over it to solve long-horizon tasks. We demonstrate our approach by\ntesting it on a robotic arm that is required to solve complex tasks.", + "authors": "Ben-ya Halevy, Yehudit Aperstein, Dotan Di Castro", + "published": "2023-06-23", + "updated": "2023-06-23", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.AI", + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2110.10905v1", + "title": "Efficient Robotic Manipulation Through Offline-to-Online Reinforcement Learning and Goal-Aware State Information", + "abstract": "End-to-end learning robotic manipulation with high data efficiency is one of\nthe key challenges in robotics. The latest methods that utilize human\ndemonstration data and unsupervised representation learning has proven to be a\npromising direction to improve RL learning efficiency. The use of demonstration\ndata also allows \"warming-up\" the RL policies using offline data with imitation\nlearning or the recently emerged offline reinforcement learning algorithms.\nHowever, existing works often treat offline policy learning and online\nexploration as two separate processes, which are often accompanied by severe\nperformance drop during the offline-to-online transition. Furthermore, many\nrobotic manipulation tasks involve complex sub-task structures, which are very\nchallenging to be solved in RL with sparse reward. In this work, we propose a\nunified offline-to-online RL framework that resolves the transition performance\ndrop issue. Additionally, we introduce goal-aware state information to the RL\nagent, which can greatly reduce task complexity and accelerate policy learning.\nCombined with an advanced unsupervised representation learning module, our\nframework achieves great training efficiency and performance compared with the\nstate-of-the-art methods in multiple robotic manipulation tasks.", + "authors": "Jin Li, Xianyuan Zhan, Zixu Xiao, Guyue Zhou", + "published": "2021-10-21", + "updated": "2021-10-21", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.09712v1", + "title": "Semi-Offline Reinforcement Learning for Optimized Text Generation", + "abstract": "In reinforcement learning (RL), there are two major settings for interacting\nwith the environment: online and offline. 
Online methods explore the\nenvironment at significant time cost, and offline methods efficiently obtain\nreward signals by sacrificing exploration capability. We propose semi-offline\nRL, a novel paradigm that smoothly transits from offline to online settings,\nbalances exploration capability and training cost, and provides a theoretical\nfoundation for comparing different RL settings. Based on the semi-offline\nformulation, we present the RL setting that is optimal in terms of optimization\ncost, asymptotic error, and overfitting error bound. Extensive experiments show\nthat our semi-offline approach is efficient and yields comparable or often\nbetter performance compared with state-of-the-art methods.", + "authors": "Changyu Chen, Xiting Wang, Yiqiao Jin, Victor Ye Dong, Li Dong, Jie Cao, Yi Liu, Rui Yan", + "published": "2023-06-16", + "updated": "2023-06-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2303.04268v1", + "title": "On the Sample Complexity of Vanilla Model-Based Offline Reinforcement Learning with Dependent Samples", + "abstract": "Offline reinforcement learning (offline RL) considers problems where learning\nis performed using only previously collected samples and is helpful for the\nsettings in which collecting new data is costly or risky. In model-based\noffline RL, the learner performs estimation (or optimization) using a model\nconstructed according to the empirical transition frequencies. We analyze the\nsample complexity of vanilla model-based offline RL with dependent samples in\nthe infinite-horizon discounted-reward setting. In our setting, the samples\nobey the dynamics of the Markov decision process and, consequently, may have\ninterdependencies. Under no assumption of independent samples, we provide a\nhigh-probability, polynomial sample complexity bound for vanilla model-based\noff-policy evaluation that requires partial or uniform coverage. We extend this\nresult to the off-policy optimization under uniform coverage. As a comparison\nto the model-based approach, we analyze the sample complexity of off-policy\nevaluation with vanilla importance sampling in the infinite-horizon setting.\nFinally, we provide an estimator that outperforms the sample-mean estimator for\nalmost deterministic dynamics that are prevalent in reinforcement learning.", + "authors": "Mustafa O. Karabag, Ufuk Topcu", + "published": "2023-03-07", + "updated": "2023-03-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2005.05951v3", + "title": "MOReL : Model-Based Offline Reinforcement Learning", + "abstract": "In offline reinforcement learning (RL), the goal is to learn a highly\nrewarding policy based solely on a dataset of historical interactions with the\nenvironment. The ability to train RL policies offline can greatly expand the\napplicability of RL, its data efficiency, and its experimental velocity. Prior\nwork in offline RL has been confined almost exclusively to model-free RL\napproaches. In this work, we present MOReL, an algorithmic framework for\nmodel-based offline RL. This framework consists of two steps: (a) learning a\npessimistic MDP (P-MDP) using the offline dataset; and (b) learning a\nnear-optimal policy in this P-MDP. 
The learned P-MDP has the property that for\nany policy, the performance in the real environment is approximately\nlower-bounded by the performance in the P-MDP. This enables it to serve as a\ngood surrogate for purposes of policy evaluation and learning, and overcome\ncommon pitfalls of model-based RL like model exploitation. Theoretically, we\nshow that MOReL is minimax optimal (up to log factors) for offline RL. Through\nexperiments, we show that MOReL matches or exceeds state-of-the-art results in\nwidely studied offline RL benchmarks. Moreover, the modular design of MOReL\nenables future advances in its components (e.g. generative modeling,\nuncertainty estimation, planning etc.) to directly translate into advances for\noffline RL.", + "authors": "Rahul Kidambi, Aravind Rajeswaran, Praneeth Netrapalli, Thorsten Joachims", + "published": "2020-05-12", + "updated": "2021-03-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2310.18434v1", + "title": "Bridging Distributionally Robust Learning and Offline RL: An Approach to Mitigate Distribution Shift and Partial Data Coverage", + "abstract": "The goal of an offline reinforcement learning (RL) algorithm is to learn\noptimal polices using historical (offline) data, without access to the\nenvironment for online exploration. One of the main challenges in offline RL is\nthe distribution shift which refers to the difference between the state-action\nvisitation distribution of the data generating policy and the learning policy.\nMany recent works have used the idea of pessimism for developing offline RL\nalgorithms and characterizing their sample complexity under a relatively weak\nassumption of single policy concentrability. Different from the offline RL\nliterature, the area of distributionally robust learning (DRL) offers a\nprincipled framework that uses a minimax formulation to tackle model mismatch\nbetween training and testing environments. In this work, we aim to bridge these\ntwo areas by showing that the DRL approach can be used to tackle the\ndistributional shift problem in offline RL. In particular, we propose two\noffline RL algorithms using the DRL framework, for the tabular and linear\nfunction approximation settings, and characterize their sample complexity under\nthe single policy concentrability assumption. We also demonstrate the superior\nperformance our proposed algorithm through simulation experiments.", + "authors": "Kishan Panaganti, Zaiyan Xu, Dileep Kalathil, Mohammad Ghavamzadeh", + "published": "2023-10-27", + "updated": "2023-10-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2106.06860v2", + "title": "A Minimalist Approach to Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) defines the task of learning from a fixed\nbatch of data. Due to errors in value estimation from out-of-distribution\nactions, most offline RL algorithms take the approach of constraining or\nregularizing the policy with the actions contained in the dataset. Built on\npre-existing RL algorithms, modifications to make an RL algorithm work offline\ncomes at the cost of additional complexity. Offline RL algorithms introduce new\nhyperparameters and often leverage secondary components such as generative\nmodels, while adjusting the underlying RL algorithm. 
In this paper we aim to\nmake a deep RL algorithm work while making minimal changes. We find that we can\nmatch the performance of state-of-the-art offline RL algorithms by simply\nadding a behavior cloning term to the policy update of an online RL algorithm\nand normalizing the data. The resulting algorithm is a simple to implement and\ntune baseline, while more than halving the overall run time by removing the\nadditional computational overhead of previous methods.", + "authors": "Scott Fujimoto, Shixiang Shane Gu", + "published": "2021-06-12", + "updated": "2021-12-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2201.05433v1", + "title": "Comparing Model-free and Model-based Algorithms for Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) Algorithms are often designed with\nenvironments such as MuJoCo in mind, in which the planning horizon is extremely\nlong and no noise exists. We compare model-free, model-based, as well as hybrid\noffline RL approaches on various industrial benchmark (IB) datasets to test the\nalgorithms in settings closer to real world problems, including complex noise\nand partially observable states. We find that on the IB, hybrid approaches face\nsevere difficulties and that simpler algorithms, such as rollout based\nalgorithms or model-free algorithms with simpler regularizers perform best on\nthe datasets.", + "authors": "Phillip Swazinna, Steffen Udluft, Daniel Hein, Thomas Runkler", + "published": "2022-01-14", + "updated": "2022-01-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2211.04974v2", + "title": "Leveraging Offline Data in Online Reinforcement Learning", + "abstract": "Two central paradigms have emerged in the reinforcement learning (RL)\ncommunity: online RL and offline RL. In the online RL setting, the agent has no\nprior knowledge of the environment, and must interact with it in order to find\nan $\\epsilon$-optimal policy. In the offline RL setting, the learner instead\nhas access to a fixed dataset to learn from, but is unable to otherwise\ninteract with the environment, and must obtain the best policy it can from this\noffline data. Practical scenarios often motivate an intermediate setting: if we\nhave some set of offline data and, in addition, may also interact with the\nenvironment, how can we best use the offline data to minimize the number of\nonline interactions necessary to learn an $\\epsilon$-optimal policy?\n In this work, we consider this setting, which we call the \\textsf{FineTuneRL}\nsetting, for MDPs with linear structure. We characterize the necessary number\nof online samples needed in this setting given access to some offline dataset,\nand develop an algorithm, \\textsc{FTPedel}, which is provably optimal, up to\n$H$ factors. We show through an explicit example that combining offline data\nwith online interactions can lead to a provable improvement over either purely\noffline or purely online RL. 
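The "minimal changes" recipe summarized in the "A Minimalist Approach to Offline Reinforcement Learning" abstract above (a behavior-cloning term added to the policy update of an online actor-critic, plus normalization of the offline data) can be sketched in a few lines. This is an illustrative sketch only, not the authors' released code; the actor/critic interfaces, the alpha weighting, and the normalization constant are assumptions.

```python
import torch


def normalize_dataset(states, eps=1e-3):
    # "Normalizing the data": zero-mean, unit-variance state features.
    mean = states.mean(dim=0, keepdim=True)
    std = states.std(dim=0, keepdim=True) + eps
    return (states - mean) / std, mean, std


def bc_regularized_actor_loss(actor, critic, states, actions, alpha=2.5):
    """Actor loss = -lambda * Q(s, pi(s)) + MSE(pi(s), a): the usual deterministic
    policy-gradient term plus a behavior-cloning term toward dataset actions."""
    pi = actor(states)
    q = critic(states, pi)
    lam = alpha / q.abs().mean().detach()  # scale the RL term relative to Q magnitude
    return -lam * q.mean() + torch.nn.functional.mse_loss(pi, actions)
```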
Finally, our results illustrate the distinction\nbetween \\emph{verifiable} learning, the typical setting considered in online\nRL, and \\emph{unverifiable} learning, the setting often considered in offline\nRL, and show that there is a formal separation between these regimes.", + "authors": "Andrew Wagenmaker, Aldo Pacchiano", + "published": "2022-11-09", + "updated": "2023-07-20", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2202.11566v1", + "title": "Pessimistic Bootstrapping for Uncertainty-Driven Offline Reinforcement Learning", + "abstract": "Offline Reinforcement Learning (RL) aims to learn policies from previously\ncollected datasets without exploring the environment. Directly applying\noff-policy algorithms to offline RL usually fails due to the extrapolation\nerror caused by the out-of-distribution (OOD) actions. Previous methods tackle\nsuch problem by penalizing the Q-values of OOD actions or constraining the\ntrained policy to be close to the behavior policy. Nevertheless, such methods\ntypically prevent the generalization of value functions beyond the offline data\nand also lack precise characterization of OOD data. In this paper, we propose\nPessimistic Bootstrapping for offline RL (PBRL), a purely uncertainty-driven\noffline algorithm without explicit policy constraints. Specifically, PBRL\nconducts uncertainty quantification via the disagreement of bootstrapped\nQ-functions, and performs pessimistic updates by penalizing the value function\nbased on the estimated uncertainty. To tackle the extrapolating error, we\nfurther propose a novel OOD sampling method. We show that such OOD sampling and\npessimistic bootstrapping yields provable uncertainty quantifier in linear\nMDPs, thus providing the theoretical underpinning for PBRL. Extensive\nexperiments on D4RL benchmark show that PBRL has better performance compared to\nthe state-of-the-art algorithms.", + "authors": "Chenjia Bai, Lingxiao Wang, Zhuoran Yang, Zhihong Deng, Animesh Garg, Peng Liu, Zhaoran Wang", + "published": "2022-02-23", + "updated": "2022-02-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2205.09550v1", + "title": "Data Valuation for Offline Reinforcement Learning", + "abstract": "The success of deep reinforcement learning (DRL) hinges on the availability\nof training data, which is typically obtained via a large number of environment\ninteractions. In many real-world scenarios, costs and risks are associated with\ngathering these data. The field of offline reinforcement learning addresses\nthese issues through outsourcing the collection of data to a domain expert or a\ncarefully monitored program and subsequently searching for a batch-constrained\noptimal policy. With the emergence of data markets, an alternative to\nconstructing a dataset in-house is to purchase external data. However, while\nstate-of-the-art offline reinforcement learning approaches have shown a lot of\npromise, they currently rely on carefully constructed datasets that are well\naligned with the intended target domains. This raises questions regarding the\ntransferability and robustness of an offline reinforcement learning agent\ntrained on externally acquired data. 
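For the pessimistic-bootstrapping idea described in the PBRL abstract above (uncertainty from the disagreement of bootstrapped Q-functions, used to penalize the value estimate), a minimal sketch of a penalized TD target could look as follows. The ensemble interface, the beta coefficient, and the mean-minus-std aggregation are assumptions; the actual algorithm additionally involves OOD sampling and per-network targets.

```python
import torch


@torch.no_grad()
def pessimistic_td_target(target_q_ensemble, rewards, next_states, next_actions,
                          gamma=0.99, beta=1.0):
    """Bootstrapped target penalized by the ensemble's disagreement (standard
    deviation), so poorly covered state-action pairs get lower value estimates."""
    qs = torch.stack([q(next_states, next_actions) for q in target_q_ensemble], dim=0)
    uncertainty = qs.std(dim=0)  # disagreement of the bootstrapped Q-functions
    return rewards + gamma * (qs.mean(dim=0) - beta * uncertainty)
```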
In this paper, we empirically evaluate the\nability of the current state-of-the-art offline reinforcement learning\napproaches to coping with the source-target domain mismatch within two MuJoCo\nenvironments, finding that current state-of-the-art offline reinforcement\nlearning algorithms underperform in the target domain. To address this, we\npropose data valuation for offline reinforcement learning (DVORL), which allows\nus to identify relevant and high-quality transitions, improving the performance\nand transferability of policies learned by offline reinforcement learning\nalgorithms. The results show that our method outperforms offline reinforcement\nlearning baselines on two MuJoCo environments.", + "authors": "Amir Abolfazli, Gregory Palmer, Daniel Kudenko", + "published": "2022-05-19", + "updated": "2022-05-19", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2310.11731v1", + "title": "Action-Quantized Offline Reinforcement Learning for Robotic Skill Learning", + "abstract": "The offline reinforcement learning (RL) paradigm provides a general recipe to\nconvert static behavior datasets into policies that can perform better than the\npolicy that collected the data. While policy constraints, conservatism, and\nother methods for mitigating distributional shifts have made offline\nreinforcement learning more effective, the continuous action setting often\nnecessitates various approximations for applying these techniques. Many of\nthese challenges are greatly alleviated in discrete action settings, where\noffline RL constraints and regularizers can often be computed more precisely or\neven exactly. In this paper, we propose an adaptive scheme for action\nquantization. We use a VQ-VAE to learn state-conditioned action quantization,\navoiding the exponential blowup that comes with na\\\"ive discretization of the\naction space. We show that several state-of-the-art offline RL methods such as\nIQL, CQL, and BRAC improve in performance on benchmarks when combined with our\nproposed discretization scheme. We further validate our approach on a set of\nchallenging long-horizon complex robotic manipulation tasks in the Robomimic\nenvironment, where our discretized offline RL algorithms are able to improve\nupon their continuous counterparts by 2-3x. Our project page is at\nhttps://saqrl.github.io/", + "authors": "Jianlan Luo, Perry Dong, Jeffrey Wu, Aviral Kumar, Xinyang Geng, Sergey Levine", + "published": "2023-10-18", + "updated": "2023-10-18", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2304.07920v2", + "title": "Causal Decision Transformer for Recommender Systems via Offline Reinforcement Learning", + "abstract": "Reinforcement learning-based recommender systems have recently gained\npopularity. However, the design of the reward function, on which the agent\nrelies to optimize its recommendation policy, is often not straightforward.\nExploring the causality underlying users' behavior can take the place of the\nreward function in guiding the agent to capture the dynamic interests of users.\nMoreover, due to the typical limitations of simulation environments (e.g., data\ninefficiency), most of the work cannot be broadly applied in large-scale\nsituations. Although some works attempt to convert the offline dataset into a\nsimulator, data inefficiency makes the learning process even slower. 
Because of\nthe nature of reinforcement learning (i.e., learning by interaction), it cannot\ncollect enough data to train during a single interaction. Furthermore,\ntraditional reinforcement learning algorithms do not have a solid capability\nlike supervised learning methods to learn from offline datasets directly. In\nthis paper, we propose a new model named the causal decision transformer for\nrecommender systems (CDT4Rec). CDT4Rec is an offline reinforcement learning\nsystem that can learn from a dataset rather than from online interaction.\nMoreover, CDT4Rec employs the transformer architecture, which is capable of\nprocessing large offline datasets and capturing both short-term and long-term\ndependencies within the data to estimate the causal relationship between\naction, state, and reward. To demonstrate the feasibility and superiority of\nour model, we have conducted experiments on six real-world offline datasets and\none online simulator.", + "authors": "Siyu Wang, Xiaocong Chen, Dietmar Jannach, Lina Yao", + "published": "2023-04-17", + "updated": "2023-08-27", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2307.15690v1", + "title": "Benchmarking Offline Reinforcement Learning on Real-Robot Hardware", + "abstract": "Learning policies from previously recorded data is a promising direction for\nreal-world robotics tasks, as online learning is often infeasible. Dexterous\nmanipulation in particular remains an open problem in its general form. The\ncombination of offline reinforcement learning with large diverse datasets,\nhowever, has the potential to lead to a breakthrough in this challenging domain\nanalogously to the rapid progress made in supervised learning in recent years.\nTo coordinate the efforts of the research community toward tackling this\nproblem, we propose a benchmark including: i) a large collection of data for\noffline learning from a dexterous manipulation platform on two tasks, obtained\nwith capable RL agents trained in simulation; ii) the option to execute learned\npolicies on a real-world robotic system and a simulation for efficient\ndebugging. We evaluate prominent open-sourced offline reinforcement learning\nalgorithms on the datasets and provide a reproducible experimental setup for\noffline reinforcement learning on real systems.", + "authors": "Nico G\u00fcrtler, Sebastian Blaes, Pavel Kolev, Felix Widmaier, Manuel W\u00fcthrich, Stefan Bauer, Bernhard Sch\u00f6lkopf, Georg Martius", + "published": "2023-07-28", + "updated": "2023-07-28", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.02829v3", + "title": "RORL: Robust Offline Reinforcement Learning via Conservative Smoothing", + "abstract": "Offline reinforcement learning (RL) provides a promising direction to exploit\nmassive amount of offline data for complex decision-making tasks. Due to the\ndistribution shift issue, current offline RL algorithms are generally designed\nto be conservative in value estimation and action selection. However, such\nconservatism can impair the robustness of learned policies when encountering\nobservation deviation under realistic conditions, such as sensor errors and\nadversarial attacks. To trade off robustness and conservatism, we propose\nRobust Offline Reinforcement Learning (RORL) with a novel conservative\nsmoothing technique. 
In RORL, we explicitly introduce regularization on the\npolicy and the value function for states near the dataset, as well as\nadditional conservative value estimation on these states. Theoretically, we\nshow RORL enjoys a tighter suboptimality bound than recent theoretical results\nin linear MDPs. We demonstrate that RORL can achieve state-of-the-art\nperformance on the general offline RL benchmark and is considerably robust to\nadversarial observation perturbations.", + "authors": "Rui Yang, Chenjia Bai, Xiaoteng Ma, Zhaoran Wang, Chongjie Zhang, Lei Han", + "published": "2022-06-06", + "updated": "2022-10-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.06734v1", + "title": "Corruption Robust Offline Reinforcement Learning with Human Feedback", + "abstract": "We study data corruption robustness for reinforcement learning with human\nfeedback (RLHF) in an offline setting. Given an offline dataset of pairs of\ntrajectories along with feedback about human preferences, an\n$\\varepsilon$-fraction of the pairs is corrupted (e.g., feedback flipped or\ntrajectory features manipulated), capturing an adversarial attack or noisy\nhuman preferences. We aim to design algorithms that identify a near-optimal\npolicy from the corrupted data, with provable guarantees. Existing theoretical\nworks have separately studied the settings of corruption robust RL (learning\nfrom scalar rewards directly under corruption) and offline RLHF (learning from\nhuman feedback without corruption); however, they are inapplicable to our\nproblem of dealing with corrupted data in offline RLHF setting. To this end, we\ndesign novel corruption robust offline RLHF methods under various assumptions\non the coverage of the data-generating distributions. At a high level, our\nmethodology robustifies an offline RLHF framework by first learning a reward\nmodel along with confidence sets and then learning a pessimistic optimal policy\nover the confidence set. Our key insight is that learning optimal policy can be\ndone by leveraging an offline corruption-robust RL oracle in different ways\n(e.g., zero-order oracle or first-order oracle), depending on the data coverage\nassumptions. To our knowledge, ours is the first work that provides provable\ncorruption robust offline RLHF methods.", + "authors": "Debmalya Mandal, Andi Nika, Parameswaran Kamalaruban, Adish Singla, Goran Radanovi\u0107", + "published": "2024-02-09", + "updated": "2024-02-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + } + ], + [ + { + "url": "http://arxiv.org/abs/2403.16788v1", + "title": "HPL-ESS: Hybrid Pseudo-Labeling for Unsupervised Event-based Semantic Segmentation", + "abstract": "Event-based semantic segmentation has gained popularity due to its capability\nto deal with scenarios under high-speed motion and extreme lighting conditions,\nwhich cannot be addressed by conventional RGB cameras. Since it is hard to\nannotate event data, previous approaches rely on event-to-image reconstruction\nto obtain pseudo labels for training. However, this will inevitably introduce\nnoise, and learning from noisy pseudo labels, especially when generated from a\nsingle source, may reinforce the errors. This drawback is also called\nconfirmation bias in pseudo-labeling. 
In this paper, we propose a novel hybrid\npseudo-labeling framework for unsupervised event-based semantic segmentation,\nHPL-ESS, to alleviate the influence of noisy pseudo labels. In particular, we\nfirst employ a plain unsupervised domain adaptation framework as our baseline,\nwhich can generate a set of pseudo labels through self-training. Then, we\nincorporate offline event-to-image reconstruction into the framework, and\nobtain another set of pseudo labels by predicting segmentation maps on the\nreconstructed images. A noisy label learning strategy is designed to mix the\ntwo sets of pseudo labels and enhance the quality. Moreover, we propose a soft\nprototypical alignment module to further improve the consistency of target\ndomain features. Extensive experiments show that our proposed method\noutperforms existing state-of-the-art methods by a large margin on the\nDSEC-Semantic dataset (+5.88% accuracy, +10.32% mIoU), which even surpasses\nseveral supervised methods.", + "authors": "Linglin Jing, Yiming Ding, Yunpeng Gao, Zhigang Wang, Xu Yan, Dong Wang, Gerald Schaefer, Hui Fang, Bin Zhao, Xuelong Li", + "published": "2024-03-25", + "updated": "2024-03-25", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Offline AND Reinforcement AND Learning", + "gt": "2.1. Event-based Semantic Segmentation Using deep learning, [1] first introduces event cameras to the semantic segmentation task, with an architecture based on an encoder-decoder CNN, pre-trained on the well-known urban environment Cityscapes dataset [6]. An open dataset, DDD17, containing annotated DAVIS driving records for this task is released in [3]. [10] enables the use of existing video datasets by transforming them into synthetic event data, facilitating the training of networks designed for real event data. Despite its capacity to leverage an unlimited number of video datasets, challenges persist due to the sim-to-real gap in many simulated scenarios. [32] employs two student networks for knowledge distillation from the image to the event domain. However, the method heavily depends on per-pixel paired events and active pixel sensor (APS) frames. Consequently, in scenarios where APS frames are unavailable, the application of such a knowledge distillation approach becomes significantly restricted. [31] substitutes the active pixel sensor modality with grayscale images generated by E2VID [25], transferring the segmentation task from the event domain to the image domain. Recently, ESS [27] addresses event-based semantic segmentation by introducing the DSEC-Semantic dataset, which relies on paired high-resolution images and events, thus providing high-quality semantic labels for event streams. ESS also introduces an event-to-image-based UDA method to transfer knowledge from the source image domain to the target event domain. 2.2. Unsupervised Domain Adaptation Unsupervised domain adaptation (UDA) approaches can be divided into two key methodologies: domain adversarial learning and self-training. Domain adversarial learning focuses on aligning feature distributions across domains [9] but does not inherently ensure the discriminative power of target features [18]. In contrast, self-training capitalizes on a model\u2019s high-confidence predictions to bolster the performance within the target domain. 
This approach significantly alleviates the domain shift issue by iteratively aligning the feature distribution of the target domain to match that of the source domain, which proves to be particularly effective in scenarios where obtaining labels for the target domain is challenging. In this context, strategies such as leveraging domain-invariant features [5, 12, 17], pseudolabeling [36, 38], intermediate domains [16, 21, 30], and consistency regularisation [13] have been used. We consider that under a similar task and scenario, event data can also be drawn close to RGB images semantically through the application of UDA methods. Figure 2. Overview of the HPL-ESS architecture. During training, we introduce offline event-to-image reconstruction as input to our framework. To avoid overfitting noise, we use only a small proportion (5%) of the reconstructions. The network is trained by hybrid pseudo labels from reconstruction and self-prediction. Additionally, a soft prototypical alignment (SPA) module is designed to enhance the consistency of target domain features. In the inference phase, only events are used as input.", + "pre_questions": [], + "main_content": "Introduction Event cameras are bio-inspired vision sensors that respond to changes in pixel intensity, generating a stream of asynchronous events characterized by exceptionally high temporal resolution. Figure 1. Comparison on the DSEC-Semantic dataset. Our method outperforms other UDA works by a large margin and even surpasses fully supervised methods. This technology enables the capture of dynamic scenes, providing features of high dynamic range (HDR) and reduced motion blur. Event cameras have been extensively applied in various applications, including object recognition [15, 23], SLAM [8], and autonomous driving systems [19], effectively addressing challenges such as motion blur and overexposure. However, event data significantly differ from images, making it difficult to annotate in dense pixel prediction tasks such as semantic segmentation. Previous works [1, 31, 32] require per-pixel paired events and images, and then leverage pre-trained networks on images to generate labels for event data. Although a more precisely paired and sharper image would naturally yield improved results, these methods increase the demands on capture devices. Other methods rely on event-to-image conversion to get rid of the need for ground-truth labels. E2VID [25] is an event-to-image (ETI) reconstruction method to transform events into images, while VID2E [10] employs image-to-event (ITE) in reverse. Based on the above methods, a feasible strategy is to generate pseudo labels from converted images for event data. ESS [27] further employs unsupervised domain adaptation (UDA) to transfer knowledge from labeled image data (source domain) to unlabeled event data (target domain) through the bridge of event-to-image reconstruction.
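The reconstruction-based pseudo-labeling strategy described above (E2VID-style event-to-image reconstruction followed by an image-domain segmenter) boils down to a short pipeline. The sketch below is only illustrative; `reconstruct_image` and `image_segmenter` are hypothetical stand-ins for an E2VID model and a Cityscapes-pretrained segmentation network, not real APIs.

```python
import torch


@torch.no_grad()
def pseudo_labels_from_events(event_voxels, reconstruct_image, image_segmenter):
    """Reconstruct an image from an event representation, predict a segmentation
    map on it, and use the argmax as a pseudo label for the original events."""
    recon = reconstruct_image(event_voxels)   # events -> grayscale frame
    logits = image_segmenter(recon)           # image-domain segmentation logits
    return logits.argmax(dim=1)               # per-pixel pseudo label
```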
Despite improvements, reconstruction-based methods suffer from the limitation that, due to the lack of texture information in event data, the reconstructed images usually have large fuzzy regions, inevitably introducing noise into the generated pseudo labels. Training on noisy pseudo labels has the risk of reinforcing the errors, especially when they are obtained from a single source, a problem that is known as confirmation bias [2] in pseudo-labeling. To alleviate the bias of single-source pseudo labels, in this paper, we propose HPL-ESS, a hybrid pseudo-labeling framework for unsupervised event-based semantic segmentation. Our method is built upon a modified UDA framework, which executes self-training on the mixture of unpaired images and event data. The framework has the ability to generate a set of pseudo labels by directly predicting the event data. Simultaneously, we introduce offline event-to-image reconstruction into the framework, which generates another set of pseudo labels by predicting the reconstructed images. Through training on these hybrid pseudo labels, the network can progressively improve its ability to directly predict more accurate labels for event data. To gradually mitigate the impact of low-quality reconstructed images during training, we approach this challenge as a noisy label learning (NLL) problem. In this context, we distinguish between noisy data (reconstructed images) and clean data (original events). Then, we introduce a noisy-label adaptation process to further refine pseudo labels at each iteration. In addition, due to the large domain gap between image and event, the network is prone to produce dispersed features in the target domain [10]. To counteract this issue, we also design a soft prototypical alignment (SPA) module to learn the intrinsic structure of the target domain and address the dispersion of target features. As illustrated in Figure 1, the proposed method is very effective, outperforming other state-of-the-art UDA approaches by a large margin and even surpassing several fully supervised methods. In summary, our contributions in this paper are: \u2022 We propose a hybrid pseudo-labeling framework for unsupervised event-based semantic segmentation. This framework gets rid of event-to-image pairs and is robust to noisy pseudo labels. \u2022 We design a soft prototypical alignment (SPA) module to enforce the network to generate consistent event features under the same class, forming a more compact feature space in the target domain. \u2022 Extensive experiments on two benchmark datasets demonstrate that our method outperforms previous state-of-the-art methods by a large margin. As illustrated in Figure 2, the proposed HPL-ESS framework incorporates self-training UDA techniques, described in Section 3.2, and employs offline event-to-image reconstruction to generate hybrid pseudo labels, covered in Section 3.3. To gradually mitigate the impact of low-quality and blurred areas in offline-reconstructed images, we introduce a noisy label learning (NLL) method to refine pseudo labels. We further propose a soft prototypical alignment (SPA) module to explore the intrinsic structure of event data, alleviating the impact of feature divergence as detailed in Section 3.4. 3.1.
Definitions and Problem Formulation In a UDA framework for event-based semantic segmentation, a neural network F is usually trained on a labeled source dataset S = \{I_i, Y_i\}_{i=1}^{M} and transferred to an unlabeled target dataset T = \{E_i\}_{i=1}^{N}. Specifically, the source domain S consists of images I_i \in R^{H \times W} and their corresponding labels Y_i \in R^{H \times W}. In contrast, the target domain T consists of numerous continuous and asynchronous event streams E_i, without access to the target labels V_i. Each event stream E_i can be represented as a series of tuples \{(x_j, y_j, t_j, p_j)\}, where j denotes the sample index, x and y denote the spatial co-ordinates, t represents the timestamp, and p indicates the binary polarity (positive or negative) of brightness changes occurring between two timestamps. Due to the high temporal resolution of E_i, we subsample E_i into a sequence of voxel grid representations [37], where each voxel grid is constructed from non-overlapping temporal windows with a fixed number of events. These are then effectively superimposed to form a static frame. 3.2. UDA Framework Overview We modify DaFormer [14] as the backbone and baseline for our event-based semantic segmentation UDA method. The framework is composed of two networks: a teacher network F_\phi and a student network F_\psi. Other modules in DaFormer are eliminated to ensure the simplicity and efficiency of our method. To facilitate knowledge transfer from the source domain to the target domain, the modified baseline is trained using the mixed data of labeled images and unlabeled events. To be specific, in our work, the student network F_\psi first conducts a warm-up by being trained with the supervised loss on the source image domain, L_s(F_\psi \mid S) = \frac{1}{|S|} \sum_{i=1}^{|S|} H(F_\psi(I_i), Y_i), (1) where H denotes the cross-entropy function. Correspondingly, the parameters of the teacher network are updated as an exponential moving average (EMA) [29] of the student model to maintain stability. After warm-up, the framework follows a self-training strategy, where the teacher network directly predicts the event data to generate pseudo labels for the training of the student model. This process is repeated until the networks have converged. In addition, augmentation methods, such as jitter and ClassMix [22], are used on both events and images to improve the method\u2019s applicability across domains. Although self-training UDA is usually an effective technique, it is challenging to obtain satisfactory results due to the large domain gap between images and events. Furthermore, it suffers from the aforementioned single-source noisy pseudo labels. 3.3. Hybrid Pseudo-Labeling To address the above issues, we adopt the E2VID [26] method to reconstruct the event streams into simulated images, which are then incorporated into our framework as an intermediate domain to narrow down the gap between the source image domain and the target event domain. The reconstructed images also provide another set of pseudo labels to alleviate the bias of single-source pseudo labels. In particular, we randomly sample the event dataset T = \{E_i\}_{i=1}^{N} to create two groups, T_l = \{E_i^l\}_{i=1}^{a} and T_u = \{E_i^u\}_{i=1}^{b}, where a + b = N. Event streams E_i^l are reconstructed into simulated images I_i^l as I_i^l = \mathrm{E2VID}(E_i^l). (2) Now, the inputs to the student network encompass source images I_i, unlabeled events E_i^u, unlabeled events E_i^l, and the corresponding reconstructed images I_i^l.
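A minimal sketch of two ingredients just described, assuming PyTorch tensors: the voxel-grid event representation of Sec. 3.1 (events binned into non-overlapping temporal windows) and the EMA update that keeps the teacher F_\phi a slowly moving average of the student F_\psi (Sec. 3.2). The bin layout, polarity encoding, and decay value are assumptions rather than the authors' exact implementation.

```python
import torch


def events_to_voxel_grid(events, num_bins, height, width):
    """events: (N, 4) tensor of (x, y, t, p) with p in {0, 1}.
    Returns a (num_bins, H, W) grid accumulating signed polarities per temporal bin."""
    events = events.float()
    x, y = events[:, 0].long(), events[:, 1].long()
    t, p = events[:, 2], events[:, 3]
    t = (t - t.min()) / (t.max() - t.min() + 1e-9)         # normalize timestamps to [0, 1]
    b = (t * (num_bins - 1)).long()                         # temporal bin index per event
    grid = torch.zeros(num_bins, height, width)
    grid.index_put_((b, y, x), 2.0 * p - 1.0, accumulate=True)  # map {0,1} -> {-1,+1}
    return grid


@torch.no_grad()
def ema_update(teacher, student, decay=0.999):
    # Teacher parameters track an exponential moving average of the student's.
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(decay).add_(ps, alpha=1.0 - decay)
```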
Notably, we do not reconstruct all event streams into simulated images to avoid the network overfitting these noisy data. As illustrated in Figure 2, the student network F_\psi takes the reconstructed image I_i^l as input and generates the predicted probability map. This map is then utilized as F_\psi(I_i^l), the pseudo-ground-truth for E_i^l. Simultaneously, similar to the self-training backbone, event data E_i^u and E_i^l are input to the teacher network F_\phi to obtain the direct pseudo labels F_\phi(E_i^u) and F_\phi(E_i^l). For E_i^u, the student network F_\psi is trained with the supervised loss L_u calculated as L_u(F_\psi \mid T_u) = \frac{1}{|T_u|} \sum_{i=1}^{|T_u|} H(F_\psi(E_i^u), F_\phi(E_i^u)). (3) For E_i^l, F_\phi(E_i^l) together with the event pseudo-ground-truth F_\psi(I_i^l) constitutes the hybrid pseudo labels. The event-to-image reconstruction process suffers from limited interpretability and a lack of control, leading to low-quality reconstructed images I_i^l, e.g., incorrect content and blurred areas. Predicting semantic segmentation maps on these images and viewing them as pseudo labels will inevitably introduce significant noise. Directly using them during training may result in sub-optimal performance. Therefore, we treat this as a noisy label learning problem and explicitly regard F_\psi(I_i^l) as a noisy label of the events E_i^l. Inspired by [35], we employ a label correction strategy based on self-prediction to mitigate the noise issue. This strategy adapts the noisy distribution from the pseudo-ground-truth F_\psi(I_i^l) to the view of the event distribution. Specifically, for each E_i^l, we construct the refined pseudo label \hat{V}_i^l by combining F_\psi(I_i^l) and F_\phi(E_i^l) as \hat{V}_i^l = (1 - \alpha)F_\psi(I_i^l) + \alpha F_\phi(E_i^l), (4) with ratio \alpha. Then, the modified loss L_l for E_i^l is L_l(F_\psi \mid T_l) = \frac{1}{|T_l|} \sum_{i=1}^{|T_l|} H(F_\psi(E_i^l), \hat{V}_i^l). (5) During training, the teacher network progressively generates more accurate F_\phi(E_i^l), gradually weakening the impact of F_\psi(I_i^l). 3.4. Soft Prototypical Alignment Figure 3. The concept of our SPA module on source domain, reconstructed images, and events. By employing man-made paired E_i^l and I_i^l to bridge the image domain and event domain, we aim to enhance the alignment between the source and target. However, using F_\psi(I_i^l) as a pseudo-label may still not be able to solve the distribution misalignment because of the obvious differences in the distributions of I_i^l and E_i^l. Inspired by [36], we propose a soft prototypical alignment (SPA) module to explicitly align the distributions for our problem. As illustrated in Figure 3, we employ the mean value F_\psi(I_i) of each class on source images as prototypes \eta and aim to align the prototype distance between F_\psi(I_i^l) and F_\psi(E_i^l). The distance between F_\psi(I_i^l) and \eta is calculated as Z_i^{(I)} = \frac{\exp(-\|F_\psi(I_i^l) - \eta\| / \tau)}{\sum \exp(-\|F_\psi(I_i^l) - \eta\| / \tau)}, (6) where \tau is the temperature coefficient. Similarly, the distance between F_\psi(E_i^l) and \eta is calculated as Z_i^{(E)} = \frac{\exp(-\|F_\psi(E_i^l) - \eta\| / \tau)}{\sum \exp(-\|F_\psi(E_i^l) - \eta\| / \tau)}. (7)
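The hybrid pseudo-label losses of Eqs. (3)-(5) can be read as two soft cross-entropy terms: unlabeled events E^u are supervised by the teacher's own event predictions, while events E^l with reconstructions use the mixture \hat{V}^l of the (stop-gradient) student image-branch prediction and the teacher event prediction. The sketch below is one possible reading under the assumption that all networks output per-pixel logits; it is not the released implementation.

```python
import torch
import torch.nn.functional as F


def soft_cross_entropy(logits, target_probs):
    # H(target, prediction) with a soft per-pixel target distribution.
    return -(target_probs * F.log_softmax(logits, dim=1)).sum(dim=1).mean()


def hybrid_pseudo_label_losses(student, teacher, events_u, events_l,
                               recon_images, alpha=0.5):
    with torch.no_grad():
        t_u = F.softmax(teacher(events_u), dim=1)      # F_phi(E^u)
        t_l = F.softmax(teacher(events_l), dim=1)      # F_phi(E^l)
    s_img = F.softmax(student(recon_images), dim=1)    # F_psi(I^l), image-branch prediction
    v_hat = (1 - alpha) * s_img.detach() + alpha * t_l  # Eq. (4): refined pseudo label
    loss_u = soft_cross_entropy(student(events_u), t_u)  # Eq. (3)
    loss_l = soft_cross_entropy(student(events_l), v_hat)  # Eq. (5)
    return loss_u, loss_l
```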
We use the Jensen-Shannon (JS) divergence [7] instead of the KL divergence used in [24] for distribution alignment due to the symmetry of the JS divergence. This ensures an equal pulling effect on the distributions of F_\psi(I_i^l) and F_\psi(E_i^l). The JS divergence loss is calculated as L_{JS}^{S} = \mathrm{JS}(Z_i^{(I)} \| Z_i^{(E)}), (8) and compels the network to generate consistent event features and image features for E_i^l and I_i^l under the same class. Additionally, E_i^l and E_i^u are not trained with the same set of pseudo labels, making the target distributions of E_i^l and E_i^u more likely to be dispersed. In such a scenario, the network fails to rectify the labels of target data located at the far end of the class cluster. Considering that the distributions of F_\psi(E_i^l) and F_\psi(E_i^u) belong to the same scene, their distributions are expected to exhibit similar relative distances. To achieve this, we further employ the mean value of each class in F_\psi(I_i^l) as a prototype. We then bring in the relative distances of F_\psi(E_i^l) and F_\psi(E_i^u) to F_\psi(I_i^l), respectively. Employing a methodology akin to Eqns. (6), (7), and (8), we obtain L_{JS}^{I}, which forms a more compact feature space in the target domain. The overall loss in our framework is defined as L = L_s + L_u + L_l + \omega(L_{JS}^{S} + L_{JS}^{I}), (9) where \omega denotes a hyper-parameter. 4. Experiments 4.1. Dataset As target data, we evaluate the proposed framework on two event-based semantic segmentation datasets, namely DSEC-Semantic [11] and DDD17 [3]. These driving-focussed datasets were captured using automotive-grade event cameras, encompassing a diverse range of urban and rural settings. The DDD17 dataset comprises per-pixel paired events and frames captured by DAVIS event cameras with a resolution of 346 \u00d7 260. In [1], semantic labels were generated using pre-trained segmentation networks based on DAVIS images, resulting in 15,950 samples for training and 3,890 for testing. Due to the low resolution, several categories in DDD17 have been merged into six classes, namely flat (road and pavement), background (construction and sky), object, vegetation, human, and vehicle. DSEC-Semantic, a recently introduced dataset for event-based semantic segmentation, extends the comprehensive DSEC dataset [11]. It includes 53 driving sequences captured by an event camera at a resolution of 640 \u00d7 480. [27] used a state-of-the-art image-based segmentation method [28] to generate segmentation labels. This process yields 8,082 labeled training samples and 2,809 testing samples, distributed across 11 classes: sky, building, fence, person, road, pole, sidewalk, vegetation, vehicle, wall, and traffic sign. As source data, we use the CityScapes street scene dataset [6], which includes 2,975 training and 500 validation images with a resolution of 2048 \u00d7 1024. Following common practice in UDA methods, we resize the CityScapes images to 1024 \u00d7 512 pixels. 4.2. Implementation Details In our experiments, we employ DaFormer [14] as our UDA backbone. The encoder in DaFormer uses an MiT-B5 model [34] and is pre-trained on ImageNet-1K. Across all experiments, the batch size is consistently set to 4. We use the AdamW optimizer with a weight decay of 1 \u00d7 10^{-4}. The learning rate is set to 6 \u00d7 10^{-5} and we use a learning rate warm-up for 1,500 iterations, with a linear increase in the learning rate during this period.
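The soft prototypical alignment of Eqs. (6)-(8) reduces to softmax-normalized distances to class prototypes compared under a (symmetric) JS divergence, combined into the overall objective of Eq. (9). The sketch below assumes prototypes and features are given as per-class mean vectors; prototype bookkeeping, masking, and the second alignment term L_{JS}^{I} are simplified.

```python
import torch
import torch.nn.functional as F


def prototype_distribution(features, prototypes, tau=1.0):
    # Eq. (6)/(7): softmax over negative distances to all class prototypes.
    d = torch.cdist(features, prototypes)   # (N, C) pairwise distances
    return F.softmax(-d / tau, dim=1)


def js_divergence(p, q, eps=1e-8):
    m = 0.5 * (p + q)
    kl = lambda a, b: (a * (a.clamp_min(eps) / b.clamp_min(eps)).log()).sum(dim=1)
    return 0.5 * (kl(p, m) + kl(q, m)).mean()


def spa_loss(img_feats, event_feats, prototypes, tau=1.0):
    z_img = prototype_distribution(img_feats, prototypes, tau)    # Z^(I)
    z_evt = prototype_distribution(event_feats, prototypes, tau)  # Z^(E)
    return js_divergence(z_img, z_evt)                            # Eq. (8)


def total_loss(l_s, l_u, l_l, l_js_s, l_js_i, omega=0.5):
    # Eq. (9): L = L_s + L_u + L_l + omega * (L_JS^S + L_JS^I)
    return l_s + l_u + l_l + omega * (l_js_s + l_js_i)
```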
4. Experiments

4.1. Dataset

As target data, we evaluate the proposed framework on two event-based semantic segmentation datasets, namely DSEC-Semantic [11] and DDD17 [3]. These driving-focused datasets were captured using automotive-grade event cameras, encompassing a diverse range of urban and rural settings. The DDD17 dataset comprises per-pixel paired events and frames captured by DAVIS event cameras with a resolution of 346 \u00d7 260. In [1], semantic labels were generated using pre-trained segmentation networks based on the DAVIS images, resulting in 15,950 samples for training and 3,890 for testing. Due to the low resolution, several categories in DDD17 have been merged into six classes, namely flat (road and pavement), background (construction and sky), object, vegetation, human, and vehicle. DSEC-Semantic, a recently introduced dataset for event-based semantic segmentation, extends the comprehensive DSEC dataset [11]. It includes 53 driving sequences captured by an event camera at a resolution of 640 \u00d7 480. [27] used a state-of-the-art image-based segmentation method [28] to generate segmentation labels. This process yields 8,082 labeled training samples and 2,809 testing samples, distributed across 11 classes: sky, building, fence, person, road, pole, sidewalk, vegetation, vehicle, wall, and traffic sign. As source data, we use the CityScapes street scene dataset [6], which includes 2,975 training and 500 validation images with a resolution of 2048 \u00d7 1024. Following common practice in UDA methods, we resize the CityScapes images to 1024 \u00d7 512 pixels.

4.2. Implementation Details

In our experiments, we employ DaFormer [14] as our UDA backbone. The encoder in DaFormer uses an MiT-B5 model [34] and is pre-trained on ImageNet-1K. Across all experiments, the batch size is consistently set to 4. We use the AdamW optimizer with a weight decay of $1 \times 10^{-4}$. The learning rate is set to $6 \times 10^{-5}$, with a linear warm-up over the first 1,500 iterations. We additionally warm up for 5,000 iterations on the source dataset so that the network acquires an initial semantic segmentation ability. $\alpha$ in Eq. (4) and $\omega$ in Eq. (9) are both set to 0.5. For data augmentation in both the source and target domains, we follow [14, 33] and employ techniques such as color jitter, Gaussian blur, and ClassMix [22]. These augmentations help the model learn more robust features across domains. In the event-to-image simulation process, Spade E2VID [25] is employed as our emulator for reconstruction. This step occurs solely in the offline phase, ensuring that it does not impact the efficiency of our online training and testing process. It is worth noting that E2VID progressively produces expanding black artifacts if fed with discontinuous event inputs. To mitigate this problem, we reinitialize the E2VID network each time an image is reconstructed, preventing such artifacts. Regarding the event pre-processing on the DDD17 dataset, events are converted into 20 voxel grids, with each grid containing 32,000 events. For the DSEC-Semantic dataset, due to its higher resolution, the number of voxel grids is increased to 40, and each grid comprises 100,000 events.
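As a reference for the fixed-count event representation described above, the following sketch shows one common way to split an event stream into per-sample grids; the polarity handling and the absence of finer temporal binning inside each grid are our assumptions, and the function name events_to_voxel_grids is illustrative rather than the pipeline actually used.

```python
import numpy as np

def events_to_voxel_grids(events, num_grids, events_per_grid, height, width):
    """Split an event stream into consecutive fixed-count windows and
    accumulate each window into a per-pixel signed event-count grid.
    `events` is an (N, 4) array of (x, y, t, polarity) rows; the output
    has shape (num_grids, height, width)."""
    grids = np.zeros((num_grids, height, width), dtype=np.float32)
    for g in range(num_grids):
        window = events[g * events_per_grid:(g + 1) * events_per_grid]
        if window.shape[0] == 0:
            break
        x = window[:, 0].astype(np.int64)
        y = window[:, 1].astype(np.int64)
        pol = np.where(window[:, 3] > 0, 1.0, -1.0).astype(np.float32)
        np.add.at(grids[g], (y, x), pol)  # accumulate signed counts per pixel
    return grids

# Settings reported above: DDD17 -> num_grids=20, events_per_grid=32_000 at 346x260;
# DSEC-Semantic -> num_grids=40, events_per_grid=100_000 at 640x480.
```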
4.3. Comparison with State-of-the-Art

We compare our method with previous relevant approaches and use top-1 accuracy and the mean intersection over union (mIoU) as the common semantic segmentation evaluation metrics. Beyond UDA methods, certain approaches have embraced a fully supervised setting to tackle the challenges. EV-SegNet [1] presents the first baseline for event-based semantic segmentation; it employs an encoder-decoder architecture and takes only events for fully supervised learning. HALISE [4] encodes event frames and source images into a spike stream, representing information in a binarised manner, and aligns the feature distributions of these spike streams. EV-Transfer [20] fabricates the motion of a still image to generate event streams, and then uses source labels and the corresponding synthetic events for training. E2VID [25] converts events in the DSEC-Semantic dataset into reconstructed images and then predicts semantic segmentation maps using other pre-trained models; E2VID can only perform direct transfer as there is no event label for training. VID2E [10] converts source video frames into synthetic events and trains on the source labels. ESS [27] employs the above E2VID-based process to generate pseudo-labels and attempts to transfer knowledge from the source image domain to the target event domain via the UDA technique. While methods employing supervised learning may achieve superior results compared to traditional UDA approaches, their reliance on labels significantly elevates the demands of dataset collection.

Table 1. Performance and necessary number of events on the DSEC-Semantic dataset in both UDA and fully supervised learning settings.
Type | Method | No. of Events | Accuracy [%] | mIoU [%]
Supervised | EV-SegNet [1] | - | 88.61 | 51.76
Supervised | HALISE [4] | - | 89.01 | 52.43
Supervised | ESS [27] | 2E6 | 89.37 | 53.29
UDA | EV-Transfer [20] | 2E6 | 60.50 | 23.20
UDA | E2VID [25] | 2E6 | 76.67 | 40.70
UDA | ESS [27] | 2E6 | 84.04 | 44.87
UDA | Ours | 1.8E5 (\u219391.0%) | 89.92 (+5.88%) | 55.19 (+10.32%)

DSEC-Semantic dataset. We employ the CityScapes dataset as the labeled source dataset and the DSEC-Semantic dataset as the unlabeled target dataset. This dataset poses additional challenges due to its more fine-grained categories compared to the DDD17 dataset. We report the results for all methods in Table 1. Our method demonstrates a significant improvement, outperforming the previous state-of-the-art UDA work ESS by 5.87% and 9.65% in terms of accuracy and mIoU, respectively. Notably, our UDA-based method even surpasses the performance of fully supervised approaches by 0.55% in terms of accuracy and 1.9% in terms of mIoU. Since this is a highly imbalanced dataset, the gain in mIoU is more representative for the segmentation task. In addition, by utilizing the E2VID reconstruction solely offline, our approach avoids any dependency on the recurrent networks in E2VID during both training and inference, significantly reducing the required input events from 2E6 to 1E5 (a 95% reduction). These enhancements demonstrate the effectiveness and computational efficiency of our proposed method. Some example results are visualized in Figure 5.
Figure 5. Visualization results on the DSEC-Semantic dataset. From left to right: event frame, event-to-image reconstruction, the maps predicted by E2VID, ESS, and our proposed HPL-ESS, and ground truth.
The background of the reconstructed image exhibits fuzzy regions and low resolution, which inevitably poses significant challenges to semantic segmentation networks. For instance, due to the lack of texture information in events, the reconstructed sky category appears very similar to the building category in terms of contrast and edge information, leading to potential misinterpretation in the model's predictions (as indicated by the red arrow). The proposed hybrid pseudo-labeling method effectively mitigates these interference factors in reconstructed images, resulting in improved performance.

DDD17 dataset. Table 2 reports the UDA results on the DDD17 dataset for event-based semantic segmentation. Similar to the DSEC-Semantic setting, only labeled images from CityScapes and unlabeled events from DDD17 are available in this task.

Table 2. Performance comparison of HPL-ESS with state-of-the-art methods on the DDD17 dataset in the UDA setting. Only source labels are available.
Method | Accuracy [%] | mIoU [%]
EV-Transfer [20] | 47.37 | 14.91
E2VID [25] | 83.24 | 44.77
VID2E [10] | 85.93 | 45.48
ESS [27] | 87.86 | 52.46
Ours | 88.65 (+0.79%) | 53.51 (+1.05%)

Table 2 shows that our method achieves consistently optimal results, outperforming the previous state-of-the-art work by 1.05% (mIoU) and 0.79% (accuracy), respectively. Since the event ground truth in DDD17 is derived from the low-quality paired images, its reliability is significantly affected, especially concerning texture details, as also mentioned in [27]. As Figure 4 illustrates, our predictions even surpass the ground truth in object details.
Figure 4. Example results on the DDD17 dataset (event frame, ours, ground truth). The DDD17 ground truth lacks details for some objects.
Taking the first row in Figure 4 as an example, in the yellow box, our network better separates the details of the trees and streetlights, which are missing in the DDD17 ground truth. Similarly, in the second row, our method segments the trees more correctly, while the DDD17 ground truth misclassifies trees as sky. This discrepancy could potentially lower our measured performance during evaluation. Due to the higher resolution and quality of the DSEC-Semantic dataset, we opted for this dataset to evaluate our method and the comparison works.
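For completeness, the two reported metrics can be computed from a confusion matrix as in the generic sketch below; this is standard evaluation code rather than the authors' implementation, and the ignore_index convention is an assumption.

```python
import numpy as np

def segmentation_metrics(pred, gt, num_classes, ignore_index=255):
    """Top-1 pixel accuracy and mean IoU from integer label maps of equal shape."""
    pred = np.asarray(pred).astype(np.int64).ravel()
    gt = np.asarray(gt).astype(np.int64).ravel()
    valid = gt != ignore_index
    pred, gt = pred[valid], gt[valid]

    # Confusion matrix: rows index ground truth, columns index predictions.
    cm = np.bincount(gt * num_classes + pred, minlength=num_classes ** 2)
    cm = cm.reshape(num_classes, num_classes).astype(np.float64)

    accuracy = np.diag(cm).sum() / cm.sum()
    union = cm.sum(axis=0) + cm.sum(axis=1) - np.diag(cm)
    iou = np.diag(cm) / np.maximum(union, 1.0)  # absent classes contribute zero here
    return accuracy, iou.mean()
```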
4.4. Comprehensive Analysis

Since DSEC-Semantic is the higher-quality dataset, all ablation experiments are conducted on DSEC-Semantic.

Design analysis of our framework. We conduct several ablation studies to assess the effectiveness of the proposed framework. As depicted in Table 3, (a) directly applying the UDA baseline alone does not yield satisfactory results, likely due to the substantial domain gap between the image and event domains. Similarly, (b) training directly on the event-to-image (ETI) reconstructed images from E2VID also results in unsatisfactory performance. This verifies the aforementioned discussion, namely that event-to-image-based methods suffer from the noise introduced by the reconstructed images. Both (a) and (b) highlight the unreliability of relying solely on single-source pseudo labels and emphasize the necessity of hybrid label learning. An intriguing observation is that (c) employing the source data to pre-train the network for a certain number of iterations, i.e., using a warmup phase, significantly enhances the performance. In (d), our method builds on the source-domain warm-up and, as described in Section 3, introduces the E2VID reconstructed images on top of the UDA backbone to provide the hybrid pseudo labels for events, leading to considerable performance gains. We further validate the effectiveness of the proposed NLL strategy and SPA module. As shown in Table 3, NLL reduces the noise of the pseudo labels on reconstructed images through iterative label refinement, making them more adaptive to the event domain and resulting in a performance gain. SPA prioritizes the divergence of various features on the target domain and aligns the labeled and unlabeled events with the source-domain prototypes, contributing to enhanced evaluation performance. Ultimately, introducing these two modules simultaneously in our framework leads to the optimal performance.

Table 3. Ablation study on the DSEC-Semantic dataset.
Method | Baseline | ETI | Warmup | NLL | SPA | mIoU [%]
(a) | \u2713 | | | | | 36.76
(b) | | \u2713 | | | | 40.70
(c) | \u2713 | | \u2713 | | | 44.87
(d) | \u2713 | \u2713 | \u2713 | | | 51.08
(e) | \u2713 | \u2713 | \u2713 | \u2713 | | 52.23
(f) | \u2713 | \u2713 | \u2713 | | \u2713 | 52.69
HPL-ESS | \u2713 | \u2713 | \u2713 | \u2713 | \u2713 | 55.19

Proportion of reconstructed event samples. In our framework, we do not transform all event data into reconstructed images, in order to avoid overfitting the reconstruction noise. In fact, as demonstrated in Table 4, the optimal result is achieved when using only 5% of the event data to generate the reconstructed images that serve as pseudo labels. Performance declines slightly as more reconstructed images are introduced; in particular, using 100% of the data results in an mIoU drop to 54.51%. The low dependence on the number of reconstructed images also underscores the computational efficiency of our method during training. Reducing the ratio below 5% leads to progressively worse performance, reaching its lowest point at a 0% ratio, which reverts to the plain UDA backbone.

Table 4. Ablation study on the proportion of event samples participating in the event-to-image reconstruction.
Proportion | Accuracy [%] | mIoU [%]
0% | 82.71 | 44.87
1% | 86.54 | 46.84
5% | 89.91 | 55.19
10% | 89.89 | 55.15
50% | 89.81 | 54.96
80% | 89.75 | 54.82
100% | 89.63 | 54.51

Online/Offline reconstruction pseudo labels. Offline event-to-image reconstruction enables us to directly predict on the reconstructed image with a pre-trained network and obtain pseudo labels for the event data, which we name reconstruction pseudo labels here. In this section, we compare the effect of fixing these reconstruction pseudo labels against iteratively re-predicting them with our network during training (Table 5).
Table 5. Ablation study on online and offline reconstruction pseudo labels.
Method | Accuracy [%] | mIoU [%]
Offline | 83.25 | 48.45
Online | 89.92 | 55.19

As shown in Table 5, the online re-prediction strategy markedly surpasses the offline fixing strategy, demonstrating that our method becomes more powerful during training and can predict increasingly accurate reconstruction pseudo labels for the event data.

5. Discussion and Limitation

Due to the imbalance present in the benchmark datasets, the accuracy of classes with insufficient samples, e.g., \u2019rider\u2019 and \u2019traffic light\u2019, is comparatively lower than that of some other classes, e.g., sky and road. These results are illustrated in the visualization examples in the Supplementary Materials. Although our approach yields significant improvements for the classes with a small number of samples compared to previous methods, we will consider further strategies to deal with the data imbalance issue in future work.

6. Conclusion

In this paper, we have proposed a novel hybrid pseudo-labeling framework, HPL-ESS, for unsupervised event-based semantic segmentation. HPL-ESS effectively alleviates the challenges posed by noisy pseudo labels, a common issue in this field. The proposed method uniquely incorporates self-training unsupervised domain adaptation and offline event-to-image reconstruction to generate high-quality hybrid pseudo labels. The introduction of a noisy label learning strategy further refines the pseudo labels gradually. Moreover, a soft prototypical alignment (SPA) module significantly enhances the consistency and reliability of the target features. The effectiveness of HPL-ESS is evidenced by its superior performance in extensive experiments, where it not only surpasses existing state-of-the-art UDA methods but also exceeds several supervised methods.

7. Acknowledgments

This work is supported by the Shanghai AI Laboratory, National Key R&D Program of China (2022ZD0160101), the National Natural Science Foundation of China (62376222), and the Young Elite Scientists Sponsorship Program by CAST (2023QNRC001). This work is supported by 111 Project (No. D23006)." + }, + { + "url": "http://arxiv.org/abs/1906.07165v1", + "title": "High Speed and High Dynamic Range Video with an Event Camera", + "abstract": "Event cameras are novel sensors that report brightness changes in the form of\na stream of asynchronous \"events\" instead of intensity frames. They offer\nsignificant advantages with respect to conventional cameras: high temporal\nresolution, high dynamic range, and no motion blur. While the stream of events\nencodes in principle the complete visual signal, the reconstruction of an\nintensity image from a stream of events is an ill-posed problem in practice.\nExisting reconstruction approaches are based on hand-crafted priors and strong\nassumptions about the imaging process as well as the statistics of natural\nimages. In this work we propose to learn to reconstruct intensity images from\nevent streams directly from data instead of relying on any hand-crafted priors.\nWe propose a novel recurrent network to reconstruct videos from a stream of\nevents, and train it on a large amount of simulated event data. During training\nwe propose to use a perceptual loss to encourage reconstructions to follow\nnatural image statistics. We further extend our approach to synthesize color\nimages from color event streams.
Our network surpasses state-of-the-art\nreconstruction methods by a large margin in terms of image quality (> 20%),\nwhile comfortably running in real-time. We show that the network is able to\nsynthesize high framerate videos (> 5,000 frames per second) of high-speed\nphenomena (e.g. a bullet hitting an object) and is able to provide high dynamic\nrange reconstructions in challenging lighting conditions. We also demonstrate\nthe effectiveness of our reconstructions as an intermediate representation for\nevent data. We show that off-the-shelf computer vision algorithms can be\napplied to our reconstructions for tasks such as object classification and\nvisual-inertial odometry and that this strategy consistently outperforms\nalgorithms that were specifically designed for event data.", + "authors": "Henri Rebecq, Ren\u00e9 Ranftl, Vladlen Koltun, Davide Scaramuzza", + "published": "2019-06-15", + "updated": "2019-06-15", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2204.08200v2", + "title": "Understanding Gradual Domain Adaptation: Improved Analysis, Optimal Path and Beyond", + "abstract": "The vast majority of existing algorithms for unsupervised domain adaptation\n(UDA) focus on adapting from a labeled source domain to an unlabeled target\ndomain directly in a one-off way. Gradual domain adaptation (GDA), on the other\nhand, assumes a path of $(T-1)$ unlabeled intermediate domains bridging the\nsource and target, and aims to provide better generalization in the target\ndomain by leveraging the intermediate ones. Under certain assumptions, Kumar et\nal. (2020) proposed a simple algorithm, Gradual Self-Training, along with a\ngeneralization bound in the order of $e^{O(T)}\n\\left(\\varepsilon_0+O\\left(\\sqrt{log(T)/n}\\right)\\right)$ for the target domain\nerror, where $\\varepsilon_0$ is the source domain error and $n$ is the data\nsize of each domain. Due to the exponential factor, this upper bound becomes\nvacuous when $T$ is only moderately large. In this work, we analyze gradual\nself-training under more general and relaxed assumptions, and prove a\nsignificantly improved generalization bound as $\\varepsilon_0+ O \\left(T\\Delta\n+ T/\\sqrt{n}\\right) + \\widetilde{O}\\left(1/\\sqrt{nT}\\right)$, where $\\Delta$ is\nthe average distributional distance between consecutive domains. Compared with\nthe existing bound with an exponential dependency on $T$ as a multiplicative\nfactor, our bound only depends on $T$ linearly and additively. Perhaps more\ninterestingly, our result implies the existence of an optimal choice of $T$\nthat minimizes the generalization error, and it also naturally suggests an\noptimal way to construct the path of intermediate domains so as to minimize the\naccumulative path length $T\\Delta$ between the source and target. To\ncorroborate the implications of our theory, we examine gradual self-training on\nmultiple semi-synthetic and real datasets, which confirms our findings. 
We\nbelieve our insights provide a path forward toward the design of future GDA\nalgorithms.", + "authors": "Haoxiang Wang, Bo Li, Han Zhao", + "published": "2022-04-18", + "updated": "2022-07-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2110.09410v1", + "title": "Exploiting Domain-Specific Features to Enhance Domain Generalization", + "abstract": "Domain Generalization (DG) aims to train a model, from multiple observed\nsource domains, in order to perform well on unseen target domains. To obtain\nthe generalization capability, prior DG approaches have focused on extracting\ndomain-invariant information across sources to generalize on target domains,\nwhile useful domain-specific information which strongly correlates with labels\nin individual domains and the generalization to target domains is usually\nignored. In this paper, we propose meta-Domain Specific-Domain Invariant\n(mDSDI) - a novel theoretically sound framework that extends beyond the\ninvariance view to further capture the usefulness of domain-specific\ninformation. Our key insight is to disentangle features in the latent space\nwhile jointly learning both domain-invariant and domain-specific features in a\nunified framework. The domain-specific representation is optimized through the\nmeta-learning framework to adapt from source domains, targeting a robust\ngeneralization on unseen domains. We empirically show that mDSDI provides\ncompetitive results with state-of-the-art techniques in DG. A further ablation\nstudy with our generated dataset, Background-Colored-MNIST, confirms the\nhypothesis that domain-specific is essential, leading to better results when\ncompared with only using domain-invariant.", + "authors": "Manh-Ha Bui, Toan Tran, Anh Tuan Tran, Dinh Phung", + "published": "2021-10-18", + "updated": "2021-10-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1604.01685v2", + "title": "The Cityscapes Dataset for Semantic Urban Scene Understanding", + "abstract": "Visual understanding of complex urban street scenes is an enabling factor for\na wide range of applications. Object detection has benefited enormously from\nlarge-scale datasets, especially in the context of deep learning. For semantic\nurban scene understanding, however, no current dataset adequately captures the\ncomplexity of real-world urban scenes.\n To address this, we introduce Cityscapes, a benchmark suite and large-scale\ndataset to train and test approaches for pixel-level and instance-level\nsemantic labeling. Cityscapes is comprised of a large, diverse set of stereo\nvideo sequences recorded in streets from 50 different cities. 5000 of these\nimages have high quality pixel-level annotations; 20000 additional images have\ncoarse annotations to enable methods that leverage large volumes of\nweakly-labeled data. Crucially, our effort exceeds previous attempts in terms\nof dataset size, annotation richness, scene variability, and complexity. 
Our\naccompanying empirical study provides an in-depth analysis of the dataset\ncharacteristics, as well as a performance evaluation of several\nstate-of-the-art approaches based on our benchmark.", + "authors": "Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, Bernt Schiele", + "published": "2016-04-06", + "updated": "2016-04-07", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2101.10979v2", + "title": "Prototypical Pseudo Label Denoising and Target Structure Learning for Domain Adaptive Semantic Segmentation", + "abstract": "Self-training is a competitive approach in domain adaptive segmentation,\nwhich trains the network with the pseudo labels on the target domain. However\ninevitably, the pseudo labels are noisy and the target features are dispersed\ndue to the discrepancy between source and target domains. In this paper, we\nrely on representative prototypes, the feature centroids of classes, to address\nthe two issues for unsupervised domain adaptation. In particular, we take one\nstep further and exploit the feature distances from prototypes that provide\nricher information than mere prototypes. Specifically, we use it to estimate\nthe likelihood of pseudo labels to facilitate online correction in the course\nof training. Meanwhile, we align the prototypical assignments based on relative\nfeature distances for two different views of the same target, producing a more\ncompact target feature space. Moreover, we find that distilling the already\nlearned knowledge to a self-supervised pretrained model further boosts the\nperformance. Our method shows tremendous performance advantage over\nstate-of-the-art methods. We will make the code publicly available.", + "authors": "Pan Zhang, Bo Zhang, Ting Zhang, Dong Chen, Yong Wang, Fang Wen", + "published": "2021-01-26", + "updated": "2021-01-28", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1811.12039v1", + "title": "EV-SegNet: Semantic Segmentation for Event-based Cameras", + "abstract": "Event cameras, or Dynamic Vision Sensor (DVS), are very promising sensors\nwhich have shown several advantages over frame based cameras. However, most\nrecent work on real applications of these cameras is focused on 3D\nreconstruction and 6-DOF camera tracking. Deep learning based approaches, which\nare leading the state-of-the-art in visual recognition tasks, could potentially\ntake advantage of the benefits of DVS, but some adaptations are needed still\nneeded in order to effectively work on these cameras. This work introduces a\nfirst baseline for semantic segmentation with this kind of data. We build a\nsemantic segmentation CNN based on state-of-the-art techniques which takes\nevent information as the only input. Besides, we propose a novel representation\nfor DVS data that outperforms previously used event representations for related\ntasks. Since there is no existing labeled dataset for this task, we propose how\nto automatically generate approximated semantic segmentation labels for some\nsequences of the DDD17 dataset, which we publish together with the model, and\ndemonstrate they are valid to train a model for DVS data only. 
We compare our\nresults on semantic segmentation from DVS data with results using corresponding\ngrayscale images, demonstrating how they are complementary and worth combining.", + "authors": "I\u00f1igo Alonso, Ana C. Murillo", + "published": "2018-11-29", + "updated": "2018-11-29", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2109.01801v3", + "title": "Dual Transfer Learning for Event-based End-task Prediction via Pluggable Event to Image Translation", + "abstract": "Event cameras are novel sensors that perceive the per-pixel intensity changes\nand output asynchronous event streams with high dynamic range and less motion\nblur. It has been shown that events alone can be used for end-task learning,\ne.g., semantic segmentation, based on encoder-decoder-like networks. However,\nas events are sparse and mostly reflect edge information, it is difficult to\nrecover original details merely relying on the decoder. Moreover, most methods\nresort to pixel-wise loss alone for supervision, which might be insufficient to\nfully exploit the visual details from sparse events, thus leading to less\noptimal performance. In this paper, we propose a simple yet flexible two-stream\nframework named Dual Transfer Learning (DTL) to effectively enhance the\nperformance on the end-tasks without adding extra inference cost. The proposed\napproach consists of three parts: event to end-task learning (EEL) branch,\nevent to image translation (EIT) branch, and transfer learning (TL) module that\nsimultaneously explores the feature-level affinity information and pixel-level\nknowledge from the EIT branch to improve the EEL branch. This simple yet novel\nmethod leads to strong representation learning from events and is evidenced by\nthe significant performance boost on the end-tasks such as semantic\nsegmentation and depth estimation.", + "authors": "Lin Wang, Yujeong Chae, Kuk-Jin Yoon", + "published": "2021-09-04", + "updated": "2021-11-24", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1810.07911v2", + "title": "Domain Adaptation for Semantic Segmentation via Class-Balanced Self-Training", + "abstract": "Recent deep networks achieved state of the art performance on a variety of\nsemantic segmentation tasks. Despite such progress, these models often face\nchallenges in real world `wild tasks' where large difference between labeled\ntraining/source data and unseen test/target data exists. In particular, such\ndifference is often referred to as `domain gap', and could cause significantly\ndecreased performance which cannot be easily remedied by further increasing the\nrepresentation power. Unsupervised domain adaptation (UDA) seeks to overcome\nsuch problem without target domain labels. In this paper, we propose a novel\nUDA framework based on an iterative self-training procedure, where the problem\nis formulated as latent variable loss minimization, and can be solved by\nalternatively generating pseudo labels on target data and re-training the model\nwith these labels. On top of self-training, we also propose a novel\nclass-balanced self-training framework to avoid the gradual dominance of large\nclasses on pseudo-label generation, and introduce spatial priors to refine\ngenerated labels. 
Comprehensive experiments show that the proposed methods\nachieve state of the art semantic segmentation performance under multiple major\nUDA settings.", + "authors": "Yang Zou, Zhiding Yu, B. V. K. Vijaya Kumar, Jinsong Wang", + "published": "2018-10-18", + "updated": "2018-10-25", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG", + "cs.MM" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2203.10016v2", + "title": "ESS: Learning Event-based Semantic Segmentation from Still Images", + "abstract": "Retrieving accurate semantic information in challenging high dynamic range\n(HDR) and high-speed conditions remains an open challenge for image-based\nalgorithms due to severe image degradations. Event cameras promise to address\nthese challenges since they feature a much higher dynamic range and are\nresilient to motion blur. Nonetheless, semantic segmentation with event cameras\nis still in its infancy which is chiefly due to the lack of high-quality,\nlabeled datasets. In this work, we introduce ESS (Event-based Semantic\nSegmentation), which tackles this problem by directly transferring the semantic\nsegmentation task from existing labeled image datasets to unlabeled events via\nunsupervised domain adaptation (UDA). Compared to existing UDA methods, our\napproach aligns recurrent, motion-invariant event embeddings with image\nembeddings. For this reason, our method neither requires video data nor\nper-pixel alignment between images and events and, crucially, does not need to\nhallucinate motion from still images. Additionally, we introduce DSEC-Semantic,\nthe first large-scale event-based dataset with fine-grained labels. We show\nthat using image labels alone, ESS outperforms existing UDA approaches, and\nwhen combined with event labels, it even outperforms state-of-the-art\nsupervised approaches on both DDD17 and DSEC-Semantic. Finally, ESS is\ngeneral-purpose, which unlocks the vast amount of existing labeled image\ndatasets and paves the way for new and exciting research directions in new\nfields previously inaccessible for event cameras.", + "authors": "Zhaoning Sun, Nico Messikommer, Daniel Gehrig, Davide Scaramuzza", + "published": "2022-03-18", + "updated": "2022-08-02", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2312.07378v1", + "title": "X4D-SceneFormer: Enhanced Scene Understanding on 4D Point Cloud Videos through Cross-modal Knowledge Transfer", + "abstract": "The field of 4D point cloud understanding is rapidly developing with the goal\nof analyzing dynamic 3D point cloud sequences. However, it remains a\nchallenging task due to the sparsity and lack of texture in point clouds.\nMoreover, the irregularity of point cloud poses a difficulty in aligning\ntemporal information within video sequences. To address these issues, we\npropose a novel cross-modal knowledge transfer framework, called\nX4D-SceneFormer. This framework enhances 4D-Scene understanding by transferring\ntexture priors from RGB sequences using a Transformer architecture with\ntemporal relationship mining. Specifically, the framework is designed with a\ndual-branch architecture, consisting of an 4D point cloud transformer and a\nGradient-aware Image Transformer (GIT). During training, we employ multiple\nknowledge transfer techniques, including temporal consistency losses and masked\nself-attention, to strengthen the knowledge transfer between modalities. 
This\nleads to enhanced performance during inference using single-modal 4D point\ncloud inputs. Extensive experiments demonstrate the superior performance of our\nframework on various 4D point cloud video understanding tasks, including action\nrecognition, action segmentation and semantic segmentation. The results achieve\n1st places, i.e., 85.3% (+7.9%) accuracy and 47.3% (+5.0%) mIoU for 4D action\nsegmentation and semantic segmentation, on the HOI4D\nchallenge\\footnote{\\url{http://www.hoi4d.top/}.}, outperforming previous\nstate-of-the-art by a large margin. We release the code at\nhttps://github.com/jinglinglingling/X4D", + "authors": "Linglin Jing, Ying Xue, Xu Yan, Chaoda Zheng, Dong Wang, Ruimao Zhang, Zhigang Wang, Hui Fang, Bin Zhao, Zhen Li", + "published": "2023-12-12", + "updated": "2023-12-12", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1912.03095v2", + "title": "Video to Events: Recycling Video Datasets for Event Cameras", + "abstract": "Event cameras are novel sensors that output brightness changes in the form of\na stream of asynchronous \"events\" instead of intensity frames. They offer\nsignificant advantages with respect to conventional cameras: high dynamic range\n(HDR), high temporal resolution, and no motion blur. Recently, novel learning\napproaches operating on event data have achieved impressive results. Yet, these\nmethods require a large amount of event data for training, which is hardly\navailable due the novelty of event sensors in computer vision research. In this\npaper, we present a method that addresses these needs by converting any\nexisting video dataset recorded with conventional cameras to synthetic event\ndata. This unlocks the use of a virtually unlimited number of existing video\ndatasets for training networks designed for real event data. We evaluate our\nmethod on two relevant vision tasks, i.e., object recognition and semantic\nsegmentation, and show that models trained on synthetic events have several\nbenefits: (i) they generalize well to real event data, even in scenarios where\nstandard-camera images are blurry or overexposed, by inheriting the outstanding\nproperties of event cameras; (ii) they can be used for fine-tuning on real data\nto improve over state-of-the-art for both classification and semantic\nsegmentation.", + "authors": "Daniel Gehrig, Mathias Gehrig, Javier Hidalgo-Carri\u00f3, Davide Scaramuzza", + "published": "2019-12-06", + "updated": "2020-04-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2111.12341v1", + "title": "EvDistill: Asynchronous Events to End-task Learning via Bidirectional Reconstruction-guided Cross-modal Knowledge Distillation", + "abstract": "Event cameras sense per-pixel intensity changes and produce asynchronous\nevent streams with high dynamic range and less motion blur, showing advantages\nover conventional cameras. A hurdle of training event-based models is the lack\nof large qualitative labeled data. Prior works learning end-tasks mostly rely\non labeled or pseudo-labeled datasets obtained from the active pixel sensor\n(APS) frames; however, such datasets' quality is far from rivaling those based\non the canonical images. 
In this paper, we propose a novel approach, called\n\\textbf{EvDistill}, to learn a student network on the unlabeled and unpaired\nevent data (target modality) via knowledge distillation (KD) from a teacher\nnetwork trained with large-scale, labeled image data (source modality). To\nenable KD across the unpaired modalities, we first propose a bidirectional\nmodality reconstruction (BMR) module to bridge both modalities and\nsimultaneously exploit them to distill knowledge via the crafted pairs, causing\nno extra computation in the inference. The BMR is improved by the end-tasks and\nKD losses in an end-to-end manner. Second, we leverage the structural\nsimilarities of both modalities and adapt the knowledge by matching their\ndistributions. Moreover, as most prior feature KD methods are uni-modality and\nless applicable to our problem, we propose to leverage an affinity graph KD\nloss to boost the distillation. Our extensive experiments on semantic\nsegmentation and object recognition demonstrate that EvDistill achieves\nsignificantly better results than the prior works and KD with only events and\nAPS frames.", + "authors": "Lin Wang, Yujeong Chae, Sung-Hoon Yoon, Tae-Kyun Kim, Kuk-Jin Yoon", + "published": "2021-11-24", + "updated": "2021-11-24", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2011.09230v2", + "title": "FixBi: Bridging Domain Spaces for Unsupervised Domain Adaptation", + "abstract": "Unsupervised domain adaptation (UDA) methods for learning domain invariant\nrepresentations have achieved remarkable progress. However, most of the studies\nwere based on direct adaptation from the source domain to the target domain and\nhave suffered from large domain discrepancies. In this paper, we propose a UDA\nmethod that effectively handles such large domain discrepancies. We introduce a\nfixed ratio-based mixup to augment multiple intermediate domains between the\nsource and target domain. From the augmented-domains, we train the\nsource-dominant model and the target-dominant model that have complementary\ncharacteristics. Using our confidence-based learning methodologies, e.g.,\nbidirectional matching with high-confidence predictions and self-penalization\nusing low-confidence predictions, the models can learn from each other or from\nits own results. Through our proposed methods, the models gradually transfer\ndomain knowledge from the source to the target domain. Extensive experiments\ndemonstrate the superiority of our proposed method on three public benchmarks:\nOffice-31, Office-Home, and VisDA-2017.", + "authors": "Jaemin Na, Heechul Jung, Hyung Jin Chang, Wonjun Hwang", + "published": "2020-11-18", + "updated": "2021-03-25", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1711.01458v1", + "title": "DDD17: End-To-End DAVIS Driving Dataset", + "abstract": "Event cameras, such as dynamic vision sensors (DVS), and dynamic and\nactive-pixel vision sensors (DAVIS) can supplement other autonomous driving\nsensors by providing a concurrent stream of standard active pixel sensor (APS)\nimages and DVS temporal contrast events. The APS stream is a sequence of\nstandard grayscale global-shutter image sensor frames. The DVS events represent\nbrightness changes occurring at a particular moment, with a jitter of about a\nmillisecond under most lighting conditions. 
They have a dynamic range of >120\ndB and effective frame rates >1 kHz at data rates comparable to 30 fps\n(frames/second) image sensors. To overcome some of the limitations of current\nimage acquisition technology, we investigate in this work the use of the\ncombined DVS and APS streams in end-to-end driving applications. The dataset\nDDD17 accompanying this paper is the first open dataset of annotated DAVIS\ndriving recordings. DDD17 has over 12 h of a 346x260 pixel DAVIS sensor\nrecording highway and city driving in daytime, evening, night, dry and wet\nweather conditions, along with vehicle speed, GPS position, driver steering,\nthrottle, and brake captured from the car's on-board diagnostics interface. As\nan example application, we performed a preliminary end-to-end learning study of\nusing a convolutional neural network that is trained to predict the\ninstantaneous steering angle from DVS and APS visual data.", + "authors": "Jonathan Binas, Daniel Neil, Shih-Chii Liu, Tobi Delbruck", + "published": "2017-11-04", + "updated": "2017-11-04", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2206.02000v1", + "title": "Hybrid Value Estimation for Off-policy Evaluation and Offline Reinforcement Learning", + "abstract": "Value function estimation is an indispensable subroutine in reinforcement\nlearning, which becomes more challenging in the offline setting. In this paper,\nwe propose Hybrid Value Estimation (HVE) to reduce value estimation error,\nwhich trades off bias and variance by balancing between the value estimation\nfrom offline data and the learned model. Theoretical analysis discloses that\nHVE enjoys a better error bound than the direct methods. HVE can be leveraged\nin both off-policy evaluation and offline reinforcement learning settings. We,\ntherefore, provide two concrete algorithms Off-policy HVE (OPHVE) and\nModel-based Offline HVE (MOHVE), respectively. Empirical evaluations on MuJoCo\ntasks corroborate the theoretical claim. OPHVE outperforms other off-policy\nevaluation methods in all three metrics measuring the estimation effectiveness,\nwhile MOHVE achieves better or comparable performance with state-of-the-art\noffline reinforcement learning algorithms. We hope that HVE could shed some\nlight on further research on reinforcement learning from fixed data.", + "authors": "Xue-Kun Jin, Xu-Hui Liu, Shengyi Jiang, Yang Yu", + "published": "2022-06-04", + "updated": "2022-06-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.10442v1", + "title": "Robust Task Representations for Offline Meta-Reinforcement Learning via Contrastive Learning", + "abstract": "We study offline meta-reinforcement learning, a practical reinforcement\nlearning paradigm that learns from offline data to adapt to new tasks. The\ndistribution of offline data is determined jointly by the behavior policy and\nthe task. Existing offline meta-reinforcement learning algorithms cannot\ndistinguish these factors, making task representations unstable to the change\nof behavior policies. To address this problem, we propose a contrastive\nlearning framework for task representations that are robust to the distribution\nmismatch of behavior policies in training and test. 
We design a bi-level\nencoder structure, use mutual information maximization to formalize task\nrepresentation learning, derive a contrastive learning objective, and introduce\nseveral approaches to approximate the true distribution of negative pairs.\nExperiments on a variety of offline meta-reinforcement learning benchmarks\ndemonstrate the advantages of our method over prior methods, especially on the\ngeneralization to out-of-distribution behavior policies. The code is available\nat https://github.com/PKU-AI-Edge/CORRO.", + "authors": "Haoqi Yuan, Zongqing Lu", + "published": "2022-06-21", + "updated": "2022-06-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2211.04974v2", + "title": "Leveraging Offline Data in Online Reinforcement Learning", + "abstract": "Two central paradigms have emerged in the reinforcement learning (RL)\ncommunity: online RL and offline RL. In the online RL setting, the agent has no\nprior knowledge of the environment, and must interact with it in order to find\nan $\\epsilon$-optimal policy. In the offline RL setting, the learner instead\nhas access to a fixed dataset to learn from, but is unable to otherwise\ninteract with the environment, and must obtain the best policy it can from this\noffline data. Practical scenarios often motivate an intermediate setting: if we\nhave some set of offline data and, in addition, may also interact with the\nenvironment, how can we best use the offline data to minimize the number of\nonline interactions necessary to learn an $\\epsilon$-optimal policy?\n In this work, we consider this setting, which we call the \\textsf{FineTuneRL}\nsetting, for MDPs with linear structure. We characterize the necessary number\nof online samples needed in this setting given access to some offline dataset,\nand develop an algorithm, \\textsc{FTPedel}, which is provably optimal, up to\n$H$ factors. We show through an explicit example that combining offline data\nwith online interactions can lead to a provable improvement over either purely\noffline or purely online RL. Finally, our results illustrate the distinction\nbetween \\emph{verifiable} learning, the typical setting considered in online\nRL, and \\emph{unverifiable} learning, the setting often considered in offline\nRL, and show that there is a formal separation between these regimes.", + "authors": "Andrew Wagenmaker, Aldo Pacchiano", + "published": "2022-11-09", + "updated": "2023-07-20", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.05440v1", + "title": "Dealing with the Unknown: Pessimistic Offline Reinforcement Learning", + "abstract": "Reinforcement Learning (RL) has been shown effective in domains where the\nagent can learn policies by actively interacting with its operating\nenvironment. However, if we change the RL scheme to offline setting where the\nagent can only update its policy via static datasets, one of the major issues\nin offline reinforcement learning emerges, i.e. distributional shift. We\npropose a Pessimistic Offline Reinforcement Learning (PessORL) algorithm to\nactively lead the agent back to the area where it is familiar by manipulating\nthe value function. 
We focus on problems caused by out-of-distribution (OOD)\nstates, and deliberately penalize high values at states that are absent in the\ntraining dataset, so that the learned pessimistic value function lower bounds\nthe true value anywhere within the state space. We evaluate the PessORL\nalgorithm on various benchmark tasks, where we show that our method gains\nbetter performance by explicitly handling OOD states, when compared to those\nmethods merely considering OOD actions.", + "authors": "Jinning Li, Chen Tang, Masayoshi Tomizuka, Wei Zhan", + "published": "2021-11-09", + "updated": "2021-11-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2302.03086v1", + "title": "DITTO: Offline Imitation Learning with World Models", + "abstract": "We propose DITTO, an offline imitation learning algorithm which uses world\nmodels and on-policy reinforcement learning to addresses the problem of\ncovariate shift, without access to an oracle or any additional online\ninteractions. We discuss how world models enable offline, on-policy imitation\nlearning, and propose a simple intrinsic reward defined in the world model\nlatent space that induces imitation learning by reinforcement learning.\nTheoretically, we show that our formulation induces a divergence bound between\nexpert and learner, in turn bounding the difference in reward. We test our\nmethod on difficult Atari environments from pixels alone, and achieve\nstate-of-the-art performance in the offline setting.", + "authors": "Branton DeMoss, Paul Duckworth, Nick Hawes, Ingmar Posner", + "published": "2023-02-06", + "updated": "2023-02-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.08900v1", + "title": "Offline Multi-Agent Reinforcement Learning with Coupled Value Factorization", + "abstract": "Offline reinforcement learning (RL) that learns policies from offline\ndatasets without environment interaction has received considerable attention in\nrecent years. Compared with the rich literature in the single-agent case,\noffline multi-agent RL is still a relatively underexplored area. Most existing\nmethods directly apply offline RL ingredients in the multi-agent setting\nwithout fully leveraging the decomposable problem structure, leading to less\nsatisfactory performance in complex tasks. We present OMAC, a new offline\nmulti-agent RL algorithm with coupled value factorization. OMAC adopts a\ncoupled value factorization scheme that decomposes the global value function\ninto local and shared components, and also maintains the credit assignment\nconsistency between the state-value and Q-value functions. Moreover, OMAC\nperforms in-sample learning on the decomposed local state-value functions,\nwhich implicitly conducts max-Q operation at the local level while avoiding\ndistributional shift caused by evaluating out-of-distribution actions. 
Based on\nthe comprehensive evaluations of the offline multi-agent StarCraft II\nmicro-management tasks, we demonstrate the superior performance of OMAC over\nthe state-of-the-art offline multi-agent RL methods.", + "authors": "Xiangsen Wang, Xianyuan Zhan", + "published": "2023-06-15", + "updated": "2023-06-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.MA" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2202.11566v1", + "title": "Pessimistic Bootstrapping for Uncertainty-Driven Offline Reinforcement Learning", + "abstract": "Offline Reinforcement Learning (RL) aims to learn policies from previously\ncollected datasets without exploring the environment. Directly applying\noff-policy algorithms to offline RL usually fails due to the extrapolation\nerror caused by the out-of-distribution (OOD) actions. Previous methods tackle\nsuch problem by penalizing the Q-values of OOD actions or constraining the\ntrained policy to be close to the behavior policy. Nevertheless, such methods\ntypically prevent the generalization of value functions beyond the offline data\nand also lack precise characterization of OOD data. In this paper, we propose\nPessimistic Bootstrapping for offline RL (PBRL), a purely uncertainty-driven\noffline algorithm without explicit policy constraints. Specifically, PBRL\nconducts uncertainty quantification via the disagreement of bootstrapped\nQ-functions, and performs pessimistic updates by penalizing the value function\nbased on the estimated uncertainty. To tackle the extrapolating error, we\nfurther propose a novel OOD sampling method. We show that such OOD sampling and\npessimistic bootstrapping yields provable uncertainty quantifier in linear\nMDPs, thus providing the theoretical underpinning for PBRL. Extensive\nexperiments on D4RL benchmark show that PBRL has better performance compared to\nthe state-of-the-art algorithms.", + "authors": "Chenjia Bai, Lingxiao Wang, Zhuoran Yang, Zhihong Deng, Animesh Garg, Peng Liu, Zhaoran Wang", + "published": "2022-02-23", + "updated": "2022-02-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.01474v1", + "title": "Offline Reinforcement Learning with Causal Structured World Models", + "abstract": "Model-based methods have recently shown promising for offline reinforcement\nlearning (RL), aiming to learn good policies from historical data without\ninteracting with the environment. Previous model-based offline RL methods learn\nfully connected nets as world-models that map the states and actions to the\nnext-step states. However, it is sensible that a world-model should adhere to\nthe underlying causal effect such that it will support learning an effective\npolicy generalizing well in unseen states. In this paper, We first provide\ntheoretical results that causal world-models can outperform plain world-models\nfor offline RL by incorporating the causal structure into the generalization\nerror bound. We then propose a practical algorithm, oFfline mOdel-based\nreinforcement learning with CaUsal Structure (FOCUS), to illustrate the\nfeasibility of learning and leveraging causal structure in offline RL.\nExperimental results on two benchmarks show that FOCUS reconstructs the\nunderlying causal structure accurately and robustly. 
Consequently, it performs\nbetter than the plain model-based offline RL algorithms and other causal\nmodel-based RL algorithms.", + "authors": "Zheng-Mao Zhu, Xiong-Hui Chen, Hong-Long Tian, Kun Zhang, Yang Yu", + "published": "2022-06-03", + "updated": "2022-06-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.06871v3", + "title": "Improving Offline-to-Online Reinforcement Learning with Q-Ensembles", + "abstract": "Offline reinforcement learning (RL) is a learning paradigm where an agent\nlearns from a fixed dataset of experience. However, learning solely from a\nstatic dataset can limit the performance due to the lack of exploration. To\novercome it, offline-to-online RL combines offline pre-training with online\nfine-tuning, which enables the agent to further refine its policy by\ninteracting with the environment in real-time. Despite its benefits, existing\noffline-to-online RL methods suffer from performance degradation and slow\nimprovement during the online phase. To tackle these challenges, we propose a\nnovel framework called Ensemble-based Offline-to-Online (E2O) RL. By increasing\nthe number of Q-networks, we seamlessly bridge offline pre-training and online\nfine-tuning without degrading performance. Moreover, to expedite online\nperformance enhancement, we appropriately loosen the pessimism of Q-value\nestimation and incorporate ensemble-based exploration mechanisms into our\nframework. Experimental results demonstrate that E2O can substantially improve\nthe training stability, learning efficiency, and final performance of existing\noffline RL methods during online fine-tuning on a range of locomotion and\nnavigation tasks, significantly outperforming existing offline-to-online RL\nmethods.", + "authors": "Kai Zhao, Yi Ma, Jianye Hao, Jinyi Liu, Yan Zheng, Zhaopeng Meng", + "published": "2023-06-12", + "updated": "2023-12-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2301.12876v2", + "title": "Guiding Online Reinforcement Learning with Action-Free Offline Pretraining", + "abstract": "Offline RL methods have been shown to reduce the need for environment\ninteraction by training agents using offline collected episodes. However, these\nmethods typically require action information to be logged during data\ncollection, which can be difficult or even impossible in some practical cases.\nIn this paper, we investigate the potential of using action-free offline\ndatasets to improve online reinforcement learning, name this problem\nReinforcement Learning with Action-Free Offline Pretraining (AFP-RL). We\nintroduce Action-Free Guide (AF-Guide), a method that guides online training by\nextracting knowledge from action-free offline datasets. AF-Guide consists of an\nAction-Free Decision Transformer (AFDT) implementing a variant of Upside-Down\nReinforcement Learning. It learns to plan the next states from the offline\ndataset, and a Guided Soft Actor-Critic (Guided SAC) that learns online with\nguidance from AFDT. Experimental results show that AF-Guide can improve sample\nefficiency and performance in online training thanks to the knowledge from the\naction-free offline dataset. 
Code is available at\nhttps://github.com/Vision-CAIR/AF-Guide.", + "authors": "Deyao Zhu, Yuhui Wang, J\u00fcrgen Schmidhuber, Mohamed Elhoseiny", + "published": "2023-01-30", + "updated": "2023-03-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2106.06860v2", + "title": "A Minimalist Approach to Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) defines the task of learning from a fixed\nbatch of data. Due to errors in value estimation from out-of-distribution\nactions, most offline RL algorithms take the approach of constraining or\nregularizing the policy with the actions contained in the dataset. Built on\npre-existing RL algorithms, modifications to make an RL algorithm work offline\ncomes at the cost of additional complexity. Offline RL algorithms introduce new\nhyperparameters and often leverage secondary components such as generative\nmodels, while adjusting the underlying RL algorithm. In this paper we aim to\nmake a deep RL algorithm work while making minimal changes. We find that we can\nmatch the performance of state-of-the-art offline RL algorithms by simply\nadding a behavior cloning term to the policy update of an online RL algorithm\nand normalizing the data. The resulting algorithm is a simple to implement and\ntune baseline, while more than halving the overall run time by removing the\nadditional computational overhead of previous methods.", + "authors": "Scott Fujimoto, Shixiang Shane Gu", + "published": "2021-06-12", + "updated": "2021-12-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.13464v3", + "title": "When to Trust Your Simulator: Dynamics-Aware Hybrid Offline-and-Online Reinforcement Learning", + "abstract": "Learning effective reinforcement learning (RL) policies to solve real-world\ncomplex tasks can be quite challenging without a high-fidelity simulation\nenvironment. In most cases, we are only given imperfect simulators with\nsimplified dynamics, which inevitably lead to severe sim-to-real gaps in RL\npolicy learning. The recently emerged field of offline RL provides another\npossibility to learn policies directly from pre-collected historical data.\nHowever, to achieve reasonable performance, existing offline RL algorithms need\nimpractically large offline data with sufficient state-action space coverage\nfor training. This brings up a new question: is it possible to combine learning\nfrom limited real data in offline RL and unrestricted exploration through\nimperfect simulators in online RL to address the drawbacks of both approaches?\nIn this study, we propose the Dynamics-Aware Hybrid Offline-and-Online\nReinforcement Learning (H2O) framework to provide an affirmative answer to this\nquestion. H2O introduces a dynamics-aware policy evaluation scheme, which\nadaptively penalizes the Q function learning on simulated state-action pairs\nwith large dynamics gaps, while also simultaneously allowing learning from a\nfixed real-world dataset. Through extensive simulation and real-world tasks, as\nwell as theoretical analysis, we demonstrate the superior performance of H2O\nagainst other cross-domain online and offline RL algorithms. 
H2O provides a\nbrand new hybrid offline-and-online RL paradigm, which can potentially shed\nlight on future RL algorithm design for solving practical real-world tasks.", + "authors": "Haoyi Niu, Shubham Sharma, Yiwen Qiu, Ming Li, Guyue Zhou, Jianming Hu, Xianyuan Zhan", + "published": "2022-06-27", + "updated": "2023-01-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.06734v1", + "title": "Corruption Robust Offline Reinforcement Learning with Human Feedback", + "abstract": "We study data corruption robustness for reinforcement learning with human\nfeedback (RLHF) in an offline setting. Given an offline dataset of pairs of\ntrajectories along with feedback about human preferences, an\n$\\varepsilon$-fraction of the pairs is corrupted (e.g., feedback flipped or\ntrajectory features manipulated), capturing an adversarial attack or noisy\nhuman preferences. We aim to design algorithms that identify a near-optimal\npolicy from the corrupted data, with provable guarantees. Existing theoretical\nworks have separately studied the settings of corruption robust RL (learning\nfrom scalar rewards directly under corruption) and offline RLHF (learning from\nhuman feedback without corruption); however, they are inapplicable to our\nproblem of dealing with corrupted data in offline RLHF setting. To this end, we\ndesign novel corruption robust offline RLHF methods under various assumptions\non the coverage of the data-generating distributions. At a high level, our\nmethodology robustifies an offline RLHF framework by first learning a reward\nmodel along with confidence sets and then learning a pessimistic optimal policy\nover the confidence set. Our key insight is that learning optimal policy can be\ndone by leveraging an offline corruption-robust RL oracle in different ways\n(e.g., zero-order oracle or first-order oracle), depending on the data coverage\nassumptions. To our knowledge, ours is the first work that provides provable\ncorruption robust offline RLHF methods.", + "authors": "Debmalya Mandal, Andi Nika, Parameswaran Kamalaruban, Adish Singla, Goran Radanovi\u0107", + "published": "2024-02-09", + "updated": "2024-02-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2102.05815v1", + "title": "Representation Matters: Offline Pretraining for Sequential Decision Making", + "abstract": "The recent success of supervised learning methods on ever larger offline\ndatasets has spurred interest in the reinforcement learning (RL) field to\ninvestigate whether the same paradigms can be translated to RL algorithms. This\nresearch area, known as offline RL, has largely focused on offline policy\noptimization, aiming to find a return-maximizing policy exclusively from\noffline data. In this paper, we consider a slightly different approach to\nincorporating offline data into sequential decision-making. We aim to answer\nthe question, what unsupervised objectives applied to offline datasets are able\nto learn state representations which elevate performance on downstream tasks,\nwhether those downstream tasks be online RL, imitation learning from expert\ndemonstrations, or even offline policy optimization based on the same offline\ndataset? 
Through a variety of experiments utilizing standard offline RL\ndatasets, we find that the use of pretraining with unsupervised learning\nobjectives can dramatically improve the performance of policy learning\nalgorithms that otherwise yield mediocre performance on their own. Extensive\nablations further provide insights into what components of these unsupervised\nobjectives -- e.g., reward prediction, continuous or discrete representations,\npretraining or finetuning -- are most important and in which settings.", + "authors": "Mengjiao Yang, Ofir Nachum", + "published": "2021-02-11", + "updated": "2021-02-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2303.17396v1", + "title": "Finetuning from Offline Reinforcement Learning: Challenges, Trade-offs and Practical Solutions", + "abstract": "Offline reinforcement learning (RL) allows for the training of competent\nagents from offline datasets without any interaction with the environment.\nOnline finetuning of such offline models can further improve performance. But\nhow should we ideally finetune agents obtained from offline RL training? While\noffline RL algorithms can in principle be used for finetuning, in practice,\ntheir online performance improves slowly. In contrast, we show that it is\npossible to use standard online off-policy algorithms for faster improvement.\nHowever, we find this approach may suffer from policy collapse, where the\npolicy undergoes severe performance deterioration during initial online\nlearning. We investigate the issue of policy collapse and how it relates to\ndata diversity, algorithm choices and online replay distribution. Based on\nthese insights, we propose a conservative policy optimization procedure that\ncan achieve stable and sample-efficient online learning from offline\npretraining.", + "authors": "Yicheng Luo, Jackie Kay, Edward Grefenstette, Marc Peter Deisenroth", + "published": "2023-03-30", + "updated": "2023-03-30", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.02439v2", + "title": "DiffStitch: Boosting Offline Reinforcement Learning with Diffusion-based Trajectory Stitching", + "abstract": "In offline reinforcement learning (RL), the performance of the learned policy\nhighly depends on the quality of offline datasets. However, in many cases, the\noffline dataset contains very limited optimal trajectories, which poses a\nchallenge for offline RL algorithms as agents must acquire the ability to\ntransit to high-reward regions. To address this issue, we introduce\nDiffusion-based Trajectory Stitching (DiffStitch), a novel diffusion-based data\naugmentation pipeline that systematically generates stitching transitions\nbetween trajectories. DiffStitch effectively connects low-reward trajectories\nwith high-reward trajectories, forming globally optimal trajectories to address\nthe challenges faced by offline RL algorithms. Empirical experiments conducted\non D4RL datasets demonstrate the effectiveness of DiffStitch across RL\nmethodologies. 
Notably, DiffStitch demonstrates substantial enhancements in the\nperformance of one-step methods (IQL), imitation learning methods (TD3+BC), and\ntrajectory optimization methods (DT).", + "authors": "Guanghe Li, Yixiang Shan, Zhengbang Zhu, Ting Long, Weinan Zhang", + "published": "2024-02-04", + "updated": "2024-02-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2307.02752v2", + "title": "Offline Reinforcement Learning with Imbalanced Datasets", + "abstract": "The prevalent use of benchmarks in current offline reinforcement learning\n(RL) research has led to a neglect of the imbalance of real-world dataset\ndistributions in the development of models. The real-world offline RL dataset\nis often imbalanced over the state space due to the challenge of exploration or\nsafety considerations. In this paper, we specify properties of imbalanced\ndatasets in offline RL, where the state coverage follows a power law\ndistribution characterized by skewed policies. Theoretically and empirically,\nwe show that typically offline RL methods based on distributional constraints,\nsuch as conservative Q-learning (CQL), are ineffective in extracting policies\nunder the imbalanced dataset. Inspired by natural intelligence, we propose a\nnovel offline RL method that utilizes the augmentation of CQL with a retrieval\nprocess to recall past related experiences, effectively alleviating the\nchallenges posed by imbalanced datasets. We evaluate our method on several\ntasks in the context of imbalanced datasets with varying levels of imbalance,\nutilizing the variant of D4RL. Empirical results demonstrate the superiority of\nour method over other baselines.", + "authors": "Li Jiang, Sijie Chen, Jielin Qiu, Haoran Xu, Wai Kin Chan, Zhao Ding", + "published": "2023-07-06", + "updated": "2023-07-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2303.17156v2", + "title": "MAHALO: Unifying Offline Reinforcement Learning and Imitation Learning from Observations", + "abstract": "We study a new paradigm for sequential decision making, called offline policy\nlearning from observations (PLfO). Offline PLfO aims to learn policies using\ndatasets with substandard qualities: 1) only a subset of trajectories is\nlabeled with rewards, 2) labeled trajectories may not contain actions, 3)\nlabeled trajectories may not be of high quality, and 4) the data may not have\nfull coverage. Such imperfection is common in real-world learning scenarios,\nand offline PLfO encompasses many existing offline learning setups, including\noffline imitation learning (IL), offline IL from observations (ILfO), and\noffline reinforcement learning (RL). In this work, we present a generic\napproach to offline PLfO, called $\\textbf{M}$odality-agnostic\n$\\textbf{A}$dversarial $\\textbf{H}$ypothesis $\\textbf{A}$daptation for\n$\\textbf{L}$earning from $\\textbf{O}$bservations (MAHALO). Built upon the\npessimism concept in offline RL, MAHALO optimizes the policy using a\nperformance lower bound that accounts for uncertainty due to the dataset's\ninsufficient coverage. We implement this idea by adversarially training\ndata-consistent critic and reward functions, which forces the learned policy to\nbe robust to data deficiency. 
We show that MAHALO consistently outperforms or\nmatches specialized algorithms across a variety of offline PLfO tasks in theory\nand experiments. Our code is available at https://github.com/AnqiLi/mahalo.", + "authors": "Anqi Li, Byron Boots, Ching-An Cheng", + "published": "2023-03-30", + "updated": "2023-08-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2211.08016v1", + "title": "Contextual Transformer for Offline Meta Reinforcement Learning", + "abstract": "The pretrain-finetuning paradigm in large-scale sequence models has made\nsignificant progress in natural language processing and computer vision tasks.\nHowever, such a paradigm is still hindered by several challenges in\nReinforcement Learning (RL), including the lack of self-supervised pretraining\nalgorithms based on offline data and efficient fine-tuning/prompt-tuning over\nunseen downstream tasks. In this work, we explore how prompts can improve\nsequence modeling-based offline reinforcement learning (offline-RL) algorithms.\nFirstly, we propose prompt tuning for offline RL, where a context vector\nsequence is concatenated with the input to guide the conditional policy\ngeneration. As such, we can pretrain a model on the offline dataset with\nself-supervised loss and learn a prompt to guide the policy towards desired\nactions. Secondly, we extend our framework to Meta-RL settings and propose\nContextual Meta Transformer (CMT); CMT leverages the context among different\ntasks as the prompt to improve generalization on unseen tasks. We conduct\nextensive experiments across three different offline-RL settings: offline\nsingle-agent RL on the D4RL dataset, offline Meta-RL on the MuJoCo benchmark,\nand offline MARL on the SMAC benchmark. Superior results validate the strong\nperformance, and generality of our methods.", + "authors": "Runji Lin, Ye Li, Xidong Feng, Zhaowei Zhang, Xian Hong Wu Fung, Haifeng Zhang, Jun Wang, Yali Du, Yaodong Yang", + "published": "2022-11-15", + "updated": "2022-11-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2005.11142v1", + "title": "Two-stage Deep Reinforcement Learning for Inverter-based Volt-VAR Control in Active Distribution Networks", + "abstract": "Model-based Vol/VAR optimization method is widely used to eliminate voltage\nviolations and reduce network losses. However, the parameters of active\ndistribution networks(ADNs) are not onsite identified, so significant errors\nmay be involved in the model and make the model-based method infeasible. To\ncope with this critical issue, we propose a novel two-stage deep reinforcement\nlearning (DRL) method to improve the voltage profile by regulating\ninverter-based energy resources, which consists of offline stage and online\nstage. In the offline stage, a highly efficient adversarial reinforcement\nlearning algorithm is developed to train an offline agent robust to the model\nmismatch. In the sequential online stage, we transfer the offline agent safely\nas the online agent to perform continuous learning and controlling online with\nsignificantly improved safety and efficiency. 
Numerical simulations on IEEE\ntest cases not only demonstrate that the proposed adversarial reinforcement\nlearning algorithm outperforms the state-of-the-art algorithm, but also show that\nour proposed two-stage method achieves much better performance than the\nexisting DRL based methods in the online application.", + "authors": "Haotian Liu, Wenchuan Wu", + "published": "2020-05-20", + "updated": "2020-05-20", + "primary_cat": "eess.SY", + "cats": [ + "eess.SY", + "cs.LG", + "cs.SY", + "J.7; C.3" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.08066v5", + "title": "Exploiting Action Impact Regularity and Exogenous State Variables for Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning -- learning a policy from a batch of data --\nis known to be hard for general MDPs. These results motivate the need to look\nat specific classes of MDPs where offline reinforcement learning might be\nfeasible. In this work, we explore a restricted class of MDPs to obtain\nguarantees for offline reinforcement learning. The key property, which we call\nAction Impact Regularity (AIR), is that actions primarily impact a part of the\nstate (an endogenous component) and have limited impact on the remaining part\nof the state (an exogenous component). AIR is a strong assumption, but it\nnonetheless holds in a number of real-world domains including financial\nmarkets. We discuss algorithms that exploit the AIR property, and provide a\ntheoretical analysis for an algorithm based on Fitted-Q Iteration. Finally, we\ndemonstrate that the algorithm outperforms existing offline reinforcement\nlearning algorithms across different data collection policies in simulated and\nreal world environments where the regularity holds.", + "authors": "Vincent Liu, James R. Wright, Martha White", + "published": "2021-11-15", + "updated": "2023-05-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2403.11574v1", + "title": "Offline Multitask Representation Learning for Reinforcement Learning", + "abstract": "We study offline multitask representation learning in reinforcement learning\n(RL), where a learner is provided with an offline dataset from different tasks\nthat share a common representation and is asked to learn the shared\nrepresentation. We theoretically investigate offline multitask low-rank RL, and\npropose a new algorithm called MORL for offline multitask representation\nlearning. Furthermore, we examine downstream RL in reward-free, offline and\nonline scenarios, where a new task is introduced to the agent that shares the\nsame representation as the upstream offline tasks.
Our theoretical results\ndemonstrate the benefits of using the learned representation from the upstream\noffline task instead of directly learning the representation of the low-rank\nmodel.", + "authors": "Haque Ishfaq, Thanh Nguyen-Tang, Songtao Feng, Raman Arora, Mengdi Wang, Ming Yin, Doina Precup", + "published": "2024-03-18", + "updated": "2024-03-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2112.15578v1", + "title": "Importance of Empirical Sample Complexity Analysis for Offline Reinforcement Learning", + "abstract": "We hypothesize that empirically studying the sample complexity of offline\nreinforcement learning (RL) is crucial for the practical applications of RL in\nthe real world. Several recent works have demonstrated the ability to learn\npolicies directly from offline data. In this work, we ask the question of the\ndependency on the number of samples for learning from offline data. Our\nobjective is to emphasize that studying sample complexity for offline RL is\nimportant, and is an indicator of the usefulness of existing offline\nalgorithms. We propose an evaluation approach for sample complexity analysis of\noffline RL.", + "authors": "Samin Yeasar Arnob, Riashat Islam, Doina Precup", + "published": "2021-12-31", + "updated": "2021-12-31", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.15503v1", + "title": "Prioritized Trajectory Replay: A Replay Memory for Data-driven Reinforcement Learning", + "abstract": "In recent years, data-driven reinforcement learning (RL), also known as\noffline RL, have gained significant attention. However, the role of data\nsampling techniques in offline RL has been overlooked despite its potential to\nenhance online RL performance. Recent research suggests applying sampling\ntechniques directly to state-transitions does not consistently improve\nperformance in offline RL. Therefore, in this study, we propose a memory\ntechnique, (Prioritized) Trajectory Replay (TR/PTR), which extends the sampling\nperspective to trajectories for more comprehensive information extraction from\nlimited data. TR enhances learning efficiency by backward sampling of\ntrajectories that optimizes the use of subsequent state information. Building\non TR, we build the weighted critic target to avoid sampling unseen actions in\noffline training, and Prioritized Trajectory Replay (PTR) that enables more\nefficient trajectory sampling, prioritized by various trajectory priority\nmetrics. We demonstrate the benefits of integrating TR and PTR with existing\noffline RL algorithms on D4RL. 
In summary, our research emphasizes the\nsignificance of trajectory-based data sampling techniques in enhancing the\nefficiency and performance of offline RL algorithms.", + "authors": "Jinyi Liu, Yi Ma, Jianye Hao, Yujing Hu, Yan Zheng, Tangjie Lv, Changjie Fan", + "published": "2023-06-27", + "updated": "2023-06-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.07166v1", + "title": "Regularizing a Model-based Policy Stationary Distribution to Stabilize Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) extends the paradigm of classical RL\nalgorithms to purely learning from static datasets, without interacting with\nthe underlying environment during the learning process. A key challenge of\noffline RL is the instability of policy training, caused by the mismatch\nbetween the distribution of the offline data and the undiscounted stationary\nstate-action distribution of the learned policy. To avoid the detrimental\nimpact of distribution mismatch, we regularize the undiscounted stationary\ndistribution of the current policy towards the offline data during the policy\noptimization process. Further, we train a dynamics model to both implement this\nregularization and better estimate the stationary distribution of the current\npolicy, reducing the error induced by distribution mismatch. On a wide range of\ncontinuous-control offline RL datasets, our method indicates competitive\nperformance, which validates our algorithm. The code is publicly available.", + "authors": "Shentao Yang, Yihao Feng, Shujian Zhang, Mingyuan Zhou", + "published": "2022-06-14", + "updated": "2022-06-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2005.01643v3", + "title": "Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems", + "abstract": "In this tutorial article, we aim to provide the reader with the conceptual\ntools needed to get started on research on offline reinforcement learning\nalgorithms: reinforcement learning algorithms that utilize previously collected\ndata, without additional online data collection. Offline reinforcement learning\nalgorithms hold tremendous promise for making it possible to turn large\ndatasets into powerful decision making engines. Effective offline reinforcement\nlearning methods would be able to extract policies with the maximum possible\nutility out of the available data, thereby allowing automation of a wide range\nof decision-making domains, from healthcare and education to robotics. However,\nthe limitations of current algorithms make this difficult. 
We will aim to\nprovide the reader with an understanding of these challenges, particularly in\nthe context of modern deep reinforcement learning methods, and describe some\npotential solutions that have been explored in recent work to mitigate these\nchallenges, along with recent applications, and a discussion of perspectives on\nopen problems in the field.", + "authors": "Sergey Levine, Aviral Kumar, George Tucker, Justin Fu", + "published": "2020-05-04", + "updated": "2020-11-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2305.16217v2", + "title": "Beyond Reward: Offline Preference-guided Policy Optimization", + "abstract": "This study focuses on the topic of offline preference-based reinforcement\nlearning (PbRL), a variant of conventional reinforcement learning that\ndispenses with the need for online interaction or specification of reward\nfunctions. Instead, the agent is provided with fixed offline trajectories and\nhuman preferences between pairs of trajectories to extract the dynamics and\ntask information, respectively. Since the dynamics and task information are\northogonal, a naive approach would involve using preference-based reward\nlearning followed by an off-the-shelf offline RL algorithm. However, this\nrequires the separate learning of a scalar reward function, which is assumed to\nbe an information bottleneck of the learning process. To address this issue, we\npropose the offline preference-guided policy optimization (OPPO) paradigm,\nwhich models offline trajectories and preferences in a one-step process,\neliminating the need for separately learning a reward function. OPPO achieves\nthis by introducing an offline hindsight information matching objective for\noptimizing a contextual policy and a preference modeling objective for finding\nthe optimal context. OPPO further integrates a well-performing decision policy\nby optimizing the two objectives iteratively. Our empirical results demonstrate\nthat OPPO effectively models offline preferences and outperforms prior\ncompeting baselines, including offline RL algorithms performed over either true\nor pseudo reward function specifications. Our code is available on the project\nwebsite: https://sites.google.com/view/oppo-icml-2023 .", + "authors": "Yachen Kang, Diyuan Shi, Jinxin Liu, Li He, Donglin Wang", + "published": "2023-05-25", + "updated": "2023-06-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2104.06294v1", + "title": "Online and Offline Reinforcement Learning by Planning with a Learned Model", + "abstract": "Learning efficiently from small amounts of data has long been the focus of\nmodel-based reinforcement learning, both for the online case when interacting\nwith the environment and the offline case when learning from a fixed dataset.\nHowever, to date no single unified algorithm could demonstrate state-of-the-art\nresults in both settings. In this work, we describe the Reanalyse algorithm\nwhich uses model-based policy and value improvement operators to compute new\nimproved training targets on existing data points, allowing efficient learning\nfor data budgets varying by several orders of magnitude. 
We further show that\nReanalyse can also be used to learn entirely from demonstrations without any\nenvironment interactions, as in the case of offline Reinforcement Learning\n(offline RL). Combining Reanalyse with the MuZero algorithm, we introduce\nMuZero Unplugged, a single unified algorithm for any data budget, including\noffline RL. In contrast to previous work, our algorithm does not require any\nspecial adaptations for the off-policy or offline RL settings. MuZero Unplugged\nsets new state-of-the-art results in the RL Unplugged offline RL benchmark as\nwell as in the online RL benchmark of Atari in the standard 200 million frame\nsetting.", + "authors": "Julian Schrittwieser, Thomas Hubert, Amol Mandhane, Mohammadamin Barekatain, Ioannis Antonoglou, David Silver", + "published": "2021-04-13", + "updated": "2021-04-13", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2308.14897v1", + "title": "Statistically Efficient Variance Reduction with Double Policy Estimation for Off-Policy Evaluation in Sequence-Modeled Reinforcement Learning", + "abstract": "Offline reinforcement learning aims to utilize datasets of previously\ngathered environment-action interaction records to learn a policy without\naccess to the real environment. Recent work has shown that offline\nreinforcement learning can be formulated as a sequence modeling problem and\nsolved via supervised learning with approaches such as decision transformer.\nWhile these sequence-based methods achieve competitive results over\nreturn-to-go methods, especially on tasks that require longer episodes or with\nscarce rewards, importance sampling is not considered to correct the policy\nbias when dealing with off-policy data, mainly due to the absence of behavior\npolicy and the use of deterministic evaluation policies. To this end, we\npropose DPE: an RL algorithm that blends offline sequence modeling and offline\nreinforcement learning with Double Policy Estimation (DPE) in a unified\nframework with statistically proven properties on variance reduction. We\nvalidate our method in multiple tasks of OpenAI Gym with D4RL benchmarks. Our\nmethod brings a performance improvements on selected methods which outperforms\nSOTA baselines in several tasks, demonstrating the advantages of enabling\ndouble policy estimation for sequence-modeled reinforcement learning.", + "authors": "Hanhan Zhou, Tian Lan, Vaneet Aggarwal", + "published": "2023-08-28", + "updated": "2023-08-28", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.DC" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.04779v3", + "title": "Challenges and Opportunities in Offline Reinforcement Learning from Visual Observations", + "abstract": "Offline reinforcement learning has shown great promise in leveraging large\npre-collected datasets for policy learning, allowing agents to forgo\noften-expensive online data collection. However, offline reinforcement learning\nfrom visual observations with continuous action spaces remains under-explored,\nwith a limited understanding of the key challenges in this complex domain. 
In\nthis paper, we establish simple baselines for continuous control in the visual\ndomain and introduce a suite of benchmarking tasks for offline reinforcement\nlearning from visual observations designed to better represent the data\ndistributions present in real-world offline RL problems and guided by a set of\ndesiderata for offline RL from visual observations, including robustness to\nvisual distractions and visually identifiable changes in dynamics. Using this\nsuite of benchmarking tasks, we show that simple modifications to two popular\nvision-based online reinforcement learning algorithms, DreamerV2 and DrQ-v2,\nsuffice to outperform existing offline RL methods and establish competitive\nbaselines for continuous control in the visual domain. We rigorously evaluate\nthese algorithms and perform an empirical evaluation of the differences between\nstate-of-the-art model-based and model-free offline RL methods for continuous\ncontrol from visual observations. All code and data used in this evaluation are\nopen-sourced to facilitate progress in this domain.", + "authors": "Cong Lu, Philip J. Ball, Tim G. J. Rudner, Jack Parker-Holder, Michael A. Osborne, Yee Whye Teh", + "published": "2022-06-09", + "updated": "2023-07-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2302.13493v1", + "title": "The Provable Benefits of Unsupervised Data Sharing for Offline Reinforcement Learning", + "abstract": "Self-supervised methods have become crucial for advancing deep learning by\nleveraging data itself to reduce the need for expensive annotations. However,\nthe question of how to conduct self-supervised offline reinforcement learning\n(RL) in a principled way remains unclear. In this paper, we address this issue\nby investigating the theoretical benefits of utilizing reward-free data in\nlinear Markov Decision Processes (MDPs) within a semi-supervised setting.\n Further, we propose a novel, Provable Data Sharing algorithm (PDS) to utilize\nsuch reward-free data for offline RL. PDS uses additional penalties on the\nreward function learned from labeled data to prevent overestimation, ensuring a\nconservative algorithm. Our results on various offline RL tasks demonstrate\nthat PDS significantly improves the performance of offline RL algorithms with\nreward-free data. Overall, our work provides a promising approach to leveraging\nthe benefits of unlabeled data in offline RL while maintaining theoretical\nguarantees. We believe our findings will contribute to developing more robust\nself-supervised RL methods.", + "authors": "Hao Hu, Yiqin Yang, Qianchuan Zhao, Chongjie Zhang", + "published": "2023-02-27", + "updated": "2023-02-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2403.09701v2", + "title": "A Natural Extension To Online Algorithms For Hybrid RL With Limited Coverage", + "abstract": "Hybrid Reinforcement Learning (RL), leveraging both online and offline data,\nhas garnered recent interest, yet research on its provable benefits remains\nsparse. Additionally, many existing hybrid RL algorithms (Song et al., 2023;\nNakamoto et al., 2023; Amortila et al., 2024) impose coverage assumptions on\nthe offline dataset, but we show that this is unnecessary. 
A well-designed\nonline algorithm should \"fill in the gaps\" in the offline dataset, exploring\nstates and actions that the behavior policy did not explore. Unlike previous\napproaches that focus on estimating the offline data distribution to guide\nonline exploration (Li et al., 2023b), we show that a natural extension to\nstandard optimistic online algorithms -- warm-starting them by including the\noffline dataset in the experience replay buffer -- achieves similar provable\ngains from hybrid data even when the offline dataset does not have\nsingle-policy concentrability. We accomplish this by partitioning the\nstate-action space into two, bounding the regret on each partition through an\noffline and an online complexity measure, and showing that the regret of this\nhybrid RL algorithm can be characterized by the best partition -- despite the\nalgorithm not knowing the partition itself. As an example, we propose\nDISC-GOLF, a modification of an existing optimistic online algorithm with\ngeneral function approximation called GOLF used in Jin et al. (2021); Xie et\nal. (2022a), and show that it demonstrates provable gains over both online-only\nand offline-only reinforcement learning, with competitive bounds when\nspecialized to the tabular, linear and block MDP cases. Numerical simulations\nfurther validate our theory that hybrid data facilitates more efficient\nexploration, supporting the potential of hybrid RL in various scenarios.", + "authors": "Kevin Tan, Ziping Xu", + "published": "2024-03-07", + "updated": "2024-03-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2302.00935v3", + "title": "Policy Expansion for Bridging Offline-to-Online Reinforcement Learning", + "abstract": "Pre-training with offline data and online fine-tuning using reinforcement\nlearning is a promising strategy for learning control policies by leveraging\nthe best of both worlds in terms of sample efficiency and performance. One\nnatural approach is to initialize the policy for online learning with the one\ntrained offline. In this work, we introduce a policy expansion scheme for this\ntask. After learning the offline policy, we use it as one candidate policy in a\npolicy set. We then expand the policy set with another policy which will be\nresponsible for further learning. The two policies will be composed in an\nadaptive manner for interacting with the environment. With this approach, the\npolicy previously learned offline is fully retained during online learning,\nthus mitigating the potential issues such as destroying the useful behaviors of\nthe offline policy in the initial stage of online learning while allowing the\noffline policy participate in the exploration naturally in an adaptive manner.\nMoreover, new useful behaviors can potentially be captured by the newly added\npolicy through learning. 
Experiments are conducted on a number of tasks and the\nresults demonstrate the effectiveness of the proposed approach.", + "authors": "Haichao Zhang, We Xu, Haonan Yu", + "published": "2023-02-02", + "updated": "2023-04-15", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2303.07693v1", + "title": "Adaptive Policy Learning for Offline-to-Online Reinforcement Learning", + "abstract": "Conventional reinforcement learning (RL) needs an environment to collect\nfresh data, which is impractical when online interactions are costly. Offline\nRL provides an alternative solution by directly learning from the previously\ncollected dataset. However, it will yield unsatisfactory performance if the\nquality of the offline datasets is poor. In this paper, we consider an\noffline-to-online setting where the agent is first learned from the offline\ndataset and then trained online, and propose a framework called Adaptive Policy\nLearning for effectively taking advantage of offline and online data.\nSpecifically, we explicitly consider the difference between the online and\noffline data and apply an adaptive update scheme accordingly, that is, a\npessimistic update strategy for the offline dataset and an optimistic/greedy\nupdate scheme for the online dataset. Such a simple and effective method\nprovides a way to mix the offline and online RL and achieve the best of both\nworlds. We further provide two detailed algorithms for implementing the\nframework through embedding value or policy-based RL algorithms into it.\nFinally, we conduct extensive experiments on popular continuous control tasks,\nand results show that our algorithm can learn the expert policy with high\nsample efficiency even when the quality of offline dataset is poor, e.g.,\nrandom dataset.", + "authors": "Han Zheng, Xufang Luo, Pengfei Wei, Xuan Song, Dongsheng Li, Jing Jiang", + "published": "2023-03-14", + "updated": "2023-03-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2305.03097v1", + "title": "Federated Ensemble-Directed Offline Reinforcement Learning", + "abstract": "We consider the problem of federated offline reinforcement learning (RL), a\nscenario under which distributed learning agents must collaboratively learn a\nhigh-quality control policy only using small pre-collected datasets generated\naccording to different unknown behavior policies. Naively combining a standard\noffline RL approach with a standard federated learning approach to solve this\nproblem can lead to poorly performing policies. In response, we develop the\nFederated Ensemble-Directed Offline Reinforcement Learning Algorithm (FEDORA),\nwhich distills the collective wisdom of the clients using an ensemble learning\napproach. We develop the FEDORA codebase to utilize distributed compute\nresources on a federated learning platform. 
We show that FEDORA significantly\noutperforms other approaches, including offline RL over the combined data pool,\nin various complex continuous control environments and real world datasets.\nFinally, we demonstrate the performance of FEDORA in the real-world on a mobile\nrobot.", + "authors": "Desik Rengarajan, Nitin Ragothaman, Dileep Kalathil, Srinivas Shakkottai", + "published": "2023-05-04", + "updated": "2023-05-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2303.04268v1", + "title": "On the Sample Complexity of Vanilla Model-Based Offline Reinforcement Learning with Dependent Samples", + "abstract": "Offline reinforcement learning (offline RL) considers problems where learning\nis performed using only previously collected samples and is helpful for the\nsettings in which collecting new data is costly or risky. In model-based\noffline RL, the learner performs estimation (or optimization) using a model\nconstructed according to the empirical transition frequencies. We analyze the\nsample complexity of vanilla model-based offline RL with dependent samples in\nthe infinite-horizon discounted-reward setting. In our setting, the samples\nobey the dynamics of the Markov decision process and, consequently, may have\ninterdependencies. Under no assumption of independent samples, we provide a\nhigh-probability, polynomial sample complexity bound for vanilla model-based\noff-policy evaluation that requires partial or uniform coverage. We extend this\nresult to the off-policy optimization under uniform coverage. As a comparison\nto the model-based approach, we analyze the sample complexity of off-policy\nevaluation with vanilla importance sampling in the infinite-horizon setting.\nFinally, we provide an estimator that outperforms the sample-mean estimator for\nalmost deterministic dynamics that are prevalent in reinforcement learning.", + "authors": "Mustafa O. Karabag, Ufuk Topcu", + "published": "2023-03-07", + "updated": "2023-03-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.10813v2", + "title": "A Workflow for Offline Model-Free Robotic Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) enables learning control policies by\nutilizing only prior experience, without any online interaction. This can allow\nrobots to acquire generalizable skills from large and diverse datasets, without\nany costly or unsafe online data collection. Despite recent algorithmic\nadvances in offline RL, applying these methods to real-world problems has\nproven challenging. Although offline RL methods can learn from prior data,\nthere is no clear and well-understood process for making various design\nchoices, from model architecture to algorithm hyperparameters, without actually\nevaluating the learned policies online. In this paper, our aim is to develop a\npractical workflow for using offline RL analogous to the relatively\nwell-understood workflows for supervised learning problems. To this end, we\ndevise a set of metrics and conditions that can be tracked over the course of\noffline training, and can inform the practitioner about how the algorithm and\nmodel architecture should be adjusted to improve final performance. 
Our\nworkflow is derived from a conceptual understanding of the behavior of\nconservative offline RL algorithms and cross-validation in supervised learning.\nWe demonstrate the efficacy of this workflow in producing effective policies\nwithout any online tuning, both in several simulated robotic learning scenarios\nand for three tasks on two distinct real robots, focusing on learning\nmanipulation skills with raw image observations with sparse binary rewards.\nExplanatory video and additional results can be found at\nsites.google.com/view/offline-rl-workflow", + "authors": "Aviral Kumar, Anikait Singh, Stephen Tian, Chelsea Finn, Sergey Levine", + "published": "2021-09-22", + "updated": "2021-09-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2011.13885v1", + "title": "Offline Learning from Demonstrations and Unlabeled Experience", + "abstract": "Behavior cloning (BC) is often practical for robot learning because it allows\na policy to be trained offline without rewards, by supervised learning on\nexpert demonstrations. However, BC does not effectively leverage what we will\nrefer to as unlabeled experience: data of mixed and unknown quality without\nreward annotations. This unlabeled data can be generated by a variety of\nsources such as human teleoperation, scripted policies and other agents on the\nsame robot. Towards data-driven offline robot learning that can use this\nunlabeled experience, we introduce Offline Reinforced Imitation Learning\n(ORIL). ORIL first learns a reward function by contrasting observations from\ndemonstrator and unlabeled trajectories, then annotates all data with the\nlearned reward, and finally trains an agent via offline reinforcement learning.\nAcross a diverse set of continuous control and simulated robotic manipulation\ntasks, we show that ORIL consistently outperforms comparable BC agents by\neffectively leveraging unlabeled experience.", + "authors": "Konrad Zolna, Alexander Novikov, Ksenia Konyushkova, Caglar Gulcehre, Ziyu Wang, Yusuf Aytar, Misha Denil, Nando de Freitas, Scott Reed", + "published": "2020-11-27", + "updated": "2020-11-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2309.16973v1", + "title": "Towards Robust Offline-to-Online Reinforcement Learning via Uncertainty and Smoothness", + "abstract": "To obtain a near-optimal policy with fewer interactions in Reinforcement\nLearning (RL), a promising approach involves the combination of offline RL,\nwhich enhances sample efficiency by leveraging offline datasets, and online RL,\nwhich explores informative transitions by interacting with the environment.\nOffline-to-Online (O2O) RL provides a paradigm for improving an offline trained\nagent within limited online interactions. However, due to the significant\ndistribution shift between online experiences and offline data, most offline RL\nalgorithms suffer from performance drops and fail to achieve stable policy\nimprovement in O2O adaptation. To address this problem, we propose the Robust\nOffline-to-Online (RO2O) algorithm, designed to enhance offline policies\nthrough uncertainty and smoothness, and to mitigate the performance drop in\nonline adaptation. 
Specifically, RO2O incorporates Q-ensemble for uncertainty\npenalty and adversarial samples for policy and value smoothness, which enable\nRO2O to maintain a consistent learning procedure in online adaptation without\nrequiring special changes to the learning objective. Theoretical analyses in\nlinear MDPs demonstrate that the uncertainty and smoothness lead to a tighter\noptimality bound in O2O against distribution shift. Experimental results\nillustrate the superiority of RO2O in facilitating stable offline-to-online\nlearning and achieving significant improvement with limited online\ninteractions.", + "authors": "Xiaoyu Wen, Xudong Yu, Rui Yang, Chenjia Bai, Zhen Wang", + "published": "2023-09-29", + "updated": "2023-09-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2404.02429v1", + "title": "AD4RL: Autonomous Driving Benchmarks for Offline Reinforcement Learning with Value-based Dataset", + "abstract": "Offline reinforcement learning has emerged as a promising technology by\nenhancing its practicality through the use of pre-collected large datasets.\nDespite its practical benefits, most algorithm development research in offline\nreinforcement learning still relies on game tasks with synthetic datasets. To\naddress such limitations, this paper provides autonomous driving datasets and\nbenchmarks for offline reinforcement learning research. We provide 19 datasets,\nincluding real-world human driver's datasets, and seven popular offline\nreinforcement learning algorithms in three realistic driving scenarios. We also\nprovide a unified decision-making process model that can operate effectively\nacross different scenarios, serving as a reference framework in algorithm\ndesign. Our research lays the groundwork for further collaborations in the\ncommunity to explore practical aspects of existing reinforcement learning\nmethods. Dataset and codes can be found in https://sites.google.com/view/ad4rl.", + "authors": "Dongsu Lee, Chanin Eom, Minhae Kwon", + "published": "2024-04-03", + "updated": "2024-04-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2106.10411v2", + "title": "Boosting Offline Reinforcement Learning with Residual Generative Modeling", + "abstract": "Offline reinforcement learning (RL) tries to learn the near-optimal policy\nwith recorded offline experience without online exploration. Current offline RL\nresearch includes: 1) generative modeling, i.e., approximating a policy using\nfixed data; and 2) learning the state-action value function. While most\nresearch focuses on the state-action function part through reducing the\nbootstrapping error in value function approximation induced by the distribution\nshift of training data, the effects of error propagation in generative modeling\nhave been neglected. In this paper, we analyze the error in generative\nmodeling. We propose AQL (action-conditioned Q-learning), a residual generative\nmodel to reduce policy approximation error for offline RL. We show that our\nmethod can learn more accurate policy approximations in different benchmark\ndatasets. 
In addition, we show that the proposed offline RL method can learn\nmore competitive AI agents in complex control tasks under the multiplayer\nonline battle arena (MOBA) game Honor of Kings.", + "authors": "Hua Wei, Deheng Ye, Zhao Liu, Hao Wu, Bo Yuan, Qiang Fu, Wei Yang, Zhenhui Li", + "published": "2021-06-19", + "updated": "2021-06-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "68T01", + "I.2.8; I.2.1" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2203.05804v1", + "title": "Near-optimal Offline Reinforcement Learning with Linear Representation: Leveraging Variance Information with Pessimism", + "abstract": "Offline reinforcement learning, which seeks to utilize offline/historical\ndata to optimize sequential decision-making strategies, has gained surging\nprominence in recent studies. Due to the advantage that appropriate function\napproximators can help mitigate the sample complexity burden in modern\nreinforcement learning problems, existing endeavors usually enforce powerful\nfunction representation models (e.g. neural networks) to learn the optimal\npolicies. However, a precise understanding of the statistical limits with\nfunction representations, remains elusive, even when such a representation is\nlinear.\n Towards this goal, we study the statistical limits of offline reinforcement\nlearning with linear model representations. To derive the tight offline\nlearning bound, we design the variance-aware pessimistic value iteration\n(VAPVI), which adopts the conditional variance information of the value\nfunction for time-inhomogeneous episodic linear Markov decision processes\n(MDPs). VAPVI leverages estimated variances of the value functions to reweight\nthe Bellman residuals in the least-square pessimistic value iteration and\nprovides improved offline learning bounds over the best-known existing results\n(whereas the Bellman residuals are equally weighted by design). More\nimportantly, our learning bounds are expressed in terms of system quantities,\nwhich provide natural instance-dependent characterizations that previous\nresults are short of. We hope our results draw a clearer picture of what\noffline learning should look like when linear representations are provided.", + "authors": "Ming Yin, Yaqi Duan, Mengdi Wang, Yu-Xiang Wang", + "published": "2022-03-11", + "updated": "2022-03-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.08128v1", + "title": "Conservative Data Sharing for Multi-Task Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) algorithms have shown promising results\nin domains where abundant pre-collected data is available. However, prior\nmethods focus on solving individual problems from scratch with an offline\ndataset without considering how an offline RL agent can acquire multiple\nskills. We argue that a natural use case of offline RL is in settings where we\ncan pool large amounts of data collected in various scenarios for solving\ndifferent tasks, and utilize all of this data to learn behaviors for all the\ntasks more effectively rather than training each one in isolation. However,\nsharing data across all tasks in multi-task offline RL performs surprisingly\npoorly in practice. 
Through thorough empirical analysis, we find that sharing data can\nactually exacerbate the distributional shift between the learned policy and the\ndataset, which in turn can lead to divergence of the learned policy and poor\nperformance. To address this challenge, we develop a simple technique for\ndata-sharing in multi-task offline RL that routes data based on the improvement\nover the task-specific data. We call this approach conservative data sharing\n(CDS), and it can be applied with multiple single-task offline RL methods. On a\nrange of challenging multi-task locomotion, navigation, and vision-based\nrobotic manipulation problems, CDS achieves the best or comparable performance\ncompared to prior offline multi-task RL methods and previous data sharing\napproaches.", + "authors": "Tianhe Yu, Aviral Kumar, Yevgen Chebotar, Karol Hausman, Sergey Levine, Chelsea Finn", + "published": "2021-09-16", + "updated": "2021-09-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2211.07614v1", + "title": "Towards Data-Driven Offline Simulations for Online Reinforcement Learning", + "abstract": "Modern decision-making systems, from robots to web recommendation engines,\nare expected to adapt: to user preferences, changing circumstances or even new\ntasks. Yet, it is still uncommon to deploy a dynamically learning agent (rather\nthan a fixed policy) to a production system, as it's perceived as unsafe. Using\nhistorical data to reason about learning algorithms, similar to offline policy\nevaluation (OPE) applied to fixed policies, could help practitioners evaluate\nand ultimately deploy such adaptive agents to production. In this work, we\nformalize offline learner simulation (OLS) for reinforcement learning (RL) and\npropose a novel evaluation protocol that measures both fidelity and efficiency\nof the simulation. For environments with complex high-dimensional observations,\nwe propose a semi-parametric approach that leverages recent advances in latent\nstate discovery in order to achieve accurate and efficient offline simulations.\nIn preliminary experiments, we show the advantage of our approach compared to\nfully non-parametric baselines. The code to reproduce these experiments will be\nmade available at https://github.com/microsoft/rl-offline-simulation.", + "authors": "Shengpu Tang, Felipe Vieira Frujeri, Dipendra Misra, Alex Lamb, John Langford, Paul Mineiro, Sebastian Kochman", + "published": "2022-11-14", + "updated": "2022-11-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2310.11731v1", + "title": "Action-Quantized Offline Reinforcement Learning for Robotic Skill Learning", + "abstract": "The offline reinforcement learning (RL) paradigm provides a general recipe to\nconvert static behavior datasets into policies that can perform better than the\npolicy that collected the data. While policy constraints, conservatism, and\nother methods for mitigating distributional shifts have made offline\nreinforcement learning more effective, the continuous action setting often\nnecessitates various approximations for applying these techniques. Many of\nthese challenges are greatly alleviated in discrete action settings, where\noffline RL constraints and regularizers can often be computed more precisely or\neven exactly.
In this paper, we propose an adaptive scheme for action\nquantization. We use a VQ-VAE to learn state-conditioned action quantization,\navoiding the exponential blowup that comes with na\\\"ive discretization of the\naction space. We show that several state-of-the-art offline RL methods such as\nIQL, CQL, and BRAC improve in performance on benchmarks when combined with our\nproposed discretization scheme. We further validate our approach on a set of\nchallenging long-horizon complex robotic manipulation tasks in the Robomimic\nenvironment, where our discretized offline RL algorithms are able to improve\nupon their continuous counterparts by 2-3x. Our project page is at\nhttps://saqrl.github.io/", + "authors": "Jianlan Luo, Perry Dong, Jeffrey Wu, Aviral Kumar, Xinyang Geng, Sergey Levine", + "published": "2023-10-18", + "updated": "2023-10-18", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1907.04543v4", + "title": "An Optimistic Perspective on Offline Reinforcement Learning", + "abstract": "Off-policy reinforcement learning (RL) using a fixed offline dataset of\nlogged interactions is an important consideration in real world applications.\nThis paper studies offline RL using the DQN replay dataset comprising the\nentire replay experience of a DQN agent on 60 Atari 2600 games. We demonstrate\nthat recent off-policy deep RL algorithms, even when trained solely on this\nfixed dataset, outperform the fully trained DQN agent. To enhance\ngeneralization in the offline setting, we present Random Ensemble Mixture\n(REM), a robust Q-learning algorithm that enforces optimal Bellman consistency\non random convex combinations of multiple Q-value estimates. Offline REM\ntrained on the DQN replay dataset surpasses strong RL baselines. Ablation\nstudies highlight the role of offline dataset size and diversity as well as the\nalgorithm choice in our positive results. Overall, the results here present an\noptimistic view that robust RL algorithms trained on sufficiently large and\ndiverse offline datasets can lead to high quality policies. The DQN replay\ndataset can serve as an offline RL benchmark and is open-sourced.", + "authors": "Rishabh Agarwal, Dale Schuurmans, Mohammad Norouzi", + "published": "2019-07-10", + "updated": "2020-06-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.03788v2", + "title": "d3rlpy: An Offline Deep Reinforcement Learning Library", + "abstract": "In this paper, we introduce d3rlpy, an open-sourced offline deep\nreinforcement learning (RL) library for Python. d3rlpy supports a set of\noffline deep RL algorithms as well as off-policy online algorithms via a fully\ndocumented plug-and-play API. To address a reproducibility issue, we conduct a\nlarge-scale benchmark with D4RL and Atari 2600 dataset to ensure implementation\nquality and provide experimental scripts and full tables of results. 
The d3rlpy\nsource code can be found on GitHub: \\url{https://github.com/takuseno/d3rlpy}.", + "authors": "Takuma Seno, Michita Imai", + "published": "2021-11-06", + "updated": "2022-12-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2110.00188v2", + "title": "Offline Reinforcement Learning with Reverse Model-based Imagination", + "abstract": "In offline reinforcement learning (offline RL), one of the main challenges is\nto deal with the distributional shift between the learning policy and the given\ndataset. To address this problem, recent offline RL methods attempt to\nintroduce conservatism bias to encourage learning in high-confidence areas.\nModel-free approaches directly encode such bias into policy or value function\nlearning using conservative regularizations or special network structures, but\ntheir constrained policy search limits the generalization beyond the offline\ndataset. Model-based approaches learn forward dynamics models with conservatism\nquantifications and then generate imaginary trajectories to extend the offline\ndatasets. However, due to limited samples in offline datasets, conservatism\nquantifications often suffer from overgeneralization in out-of-support regions.\nThe unreliable conservative measures will mislead forward model-based\nimaginations to undesired areas, leading to overaggressive behaviors. To\nencourage more conservatism, we propose a novel model-based offline RL\nframework, called Reverse Offline Model-based Imagination (ROMI). We learn a\nreverse dynamics model in conjunction with a novel reverse policy, which can\ngenerate rollouts leading to the target goal states within the offline dataset.\nThese reverse imaginations provide informed data augmentation for model-free\npolicy learning and enable conservative generalization beyond the offline\ndataset. ROMI can effectively combine with off-the-shelf model-free algorithms\nto enable model-based generalization with proper conservatism. Empirical\nresults show that our method can generate more conservative behaviors and\nachieve state-of-the-art performance on offline RL benchmark tasks.", + "authors": "Jianhao Wang, Wenzhe Li, Haozhe Jiang, Guangxiang Zhu, Siyuan Li, Chongjie Zhang", + "published": "2021-10-01", + "updated": "2021-11-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2210.13846v1", + "title": "Adaptive Behavior Cloning Regularization for Stable Offline-to-Online Reinforcement Learning", + "abstract": "Offline reinforcement learning, by learning from a fixed dataset, makes it\npossible to learn agent behaviors without interacting with the environment.\nHowever, depending on the quality of the offline dataset, such pre-trained\nagents may have limited performance and would further need to be fine-tuned\nonline by interacting with the environment. During online fine-tuning, the\nperformance of the pre-trained agent may collapse quickly due to the sudden\ndistribution shift from offline to online data. While constraints enforced by\noffline RL methods such as a behaviour cloning loss prevent this to an extent,\nthese constraints also significantly slow down online fine-tuning by forcing\nthe agent to stay close to the behavior policy. 
We propose to adaptively weigh\nthe behavior cloning loss during online fine-tuning based on the agent's\nperformance and training stability. Moreover, we use a randomized ensemble of Q\nfunctions to further increase the sample efficiency of online fine-tuning by\nperforming a large number of learning updates. Experiments show that the\nproposed method yields state-of-the-art offline-to-online reinforcement\nlearning performance on the popular D4RL benchmark. Code is available:\n\\url{https://github.com/zhaoyi11/adaptive_bc}.", + "authors": "Yi Zhao, Rinu Boney, Alexander Ilin, Juho Kannala, Joni Pajarinen", + "published": "2022-10-25", + "updated": "2022-10-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2203.06662v1", + "title": "DARA: Dynamics-Aware Reward Augmentation in Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning algorithms promise to be applicable in\nsettings where a fixed dataset is available and no new experience can be\nacquired. However, such formulation is inevitably offline-data-hungry and, in\npractice, collecting a large offline dataset for one specific task over one\nspecific environment is also costly and laborious. In this paper, we thus 1)\nformulate the offline dynamics adaptation by using (source) offline data\ncollected from another dynamics to relax the requirement for the extensive\n(target) offline data, 2) characterize the dynamics shift problem in which\nprior offline methods do not scale well, and 3) derive a simple dynamics-aware\nreward augmentation (DARA) framework from both model-free and model-based\noffline settings. Specifically, DARA emphasizes learning from those source\ntransition pairs that are adaptive for the target environment and mitigates the\noffline dynamics shift by characterizing state-action-next-state pairs instead\nof the typical state-action distribution sketched by prior offline RL methods.\nThe experimental evaluation demonstrates that DARA, by augmenting rewards in\nthe source offline dataset, can acquire an adaptive policy for the target\nenvironment and yet significantly reduce the requirement of target offline\ndata. With only modest amounts of target offline data, our performance\nconsistently outperforms the prior offline RL methods in both simulated and\nreal-world tasks.", + "authors": "Jinxin Liu, Hongyin Zhang, Donglin Wang", + "published": "2022-03-13", + "updated": "2022-03-13", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2309.12716v1", + "title": "H2O+: An Improved Framework for Hybrid Offline-and-Online RL with Dynamics Gaps", + "abstract": "Solving real-world complex tasks using reinforcement learning (RL) without\nhigh-fidelity simulation environments or large amounts of offline data can be\nquite challenging. Online RL agents trained in imperfect simulation\nenvironments can suffer from severe sim-to-real issues. Offline RL approaches\nalthough bypass the need for simulators, often pose demanding requirements on\nthe size and quality of the offline datasets. The recently emerged hybrid\noffline-and-online RL provides an attractive framework that enables joint use\nof limited offline data and imperfect simulator for transferable policy\nlearning. 
In this paper, we develop a new algorithm, called H2O+, which offers\ngreat flexibility to bridge various choices of offline and online learning\nmethods, while also accounting for dynamics gaps between the real and\nsimulation environment. Through extensive simulation and real-world robotics\nexperiments, we demonstrate superior performance and flexibility over advanced\ncross-domain online and offline RL algorithms.", + "authors": "Haoyi Niu, Tianying Ji, Bingqi Liu, Haocheng Zhao, Xiangyu Zhu, Jianying Zheng, Pengfei Huang, Guyue Zhou, Jianming Hu, Xianyuan Zhan", + "published": "2023-09-22", + "updated": "2023-09-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2304.07920v2", + "title": "Causal Decision Transformer for Recommender Systems via Offline Reinforcement Learning", + "abstract": "Reinforcement learning-based recommender systems have recently gained\npopularity. However, the design of the reward function, on which the agent\nrelies to optimize its recommendation policy, is often not straightforward.\nExploring the causality underlying users' behavior can take the place of the\nreward function in guiding the agent to capture the dynamic interests of users.\nMoreover, due to the typical limitations of simulation environments (e.g., data\ninefficiency), most of the work cannot be broadly applied in large-scale\nsituations. Although some works attempt to convert the offline dataset into a\nsimulator, data inefficiency makes the learning process even slower. Because of\nthe nature of reinforcement learning (i.e., learning by interaction), it cannot\ncollect enough data to train during a single interaction. Furthermore,\ntraditional reinforcement learning algorithms do not have a solid capability\nlike supervised learning methods to learn from offline datasets directly. In\nthis paper, we propose a new model named the causal decision transformer for\nrecommender systems (CDT4Rec). CDT4Rec is an offline reinforcement learning\nsystem that can learn from a dataset rather than from online interaction.\nMoreover, CDT4Rec employs the transformer architecture, which is capable of\nprocessing large offline datasets and capturing both short-term and long-term\ndependencies within the data to estimate the causal relationship between\naction, state, and reward. To demonstrate the feasibility and superiority of\nour model, we have conducted experiments on six real-world offline datasets and\none online simulator.", + "authors": "Siyu Wang, Xiaocong Chen, Dietmar Jannach, Lina Yao", + "published": "2023-04-17", + "updated": "2023-08-27", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.03383v2", + "title": "On the Role of Discount Factor in Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) enables effective learning from\npreviously collected data without exploration, which shows great promise in\nreal-world applications when exploration is expensive or even infeasible. The\ndiscount factor, $\\gamma$, plays a vital role in improving online RL sample\nefficiency and estimation accuracy, but the role of the discount factor in\noffline RL is not well explored. This paper examines two distinct effects of\n$\\gamma$ in offline RL with theoretical analysis, namely the regularization\neffect and the pessimism effect. 
On the one hand, $\\gamma$ is a regulator to\ntrade-off optimality with sample efficiency upon existing offline techniques.\nOn the other hand, lower guidance $\\gamma$ can also be seen as a way of\npessimism where we optimize the policy's performance in the worst possible\nmodels. We empirically verify the above theoretical observation with tabular\nMDPs and standard D4RL tasks. The results show that the discount factor plays\nan essential role in the performance of offline RL algorithms, both under small\ndata regimes upon existing offline methods and in large data regimes without\nother conservative methods.", + "authors": "Hao Hu, Yiqin Yang, Qianchuan Zhao, Chongjie Zhang", + "published": "2022-06-07", + "updated": "2022-06-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2212.08232v1", + "title": "Offline Robot Reinforcement Learning with Uncertainty-Guided Human Expert Sampling", + "abstract": "Recent advances in batch (offline) reinforcement learning have shown\npromising results in learning from available offline data and proved offline\nreinforcement learning to be an essential toolkit in learning control policies\nin a model-free setting. An offline reinforcement learning algorithm applied to\na dataset collected by a suboptimal non-learning-based algorithm can result in\na policy that outperforms the behavior agent used to collect the data. Such a\nscenario is frequent in robotics, where existing automation is collecting\noperational data. Although offline learning techniques can learn from data\ngenerated by a sub-optimal behavior agent, there is still an opportunity to\nimprove the sample complexity of existing offline reinforcement learning\nalgorithms by strategically introducing human demonstration data into the\ntraining process. To this end, we propose a novel approach that uses\nuncertainty estimation to trigger the injection of human demonstration data and\nguide policy training towards optimal behavior while reducing overall sample\ncomplexity. Our experiments show that this approach is more sample efficient\nwhen compared to a naive way of combining expert data with data collected from\na sub-optimal agent. We augmented an existing offline reinforcement learning\nalgorithm Conservative Q-Learning with our approach and performed experiments\non data collected from MuJoCo and OffWorld Gym learning environments.", + "authors": "Ashish Kumar, Ilya Kuzovkin", + "published": "2022-12-16", + "updated": "2022-12-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2403.01734v1", + "title": "Offline Goal-Conditioned Reinforcement Learning for Safety-Critical Tasks with Recovery Policy", + "abstract": "Offline goal-conditioned reinforcement learning (GCRL) aims at solving\ngoal-reaching tasks with sparse rewards from an offline dataset. While prior\nwork has demonstrated various approaches for agents to learn near-optimal\npolicies, these methods encounter limitations when dealing with diverse\nconstraints in complex environments, such as safety constraints. Some of these\napproaches prioritize goal attainment without considering safety, while others\nexcessively focus on safety at the expense of training efficiency. 
In this\npaper, we study the problem of constrained offline GCRL and propose a new\nmethod called Recovery-based Supervised Learning (RbSL) to accomplish\nsafety-critical tasks with various goals. To evaluate the method performance,\nwe build a benchmark based on the robot-fetching environment with a randomly\npositioned obstacle and use expert or random policies to generate an offline\ndataset. We compare RbSL with three offline GCRL algorithms and one offline\nsafe RL algorithm. As a result, our method outperforms the existing\nstate-of-the-art methods to a large extent. Furthermore, we validate the\npracticality and effectiveness of RbSL by deploying it on a real Panda\nmanipulator. Code is available at https://github.com/Sunlighted/RbSL.git.", + "authors": "Chenyang Cao, Zichen Yan, Renhao Lu, Junbo Tan, Xueqian Wang", + "published": "2024-03-04", + "updated": "2024-03-04", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.AI", + "cs.LG", + "68T40" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.02829v3", + "title": "RORL: Robust Offline Reinforcement Learning via Conservative Smoothing", + "abstract": "Offline reinforcement learning (RL) provides a promising direction to exploit\nmassive amount of offline data for complex decision-making tasks. Due to the\ndistribution shift issue, current offline RL algorithms are generally designed\nto be conservative in value estimation and action selection. However, such\nconservatism can impair the robustness of learned policies when encountering\nobservation deviation under realistic conditions, such as sensor errors and\nadversarial attacks. To trade off robustness and conservatism, we propose\nRobust Offline Reinforcement Learning (RORL) with a novel conservative\nsmoothing technique. In RORL, we explicitly introduce regularization on the\npolicy and the value function for states near the dataset, as well as\nadditional conservative value estimation on these states. Theoretically, we\nshow RORL enjoys a tighter suboptimality bound than recent theoretical results\nin linear MDPs. We demonstrate that RORL can achieve state-of-the-art\nperformance on the general offline RL benchmark and is considerably robust to\nadversarial observation perturbations.", + "authors": "Rui Yang, Chenjia Bai, Xiaoteng Ma, Zhaoran Wang, Chongjie Zhang, Lei Han", + "published": "2022-06-06", + "updated": "2022-10-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.13777v4", + "title": "Deep Generative Models for Offline Policy Learning: Tutorial, Survey, and Perspectives on Future Directions", + "abstract": "Deep generative models (DGMs) have demonstrated great success across various\ndomains, particularly in generating texts, images, and videos using models\ntrained from offline data. Similarly, data-driven decision-making and robotic\ncontrol also necessitate learning a generator function from the offline data to\nserve as the strategy or policy. In this case, applying deep generative models\nin offline policy learning exhibits great potential, and numerous studies have\nexplored in this direction. However, this field still lacks a comprehensive\nreview and so developments of different branches are relatively independent.\nThus, we provide the first systematic review on the applications of deep\ngenerative models for offline policy learning. 
In particular, we cover five\nmainstream deep generative models, including Variational Auto-Encoders,\nGenerative Adversarial Networks, Normalizing Flows, Transformers, and Diffusion\nModels, and their applications in both offline reinforcement learning (offline\nRL) and imitation learning (IL). Offline RL and IL are two main branches of\noffline policy learning and are widely-adopted techniques for sequential\ndecision-making. Specifically, for each type of DGM-based offline policy\nlearning, we distill its fundamental scheme, categorize related works based on\nthe usage of the DGM, and sort out the development process of algorithms in\nthat field. Subsequent to the main content, we provide in-depth discussions on\ndeep generative models and offline policy learning as a summary, based on which\nwe present our perspectives on future research directions. This work offers a\nhands-on reference for the research progress in deep generative models for\noffline policy learning, and aims to inspire improved DGM-based offline RL or\nIL algorithms. For convenience, we maintain a paper list on\nhttps://github.com/LucasCJYSDL/DGMs-for-Offline-Policy-Learning.", + "authors": "Jiayu Chen, Bhargav Ganguly, Yang Xu, Yongsheng Mei, Tian Lan, Vaneet Aggarwal", + "published": "2024-02-21", + "updated": "2024-02-26", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2010.13611v3", + "title": "OPAL: Offline Primitive Discovery for Accelerating Offline Reinforcement Learning", + "abstract": "Reinforcement learning (RL) has achieved impressive performance in a variety\nof online settings in which an agent's ability to query the environment for\ntransitions and rewards is effectively unlimited. However, in many practical\napplications, the situation is reversed: an agent may have access to large\namounts of undirected offline experience data, while access to the online\nenvironment is severely limited. In this work, we focus on this offline\nsetting. Our main insight is that, when presented with offline data composed of\na variety of behaviors, an effective way to leverage this data is to extract a\ncontinuous space of recurring and temporally extended primitive behaviors\nbefore using these primitives for downstream task learning. Primitives\nextracted in this way serve two purposes: they delineate the behaviors that are\nsupported by the data from those that are not, making them useful for avoiding\ndistributional shift in offline RL; and they provide a degree of temporal\nabstraction, which reduces the effective horizon yielding better learning in\ntheory, and improved offline RL in practice. 
In addition to benefiting offline\npolicy optimization, we show that performing offline primitive learning in this\nway can also be leveraged for improving few-shot imitation learning as well as\nexploration and transfer in online RL on a variety of benchmark domains.\nVisualizations are available at https://sites.google.com/view/opal-iclr", + "authors": "Anurag Ajay, Aviral Kumar, Pulkit Agrawal, Sergey Levine, Ofir Nachum", + "published": "2020-10-26", + "updated": "2021-05-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2211.08251v1", + "title": "Offline Reinforcement Learning with Adaptive Behavior Regularization", + "abstract": "Offline reinforcement learning (RL) defines a sample-efficient learning\nparadigm, where a policy is learned from static and previously collected\ndatasets without additional interaction with the environment. The major\nobstacle to offline RL is the estimation error arising from evaluating the\nvalue of out-of-distribution actions. To tackle this problem, most existing\noffline RL methods attempt to acquire a policy both ``close\" to the behaviors\ncontained in the dataset and sufficiently improved over them, which requires a\ntrade-off between two possibly conflicting targets. In this paper, we propose a\nnovel approach, which we refer to as adaptive behavior regularization (ABR), to\nbalance this critical trade-off. By simply utilizing a sample-based\nregularization, ABR enables the policy to adaptively adjust its optimization\nobjective between cloning and improving over the policy used to generate the\ndataset. In the evaluation on D4RL datasets, a widely adopted benchmark for\noffline reinforcement learning, ABR can achieve improved or competitive\nperformance compared to existing state-of-the-art algorithms.", + "authors": "Yunfan Zhou, Xijun Li, Qingyu Qu", + "published": "2022-11-15", + "updated": "2022-11-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2310.05422v1", + "title": "Reward-Consistent Dynamics Models are Strongly Generalizable for Offline Reinforcement Learning", + "abstract": "Learning a precise dynamics model can be crucial for offline reinforcement\nlearning, which, unfortunately, has been found to be quite challenging.\nDynamics models that are learned by fitting historical transitions often\nstruggle to generalize to unseen transitions. In this study, we identify a\nhidden but pivotal factor termed dynamics reward that remains consistent across\ntransitions, offering a pathway to better generalization. Therefore, we propose\nthe idea of reward-consistent dynamics models: any trajectory generated by the\ndynamics model should maximize the dynamics reward derived from the data. We\nimplement this idea as the MOREC (Model-based Offline reinforcement learning\nwith Reward Consistency) method, which can be seamlessly integrated into\nprevious offline model-based reinforcement learning (MBRL) methods. MOREC\nlearns a generalizable dynamics reward function from offline data, which is\nsubsequently employed as a transition filter in any offline MBRL method: when\ngenerating transitions, the dynamics model generates a batch of transitions and\nselects the one with the highest dynamics reward value. 
On a synthetic task, we\nvisualize that MOREC has a strong generalization ability and can surprisingly\nrecover some distant unseen transitions. On 21 offline tasks in D4RL and NeoRL\nbenchmarks, MOREC improves the previous state-of-the-art performance by a\nsignificant margin, i.e., 4.6% on D4RL tasks and 25.9% on NeoRL tasks. Notably,\nMOREC is the first method that can achieve above 95% online RL performance in 6\nout of 12 D4RL tasks and 3 out of 9 NeoRL tasks.", + "authors": "Fan-Ming Luo, Tian Xu, Xingchen Cao, Yang Yu", + "published": "2023-10-09", + "updated": "2023-10-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.05546v1", + "title": "Offline Actor-Critic Reinforcement Learning Scales to Large Models", + "abstract": "We show that offline actor-critic reinforcement learning can scale to large\nmodels - such as transformers - and follows similar scaling laws as supervised\nlearning. We find that offline actor-critic algorithms can outperform strong,\nsupervised, behavioral cloning baselines for multi-task training on a large\ndataset containing both sub-optimal and expert behavior on 132 continuous\ncontrol tasks. We introduce a Perceiver-based actor-critic model and elucidate\nthe key model features needed to make offline RL work with self- and\ncross-attention modules. Overall, we find that: i) simple offline actor critic\nalgorithms are a natural choice for gradually moving away from the currently\npredominant paradigm of behavioral cloning, and ii) via offline RL it is\npossible to learn multi-task policies that master many domains simultaneously,\nincluding real robotics tasks, from sub-optimal demonstrations or\nself-generated data.", + "authors": "Jost Tobias Springenberg, Abbas Abdolmaleki, Jingwei Zhang, Oliver Groth, Michael Bloesch, Thomas Lampe, Philemon Brakel, Sarah Bechtle, Steven Kapturowski, Roland Hafner, Nicolas Heess, Martin Riedmiller", + "published": "2024-02-08", + "updated": "2024-02-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2110.10905v1", + "title": "Efficient Robotic Manipulation Through Offline-to-Online Reinforcement Learning and Goal-Aware State Information", + "abstract": "End-to-end learning robotic manipulation with high data efficiency is one of\nthe key challenges in robotics. The latest methods that utilize human\ndemonstration data and unsupervised representation learning has proven to be a\npromising direction to improve RL learning efficiency. The use of demonstration\ndata also allows \"warming-up\" the RL policies using offline data with imitation\nlearning or the recently emerged offline reinforcement learning algorithms.\nHowever, existing works often treat offline policy learning and online\nexploration as two separate processes, which are often accompanied by severe\nperformance drop during the offline-to-online transition. Furthermore, many\nrobotic manipulation tasks involve complex sub-task structures, which are very\nchallenging to be solved in RL with sparse reward. In this work, we propose a\nunified offline-to-online RL framework that resolves the transition performance\ndrop issue. 
Additionally, we introduce goal-aware state information to the RL\nagent, which can greatly reduce task complexity and accelerate policy learning.\nCombined with an advanced unsupervised representation learning module, our\nframework achieves great training efficiency and performance compared with the\nstate-of-the-art methods in multiple robotic manipulation tasks.", + "authors": "Jin Li, Xianyuan Zhan, Zixu Xiao, Guyue Zhou", + "published": "2021-10-21", + "updated": "2021-10-21", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.12755v1", + "title": "Beyond OOD State Actions: Supported Cross-Domain Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) aims to learn a policy using only\npre-collected and fixed data. Although avoiding the time-consuming online\ninteractions in RL, it poses challenges for out-of-distribution (OOD) state\nactions and often suffers from data inefficiency for training. Despite many\nefforts being devoted to addressing OOD state actions, the latter (data\ninefficiency) receives little attention in offline RL. To address this, this\npaper proposes the cross-domain offline RL, which assumes offline data\nincorporate additional source-domain data from varying transition dynamics\n(environments), and expects it to contribute to the offline data efficiency. To\ndo so, we identify a new challenge of OOD transition dynamics, beyond the\ncommon OOD state actions issue, when utilizing cross-domain offline data. Then,\nwe propose our method BOSA, which employs two support-constrained objectives to\naddress the above OOD issues. Through extensive experiments in the cross-domain\noffline RL setting, we demonstrate BOSA can greatly improve offline data\nefficiency: using only 10\\% of the target data, BOSA could achieve {74.4\\%} of\nthe SOTA offline RL performance that uses 100\\% of the target data.\nAdditionally, we also show BOSA can be effortlessly plugged into model-based\noffline RL and noising data augmentation techniques (used for generating\nsource-domain data), which naturally avoids the potential dynamics mismatch\nbetween target-domain data and newly generated source-domain data.", + "authors": "Jinxin Liu, Ziqi Zhang, Zhenyu Wei, Zifeng Zhuang, Yachen Kang, Sibo Gai, Donglin Wang", + "published": "2023-06-22", + "updated": "2023-06-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.07344v1", + "title": "Measurement Scheduling for ICU Patients with Offline Reinforcement Learning", + "abstract": "Scheduling laboratory tests for ICU patients presents a significant\nchallenge. Studies show that 20-40% of lab tests ordered in the ICU are\nredundant and could be eliminated without compromising patient safety. Prior\nwork has leveraged offline reinforcement learning (Offline-RL) to find optimal\npolicies for ordering lab tests based on patient information. However, new ICU\npatient datasets have since been released, and various advancements have been\nmade in Offline-RL methods. In this study, we first introduce a preprocessing\npipeline for the newly-released MIMIC-IV dataset geared toward time-series\ntasks. We then explore the efficacy of state-of-the-art Offline-RL methods in\nidentifying better policies for ICU patient lab test scheduling. 
Besides\nassessing methodological performance, we also discuss the overall suitability\nand practicality of using Offline-RL frameworks for scheduling laboratory tests\nin ICU settings.", + "authors": "Zongliang Ji, Anna Goldenberg, Rahul G. Krishnan", + "published": "2024-02-12", + "updated": "2024-02-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2210.00750v2", + "title": "Offline Reinforcement Learning with Differentiable Function Approximation is Provably Efficient", + "abstract": "Offline reinforcement learning, which aims at optimizing sequential\ndecision-making strategies with historical data, has been extensively applied\nin real-life applications. State-Of-The-Art algorithms usually leverage\npowerful function approximators (e.g. neural networks) to alleviate the sample\ncomplexity hurdle for better empirical performances. Despite the successes, a\nmore systematic understanding of the statistical complexity for function\napproximation remains lacking. Towards bridging the gap, we take a step by\nconsidering offline reinforcement learning with differentiable function class\napproximation (DFA). This function class naturally incorporates a wide range of\nmodels with nonlinear/nonconvex structures. Most importantly, we show offline\nRL with differentiable function approximation is provably efficient by\nanalyzing the pessimistic fitted Q-learning (PFQL) algorithm, and our results\nprovide the theoretical basis for understanding a variety of practical\nheuristics that rely on Fitted Q-Iteration style design. In addition, we\nfurther improve our guarantee with a tighter instance-dependent\ncharacterization. We hope our work could draw interest in studying\nreinforcement learning with differentiable function approximation beyond the\nscope of current research.", + "authors": "Ming Yin, Mengdi Wang, Yu-Xiang Wang", + "published": "2022-10-03", + "updated": "2022-11-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2008.06043v3", + "title": "Offline Meta-Reinforcement Learning with Advantage Weighting", + "abstract": "This paper introduces the offline meta-reinforcement learning (offline\nmeta-RL) problem setting and proposes an algorithm that performs well in this\nsetting. Offline meta-RL is analogous to the widely successful supervised\nlearning strategy of pre-training a model on a large batch of fixed,\npre-collected data (possibly from various tasks) and fine-tuning the model to a\nnew task with relatively little data. That is, in offline meta-RL, we\nmeta-train on fixed, pre-collected data from several tasks in order to adapt to\na new task with a very small amount (less than 5 trajectories) of data from the\nnew task. By nature of being offline, algorithms for offline meta-RL can\nutilize the largest possible pool of training data available and eliminate\npotentially unsafe or costly data collection during meta-training. This setting\ninherits the challenges of offline RL, but it differs significantly because\noffline RL does not generally consider a) transfer to new tasks or b) limited\ndata from the test task, both of which we face in offline meta-RL. 
Targeting\nthe offline meta-RL setting, we propose Meta-Actor Critic with Advantage\nWeighting (MACAW), an optimization-based meta-learning algorithm that uses\nsimple, supervised regression objectives for both the inner and outer loop of\nmeta-training. On offline variants of common meta-RL benchmarks, we empirically\nfind that this approach enables fully offline meta-reinforcement learning and\nachieves notable gains over prior methods.", + "authors": "Eric Mitchell, Rafael Rafailov, Xue Bin Peng, Sergey Levine, Chelsea Finn", + "published": "2020-08-13", + "updated": "2021-07-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2106.09119v2", + "title": "Behavioral Priors and Dynamics Models: Improving Performance and Domain Transfer in Offline RL", + "abstract": "Offline Reinforcement Learning (RL) aims to extract near-optimal policies\nfrom imperfect offline data without additional environment interactions.\nExtracting policies from diverse offline datasets has the potential to expand\nthe range of applicability of RL by making the training process safer, faster,\nand more streamlined. We investigate how to improve the performance of offline\nRL algorithms, its robustness to the quality of offline data, as well as its\ngeneralization capabilities. To this end, we introduce Offline Model-based RL\nwith Adaptive Behavioral Priors (MABE). Our algorithm is based on the finding\nthat dynamics models, which support within-domain generalization, and\nbehavioral priors, which support cross-domain generalization, are\ncomplementary. When combined together, they substantially improve the\nperformance and generalization of offline RL policies. In the widely studied\nD4RL offline RL benchmark, we find that MABE achieves higher average\nperformance compared to prior model-free and model-based algorithms. In\nexperiments that require cross-domain generalization, we find that MABE\noutperforms prior methods. Our website is available at\nhttps://sites.google.com/berkeley.edu/mabe .", + "authors": "Catherine Cang, Aravind Rajeswaran, Pieter Abbeel, Michael Laskin", + "published": "2021-06-16", + "updated": "2021-06-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2404.10393v1", + "title": "Offline Trajectory Generalization for Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) aims to learn policies from static\ndatasets of previously collected trajectories. Existing methods for offline RL\neither constrain the learned policy to the support of offline data or utilize\nmodel-based virtual environments to generate simulated rollouts. However, these\nmethods suffer from (i) poor generalization to unseen states; and (ii) trivial\nimprovement from low-qualified rollout simulation. In this paper, we propose\noffline trajectory generalization through world transformers for offline\nreinforcement learning (OTTO). Specifically, we use casual Transformers, a.k.a.\nWorld Transformers, to predict state dynamics and the immediate reward. Then we\npropose four strategies to use World Transformers to generate high-rewarded\ntrajectory simulation by perturbing the offline data. Finally, we jointly use\noffline data with simulated data to train an offline RL algorithm. 
OTTO serves\nas a plug-in module and can be integrated with existing offline RL methods to\nenhance them with better generalization capability of transformers and\nhigh-rewarded data augmentation. Conducting extensive experiments on D4RL\nbenchmark datasets, we verify that OTTO significantly outperforms\nstate-of-the-art offline RL methods.", + "authors": "Ziqi Zhao, Zhaochun Ren, Liu Yang, Fajie Yuan, Pengjie Ren, Zhumin Chen, jun Ma, Xin Xin", + "published": "2024-04-16", + "updated": "2024-04-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2112.02845v3", + "title": "Offline Pre-trained Multi-Agent Decision Transformer: One Big Sequence Model Tackles All SMAC Tasks", + "abstract": "Offline reinforcement learning leverages previously-collected offline\ndatasets to learn optimal policies with no necessity to access the real\nenvironment. Such a paradigm is also desirable for multi-agent reinforcement\nlearning (MARL) tasks, given the increased interactions among agents and with\nthe environment. Yet, in MARL, the paradigm of offline pre-training with online\nfine-tuning has not been studied, nor are datasets or benchmarks for offline MARL\nresearch available. In this paper, we facilitate the research by providing\nlarge-scale datasets, and use them to examine the usage of the Decision\nTransformer in the context of MARL. We investigate the generalisation of MARL\noffline pre-training in the following three aspects: 1) between single agents\nand multiple agents, 2) from offline pretraining to the online fine-tuning, and\n3) to that of multiple downstream tasks with few-shot and zero-shot\ncapabilities. We start by introducing the first offline MARL dataset with\ndiverse quality levels based on the StarCraftII environment, and then propose\nthe novel architecture of multi-agent decision transformer (MADT) for effective\noffline learning. MADT leverages transformer's modelling ability of sequence\nmodelling and integrates it seamlessly with both offline and online MARL tasks.\nA crucial benefit of MADT is that it learns generalisable policies that can\ntransfer between different types of agents under different task scenarios. On\nStarCraft II offline dataset, MADT outperforms the state-of-the-art offline RL\nbaselines. When applied to online tasks, the pre-trained MADT significantly\nimproves sample efficiency, and enjoys strong performance in both few-shot and\nzero-shot cases. To our best knowledge, this is the first work that studies and\ndemonstrates the effectiveness of offline pre-trained models in terms of sample\nefficiency and generalisability enhancements in MARL.", + "authors": "Linghui Meng, Muning Wen, Yaodong Yang, Chenyang Le, Xiyun Li, Weinan Zhang, Ying Wen, Haifeng Zhang, Jun Wang, Bo Xu", + "published": "2021-12-06", + "updated": "2022-06-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.MA" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2107.01757v1", + "title": "The Least Restriction for Offline Reinforcement Learning", + "abstract": "Many practical applications of reinforcement learning (RL) constrain the\nagent to learn from a fixed offline dataset of logged interactions, which has\nalready been gathered, without offering further possibility for data\ncollection.
However, commonly used off-policy RL algorithms, such as the Deep Q\nNetwork and the Deep Deterministic Policy Gradient, are incapable of learning\nwithout data correlated to the distribution under the current policy, making\nthem ineffective for this offline setting. As the first step towards useful\noffline RL algorithms, we analyze the reason for instability in standard\noff-policy RL algorithms: it is due to the bootstrapping error. The key to\navoiding this error is ensuring that the agent's action space does not go out\nof the fixed offline dataset. Based on this consideration, a new offline RL\nframework, the Least Restriction (LR), is proposed in this paper. The LR\nregards selecting an action as taking a sample from the probability\ndistribution. It merely sets a small limit on action selection, which not only\navoids the action being out of the offline dataset but also removes all the\nunreasonable restrictions in earlier approaches (e.g. Batch-Constrained Deep\nQ-Learning). Furthermore, we demonstrate that the LR is able to learn\nrobustly from different offline datasets, including random and suboptimal\ndemonstrations, on a range of practical control tasks.", + "authors": "Zizhou Su", + "published": "2021-07-05", + "updated": "2021-07-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2312.09844v2", + "title": "Small Dataset, Big Gains: Enhancing Reinforcement Learning by Offline Pre-Training with Model Based Augmentation", + "abstract": "Offline reinforcement learning leverages pre-collected datasets of\ntransitions to train policies. It can serve as effective initialization for\nonline algorithms, enhancing sample efficiency and speeding up convergence.\nHowever, when such datasets are limited in size and quality, offline\npre-training can produce sub-optimal policies and lead to degraded online\nreinforcement learning performance. In this paper we propose a model-based data\naugmentation strategy to maximize the benefits of offline reinforcement\nlearning pre-training and reduce the scale of data needed to be effective. Our\napproach leverages a world model of the environment trained on the offline\ndataset to augment states during offline pre-training. We evaluate our approach\non a variety of MuJoCo robotic tasks and our results show it can jump-start\nonline fine-tuning and substantially reduce - in some cases by an order of\nmagnitude - the required number of environment interactions.", + "authors": "Girolamo Macaluso, Alessandro Sestini, Andrew D. Bagdanov", + "published": "2023-12-15", + "updated": "2023-12-19", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2307.11620v2", + "title": "Offline Multi-Agent Reinforcement Learning with Implicit Global-to-Local Value Regularization", + "abstract": "Offline reinforcement learning (RL) has received considerable attention in\nrecent years due to its attractive capability of learning policies from offline\ndatasets without environmental interactions. Despite some success in the\nsingle-agent setting, offline multi-agent RL (MARL) remains to be a challenge.\nThe large joint state-action space and the coupled multi-agent behaviors pose\nextra complexities for offline policy optimization.
Most existing offline MARL\nstudies simply apply offline data-related regularizations on individual agents,\nwithout fully considering the multi-agent system at the global level. In this\nwork, we present OMIGA, a new offline multi-agent RL algorithm with implicit\nglobal-to-local value regularization. OMIGA provides a principled framework to\nconvert global-level value regularization into equivalent implicit local value\nregularizations and simultaneously enables in-sample learning, thus elegantly\nbridging multi-agent value decomposition and policy learning with offline\nregularizations. Based on comprehensive experiments on the offline multi-agent\nMuJoCo and StarCraft II micro-management tasks, we show that OMIGA achieves\nsuperior performance over the state-of-the-art offline MARL methods in almost\nall tasks.", + "authors": "Xiangsen Wang, Haoran Xu, Yinan Zheng, Xianyuan Zhan", + "published": "2023-07-21", + "updated": "2023-11-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.MA" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2404.12639v1", + "title": "Single-Task Continual Offline Reinforcement Learning", + "abstract": "In this paper, we study the continual learning problem of single-task offline\nreinforcement learning. In the past, continual reinforcement learning usually\nonly dealt with multitasking, that is, learning multiple related or unrelated\ntasks in a row, but once each task was learned, it was not relearned,\nbut only used in subsequent processes. However, offline reinforcement learning\ntasks require continually learning multiple different datasets for the\nsame task. Existing algorithms will try their best to achieve the best results\nin each offline dataset they have learned and the skills of the network will\noverwrite the high-quality datasets that have been learned after learning the\nsubsequent poor datasets. On the other hand, if too much emphasis is placed on\nstability, the network will learn the subsequent better dataset after learning\nthe poor offline dataset, and the problem of insufficient plasticity and\nnon-learning will occur. How to design a strategy that can always preserve the\nbest performance for each state in the data that has been learned is a new\nchallenge and the focus of this study. Therefore, this study proposes a new\nalgorithm, called Ensemble Offline Reinforcement Learning Based on Experience\nReplay, which introduces multiple value networks to learn the same dataset and\njudge whether the strategy has been learned by the discrete degree of the value\nnetwork, to improve the performance of the network in single-task offline\nreinforcement learning.", + "authors": "Sibo Gai, Donglin Wang", + "published": "2024-04-19", + "updated": "2024-04-19", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "I.2.6" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2210.05922v1", + "title": "A Unified Framework for Alternating Offline Model Training and Policy Learning", + "abstract": "In offline model-based reinforcement learning (offline MBRL), we learn a\ndynamic model from historically collected data, and subsequently utilize the\nlearned model and fixed datasets for policy learning, without further\ninteracting with the environment.
Offline MBRL algorithms can improve the\nefficiency and stability of policy learning over the model-free algorithms.\nHowever, in most of the existing offline MBRL algorithms, the learning\nobjectives for the dynamic models and the policies are isolated from each\nother. Such an objective mismatch may lead to inferior performance of the\nlearned agents. In this paper, we address this issue by developing an iterative\noffline MBRL framework, where we maximize a lower bound of the true expected\nreturn, by alternating between dynamic-model training and policy learning. With\nthe proposed unified model-policy learning framework, we achieve competitive\nperformance on a wide range of continuous-control offline reinforcement\nlearning datasets. Source code is publicly released.", + "authors": "Shentao Yang, Shujian Zhang, Yihao Feng, Mingyuan Zhou", + "published": "2022-10-12", + "updated": "2022-10-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2307.15690v1", + "title": "Benchmarking Offline Reinforcement Learning on Real-Robot Hardware", + "abstract": "Learning policies from previously recorded data is a promising direction for\nreal-world robotics tasks, as online learning is often infeasible. Dexterous\nmanipulation in particular remains an open problem in its general form. The\ncombination of offline reinforcement learning with large diverse datasets,\nhowever, has the potential to lead to a breakthrough in this challenging domain\nanalogously to the rapid progress made in supervised learning in recent years.\nTo coordinate the efforts of the research community toward tackling this\nproblem, we propose a benchmark including: i) a large collection of data for\noffline learning from a dexterous manipulation platform on two tasks, obtained\nwith capable RL agents trained in simulation; ii) the option to execute learned\npolicies on a real-world robotic system and a simulation for efficient\ndebugging. We evaluate prominent open-sourced offline reinforcement learning\nalgorithms on the datasets and provide a reproducible experimental setup for\noffline reinforcement learning on real systems.", + "authors": "Nico G\u00fcrtler, Sebastian Blaes, Pavel Kolev, Felix Widmaier, Manuel W\u00fcthrich, Stefan Bauer, Bernhard Sch\u00f6lkopf, Georg Martius", + "published": "2023-07-28", + "updated": "2023-07-28", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2201.05433v1", + "title": "Comparing Model-free and Model-based Algorithms for Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) Algorithms are often designed with\nenvironments such as MuJoCo in mind, in which the planning horizon is extremely\nlong and no noise exists. We compare model-free, model-based, as well as hybrid\noffline RL approaches on various industrial benchmark (IB) datasets to test the\nalgorithms in settings closer to real world problems, including complex noise\nand partially observable states. 
We find that on the IB, hybrid approaches face\nsevere difficulties and that simpler algorithms, such as rollout based\nalgorithms or model-free algorithms with simpler regularizers perform best on\nthe datasets.", + "authors": "Phillip Swazinna, Steffen Udluft, Daniel Hein, Thomas Runkler", + "published": "2022-01-14", + "updated": "2022-01-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2205.09550v1", + "title": "Data Valuation for Offline Reinforcement Learning", + "abstract": "The success of deep reinforcement learning (DRL) hinges on the availability\nof training data, which is typically obtained via a large number of environment\ninteractions. In many real-world scenarios, costs and risks are associated with\ngathering these data. The field of offline reinforcement learning addresses\nthese issues through outsourcing the collection of data to a domain expert or a\ncarefully monitored program and subsequently searching for a batch-constrained\noptimal policy. With the emergence of data markets, an alternative to\nconstructing a dataset in-house is to purchase external data. However, while\nstate-of-the-art offline reinforcement learning approaches have shown a lot of\npromise, they currently rely on carefully constructed datasets that are well\naligned with the intended target domains. This raises questions regarding the\ntransferability and robustness of an offline reinforcement learning agent\ntrained on externally acquired data. In this paper, we empirically evaluate the\nability of the current state-of-the-art offline reinforcement learning\napproaches to coping with the source-target domain mismatch within two MuJoCo\nenvironments, finding that current state-of-the-art offline reinforcement\nlearning algorithms underperform in the target domain. To address this, we\npropose data valuation for offline reinforcement learning (DVORL), which allows\nus to identify relevant and high-quality transitions, improving the performance\nand transferability of policies learned by offline reinforcement learning\nalgorithms. The results show that our method outperforms offline reinforcement\nlearning baselines on two MuJoCo environments.", + "authors": "Amir Abolfazli, Gregory Palmer, Daniel Kudenko", + "published": "2022-05-19", + "updated": "2022-05-19", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.14629v1", + "title": "Improving Zero-shot Generalization in Offline Reinforcement Learning using Generalized Similarity Functions", + "abstract": "Reinforcement learning (RL) agents are widely used for solving complex\nsequential decision making tasks, but still exhibit difficulty in generalizing\nto scenarios not seen during training. While prior online approaches\ndemonstrated that using additional signals beyond the reward function can lead\nto better generalization capabilities in RL agents, i.e. using self-supervised\nlearning (SSL), they struggle in the offline RL setting, i.e. learning from a\nstatic dataset. We show that performance of online algorithms for\ngeneralization in RL can be hindered in the offline setting due to poor\nestimation of similarity between observations. 
We propose a new\ntheoretically-motivated framework called Generalized Similarity Functions\n(GSF), which uses contrastive learning to train an offline RL agent to\naggregate observations based on the similarity of their expected future\nbehavior, where we quantify this similarity using \\emph{generalized value\nfunctions}. We show that GSF is general enough to recover existing SSL\nobjectives while also improving zero-shot generalization performance on a\ncomplex offline RL benchmark, offline Procgen.", + "authors": "Bogdan Mazoure, Ilya Kostrikov, Ofir Nachum, Jonathan Tompson", + "published": "2021-11-29", + "updated": "2021-11-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.09712v1", + "title": "Semi-Offline Reinforcement Learning for Optimized Text Generation", + "abstract": "In reinforcement learning (RL), there are two major settings for interacting\nwith the environment: online and offline. Online methods explore the\nenvironment at significant time cost, and offline methods efficiently obtain\nreward signals by sacrificing exploration capability. We propose semi-offline\nRL, a novel paradigm that smoothly transits from offline to online settings,\nbalances exploration capability and training cost, and provides a theoretical\nfoundation for comparing different RL settings. Based on the semi-offline\nformulation, we present the RL setting that is optimal in terms of optimization\ncost, asymptotic error, and overfitting error bound. Extensive experiments show\nthat our semi-offline approach is efficient and yields comparable or often\nbetter performance compared with state-of-the-art methods.", + "authors": "Changyu Chen, Xiting Wang, Yiqiao Jin, Victor Ye Dong, Li Dong, Jie Cao, Yi Liu, Rui Yan", + "published": "2023-06-16", + "updated": "2023-06-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2011.14379v1", + "title": "Offline Reinforcement Learning Hands-On", + "abstract": "Offline Reinforcement Learning (RL) aims to turn large datasets into powerful\ndecision-making engines without any online interactions with the environment.\nThis great promise has motivated a large amount of research that hopes to\nreplicate the success RL has experienced in simulation settings. This work\nambitions to reflect upon these efforts from a practitioner viewpoint. We start\nby discussing the dataset properties that we hypothesise can characterise the\ntype of offline methods that will be the most successful. We then verify these\nclaims through a set of experiments and designed datasets generated from\nenvironments with both discrete and continuous action spaces. We experimentally\nvalidate that diversity and high-return examples in the data are crucial to the\nsuccess of offline RL and show that behavioural cloning remains a strong\ncontender compared to its contemporaries. 
Overall, this work stands as a\ntutorial to help people build their intuition on today's offline RL methods and\ntheir applicability.", + "authors": "Louis Monier, Jakub Kmec, Alexandre Laterre, Thomas Pierrot, Valentin Courgeau, Olivier Sigaud, Karim Beguir", + "published": "2020-11-29", + "updated": "2020-11-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.18617v1", + "title": "ELA: Exploited Level Augmentation for Offline Learning in Zero-Sum Games", + "abstract": "Offline learning has become widely used due to its ability to derive\neffective policies from offline datasets gathered by expert demonstrators\nwithout interacting with the environment directly. Recent research has explored\nvarious ways to enhance offline learning efficiency by considering the\ncharacteristics (e.g., expertise level or multiple demonstrators) of the\ndataset. However, a different approach is necessary in the context of zero-sum\ngames, where outcomes vary significantly based on the strategy of the opponent.\nIn this study, we introduce a novel approach that uses unsupervised learning\ntechniques to estimate the exploited level of each trajectory from the offline\ndataset of zero-sum games made by diverse demonstrators. Subsequently, we\nincorporate the estimated exploited level into the offline learning to maximize\nthe influence of the dominant strategy. Our method enables interpretable\nexploited level estimation in multiple zero-sum games and effectively\nidentifies dominant strategy data. Also, our exploited level augmented offline\nlearning significantly enhances the original offline learning algorithms\nincluding imitation learning and offline reinforcement learning for zero-sum\ngames.", + "authors": "Shiqi Lei, Kanghoon Lee, Linjing Li, Jinkyoo Park, Jiachen Li", + "published": "2024-02-28", + "updated": "2024-02-28", + "primary_cat": "cs.GT", + "cats": [ + "cs.GT", + "cs.AI", + "cs.LG", + "cs.MA" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2311.03351v4", + "title": "Uni-O4: Unifying Online and Offline Deep Reinforcement Learning with Multi-Step On-Policy Optimization", + "abstract": "Combining offline and online reinforcement learning (RL) is crucial for\nefficient and safe learning. However, previous approaches treat offline and\nonline learning as separate procedures, resulting in redundant designs and\nlimited performance. We ask: Can we achieve straightforward yet effective\noffline and online learning without introducing extra conservatism or\nregularization? In this study, we propose Uni-o4, which utilizes an on-policy\nobjective for both offline and online learning. Owning to the alignment of\nobjectives in two phases, the RL agent can transfer between offline and online\nlearning seamlessly. This property enhances the flexibility of the learning\nparadigm, allowing for arbitrary combinations of pretraining, fine-tuning,\noffline, and online learning. In the offline phase, specifically, Uni-o4\nleverages diverse ensemble policies to address the mismatch issues between the\nestimated behavior policy and the offline dataset. Through a simple offline\npolicy evaluation (OPE) approach, Uni-o4 can achieve multi-step policy\nimprovement safely. We demonstrate that by employing the method above, the\nfusion of these two paradigms can yield superior offline initialization as well\nas stable and rapid online fine-tuning capabilities. 
Through real-world robot\ntasks, we highlight the benefits of this paradigm for rapid deployment in\nchallenging, previously unseen real-world environments. Additionally, through\ncomprehensive evaluations using numerous simulated benchmarks, we substantiate\nthat our method achieves state-of-the-art performance in both offline and\noffline-to-online fine-tuning learning. Our website:\nhttps://lei-kun.github.io/uni-o4/ .", + "authors": "Kun Lei, Zhengmao He, Chenhao Lu, Kaizhe Hu, Yang Gao, Huazhe Xu", + "published": "2023-11-06", + "updated": "2024-03-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2107.06106v2", + "title": "Conservative Offline Distributional Reinforcement Learning", + "abstract": "Many reinforcement learning (RL) problems in practice are offline, learning\npurely from observational data. A key challenge is how to ensure the learned\npolicy is safe, which requires quantifying the risk associated with different\nactions. In the online setting, distributional RL algorithms do so by learning\nthe distribution over returns (i.e., cumulative rewards) instead of the\nexpected return; beyond quantifying risk, they have also been shown to learn\nbetter representations for planning. We propose Conservative Offline\nDistributional Actor Critic (CODAC), an offline RL algorithm suitable for both\nrisk-neutral and risk-averse domains. CODAC adapts distributional RL to the\noffline setting by penalizing the predicted quantiles of the return for\nout-of-distribution actions. We prove that CODAC learns a conservative return\ndistribution -- in particular, for finite MDPs, CODAC converges to an uniform\nlower bound on the quantiles of the return distribution; our proof relies on a\nnovel analysis of the distributional Bellman operator. In our experiments, on\ntwo challenging robot navigation tasks, CODAC successfully learns risk-averse\npolicies using offline data collected purely from risk-neutral agents.\nFurthermore, CODAC is state-of-the-art on the D4RL MuJoCo benchmark in terms of\nboth expected and risk-sensitive performance.", + "authors": "Yecheng Jason Ma, Dinesh Jayaraman, Osbert Bastani", + "published": "2021-07-12", + "updated": "2021-10-26", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2010.11895v1", + "title": "What are the Statistical Limits of Offline RL with Linear Function Approximation?", + "abstract": "Offline reinforcement learning seeks to utilize offline (observational) data\nto guide the learning of (causal) sequential decision making strategies. The\nhope is that offline reinforcement learning coupled with function approximation\nmethods (to deal with the curse of dimensionality) can provide a means to help\nalleviate the excessive sample complexity burden in modern sequential decision\nmaking problems. However, the extent to which this broader approach can be\neffective is not well understood, where the literature largely consists of\nsufficient conditions.\n This work focuses on the basic question of what are necessary\nrepresentational and distributional conditions that permit provable\nsample-efficient offline reinforcement learning. 
Perhaps surprisingly, our main\nresult shows that even if: i) we have realizability in that the true value\nfunction of \\emph{every} policy is linear in a given set of features and 2) our\noff-policy data has good coverage over all features (under a strong spectral\ncondition), then any algorithm still (information-theoretically) requires a\nnumber of offline samples that is exponential in the problem horizon in order\nto non-trivially estimate the value of \\emph{any} given policy. Our results\nhighlight that sample-efficient offline policy evaluation is simply not\npossible unless significantly stronger conditions hold; such conditions include\neither having low distribution shift (where the offline data distribution is\nclose to the distribution of the policy to be evaluated) or significantly\nstronger representational conditions (beyond realizability).", + "authors": "Ruosong Wang, Dean P. Foster, Sham M. Kakade", + "published": "2020-10-22", + "updated": "2020-10-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "math.OC", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2201.13425v3", + "title": "Don't Change the Algorithm, Change the Data: Exploratory Data for Offline Reinforcement Learning", + "abstract": "Recent progress in deep learning has relied on access to large and diverse\ndatasets. Such data-driven progress has been less evident in offline\nreinforcement learning (RL), because offline RL data is usually collected to\noptimize specific target tasks limiting the data's diversity. In this work, we\npropose Exploratory data for Offline RL (ExORL), a data-centric approach to\noffline RL. ExORL first generates data with unsupervised reward-free\nexploration, then relabels this data with a downstream reward before training a\npolicy with offline RL. We find that exploratory data allows vanilla off-policy\nRL algorithms, without any offline-specific modifications, to outperform or\nmatch state-of-the-art offline RL algorithms on downstream tasks. Our findings\nsuggest that data generation is as important as algorithmic advances for\noffline RL and hence requires careful consideration from the community. Code\nand data can be found at https://github.com/denisyarats/exorl .", + "authors": "Denis Yarats, David Brandfonbrener, Hao Liu, Michael Laskin, Pieter Abbeel, Alessandro Lazaric, Lerrel Pinto", + "published": "2022-01-31", + "updated": "2022-04-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2201.10070v1", + "title": "MOORe: Model-based Offline-to-Online Reinforcement Learning", + "abstract": "With the success of offline reinforcement learning (RL), offline trained RL\npolicies have the potential to be further improved when deployed online. A\nsmooth transfer of the policy matters in safe real-world deployment. Besides,\nfast adaptation of the policy plays a vital role in practical online\nperformance improvement. To tackle these challenges, we propose a simple yet\nefficient algorithm, Model-based Offline-to-Online Reinforcement learning\n(MOORe), which employs a prioritized sampling scheme that can dynamically\nadjust the offline and online data for smooth and efficient online adaptation\nof the policy. 
We provide a theoretical foundation for our algorithms design.\nExperiment results on the D4RL benchmark show that our algorithm smoothly\ntransfers from offline to online stages while enabling sample-efficient online\nadaption, and also significantly outperforms existing methods.", + "authors": "Yihuan Mao, Chao Wang, Bin Wang, Chongjie Zhang", + "published": "2022-01-25", + "updated": "2022-01-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + } + ], + [ + { + "url": "http://arxiv.org/abs/2304.08488v1", + "title": "Affordances from Human Videos as a Versatile Representation for Robotics", + "abstract": "Building a robot that can understand and learn to interact by watching humans\nhas inspired several vision problems. However, despite some successful results\non static datasets, it remains unclear how current models can be used on a\nrobot directly. In this paper, we aim to bridge this gap by leveraging videos\nof human interactions in an environment centric manner. Utilizing internet\nvideos of human behavior, we train a visual affordance model that estimates\nwhere and how in the scene a human is likely to interact. The structure of\nthese behavioral affordances directly enables the robot to perform many complex\ntasks. We show how to seamlessly integrate our affordance model with four robot\nlearning paradigms including offline imitation learning, exploration,\ngoal-conditioned learning, and action parameterization for reinforcement\nlearning. We show the efficacy of our approach, which we call VRB, across 4\nreal world environments, over 10 different tasks, and 2 robotic platforms\noperating in the wild. Results, visualizations and videos at\nhttps://robo-affordances.github.io/", + "authors": "Shikhar Bahl, Russell Mendonca, Lili Chen, Unnat Jain, Deepak Pathak", + "published": "2023-04-17", + "updated": "2023-04-17", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.AI", + "cs.CV", + "cs.LG", + "cs.NE" + ], + "label": "Original Paper", + "paper_cat": "Offline AND Reinforcement AND Learning", + "gt": "Affordance and Interaction Learning from Videos. Given a scene, one can predict interactions using geometrybased rules for objects via 3D scene understanding [43,78, 79, 134], estimating 3D physical attributes [8, 26, 41, 137] or through segmentation models trained on semantic interactions [102, 104], and thus require specialized datasets. More general interaction information can be learned from large human datasets [18, 19, 21, 40, 62, 67], to predict object information [30, 136] (RGB & 3D) [10], graphs [24] or environment information [28,81] such as heatmaps [39, 80]. Approaches also track human poses, especially hands [14,18,65,66,101,107,127]. Similarly, in action anticipation and human motion forecasting, high-level semantic or low level actions are predicted using visual history [1, 11,19,22,31\u201333,37,40,46\u201348,55,58,72,75,100,119,120]. Since our observations only have robot arms and no human hands, we adopt a robot-\ufb01rst formulation, only modeling the contact point and post-contact phase of interaction. Visual Robot Learning. Learning control from visual inputs directly is an important challenge. Previous works have leveraged spatial structures of convolutional networks to directly output locations for grasping and pushing from just an image of the scene [92, 130, 131], which can limit the type of tasks possible. 
It is also possible to directly learn control end-to-end [52,61] which while general, is quite sample inef\ufb01cient in the real world. It has been common to introduce some form of prior derived from human knowledge, which could take the form of corrective interactions [23, 42, 68], structured policy spaces [2,7,7,17,50,85,94,99,108,125], of\ufb02ine robotics data [25,56,57,71,97], using pretrained visual representations [84,89,106,123,124] or human demonstrations [6,15,105,108,109,113]. Learning Manipulation from Humans. Extensive work has been done on Learning from Demonstrations (LfD) where human supervision is usually provided through teleoperation (of a joystick or VR interface) [77, 115, 133] or kinesthetic teaching, where a user physically moves the robot arm [13, 16, 27, 70, 94].With both these approaches, collecting demonstrations is tedious and slow. Recently, works have shown alternate ways to provide human demonstrations, via hand pose estimation and retargeting [5, 95, 110, 112, 126] in robot hands, but are mostly restricted to tabletop setups. First and third person human demonstrations have been used to train policies directly, transferred either via a handheld gripper [87,114, 128] or using online adaptation [6]. In contrast to directly mimicking a demonstration, we learn robot-centric affordances from passive human videos that provide a great initialization for downstream robot tasks, unlike previous work which require indomain demonstrations.", + "pre_questions": [], + "main_content": "Introduction Imagine standing in a brand-new kitchen. Before taking even a single action, we already have a good understanding of how most objects should be manipulated. This understanding goes beyond semantics as we have a belief of where to hold objects and which direction to move them in, allowing us to interact with it. For instance, the oven is opened by pulling the handle downwards, the tap should be turned sideways, drawers are to be pulled outwards, and light switches are turned on with a \ufb02ick. While things don\u2019t always work as imagined and some exploration might be needed, but humans heavily rely on such visual affordances of objects to ef\ufb01ciently perform day-to-day tasks across environments [35,36]. Extracting such actionable knowledge from videos has long inspired the vision community. More recently, with improving performance on static datasets, the \ufb01eld is increasingly adopting a broader \u2018active\u2019 de\ufb01nition of vision through research in egocentric visual understanding and visual affordances from videos of human interaction. With deep learning, methods can now predict heatmaps of where a human would interact [39,80] or seg\u22c6equal contribution arXiv:2304.08488v1 [cs.RO] 17 Apr 2023 Learning Visual Affordances Deployment on Robot Trajectory Network contact point heatmap trajectory Affordance Model Affordance Model Scene encoder trajectory contact points Figure 2. VRB Overview. First, we learn an actionable representation of visual affordances from human videos: the model predicts contact points and trajectory waypoints with supervision from future frames. For robot deployment, we query the affordance model and convert its outputs to 3D actions to execute. mentation of the object being interacted with [107]. Despite being motivated by the goal of enabling downstream robotic tasks, prior methods for affordance learning are tested primarily on human video datasets with no physical robot or in-the-wild experiments. 
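As a rough illustration of the deployment step in the Figure 2 overview above (query the affordance model, then convert its pixel-space contact point and waypoints into 3D actions), the following minimal Python sketch assumes a calibrated pinhole RGB-D camera and a hypothetical affordance_model.predict() call; none of the names below are taken from the released VRB code.

# Minimal sketch: turn pixel-space affordance predictions into 3D end-effector
# targets with a calibrated RGB-D camera (pinhole back-projection).
# `affordance_model.predict` is a hypothetical interface, for illustration only.
import numpy as np

def backproject(uv, depth, K):
    """Back-project pixel (u, v) with its depth into camera-frame 3D coordinates."""
    u, v = uv
    z = float(depth[int(v), int(u)])            # depth image indexed as (row, col)
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

def affordance_to_targets(rgb, depth, K, T_cam_to_base, affordance_model):
    """Query the affordance model and lift contact point + waypoints to 3D."""
    contact_uv, waypoints_uv = affordance_model.predict(rgb)   # both in pixel space
    # Naively reuse the scene depth at each waypoint pixel; a real system would
    # refine these depths (e.g. from the grasped object or a planner).
    pts_cam = [backproject(contact_uv, depth, K)] + \
              [backproject(w, depth, K) for w in waypoints_uv]
    pts_cam_h = np.concatenate([np.array(pts_cam), np.ones((len(pts_cam), 1))], axis=1)
    pts_base = (T_cam_to_base @ pts_cam_h.T).T[:, :3]          # express in robot base frame
    return pts_base[0], pts_base[1:]    # 3D contact point, 3D post-contact waypoints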
Without integration with a robotic system, even the most basic question of how the affordance should be de\ufb01ned or represented remains unanswered, let alone evaluating its performance. On the contrary, most robot learning approaches, whether imitation or reinforcement learning, approach a new task or a new environment tabula rasa. At best, the visual representation might be pretrained on some dataset [69, 84, 96, 106, 122, 124]. However, visual representations are only a small part of the larger problem. In robotics, especially in continuous control, the state space complexity grows exponentially with actions. Thus, even with perfect perception, knowing what to do is dif\ufb01cult. Given an image, current computer vision approaches can label most of the objects, and even tell us approximately where they are but this is not suf\ufb01cient for the robot to perform the task. It also needs to know where and how to manipulate the object, and \ufb01guring this out from scratch in every new environment is virtually impossible for all but the simplest of tasks. How do we alleviate this clear gap between visual learning and robotics? In this paper, we propose to rethink visual affordances as a means to bridge vision and robotics. We argue that rich video datasets of humans interacting can offer a lot more actionable information beyond just replacing ImageNet as a pretrained visual encoder for robot learning. Particularly, human interactions are a rich source of how a wide range of objects can be held and what are useful ways to manipulate their state. However, several challenges hinder the smooth integration of vision and robotics. We group them into three parts. First, what is an actionable way to represent affordances? Second, how to learn this representation in a datadriven and scalable manner? Third, how to adapt visual affordances for deployment across robot learning paradigms? To answer the \ufb01rst question, we \ufb01nd that contact points and post-contact trajectories are excellent robot-centric representations of visual affordances, as well as modeling the inherent multi-modality of possible interactions. We make effective use of egocentric datasets in order to tackle the second question. In particular, we reformulate the data to focus on frames without humans for predicting contact points and the post-contact trajectories. To extract free supervision for this prediction, we utilize off-the-shelf tools for estimating egomotion, human pose, and hand-object interaction. Finally, we show how to seamlessly integrate these affordance priors with different kinds of robot learning paradigms. We thus call our approach Vision-Robotics Bridge (VRB) due to its core goal of bridging vision and robotics. We evaluate both the quality of our affordances and their usefulness for 4 different robotic paradigms \u2013 imitation and of\ufb02ine learning, exploration, visual goal-reaching, and using the affordance model as a parameterization for action spaces. These are studied via extensive and rigorous realworld experiments on physical robots which span across 10 real-world tasks, 4 environments, and 2 robot hardware platforms. Many of these tasks are performed in-the-wild outside of lab environments (see Figure 1). We \ufb01nd that VRB outperforms other state-of-the-art human hand-object affordance models, and enables high-performance robot learning in the wild without requiring any simulation. 
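To make the contact-point and post-contact-trajectory representation just described concrete, one possible container keeps the multi-modality explicit: K candidate contact modes with mixture weights plus a shared track of waypoints. This is purely illustrative; the field names are assumptions for the sketch, not identifiers from the paper.

# Illustrative container for a robot-agnostic (c, tau) affordance: K candidate
# contact modes (2D mixture means with weights) and post-contact waypoints.
from dataclasses import dataclass
import numpy as np

@dataclass
class Affordance:
    contact_means: np.ndarray    # (K, 2) candidate contact points in pixel space
    contact_weights: np.ndarray  # (K,)  mixture weights, summing to 1
    waypoints: np.ndarray        # (T, 2) post-contact trajectory in pixel space

    def sample_contact(self, rng=np.random):
        """Pick one interaction mode, preserving the multi-modality of possible grasps."""
        k = rng.choice(len(self.contact_weights), p=self.contact_weights)
        return self.contact_means[k]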
Finally, we also observe that our affordance model learns a good visual representation for robotics as a byproduct. We highlight that all the evaluations are performed in the real world spanning several hundred hours of robot running time which is a very large-scale evaluation in robotics. Our goal is to learn affordance priors from large-scale egocentric videos of human interaction, and then use them to expedite robot learning in the wild. This requires addressing the three questions discussed in Sec. 1 about how to best represent affordances, how to extract them and how to use them across robot learning paradigms. 3.1. Actionable Representation for Affordances Affordances are only meaningful if there is an actor to execute them. For example, a chair has a sitting affordance only if it is possible for some person to sit on it. This property makes it clear that the most natural way to extract human affordances is by watching how people interact with the world. However, what is the right object-centric representation for affordances: is it a heatmap of where the human makes contact? Is it the pre and postcondition of the object? Is it a description of the human interaction? All of these are correct answers and have been studied in prior works [43, 66, 80]. However, the affordance parameterization should be amenable to deployment on robots. If we want the robot to a priori understand how to manipulate a pan (Fig. 1, 4) without any interaction, then a seemingly simple solution is to exactly model human movement from videos [66], but this leads to a human-centric model and will not generalize well because human morphology is starkly different from that of robots. Instead, we take a firstprinciples approach driven by the needs of robot learning. Knowledge of a robot body is often known, hence reaching a point in the 3D space is feasible using motion planning [53, 59, 60]. The difficulty is in figuring out where to interact (e.g. the handle of the lid) and then how to move after the contact is made (e.g., move the lid upwards). Inspired by this, we adopt contact points and post-contact trajectories as a simple actionable representation of visual affordance that can be easily transferred to robots. We use the notation c for a contact point and \u03c4 for post-contact trajectory, both in the pixel space. Specifically, \u03c4 = f(It, ht), where It is the image at timestep t, ht is the human hand location in pixel space, and f is a learned model. We find that our affordance representation outperforms prior formulations across robots. Notably, the c and \u03c4 abstraction makes the affordance prior agnostic to the morphological differences across robots. 3.2. Learning Affordances from Egocentric Videos The next question is how to extract c and \u03c4 from human videos in a scalable data-driven manner while dealing with the presence of human body or hand in the visual input. VRB tackles this through a robot-first approach. Offline Data Collection Action Spaces k-Nearest Neighbors Behavior Cloning reward action s, a, r s, a, r s, a, r state Exploration reward action s, a, r s, a, r s, a, r state Goal-Conditioned Learning reward action s, a, r s, a, r s, a, r state Goal Images reward action s, a, r s, a, r s, a, r state 4 Initialization Deployment 1 3 2 4 2 2 3 4 12 a* a s \ud835\udf45 Figure 3. Robot Learning Paradigms : (a) Of\ufb02ine Data Collection \u2013 Used to investigate the quality of the collected data. 
(b) Exploration \u2013 The robot needs to use intrinsic rewards to improve (c) Goal-Conditioned Learning \u2013 A desired task is speci\ufb01ed via a goal image, used to provide reward. (d) Action Spaces \u2013 Reduced action spaces are easier to search and allow for discrete control. 3.2.1 Extracting Affordances from Human Videos Consider a video V , say of a person opening a door, consisting of T frames i.e. V = {I1, ..., IT }. We have a twofold objective \u2014 \ufb01nd where and when the contact happened, and estimate how the hand moved after contact was made. This is used to supervise the predictive model f\u03b8(It) that outputs contact points and post-contact trajectories. To do so, we utilize a widely-adopted hand-object detection model trained on human video data [107]. For each image It, this produces 2D bounding boxes of the hand ht, and a discrete contact variable ot. Using this information, we \ufb01lter for frames where ot indicates a contact in each video, and \ufb01nd the \ufb01rst timestep where contact occurs, tcontact. The pixel-space positions of the hand {ht}t\u2032 tcontact constitute the post-contact trajectory (\u03c4). To extract contact points c, we use the corresponding hand bounding box, and apply skin color segmentation to \ufb01nd all points at the periphery of the hand segment that intersect with the bounding box of the object in contact. This gives us a set of N contact points {ci}N, where N can differ depending on the image, object, scene and type of interaction. How should the contact points be aggregated to train our affordance model (f\u03b8)? Some options include predicting the mean of {ci}N, or randomly sampling ci. However, we seek to encourage multi-modality in the predictions, since a scene likely contains multiple possible interactions. To enable this, we \ufb01t a Gaussian mixture model (GMM) to the points. Let us de\ufb01ne a distribution over contact points to be p(c). We \ufb01t the GMM parameters (\u00b5k, \u03a3k) and weights \u03b1k. p(c) = argmax \u00b51,...,\u00b5K,\u03a31,...,\u03a3K N X i=1 K X k=1 \u03b1kN(ci|\u00b5k, \u03a3k) (1) We use these parameters of the above de\ufb01ned GMM with K clusters as targets for f\u03b8. To summarize, 1) we \ufb01nd the \ufb01rst timestep where contact occurs in the human video, tcontact 2) For c, we \ufb01t a GMM to the contact points around the hand at frame Itcontact, parameterized by \u00b5k, \u03a3k and 3) we \ufb01nd the post-contact trajectory of the 2D hand bounding box {ht}t\u2032 tcontact for \u03c4. Accounting for Camera Motion over Time: Consider a person opening a door. Not only do the person\u2019s hands move but their body and hence their head also move closer to the handle and then away from it. Therefore, we need to compensate for this egomotion of the human head/camera from time tcontact to t\u2032. We address this by using the homography matrix at timestep t, Ht to project the points back into the coordinates of the starting frame. We obtain the homography matrix by matching features between consecutive frames. We then use this to produce the transformed trajectory \u03c4 = Ht \u25e6{ht}t\u2032 tcontact. Addressing Human-Robot Visual Domain Shift: The training videos contain human body or hand in the frame but the human will not be present in downstream robotics task, generating domain shift. 
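The label-extraction pipeline of Sec. 3.2.1 above, i.e. locate the first contact frame, fit a GMM to the contact points as in Eq. (1), and undo camera egomotion with per-frame homographies, can be sketched roughly as below. scikit-learn and OpenCV stand in for the paper's exact implementation, and the hand/contact detector is treated as a given that supplies hand centers and the contact timestep.

# Rough sketch of affordance-label extraction (Sec. 3.2.1): fit a GMM to contact
# points (Eq. 1) and map the post-contact hand track back to the contact frame
# using feature-matching homographies. Detector outputs are assumed as inputs.
import cv2
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_contact_gmm(contact_points, k=3):
    """Fit mixture means/weights that serve as targets for the contact heatmaps."""
    pts = np.asarray(contact_points, dtype=np.float64)
    gmm = GaussianMixture(n_components=min(k, len(pts))).fit(pts)
    return gmm.means_, gmm.weights_

def homography(prev_frame, frame):
    """Estimate egomotion between consecutive frames from ORB feature matches."""
    orb = cv2.ORB_create()
    k1, d1 = orb.detectAndCompute(prev_frame, None)
    k2, d2 = orb.detectAndCompute(frame, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC)
    return H  # maps points in `frame` back into `prev_frame` coordinates

def post_contact_track(frames, hand_centers, t_contact):
    """Project hand positions after t_contact back into the frame at t_contact."""
    tau, H_acc = [], np.eye(3)
    for t in range(t_contact + 1, len(frames)):
        H_acc = H_acc @ homography(frames[t - 1], frames[t])
        p = cv2.perspectiveTransform(np.float32([[hand_centers[t]]]), H_acc)[0, 0]
        tau.append(p)
    return np.array(tau)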
We deal with this issue with a simple yet elegant trick: we extract affordances in the frames with humans but then map those affordances back to the \ufb01rst frame when human was yet to enter the scene. For videos in which a human is always in frame, we either crop out the human in the initial frame if there is no interaction yet or discard the frame if the human is always in contact. We compute the contact points and post-contact trajectories with respect to this human-less frame via the same homography procedure described above. This human-less frame is then used to condition our affordance model. 3.2.2 Training Affordance Model Conditioned on the input image, the affordance model is trained to predict the extracted labels for contact points and post-contact trajectories. However, naive joint prediction does not work well as the learning problem is inherently multi-modal. For instance, one would pick up a cup differently from a table depending on whether the goal is to pour it into the sink or take a sip from it. We handle this by predicting multiple heatmaps for interaction points using the same model, building a spatial probability distribution. For ease of notation, we use (\u00b7)\u03b8 as a catch-all for all parameterized modules and use f\u03b8 to denote our complete network. Fig. 2 shows an overview of our model. Input image It is encoded using a ResNet [45] visual encoder gconv \u03b8 to give a spatial latent representation zt, i.e., gconv \u03b8 (It) = zt. We then project this latent zt into K probability distributions or heatmaps using deconvolutional layers; concretely, Ht = gdeconv \u03b8 (zt). Using a spatial softmax, \u03c32D, we get the estimation of the labels for GMM means, i.e., \u00b5k. We found that keeping the covariance matrices \ufb01xed gave better results. Formally, the loss for contact point estimation is: Lcontact = \r \r\u00b5i \u2212\u03c32D \u0000gdeconv \u03b8 (gconv \u03b8 (It)) \u0001\r \r 2 (2) To estimate post-contact trajectory, we train a trajectory prediction network, T\u03b8, based on the latent representation zt. We \ufb01nd that it is easier to optimize for relative shifts, i.e., the direction of movement instead of absolute locations, assuming that the \ufb01rst point \u02c6 w0 is 0, since the contact points are already spatially grounded. Based on the success of Transformers for sequential prediction, we employ self-attention blocks [118] and train to optimize Ltraj = \u2225\u03c4 \u2212T\u03b8(zt)\u22252. In a given scene, there are many objects a human could interact with, which may or may not be present in the training data. We tackle this uncertainty and avoid spurious correlations by sampling local crops of It around the contact points. These serve as the effective input to our network f\u03b8 and enables better generalization. 3.3. Robot Learning from Visual Affordances Instead of \ufb01nding a particular way to use our affordance model for robotics, we show that it can bootstrap existing robot learning methods. In particular, we consider four different robotics paradigms as shown in Fig. 3. A. Imitation Learning from Of\ufb02ine Data Collection Imitation learning is conventionally performed on data collected by human demonstrations, teleoperation, or scripted policies \u2013 all of which are expensive and only allow for small-scale data collection [4, 6, 12, 61, 109, 129]. On the other hand, using the affordance model, f\u03b8(\u00b7) to guide the robot has a high probability of yielding \u2018interesting\u2019 interactions. 
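A compact PyTorch sketch of the prediction heads and losses of Sec. 3.2.2: a convolutional encoder, K deconvolutional heatmaps reduced to contact means by a spatial softmax (Eq. (2)), and a small self-attention head regressing relative waypoints. The two-layer trunk and layer sizes are stand-ins, not the published ResNet-based architecture.

# Sketch (PyTorch) of the affordance model in Sec. 3.2.2: encoder -> K heatmaps
# -> spatial-softmax contact means, plus a transformer head for relative
# waypoints. Shapes and sizes are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

def spatial_softmax_means(heatmaps):
    """Reduce (B, K, H, W) heatmaps to (B, K, 2) expected (x, y) pixel locations."""
    b, k, h, w = heatmaps.shape
    probs = F.softmax(heatmaps.flatten(2), dim=-1).view(b, k, h, w)
    ys = torch.linspace(0, h - 1, h, device=heatmaps.device)
    xs = torch.linspace(0, w - 1, w, device=heatmaps.device)
    mean_y = (probs.sum(dim=3) * ys).sum(dim=2)
    mean_x = (probs.sum(dim=2) * xs).sum(dim=2)
    return torch.stack([mean_x, mean_y], dim=-1)

class AffordanceNet(nn.Module):
    def __init__(self, k_modes=3, traj_len=5, feat_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(                      # stand-in for a ResNet trunk
            nn.Conv2d(3, 64, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU())
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(feat_dim, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, k_modes, 4, stride=2, padding=1))
        self.attn = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=feat_dim, nhead=4, batch_first=True),
            num_layers=2)
        self.traj_head = nn.Linear(feat_dim, traj_len * 2)  # relative waypoint offsets
        self.traj_len = traj_len

    def forward(self, img):
        z = self.encoder(img)                               # (B, C, h, w)
        contact_means = spatial_softmax_means(self.deconv(z))
        tokens = z.flatten(2).transpose(1, 2)                # (B, h*w, C)
        pooled = self.attn(tokens).mean(dim=1)
        traj = self.traj_head(pooled).view(-1, self.traj_len, 2)
        return contact_means, traj

def loss_fn(pred_means, pred_traj, gt_means, gt_traj):
    # L_contact (Eq. 2) plus L_traj, both plain squared errors.
    return F.mse_loss(pred_means, gt_means) + F.mse_loss(pred_traj, gt_traj)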
Given an image input It, the affordance model produces (c, \u03c4) = f\u03b8(It), and we store {(It, (c, \u03c4))} in a dataset D. After suf\ufb01cient data has been collected, we can use imitation learning to learn control policies, often to complete a speci\ufb01c task. A common approach for task speci\ufb01cation is to use goal images that show the desired con\ufb01guration of objects. Given the goal image, the k-Nearest Neighbors (k-NN) approach involves \ufb01ltering trajectories in D based on their distance to the goal image in feature space. Further, the top (\ufb01ltered) trajectories can be used for behavior cloning (BC) by training a policy, \u03c0(c, \u03c4|It). We run both k-NN and behavior cloning on datasets collected by different methods in Sec. 4.1. Using the same IL approach for different datasets is also useful for comparing the relative quality of the data. This is because higher relative success for a particular dataset implies that the data is qualitatively better, given that the same IL algorithm achieves worse performance on a different dataset. This indicates that the goal (or similar images) were likely seen during data collection. B. Reward-Free Exploration The goal of exploration is to discover as many diverse skills as possible which can aid the robot in solving downstream tasks. Exploration methods are usually guided by intrinsic rewards that are selfgenerated by the robotic agent, and are not speci\ufb01c to any task [9, 49, 51, 64, 73, 86, 90, 93, 98, 117]. However, starting exploration from scratch is too inef\ufb01cient in the real world, as the robot can spend an extremely large amount of time trying to explore and still not learn meaningful skills to solve tasks desired by humans. Here our affordance model can be greatly bene\ufb01cial by bootstrapping the exploration from the predicted affordances allowing the agent to focus on parts of the scene likely to be of interest to humans. To operationalize this, we \ufb01rst use the affordance model f\u03b8(.) for data-collection. We then rank all the trajectories collected using a task-agnostic exploration metric, and \ufb01t a distribution h to the (c, \u03c4) values of the top trajectories. For subsequent data collection, we sample from h with some probability, and otherwise use the affordance model f. This process can then be repeated, and the elite-\ufb01tting scheme will bootstrap from highly exploratory trajectories to improve exploration even further. For the exploration metric in our experiments, we maximize environment change EC(Ii, Ij) = ||\u03c6(Ii) \u2212\u03c6(Ij)||2, (similar to previous exploration approaches [6, 88]) between \ufb01rst and last images in the trajectory, where \u03c6 masks the robot and the loss is only taken on non-masked pixels. C. Goal-Conditioned Learning While exploring the environment can lead to interesting skills, consider a robot that already knows its goal. Using this knowledge (e.g. an image of the opened door), it supervise its policy search. Goal images are frequently used to specify rewards in RL [3, 34, 38, 74, 82, 83, 91, 121, 138]. Using our affordance Figure 4. Qualitative affordance model outputs for VRB, HOI [66], Hotspots [39] and HAP [39], showing the predicted contact point region, and post-grasp trajectory (green arrow for VRB, red for HOI [66]). We can see that VRB produces the most meaningful affordances. model can expedite the process of solving goal-speci\ufb01ed tasks. 
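The elite-fitting loop behind reward-free exploration described above, i.e. rank rollouts by the environment-change metric EC, fit a distribution h to the (c, τ) parameters of the top trajectories, then mix sampling from h with fresh queries to fθ, might look roughly like the sketch below. The robot-masked feature map φ and the rollout routine are placeholders, h is simplified to a diagonal Gaussian over a flattened (c, τ) vector, and the goal-conditioned variant would only swap the ranking metric for distance to the goal image.

# Sketch of the elite-fitting loop for reward-free exploration (Sec. 3.3 B):
# rank episodes by environment change EC, refit a Gaussian over the elite
# (c, tau) parameters, and mix sampling from it with fresh affordance queries.
# `phi` (robot-masked features) and `rollout` are placeholders.
import numpy as np

def environment_change(first_img, last_img, phi):
    return np.linalg.norm(phi(first_img) - phi(last_img))     # EC(I_0, I_T)

def explore(affordance_model, rollout, phi, rounds=5, episodes=20,
            elite_frac=0.2, p_elite=0.5, rng=np.random.default_rng(0)):
    buffer, elite_dist = [], None                              # elite_dist plays the role of h
    for _ in range(rounds):
        for _ in range(episodes):
            if elite_dist is not None and rng.random() < p_elite:
                mean, std = elite_dist
                action_params = rng.normal(mean, std)          # sample flattened (c, tau) from h
            else:
                action_params = affordance_model.sample()      # query f_theta on the scene
            first_img, last_img = rollout(action_params)       # execute on the robot
            score = environment_change(first_img, last_img, phi)
            buffer.append((score, action_params))
        # Refit h on the top-scoring trajectories collected so far.
        buffer.sort(key=lambda x: x[0], reverse=True)
        elite = np.stack([p for _, p in buffer[:max(1, int(elite_frac * len(buffer)))]])
        elite_dist = (elite.mean(axis=0), elite.std(axis=0) + 1e-6)
    return buffer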
Similar to the exploration setting, we rank trajectories and \ufb01t a distribution h to the (c, \u03c4) values of the top trajectories, but here the metric is to minimize distance to the goal image Ig. The metric used in our experiments is to minimize EC(IT , Ig), where IT is the last image in the trajectory, or to minimize ||\u03c8(Ig) \u2212\u03c8(IT )||2 2, where \u03c8 is a feature space. Akin to exploration, subsequent data collection involves sampling from h and the affordance model f. D. Affordance as an Action Space Unlike games with discrete spaces like Chess and Go where reinforcement learning is deployed tabula rasa, robots need to operate in continuous action spaces that are dif\ufb01cult to optimize over. A pragmatic alternative to continuous action spaces is parameterizing them in a spatial manner and assigning a primitive (e.g. grasping, pushing or placing) to each location [111, 131, 132]. While this generally limits the type of tasks that can be performed, our affordance model already seeks out interesting states, due to the data it is trained on. We \ufb01rst query the affordance model on the scene many times to obtain a large number of predictions. We then \ufb01t a GMM to these points to obtain a discrete set of (c, \u03c4) values, and now the robot just needs to search over this space. 4. Experimental Setup and Results Through the four robot learning paradigms, shown in Fig. 3, we seek to answer the following questions: (1) Does our model enable a robot to collect useful data (imitation from of\ufb02ine data)?, (2) How much bene\ufb01t does VRB provide to exploration methods?, (3) Can our method enable goal-conditioned learning?, and (4) Can our model be used to de\ufb01ne a structured action space for robots? Finally, we also study whether our model learns meaningful visual representations for control as a byproduct and also analyze the failure modes and how they differ from prior work. Robotics Setup We use two different robot platforms the Franka Emika Panda arm and the Hello Stretch mobile manipulator. We run the Franka on two distinct play kitchen environments and test on tasks that involve interacting with a cabinet, a knife and some vegetables, and manipulation of a a shelf and a pot. The Hello robot is tested on multiple in-the wild tasks outside lab settings, including opening a garbage can, lifting a lid, opening a door, pulling out a drawer, and opening a dishwasher (Fig. 1). We also provide support for a simulation environment on the Franka-Kitchen benchmark [29]. Details can be found in the Appendix. Observation and Action space For each task, we estimate a task-space image-crop using bounding boxes [135], and pass random sub-crops to f\u03b8. The prediction for contact points c and post-contact trajectory \u03c4 is in pixel space, which are projected into 3D for robot control using a calibrated robot-camera system (with an Intel RealSense D415i). The robot operates in 6DOF end-effector space \u2013 samples a rotation, moves to a contact point, grasps, and then moves to a post-contact position (see Sec. 3.1). Baselines and Ablations: We compare against prior work that has tried to predict heatmaps from human video : 1) Hotspots [80] 2) Hands as Probes (HAP) [39], a modi\ufb01ed version for our robot setup of Liu et al. 
[66] that predicts Cabinet Knife Veg Shelf Pot Door Lid Drawer k-Nearest Neighbors: HOI 0.2 0.1 0.1 0.6 0.0 0.4 0.0 0.6 HAP 0.3 0.0 0.3 0.0 0.1 0.2 0.0 0.1 Hotspots 0.4 0.0 0.1 0.0 0.5 0.4 0.3 0.5 Random 0.3 0.0 0.1 0.3 0.4 0.2 0.1 0.2 VRB (ours) 0.6 0.3 0.6 0.8 0.4 1.0 0.4 1.0 Behavior Cloning: HOI 0.3 0.0 0.3 0.0 0.1 0.2 0.0 0.1 HAP 0.5 0.0 0.4 0.0 0.3 0.1 0.0 0.1 Hotspots 0.2 0.0 0.0 0.0 0.8 0.1 0.0 0.7 Random 0.1 0.1 0.1 0.0 0.2 0.1 0.0 0.0 VRB (ours) 0.6 0.1 0.3 0.3 0.8 0.9 0.2 0.9 Table 1. Imitation Learning: Success rate for k-NN and Behavior Cloning on collected of\ufb02ine data using various affordance models. We \ufb01nd that VRB vastly outperforms prior approaches, indicating better quality of data. contact region and forecast hand poses: 3) HOI [66] and 4) a baseline that samples affordances at random (Random). HAP and Hotspots only output a contact point, and we randomly select a post-contact direction. More details are available in the Appendix. 4.1. Quality of Collected Data for Imitation We investigate VRB as a tool for useful data collection. We evaluate this on both our robots across 8 different environments, with results in Tab. 1. These are all unseen scenarios (not in train set). Tasks are speci\ufb01ed for each environment using goal images (eg open door, lifted pot etc), and we use the data collected (30-150 episodes) for two established of\ufb02ine learning methods: (1) k-Nearest Neighbors (k-NN) and (2) Behavior Cloning. k-NN [87] \ufb01nds trajectories in the dataset that are close (via distance in feature space [84]) to the goal image. We run the 10-closest trajectories to the goal image and record whether the robot has achieved the task speci\ufb01ed in the goal image. For behavior cloning, we train a network supervised with (image, waypoint) pairs from the collected dataset, and the resulting policy is run 10 times on the real system. With both k-NN and BC, our method outperforms prior tasks on 7 out of 8 tasks, with an average success rate of 57 %, with the runner-up method (Hotspots [80]) only getting 25 %. This shows that VRB leads to much better data of\ufb02ine data quality, and thus can lead to better imitation learning performance. We additionally test for grasping held-out rare objects such as VR remotes or staplers, and \ufb01nd that VRB outperforms baselines. Details can be found in the Appendix. 4.2. Reward-Free Exploration Here we study self-supervised exploration with no external rewards. We utilize environment change, i.e., change in the position of objects as a task-agnostic metric for exploration [6]. For improved exploration, we bias sampling towards trajectories with a higher environment change metric. To evaluate the quality of exploration data, we measure how often does the robot achieves coincidental success i.e. reach a goal image con\ufb01guration without having access to it. As shown in Fig. 5, we obtain consistent improvements over HAP [39] and random exploration raising performance multiple fold \u2013 from 3\u00d7 to 10\u00d7, for every task. 150 200 250 Episodes 0 10 20 30 40 50 60 Coincidental Success Ours HAP Random (a) Cabinet 100 150 200 Episodes 0 2 4 6 8 10 12 14 Coincidental Success (b) Knife 50 80 110 Episodes 0 5 10 15 20 25 Coincidental Success (c) Stovepot 50 80 110 Episodes 0 5 10 15 20 25 30 35 40 Coincidental Success (d) Shelf Figure 5. Exploration: Coincidental success of VRB in comparison to random exploration or the exploration based on HAP [39]. 4.3. 
Goal-Conditioned Learning The previous settings help robots improve their behaviors with data without an external reward or goal. Here we focus on goal-driven robot learning. Goals are often speci\ufb01ed through images of the goal con\ufb01guration. Note that goal images are also used in Sec. 4.1 but as part of a static dataset to imitate. Here, the robot policy is updated with new data being added to the buffer. We sample this dataset for trajectories that minimize visual change with respect to the goal image. As shown in Fig. 6, VRB learns faster and better HAP [39] and Random on this robot learning paradigm, over six diverse tasks. 4.4. Affordance as an Action Space We utilize visual affordances to create a discrete action space using a set of contact points and post-contact trajectories. We then train a Deep Q-Network (DQN) [76] over this action space, for the above goalconditioned learning problem.In Fig. 7, we see that with VRB, the robot experiences more successes showing that a greater percentage of actions in the discretized action space correspond to meaningful object interactions. 4.5. Analyzing Visual Representations Beyond showing better utility for robot learning paradigms, we analyze the quality of visual representations of the encoder learned in VRB. Two standard evaluations 0 20 40 60 80 100 120 140 Episodes 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 Success Rate Ours HAP Random (a) Door 0 20 40 60 80 100 Episodes 0.0 0.1 0.2 0.3 0.4 0.5 0.6 Success Rate (b) Veggies 0 20 40 60 80 Episodes 0.0 0.1 0.2 0.3 0.4 Success Rate (c) Lid 0 10 20 30 40 50 60 70 Episodes 0.0 0.2 0.4 0.6 0.8 Success Rate (d) Dishwasher 0 20 40 60 80 Episodes 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 Success Rate (e) Drawer 0 10 20 30 40 50 60 70 80 Episodes 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 Success Rate (f) Garbage Can Figure 6. Goal-conditioned Learning: Success rate for reaching goal con\ufb01guration for six different tasks. Sampling via VRB leads to faster learning and better \ufb01nal performance. 0 10 20 30 40 50 60 Episodes 0.0 2.5 5.0 7.5 10.0 12.5 15.0 17.5 Successes Ours HAP (a) Cabinet 0 5 10 15 20 25 30 Episodes 0 2 4 6 8 Successes Ours HAP (b) Veggies Figure 7. Action Space: Success using DQN with the discretized action space, for reaching a speci\ufb01ed goal image. for this are (1) if they can help for downstream tasks and (2) how meaningful distances in their feature spaces are. VRB R3M [84] microwave 0.16 0.10 slide-door 0.84 0.70 door-open 0.13 0.11 Table 2. Behavior Cloning with VRB vs. R3M [84] representation. Finetuning To investigate if the visual representations are effective for control, we directly \ufb01netune a policy on top of the (frozen) visual encoder. We evaluate on three simulated Franka environments, as shown in Tab. 2, and we see that VRB outperforms R3M on all tasks. (We \ufb01netuned the policy only for 2K steps, instead of 20K in the R3M paper). This demonstrates that VRB visual representations contain information that is useful for control. Feature space distance We record the distance in feature space between the current and goal image for every timestep 0 5 10 15 20 25 30 Step (t) 0.0 0.2 0.4 0.6 0.8 Distance to Goal Ours R3M 0 5 10 15 20 25 30 Step (t) 0.0 0.2 0.4 0.6 0.8 Distance to Goal Figure 8. Feature space distance: Distance to goal in feature space for VRB decreases monotonically for door opening. in the episode, for both VRB and R3M [84] on successful cabinet opening trajectories. As shown in Fig. 
8, the distance for VRB decreases almost monotonically which correlates well with actual task progress. 4.6. Failure Modes While VRB and the baselines see qualitatively similar successes, VRB in general sees a larger number of them and the average case scenario for VRB is much better. Failure Partial Success Success 0 20 40 60 80 100 120 Count Ours HAP Random Figure 9. Failure mode analysis For the cabinet opening task, we classify each collected episode into three categories: \u201cFailure\u201d, \u201cPartial Success\u201d and \u201cSuccess\u201d. While VRB has a higher number of successful trajectories compared to the baselines (almost 2\u00d7), the number of partial successes is more than 6\u00d7 (Fig. 9). 5. Conclusion We propose Vision-Robotics Bridge (VRB), a scalable approach for learning useful affordances from passive human video data, and deploying them on many different robot learning paradigms (such as data collection for imitation, reward-free exploration, goal conditioned learning and paramterizing action spaces). Our affordance representation consists of contact points and post-contact trajectories. We demonstrate the effectiveness of this approach on the four paradigms and 10 different real world robotics tasks, including many that are in the wild. We run thorough experiments, spanning over 200 hours, and show that VRB drastically outperforms prior approaches. In the future, we hope to deploy on more complex multi-stage tasks, incorporate physical concepts such as force and tactile information, and investigate VRB in the context of visual representations. Acknowledgements We thank Shivam Duggal, Yufei Ye and Homanga Bharadhwaj for fruitful discussions and are grateful to Shagun Uppal, Ananye Agarwal, Murtaza Dalal and Jason Zhang for comments on early drafts of this paper. RM, LC, and DP are supported by NSF IIS-2024594, ONR MURI N00014-22-1-2773 and ONR N00014-22-1-2096." + }, + { + "url": "http://arxiv.org/abs/2212.04498v1", + "title": "VideoDex: Learning Dexterity from Internet Videos", + "abstract": "To build general robotic agents that can operate in many environments, it is\noften imperative for the robot to collect experience in the real world.\nHowever, this is often not feasible due to safety, time, and hardware\nrestrictions. We thus propose leveraging the next best thing as real-world\nexperience: internet videos of humans using their hands. Visual priors, such as\nvisual features, are often learned from videos, but we believe that more\ninformation from videos can be utilized as a stronger prior. We build a\nlearning algorithm, VideoDex, that leverages visual, action, and physical\npriors from human video datasets to guide robot behavior. These actions and\nphysical priors in the neural network dictate the typical human behavior for a\nparticular robot task. We test our approach on a robot arm and dexterous\nhand-based system and show strong results on various manipulation tasks,\noutperforming various state-of-the-art methods. 
Videos at\nhttps://video-dex.github.io", + "authors": "Kenneth Shaw, Shikhar Bahl, Deepak Pathak", + "published": "2022-12-08", + "updated": "2022-12-08", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.AI", + "cs.CV", + "cs.LG", + "cs.SY", + "eess.SY" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2110.03655v3", + "title": "Augmenting Reinforcement Learning with Behavior Primitives for Diverse Manipulation Tasks", + "abstract": "Realistic manipulation tasks require a robot to interact with an environment\nwith a prolonged sequence of motor actions. While deep reinforcement learning\nmethods have recently emerged as a promising paradigm for automating\nmanipulation behaviors, they usually fall short in long-horizon tasks due to\nthe exploration burden. This work introduces Manipulation Primitive-augmented\nreinforcement Learning (MAPLE), a learning framework that augments standard\nreinforcement learning algorithms with a pre-defined library of behavior\nprimitives. These behavior primitives are robust functional modules specialized\nin achieving manipulation goals, such as grasping and pushing. To use these\nheterogeneous primitives, we develop a hierarchical policy that involves the\nprimitives and instantiates their executions with input parameters. We\ndemonstrate that MAPLE outperforms baseline approaches by a significant margin\non a suite of simulated manipulation tasks. We also quantify the compositional\nstructure of the learned behaviors and highlight our method's ability to\ntransfer policies to new task variants and to physical hardware. Videos and\ncode are available at https://ut-austin-rpl.github.io/maple", + "authors": "Soroush Nasiriany, Huihan Liu, Yuke Zhu", + "published": "2021-10-07", + "updated": "2022-06-30", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1812.04558v2", + "title": "Grounded Human-Object Interaction Hotspots from Video", + "abstract": "Learning how to interact with objects is an important step towards embodied\nvisual intelligence, but existing techniques suffer from heavy supervision or\nsensing requirements. We propose an approach to learn human-object interaction\n\"hotspots\" directly from video. Rather than treat affordances as a manually\nsupervised semantic segmentation task, our approach learns about interactions\nby watching videos of real human behavior and anticipating afforded actions.\nGiven a novel image or video, our model infers a spatial hotspot map indicating\nhow an object would be manipulated in a potential interaction-- even if the\nobject is currently at rest. Through results with both first and third person\nvideo, we show the value of grounding affordances in real human-object\ninteractions. 
Not only are our weakly supervised hotspots competitive with\nstrongly supervised affordance methods, but they can also anticipate object\ninteraction for novel object categories.", + "authors": "Tushar Nagarajan, Christoph Feichtenhofer, Kristen Grauman", + "published": "2018-12-11", + "updated": "2019-04-02", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1710.04615v2", + "title": "Deep Imitation Learning for Complex Manipulation Tasks from Virtual Reality Teleoperation", + "abstract": "Imitation learning is a powerful paradigm for robot skill acquisition.\nHowever, obtaining demonstrations suitable for learning a policy that maps from\nraw pixels to actions can be challenging. In this paper we describe how\nconsumer-grade Virtual Reality headsets and hand tracking hardware can be used\nto naturally teleoperate robots to perform complex tasks. We also describe how\nimitation learning can learn deep neural network policies (mapping from pixels\nto actions) that can acquire the demonstrated skills. Our experiments showcase\nthe effectiveness of our approach for learning visuomotor skills.", + "authors": "Tianhao Zhang, Zoe McCarthy, Owen Jow, Dennis Lee, Xi Chen, Ken Goldberg, Pieter Abbeel", + "published": "2017-10-12", + "updated": "2018-03-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2102.00649v1", + "title": "Forecasting Action through Contact Representations from First Person Video", + "abstract": "Human actions involving hand manipulations are structured according to the\nmaking and breaking of hand-object contact, and human visual understanding of\naction is reliant on anticipation of contact as is demonstrated by pioneering\nwork in cognitive science. Taking inspiration from this, we introduce\nrepresentations and models centered on contact, which we then use in action\nprediction and anticipation. We annotate a subset of the EPIC Kitchens dataset\nto include time-to-contact between hands and objects, as well as segmentations\nof hands and objects. Using these annotations we train the Anticipation Module,\na module producing Contact Anticipation Maps and Next Active Object\nSegmentations - novel low-level representations providing temporal and spatial\ncharacteristics of anticipated near future action. On top of the Anticipation\nModule we apply Egocentric Object Manipulation Graphs (Ego-OMG), a framework\nfor action anticipation and prediction. Ego-OMG models longer term temporal\nsemantic relations through the use of a graph modeling transitions between\ncontact delineated action states. Use of the Anticipation Module within Ego-OMG\nproduces state-of-the-art results, achieving 1st and 2nd place on the unseen\nand seen test sets, respectively, of the EPIC Kitchens Action Anticipation\nChallenge, and achieving state-of-the-art results on the tasks of action\nanticipation and action prediction over EPIC Kitchens. 
We perform ablation\nstudies over characteristics of the Anticipation Module to evaluate their\nutility.", + "authors": "Eadom Dessalene, Chinmaya Devaraj, Michael Maynord, Cornelia Fermuller, Yiannis Aloimonos", + "published": "2021-02-01", + "updated": "2021-02-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2207.05053v3", + "title": "Learning Continuous Grasping Function with a Dexterous Hand from Human Demonstrations", + "abstract": "We propose to learn to generate grasping motion for manipulation with a\ndexterous hand using implicit functions. With continuous time inputs, the model\ncan generate a continuous and smooth grasping plan. We name the proposed model\nContinuous Grasping Function (CGF). CGF is learned via generative modeling with\na Conditional Variational Autoencoder using 3D human demonstrations. We will\nfirst convert the large-scale human-object interaction trajectories to robot\ndemonstrations via motion retargeting, and then use these demonstrations to\ntrain CGF. During inference, we perform sampling with CGF to generate different\ngrasping plans in the simulator and select the successful ones to transfer to\nthe real robot. By training on diverse human data, our CGF allows\ngeneralization to manipulate multiple objects. Compared to previous planning\nalgorithms, CGF is more efficient and achieves significant improvement on\nsuccess rate when transferred to grasping with the real Allegro Hand. Our\nproject page is available at https://jianglongye.com/cgf .", + "authors": "Jianglong Ye, Jiashun Wang, Binghao Huang, Yuzhe Qin, Xiaolong Wang", + "published": "2022-07-11", + "updated": "2023-03-19", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1801.02854v3", + "title": "Riemannian Motion Policies", + "abstract": "We introduce the Riemannian Motion Policy (RMP), a new mathematical object\nfor modular motion generation. An RMP is a second-order dynamical system\n(acceleration field or motion policy) coupled with a corresponding Riemannian\nmetric. The motion policy maps positions and velocities to accelerations, while\nthe metric captures the directions in the space important to the policy. We\nshow that RMPs provide a straightforward and convenient method for combining\nmultiple motion policies and transforming such policies from one space (such as\nthe task space) to another (such as the configuration space) in geometrically\nconsistent ways. The operators we derive for these combinations and\ntransformations are provably optimal, have linearity properties making them\nagnostic to the order of application, and are strongly analogous to the\ncovariant transformations of natural gradients popular in the machine learning\nliterature. The RMP framework enables the fusion of motion policies from\ndifferent motion generation paradigms, such as dynamical systems, dynamic\nmovement primitives (DMPs), optimal control, operational space control,\nnonlinear reactive controllers, motion optimization, and model predictive\ncontrol (MPC), thus unifying these disparate techniques from the literature.\nRMPs are easy to implement and manipulate, facilitate controller design,\nsimplify handling of joint limits, and clarify a number of open questions\nregarding the proper fusion of motion generation methods (such as incorporating\nlocal reactive policies into long-horizon optimizers). 
We demonstrate the\neffectiveness of RMPs on both simulation and real robots, including their\nability to naturally and efficiently solve complicated collision avoidance\nproblems previously handled by more complex planners.", + "authors": "Nathan D. Ratliff, Jan Issac, Daniel Kappler, Stan Birchfield, Dieter Fox", + "published": "2018-01-09", + "updated": "2018-07-25", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1912.04443v3", + "title": "AVID: Learning Multi-Stage Tasks via Pixel-Level Translation of Human Videos", + "abstract": "Robotic reinforcement learning (RL) holds the promise of enabling robots to\nlearn complex behaviors through experience. However, realizing this promise for\nlong-horizon tasks in the real world requires mechanisms to reduce human burden\nin terms of defining the task and scaffolding the learning process. In this\npaper, we study how these challenges can be alleviated with an automated\nrobotic learning framework, in which multi-stage tasks are defined simply by\nproviding videos of a human demonstrator and then learned autonomously by the\nrobot from raw image observations. A central challenge in imitating human\nvideos is the difference in appearance between the human and robot, which\ntypically requires manual correspondence. We instead take an automated approach\nand perform pixel-level image translation via CycleGAN to convert the human\ndemonstration into a video of a robot, which can then be used to construct a\nreward function for a model-based RL algorithm. The robot then learns the task\none stage at a time, automatically learning how to reset each stage to retry it\nmultiple times without human-provided resets. This makes the learning process\nlargely automatic, from intuitive task specification via a video to automated\ntraining with minimal human intervention. We demonstrate that our approach is\ncapable of learning complex tasks, such as operating a coffee machine, directly\nfrom raw image observations, requiring only 20 minutes to provide human\ndemonstrations and about 180 minutes of robot interaction.", + "authors": "Laura Smith, Nikita Dhawan, Marvin Zhang, Pieter Abbeel, Sergey Levine", + "published": "2019-12-10", + "updated": "2020-06-21", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2006.06669v1", + "title": "Understanding Human Hands in Contact at Internet Scale", + "abstract": "Hands are the central means by which humans manipulate their world and being\nable to reliably extract hand state information from Internet videos of humans\nengaged in interaction with their hands has the potential to pave the way to systems that can\nlearn from petabytes of video data. This paper proposes steps towards this by\ninferring a rich representation of hands engaged in interaction method that\nincludes: hand location, side, contact state, and a box around the object in\ncontact. To support this effort, we gather a large-scale dataset of hands in\ncontact with objects consisting of 131 days of footage as well as a 100K\nannotated hand-contact video frame dataset. The learned model on this dataset\ncan serve as a foundation for hand-contact understanding in videos. We\nquantitatively evaluate it both on its own and in service of predicting and\nlearning from 3D meshes of human hands.", + "authors": "Dandan Shan, Jiaqi Geng, Michelle Shu, David F. 
Fouhey", + "published": "2020-06-11", + "updated": "2020-06-11", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1911.10967v2", + "title": "Forecasting Human-Object Interaction: Joint Prediction of Motor Attention and Actions in First Person Video", + "abstract": "We address the challenging task of anticipating human-object interaction in\nfirst person videos. Most existing methods ignore how the camera wearer\ninteracts with the objects, or simply consider body motion as a separate\nmodality. In contrast, we observe that the international hand movement reveals\ncritical information about the future activity. Motivated by this, we adopt\nintentional hand movement as a future representation and propose a novel deep\nnetwork that jointly models and predicts the egocentric hand motion,\ninteraction hotspots and future action. Specifically, we consider the future\nhand motion as the motor attention, and model this attention using latent\nvariables in our deep model. The predicted motor attention is further used to\ncharacterise the discriminative spatial-temporal visual features for predicting\nactions and interaction hotspots. We present extensive experiments\ndemonstrating the benefit of the proposed joint model. Importantly, our model\nproduces new state-of-the-art results for action anticipation on both EGTEA\nGaze+ and the EPIC-Kitchens datasets. Our project page is available at\nhttps://aptx4869lm.github.io/ForecastingHOI/", + "authors": "Miao Liu, Siyu Tang, Yin Li, James Rehg", + "published": "2019-11-25", + "updated": "2020-07-20", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1809.09558v1", + "title": "Human-Machine Interface for Remote Training of Robot Tasks", + "abstract": "Regardless of their industrial or research application, the streamlining of\nrobot operations is limited by the proximity of experienced users to the actual\nhardware. Be it massive open online robotics courses, crowd-sourcing of robot\ntask training, or remote research on massive robot farms for machine learning,\nthe need to create an apt remote Human-Machine Interface is quite prevalent.\nThe paper at hand proposes a novel solution to the programming/training of\nremote robots employing an intuitive and accurate user-interface which offers\nall the benefits of working with real robots without imposing delays and\ninefficiency. The system includes: a vision-based 3D hand detection and gesture\nrecognition subsystem, a simulated digital twin of a robot as visual feedback,\nand the \"remote\" robot learning/executing trajectories using dynamic motion\nprimitives. Our results indicate that the system is a promising solution to the\nproblem of remote training of robot tasks.", + "authors": "Jordi Spranger, Roxana Buzatoiu, Athanasios Polydoros, Lazaros Nalpantidis, Evangelos Boukas", + "published": "2018-09-25", + "updated": "2018-09-25", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2108.06428v1", + "title": "FrankMocap: A Monocular 3D Whole-Body Pose Estimation System via Regression and Integration", + "abstract": "Most existing monocular 3D pose estimation approaches only focus on a single\nbody part, neglecting the fact that the essential nuance of human motion is\nconveyed through a concert of subtle movements of face, hands, and body. 
In\nthis paper, we present FrankMocap, a fast and accurate whole-body 3D pose\nestimation system that can produce 3D face, hands, and body simultaneously from\nin-the-wild monocular images. The core idea of FrankMocap is its modular\ndesign: We first run 3D pose regression methods for face, hands, and body\nindependently, followed by composing the regression outputs via an integration\nmodule. The separate regression modules allow us to take full advantage of\ntheir state-of-the-art performances without compromising the original accuracy\nand reliability in practice. We develop three different integration modules\nthat trade off between latency and accuracy. All of them are capable of\nproviding simple yet effective solutions to unify the separate outputs into\nseamless whole-body pose estimation results. We quantitatively and\nqualitatively demonstrate that our modularized system outperforms both the\noptimization-based and end-to-end methods of estimating whole-body pose.", + "authors": "Yu Rong, Takaaki Shiratori, Hanbyul Joo", + "published": "2021-08-13", + "updated": "2021-08-13", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2103.16817v1", + "title": "Learning Generalizable Robotic Reward Functions from \"In-The-Wild\" Human Videos", + "abstract": "We are motivated by the goal of generalist robots that can complete a wide\nrange of tasks across many environments. Critical to this is the robot's\nability to acquire some metric of task success or reward, which is necessary\nfor reinforcement learning, planning, or knowing when to ask for help. For a\ngeneral-purpose robot operating in the real world, this reward function must\nalso be able to generalize broadly across environments, tasks, and objects,\nwhile depending only on on-board sensor observations (e.g. RGB images). While\ndeep learning on large and diverse datasets has shown promise as a path towards\nsuch generalization in computer vision and natural language, collecting high\nquality datasets of robotic interaction at scale remains an open challenge. In\ncontrast, \"in-the-wild\" videos of humans (e.g. YouTube) contain an extensive\ncollection of people doing interesting tasks across a diverse range of\nsettings. In this work, we propose a simple approach, Domain-agnostic Video\nDiscriminator (DVD), that learns multitask reward functions by training a\ndiscriminator to classify whether two videos are performing the same task, and\ncan generalize by virtue of learning from a small amount of robot data with a\nbroad dataset of human videos. We find that by leveraging diverse human\ndatasets, this reward function (a) can generalize zero shot to unseen\nenvironments, (b) generalize zero shot to unseen tasks, and (c) can be combined\nwith visual model predictive control to solve robotic manipulation tasks on a\nreal WidowX200 robot in an unseen environment from a single human demo.", + "authors": "Annie S. Chen, Suraj Nair, Chelsea Finn", + "published": "2021-03-31", + "updated": "2021-03-31", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.AI", + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2007.04979v1", + "title": "A Cordial Sync: Going Beyond Marginal Policies for Multi-Agent Embodied Tasks", + "abstract": "Autonomous agents must learn to collaborate. It is not scalable to develop a\nnew centralized agent every time a task's difficulty outpaces a single agent's\nabilities. 
While multi-agent collaboration research has flourished in\ngridworld-like environments, relatively little work has considered visually\nrich domains. Addressing this, we introduce the novel task FurnMove in which\nagents work together to move a piece of furniture through a living room to a\ngoal. Unlike existing tasks, FurnMove requires agents to coordinate at every\ntimestep. We identify two challenges when training agents to complete FurnMove:\nexisting decentralized action sampling procedures do not permit expressive\njoint action policies and, in tasks requiring close coordination, the number of\nfailed actions dominates successful actions. To confront these challenges we\nintroduce SYNC-policies (synchronize your actions coherently) and CORDIAL\n(coordination loss). Using SYNC-policies and CORDIAL, our agents achieve a 58%\ncompletion rate on FurnMove, an impressive absolute gain of 25 percentage\npoints over competitive decentralized baselines. Our dataset, code, and\npretrained models are available at https://unnat.github.io/cordial-sync .", + "authors": "Unnat Jain, Luca Weihs, Eric Kolve, Ali Farhadi, Svetlana Lazebnik, Aniruddha Kembhavi, Alexander Schwing", + "published": "2020-07-09", + "updated": "2020-07-09", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG", + "cs.MA" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2203.13251v1", + "title": "Dexterous Imitation Made Easy: A Learning-Based Framework for Efficient Dexterous Manipulation", + "abstract": "Optimizing behaviors for dexterous manipulation has been a longstanding\nchallenge in robotics, with a variety of methods from model-based control to\nmodel-free reinforcement learning having been previously explored in\nliterature. Perhaps one of the most powerful techniques to learn complex\nmanipulation strategies is imitation learning. However, collecting and learning\nfrom demonstrations in dexterous manipulation is quite challenging. The\ncomplex, high-dimensional action-space involved with multi-finger control often\nleads to poor sample efficiency of learning-based methods. In this work, we\npropose 'Dexterous Imitation Made Easy' (DIME) a new imitation learning\nframework for dexterous manipulation. DIME only requires a single RGB camera to\nobserve a human operator and teleoperate our robotic hand. Once demonstrations\nare collected, DIME employs standard imitation learning methods to train\ndexterous manipulation policies. On both simulation and real robot benchmarks\nwe demonstrate that DIME can be used to solve complex, in-hand manipulation\ntasks such as 'flipping', 'spinning', and 'rotating' objects with the Allegro\nhand. Our framework along with pre-collected demonstrations is publicly\navailable at https://nyu-robot-learning.github.io/dime.", + "authors": "Sridhar Pandian Arunachalam, Sneha Silwal, Ben Evans, Lerrel Pinto", + "published": "2022-03-24", + "updated": "2022-03-24", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.AI", + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2112.09120v2", + "title": "Human Hands as Probes for Interactive Object Understanding", + "abstract": "Interactive object understanding, or what we can do to objects and how is a\nlong-standing goal of computer vision. In this paper, we tackle this problem\nthrough observation of human hands in in-the-wild egocentric videos. 
We\ndemonstrate that observation of what human hands interact with and how can\nprovide both the relevant data and the necessary supervision. Attending to\nhands, readily localizes and stabilizes active objects for learning and reveals\nplaces where interactions with objects occur. Analyzing the hands shows what we\ncan do to objects and how. We apply these basic principles on the EPIC-KITCHENS\ndataset, and successfully learn state-sensitive features, and object\naffordances (regions of interaction and afforded grasps), purely by observing\nhands in egocentric videos.", + "authors": "Mohit Goyal, Sahil Modi, Rishabh Goyal, Saurabh Gupta", + "published": "2021-12-16", + "updated": "2022-04-08", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG", + "cs.RO" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1804.02748v2", + "title": "Scaling Egocentric Vision: The EPIC-KITCHENS Dataset", + "abstract": "First-person vision is gaining interest as it offers a unique viewpoint on\npeople's interaction with objects, their attention, and even intention.\nHowever, progress in this challenging domain has been relatively slow due to\nthe lack of sufficiently large datasets. In this paper, we introduce\nEPIC-KITCHENS, a large-scale egocentric video benchmark recorded by 32\nparticipants in their native kitchen environments. Our videos depict\nnonscripted daily activities: we simply asked each participant to start\nrecording every time they entered their kitchen. Recording took place in 4\ncities (in North America and Europe) by participants belonging to 10 different\nnationalities, resulting in highly diverse cooking styles. Our dataset features\n55 hours of video consisting of 11.5M frames, which we densely labeled for a\ntotal of 39.6K action segments and 454.3K object bounding boxes. Our annotation\nis unique in that we had the participants narrate their own videos (after\nrecording), thus reflecting true intention, and we crowd-sourced ground-truths\nbased on these. We describe our object, action and anticipation challenges, and\nevaluate several baselines over two test splits, seen and unseen kitchens.\nDataset and Project page: http://epic-kitchens.github.io", + "authors": "Dima Damen, Hazel Doughty, Giovanni Maria Farinella, Sanja Fidler, Antonino Furnari, Evangelos Kazakos, Davide Moltisanti, Jonathan Munro, Toby Perrett, Will Price, Michael Wray", + "published": "2018-04-08", + "updated": "2018-07-31", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2112.01511v2", + "title": "The Surprising Effectiveness of Representation Learning for Visual Imitation", + "abstract": "While visual imitation learning offers one of the most effective ways of\nlearning from visual demonstrations, generalizing from them requires either\nhundreds of diverse demonstrations, task specific priors, or large,\nhard-to-train parametric models. One reason such complexities arise is because\nstandard visual imitation frameworks try to solve two coupled problems at once:\nlearning a succinct but good representation from the diverse visual data, while\nsimultaneously learning to associate the demonstrated actions with such\nrepresentations. Such joint learning causes an interdependence between these\ntwo problems, which often results in needing large amounts of demonstrations\nfor learning. To address this challenge, we instead propose to decouple\nrepresentation learning from behavior learning for visual imitation. 
First, we\nlearn a visual representation encoder from offline data using standard\nsupervised and self-supervised learning methods. Once the representations are\ntrained, we use non-parametric Locally Weighted Regression to predict the\nactions. We experimentally show that this simple decoupling improves the\nperformance of visual imitation models on both offline demonstration datasets\nand real-robot door opening compared to prior work in visual imitation. All of\nour generated data, code, and robot videos are publicly available at\nhttps://jyopari.github.io/VINN/.", + "authors": "Jyothish Pari, Nur Muhammad Shafiullah, Sridhar Pandian Arunachalam, Lerrel Pinto", + "published": "2021-12-02", + "updated": "2021-12-06", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.AI", + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2012.11547v1", + "title": "Offline Reinforcement Learning from Images with Latent Space Models", + "abstract": "Offline reinforcement learning (RL) refers to the problem of learning\npolicies from a static dataset of environment interactions. Offline RL enables\nextensive use and re-use of historical datasets, while also alleviating safety\nconcerns associated with online exploration, thereby expanding the real-world\napplicability of RL. Most prior work in offline RL has focused on tasks with\ncompact state representations. However, the ability to learn directly from rich\nobservation spaces like images is critical for real-world applications such as\nrobotics. In this work, we build on recent advances in model-based algorithms\nfor offline RL, and extend them to high-dimensional visual observation spaces.\nModel-based offline RL algorithms have achieved state of the art results in\nstate based tasks and have strong theoretical guarantees. However, they rely\ncrucially on the ability to quantify uncertainty in the model predictions,\nwhich is particularly challenging with image observations. To overcome this\nchallenge, we propose to learn a latent-state dynamics model, and represent the\nuncertainty in the latent space. Our approach is both tractable in practice and\ncorresponds to maximizing a lower bound of the ELBO in the unknown POMDP. In\nexperiments on a range of challenging image-based locomotion and manipulation\ntasks, we find that our algorithm significantly outperforms previous offline\nmodel-free RL methods as well as state-of-the-art online visual model-based RL\nmethods. Moreover, we also find that our approach excels on an image-based\ndrawer closing task on a real robot using a pre-existing dataset. All results\nincluding videos can be found online at https://sites.google.com/view/lompo/ .", + "authors": "Rafael Rafailov, Tianhe Yu, Aravind Rajeswaran, Chelsea Finn", + "published": "2020-12-21", + "updated": "2020-12-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2110.07058v3", + "title": "Ego4D: Around the World in 3,000 Hours of Egocentric Video", + "abstract": "We introduce Ego4D, a massive-scale egocentric video dataset and benchmark\nsuite. It offers 3,670 hours of daily-life activity video spanning hundreds of\nscenarios (household, outdoor, workplace, leisure, etc.) captured by 931 unique\ncamera wearers from 74 worldwide locations and 9 different countries. 
The\napproach to collection is designed to uphold rigorous privacy and ethics\nstandards with consenting participants and robust de-identification procedures\nwhere relevant. Ego4D dramatically expands the volume of diverse egocentric\nvideo footage publicly available to the research community. Portions of the\nvideo are accompanied by audio, 3D meshes of the environment, eye gaze, stereo,\nand/or synchronized videos from multiple egocentric cameras at the same event.\nFurthermore, we present a host of new benchmark challenges centered around\nunderstanding the first-person visual experience in the past (querying an\nepisodic memory), present (analyzing hand-object manipulation, audio-visual\nconversation, and social interactions), and future (forecasting activities). By\npublicly sharing this massive annotated dataset and benchmark suite, we aim to\npush the frontier of first-person perception. Project page:\nhttps://ego4d-data.org/", + "authors": "Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, Miguel Martin, Tushar Nagarajan, Ilija Radosavovic, Santhosh Kumar Ramakrishnan, Fiona Ryan, Jayant Sharma, Michael Wray, Mengmeng Xu, Eric Zhongcong Xu, Chen Zhao, Siddhant Bansal, Dhruv Batra, Vincent Cartillier, Sean Crane, Tien Do, Morrie Doulaty, Akshay Erapalli, Christoph Feichtenhofer, Adriano Fragomeni, Qichen Fu, Abrham Gebreselasie, Cristina Gonzalez, James Hillis, Xuhua Huang, Yifei Huang, Wenqi Jia, Weslie Khoo, Jachym Kolar, Satwik Kottur, Anurag Kumar, Federico Landini, Chao Li, Yanghao Li, Zhenqiang Li, Karttikeya Mangalam, Raghava Modhugu, Jonathan Munro, Tullie Murrell, Takumi Nishiyasu, Will Price, Paola Ruiz Puentes, Merey Ramazanova, Leda Sari, Kiran Somasundaram, Audrey Southerland, Yusuke Sugano, Ruijie Tao, Minh Vo, Yuchen Wang, Xindi Wu, Takuma Yagi, Ziwei Zhao, Yunyi Zhu, Pablo Arbelaez, David Crandall, Dima Damen, Giovanni Maria Farinella, Christian Fuegen, Bernard Ghanem, Vamsi Krishna Ithapu, C. V. Jawahar, Hanbyul Joo, Kris Kitani, Haizhou Li, Richard Newcombe, Aude Oliva, Hyun Soo Park, James M. Rehg, Yoichi Sato, Jianbo Shi, Mike Zheng Shou, Antonio Torralba, Lorenzo Torresani, Mingfei Yan, Jitendra Malik", + "published": "2021-10-13", + "updated": "2022-03-11", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1411.4734v4", + "title": "Predicting Depth, Surface Normals and Semantic Labels with a Common Multi-Scale Convolutional Architecture", + "abstract": "In this paper we address three different computer vision tasks using a single\nbasic architecture: depth prediction, surface normal estimation, and semantic\nlabeling. We use a multiscale convolutional network that is able to adapt\neasily to each task using only small modifications, regressing from the input\nimage to the output map directly. Our method progressively refines predictions\nusing a sequence of scales, and captures many image details without any\nsuperpixels or low-level segmentation. 
We achieve state-of-the-art performance\non benchmarks for all three tasks.", + "authors": "David Eigen, Rob Fergus", + "published": "2014-11-18", + "updated": "2015-12-17", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1911.09676v1", + "title": "Third-Person Visual Imitation Learning via Decoupled Hierarchical Controller", + "abstract": "We study a generalized setup for learning from demonstration to build an\nagent that can manipulate novel objects in unseen scenarios by looking at only\na single video of human demonstration from a third-person perspective. To\naccomplish this goal, our agent should not only learn to understand the intent\nof the demonstrated third-person video in its context but also perform the\nintended task in its environment configuration. Our central insight is to\nenforce this structure explicitly during learning by decoupling what to achieve\n(intended task) from how to perform it (controller). We propose a hierarchical\nsetup where a high-level module learns to generate a series of first-person\nsub-goals conditioned on the third-person video demonstration, and a low-level\ncontroller predicts the actions to achieve those sub-goals. Our agent acts from\nraw image observations without any access to the full state information. We\nshow results on a real robotic platform using Baxter for the manipulation tasks\nof pouring and placing objects in a box. Project video and code are at\nhttps://pathak22.github.io/hierarchical-imitation/", + "authors": "Pratyusha Sharma, Deepak Pathak, Abhinav Gupta", + "published": "2019-11-21", + "updated": "2019-11-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV", + "cs.RO", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1807.06775v1", + "title": "Visual Affordance and Function Understanding: A Survey", + "abstract": "Nowadays, robots are dominating the manufacturing, entertainment and\nhealthcare industries. Robot vision aims to equip robots with the ability to\ndiscover information, understand it and interact with the environment. These\ncapabilities require an agent to effectively understand object affordances and\nfunctionalities in complex visual domains. In this literature survey, we first\nfocus on Visual affordances and summarize the state of the art as well as open\nproblems and research gaps. Specifically, we discuss sub-problems such as\naffordance detection, categorization, segmentation and high-level reasoning.\nFurthermore, we cover functional scene understanding and the prevalent\nfunctional descriptors used in the literature. The survey also provides\nnecessary background to the problem, sheds light on its significance and\nhighlights the existing challenges for affordance and functionality learning.", + "authors": "Mohammed Hassanin, Salman Khan, Murat Tahtali", + "published": "2018-07-18", + "updated": "2018-07-18", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.RO" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2012.02788v1", + "title": "Neural Dynamic Policies for End-to-End Sensorimotor Learning", + "abstract": "The current dominant paradigm in sensorimotor control, whether imitation or\nreinforcement learning, is to train policies directly in raw action spaces such\nas torque, joint angle, or end-effector position. 
This forces the agent to make\ndecisions individually at each timestep in training, and hence, limits the\nscalability to continuous, high-dimensional, and long-horizon tasks. In\ncontrast, research in classical robotics has, for a long time, exploited\ndynamical systems as a policy representation to learn robot behaviors via\ndemonstrations. These techniques, however, lack the flexibility and\ngeneralizability provided by deep learning or reinforcement learning and have\nremained under-explored in such settings. In this work, we begin to close this\ngap and embed the structure of a dynamical system into deep neural\nnetwork-based policies by reparameterizing action spaces via second-order\ndifferential equations. We propose Neural Dynamic Policies (NDPs) that make\npredictions in trajectory distribution space as opposed to prior policy\nlearning methods where actions represent the raw control space. The embedded\nstructure allows end-to-end policy learning for both reinforcement and\nimitation learning setups. We show that NDPs outperform the prior\nstate-of-the-art in terms of either efficiency or performance across several\nrobotic control tasks for both imitation and reinforcement learning setups.\nProject video and code are available at\nhttps://shikharbahl.github.io/neural-dynamic-policies/", + "authors": "Shikhar Bahl, Mustafa Mukadam, Abhinav Gupta, Deepak Pathak", + "published": "2020-12-04", + "updated": "2020-12-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV", + "cs.RO", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2209.13064v1", + "title": "EPIC-KITCHENS VISOR Benchmark: VIdeo Segmentations and Object Relations", + "abstract": "We introduce VISOR, a new dataset of pixel annotations and a benchmark suite\nfor segmenting hands and active objects in egocentric video. VISOR annotates\nvideos from EPIC-KITCHENS, which comes with a new set of challenges not\nencountered in current video segmentation datasets. Specifically, we need to\nensure both short- and long-term consistency of pixel-level annotations as\nobjects undergo transformative interactions, e.g. an onion is peeled, diced and\ncooked - where we aim to obtain accurate pixel-level annotations of the peel,\nonion pieces, chopping board, knife, pan, as well as the acting hands. VISOR\nintroduces an annotation pipeline, AI-powered in parts, for scalability and\nquality. In total, we publicly release 272K manual semantic masks of 257 object\nclasses, 9.9M interpolated dense masks, 67K hand-object relations, covering 36\nhours of 179 untrimmed videos. Along with the annotations, we introduce three\nchallenges in video object segmentation, interaction understanding and\nlong-term reasoning.\n For data, code and leaderboards: http://epic-kitchens.github.io/VISOR", + "authors": "Ahmad Darkhalil, Dandan Shan, Bin Zhu, Jian Ma, Amlan Kar, Richard Higgins, Sanja Fidler, David Fouhey, Dima Damen", + "published": "2022-09-26", + "updated": "2022-09-26", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2001.04583v2", + "title": "EGO-TOPO: Environment Affordances from Egocentric Video", + "abstract": "First-person video naturally brings the use of a physical environment to the\nforefront, since it shows the camera wearer interacting fluidly in a space\nbased on his intentions. However, current methods largely separate the observed\nactions from the persistent space itself. 
We introduce a model for environment\naffordances that is learned directly from egocentric video. The main idea is to\ngain a human-centric model of a physical space (such as a kitchen) that\ncaptures (1) the primary spatial zones of interaction and (2) the likely\nactivities they support. Our approach decomposes a space into a topological map\nderived from first-person activity, organizing an ego-video into a series of\nvisits to the different zones. Further, we show how to link zones across\nmultiple related environments (e.g., from videos of multiple kitchens) to\nobtain a consolidated representation of environment functionality. On\nEPIC-Kitchens and EGTEA+, we demonstrate our approach for learning scene\naffordances and anticipating future actions in long-form video.", + "authors": "Tushar Nagarajan, Yanghao Li, Christoph Feichtenhofer, Kristen Grauman", + "published": "2020-01-14", + "updated": "2020-03-27", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2012.09856v2", + "title": "Reconstructing Hand-Object Interactions in the Wild", + "abstract": "In this work we explore reconstructing hand-object interactions in the wild.\nThe core challenge of this problem is the lack of appropriate 3D labeled data.\nTo overcome this issue, we propose an optimization-based procedure which does\nnot require direct 3D supervision. The general strategy we adopt is to exploit\nall available related data (2D bounding boxes, 2D hand keypoints, 2D instance\nmasks, 3D object models, 3D in-the-lab MoCap) to provide constraints for the 3D\nreconstruction. Rather than optimizing the hand and object individually, we\noptimize them jointly which allows us to impose additional constraints based on\nhand-object contact, collision, and occlusion. Our method produces compelling\nreconstructions on the challenging in-the-wild data from the EPIC Kitchens and\nthe 100 Days of Hands datasets, across a range of object categories.\nQuantitatively, we demonstrate that our approach compares favorably to existing\napproaches in the lab settings where ground truth 3D annotations are available.", + "authors": "Zhe Cao, Ilija Radosavovic, Angjoo Kanazawa, Jitendra Malik", + "published": "2020-12-17", + "updated": "2021-12-30", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2203.12601v3", + "title": "R3M: A Universal Visual Representation for Robot Manipulation", + "abstract": "We study how visual representations pre-trained on diverse human video data\ncan enable data-efficient learning of downstream robotic manipulation tasks.\nConcretely, we pre-train a visual representation using the Ego4D human video\ndataset using a combination of time-contrastive learning, video-language\nalignment, and an L1 penalty to encourage sparse and compact representations.\nThe resulting representation, R3M, can be used as a frozen perception module\nfor downstream policy learning. Across a suite of 12 simulated robot\nmanipulation tasks, we find that R3M improves task success by over 20% compared\nto training from scratch and by over 10% compared to state-of-the-art visual\nrepresentations like CLIP and MoCo. Furthermore, R3M enables a Franka Emika\nPanda arm to learn a range of manipulation tasks in a real, cluttered apartment\ngiven just 20 demonstrations. 
Code and pre-trained models are available at\nhttps://tinyurl.com/robotr3m.", + "authors": "Suraj Nair, Aravind Rajeswaran, Vikash Kumar, Chelsea Finn, Abhinav Gupta", + "published": "2022-03-23", + "updated": "2022-11-18", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.AI", + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1904.05250v1", + "title": "Next-Active-Object prediction from Egocentric Videos", + "abstract": "Although First Person Vision systems can sense the environment from the\nuser's perspective, they are generally unable to predict his intentions and\ngoals. Since human activities can be decomposed in terms of atomic actions and\ninteractions with objects, intelligent wearable systems would benefit from the\nability to anticipate user-object interactions. Even if this task is not\ntrivial, the First Person Vision paradigm can provide important cues to address\nthis challenge. We propose to exploit the dynamics of the scene to recognize\nnext-active-objects before an object interaction begins. We train a classifier\nto discriminate trajectories leading to an object activation from all others\nand forecast next-active-objects by analyzing fixed-length trajectory segments\nwithin a temporal sliding window. The proposed method compares favorably with\nrespect to several baselines on the Activity of Daily Living (ADL) egocentric\ndataset comprising 10 hours of videos acquired by 20 subjects while performing\nunconstrained interactions with several objects.", + "authors": "Antonino Furnari, Sebastiano Battiato, Kristen Grauman, Giovanni Maria Farinella", + "published": "2019-04-10", + "updated": "2019-04-10", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2109.13396v1", + "title": "Bridge Data: Boosting Generalization of Robotic Skills with Cross-Domain Datasets", + "abstract": "Robot learning holds the promise of learning policies that generalize\nbroadly. However, such generalization requires sufficiently diverse datasets of\nthe task of interest, which can be prohibitively expensive to collect. In other\nfields, such as computer vision, it is common to utilize shared, reusable\ndatasets, such as ImageNet, to overcome this challenge, but this has proven\ndifficult in robotics. In this paper, we ask: what would it take to enable\npractical data reuse in robotics for end-to-end skill learning? We hypothesize\nthat the key is to use datasets with multiple tasks and multiple domains, such\nthat a new user that wants to train their robot to perform a new task in a new\ndomain can include this dataset in their training process and benefit from\ncross-task and cross-domain generalization. To evaluate this hypothesis, we\ncollect a large multi-domain and multi-task dataset, with 7,200 demonstrations\nconstituting 71 tasks across 10 environments, and empirically study how this\ndata can improve the learning of new tasks in new environments. We find that\njointly training with the proposed dataset and 50 demonstrations of a\nnever-before-seen task in a new domain on average leads to a 2x improvement in\nsuccess rate compared to using target domain data alone. We also find that data\nfor only a few tasks in a new domain can bridge the domain gap and make it\npossible for a robot to perform a variety of prior tasks that were only seen in\nother domains. 
These results suggest that reusing diverse multi-task and\nmulti-domain datasets, including our open-source dataset, may pave the way for\nbroader robot generalization, eliminating the need to re-collect data for each\nnew robot learning project.", + "authors": "Frederik Ebert, Yanlai Yang, Karl Schmeckpeper, Bernadette Bucher, Georgios Georgakis, Kostas Daniilidis, Chelsea Finn, Sergey Levine", + "published": "2021-09-27", + "updated": "2021-09-27", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2108.03298v2", + "title": "What Matters in Learning from Offline Human Demonstrations for Robot Manipulation", + "abstract": "Imitating human demonstrations is a promising approach to endow robots with\nvarious manipulation capabilities. While recent advances have been made in\nimitation learning and batch (offline) reinforcement learning, a lack of\nopen-source human datasets and reproducible learning methods make assessing the\nstate of the field difficult. In this paper, we conduct an extensive study of\nsix offline learning algorithms for robot manipulation on five simulated and\nthree real-world multi-stage manipulation tasks of varying complexity, and with\ndatasets of varying quality. Our study analyzes the most critical challenges\nwhen learning from offline human data for manipulation. Based on the study, we\nderive a series of lessons including the sensitivity to different algorithmic\ndesign choices, the dependence on the quality of the demonstrations, and the\nvariability based on the stopping criteria due to the different objectives in\ntraining and evaluation. We also highlight opportunities for learning from\nhuman datasets, such as the ability to learn proficient policies on\nchallenging, multi-stage tasks beyond the scope of current reinforcement\nlearning methods, and the ability to easily scale to natural, real-world\nmanipulation scenarios where only raw sensory signals are available. We have\nopen-sourced our datasets and all algorithm implementations to facilitate\nfuture research and fair comparisons in learning from human demonstration data.\nCodebase, datasets, trained models, and more available at\nhttps://arise-initiative.github.io/robomimic-web/", + "authors": "Ajay Mandlekar, Danfei Xu, Josiah Wong, Soroush Nasiriany, Chen Wang, Rohun Kulkarni, Li Fei-Fei, Silvio Savarese, Yuke Zhu, Roberto Mart\u00edn-Mart\u00edn", + "published": "2021-08-06", + "updated": "2021-09-25", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2010.14406v3", + "title": "Transporter Networks: Rearranging the Visual World for Robotic Manipulation", + "abstract": "Robotic manipulation can be formulated as inducing a sequence of spatial\ndisplacements: where the space being moved can encompass an object, part of an\nobject, or end effector. In this work, we propose the Transporter Network, a\nsimple model architecture that rearranges deep features to infer spatial\ndisplacements from visual input - which can parameterize robot actions. It\nmakes no assumptions of objectness (e.g. 
canonical poses, models, or\nkeypoints), it exploits spatial symmetries, and is orders of magnitude more\nsample efficient than our benchmarked alternatives in learning vision-based\nmanipulation tasks: from stacking a pyramid of blocks, to assembling kits with\nunseen objects; from manipulating deformable ropes, to pushing piles of small\nobjects with closed-loop feedback. Our method can represent complex multi-modal\npolicy distributions and generalizes to multi-step sequential tasks, as well as\n6DoF pick-and-place. Experiments on 10 simulated tasks show that it learns\nfaster and generalizes better than a variety of end-to-end baselines, including\npolicies that use ground-truth object poses. We validate our methods with\nhardware in the real world. Experiment videos and code are available at\nhttps://transporternets.github.io", + "authors": "Andy Zeng, Pete Florence, Jonathan Tompson, Stefan Welker, Jonathan Chien, Maria Attarian, Travis Armstrong, Ivan Krasin, Dan Duong, Ayzaan Wahid, Vikas Sindhwani, Johnny Lee", + "published": "2020-10-27", + "updated": "2022-01-05", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1806.10293v3", + "title": "QT-Opt: Scalable Deep Reinforcement Learning for Vision-Based Robotic Manipulation", + "abstract": "In this paper, we study the problem of learning vision-based dynamic\nmanipulation skills using a scalable reinforcement learning approach. We study\nthis problem in the context of grasping, a longstanding challenge in robotic\nmanipulation. In contrast to static learning behaviors that choose a grasp\npoint and then execute the desired grasp, our method enables closed-loop\nvision-based control, whereby the robot continuously updates its grasp strategy\nbased on the most recent observations to optimize long-horizon grasp success.\nTo that end, we introduce QT-Opt, a scalable self-supervised vision-based\nreinforcement learning framework that can leverage over 580k real-world grasp\nattempts to train a deep neural network Q-function with over 1.2M parameters to\nperform closed-loop, real-world grasping that generalizes to 96% grasp success\non unseen objects. Aside from attaining a very high success rate, our method\nexhibits behaviors that are quite distinct from more standard grasping systems:\nusing only RGB vision-based perception from an over-the-shoulder camera, our\nmethod automatically learns regrasping strategies, probes objects to find the\nmost effective grasps, learns to reposition objects and perform other\nnon-prehensile pre-grasp manipulations, and responds dynamically to\ndisturbances and perturbations.", + "authors": "Dmitry Kalashnikov, Alex Irpan, Peter Pastor, Julian Ibarz, Alexander Herzog, Eric Jang, Deirdre Quillen, Ethan Holly, Mrinal Kalakrishnan, Vincent Vanhoucke, Sergey Levine", + "published": "2018-06-27", + "updated": "2018-11-28", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV", + "cs.RO", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2009.02615v2", + "title": "Learning Topological Motion Primitives for Knot Planning", + "abstract": "In this paper, we approach the challenging problem of motion planning for\nknot tying. We propose a hierarchical approach in which the top layer produces\na topological plan and the bottom layer translates this plan into continuous\nrobot motion. The top layer decomposes a knotting task into sequences of\nabstract topological actions based on knot theory. 
The bottom layer translates\neach of these abstract actions into robot motion trajectories through learned\ntopological motion primitives. To adapt each topological action to the specific\nrope geometry, the motion primitives take the observed rope configuration as\ninput. We train the motion primitives by imitating human demonstrations and\nreinforcement learning in simulation. To generalize human demonstrations of\nsimple knots into more complex knots, we observe similarities in the motion\nstrategies of different topological actions and design the neural network\nstructure to exploit such similarities. We demonstrate that our learned motion\nprimitives can be used to efficiently generate motion plans for tying the\noverhand knot. The motion plan can then be executed on a real robot using\nvisual tracking and Model Predictive Control. We also demonstrate that our\nlearned motion primitives can be composed to tie a more complex pentagram-like\nknot despite being only trained on human demonstrations of simpler knots.", + "authors": "Mengyuan Yan, Gen Li, Yilin Zhu, Jeannette Bohg", + "published": "2020-09-05", + "updated": "2020-10-06", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2110.15360v1", + "title": "Accelerating Robotic Reinforcement Learning via Parameterized Action Primitives", + "abstract": "Despite the potential of reinforcement learning (RL) for building\ngeneral-purpose robotic systems, training RL agents to solve robotics tasks\nstill remains challenging due to the difficulty of exploration in purely\ncontinuous action spaces. Addressing this problem is an active area of research\nwith the majority of focus on improving RL methods via better optimization or\nmore efficient exploration. An alternate but important component to consider\nimproving is the interface of the RL algorithm with the robot. In this work, we\nmanually specify a library of robot action primitives (RAPS), parameterized\nwith arguments that are learned by an RL policy. These parameterized primitives\nare expressive, simple to implement, enable efficient exploration and can be\ntransferred across robots, tasks and environments. We perform a thorough\nempirical study across challenging tasks in three distinct domains with image\ninput and a sparse terminal reward. We find that our simple change to the\naction interface substantially improves both the learning efficiency and task\nperformance irrespective of the underlying RL algorithm, significantly\noutperforming prior methods which learn skills from offline expert data. Code\nand videos at https://mihdalal.github.io/raps/", + "authors": "Murtaza Dalal, Deepak Pathak, Ruslan Salakhutdinov", + "published": "2021-10-28", + "updated": "2021-10-28", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV", + "cs.RO" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2202.10448v2", + "title": "Robotic Telekinesis: Learning a Robotic Hand Imitator by Watching Humans on Youtube", + "abstract": "We build a system that enables any human to control a robot hand and arm,\nsimply by demonstrating motions with their own hand. The robot observes the\nhuman operator via a single RGB camera and imitates their actions in real-time.\nHuman hands and robot hands differ in shape, size, and joint structure, and\nperforming this translation from a single uncalibrated camera is a highly\nunderconstrained problem. 
Moreover, the retargeted trajectories must\neffectively execute tasks on a physical robot, which requires them to be\ntemporally smooth and free of self-collisions. Our key insight is that while\npaired human-robot correspondence data is expensive to collect, the internet\ncontains a massive corpus of rich and diverse human hand videos. We leverage\nthis data to train a system that understands human hands and retargets a human\nvideo stream into a robot hand-arm trajectory that is smooth, swift, safe, and\nsemantically similar to the guiding demonstration. We demonstrate that it\nenables previously untrained people to teleoperate a robot on various dexterous\nmanipulation tasks. Our low-cost, glove-free, marker-free remote teleoperation\nsystem makes robot teaching more accessible and we hope that it can aid robots\nin learning to act autonomously in the real world. Videos at\nhttps://robotic-telekinesis.github.io/", + "authors": "Aravind Sivakumar, Kenneth Shaw, Deepak Pathak", + "published": "2022-02-21", + "updated": "2022-07-24", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.AI", + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2107.02349v1", + "title": "Physical Interaction as Communication: Learning Robot Objectives Online from Human Corrections", + "abstract": "When a robot performs a task next to a human, physical interaction is\ninevitable: the human might push, pull, twist, or guide the robot. The\nstate-of-the-art treats these interactions as disturbances that the robot\nshould reject or avoid. At best, these robots respond safely while the human\ninteracts; but after the human lets go, these robots simply return to their\noriginal behavior. We recognize that physical human-robot interaction (pHRI) is\noften intentional -- the human intervenes on purpose because the robot is not\ndoing the task correctly. In this paper, we argue that when pHRI is intentional\nit is also informative: the robot can leverage interactions to learn how it\nshould complete the rest of its current task even after the person lets go. We\nformalize pHRI as a dynamical system, where the human has in mind an objective\nfunction they want the robot to optimize, but the robot does not get direct\naccess to the parameters of this objective -- they are internal to the human.\nWithin our proposed framework human interactions become observations about the\ntrue objective. We introduce approximations to learn from and respond to pHRI\nin real-time. We recognize that not all human corrections are perfect: often\nusers interact with the robot noisily, and so we improve the efficiency of\nrobot learning from pHRI by reducing unintended learning. Finally, we conduct\nsimulations and user studies on a robotic manipulator to compare our proposed\napproach to the state-of-the-art. Our results indicate that learning from pHRI\nleads to better task performance and improved human satisfaction.", + "authors": "Dylan P. Losey, Andrea Bajcsy, Marcia K. O'Malley, Anca D. Dragan", + "published": "2021-07-06", + "updated": "2021-07-06", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.LG", + "cs.SY", + "eess.SY" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2203.03580v2", + "title": "The Unsurprising Effectiveness of Pre-Trained Vision Models for Control", + "abstract": "Recent years have seen the emergence of pre-trained representations as a\npowerful abstraction for AI applications in computer vision, natural language,\nand speech. 
However, policy learning for control is still dominated by a\ntabula-rasa learning paradigm, with visuo-motor policies often trained from\nscratch using data from deployment environments. In this context, we revisit\nand study the role of pre-trained visual representations for control, and in\nparticular representations trained on large-scale computer vision datasets.\nThrough extensive empirical evaluation in diverse control domains (Habitat,\nDeepMind Control, Adroit, Franka Kitchen), we isolate and study the importance\nof different representation training methods, data augmentations, and feature\nhierarchies. Overall, we find that pre-trained visual representations can be\ncompetitive or even better than ground-truth state representations to train\ncontrol policies. This is in spite of using only out-of-domain data from\nstandard vision datasets, without any in-domain data from the deployment\nenvironments. Source code and more at\nhttps://sites.google.com/view/pvr-control.", + "authors": "Simone Parisi, Aravind Rajeswaran, Senthil Purushwalkam, Abhinav Gupta", + "published": "2022-03-07", + "updated": "2022-08-08", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG", + "cs.RO" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2107.03380v3", + "title": "RRL: Resnet as representation for Reinforcement Learning", + "abstract": "The ability to autonomously learn behaviors via direct interactions in\nuninstrumented environments can lead to generalist robots capable of enhancing\nproductivity or providing care in unstructured settings like homes. Such\nuninstrumented settings warrant operations only using the robot's\nproprioceptive sensor such as onboard cameras, joint encoders, etc which can be\nchallenging for policy learning owing to the high dimensionality and partial\nobservability issues. We propose RRL: Resnet as representation for\nReinforcement Learning -- a straightforward yet effective approach that can\nlearn complex behaviors directly from proprioceptive inputs. RRL fuses features\nextracted from pre-trained Resnet into the standard reinforcement learning\npipeline and delivers results comparable to learning directly from the state.\nIn a simulated dexterous manipulation benchmark, where the state of the art\nmethods fail to make significant progress, RRL delivers contact rich behaviors.\nThe appeal of RRL lies in its simplicity in bringing together progress from the\nfields of Representation Learning, Imitation Learning, and Reinforcement\nLearning. Its effectiveness in learning behaviors directly from visual inputs\nwith performance and sample efficiency matching learning directly from the\nstate, even in complex high dimensional domains, is far from obvious.", + "authors": "Rutav Shah, Vikash Kumar", + "published": "2021-07-07", + "updated": "2021-11-11", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1604.01347v1", + "title": "Marr Revisited: 2D-3D Alignment via Surface Normal Prediction", + "abstract": "We introduce an approach that leverages surface normal predictions, along\nwith appearance cues, to retrieve 3D models for objects depicted in 2D still\nimages from a large CAD object library. Critical to the success of our approach\nis the ability to recover accurate surface normals for objects in the depicted\nscene. We introduce a skip-network model built on the pre-trained Oxford VGG\nconvolutional neural network (CNN) for surface normal prediction. 
Our model\nachieves state-of-the-art accuracy on the NYUv2 RGB-D dataset for surface\nnormal prediction, and recovers fine object detail compared to previous\nmethods. Furthermore, we develop a two-stream network over the input image and\npredicted surface normals that jointly learns pose and style for CAD model\nretrieval. When using the predicted surface normals, our two-stream network\nmatches prior work using surface normals computed from RGB-D images on the task\nof pose prediction, and achieves state of the art when using RGB-D input.\nFinally, our two-stream network allows us to retrieve CAD models that better\nmatch the style and pose of a depicted object compared with baseline\napproaches.", + "authors": "Aayush Bansal, Bryan Russell, Abhinav Gupta", + "published": "2016-04-05", + "updated": "2016-04-05", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1509.06825v1", + "title": "Supersizing Self-supervision: Learning to Grasp from 50K Tries and 700 Robot Hours", + "abstract": "Current learning-based robot grasping approaches exploit human-labeled\ndatasets for training the models. However, there are two problems with such a\nmethodology: (a) since each object can be grasped in multiple ways, manually\nlabeling grasp locations is not a trivial task; (b) human labeling is biased by\nsemantics. While there have been attempts to train robots using trial-and-error\nexperiments, the amount of data used in such experiments remains substantially\nlow and hence makes the learner prone to over-fitting. In this paper, we take\nthe leap of increasing the available training data to 40 times more than prior\nwork, leading to a dataset size of 50K data points collected over 700 hours of\nrobot grasping attempts. This allows us to train a Convolutional Neural Network\n(CNN) for the task of predicting grasp locations without severe overfitting. In\nour formulation, we recast the regression problem to an 18-way binary\nclassification over image patches. We also present a multi-stage learning\napproach where a CNN trained in one stage is used to collect hard negatives in\nsubsequent stages. Our experiments clearly show the benefit of using\nlarge-scale datasets (and multi-stage training) for the task of grasping. We\nalso compare to several baselines and show state-of-the-art performance on\ngeneralization to unseen objects for grasping.", + "authors": "Lerrel Pinto, Abhinav Gupta", + "published": "2015-09-23", + "updated": "2015-09-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV", + "cs.RO" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2108.05877v5", + "title": "DexMV: Imitation Learning for Dexterous Manipulation from Human Videos", + "abstract": "While significant progress has been made on understanding hand-object\ninteractions in computer vision, it is still very challenging for robots to\nperform complex dexterous manipulation. In this paper, we propose a new\nplatform and pipeline DexMV (Dexterous Manipulation from Videos) for imitation\nlearning. We design a platform with: (i) a simulation system for complex\ndexterous manipulation tasks with a multi-finger robot hand and (ii) a computer\nvision system to record large-scale demonstrations of a human hand conducting\nthe same tasks. In our novel pipeline, we extract 3D hand and object poses from\nvideos, and propose a novel demonstration translation method to convert human\nmotion to robot demonstrations. 
We then apply and benchmark multiple imitation\nlearning algorithms with the demonstrations. We show that the demonstrations\ncan indeed improve robot learning by a large margin and solve the complex tasks\nwhich reinforcement learning alone cannot solve. More details can be found in\nthe project page: https://yzqin.github.io/dexmv", + "authors": "Yuzhe Qin, Yueh-Hua Wu, Shaowei Liu, Hanwen Jiang, Ruihan Yang, Yang Fu, Xiaolong Wang", + "published": "2021-08-12", + "updated": "2022-07-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV", + "cs.RO" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1803.09956v3", + "title": "Learning Synergies between Pushing and Grasping with Self-supervised Deep Reinforcement Learning", + "abstract": "Skilled robotic manipulation benefits from complex synergies between\nnon-prehensile (e.g. pushing) and prehensile (e.g. grasping) actions: pushing\ncan help rearrange cluttered objects to make space for arms and fingers;\nlikewise, grasping can help displace objects to make pushing movements more\nprecise and collision-free. In this work, we demonstrate that it is possible to\ndiscover and learn these synergies from scratch through model-free deep\nreinforcement learning. Our method involves training two fully convolutional\nnetworks that map from visual observations to actions: one infers the utility\nof pushes for a dense pixel-wise sampling of end effector orientations and\nlocations, while the other does the same for grasping. Both networks are\ntrained jointly in a Q-learning framework and are entirely self-supervised by\ntrial and error, where rewards are provided from successful grasps. In this\nway, our policy learns pushing motions that enable future grasps, while\nlearning grasps that can leverage past pushes. During picking experiments in\nboth simulation and real-world scenarios, we find that our system quickly\nlearns complex behaviors amid challenging cases of clutter, and achieves better\ngrasping success rates and picking efficiencies than baseline alternatives\nafter only a few hours of training. We further demonstrate that our method is\ncapable of generalizing to novel objects. Qualitative results (videos), code,\npre-trained models, and simulation environments are available at\nhttp://vpg.cs.princeton.edu", + "authors": "Andy Zeng, Shuran Song, Stefan Welker, Johnny Lee, Alberto Rodriguez, Thomas Funkhouser", + "published": "2018-03-27", + "updated": "2018-09-30", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.AI", + "cs.CV", + "cs.LG", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2207.09450v1", + "title": "Human-to-Robot Imitation in the Wild", + "abstract": "We approach the problem of learning by watching humans in the wild. While\ntraditional approaches in Imitation and Reinforcement Learning are promising\nfor learning in the real world, they are either sample inefficient or are\nconstrained to lab settings. Meanwhile, there has been a lot of success in\nprocessing passive, unstructured human data. We propose tackling this problem\nvia an efficient one-shot robot learning algorithm, centered around learning\nfrom a third-person perspective. We call our method WHIRL: In-the-Wild Human\nImitating Robot Learning. WHIRL extracts a prior over the intent of the human\ndemonstrator, using it to initialize our agent's policy. 
We introduce an\nefficient real-world policy learning scheme that improves using interactions.\nOur key contributions are a simple sampling-based policy optimization approach,\na novel objective function for aligning human and robot videos as well as an\nexploration method to boost sample efficiency. We show one-shot generalization\nand success in real-world settings, including 20 different manipulation tasks\nin the wild. Videos and talk at https://human2robot.github.io", + "authors": "Shikhar Bahl, Abhinav Gupta, Deepak Pathak", + "published": "2022-07-19", + "updated": "2022-07-19", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.AI", + "cs.CV", + "cs.LG", + "cs.SY", + "eess.SY" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2204.01696v1", + "title": "Joint Hand Motion and Interaction Hotspots Prediction from Egocentric Videos", + "abstract": "We propose to forecast future hand-object interactions given an egocentric\nvideo. Instead of predicting action labels or pixels, we directly predict the\nhand motion trajectory and the future contact points on the next active object\n(i.e., interaction hotspots). This relatively low-dimensional representation\nprovides a concrete description of future interactions. To tackle this task, we\nfirst provide an automatic way to collect trajectory and hotspots labels on\nlarge-scale data. We then use this data to train an Object-Centric Transformer\n(OCT) model for prediction. Our model performs hand and object interaction\nreasoning via the self-attention mechanism in Transformers. OCT also provides a\nprobabilistic framework to sample the future trajectory and hotspots to handle\nuncertainty in prediction. We perform experiments on the Epic-Kitchens-55,\nEpic-Kitchens-100, and EGTEA Gaze+ datasets, and show that OCT significantly\noutperforms state-of-the-art approaches by a large margin. Project page is\navailable at https://stevenlsw.github.io/hoi-forecast .", + "authors": "Shaowei Liu, Subarna Tripathi, Somdeb Majumdar, Xiaolong Wang", + "published": "2022-04-04", + "updated": "2022-04-04", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2203.06173v1", + "title": "Masked Visual Pre-training for Motor Control", + "abstract": "This paper shows that self-supervised visual pre-training from real-world\nimages is effective for learning motor control tasks from pixels. We first\ntrain the visual representations by masked modeling of natural images. We then\nfreeze the visual encoder and train neural network controllers on top with\nreinforcement learning. We do not perform any task-specific fine-tuning of the\nencoder; the same visual representations are used for all motor control tasks.\nTo the best of our knowledge, this is the first self-supervised model to\nexploit real-world images at scale for motor control. To accelerate progress in\nlearning from pixels, we contribute a benchmark suite of hand-designed tasks\nvarying in movements, scenes, and robots. Without relying on labels,\nstate-estimation, or expert demonstrations, we consistently outperform\nsupervised encoders by up to 80% absolute success rate, sometimes even matching\nthe oracle state performance. 
We also find that in-the-wild images, e.g., from\nYouTube or Egocentric videos, lead to better visual representations for various\nmanipulation tasks than ImageNet images.", + "authors": "Tete Xiao, Ilija Radosavovic, Trevor Darrell, Jitendra Malik", + "published": "2022-03-11", + "updated": "2022-03-11", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG", + "cs.RO" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2206.04745v3", + "title": "Mildly Conservative Q-Learning for Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) defines the task of learning from a\nstatic logged dataset without continually interacting with the environment. The\ndistribution shift between the learned policy and the behavior policy makes it\nnecessary for the value function to stay conservative such that\nout-of-distribution (OOD) actions will not be severely overestimated. However,\nexisting approaches, penalizing the unseen actions or regularizing with the\nbehavior policy, are too pessimistic, which suppresses the generalization of\nthe value function and hinders the performance improvement. This paper explores\nmild but enough conservatism for offline learning while not harming\ngeneralization. We propose Mildly Conservative Q-learning (MCQ), where OOD\nactions are actively trained by assigning them proper pseudo Q values. We\ntheoretically show that MCQ induces a policy that behaves at least as well as\nthe behavior policy and no erroneous overestimation will occur for OOD actions.\nExperimental results on the D4RL benchmarks demonstrate that MCQ achieves\nremarkable performance compared with prior work. Furthermore, MCQ shows\nsuperior generalization ability when transferring from offline to online, and\nsignificantly outperforms baselines. Our code is publicly available at\nhttps://github.com/dmksjfl/MCQ.", + "authors": "Jiafei Lyu, Xiaoteng Ma, Xiu Li, Zongqing Lu", + "published": "2022-06-09", + "updated": "2024-02-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1810.13400v3", + "title": "Differentiable MPC for End-to-end Planning and Control", + "abstract": "We present foundations for using Model Predictive Control (MPC) as a\ndifferentiable policy class for reinforcement learning in continuous state and\naction spaces. This provides one way of leveraging and combining the advantages\nof model-free and model-based approaches. Specifically, we differentiate\nthrough MPC by using the KKT conditions of the convex approximation at a fixed\npoint of the controller. Using this strategy, we are able to learn the cost and\ndynamics of a controller via end-to-end learning. Our experiments focus on\nimitation learning in the pendulum and cartpole domains, where we learn the\ncost and dynamics terms of an MPC policy class. We show that our MPC policies\nare significantly more data-efficient than a generic neural network and that\nour method is superior to traditional system identification in a setting where\nthe expert is unrealizable.", + "authors": "Brandon Amos, Ivan Dario Jimenez Rodriguez, Jacob Sacks, Byron Boots, J. 
Zico Kolter", + "published": "2018-10-31", + "updated": "2019-10-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "math.OC", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1504.00702v5", + "title": "End-to-End Training of Deep Visuomotor Policies", + "abstract": "Policy search methods can allow robots to learn control policies for a wide\nrange of tasks, but practical applications of policy search often require\nhand-engineered components for perception, state estimation, and low-level\ncontrol. In this paper, we aim to answer the following question: does training\nthe perception and control systems jointly end-to-end provide better\nperformance than training each component separately? To this end, we develop a\nmethod that can be used to learn policies that map raw image observations\ndirectly to torques at the robot's motors. The policies are represented by deep\nconvolutional neural networks (CNNs) with 92,000 parameters, and are trained\nusing a partially observed guided policy search method, which transforms policy\nsearch into supervised learning, with supervision provided by a simple\ntrajectory-centric reinforcement learning method. We evaluate our method on a\nrange of real-world manipulation tasks that require close coordination between\nvision and control, such as screwing a cap onto a bottle, and present simulated\ncomparisons to a range of prior policy search methods.", + "authors": "Sergey Levine, Chelsea Finn, Trevor Darrell, Pieter Abbeel", + "published": "2015-04-02", + "updated": "2016-04-19", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV", + "cs.RO" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2008.04899v1", + "title": "Visual Imitation Made Easy", + "abstract": "Visual imitation learning provides a framework for learning complex\nmanipulation behaviors by leveraging human demonstrations. However, current\ninterfaces for imitation such as kinesthetic teaching or teleoperation\nprohibitively restrict our ability to efficiently collect large-scale data in\nthe wild. Obtaining such diverse demonstration data is paramount for the\ngeneralization of learned skills to novel scenarios. In this work, we present\nan alternate interface for imitation that simplifies the data collection\nprocess while allowing for easy transfer to robots. We use commercially\navailable reacher-grabber assistive tools both as a data collection device and\nas the robot's end-effector. To extract action information from these visual\ndemonstrations, we use off-the-shelf Structure from Motion (SfM) techniques in\naddition to training a finger detection network. We experimentally evaluate on\ntwo challenging tasks: non-prehensile pushing and prehensile stacking, with\n1000 diverse demonstrations for each task. For both tasks, we use standard\nbehavior cloning to learn executable policies from the previously collected\noffline demonstrations. To improve learning performance, we employ a variety of\ndata augmentations and provide an extensive analysis of its effects. Finally,\nwe demonstrate the utility of our interface by evaluating on real robotic\nscenarios with previously unseen objects and achieve a 87% success rate on\npushing and a 62% success rate on stacking. 
Robot videos are available at\nhttps://dhiraj100892.github.io/Visual-Imitation-Made-Easy.", + "authors": "Sarah Young, Dhiraj Gandhi, Shubham Tulsiani, Abhinav Gupta, Pieter Abbeel, Lerrel Pinto", + "published": "2020-08-11", + "updated": "2020-08-11", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2203.01577v4", + "title": "HOI4D: A 4D Egocentric Dataset for Category-Level Human-Object Interaction", + "abstract": "We present HOI4D, a large-scale 4D egocentric dataset with rich annotations,\nto catalyze the research of category-level human-object interaction. HOI4D\nconsists of 2.4M RGB-D egocentric video frames over 4000 sequences collected by\n4 participants interacting with 800 different object instances from 16\ncategories over 610 different indoor rooms. Frame-wise annotations for panoptic\nsegmentation, motion segmentation, 3D hand pose, category-level object pose and\nhand action have also been provided, together with reconstructed object meshes\nand scene point clouds. With HOI4D, we establish three benchmarking tasks to\npromote category-level HOI from 4D visual signals including semantic\nsegmentation of 4D dynamic point cloud sequences, category-level object pose\ntracking, and egocentric action segmentation with diverse interaction targets.\nIn-depth analysis shows HOI4D poses great challenges to existing methods and\nproduces great research opportunities.", + "authors": "Yunze Liu, Yun Liu, Che Jiang, Kangbo Lyu, Weikang Wan, Hao Shen, Boqiang Liang, Zhoujie Fu, He Wang, Li Yi", + "published": "2022-03-03", + "updated": "2024-01-03", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2204.07153v1", + "title": "What's in your hands? 3D Reconstruction of Generic Objects in Hands", + "abstract": "Our work aims to reconstruct hand-held objects given a single RGB image. In\ncontrast to prior works that typically assume known 3D templates and reduce the\nproblem to 3D pose estimation, our work reconstructs generic hand-held object\nwithout knowing their 3D templates. Our key insight is that hand articulation\nis highly predictive of the object shape, and we propose an approach that\nconditionally reconstructs the object based on the articulation and the visual\ninput. Given an image depicting a hand-held object, we first use off-the-shelf\nsystems to estimate the underlying hand pose and then infer the object shape in\na normalized hand-centric coordinate frame. We parameterized the object by\nsigned distance which are inferred by an implicit network which leverages the\ninformation from both visual feature and articulation-aware coordinates to\nprocess a query point. We perform experiments across three datasets and show\nthat our method consistently outperforms baselines and is able to reconstruct a\ndiverse set of objects. 
We analyze the benefits and robustness of explicit\narticulation conditioning and also show that this allows the hand pose\nestimation to further improve in test-time optimization.", + "authors": "Yufei Ye, Abhinav Gupta, Shubham Tulsiani", + "published": "2022-04-14", + "updated": "2022-04-14", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1603.04908v3", + "title": "First Person Action-Object Detection with EgoNet", + "abstract": "Unlike traditional third-person cameras mounted on robots, a first-person\ncamera, captures a person's visual sensorimotor object interactions from up\nclose. In this paper, we study the tight interplay between our momentary visual\nattention and motor action with objects from a first-person camera. We propose\na concept of action-objects---the objects that capture person's conscious\nvisual (watching a TV) or tactile (taking a cup) interactions. Action-objects\nmay be task-dependent but since many tasks share common person-object spatial\nconfigurations, action-objects exhibit a characteristic 3D spatial distance and\norientation with respect to the person.\n We design a predictive model that detects action-objects using EgoNet, a\njoint two-stream network that holistically integrates visual appearance (RGB)\nand 3D spatial layout (depth and height) cues to predict per-pixel likelihood\nof action-objects. Our network also incorporates a first-person coordinate\nembedding, which is designed to learn a spatial distribution of the\naction-objects in the first-person data. We demonstrate EgoNet's predictive\npower, by showing that it consistently outperforms previous baseline\napproaches. Furthermore, EgoNet also exhibits a strong generalization ability,\ni.e., it predicts semantically meaningful objects in novel first-person\ndatasets. Our method's ability to effectively detect action-objects could be\nused to improve robots' understanding of human-object interactions.", + "authors": "Gedas Bertasius, Hyun Soo Park, Stella X. Yu, Jianbo Shi", + "published": "2016-03-15", + "updated": "2017-06-10", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1912.04344v2", + "title": "Grasping in the Wild:Learning 6DoF Closed-Loop Grasping from Low-Cost Demonstrations", + "abstract": "Intelligent manipulation benefits from the capacity to flexibly control an\nend-effector with high degrees of freedom (DoF) and dynamically react to the\nenvironment. However, due to the challenges of collecting effective training\ndata and learning efficiently, most grasping algorithms today are limited to\ntop-down movements and open-loop execution. In this work, we propose a new\nlow-cost hardware interface for collecting grasping demonstrations by people in\ndiverse environments. Leveraging this data, we show that it is possible to\ntrain a robust end-to-end 6DoF closed-loop grasping model with reinforcement\nlearning that transfers to real robots. A key aspect of our grasping model is\nthat it uses \"action-view\" based rendering to simulate future states with\nrespect to different possible actions. By evaluating these states using a\nlearned value function (Q-function), our method is able to better select\ncorresponding actions that maximize total rewards (i.e., grasping success). 
Our\nfinal grasping system is able to achieve reliable 6DoF closed-loop grasping of\nnovel objects across various scene configurations, as well as dynamic scenes\nwith moving objects.", + "authors": "Shuran Song, Andy Zeng, Johnny Lee, Thomas Funkhouser", + "published": "2019-12-09", + "updated": "2020-06-17", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.RO" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2106.09119v2", + "title": "Behavioral Priors and Dynamics Models: Improving Performance and Domain Transfer in Offline RL", + "abstract": "Offline Reinforcement Learning (RL) aims to extract near-optimal policies\nfrom imperfect offline data without additional environment interactions.\nExtracting policies from diverse offline datasets has the potential to expand\nthe range of applicability of RL by making the training process safer, faster,\nand more streamlined. We investigate how to improve the performance of offline\nRL algorithms, its robustness to the quality of offline data, as well as its\ngeneralization capabilities. To this end, we introduce Offline Model-based RL\nwith Adaptive Behavioral Priors (MABE). Our algorithm is based on the finding\nthat dynamics models, which support within-domain generalization, and\nbehavioral priors, which support cross-domain generalization, are\ncomplementary. When combined together, they substantially improve the\nperformance and generalization of offline RL policies. In the widely studied\nD4RL offline RL benchmark, we find that MABE achieves higher average\nperformance compared to prior model-free and model-based algorithms. In\nexperiments that require cross-domain generalization, we find that MABE\noutperforms prior methods. Our website is available at\nhttps://sites.google.com/berkeley.edu/mabe .", + "authors": "Catherine Cang, Aravind Rajeswaran, Pieter Abbeel, Michael Laskin", + "published": "2021-06-16", + "updated": "2021-06-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2110.10905v1", + "title": "Efficient Robotic Manipulation Through Offline-to-Online Reinforcement Learning and Goal-Aware State Information", + "abstract": "End-to-end learning robotic manipulation with high data efficiency is one of\nthe key challenges in robotics. The latest methods that utilize human\ndemonstration data and unsupervised representation learning has proven to be a\npromising direction to improve RL learning efficiency. The use of demonstration\ndata also allows \"warming-up\" the RL policies using offline data with imitation\nlearning or the recently emerged offline reinforcement learning algorithms.\nHowever, existing works often treat offline policy learning and online\nexploration as two separate processes, which are often accompanied by severe\nperformance drop during the offline-to-online transition. Furthermore, many\nrobotic manipulation tasks involve complex sub-task structures, which are very\nchallenging to be solved in RL with sparse reward. In this work, we propose a\nunified offline-to-online RL framework that resolves the transition performance\ndrop issue. 
Additionally, we introduce goal-aware state information to the RL\nagent, which can greatly reduce task complexity and accelerate policy learning.\nCombined with an advanced unsupervised representation learning module, our\nframework achieves great training efficiency and performance compared with the\nstate-of-the-art methods in multiple robotic manipulation tasks.", + "authors": "Jin Li, Xianyuan Zhan, Zixu Xiao, Guyue Zhou", + "published": "2021-10-21", + "updated": "2021-10-21", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2005.05951v3", + "title": "MOReL : Model-Based Offline Reinforcement Learning", + "abstract": "In offline reinforcement learning (RL), the goal is to learn a highly\nrewarding policy based solely on a dataset of historical interactions with the\nenvironment. The ability to train RL policies offline can greatly expand the\napplicability of RL, its data efficiency, and its experimental velocity. Prior\nwork in offline RL has been confined almost exclusively to model-free RL\napproaches. In this work, we present MOReL, an algorithmic framework for\nmodel-based offline RL. This framework consists of two steps: (a) learning a\npessimistic MDP (P-MDP) using the offline dataset; and (b) learning a\nnear-optimal policy in this P-MDP. The learned P-MDP has the property that for\nany policy, the performance in the real environment is approximately\nlower-bounded by the performance in the P-MDP. This enables it to serve as a\ngood surrogate for purposes of policy evaluation and learning, and overcome\ncommon pitfalls of model-based RL like model exploitation. Theoretically, we\nshow that MOReL is minimax optimal (up to log factors) for offline RL. Through\nexperiments, we show that MOReL matches or exceeds state-of-the-art results in\nwidely studied offline RL benchmarks. Moreover, the modular design of MOReL\nenables future advances in its components (e.g. generative modeling,\nuncertainty estimation, planning etc.) to directly translate into advances for\noffline RL.", + "authors": "Rahul Kidambi, Aravind Rajeswaran, Praneeth Netrapalli, Thorsten Joachims", + "published": "2020-05-12", + "updated": "2021-03-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2110.09796v1", + "title": "Offline Reinforcement Learning with Value-based Episodic Memory", + "abstract": "Offline reinforcement learning (RL) shows promise of applying RL to\nreal-world problems by effectively utilizing previously collected data. Most\nexisting offline RL algorithms use regularization or constraints to suppress\nextrapolation error for actions outside the dataset. In this paper, we adopt a\ndifferent framework, which learns the V-function instead of the Q-function to\nnaturally keep the learning procedure within the support of an offline dataset.\nTo enable effective generalization while maintaining proper conservatism in\noffline learning, we propose Expectile V-Learning (EVL), which smoothly\ninterpolates between the optimal value learning and behavior cloning. Further,\nwe introduce implicit planning along offline trajectories to enhance learned\nV-values and accelerate convergence. Together, we present a new offline method\ncalled Value-based Episodic Memory (VEM). 
We provide theoretical analysis for\nthe convergence properties of our proposed VEM method, and empirical results in\nthe D4RL benchmark show that our method achieves superior performance in most\ntasks, particularly in sparse-reward tasks.", + "authors": "Xiaoteng Ma, Yiqin Yang, Hao Hu, Qihan Liu, Jun Yang, Chongjie Zhang, Qianchuan Zhao, Bin Liang", + "published": "2021-10-19", + "updated": "2021-10-19", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2107.06106v2", + "title": "Conservative Offline Distributional Reinforcement Learning", + "abstract": "Many reinforcement learning (RL) problems in practice are offline, learning\npurely from observational data. A key challenge is how to ensure the learned\npolicy is safe, which requires quantifying the risk associated with different\nactions. In the online setting, distributional RL algorithms do so by learning\nthe distribution over returns (i.e., cumulative rewards) instead of the\nexpected return; beyond quantifying risk, they have also been shown to learn\nbetter representations for planning. We propose Conservative Offline\nDistributional Actor Critic (CODAC), an offline RL algorithm suitable for both\nrisk-neutral and risk-averse domains. CODAC adapts distributional RL to the\noffline setting by penalizing the predicted quantiles of the return for\nout-of-distribution actions. We prove that CODAC learns a conservative return\ndistribution -- in particular, for finite MDPs, CODAC converges to an uniform\nlower bound on the quantiles of the return distribution; our proof relies on a\nnovel analysis of the distributional Bellman operator. In our experiments, on\ntwo challenging robot navigation tasks, CODAC successfully learns risk-averse\npolicies using offline data collected purely from risk-neutral agents.\nFurthermore, CODAC is state-of-the-art on the D4RL MuJoCo benchmark in terms of\nboth expected and risk-sensitive performance.", + "authors": "Yecheng Jason Ma, Dinesh Jayaraman, Osbert Bastani", + "published": "2021-07-12", + "updated": "2021-10-26", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.06734v1", + "title": "Corruption Robust Offline Reinforcement Learning with Human Feedback", + "abstract": "We study data corruption robustness for reinforcement learning with human\nfeedback (RLHF) in an offline setting. Given an offline dataset of pairs of\ntrajectories along with feedback about human preferences, an\n$\\varepsilon$-fraction of the pairs is corrupted (e.g., feedback flipped or\ntrajectory features manipulated), capturing an adversarial attack or noisy\nhuman preferences. We aim to design algorithms that identify a near-optimal\npolicy from the corrupted data, with provable guarantees. Existing theoretical\nworks have separately studied the settings of corruption robust RL (learning\nfrom scalar rewards directly under corruption) and offline RLHF (learning from\nhuman feedback without corruption); however, they are inapplicable to our\nproblem of dealing with corrupted data in offline RLHF setting. To this end, we\ndesign novel corruption robust offline RLHF methods under various assumptions\non the coverage of the data-generating distributions. 
At a high level, our\nmethodology robustifies an offline RLHF framework by first learning a reward\nmodel along with confidence sets and then learning a pessimistic optimal policy\nover the confidence set. Our key insight is that learning optimal policy can be\ndone by leveraging an offline corruption-robust RL oracle in different ways\n(e.g., zero-order oracle or first-order oracle), depending on the data coverage\nassumptions. To our knowledge, ours is the first work that provides provable\ncorruption robust offline RLHF methods.", + "authors": "Debmalya Mandal, Andi Nika, Parameswaran Kamalaruban, Adish Singla, Goran Radanovi\u0107", + "published": "2024-02-09", + "updated": "2024-02-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2201.05433v1", + "title": "Comparing Model-free and Model-based Algorithms for Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) Algorithms are often designed with\nenvironments such as MuJoCo in mind, in which the planning horizon is extremely\nlong and no noise exists. We compare model-free, model-based, as well as hybrid\noffline RL approaches on various industrial benchmark (IB) datasets to test the\nalgorithms in settings closer to real world problems, including complex noise\nand partially observable states. We find that on the IB, hybrid approaches face\nsevere difficulties and that simpler algorithms, such as rollout based\nalgorithms or model-free algorithms with simpler regularizers perform best on\nthe datasets.", + "authors": "Phillip Swazinna, Steffen Udluft, Daniel Hein, Thomas Runkler", + "published": "2022-01-14", + "updated": "2022-01-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2011.13885v1", + "title": "Offline Learning from Demonstrations and Unlabeled Experience", + "abstract": "Behavior cloning (BC) is often practical for robot learning because it allows\na policy to be trained offline without rewards, by supervised learning on\nexpert demonstrations. However, BC does not effectively leverage what we will\nrefer to as unlabeled experience: data of mixed and unknown quality without\nreward annotations. This unlabeled data can be generated by a variety of\nsources such as human teleoperation, scripted policies and other agents on the\nsame robot. Towards data-driven offline robot learning that can use this\nunlabeled experience, we introduce Offline Reinforced Imitation Learning\n(ORIL). 
ORIL first learns a reward function by contrasting observations from\ndemonstrator and unlabeled trajectories, then annotates all data with the\nlearned reward, and finally trains an agent via offline reinforcement learning.\nAcross a diverse set of continuous control and simulated robotic manipulation\ntasks, we show that ORIL consistently outperforms comparable BC agents by\neffectively leveraging unlabeled experience.", + "authors": "Konrad Zolna, Alexander Novikov, Ksenia Konyushkova, Caglar Gulcehre, Ziyu Wang, Yusuf Aytar, Misha Denil, Nando de Freitas, Scott Reed", + "published": "2020-11-27", + "updated": "2020-11-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2010.13611v3", + "title": "OPAL: Offline Primitive Discovery for Accelerating Offline Reinforcement Learning", + "abstract": "Reinforcement learning (RL) has achieved impressive performance in a variety\nof online settings in which an agent's ability to query the environment for\ntransitions and rewards is effectively unlimited. However, in many practical\napplications, the situation is reversed: an agent may have access to large\namounts of undirected offline experience data, while access to the online\nenvironment is severely limited. In this work, we focus on this offline\nsetting. Our main insight is that, when presented with offline data composed of\na variety of behaviors, an effective way to leverage this data is to extract a\ncontinuous space of recurring and temporally extended primitive behaviors\nbefore using these primitives for downstream task learning. Primitives\nextracted in this way serve two purposes: they delineate the behaviors that are\nsupported by the data from those that are not, making them useful for avoiding\ndistributional shift in offline RL; and they provide a degree of temporal\nabstraction, which reduces the effective horizon yielding better learning in\ntheory, and improved offline RL in practice. In addition to benefiting offline\npolicy optimization, we show that performing offline primitive learning in this\nway can also be leveraged for improving few-shot imitation learning as well as\nexploration and transfer in online RL on a variety of benchmark domains.\nVisualizations are available at https://sites.google.com/view/opal-iclr", + "authors": "Anurag Ajay, Aviral Kumar, Pulkit Agrawal, Sergey Levine, Ofir Nachum", + "published": "2020-10-26", + "updated": "2021-05-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.09712v1", + "title": "Semi-Offline Reinforcement Learning for Optimized Text Generation", + "abstract": "In reinforcement learning (RL), there are two major settings for interacting\nwith the environment: online and offline. Online methods explore the\nenvironment at significant time cost, and offline methods efficiently obtain\nreward signals by sacrificing exploration capability. We propose semi-offline\nRL, a novel paradigm that smoothly transits from offline to online settings,\nbalances exploration capability and training cost, and provides a theoretical\nfoundation for comparing different RL settings. Based on the semi-offline\nformulation, we present the RL setting that is optimal in terms of optimization\ncost, asymptotic error, and overfitting error bound. 
Extensive experiments show\nthat our semi-offline approach is efficient and yields comparable or often\nbetter performance compared with state-of-the-art methods.", + "authors": "Changyu Chen, Xiting Wang, Yiqiao Jin, Victor Ye Dong, Li Dong, Jie Cao, Yi Liu, Rui Yan", + "published": "2023-06-16", + "updated": "2023-06-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2102.05815v1", + "title": "Representation Matters: Offline Pretraining for Sequential Decision Making", + "abstract": "The recent success of supervised learning methods on ever larger offline\ndatasets has spurred interest in the reinforcement learning (RL) field to\ninvestigate whether the same paradigms can be translated to RL algorithms. This\nresearch area, known as offline RL, has largely focused on offline policy\noptimization, aiming to find a return-maximizing policy exclusively from\noffline data. In this paper, we consider a slightly different approach to\nincorporating offline data into sequential decision-making. We aim to answer\nthe question, what unsupervised objectives applied to offline datasets are able\nto learn state representations which elevate performance on downstream tasks,\nwhether those downstream tasks be online RL, imitation learning from expert\ndemonstrations, or even offline policy optimization based on the same offline\ndataset? Through a variety of experiments utilizing standard offline RL\ndatasets, we find that the use of pretraining with unsupervised learning\nobjectives can dramatically improve the performance of policy learning\nalgorithms that otherwise yield mediocre performance on their own. Extensive\nablations further provide insights into what components of these unsupervised\nobjectives -- e.g., reward prediction, continuous or discrete representations,\npretraining or finetuning -- are most important and in which settings.", + "authors": "Mengjiao Yang, Ofir Nachum", + "published": "2021-02-11", + "updated": "2021-02-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2008.06043v3", + "title": "Offline Meta-Reinforcement Learning with Advantage Weighting", + "abstract": "This paper introduces the offline meta-reinforcement learning (offline\nmeta-RL) problem setting and proposes an algorithm that performs well in this\nsetting. Offline meta-RL is analogous to the widely successful supervised\nlearning strategy of pre-training a model on a large batch of fixed,\npre-collected data (possibly from various tasks) and fine-tuning the model to a\nnew task with relatively little data. That is, in offline meta-RL, we\nmeta-train on fixed, pre-collected data from several tasks in order to adapt to\na new task with a very small amount (less than 5 trajectories) of data from the\nnew task. By nature of being offline, algorithms for offline meta-RL can\nutilize the largest possible pool of training data available and eliminate\npotentially unsafe or costly data collection during meta-training. This setting\ninherits the challenges of offline RL, but it differs significantly because\noffline RL does not generally consider a) transfer to new tasks or b) limited\ndata from the test task, both of which we face in offline meta-RL. 
Targeting\nthe offline meta-RL setting, we propose Meta-Actor Critic with Advantage\nWeighting (MACAW), an optimization-based meta-learning algorithm that uses\nsimple, supervised regression objectives for both the inner and outer loop of\nmeta-training. On offline variants of common meta-RL benchmarks, we empirically\nfind that this approach enables fully offline meta-reinforcement learning and\nachieves notable gains over prior methods.", + "authors": "Eric Mitchell, Rafael Rafailov, Xue Bin Peng, Sergey Levine, Chelsea Finn", + "published": "2020-08-13", + "updated": "2021-07-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2304.07920v2", + "title": "Causal Decision Transformer for Recommender Systems via Offline Reinforcement Learning", + "abstract": "Reinforcement learning-based recommender systems have recently gained\npopularity. However, the design of the reward function, on which the agent\nrelies to optimize its recommendation policy, is often not straightforward.\nExploring the causality underlying users' behavior can take the place of the\nreward function in guiding the agent to capture the dynamic interests of users.\nMoreover, due to the typical limitations of simulation environments (e.g., data\ninefficiency), most of the work cannot be broadly applied in large-scale\nsituations. Although some works attempt to convert the offline dataset into a\nsimulator, data inefficiency makes the learning process even slower. Because of\nthe nature of reinforcement learning (i.e., learning by interaction), it cannot\ncollect enough data to train during a single interaction. Furthermore,\ntraditional reinforcement learning algorithms do not have a solid capability\nlike supervised learning methods to learn from offline datasets directly. In\nthis paper, we propose a new model named the causal decision transformer for\nrecommender systems (CDT4Rec). CDT4Rec is an offline reinforcement learning\nsystem that can learn from a dataset rather than from online interaction.\nMoreover, CDT4Rec employs the transformer architecture, which is capable of\nprocessing large offline datasets and capturing both short-term and long-term\ndependencies within the data to estimate the causal relationship between\naction, state, and reward. To demonstrate the feasibility and superiority of\nour model, we have conducted experiments on six real-world offline datasets and\none online simulator.", + "authors": "Siyu Wang, Xiaocong Chen, Dietmar Jannach, Lina Yao", + "published": "2023-04-17", + "updated": "2023-08-27", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2203.06662v1", + "title": "DARA: Dynamics-Aware Reward Augmentation in Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning algorithms promise to be applicable in\nsettings where a fixed dataset is available and no new experience can be\nacquired. However, such formulation is inevitably offline-data-hungry and, in\npractice, collecting a large offline dataset for one specific task over one\nspecific environment is also costly and laborious. 
In this paper, we thus 1)\nformulate the offline dynamics adaptation by using (source) offline data\ncollected from another dynamics to relax the requirement for the extensive\n(target) offline data, 2) characterize the dynamics shift problem in which\nprior offline methods do not scale well, and 3) derive a simple dynamics-aware\nreward augmentation (DARA) framework from both model-free and model-based\noffline settings. Specifically, DARA emphasizes learning from those source\ntransition pairs that are adaptive for the target environment and mitigates the\noffline dynamics shift by characterizing state-action-next-state pairs instead\nof the typical state-action distribution sketched by prior offline RL methods.\nThe experimental evaluation demonstrates that DARA, by augmenting rewards in\nthe source offline dataset, can acquire an adaptive policy for the target\nenvironment and yet significantly reduce the requirement of target offline\ndata. With only modest amounts of target offline data, our performance\nconsistently outperforms the prior offline RL methods in both simulated and\nreal-world tasks.", + "authors": "Jinxin Liu, Hongyin Zhang, Donglin Wang", + "published": "2022-03-13", + "updated": "2022-03-13", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2404.12639v1", + "title": "Single-Task Continual Offline Reinforcement Learning", + "abstract": "In this paper, we study the continual learning problem of single-task offline\nreinforcement learning. In the past, continual reinforcement learning usually\nonly dealt with multitasking, that is, learning multiple related or unrelated\ntasks in a row, but once each learned task was learned, it was not relearned,\nbut only used in subsequent processes. However, offline reinforcement learning\ntasks require the continuously learning of multiple different datasets for the\nsame task. Existing algorithms will try their best to achieve the best results\nin each offline dataset they have learned and the skills of the network will\noverwrite the high-quality datasets that have been learned after learning the\nsubsequent poor datasets. On the other hand, if too much emphasis is placed on\nstability, the network will learn the subsequent better dataset after learning\nthe poor offline dataset, and the problem of insufficient plasticity and\nnon-learning will occur. How to design a strategy that can always preserve the\nbest performance for each state in the data that has been learned is a new\nchallenge and the focus of this study. 
Therefore, this study proposes a new\nalgorithm, called Ensemble Offline Reinforcement Learning Based on Experience\nReplay, which introduces multiple value networks to learn the same dataset and\njudge whether the strategy has been learned by the discrete degree of the value\nnetwork, to improve the performance of the network in single-task offline\nreinforcement learning.", + "authors": "Sibo Gai, Donglin Wang", + "published": "2024-04-19", + "updated": "2024-04-19", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "I.2.6" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2305.03360v1", + "title": "A Survey on Offline Model-Based Reinforcement Learning", + "abstract": "Model-based approaches are becoming increasingly popular in the field of\noffline reinforcement learning, with high potential in real-world applications\ndue to the model's capability of thoroughly utilizing the large historical\ndatasets available with supervised learning techniques. This paper presents a\nliterature review of recent work in offline model-based reinforcement learning,\na field that utilizes model-based approaches in offline reinforcement learning.\nThe survey provides a brief overview of the concepts and recent developments in\nboth offline reinforcement learning and model-based reinforcement learning, and\ndiscuss the intersection of the two fields. We then presents key relevant\npapers in the field of offline model-based reinforcement learning and discuss\ntheir methods, particularly their approaches in solving the issue of\ndistributional shift, the main problem faced by all current offline model-based\nreinforcement learning methods. We further discuss key challenges faced by the\nfield, and suggest possible directions for future work.", + "authors": "Haoyang He", + "published": "2023-05-05", + "updated": "2023-05-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.SY", + "eess.SY", + "I.2.6; I.2.8" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.13412v2", + "title": "CLUE: Calibrated Latent Guidance for Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) aims to learn an optimal policy from\npre-collected and labeled datasets, which eliminates the time-consuming data\ncollection in online RL. However, offline RL still bears a large burden of\nspecifying/handcrafting extrinsic rewards for each transition in the offline\ndata. As a remedy for the labor-intensive labeling, we propose to endow offline\nRL tasks with a few expert data and utilize the limited expert data to drive\nintrinsic rewards, thus eliminating the need for extrinsic rewards. To achieve\nthat, we introduce \\textbf{C}alibrated \\textbf{L}atent\ng\\textbf{U}idanc\\textbf{E} (CLUE), which utilizes a conditional variational\nauto-encoder to learn a latent space such that intrinsic rewards can be\ndirectly qualified over the latent space. CLUE's key idea is to align the\nintrinsic rewards consistent with the expert intention via enforcing the\nembeddings of expert data to a calibrated contextual representation. We\ninstantiate the expert-driven intrinsic rewards in sparse-reward offline RL\ntasks, offline imitation learning (IL) tasks, and unsupervised offline RL\ntasks. 
Empirically, we find that CLUE can effectively improve the sparse-reward\noffline RL performance, outperform the state-of-the-art offline IL baselines,\nand discover diverse skills from static reward-free offline data.", + "authors": "Jinxin Liu, Lipeng Zu, Li He, Donglin Wang", + "published": "2023-06-23", + "updated": "2023-10-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.13630v1", + "title": "Offline Skill Graph (OSG): A Framework for Learning and Planning using Offline Reinforcement Learning Skills", + "abstract": "Reinforcement Learning has received wide interest due to its success in\ncompetitive games. Yet, its adoption in everyday applications is limited (e.g.\nindustrial, home, healthcare, etc.). In this paper, we address this limitation\nby presenting a framework for planning over offline skills and solving complex\ntasks in real-world environments. Our framework is comprised of three modules\nthat together enable the agent to learn from previously collected data and\ngeneralize over it to solve long-horizon tasks. We demonstrate our approach by\ntesting it on a robotic arm that is required to solve complex tasks.", + "authors": "Ben-ya Halevy, Yehudit Aperstein, Dotan Di Castro", + "published": "2023-06-23", + "updated": "2023-06-23", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.AI", + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2205.09550v1", + "title": "Data Valuation for Offline Reinforcement Learning", + "abstract": "The success of deep reinforcement learning (DRL) hinges on the availability\nof training data, which is typically obtained via a large number of environment\ninteractions. In many real-world scenarios, costs and risks are associated with\ngathering these data. The field of offline reinforcement learning addresses\nthese issues through outsourcing the collection of data to a domain expert or a\ncarefully monitored program and subsequently searching for a batch-constrained\noptimal policy. With the emergence of data markets, an alternative to\nconstructing a dataset in-house is to purchase external data. However, while\nstate-of-the-art offline reinforcement learning approaches have shown a lot of\npromise, they currently rely on carefully constructed datasets that are well\naligned with the intended target domains. This raises questions regarding the\ntransferability and robustness of an offline reinforcement learning agent\ntrained on externally acquired data. In this paper, we empirically evaluate the\nability of the current state-of-the-art offline reinforcement learning\napproaches to coping with the source-target domain mismatch within two MuJoCo\nenvironments, finding that current state-of-the-art offline reinforcement\nlearning algorithms underperform in the target domain. To address this, we\npropose data valuation for offline reinforcement learning (DVORL), which allows\nus to identify relevant and high-quality transitions, improving the performance\nand transferability of policies learned by offline reinforcement learning\nalgorithms. 
The results show that our method outperforms offline reinforcement\nlearning baselines on two MuJoCo environments.", + "authors": "Amir Abolfazli, Gregory Palmer, Daniel Kudenko", + "published": "2022-05-19", + "updated": "2022-05-19", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2403.01734v1", + "title": "Offline Goal-Conditioned Reinforcement Learning for Safety-Critical Tasks with Recovery Policy", + "abstract": "Offline goal-conditioned reinforcement learning (GCRL) aims at solving\ngoal-reaching tasks with sparse rewards from an offline dataset. While prior\nwork has demonstrated various approaches for agents to learn near-optimal\npolicies, these methods encounter limitations when dealing with diverse\nconstraints in complex environments, such as safety constraints. Some of these\napproaches prioritize goal attainment without considering safety, while others\nexcessively focus on safety at the expense of training efficiency. In this\npaper, we study the problem of constrained offline GCRL and propose a new\nmethod called Recovery-based Supervised Learning (RbSL) to accomplish\nsafety-critical tasks with various goals. To evaluate the method performance,\nwe build a benchmark based on the robot-fetching environment with a randomly\npositioned obstacle and use expert or random policies to generate an offline\ndataset. We compare RbSL with three offline GCRL algorithms and one offline\nsafe RL algorithm. As a result, our method outperforms the existing\nstate-of-the-art methods to a large extent. Furthermore, we validate the\npracticality and effectiveness of RbSL by deploying it on a real Panda\nmanipulator. Code is available at https://github.com/Sunlighted/RbSL.git.", + "authors": "Chenyang Cao, Zichen Yan, Renhao Lu, Junbo Tan, Xueqian Wang", + "published": "2024-03-04", + "updated": "2024-03-04", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.AI", + "cs.LG", + "68T40" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2212.08302v1", + "title": "Safe Evaluation For Offline Learning: Are We Ready To Deploy?", + "abstract": "The world currently offers an abundance of data in multiple domains, from\nwhich we can learn reinforcement learning (RL) policies without further\ninteraction with the environment. RL agents learning offline from such data is\npossible but deploying them while learning might be dangerous in domains where\nsafety is critical. Therefore, it is essential to find a way to estimate how a\nnewly-learned agent will perform if deployed in the target environment before\nactually deploying it and without the risk of overestimating its true\nperformance. To achieve this, we introduce a framework for safe evaluation of\noffline learning using approximate high-confidence off-policy evaluation\n(HCOPE) to estimate the performance of offline policies during learning. In our\nsetting, we assume a source of data, which we split into a train-set, to learn\nan offline policy, and a test-set, to estimate a lower-bound on the offline\npolicy using off-policy evaluation with bootstrapping. A lower-bound estimate\ntells us how good a newly-learned target policy would perform before it is\ndeployed in the real environment, and therefore allows us to decide when to\ndeploy our learned policy.", + "authors": "Hager Radi, Josiah P. Hanna, Peter Stone, Matthew E. 
Taylor", + "published": "2022-12-16", + "updated": "2022-12-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2202.02929v2", + "title": "Model-Based Offline Meta-Reinforcement Learning with Regularization", + "abstract": "Existing offline reinforcement learning (RL) methods face a few major\nchallenges, particularly the distributional shift between the learned policy\nand the behavior policy. Offline Meta-RL is emerging as a promising approach to\naddress these challenges, aiming to learn an informative meta-policy from a\ncollection of tasks. Nevertheless, as shown in our empirical studies, offline\nMeta-RL could be outperformed by offline single-task RL methods on tasks with\ngood quality of datasets, indicating that a right balance has to be delicately\ncalibrated between \"exploring\" the out-of-distribution state-actions by\nfollowing the meta-policy and \"exploiting\" the offline dataset by staying close\nto the behavior policy. Motivated by such empirical analysis, we explore\nmodel-based offline Meta-RL with regularized Policy Optimization (MerPO), which\nlearns a meta-model for efficient task structure inference and an informative\nmeta-policy for safe exploration of out-of-distribution state-actions. In\nparticular, we devise a new meta-Regularized model-based Actor-Critic (RAC)\nmethod for within-task policy optimization, as a key building block of MerPO,\nusing conservative policy evaluation and regularized policy improvement; and\nthe intrinsic tradeoff therein is achieved via striking the right balance\nbetween two regularizers, one based on the behavior policy and the other on the\nmeta-policy. We theoretically show that the learnt policy offers guaranteed\nimprovement over both the behavior policy and the meta-policy, thus ensuring\nthe performance improvement on new tasks via offline Meta-RL. Experiments\ncorroborate the superior performance of MerPO over existing offline Meta-RL\nmethods.", + "authors": "Sen Lin, Jialin Wan, Tengyu Xu, Yingbin Liang, Junshan Zhang", + "published": "2022-02-07", + "updated": "2022-07-13", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2309.16973v1", + "title": "Towards Robust Offline-to-Online Reinforcement Learning via Uncertainty and Smoothness", + "abstract": "To obtain a near-optimal policy with fewer interactions in Reinforcement\nLearning (RL), a promising approach involves the combination of offline RL,\nwhich enhances sample efficiency by leveraging offline datasets, and online RL,\nwhich explores informative transitions by interacting with the environment.\nOffline-to-Online (O2O) RL provides a paradigm for improving an offline trained\nagent within limited online interactions. However, due to the significant\ndistribution shift between online experiences and offline data, most offline RL\nalgorithms suffer from performance drops and fail to achieve stable policy\nimprovement in O2O adaptation. To address this problem, we propose the Robust\nOffline-to-Online (RO2O) algorithm, designed to enhance offline policies\nthrough uncertainty and smoothness, and to mitigate the performance drop in\nonline adaptation. 
Specifically, RO2O incorporates Q-ensemble for uncertainty\npenalty and adversarial samples for policy and value smoothness, which enable\nRO2O to maintain a consistent learning procedure in online adaptation without\nrequiring special changes to the learning objective. Theoretical analyses in\nlinear MDPs demonstrate that the uncertainty and smoothness lead to a tighter\noptimality bound in O2O against distribution shift. Experimental results\nillustrate the superiority of RO2O in facilitating stable offline-to-online\nlearning and achieving significant improvement with limited online\ninteractions.", + "authors": "Xiaoyu Wen, Xudong Yu, Rui Yang, Chenjia Bai, Zhen Wang", + "published": "2023-09-29", + "updated": "2023-09-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2310.08566v1", + "title": "Transformers as Decision Makers: Provable In-Context Reinforcement Learning via Supervised Pretraining", + "abstract": "Large transformer models pretrained on offline reinforcement learning\ndatasets have demonstrated remarkable in-context reinforcement learning (ICRL)\ncapabilities, where they can make good decisions when prompted with interaction\ntrajectories from unseen environments. However, when and how transformers can\nbe trained to perform ICRL have not been theoretically well-understood. In\nparticular, it is unclear which reinforcement-learning algorithms transformers\ncan perform in context, and how distribution mismatch in offline training data\naffects the learned algorithms. This paper provides a theoretical framework\nthat analyzes supervised pretraining for ICRL. This includes two recently\nproposed training methods -- algorithm distillation and decision-pretrained\ntransformers. First, assuming model realizability, we prove the\nsupervised-pretrained transformer will imitate the conditional expectation of\nthe expert algorithm given the observed trajectory. The generalization error\nwill scale with model capacity and a distribution divergence factor between the\nexpert and offline algorithms. Second, we show transformers with ReLU attention\ncan efficiently approximate near-optimal online reinforcement learning\nalgorithms like LinUCB and Thompson sampling for stochastic linear bandits, and\nUCB-VI for tabular Markov decision processes. This provides the first\nquantitative analysis of the ICRL capabilities of transformers pretrained from\noffline trajectories.", + "authors": "Licong Lin, Yu Bai, Song Mei", + "published": "2023-10-12", + "updated": "2023-10-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL", + "math.ST", + "stat.ML", + "stat.TH" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2404.02429v1", + "title": "AD4RL: Autonomous Driving Benchmarks for Offline Reinforcement Learning with Value-based Dataset", + "abstract": "Offline reinforcement learning has emerged as a promising technology by\nenhancing its practicality through the use of pre-collected large datasets.\nDespite its practical benefits, most algorithm development research in offline\nreinforcement learning still relies on game tasks with synthetic datasets. To\naddress such limitations, this paper provides autonomous driving datasets and\nbenchmarks for offline reinforcement learning research. 
We provide 19 datasets,\nincluding real-world human driver's datasets, and seven popular offline\nreinforcement learning algorithms in three realistic driving scenarios. We also\nprovide a unified decision-making process model that can operate effectively\nacross different scenarios, serving as a reference framework in algorithm\ndesign. Our research lays the groundwork for further collaborations in the\ncommunity to explore practical aspects of existing reinforcement learning\nmethods. Dataset and codes can be found in https://sites.google.com/view/ad4rl.", + "authors": "Dongsu Lee, Chanin Eom, Minhae Kwon", + "published": "2024-04-03", + "updated": "2024-04-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2212.08232v1", + "title": "Offline Robot Reinforcement Learning with Uncertainty-Guided Human Expert Sampling", + "abstract": "Recent advances in batch (offline) reinforcement learning have shown\npromising results in learning from available offline data and proved offline\nreinforcement learning to be an essential toolkit in learning control policies\nin a model-free setting. An offline reinforcement learning algorithm applied to\na dataset collected by a suboptimal non-learning-based algorithm can result in\na policy that outperforms the behavior agent used to collect the data. Such a\nscenario is frequent in robotics, where existing automation is collecting\noperational data. Although offline learning techniques can learn from data\ngenerated by a sub-optimal behavior agent, there is still an opportunity to\nimprove the sample complexity of existing offline reinforcement learning\nalgorithms by strategically introducing human demonstration data into the\ntraining process. To this end, we propose a novel approach that uses\nuncertainty estimation to trigger the injection of human demonstration data and\nguide policy training towards optimal behavior while reducing overall sample\ncomplexity. Our experiments show that this approach is more sample efficient\nwhen compared to a naive way of combining expert data with data collected from\na sub-optimal agent. We augmented an existing offline reinforcement learning\nalgorithm Conservative Q-Learning with our approach and performed experiments\non data collected from MuJoCo and OffWorld Gym learning environments.", + "authors": "Ashish Kumar, Ilya Kuzovkin", + "published": "2022-12-16", + "updated": "2022-12-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.04779v3", + "title": "Challenges and Opportunities in Offline Reinforcement Learning from Visual Observations", + "abstract": "Offline reinforcement learning has shown great promise in leveraging large\npre-collected datasets for policy learning, allowing agents to forgo\noften-expensive online data collection. However, offline reinforcement learning\nfrom visual observations with continuous action spaces remains under-explored,\nwith a limited understanding of the key challenges in this complex domain. 
In\nthis paper, we establish simple baselines for continuous control in the visual\ndomain and introduce a suite of benchmarking tasks for offline reinforcement\nlearning from visual observations designed to better represent the data\ndistributions present in real-world offline RL problems and guided by a set of\ndesiderata for offline RL from visual observations, including robustness to\nvisual distractions and visually identifiable changes in dynamics. Using this\nsuite of benchmarking tasks, we show that simple modifications to two popular\nvision-based online reinforcement learning algorithms, DreamerV2 and DrQ-v2,\nsuffice to outperform existing offline RL methods and establish competitive\nbaselines for continuous control in the visual domain. We rigorously evaluate\nthese algorithms and perform an empirical evaluation of the differences between\nstate-of-the-art model-based and model-free offline RL methods for continuous\ncontrol from visual observations. All code and data used in this evaluation are\nopen-sourced to facilitate progress in this domain.", + "authors": "Cong Lu, Philip J. Ball, Tim G. J. Rudner, Jack Parker-Holder, Michael A. Osborne, Yee Whye Teh", + "published": "2022-06-09", + "updated": "2023-07-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2211.08251v1", + "title": "Offline Reinforcement Learning with Adaptive Behavior Regularization", + "abstract": "Offline reinforcement learning (RL) defines a sample-efficient learning\nparadigm, where a policy is learned from static and previously collected\ndatasets without additional interaction with the environment. The major\nobstacle to offline RL is the estimation error arising from evaluating the\nvalue of out-of-distribution actions. To tackle this problem, most existing\noffline RL methods attempt to acquire a policy both ``close\" to the behaviors\ncontained in the dataset and sufficiently improved over them, which requires a\ntrade-off between two possibly conflicting targets. In this paper, we propose a\nnovel approach, which we refer to as adaptive behavior regularization (ABR), to\nbalance this critical trade-off. By simply utilizing a sample-based\nregularization, ABR enables the policy to adaptively adjust its optimization\nobjective between cloning and improving over the policy used to generate the\ndataset. In the evaluation on D4RL datasets, a widely adopted benchmark for\noffline reinforcement learning, ABR can achieve improved or competitive\nperformance compared to existing state-of-the-art algorithms.", + "authors": "Yunfan Zhou, Xijun Li, Qingyu Qu", + "published": "2022-11-15", + "updated": "2022-11-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.08569v2", + "title": "Bootstrapped Transformer for Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) aims at learning policies from previously\ncollected static trajectory data without interacting with the real environment.\nRecent works provide a novel perspective by viewing offline RL as a generic\nsequence generation problem, adopting sequence models such as Transformer\narchitecture to model distributions over trajectories, and repurposing beam\nsearch as a planning algorithm. 
However, the training datasets utilized in\ngeneral offline RL tasks are quite limited and often suffer from insufficient\ndistribution coverage, which could be harmful to training sequence generation\nmodels yet has not drawn enough attention in the previous works. In this paper,\nwe propose a novel algorithm named Bootstrapped Transformer, which incorporates\nthe idea of bootstrapping and leverages the learned model to self-generate more\noffline data to further boost the sequence model training. We conduct extensive\nexperiments on two offline RL benchmarks and demonstrate that our model can\nlargely remedy the existing offline RL training limitations and beat other\nstrong baseline methods. We also analyze the generated pseudo data and the\nrevealed characteristics may shed some light on offline RL training. The codes\nare available at https://seqml.github.io/bootorl.", + "authors": "Kerong Wang, Hanye Zhao, Xufang Luo, Kan Ren, Weinan Zhang, Dongsheng Li", + "published": "2022-06-17", + "updated": "2022-10-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.01474v1", + "title": "Offline Reinforcement Learning with Causal Structured World Models", + "abstract": "Model-based methods have recently shown promising for offline reinforcement\nlearning (RL), aiming to learn good policies from historical data without\ninteracting with the environment. Previous model-based offline RL methods learn\nfully connected nets as world-models that map the states and actions to the\nnext-step states. However, it is sensible that a world-model should adhere to\nthe underlying causal effect such that it will support learning an effective\npolicy generalizing well in unseen states. In this paper, We first provide\ntheoretical results that causal world-models can outperform plain world-models\nfor offline RL by incorporating the causal structure into the generalization\nerror bound. We then propose a practical algorithm, oFfline mOdel-based\nreinforcement learning with CaUsal Structure (FOCUS), to illustrate the\nfeasibility of learning and leveraging causal structure in offline RL.\nExperimental results on two benchmarks show that FOCUS reconstructs the\nunderlying causal structure accurately and robustly. Consequently, it performs\nbetter than the plain model-based offline RL algorithms and other causal\nmodel-based RL algorithms.", + "authors": "Zheng-Mao Zhu, Xiong-Hui Chen, Hong-Long Tian, Kun Zhang, Yang Yu", + "published": "2022-06-03", + "updated": "2022-06-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2210.00750v2", + "title": "Offline Reinforcement Learning with Differentiable Function Approximation is Provably Efficient", + "abstract": "Offline reinforcement learning, which aims at optimizing sequential\ndecision-making strategies with historical data, has been extensively applied\nin real-life applications. State-Of-The-Art algorithms usually leverage\npowerful function approximators (e.g. neural networks) to alleviate the sample\ncomplexity hurdle for better empirical performances. Despite the successes, a\nmore systematic understanding of the statistical complexity for function\napproximation remains lacking. 
Towards bridging the gap, we take a step by\nconsidering offline reinforcement learning with differentiable function class\napproximation (DFA). This function class naturally incorporates a wide range of\nmodels with nonlinear/nonconvex structures. Most importantly, we show offline\nRL with differentiable function approximation is provably efficient by\nanalyzing the pessimistic fitted Q-learning (PFQL) algorithm, and our results\nprovide the theoretical basis for understanding a variety of practical\nheuristics that rely on Fitted Q-Iteration style design. In addition, we\nfurther improve our guarantee with a tighter instance-dependent\ncharacterization. We hope our work could draw interest in studying\nreinforcement learning with differentiable function approximation beyond the\nscope of current research.", + "authors": "Ming Yin, Mengdi Wang, Yu-Xiang Wang", + "published": "2022-10-03", + "updated": "2022-11-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.08900v1", + "title": "Offline Multi-Agent Reinforcement Learning with Coupled Value Factorization", + "abstract": "Offline reinforcement learning (RL) that learns policies from offline\ndatasets without environment interaction has received considerable attention in\nrecent years. Compared with the rich literature in the single-agent case,\noffline multi-agent RL is still a relatively underexplored area. Most existing\nmethods directly apply offline RL ingredients in the multi-agent setting\nwithout fully leveraging the decomposable problem structure, leading to less\nsatisfactory performance in complex tasks. We present OMAC, a new offline\nmulti-agent RL algorithm with coupled value factorization. OMAC adopts a\ncoupled value factorization scheme that decomposes the global value function\ninto local and shared components, and also maintains the credit assignment\nconsistency between the state-value and Q-value functions. Moreover, OMAC\nperforms in-sample learning on the decomposed local state-value functions,\nwhich implicitly conducts max-Q operation at the local level while avoiding\ndistributional shift caused by evaluating out-of-distribution actions. Based on\nthe comprehensive evaluations of the offline multi-agent StarCraft II\nmicro-management tasks, we demonstrate the superior performance of OMAC over\nthe state-of-the-art offline multi-agent RL methods.", + "authors": "Xiangsen Wang, Xianyuan Zhan", + "published": "2023-06-15", + "updated": "2023-06-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.MA" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2203.05804v1", + "title": "Near-optimal Offline Reinforcement Learning with Linear Representation: Leveraging Variance Information with Pessimism", + "abstract": "Offline reinforcement learning, which seeks to utilize offline/historical\ndata to optimize sequential decision-making strategies, has gained surging\nprominence in recent studies. Due to the advantage that appropriate function\napproximators can help mitigate the sample complexity burden in modern\nreinforcement learning problems, existing endeavors usually enforce powerful\nfunction representation models (e.g. neural networks) to learn the optimal\npolicies. 
However, a precise understanding of the statistical limits with\nfunction representations, remains elusive, even when such a representation is\nlinear.\n Towards this goal, we study the statistical limits of offline reinforcement\nlearning with linear model representations. To derive the tight offline\nlearning bound, we design the variance-aware pessimistic value iteration\n(VAPVI), which adopts the conditional variance information of the value\nfunction for time-inhomogeneous episodic linear Markov decision processes\n(MDPs). VAPVI leverages estimated variances of the value functions to reweight\nthe Bellman residuals in the least-square pessimistic value iteration and\nprovides improved offline learning bounds over the best-known existing results\n(whereas the Bellman residuals are equally weighted by design). More\nimportantly, our learning bounds are expressed in terms of system quantities,\nwhich provide natural instance-dependent characterizations that previous\nresults are short of. We hope our results draw a clearer picture of what\noffline learning should look like when linear representations are provided.", + "authors": "Ming Yin, Yaqi Duan, Mengdi Wang, Yu-Xiang Wang", + "published": "2022-03-11", + "updated": "2022-03-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2403.11574v1", + "title": "Offline Multitask Representation Learning for Reinforcement Learning", + "abstract": "We study offline multitask representation learning in reinforcement learning\n(RL), where a learner is provided with an offline dataset from different tasks\nthat share a common representation and is asked to learn the shared\nrepresentation. We theoretically investigate offline multitask low-rank RL, and\npropose a new algorithm called MORL for offline multitask representation\nlearning. Furthermore, we examine downstream RL in reward-free, offline and\nonline scenarios, where a new task is introduced to the agent that shares the\nsame representation as the upstream offline tasks. Our theoretical results\ndemonstrate the benefits of using the learned representation from the upstream\noffline task instead of directly learning the representation of the low-rank\nmodel.", + "authors": "Haque Ishfaq, Thanh Nguyen-Tang, Songtao Feng, Raman Arora, Mengdi Wang, Ming Yin, Doina Precup", + "published": "2024-03-18", + "updated": "2024-03-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.10442v1", + "title": "Robust Task Representations for Offline Meta-Reinforcement Learning via Contrastive Learning", + "abstract": "We study offline meta-reinforcement learning, a practical reinforcement\nlearning paradigm that learns from offline data to adapt to new tasks. The\ndistribution of offline data is determined jointly by the behavior policy and\nthe task. Existing offline meta-reinforcement learning algorithms cannot\ndistinguish these factors, making task representations unstable to the change\nof behavior policies. To address this problem, we propose a contrastive\nlearning framework for task representations that are robust to the distribution\nmismatch of behavior policies in training and test. 
We design a bi-level\nencoder structure, use mutual information maximization to formalize task\nrepresentation learning, derive a contrastive learning objective, and introduce\nseveral approaches to approximate the true distribution of negative pairs.\nExperiments on a variety of offline meta-reinforcement learning benchmarks\ndemonstrate the advantages of our method over prior methods, especially on the\ngeneralization to out-of-distribution behavior policies. The code is available\nat https://github.com/PKU-AI-Edge/CORRO.", + "authors": "Haoqi Yuan, Zongqing Lu", + "published": "2022-06-21", + "updated": "2022-06-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2308.11336v1", + "title": "On the Opportunities and Challenges of Offline Reinforcement Learning for Recommender Systems", + "abstract": "Reinforcement learning serves as a potent tool for modeling dynamic user\ninterests within recommender systems, garnering increasing research attention\nof late. However, a significant drawback persists: its poor data efficiency,\nstemming from its interactive nature. The training of reinforcement\nlearning-based recommender systems demands expensive online interactions to\namass adequate trajectories, essential for agents to learn user preferences.\nThis inefficiency renders reinforcement learning-based recommender systems a\nformidable undertaking, necessitating the exploration of potential solutions.\nRecent strides in offline reinforcement learning present a new perspective.\nOffline reinforcement learning empowers agents to glean insights from offline\ndatasets and deploy learned policies in online settings. Given that recommender\nsystems possess extensive offline datasets, the framework of offline\nreinforcement learning aligns seamlessly. Despite being a burgeoning field,\nworks centered on recommender systems utilizing offline reinforcement learning\nremain limited. This survey aims to introduce and delve into offline\nreinforcement learning within recommender systems, offering an inclusive review\nof existing literature in this domain. Furthermore, we strive to underscore\nprevalent challenges, opportunities, and future pathways, poised to propel\nresearch in this evolving field.", + "authors": "Xiaocong Chen, Siyu Wang, Julian McAuley, Dietmar Jannach, Lina Yao", + "published": "2023-08-22", + "updated": "2023-08-22", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.05546v1", + "title": "Offline Actor-Critic Reinforcement Learning Scales to Large Models", + "abstract": "We show that offline actor-critic reinforcement learning can scale to large\nmodels - such as transformers - and follows similar scaling laws as supervised\nlearning. We find that offline actor-critic algorithms can outperform strong,\nsupervised, behavioral cloning baselines for multi-task training on a large\ndataset containing both sub-optimal and expert behavior on 132 continuous\ncontrol tasks. We introduce a Perceiver-based actor-critic model and elucidate\nthe key model features needed to make offline RL work with self- and\ncross-attention modules. 
Overall, we find that: i) simple offline actor critic\nalgorithms are a natural choice for gradually moving away from the currently\npredominant paradigm of behavioral cloning, and ii) via offline RL it is\npossible to learn multi-task policies that master many domains simultaneously,\nincluding real robotics tasks, from sub-optimal demonstrations or\nself-generated data.", + "authors": "Jost Tobias Springenberg, Abbas Abdolmaleki, Jingwei Zhang, Oliver Groth, Michael Bloesch, Thomas Lampe, Philemon Brakel, Sarah Bechtle, Steven Kapturowski, Roland Hafner, Nicolas Heess, Martin Riedmiller", + "published": "2024-02-08", + "updated": "2024-02-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2005.01643v3", + "title": "Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems", + "abstract": "In this tutorial article, we aim to provide the reader with the conceptual\ntools needed to get started on research on offline reinforcement learning\nalgorithms: reinforcement learning algorithms that utilize previously collected\ndata, without additional online data collection. Offline reinforcement learning\nalgorithms hold tremendous promise for making it possible to turn large\ndatasets into powerful decision making engines. Effective offline reinforcement\nlearning methods would be able to extract policies with the maximum possible\nutility out of the available data, thereby allowing automation of a wide range\nof decision-making domains, from healthcare and education to robotics. However,\nthe limitations of current algorithms make this difficult. We will aim to\nprovide the reader with an understanding of these challenges, particularly in\nthe context of modern deep reinforcement learning methods, and describe some\npotential solutions that have been explored in recent work to mitigate these\nchallenges, along with recent applications, and a discussion of perspectives on\nopen problems in the field.", + "authors": "Sergey Levine, Aviral Kumar, George Tucker, Justin Fu", + "published": "2020-05-04", + "updated": "2020-11-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.02439v2", + "title": "DiffStitch: Boosting Offline Reinforcement Learning with Diffusion-based Trajectory Stitching", + "abstract": "In offline reinforcement learning (RL), the performance of the learned policy\nhighly depends on the quality of offline datasets. However, in many cases, the\noffline dataset contains very limited optimal trajectories, which poses a\nchallenge for offline RL algorithms as agents must acquire the ability to\ntransit to high-reward regions. To address this issue, we introduce\nDiffusion-based Trajectory Stitching (DiffStitch), a novel diffusion-based data\naugmentation pipeline that systematically generates stitching transitions\nbetween trajectories. DiffStitch effectively connects low-reward trajectories\nwith high-reward trajectories, forming globally optimal trajectories to address\nthe challenges faced by offline RL algorithms. Empirical experiments conducted\non D4RL datasets demonstrate the effectiveness of DiffStitch across RL\nmethodologies. 
Notably, DiffStitch demonstrates substantial enhancements in the\nperformance of one-step methods (IQL), imitation learning methods (TD3+BC), and\ntrajectory optimization methods (DT).", + "authors": "Guanghe Li, Yixiang Shan, Zhengbang Zhu, Ting Long, Weinan Zhang", + "published": "2024-02-04", + "updated": "2024-02-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2010.11895v1", + "title": "What are the Statistical Limits of Offline RL with Linear Function Approximation?", + "abstract": "Offline reinforcement learning seeks to utilize offline (observational) data\nto guide the learning of (causal) sequential decision making strategies. The\nhope is that offline reinforcement learning coupled with function approximation\nmethods (to deal with the curse of dimensionality) can provide a means to help\nalleviate the excessive sample complexity burden in modern sequential decision\nmaking problems. However, the extent to which this broader approach can be\neffective is not well understood, where the literature largely consists of\nsufficient conditions.\n This work focuses on the basic question of what are necessary\nrepresentational and distributional conditions that permit provable\nsample-efficient offline reinforcement learning. Perhaps surprisingly, our main\nresult shows that even if: i) we have realizability in that the true value\nfunction of \\emph{every} policy is linear in a given set of features and 2) our\noff-policy data has good coverage over all features (under a strong spectral\ncondition), then any algorithm still (information-theoretically) requires a\nnumber of offline samples that is exponential in the problem horizon in order\nto non-trivially estimate the value of \\emph{any} given policy. Our results\nhighlight that sample-efficient offline policy evaluation is simply not\npossible unless significantly stronger conditions hold; such conditions include\neither having low distribution shift (where the offline data distribution is\nclose to the distribution of the policy to be evaluated) or significantly\nstronger representational conditions (beyond realizability).", + "authors": "Ruosong Wang, Dean P. Foster, Sham M. Kakade", + "published": "2020-10-22", + "updated": "2020-10-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "math.OC", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.07166v1", + "title": "Regularizing a Model-based Policy Stationary Distribution to Stabilize Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) extends the paradigm of classical RL\nalgorithms to purely learning from static datasets, without interacting with\nthe underlying environment during the learning process. A key challenge of\noffline RL is the instability of policy training, caused by the mismatch\nbetween the distribution of the offline data and the undiscounted stationary\nstate-action distribution of the learned policy. To avoid the detrimental\nimpact of distribution mismatch, we regularize the undiscounted stationary\ndistribution of the current policy towards the offline data during the policy\noptimization process. Further, we train a dynamics model to both implement this\nregularization and better estimate the stationary distribution of the current\npolicy, reducing the error induced by distribution mismatch. 
On a wide range of\ncontinuous-control offline RL datasets, our method indicates competitive\nperformance, which validates our algorithm. The code is publicly available.", + "authors": "Shentao Yang, Yihao Feng, Shujian Zhang, Mingyuan Zhou", + "published": "2022-06-14", + "updated": "2022-06-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.08331v1", + "title": "Accelerating Offline Reinforcement Learning Application in Real-Time Bidding and Recommendation: Potential Use of Simulation", + "abstract": "In recommender systems (RecSys) and real-time bidding (RTB) for online\nadvertisements, we often try to optimize sequential decision making using\nbandit and reinforcement learning (RL) techniques. In these applications,\noffline reinforcement learning (offline RL) and off-policy evaluation (OPE) are\nbeneficial because they enable safe policy optimization using only logged data\nwithout any risky online interaction. In this position paper, we explore the\npotential of using simulation to accelerate practical research of offline RL\nand OPE, particularly in RecSys and RTB. Specifically, we discuss how\nsimulation can help us conduct empirical research of offline RL and OPE. We\ntake a position to argue that we should effectively use simulations in the\nempirical research of offline RL and OPE. To refute the counterclaim that\nexperiments using only real-world data are preferable, we first point out the\nunderlying risks and reproducibility issue in real-world experiments. Then, we\ndescribe how these issues can be addressed by using simulations. Moreover, we\nshow how to incorporate the benefits of both real-world and simulation-based\nexperiments to defend our position. Finally, we also present an open challenge\nto further facilitate practical research of offline RL and OPE in RecSys and\nRTB, with respect to public simulation platforms. As a possible solution for\nthe issue, we show our ongoing open source project and its potential use case.\nWe believe that building and utilizing simulation-based evaluation platforms\nfor offline RL and OPE will be of great interest and relevance for the RecSys\nand RTB community.", + "authors": "Haruka Kiyohara, Kosuke Kawakami, Yuta Saito", + "published": "2021-09-17", + "updated": "2021-09-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.10813v2", + "title": "A Workflow for Offline Model-Free Robotic Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) enables learning control policies by\nutilizing only prior experience, without any online interaction. This can allow\nrobots to acquire generalizable skills from large and diverse datasets, without\nany costly or unsafe online data collection. Despite recent algorithmic\nadvances in offline RL, applying these methods to real-world problems has\nproven challenging. Although offline RL methods can learn from prior data,\nthere is no clear and well-understood process for making various design\nchoices, from model architecture to algorithm hyperparameters, without actually\nevaluating the learned policies online. In this paper, our aim is to develop a\npractical workflow for using offline RL analogous to the relatively\nwell-understood workflows for supervised learning problems. 
To this end, we\ndevise a set of metrics and conditions that can be tracked over the course of\noffline training, and can inform the practitioner about how the algorithm and\nmodel architecture should be adjusted to improve final performance. Our\nworkflow is derived from a conceptual understanding of the behavior of\nconservative offline RL algorithms and cross-validation in supervised learning.\nWe demonstrate the efficacy of this workflow in producing effective policies\nwithout any online tuning, both in several simulated robotic learning scenarios\nand for three tasks on two distinct real robots, focusing on learning\nmanipulation skills with raw image observations with sparse binary rewards.\nExplanatory video and additional results can be found at\nsites.google.com/view/offline-rl-workflow", + "authors": "Aviral Kumar, Anikait Singh, Stephen Tian, Chelsea Finn, Sergey Levine", + "published": "2021-09-22", + "updated": "2021-09-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2403.09701v2", + "title": "A Natural Extension To Online Algorithms For Hybrid RL With Limited Coverage", + "abstract": "Hybrid Reinforcement Learning (RL), leveraging both online and offline data,\nhas garnered recent interest, yet research on its provable benefits remains\nsparse. Additionally, many existing hybrid RL algorithms (Song et al., 2023;\nNakamoto et al., 2023; Amortila et al., 2024) impose coverage assumptions on\nthe offline dataset, but we show that this is unnecessary. A well-designed\nonline algorithm should \"fill in the gaps\" in the offline dataset, exploring\nstates and actions that the behavior policy did not explore. Unlike previous\napproaches that focus on estimating the offline data distribution to guide\nonline exploration (Li et al., 2023b), we show that a natural extension to\nstandard optimistic online algorithms -- warm-starting them by including the\noffline dataset in the experience replay buffer -- achieves similar provable\ngains from hybrid data even when the offline dataset does not have\nsingle-policy concentrability. We accomplish this by partitioning the\nstate-action space into two, bounding the regret on each partition through an\noffline and an online complexity measure, and showing that the regret of this\nhybrid RL algorithm can be characterized by the best partition -- despite the\nalgorithm not knowing the partition itself. As an example, we propose\nDISC-GOLF, a modification of an existing optimistic online algorithm with\ngeneral function approximation called GOLF used in Jin et al. (2021); Xie et\nal. (2022a), and show that it demonstrates provable gains over both online-only\nand offline-only reinforcement learning, with competitive bounds when\nspecialized to the tabular, linear and block MDP cases. 
Numerical simulations\nfurther validate our theory that hybrid data facilitates more efficient\nexploration, supporting the potential of hybrid RL in various scenarios.", + "authors": "Kevin Tan, Ziping Xu", + "published": "2024-03-07", + "updated": "2024-03-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.02000v1", + "title": "Hybrid Value Estimation for Off-policy Evaluation and Offline Reinforcement Learning", + "abstract": "Value function estimation is an indispensable subroutine in reinforcement\nlearning, which becomes more challenging in the offline setting. In this paper,\nwe propose Hybrid Value Estimation (HVE) to reduce value estimation error,\nwhich trades off bias and variance by balancing between the value estimation\nfrom offline data and the learned model. Theoretical analysis discloses that\nHVE enjoys a better error bound than the direct methods. HVE can be leveraged\nin both off-policy evaluation and offline reinforcement learning settings. We,\ntherefore, provide two concrete algorithms Off-policy HVE (OPHVE) and\nModel-based Offline HVE (MOHVE), respectively. Empirical evaluations on MuJoCo\ntasks corroborate the theoretical claim. OPHVE outperforms other off-policy\nevaluation methods in all three metrics measuring the estimation effectiveness,\nwhile MOHVE achieves better or comparable performance with state-of-the-art\noffline reinforcement learning algorithms. We hope that HVE could shed some\nlight on further research on reinforcement learning from fixed data.", + "authors": "Xue-Kun Jin, Xu-Hui Liu, Shengyi Jiang, Yang Yu", + "published": "2022-06-04", + "updated": "2022-06-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.05440v1", + "title": "Dealing with the Unknown: Pessimistic Offline Reinforcement Learning", + "abstract": "Reinforcement Learning (RL) has been shown effective in domains where the\nagent can learn policies by actively interacting with its operating\nenvironment. However, if we change the RL scheme to offline setting where the\nagent can only update its policy via static datasets, one of the major issues\nin offline reinforcement learning emerges, i.e. distributional shift. We\npropose a Pessimistic Offline Reinforcement Learning (PessORL) algorithm to\nactively lead the agent back to the area where it is familiar by manipulating\nthe value function. We focus on problems caused by out-of-distribution (OOD)\nstates, and deliberately penalize high values at states that are absent in the\ntraining dataset, so that the learned pessimistic value function lower bounds\nthe true value anywhere within the state space. 
We evaluate the PessORL\nalgorithm on various benchmark tasks, where we show that our method gains\nbetter performance by explicitly handling OOD states, when compared to those\nmethods merely considering OOD actions.", + "authors": "Jinning Li, Chen Tang, Masayoshi Tomizuka, Wei Zhan", + "published": "2021-11-09", + "updated": "2021-11-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + } + ], + [ + { + "url": "http://arxiv.org/abs/2303.06347v1", + "title": "User Retention-oriented Recommendation with Decision Transformer", + "abstract": "Improving user retention with reinforcement learning~(RL) has attracted\nincreasing attention due to its significant importance in boosting user\nengagement. However, training the RL policy from scratch without hurting users'\nexperience is unavoidable due to the requirement of trial-and-error searches.\nFurthermore, the offline methods, which aim to optimize the policy without\nonline interactions, suffer from the notorious stability problem in value\nestimation or unbounded variance in counterfactual policy evaluation. To this\nend, we propose optimizing user retention with Decision Transformer~(DT), which\navoids the offline difficulty by translating the RL as an autoregressive\nproblem. However, deploying the DT in recommendation is a non-trivial problem\nbecause of the following challenges: (1) deficiency in modeling the numerical\nreward value; (2) data discrepancy between the policy learning and\nrecommendation generation; (3) unreliable offline performance evaluation. In\nthis work, we, therefore, contribute a series of strategies for tackling the\nexposed issues. We first articulate an efficient reward prompt by weighted\naggregation of meta embeddings for informative reward embedding. Then, we endow\na weighted contrastive learning method to solve the discrepancy between\ntraining and inference. Furthermore, we design two robust offline metrics to\nmeasure user retention. Finally, the significant improvement in the benchmark\ndatasets demonstrates the superiority of the proposed method.", + "authors": "Kesen Zhao, Lixin Zou, Xiangyu Zhao, Maolin Wang, Dawei yin", + "published": "2023-03-11", + "updated": "2023-03-11", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "label": "Original Paper", + "paper_cat": "Offline AND Reinforcement AND Learning", + "gt": "We summarize the earlier literature related to our work, i.e., sequential recommendations and reinforcement learning based recommender systems as follows. 4.1 Sequential Recommender Systems The purpose of sequential recommender systems is to address the order of interactions [22, 38]. Early methods are mainly based on Markov Chain and Markov Processes. For example, FPMC learns a transition matrix for each user by combining matrix factorization (MF) and Markov chains (MC) [28]; Hosseinzadeh et al. capture the variation of user preference by a hierarchical Markov model [16]. In addition to MC-based methods, there are also RNN-based methods. For example, GRU4Rec [32] firstly proposes to build RS by taking full use of the RNN model to learn sequential information; GRU4RecF proposes the paralleled RNN to leverage item features and improve recommendation efficiency [15]. However, none of these studies are for optimizing the long-term engagement, e.g., user return times [9], browsing depth [47], which is crucial for improving the quality of recommendations. 
4.2 Reinforcement Learning based Recommendation Since trial-and-error search is too costly in recommender systems, here we only present offline reinforcement learning methods. In general, RL-based RSs are mainly divided into two categories: model-based and model-free. Model-based models. Model-based models represent the whole environment by learning a fixed model [6, 43, 48, 50]. However, in the recommendation scenario, the model's representation of the environment will be biased due to the dynamic changes in the environment. If the model is constantly updated to adapt to the environmental changes, it will incur a huge expense. In addition, the probability of a user's behavior cannot be estimated, and the transition probability function becomes difficult to determine. Model-free models. In model-free models, the transition probability function is unknown and not required [6, 49]. DEERS [44] suggests that negative feedback can also have an impact on RSs. RML [25] improves on past heuristics by directly optimizing metrics for relevant feedback. TPGR [1] proposes a tree-structured policy-based model to solve the large discrete action space problem. LIRD [45] utilizes an actor-critic framework to make list-wise recommendations. DeepPage [41] further takes into account the 2-D position of the recommended item sequences. Although model-free models achieve better results than model-based models, many problems remain. First, in RSs, due to the large action space, approximate estimation of the value function is very difficult, and the reward function is hard to determine. Second, it is hard to evaluate policies, and the variance of evaluation methods is often unbounded.", + "pre_questions": [], + "main_content": "INTRODUCTION Sequential recommender systems (SRSs), which model users' historical interactions and recommend potentially interesting items for users, have received considerable attention in both academia and industry due to their irreplaceable role in real-world systems, e.g., movie recommendation on Netflix (https://www.netflix.com/) and e-commerce recommendation on Amazon (https://www.amazon.com/). The success of SRSs heavily relies on users' engagement on the platform, which, to some extent, is reflected by users' immediate feedback, like clicks [8, 40]. However, this immediate feedback cannot completely reveal users' preferences [37]. For example, some items with eye-catching titles and covers but low-quality content may attract users' clicks and further break users' trust in the platform [35]. Therefore, it is essential to optimize users' long-term engagement on the platform [47], like user retention, which is a preferable indicator of user satisfaction. As a tool for optimizing long-term/delayed metrics [24], reinforcement learning (RL) has been widely studied for optimizing user retention in recent years [6]. Though they are capable of exploring and modeling users' dynamic interests [39], existing RL-based SRSs leave much to be desired due to the offline learning challenge. Unlike gaming scenarios, where RL agents achieve great success by trial-and-error search [29], training from scratch in online SRSs is unaffordable due to the risk of losing users by recommending inappropriate items. Therefore, recent attention of the whole community has been paid to offline RL-based SRSs. However, putting offline RL into practice is frustrating for both value-based and policy-based methods. 
For value-based approaches, the notorious instability problem (i.e., the 'Deadly Triad') pushes the development of model-based methods [5]. However, due to the vast state space in the recommendation scenario, estimating the transition probability is a problem and further leads to unsatisfactory performance [42]. For policy-based methods, the unbounded variance of counterfactual policy evaluation drives the community to clip or discard the counterfactual weights [3], which might lead to inaccurate weights and discourage performance [36]. To explore the potential of RL-based recommendation, we propose to optimize user retention recommendation with the Decision Transformer [2] (DT), which casts offline RL as an autoregressive problem and therefore solves the mentioned offline learning challenges. Specifically, the DT is required to generate the recommendation under a specific reward (i.e., the user retention) and state. When conditioned on the optimal reward, the DT can generate future actions that achieve the desired return at the recommendation stage. Though the DT is promising for recommendation, applying the DT in SRSs is a non-trivial problem. It has the following challenges: (1) deficiency in reward modeling. Reward, as the most crucial part of the DT, directly affects the quality of the recommendation. However, in the DT, translating the reward into an embedding ignores its partial order, leading to a deficiency in model training. (2) discrepancy in recommendation generation. Under the DT, the recommendation is generated with the maximum reward, which is inconsistent with the diversified rewards encountered in the training stage. As a result, the model is not capable of utilizing the knowledge of data with smaller rewards (depicted in Figure 2). (3) unreliable performance evaluation. Though the DT solves the problem of offline learning, we still need importance-weighted offline evaluation to measure the performance of the learned policy, which leads to unbounded variance and results in an unreliable evaluation. To handle these problems, we propose a novel framework, DT4Rec, which is the first to deploy the Decision Transformer for recommendation. Specifically, we generate the reward embedding by auto-weighted aggregation of the meta-embeddings generated from the discretized reward value, which maintains the partial order relationship between rewards. Then, we introduce weighted contrastive learning to remedy the discrepancy between inference and training, which leverages the smaller-reward samples by contrasting the large- and small-reward samples. Furthermore, we propose two novel reliable metrics, i.e., the model-based and the similarity-based user retention score, to evaluate the policies all around. Compared with off-policy evaluation methods, they achieve lower variance, i.e., more stability, in the performance evaluation (depicted in Figure 4). Our main contributions can be summarized as follows: • We propose a novel Decision Transformer-based SRS model, DT4Rec, which eliminates the difficulty of offline learning with an RL-based recommender system. • We contribute the auto-discretized reward prompt and contrastive supervised policy learning, which effectively cope with the deficiency in reward modeling and the discrepancy between training and inference, respectively. 
• We design the model-based and the similarity-based user retention score. Compared with off-policy evaluation methods, they can fairly evaluate model performance. • Experiments on two benchmark datasets illustrate the superiority of our proposed model. 2 METHODOLOGY 2.1 Problem Formulation In a typical sequential recommendation, we are given a sequence of $M$ user-interested items denoted as $U_t = \{v_i^t\}_{i=1}^{M}$ in the $t$-th recommendation and seek to predict a set of $N$ items $a_t = [v_{t,1}, \ldots, v_{t,N}]$ that the user might be interested in. Formally, we aim at generating the recommendation $a_t^*$ that maximizes a specific metric $\Theta$ as $a_t^* = \arg\max_{a_t} \Theta(a_t \mid U_t)$. (1) According to the target of the sequential recommender, we could instantiate $\Theta$ with a metric optimizing users' immediate feedback (i.e., prediction accuracy) or long-term user engagement (e.g., user retention). For user retention, we define it as the number of user log-ins on the platform over the next $K$ time intervals (e.g., next $K$ days, next $K$ months): $e_t = \sum_{k=1}^{K} \mathbb{1}[\text{log-in}_{t+k}]$, (2) where $\mathbb{1}$ is the indicator function and $\text{log-in}_{t+k}$ is a binary function that indicates whether the user logs in to the platform or not at the next $(t+k)$-th timestep. MDP Formulation of Sequential Recommendation. To optimize the metric $\Theta$ with reinforcement learning, we formally introduce the sequential recommendation under the framework of the Markov Decision Process (MDP), including a sequence of states, actions, and rewards. Particularly, the state $s_t = U_t$ is defined as the historical interactions. The action $a_t$ is the recommendation list, which is generated by a recommendation policy $\pi: s_t \rightarrow a_t$. Before every recommendation, $s_t$ is updated by appending the user's interested items at its end as $s_t = s_{t-1} \oplus \{U_t - U_{t-1}\}$. Given a specific evaluation metric $\Theta$, the reward is usually defined as a function of the metric. Particularly, for user retention, the reward is set the same as $e_t$. With the formally defined $s_t$, $a_t$, $r_t$, user-system interactions form a trajectory $\tau = [s_1, a_1, r_1, \ldots, s_t, a_t, r_t]$. (3) 2.2 Decision Transformer based Recommendation Unlike traditional frameworks that approximate the state-action value for planning [17] or directly optimize a gradient-based policy function [4], the Decision Transformer translates the RL problem into an autoregression problem by directly predicting the desired action from previous rewards, states, and actions (refer to Section 2.7). As a supervised learning task, DT avoids the problems of the 'Deadly Triad' and unbounded variance and shows superiority in offline learning [2]. 
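To make the formulation concrete, the following minimal Python sketch computes the user-retention reward of Eq. (2) and assembles the trajectory of Eq. (3). The helper names (retention_reward, build_trajectory) and the boolean log-in record are illustrative assumptions for exposition; they are not taken from the DT4Rec implementation.

```python
from typing import List, Tuple

def retention_reward(login_flags: List[bool], t: int, K: int) -> int:
    """Eq. (2): e_t = sum_{k=1..K} 1[log-in_{t+k}].
    login_flags[i] is True if the user logged in during interval i (assumed format)."""
    return sum(1 for k in range(1, K + 1)
               if t + k < len(login_flags) and login_flags[t + k])

def build_trajectory(states, actions, login_flags, K) -> List[Tuple]:
    """Eq. (3): tau = [s_1, a_1, r_1, ..., s_T, a_T, r_T], stored as (s_t, a_t, r_t) triples."""
    return [(s, a, retention_reward(login_flags, t, K))
            for t, (s, a) in enumerate(zip(states, actions))]
```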
To instantiate the DT in SRSs, we formally disassemble the DT into the following four blocks (shown in Figure 1). Figure 1: Framework overview of DT4Rec. Negative samples have the same states and actions as positive samples, while rewards are replaced with different values. Positive and negative samples share the same model parameters. • Embedding Module: The embedding module aims to map the rewards, states, and actions into dense vectors. Particularly, before translating the raw $s_t$, $a_t$, and $r_t$ into embeddings, we need to cast the data in the trajectory ordered by reward, state, and action to simplify the prediction of the action as $\tau'_t = [\hat{r}_1, s_1, a_1, \ldots, \hat{r}_t, s_t, a_t]$, (4) where $\hat{r}_t = \sum_{t'=t}^{T} e_{t'}$ is the cumulative reward, i.e., the return-to-go [2], and $T$ is the total number of recommendation rounds. Then, the embedding module sequentially converts the raw feature sequence $\tau'_t$ into the feature vector sequence $\boldsymbol{\tau}'_t = [\hat{\boldsymbol{r}}_1, \boldsymbol{s}_1, \boldsymbol{a}_1, \ldots, \hat{\boldsymbol{r}}_t, \boldsymbol{s}_t, \boldsymbol{a}_t]$ with the encoder model (Section 2.4). Particularly, due to the importance of the reward, we distinguish the embedding module for the reward as the reward prompt, which is designed to prompt the DT to generate the desired recommendation lists. • Decision Block: The decision block is the centre of the model, which transforms the dense vectors of contextual information $\boldsymbol{\tau}'_t - \{\boldsymbol{a}_t\} = [\hat{\boldsymbol{r}}_1, \boldsymbol{s}_1, \boldsymbol{a}_1, \ldots, \hat{\boldsymbol{r}}_t, \boldsymbol{s}_t]$ into the context feature $\boldsymbol{A}_t$ for generating the desired response in the next time step, where $\hat{\boldsymbol{r}}_t$ is the generated reward prompt. • Action Decoder: Given the contextual information $\boldsymbol{A}_t$, the action decoder is expected to generate a sequence of actions $\hat{a}_t$ that matches the ground-truth action $a_t$. • Supervised Policy Learning: The goal of supervised policy learning is to minimize the difference between the generated action $\hat{a}_t$ and the ground truth $a_t$ with a specific loss function. Therefore, it translates the ordinary RL task into a supervised learning problem. By specifying the optimal reward as the prompt at inference, the DT is expected to recommend the items that can maximize user retention. 2.3 Auto-Discretized Reward Prompt Reward, as the target of RL, differentiates the performance of different policies through its numerical value. 
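The return-to-go and the token reordering of Eq. (4) can be sketched as follows; this is a hedged illustration (the function names and the flat Python-list representation of the interleaved sequence are assumptions), not the authors' implementation.

```python
def returns_to_go(rewards):
    """Suffix sums of the per-step retention rewards: r_hat_t = sum_{t'=t}^{T} e_{t'} (Eq. 4)."""
    rtg, running = [], 0
    for r in reversed(rewards):
        running += r
        rtg.append(running)
    return list(reversed(rtg))

def reorder_for_dt(states, actions, rewards):
    """Interleave (return-to-go, state, action) tokens as tau'_t in Eq. (4)."""
    seq = []
    for rtg, s, a in zip(returns_to_go(rewards), states, actions):
        seq.extend([rtg, s, a])
    return seq
```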
2.3 Auto-Discretized Reward Prompt
Reward, as the target of RL, differentiates the performance of different policies through its numerical value. Therefore, the prompts generated by the DT should maintain the partial order relationship between rewards, i.e., if two rewards are close in value, the Euclidean distance between their generated prompts should also be small. To this end, we propose to generate more effective prompts with an auto-discretizing method, as shown in Figure 1, which consists of discretizing the numerical value and auto-weighted aggregation of the meta-embeddings learned by an MLP. It shares the embedding between similar rewards. Specifically, according to the reward value, we convert it into a weighted score over a batch of $B$ learnable meta-embeddings $\boldsymbol{M}_b \in \mathbb{R}^{d}$, $\forall b \in [1, B]$, as
$\boldsymbol{z} = \varphi\big(\boldsymbol{W}\sigma(\boldsymbol{w}\hat{r}) + \alpha\,\sigma(\boldsymbol{w}\hat{r})\big), \quad (5)$
where $\boldsymbol{w} \in \mathbb{R}^{1 \times B}$ and $\boldsymbol{W} \in \mathbb{R}^{B \times B}$ are learnable variables, $\sigma$ is the Leaky ReLU [11], and $\varphi$ is the softmax function. Given the weighted score $\boldsymbol{z}$, the reward embedding $\hat{\boldsymbol{r}}$ is set as the aggregation of the meta-embeddings, which can be formulated as
$\hat{\boldsymbol{r}} = \sum_{b=1}^{B} \boldsymbol{z}_b \boldsymbol{M}_b, \quad (6)$
where $\boldsymbol{z}_b$ is the $b$-th element of $\boldsymbol{z}$. Since the value of $\hat{r}$ is directly used as the input of the neural network, the partial order between rewards is preserved: similar $\hat{r}$ share similar embeddings as long as the neural network is smooth.

2.4 State-Action Encoder
The action encoder maps actions to vectors. The challenge lies in the dynamic length of the recommender action, since the user may interact with a different number of items each time. Therefore, we model the interaction sequence with a GRU, which has shown better performance than LSTM [14] in modeling dynamic item sequences for recommender systems. Furthermore, compared with heavy Transformer-based methods [19], the GRU is a more affordable alternative for balancing efficiency and effectiveness. Specifically, we set the maximum length of sequences to $N$; sequences shorter than $N$ are padded to length $N$ with the 0 vector [32]. Formally, the sequence information can be extracted as
$\boldsymbol{H}_n = \text{GRU}_e(\boldsymbol{v}_n, \boldsymbol{H}_{n-1}), \quad (7)$
$\boldsymbol{a}_t = \boldsymbol{H}_N, \quad (8)$
where $\text{GRU}_e$ is the recurrent layer, $N$ is the maximum sequence length, and $\boldsymbol{H}_n$ is the hidden state. That is, we set the embedding of $a_t$ to the hidden state of the last timestep.
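A possible PyTorch realization of the auto-discretized reward prompt in Eqs. (5)-(6) is sketched below; it assumes the meta-embeddings are d-dimensional vectors, uses the B = 10, d = 128, and α = 0.1 reported in Section 3.4 as defaults, and the class name itself is illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AutoDiscretizedRewardPrompt(nn.Module):
    """Map a scalar return-to-go to a weighted aggregation of B learnable meta-embeddings."""

    def __init__(self, num_buckets: int = 10, embed_dim: int = 128, alpha: float = 0.1):
        super().__init__()
        self.w = nn.Linear(1, num_buckets, bias=False)                 # w in R^{1 x B}
        self.W = nn.Linear(num_buckets, num_buckets, bias=False)       # W in R^{B x B}
        self.meta = nn.Parameter(torch.randn(num_buckets, embed_dim))  # meta-embeddings M_b
        self.alpha = alpha                                             # skip-connection weight

    def forward(self, rtg: torch.Tensor) -> torch.Tensor:
        # rtg: (batch, 1) scalar return-to-go values
        h = F.leaky_relu(self.w(rtg))                          # sigma(w * r_hat)
        z = F.softmax(self.W(h) + self.alpha * h, dim=-1)      # Eq. (5): bucket weights
        return z @ self.meta                                   # Eq. (6): reward embedding
```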
2.5 Transformer Decision Block
Prior efforts have demonstrated the superiority of Transformers and RL in SRS tasks. On the one hand, the Transformer treats the elements of a sequence equally through self-attention, which avoids the information loss of long sequences in RNNs and further diminishes gradient vanishing or explosion when modeling long sequences [19, 46, 51]. On the other hand, RL is expert at dynamically modeling users' interest profiles and capturing users' interest shifts. To get the best of both worlds, we propose to learn user interests from their interaction trajectories via Transformer decision blocks [2]. Specifically, for user-retention recommendation, we are required to model the dynamic contextual information for generating the recommendation decision, which is similar to a generative task. Therefore, we select the unidirectional Transformer layer as the backbone model for modeling the complex feature interactions, which has shown a substantial advantage over existing methods in generative tasks [10]. Furthermore, we employ skip-connections for mitigating over-fitting [12] and feed-forward neural layers for linear mapping of features [20]. Consequently, the contextual information for the recommendation decision can be formulated as
$\tilde{\boldsymbol{A}} = \text{FFN}\big(\text{MultiHeadAttention}(\boldsymbol{\tau}' - \{\boldsymbol{a}_t\})\big), \quad (9)$
where $\tilde{\boldsymbol{A}}$ is the matrix of predicted action embeddings with the $t$-th row $\tilde{\boldsymbol{a}}_t$ for $t \in [1, T]$, and $\text{FFN}(x) = \text{GELU}(x\boldsymbol{W}_1 + \boldsymbol{b}_1)\boldsymbol{W}_2 + \boldsymbol{b}_2$ is the feed-forward neural layer with skip-connection. Here, GELU [13] is a commonly used activation function in Transformer models. MultiHeadAttention denotes the multi-head self-attention mechanism defined in [33], which has proved effective for jointly learning information from different representation subspaces and is therefore widely deployed in recommender systems [18].
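Eq. (9) could be realized roughly as below; the causal mask, the 4×d hidden width of the FFN, and the placement of the residual connection are assumptions on my part, since the text only gives the functional form.

```python
import torch
import torch.nn as nn

class DecisionBlock(nn.Module):
    """Causal multi-head self-attention followed by a GELU feed-forward layer with a skip connection."""

    def __init__(self, d_model: int = 128, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model) interleaved (rtg, state, action) embeddings
        L = x.size(1)
        causal = torch.triu(torch.ones(L, L, dtype=torch.bool, device=x.device), diagonal=1)
        h, _ = self.attn(x, x, x, attn_mask=causal)  # unidirectional: no access to future steps
        return x + self.ffn(h)                       # skip connection around the feed-forward mapping
```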
2.6 Action Decoder
Given the contextual information of the items interacted with at each time step, the action decoder aims to decode the sequence of items of interest to the user with a GRU [7]. The item of the current interaction is unknown during decoding; we only use previous interactions and the contextual information to predict it as
$\hat{\boldsymbol{v}}_n = \boldsymbol{v}_n \oplus \tilde{\boldsymbol{a}}_t, \quad (10)$
$\tilde{\boldsymbol{v}}_{n+1} = \text{GRU}_d(\tilde{\boldsymbol{v}}_n, \hat{\boldsymbol{v}}_n), \quad (11)$
where $\boldsymbol{v}_n$ is the embedding of $v_n$, $\tilde{\boldsymbol{v}}_{n+1}$ is the prediction for the item at the $(n+1)$-th place, and $\oplus$ represents the concatenation operation. Since there is no information for predicting the first item, we use a 'bos' token as the start marker and initialize $\tilde{\boldsymbol{v}}_0$ randomly. The full sequence can be decoded as
$\tilde{\boldsymbol{V}} = \text{decoder}(\text{bos}, \hat{\boldsymbol{v}}_1, \dots, \hat{\boldsymbol{v}}_n, \dots, \hat{\boldsymbol{v}}_{N-1}) = [\tilde{\boldsymbol{v}}_1, \dots, \tilde{\boldsymbol{v}}_n, \dots, \tilde{\boldsymbol{v}}_N], \quad (12)$
where $\tilde{\boldsymbol{V}}$ is the predicted matrix for the interacted item sequence at the $t$-th time step and $N$ is the maximum sequence length. Notice that the length of the sequences may not be fixed. When predicting, the decoder does not know the length of the sequence, so we add an 'eos' token at the end of each sequence as an end marker and then still pad it with zeros to a length of $N$. The decoder stops predicting once it has produced 'eos'.

2.7 Contrastive Supervised Policy Learning
In the Decision Transformer, only the maximal reward is used at inference time, since it is designed to generate the actions with maximal reward. Therefore, the samples with small rewards might not be fully utilized (validated in Section 3.7). In order to fully exploit the knowledge in these samples, we propose a weighted contrastive learning approach, which treats actions with small rewards as negative samples to avoid recommending small-reward actions. Therefore, our objective function consists of two parts, a CE loss and a weighted contrastive learning loss.
2.7.1 Weighted Contrastive Loss. For each sample, we use the same state and action, with different rewards, as negative samples, denoting their predicted action embeddings as $\tilde{\boldsymbol{V}}^-$. The weighted contrastive learning loss can be formulated as
$\mathcal{L}_{\text{CL}} = -\sum_{\tilde{\boldsymbol{V}}^- \in \Upsilon} \varpi(\tilde{\boldsymbol{V}}^-) \, f_s(\tilde{\boldsymbol{V}}, \tilde{\boldsymbol{V}}^-), \quad (13)$
where $\Upsilon$ is the set of negative samples, $f_s(\cdot)$ calculates the similarity of two sequences by averaging the dot products of corresponding rows of the matrices $\tilde{\boldsymbol{V}}$ and $\tilde{\boldsymbol{V}}^-$, and $\varpi(\tilde{\boldsymbol{V}}^-)$ is a weighting hyper-parameter set according to the reward value of the negative sample, i.e., the weight is inversely proportional to the reward. The reason is that smaller rewards lead to lower user retention, and we want the prediction to be dissimilar to those samples. Besides the weighted contrastive loss, we also optimize the DT with the original cross-entropy loss as
$\hat{\boldsymbol{Y}} = \psi(\boldsymbol{W}_v \tilde{\boldsymbol{V}} + \boldsymbol{b}_v), \quad (14)$
$\mathcal{L}_{\text{CE}} = \text{CE}(\hat{\boldsymbol{Y}}, \boldsymbol{Y}), \quad (15)$
where $\boldsymbol{Y}$ is the label matrix with a one-hot label in every row and $\hat{\boldsymbol{Y}}$ is the corresponding predicted label distribution. Here, CE is the cross-entropy loss function, $\psi$ is the softmax function, and $\boldsymbol{W}_v$ and $\boldsymbol{b}_v$ are learnable parameters. Finally, the overall loss function is formulated by combining the cross-entropy loss and the weighted contrastive loss as
$\mathcal{L} = \mathcal{L}_{\text{CE}} + \beta \mathcal{L}_{\text{CL}}, \quad (16)$
where $\beta$ is a hyper-parameter.
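A minimal sketch of the objective in Eqs. (13)-(16) follows, assuming the negative samples arrive as (decoded-embedding, weight) pairs whose weights are inversely proportional to their rewards; the function names are illustrative.

```python
import torch
import torch.nn.functional as F

def sequence_similarity(v_pos: torch.Tensor, v_neg: torch.Tensor) -> torch.Tensor:
    """f_s: average row-wise dot product between two decoded sequence matrices of shape (N, d)."""
    return (v_pos * v_neg).sum(dim=-1).mean()

def dt4rec_loss(logits, labels, v_pos, negatives, beta: float = 1.0) -> torch.Tensor:
    """Cross-entropy plus weighted contrastive loss.

    logits: (N, num_items) item scores, labels: (N,) ground-truth item ids,
    negatives: iterable of (v_neg, weight) pairs for the same state/action with altered rewards.
    """
    ce = F.cross_entropy(logits, labels)                                        # Eq. (15)
    cl = -sum(w * sequence_similarity(v_pos, v_neg) for v_neg, w in negatives)  # Eq. (13)
    return ce + beta * cl                                                       # Eq. (16)
```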
3 EXPERIMENTS
In this section, we conduct experiments on two benchmark datasets, IQiYi and ML-1m, to answer the following research questions.
• RQ1: How does the performance of our proposed DT4Rec compare with other state-of-the-art baselines?
• RQ2: How does our auto-discretized reward prompt contribute to reward modeling?
• RQ3: How does our contrastive supervised policy learning method mitigate the out-of-distribution (OOD) issue between recommendation training and inference?
• RQ4: How stable is our designed evaluation method?
• RQ5: How well does our DT4Rec generalize?

Table 1: Statistics of datasets. UR is short for average user retention.
Dataset | Users | Items | Interactions | UR | Density
IQiYi | 3,000,000 | 4,000,000 | 71,046,026 | 4.80 | 5.8e-6
ML-1m | 6,014 | 3,417 | 1,000,000 | 4.13 | 4.84%

3.1 Datasets
We conduct experiments on two real datasets, the iQiyi user retention data (IQiYi) and ML-1m. Table 1 provides details of the two datasets. Explicitly, the IQiYi dataset (http://challenge.ai.iqiyi.com/) records about 70 million user interactions, e.g., views and comments, with 4 million videos, which makes it a very sparse and noisy dataset. Therefore, without loss of generality, we select users with at least 20 interaction records for the experiments. Furthermore, we do not distinguish between the types of interactions and treat them identically. User retention is set as the average number of days a user logs in during the next week, i.e., K = 7 (the setting used in the WSDM Cup 2022). The ML-1m dataset (https://grouplens.org/datasets/movielens/1m/) is a benchmark dataset commonly used in SRSs, which records 6,014 users' scores for 3,417 movies. Since the ML-1m records span a large time scope, we calculate user retention at the granularity of months.

3.2 Evaluation
We use the following evaluation metrics to evaluate the performance of the proposed methods on both immediate feedback (i.e., prediction accuracy) and long-term engagement (i.e., user retention). Specifically, these metrics are defined as follows.
3.2.1 Prediction Accuracy. To assess prediction accuracy, we use four widely used metrics for sequence prediction problems [31]: BLEU [26], ROUGE [23], NDCG, and HR:
• BLEU evaluates the precision of dynamic-length recommendations in sequence-to-sequence recommendation [27] and is a frequently used metric in NLP.
• ROUGE refers to the recall of dynamic-length recommendations and is a commonly used metric for sequence-to-sequence recommender systems [21].
• HR@K measures the probability that ground-truth items appear in the top-K recommendations.
• NDCG@K measures the cumulative gain (CG) scores in the top-K recommendations and considers the influence of position on recommendations [34], where the CG scores represent the similarity between items and ground truths.
3.2.2 User Retention. To evaluate the effectiveness of the generated recommendation sequences, we provide two designed metrics, MB-URS and SB-URS, and two common ones, improved user retention (IUR) and no return count (NRC) [35].
• MB-URS. The model-based user return score evaluates the effectiveness of recommended item sequences by directly returning a user-retention score. Particularly, we train a supervised model that predicts the reward of particular state-action pairs on the 30% validation split, which is independent of the training set; the model is optimized by minimizing the MSE loss between the predicted rewards and the ground truth.
• SB-URS. The similarity-based user return score evaluates the effectiveness of recommended item sequences by a weighted sum over the ground-truth user retention scores. Specifically, we divide the samples into 8 classes according to their ground-truth reward and calculate BLEU-1 scores between the predicted sequences and the ground truth for each class as their similarity. Then, the SB-URS is calculated as follows:
$\text{SB-URS} = \sum_{k=0}^{K} s_k \cdot \left(g_k - \frac{K}{2}\right) \cdot N_k, \quad (17)$
where $s_k$ is the similarity of class $k$, $g_k$ is the ground-truth reward of class $k$, and $N_k$ is the number of samples with a reward of $k$. We want the similarity to be as small as possible for samples with a small ground-truth reward, and as large as possible for samples with a large ground-truth reward.
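Eq. (17) reduces to a short weighted sum over the reward classes; a sketch with hypothetical argument names (per-class BLEU-1 similarities, ground-truth rewards, and sample counts):

```python
def sb_urs(similarities, rewards, counts, K: int = 7) -> float:
    """SB-URS = sum_k s_k * (g_k - K/2) * N_k over the reward classes k = 0, ..., K."""
    return sum(s * (g - K / 2) * n for s, g, n in zip(similarities, rewards, counts))

# Classes with rewards above K/2 contribute positively, so similarity to
# high-retention ground truth raises the score, while similarity to
# low-retention classes lowers it.
```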
• Improved user retention (IUR). It measures the percentage of relative improvement in users' average retention compared to their logged average retention in the offline data, which directly reflects users' engagement with the system. A larger IUR indicates that the recommendation lists keep more users engaged in the system.
• No return count (NRC). It is the percentage of users who left the system after a recommendation was made. A smaller no-return count indicates that the recommendation lists keep more users engaged in the system.

3.3 Baselines
We compare our model with state-of-the-art methods from different types of recommendation approaches, including:
• BERT4Rec [30]: It uses a bidirectional Transformer to learn sequential information. In addition, it utilizes a masked-language-model task to increase the difficulty of the task and enhance the training power of the model.
• SASRec [18]: It employs a left-to-right unidirectional Transformer that captures users' sequential preferences.
• LIRD [45]: It combines a policy-based gradient and a value-based DQN, using an actor-critic (AC) framework. At each recommendation step, multiple items are recommended, and a simulator automatically learns the optimal recommendation strategy to maximize long-term rewards from users.
• TopK [3]: It uses a policy-based RL approach in offline learning and addresses the distribution mismatch with importance weights. It assumes that the reward of a set of non-repeating items is equal to the sum of the rewards of each item and recommends multiple items to the user at each time step. In this paper, we refer to this model as TopK.
• DT4Rec-R: To illustrate the significance of the reward in guiding the model, we train a DT model without considering the reward, denoted as DT4Rec-R.

3.4 Hyperparameter Setting
In the prompt generator, we set the bucket number B = 10 and the skip-connection weight α = 0.1. In the action encoder and decoder, we set the maximum sequence length N = 20 and the number of RNN layers to 1. In the Decision Transformer, we choose the maximum trajectory length T from {10, 20, 30, 40, 50}, and set the number of Transformer layers to 2, the number of heads to 8, and the embedding size to 128. We choose AdamW as the optimizer and set the learning rate to 0.01. To save computation cost, we only keep the 30 most recently interacted items as the state. For the other baselines, the hyper-parameters are set to their optimal values as recommended by their original authors or searched within the same ranges as our model. The results for each model are reported under their optimal hyper-parameter settings. The implementation code is available online (https://github.com/kesenzhao/DT4Rec.git).

Table 2: Overall performance comparison in prediction accuracy. The best performance is marked in bold font. "*" indicates statistically significant improvements (two-sided t-test with p < 0.05) over the best baseline. ↑: the higher the better; ↓: the lower the better.
Dataset | Model | BLEU↑ | ROUGE↑ | NDCG↑ | HR↑
IQiYi | BERT4Rec | 0.7964 | 0.7693 | 0.7384 | 0.7673
IQiYi | SASRec | 0.8009 | 0.7906 | 0.7827 | 0.7815
IQiYi | DT4Rec | 0.8249* | 0.8172* | 0.8139* | 0.8044*
ML-1m | BERT4Rec | 0.3817 | 0.3806 | 0.5286 | 0.2769
ML-1m | SASRec | 0.4052 | 0.3983 | 0.5409 | 0.3123
ML-1m | DT4Rec | 0.4331* | 0.4185* | 0.5679* | 0.3342*

Table 3: Overall performance comparison in user retention. The best performance is marked in bold font. "*" indicates statistically significant improvements (two-sided t-test with p < 0.05) over the best baseline. ↑: the higher the better; ↓: the lower the better.
Dataset | Model | MB-URS↑ | SB-URS↑ | IUR↑ | NRC↓
IQiYi | DT4Rec-R | 5.16 | 52131 | 7.5% | 2.6%
IQiYi | TopK | 5.41 | 61045 | 12.7% | 2.0%
IQiYi | LIRD | 5.63 | 63572 | 17.3% | 1.9%
IQiYi | DT4Rec | 6.05* | 72270* | 26.0%* | 1.4%*
ML-1m | DT4Rec-R | 5.42 | 13050 | 31.2% | 9.7%
ML-1m | TopK | 5.71 | 13964 | 38.3% | 6.9%
ML-1m | LIRD | 5.86 | 14627 | 41.9% | 4.8%
ML-1m | DT4Rec | 5.93* | 15562* | 43.6%* | 3.6%*

Table 4: Ablation study on the IQiYi dataset.
Architecture | SB-URS | MB-URS | IUR
DT4Rec | 72270 | 6.05 | 26.0%
w/o contrastive | 65175 (-9.82%) | 5.68 (-6.11%) | 18.3% (-29.6%)
w/o weight | 69316 (-4.09%) | 5.79 (-4.30%) | 20.6% (-20.8%)
w/o auto-dis | 70953 (-1.82%) | 5.96 (-1.49%) | 24.2% (-6.9%)

3.5 RQ1: Overall Comparison
3.5.1 Prediction Accuracy. Table 2 records all models' best results on the prediction accuracy task. From Table 2, we have the following observations:
• Our model outperforms all baselines on both datasets, which illustrates its effectiveness in optimizing immediate user feedback. In comparison to the existing methods, our model takes full advantage of RL to model the rewards and states of users at each time step. This allows our model to dynamically model the user's interest characteristics to meet the changing needs of the user, so our model can achieve superior performance on the prediction accuracy task.
• SASRec performs better than BERT4Rec on both datasets. Our model and SASRec both use a unidirectional Transformer, while BERT4Rec uses a bidirectional Transformer. Since our task is an end-to-end generation task, the unidirectional Transformer is more suitable for it.
3.5.2 User Retention. Table 3 records all models' best results on the user retention maximization task. We can draw the following conclusions:
• Our model outperforms the other baselines on both MB-URS and SB-URS. DT4Rec outperforms the best baseline, LIRD, by a large margin on both IQiYi and ML-1m.
• Traditional offline methods, i.e., TopK and LIRD, still suffer from the 'Deadly Triad' problem and the short-sightedness caused by the discount factor, since they perform significantly worse than the proposed method.
• In addition, TopK models based on policy gradients are prone to converge to suboptimal solutions, and determining an appropriate learning rate can be challenging. Due to the large action space in SRSs, the LIRD model based on the AC framework has more difficulty estimating Q-values.
• Our model, however, learns the mapping from rewards and states to actions by sequential modeling, eliminating the need for bootstrapping and hence avoiding the 'Deadly Triad' issue. Unlike traditional RL strategies, our model also does not require estimating Q-values, which circumvents the problem of the large action space in SRSs.
The comparison with DT4Rec-R illustrates the importance of rewards for learning users' long-term preferences.

3.6 RQ2: Effectiveness of Auto-Discretized Reward Prompt
By comparing the results of DT4Rec and DT4Rec-R, we have demonstrated the significance of the reward as a guide for model training. In this subsection, we further show the effectiveness of our improvements for modeling the partial order of rewards via an ablation study. Specifically, we compare DT4Rec with the model using a naive prompt, which is implemented with a single-layer feed-forward neural network, i.e., 'w/o auto-dis' in Table 4. The experimental results show that the auto-discretized reward prompt is effective. Without it, the performance on SB-URS and MB-URS drops by 1.82% and 1.49%, respectively, indicating that the auto-discretized reward prompt makes the generated embeddings maintain the partial order relationship between their corresponding reward values.

(Figure 2: Comparison between the model trained on the original IQiYi dataset and on the data with the smaller-reward samples removed, i.e., Data-B. Panels: (a) SB-URS, (b) MB-URS.)
(Figure 3: Comparison between DT4Rec trained on the original IQiYi dataset and on the data with the smaller-reward samples removed, i.e., Data-B. Panels: (a) SB-URS, (b) MB-URS.)

3.7 RQ3: Validity of Contrastive Supervised Policy Learning
We further reveal the OOD problem of the DT and then illustrate the effectiveness of our improvements through the following analytical experiments.
Analysis of the OOD Problem. In the DT model, the model mainly uses the knowledge from the samples with high reward values, since only the maximum value of the reward is input when generating recommendations. Therefore, the knowledge in the samples with small rewards is not incorporated into the final recommendation. Specifically, we evaluate the model without contrastive learning on the original dataset and on Data-B, which removes the samples with rewards smaller than 4. Figure 2 shows the results on the IQiYi dataset, from which we can see that there is negligible disparity between the model's performance on Data-B and on the original dataset. This indicates that the model barely uses the knowledge of the samples with a small reward, which verifies our claim about the OOD problem.
Effectiveness of the Weighted Contrastive Learning Method (RQ3). In this paragraph, we verify the effectiveness of the weighted contrastive learning from the perspectives of improving performance and of exploiting the knowledge of small-reward samples. For the former, we compare DT4Rec with the model without the contrastive learning loss, denoted as 'w/o contrastive', to demonstrate the superiority of weighted contrastive learning. The significant performance drop in Table 4 validates the effectiveness of our weighted contrastive learning method.

(Figure 4: Comparison of different evaluation methods' variance (MB-URS, ASB-URS, RiB) on the (a) IQiYi and (b) ML-1m datasets.)

For the latter, i.e., utilizing the knowledge of small-reward samples, we compare DT4Rec with the variant that does not use the small-reward samples in Figure 3. Particularly, we plot the performance on SB-URS and MB-URS of the original model and of the version trained without the small-reward samples.
From the figure, we clearly observe that the SB-URS and MB-URS of our model on the original IQiYi dataset are higher than those of the variant trained without the small-reward samples, showing that our model makes full use of the samples with small rewards. The reason is that our model avoids recommending item sequences that are similar to the samples with small rewards, which is consistent with our intuition behind the weighted contrastive learning method.

3.8 RQ4: Evaluation Efficiency
We illustrate the validity of MB-URS and SB-URS by comparing our methods with the evaluation method proposed by Wu et al., denoted as 'RiB'. We split the test set into 5 sets and calculate the variance of the performance on these 5 test sets as the indicator of the evaluation methods' stability. For a fair comparison at the same scale, we replace SB-URS with ASB-URS, which evaluates the average similarity-based user return score by dividing by $N_k$ in Equation (17). Figure 4 plots the variance of the evaluation methods on IQiYi and ML-1m. The results show that the variance of our evaluation methods is small on both datasets, confirming the effectiveness of our proposed methods.

3.9 RQ5: Model Generalization Analysis
At the inference stage, instead of using the ground-truth reward, only the maximal reward is utilized for generating the recommendation based on the current states. Consequently, this leads to a mismatch between training and test, since the maximal reward may not exist in the training samples. Therefore, it requires the model to have a strong generalization capability, rather than just performing imitation learning on the original dataset with a certain return. We conduct behavior cloning analysis experiments on the IQiYi dataset. Specifically, we treat samples with rewards of 6 or 7 as the high-reward part and re-split the dataset into high-reward and not-high-reward parts. Furthermore, we control the proportion of high-reward samples in the whole dataset to study the models' generalization capability. In Figure 5, we plot DT4Rec's and TopK's MB-URS and IUR on these datasets with different high-reward sample proportions, i.e., {10%, 25%, 40%, 100%}.

(Figure 5: Behavior cloning analysis on the IQiYi dataset, comparing DT4Rec and TopK over the high-reward sample proportion. Panels: (a) MB-URS, (b) IUR%.)

From the figure, we observe the following facts: (1) On both new datasets, our model performs better than TopK, indicating a strong generalization capability of the proposed method. (2) DT4Rec's IUR is higher than 20% even when only 10% of the high-reward samples are used, which demonstrates the effectiveness of the proposed method in increasing user retention. When the number of high-reward samples is small, policy evaluation for high-reward samples becomes inaccurate and hard. However, our model is trained in a supervised way, which avoids the high variance of policy evaluation and boosts the model's performance at test time.

We present a novel reinforcement learning-based sequential recommender system, i.e., DT4Rec, in this work. Particularly, it avoids the instability problem and the unbounded-variance headache by casting RL as an autoregressive problem.
Furthermore, we contribute a series of technologies to the success application of DT4Rec. Specifically, the design of an auto-discretized reward prompt effectively models the numerical value of reward and allows guiding the training of models with long-term user engagement. The proposed contrastive supervised policy learning diminishes the inconsistencies between inference and training of the naive Decision Transformer. To evaluate our model, we propose two stable metrics, i.e., MB-URS and SB-URS, which are verified to be more stable than existing ones. Extensive experiments conducted on the benchmark datasets have demonstrated the effectiveness of the proposed methods. ACKNOWLEDGEMENT This research was partially supported by APRC CityU New Research Initiatives (No.9610565, Start-up Grant for New Faculty of City University of Hong Kong), SIRG CityU Strategic Interdisciplinary Research Grant (No.7020046, No.7020074), HKIDS Early Career Research Grant (No.9360163), Huawei Innovation Research Program and Ant Group (CCF-Ant Research Fund)." + }, + { + "url": "http://arxiv.org/abs/1801.00209v3", + "title": "Deep Reinforcement Learning for List-wise Recommendations", + "abstract": "Recommender systems play a crucial role in mitigating the problem of\ninformation overload by suggesting users' personalized items or services. The\nvast majority of traditional recommender systems consider the recommendation\nprocedure as a static process and make recommendations following a fixed\nstrategy. In this paper, we propose a novel recommender system with the\ncapability of continuously improving its strategies during the interactions\nwith users. We model the sequential interactions between users and a\nrecommender system as a Markov Decision Process (MDP) and leverage\nReinforcement Learning (RL) to automatically learn the optimal strategies via\nrecommending trial-and-error items and receiving reinforcements of these items\nfrom users' feedbacks. In particular, we introduce an online user-agent\ninteracting environment simulator, which can pre-train and evaluate model\nparameters offline before applying the model online. Moreover, we validate the\nimportance of list-wise recommendations during the interactions between users\nand agent, and develop a novel approach to incorporate them into the proposed\nframework LIRD for list-wide recommendations. The experimental results based on\na real-world e-commerce dataset demonstrate the effectiveness of the proposed\nframework.", + "authors": "Xiangyu Zhao, Liang Zhang, Long Xia, Zhuoye Ding, Dawei Yin, Jiliang Tang", + "published": "2017-12-30", + "updated": "2019-06-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1805.02343v2", + "title": "Deep Reinforcement Learning for Page-wise Recommendations", + "abstract": "Recommender systems can mitigate the information overload problem by\nsuggesting users' personalized items. In real-world recommendations such as\ne-commerce, a typical interaction between the system and its users is -- users\nare recommended a page of items and provide feedback; and then the system\nrecommends a new page of items. To effectively capture such interaction for\nrecommendations, we need to solve two key problems -- (1) how to update\nrecommending strategy according to user's \\textit{real-time feedback}, and 2)\nhow to generate a page of items with proper display, which pose tremendous\nchallenges to traditional recommender systems. 
In this paper, we study the\nproblem of page-wise recommendations aiming to address aforementioned two\nchallenges simultaneously. In particular, we propose a principled approach to\njointly generate a set of complementary items and the corresponding strategy to\ndisplay them in a 2-D page; and propose a novel page-wise recommendation\nframework based on deep reinforcement learning, DeepPage, which can optimize a\npage of items with proper display based on real-time feedback from users. The\nexperimental results based on a real-world e-commerce dataset demonstrate the\neffectiveness of the proposed framework.", + "authors": "Xiangyu Zhao, Long Xia, Liang Zhang, Zhuoye Ding, Dawei Yin, Jiliang Tang", + "published": "2018-05-07", + "updated": "2018-08-10", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2105.01620v2", + "title": "Data-Efficient Reinforcement Learning for Malaria Control", + "abstract": "Sequential decision-making under cost-sensitive tasks is prohibitively\ndaunting, especially for the problem that has a significant impact on people's\ndaily lives, such as malaria control, treatment recommendation. The main\nchallenge faced by policymakers is to learn a policy from scratch by\ninteracting with a complex environment in a few trials. This work introduces a\npractical, data-efficient policy learning method, named Variance-Bonus Monte\nCarlo Tree Search~(VB-MCTS), which can copy with very little data and\nfacilitate learning from scratch in only a few trials. Specifically, the\nsolution is a model-based reinforcement learning method. To avoid model bias,\nwe apply Gaussian Process~(GP) regression to estimate the transitions\nexplicitly. With the GP world model, we propose a variance-bonus reward to\nmeasure the uncertainty about the world. Adding the reward to the planning with\nMCTS can result in more efficient and effective exploration. Furthermore, the\nderived polynomial sample complexity indicates that VB-MCTS is sample\nefficient. Finally, outstanding performance on a competitive world-level RL\ncompetition and extensive experimental results verify its advantage over the\nstate-of-the-art on the challenging malaria control task.", + "authors": "Lixin Zou, Long Xia, Linfang Hou, Xiangyu Zhao, Dawei Yin", + "published": "2021-05-04", + "updated": "2021-05-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1606.08117v2", + "title": "Improved Recurrent Neural Networks for Session-based Recommendations", + "abstract": "Recurrent neural networks (RNNs) were recently proposed for the session-based\nrecommendation task. The models showed promising improvements over traditional\nrecommendation approaches. In this work, we further study RNN-based models for\nsession-based recommendations. We propose the application of two techniques to\nimprove model performance, namely, data augmentation, and a method to account\nfor shifts in the input data distribution. We also empirically study the use of\ngeneralised distillation, and a novel alternative model that directly predicts\nitem embeddings. 
Experiments on the RecSys Challenge 2015 dataset demonstrate\nrelative improvements of 12.8% and 14.8% over previously reported results on\nthe Recall@20 and Mean Reciprocal Rank@20 metrics respectively.", + "authors": "Yong Kiam Tan, Xinxing Xu, Yong Liu", + "published": "2016-06-27", + "updated": "2016-09-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2007.02095v1", + "title": "Neural Interactive Collaborative Filtering", + "abstract": "In this paper, we study collaborative filtering in an interactive setting, in\nwhich the recommender agents iterate between making recommendations and\nupdating the user profile based on the interactive feedback. The most\nchallenging problem in this scenario is how to suggest items when the user\nprofile has not been well established, i.e., recommend for cold-start users or\nwarm-start users with taste drifting. Existing approaches either rely on overly\npessimistic linear exploration strategy or adopt meta-learning based algorithms\nin a full exploitation way. In this work, to quickly catch up with the user's\ninterests, we propose to represent the exploration policy with a neural network\nand directly learn it from the feedback data. Specifically, the exploration\npolicy is encoded in the weights of multi-channel stacked self-attention neural\nnetworks and trained with efficient Q-learning by maximizing users' overall\nsatisfaction in the recommender systems. The key insight is that the satisfied\nrecommendations triggered by the exploration recommendation can be viewed as\nthe exploration bonus (delayed reward) for its contribution on improving the\nquality of the user profile. Therefore, the proposed exploration policy, to\nbalance between learning the user profile and making accurate recommendations,\ncan be directly optimized by maximizing users' long-term satisfaction with\nreinforcement learning. Extensive experiments and analysis conducted on three\nbenchmark collaborative filtering datasets have demonstrated the advantage of\nour method over state-of-the-art methods.", + "authors": "Lixin Zou, Long Xia, Yulong Gu, Xiangyu Zhao, Weidong Liu, Jimmy Xiangji Huang, Dawei Yin", + "published": "2020-07-04", + "updated": "2020-07-04", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2204.11510v1", + "title": "MLP4Rec: A Pure MLP Architecture for Sequential Recommendations", + "abstract": "Self-attention models have achieved state-of-the-art performance in\nsequential recommender systems by capturing the sequential dependencies among\nuser-item interactions. However, they rely on positional embeddings to retain\nthe sequential information, which may break the semantics of item embeddings.\nIn addition, most existing works assume that such sequential dependencies exist\nsolely in the item embeddings, but neglect their existence among the item\nfeatures. In this work, we propose a novel sequential recommender system\n(MLP4Rec) based on the recent advances of MLP-based architectures, which is\nnaturally sensitive to the order of items in a sequence. To be specific, we\ndevelop a tri-directional fusion scheme to coherently capture sequential,\ncross-channel and cross-feature correlations. Extensive experiments demonstrate\nthe effectiveness of MLP4Rec over various representative baselines upon two\nbenchmark datasets. 
The simple architecture of MLP4Rec also leads to the linear\ncomputational complexity as well as much fewer model parameters than existing\nself-attention methods.", + "authors": "Muyang Li, Xiangyu Zhao, Chuan Lyu, Minghao Zhao, Runze Wu, Ruocheng Guo", + "published": "2022-04-25", + "updated": "2022-04-25", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1802.06501v3", + "title": "Recommendations with Negative Feedback via Pairwise Deep Reinforcement Learning", + "abstract": "Recommender systems play a crucial role in mitigating the problem of\ninformation overload by suggesting users' personalized items or services. The\nvast majority of traditional recommender systems consider the recommendation\nprocedure as a static process and make recommendations following a fixed\nstrategy. In this paper, we propose a novel recommender system with the\ncapability of continuously improving its strategies during the interactions\nwith users. We model the sequential interactions between users and a\nrecommender system as a Markov Decision Process (MDP) and leverage\nReinforcement Learning (RL) to automatically learn the optimal strategies via\nrecommending trial-and-error items and receiving reinforcements of these items\nfrom users' feedback. Users' feedback can be positive and negative and both\ntypes of feedback have great potentials to boost recommendations. However, the\nnumber of negative feedback is much larger than that of positive one; thus\nincorporating them simultaneously is challenging since positive feedback could\nbe buried by negative one. In this paper, we develop a novel approach to\nincorporate them into the proposed deep recommender system (DEERS) framework.\nThe experimental results based on real-world e-commerce data demonstrate the\neffectiveness of the proposed framework. Further experiments have been\nconducted to understand the importance of both positive and negative feedback\nin recommendations.", + "authors": "Xiangyu Zhao, Liang Zhang, Zhuoye Ding, Long Xia, Jiliang Tang, Dawei Yin", + "published": "2018-02-19", + "updated": "2018-08-10", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.LG", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1902.05570v4", + "title": "Reinforcement Learning to Optimize Long-term User Engagement in Recommender Systems", + "abstract": "Recommender systems play a crucial role in our daily lives. Feed streaming\nmechanism has been widely used in the recommender system, especially on the\nmobile Apps. The feed streaming setting provides users the interactive manner\nof recommendation in never-ending feeds. In such an interactive manner, a good\nrecommender system should pay more attention to user stickiness, which is far\nbeyond classical instant metrics, and typically measured by {\\bf long-term user\nengagement}. Directly optimizing the long-term user engagement is a non-trivial\nproblem, as the learning target is usually not available for conventional\nsupervised learning methods. Though reinforcement learning~(RL) naturally fits\nthe problem of maximizing the long term rewards, applying RL to optimize\nlong-term user engagement is still facing challenges: user behaviors are\nversatile and difficult to model, which typically consists of both instant\nfeedback~(e.g. clicks, ordering) and delayed feedback~(e.g. 
dwell time,\nrevisit); in addition, performing effective off-policy learning is still\nimmature, especially when combining bootstrapping and function approximation.\n To address these issues, in this work, we introduce a reinforcement learning\nframework --- FeedRec to optimize the long-term user engagement. FeedRec\nincludes two components: 1)~a Q-Network which designed in hierarchical LSTM\ntakes charge of modeling complex user behaviors, and 2)~an S-Network, which\nsimulates the environment, assists the Q-Network and voids the instability of\nconvergence in policy learning. Extensive experiments on synthetic data and a\nreal-world large scale data show that FeedRec effectively optimizes the\nlong-term user engagement and outperforms state-of-the-arts.", + "authors": "Lixin Zou, Long Xia, Zhuoye Ding, Jiaxing Song, Weidong Liu, Dawei Yin", + "published": "2019-02-13", + "updated": "2019-07-11", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2109.03540v2", + "title": "A Survey of Deep Reinforcement Learning in Recommender Systems: A Systematic Review and Future Directions", + "abstract": "In light of the emergence of deep reinforcement learning (DRL) in recommender\nsystems research and several fruitful results in recent years, this survey aims\nto provide a timely and comprehensive overview of the recent trends of deep\nreinforcement learning in recommender systems. We start with the motivation of\napplying DRL in recommender systems. Then, we provide a taxonomy of current\nDRL-based recommender systems and a summary of existing methods. We discuss\nemerging topics and open issues, and provide our perspective on advancing the\ndomain. This survey serves as introductory material for readers from academia\nand industry into the topic and identifies notable opportunities for further\nresearch.", + "authors": "Xiaocong Chen, Lina Yao, Julian McAuley, Guanglin Zhou, Xianzhi Wang", + "published": "2021-09-08", + "updated": "2021-09-09", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1811.05869v1", + "title": "Large-scale Interactive Recommendation with Tree-structured Policy Gradient", + "abstract": "Reinforcement learning (RL) has recently been introduced to interactive\nrecommender systems (IRS) because of its nature of learning from dynamic\ninteractions and planning for long-run performance. As IRS is always with\nthousands of items to recommend (i.e., thousands of actions), most existing\nRL-based methods, however, fail to handle such a large discrete action space\nproblem and thus become inefficient. The existing work that tries to deal with\nthe large discrete action space problem by utilizing the deep deterministic\npolicy gradient framework suffers from the inconsistency between the continuous\naction representation (the output of the actor network) and the real discrete\naction. To avoid such inconsistency and achieve high efficiency and\nrecommendation effectiveness, in this paper, we propose a Tree-structured\nPolicy Gradient Recommendation (TPGR) framework, where a balanced hierarchical\nclustering tree is built over the items and picking an item is formulated as\nseeking a path from the root to a certain leaf of the tree. 
Extensive\nexperiments on carefully-designed environments based on two real-world datasets\ndemonstrate that our model provides superior recommendation performance and\nsignificant efficiency improvement over state-of-the-art methods.", + "authors": "Haokun Chen, Xinyi Dai, Han Cai, Weinan Zhang, Xuejian Wang, Ruiming Tang, Yuzhou Zhang, Yong Yu", + "published": "2018-11-14", + "updated": "2018-11-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2205.09550v1", + "title": "Data Valuation for Offline Reinforcement Learning", + "abstract": "The success of deep reinforcement learning (DRL) hinges on the availability\nof training data, which is typically obtained via a large number of environment\ninteractions. In many real-world scenarios, costs and risks are associated with\ngathering these data. The field of offline reinforcement learning addresses\nthese issues through outsourcing the collection of data to a domain expert or a\ncarefully monitored program and subsequently searching for a batch-constrained\noptimal policy. With the emergence of data markets, an alternative to\nconstructing a dataset in-house is to purchase external data. However, while\nstate-of-the-art offline reinforcement learning approaches have shown a lot of\npromise, they currently rely on carefully constructed datasets that are well\naligned with the intended target domains. This raises questions regarding the\ntransferability and robustness of an offline reinforcement learning agent\ntrained on externally acquired data. In this paper, we empirically evaluate the\nability of the current state-of-the-art offline reinforcement learning\napproaches to coping with the source-target domain mismatch within two MuJoCo\nenvironments, finding that current state-of-the-art offline reinforcement\nlearning algorithms underperform in the target domain. To address this, we\npropose data valuation for offline reinforcement learning (DVORL), which allows\nus to identify relevant and high-quality transitions, improving the performance\nand transferability of policies learned by offline reinforcement learning\nalgorithms. The results show that our method outperforms offline reinforcement\nlearning baselines on two MuJoCo environments.", + "authors": "Amir Abolfazli, Gregory Palmer, Daniel Kudenko", + "published": "2022-05-19", + "updated": "2022-05-19", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.08128v1", + "title": "Conservative Data Sharing for Multi-Task Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) algorithms have shown promising results\nin domains where abundant pre-collected data is available. However, prior\nmethods focus on solving individual problems from scratch with an offline\ndataset without considering how an offline RL agent can acquire multiple\nskills. We argue that a natural use case of offline RL is in settings where we\ncan pool large amounts of data collected in various scenarios for solving\ndifferent tasks, and utilize all of this data to learn behaviors for all the\ntasks more effectively rather than training each one in isolation. However,\nsharing data across all tasks in multi-task offline RL performs surprisingly\npoorly in practice. 
Thorough empirical analysis, we find that sharing data can\nactually exacerbate the distributional shift between the learned policy and the\ndataset, which in turn can lead to divergence of the learned policy and poor\nperformance. To address this challenge, we develop a simple technique for\ndata-sharing in multi-task offline RL that routes data based on the improvement\nover the task-specific data. We call this approach conservative data sharing\n(CDS), and it can be applied with multiple single-task offline RL methods. On a\nrange of challenging multi-task locomotion, navigation, and vision-based\nrobotic manipulation problems, CDS achieves the best or comparable performance\ncompared to prior offline multi-task RL methods and previous data sharing\napproaches.", + "authors": "Tianhe Yu, Aviral Kumar, Yevgen Chebotar, Karol Hausman, Sergey Levine, Chelsea Finn", + "published": "2021-09-16", + "updated": "2021-09-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2404.10393v1", + "title": "Offline Trajectory Generalization for Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) aims to learn policies from static\ndatasets of previously collected trajectories. Existing methods for offline RL\neither constrain the learned policy to the support of offline data or utilize\nmodel-based virtual environments to generate simulated rollouts. However, these\nmethods suffer from (i) poor generalization to unseen states; and (ii) trivial\nimprovement from low-qualified rollout simulation. In this paper, we propose\noffline trajectory generalization through world transformers for offline\nreinforcement learning (OTTO). Specifically, we use casual Transformers, a.k.a.\nWorld Transformers, to predict state dynamics and the immediate reward. Then we\npropose four strategies to use World Transformers to generate high-rewarded\ntrajectory simulation by perturbing the offline data. Finally, we jointly use\noffline data with simulated data to train an offline RL algorithm. OTTO serves\nas a plug-in module and can be integrated with existing offline RL methods to\nenhance them with better generalization capability of transformers and\nhigh-rewarded data augmentation. Conducting extensive experiments on D4RL\nbenchmark datasets, we verify that OTTO significantly outperforms\nstate-of-the-art offline RL methods.", + "authors": "Ziqi Zhao, Zhaochun Ren, Liu Yang, Fajie Yuan, Pengjie Ren, Zhumin Chen, jun Ma, Xin Xin", + "published": "2024-04-16", + "updated": "2024-04-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.15503v1", + "title": "Prioritized Trajectory Replay: A Replay Memory for Data-driven Reinforcement Learning", + "abstract": "In recent years, data-driven reinforcement learning (RL), also known as\noffline RL, have gained significant attention. However, the role of data\nsampling techniques in offline RL has been overlooked despite its potential to\nenhance online RL performance. Recent research suggests applying sampling\ntechniques directly to state-transitions does not consistently improve\nperformance in offline RL. 
Therefore, in this study, we propose a memory\ntechnique, (Prioritized) Trajectory Replay (TR/PTR), which extends the sampling\nperspective to trajectories for more comprehensive information extraction from\nlimited data. TR enhances learning efficiency by backward sampling of\ntrajectories that optimizes the use of subsequent state information. Building\non TR, we build the weighted critic target to avoid sampling unseen actions in\noffline training, and Prioritized Trajectory Replay (PTR) that enables more\nefficient trajectory sampling, prioritized by various trajectory priority\nmetrics. We demonstrate the benefits of integrating TR and PTR with existing\noffline RL algorithms on D4RL. In summary, our research emphasizes the\nsignificance of trajectory-based data sampling techniques in enhancing the\nefficiency and performance of offline RL algorithms.", + "authors": "Jinyi Liu, Yi Ma, Jianye Hao, Yujing Hu, Yan Zheng, Tangjie Lv, Changjie Fan", + "published": "2023-06-27", + "updated": "2023-06-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.04779v3", + "title": "Challenges and Opportunities in Offline Reinforcement Learning from Visual Observations", + "abstract": "Offline reinforcement learning has shown great promise in leveraging large\npre-collected datasets for policy learning, allowing agents to forgo\noften-expensive online data collection. However, offline reinforcement learning\nfrom visual observations with continuous action spaces remains under-explored,\nwith a limited understanding of the key challenges in this complex domain. In\nthis paper, we establish simple baselines for continuous control in the visual\ndomain and introduce a suite of benchmarking tasks for offline reinforcement\nlearning from visual observations designed to better represent the data\ndistributions present in real-world offline RL problems and guided by a set of\ndesiderata for offline RL from visual observations, including robustness to\nvisual distractions and visually identifiable changes in dynamics. Using this\nsuite of benchmarking tasks, we show that simple modifications to two popular\nvision-based online reinforcement learning algorithms, DreamerV2 and DrQ-v2,\nsuffice to outperform existing offline RL methods and establish competitive\nbaselines for continuous control in the visual domain. We rigorously evaluate\nthese algorithms and perform an empirical evaluation of the differences between\nstate-of-the-art model-based and model-free offline RL methods for continuous\ncontrol from visual observations. All code and data used in this evaluation are\nopen-sourced to facilitate progress in this domain.", + "authors": "Cong Lu, Philip J. Ball, Tim G. J. Rudner, Jack Parker-Holder, Michael A. Osborne, Yee Whye Teh", + "published": "2022-06-09", + "updated": "2023-07-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.13464v3", + "title": "When to Trust Your Simulator: Dynamics-Aware Hybrid Offline-and-Online Reinforcement Learning", + "abstract": "Learning effective reinforcement learning (RL) policies to solve real-world\ncomplex tasks can be quite challenging without a high-fidelity simulation\nenvironment. 
In most cases, we are only given imperfect simulators with\nsimplified dynamics, which inevitably lead to severe sim-to-real gaps in RL\npolicy learning. The recently emerged field of offline RL provides another\npossibility to learn policies directly from pre-collected historical data.\nHowever, to achieve reasonable performance, existing offline RL algorithms need\nimpractically large offline data with sufficient state-action space coverage\nfor training. This brings up a new question: is it possible to combine learning\nfrom limited real data in offline RL and unrestricted exploration through\nimperfect simulators in online RL to address the drawbacks of both approaches?\nIn this study, we propose the Dynamics-Aware Hybrid Offline-and-Online\nReinforcement Learning (H2O) framework to provide an affirmative answer to this\nquestion. H2O introduces a dynamics-aware policy evaluation scheme, which\nadaptively penalizes the Q function learning on simulated state-action pairs\nwith large dynamics gaps, while also simultaneously allowing learning from a\nfixed real-world dataset. Through extensive simulation and real-world tasks, as\nwell as theoretical analysis, we demonstrate the superior performance of H2O\nagainst other cross-domain online and offline RL algorithms. H2O provides a\nbrand new hybrid offline-and-online RL paradigm, which can potentially shed\nlight on future RL algorithm design for solving practical real-world tasks.", + "authors": "Haoyi Niu, Shubham Sharma, Yiwen Qiu, Ming Li, Guyue Zhou, Jianming Hu, Xianyuan Zhan", + "published": "2022-06-27", + "updated": "2023-01-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2307.02752v2", + "title": "Offline Reinforcement Learning with Imbalanced Datasets", + "abstract": "The prevalent use of benchmarks in current offline reinforcement learning\n(RL) research has led to a neglect of the imbalance of real-world dataset\ndistributions in the development of models. The real-world offline RL dataset\nis often imbalanced over the state space due to the challenge of exploration or\nsafety considerations. In this paper, we specify properties of imbalanced\ndatasets in offline RL, where the state coverage follows a power law\ndistribution characterized by skewed policies. Theoretically and empirically,\nwe show that typically offline RL methods based on distributional constraints,\nsuch as conservative Q-learning (CQL), are ineffective in extracting policies\nunder the imbalanced dataset. Inspired by natural intelligence, we propose a\nnovel offline RL method that utilizes the augmentation of CQL with a retrieval\nprocess to recall past related experiences, effectively alleviating the\nchallenges posed by imbalanced datasets. We evaluate our method on several\ntasks in the context of imbalanced datasets with varying levels of imbalance,\nutilizing the variant of D4RL. 
Empirical results demonstrate the superiority of\nour method over other baselines.", + "authors": "Li Jiang, Sijie Chen, Jielin Qiu, Haoran Xu, Wai Kin Chan, Zhao Ding", + "published": "2023-07-06", + "updated": "2023-07-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2212.08232v1", + "title": "Offline Robot Reinforcement Learning with Uncertainty-Guided Human Expert Sampling", + "abstract": "Recent advances in batch (offline) reinforcement learning have shown\npromising results in learning from available offline data and proved offline\nreinforcement learning to be an essential toolkit in learning control policies\nin a model-free setting. An offline reinforcement learning algorithm applied to\na dataset collected by a suboptimal non-learning-based algorithm can result in\na policy that outperforms the behavior agent used to collect the data. Such a\nscenario is frequent in robotics, where existing automation is collecting\noperational data. Although offline learning techniques can learn from data\ngenerated by a sub-optimal behavior agent, there is still an opportunity to\nimprove the sample complexity of existing offline reinforcement learning\nalgorithms by strategically introducing human demonstration data into the\ntraining process. To this end, we propose a novel approach that uses\nuncertainty estimation to trigger the injection of human demonstration data and\nguide policy training towards optimal behavior while reducing overall sample\ncomplexity. Our experiments show that this approach is more sample efficient\nwhen compared to a naive way of combining expert data with data collected from\na sub-optimal agent. We augmented an existing offline reinforcement learning\nalgorithm Conservative Q-Learning with our approach and performed experiments\non data collected from MuJoCo and OffWorld Gym learning environments.", + "authors": "Ashish Kumar, Ilya Kuzovkin", + "published": "2022-12-16", + "updated": "2022-12-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.08331v1", + "title": "Accelerating Offline Reinforcement Learning Application in Real-Time Bidding and Recommendation: Potential Use of Simulation", + "abstract": "In recommender systems (RecSys) and real-time bidding (RTB) for online\nadvertisements, we often try to optimize sequential decision making using\nbandit and reinforcement learning (RL) techniques. In these applications,\noffline reinforcement learning (offline RL) and off-policy evaluation (OPE) are\nbeneficial because they enable safe policy optimization using only logged data\nwithout any risky online interaction. In this position paper, we explore the\npotential of using simulation to accelerate practical research of offline RL\nand OPE, particularly in RecSys and RTB. Specifically, we discuss how\nsimulation can help us conduct empirical research of offline RL and OPE. We\ntake a position to argue that we should effectively use simulations in the\nempirical research of offline RL and OPE. To refute the counterclaim that\nexperiments using only real-world data are preferable, we first point out the\nunderlying risks and reproducibility issue in real-world experiments. Then, we\ndescribe how these issues can be addressed by using simulations. 
Moreover, we\nshow how to incorporate the benefits of both real-world and simulation-based\nexperiments to defend our position. Finally, we also present an open challenge\nto further facilitate practical research of offline RL and OPE in RecSys and\nRTB, with respect to public simulation platforms. As a possible solution for\nthe issue, we show our ongoing open source project and its potential use case.\nWe believe that building and utilizing simulation-based evaluation platforms\nfor offline RL and OPE will be of great interest and relevance for the RecSys\nand RTB community.", + "authors": "Haruka Kiyohara, Kosuke Kawakami, Yuta Saito", + "published": "2021-09-17", + "updated": "2021-09-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2210.00750v2", + "title": "Offline Reinforcement Learning with Differentiable Function Approximation is Provably Efficient", + "abstract": "Offline reinforcement learning, which aims at optimizing sequential\ndecision-making strategies with historical data, has been extensively applied\nin real-life applications. State-Of-The-Art algorithms usually leverage\npowerful function approximators (e.g. neural networks) to alleviate the sample\ncomplexity hurdle for better empirical performances. Despite the successes, a\nmore systematic understanding of the statistical complexity for function\napproximation remains lacking. Towards bridging the gap, we take a step by\nconsidering offline reinforcement learning with differentiable function class\napproximation (DFA). This function class naturally incorporates a wide range of\nmodels with nonlinear/nonconvex structures. Most importantly, we show offline\nRL with differentiable function approximation is provably efficient by\nanalyzing the pessimistic fitted Q-learning (PFQL) algorithm, and our results\nprovide the theoretical basis for understanding a variety of practical\nheuristics that rely on Fitted Q-Iteration style design. In addition, we\nfurther improve our guarantee with a tighter instance-dependent\ncharacterization. We hope our work could draw interest in studying\nreinforcement learning with differentiable function approximation beyond the\nscope of current research.", + "authors": "Ming Yin, Mengdi Wang, Yu-Xiang Wang", + "published": "2022-10-03", + "updated": "2022-11-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.03383v2", + "title": "On the Role of Discount Factor in Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) enables effective learning from\npreviously collected data without exploration, which shows great promise in\nreal-world applications when exploration is expensive or even infeasible. The\ndiscount factor, $\\gamma$, plays a vital role in improving online RL sample\nefficiency and estimation accuracy, but the role of the discount factor in\noffline RL is not well explored. This paper examines two distinct effects of\n$\\gamma$ in offline RL with theoretical analysis, namely the regularization\neffect and the pessimism effect. On the one hand, $\\gamma$ is a regulator to\ntrade-off optimality with sample efficiency upon existing offline techniques.\nOn the other hand, lower guidance $\\gamma$ can also be seen as a way of\npessimism where we optimize the policy's performance in the worst possible\nmodels. 
We empirically verify the above theoretical observation with tabular\nMDPs and standard D4RL tasks. The results show that the discount factor plays\nan essential role in the performance of offline RL algorithms, both under small\ndata regimes upon existing offline methods and in large data regimes without\nother conservative methods.", + "authors": "Hao Hu, Yiqin Yang, Qianchuan Zhao, Chongjie Zhang", + "published": "2022-06-07", + "updated": "2022-06-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2201.10070v1", + "title": "MOORe: Model-based Offline-to-Online Reinforcement Learning", + "abstract": "With the success of offline reinforcement learning (RL), offline trained RL\npolicies have the potential to be further improved when deployed online. A\nsmooth transfer of the policy matters in safe real-world deployment. Besides,\nfast adaptation of the policy plays a vital role in practical online\nperformance improvement. To tackle these challenges, we propose a simple yet\nefficient algorithm, Model-based Offline-to-Online Reinforcement learning\n(MOORe), which employs a prioritized sampling scheme that can dynamically\nadjust the offline and online data for smooth and efficient online adaptation\nof the policy. We provide a theoretical foundation for our algorithms design.\nExperiment results on the D4RL benchmark show that our algorithm smoothly\ntransfers from offline to online stages while enabling sample-efficient online\nadaption, and also significantly outperforms existing methods.", + "authors": "Yihuan Mao, Chao Wang, Bin Wang, Chongjie Zhang", + "published": "2022-01-25", + "updated": "2022-01-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2106.10411v2", + "title": "Boosting Offline Reinforcement Learning with Residual Generative Modeling", + "abstract": "Offline reinforcement learning (RL) tries to learn the near-optimal policy\nwith recorded offline experience without online exploration. Current offline RL\nresearch includes: 1) generative modeling, i.e., approximating a policy using\nfixed data; and 2) learning the state-action value function. While most\nresearch focuses on the state-action function part through reducing the\nbootstrapping error in value function approximation induced by the distribution\nshift of training data, the effects of error propagation in generative modeling\nhave been neglected. In this paper, we analyze the error in generative\nmodeling. We propose AQL (action-conditioned Q-learning), a residual generative\nmodel to reduce policy approximation error for offline RL. We show that our\nmethod can learn more accurate policy approximations in different benchmark\ndatasets. 
In addition, we show that the proposed offline RL method can learn\nmore competitive AI agents in complex control tasks under the multiplayer\nonline battle arena (MOBA) game Honor of Kings.", + "authors": "Hua Wei, Deheng Ye, Zhao Liu, Hao Wu, Bo Yuan, Qiang Fu, Wei Yang, Zhenhui Li", + "published": "2021-06-19", + "updated": "2021-06-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "68T01", + "I.2.8; I.2.1" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2005.01643v3", + "title": "Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems", + "abstract": "In this tutorial article, we aim to provide the reader with the conceptual\ntools needed to get started on research on offline reinforcement learning\nalgorithms: reinforcement learning algorithms that utilize previously collected\ndata, without additional online data collection. Offline reinforcement learning\nalgorithms hold tremendous promise for making it possible to turn large\ndatasets into powerful decision making engines. Effective offline reinforcement\nlearning methods would be able to extract policies with the maximum possible\nutility out of the available data, thereby allowing automation of a wide range\nof decision-making domains, from healthcare and education to robotics. However,\nthe limitations of current algorithms make this difficult. We will aim to\nprovide the reader with an understanding of these challenges, particularly in\nthe context of modern deep reinforcement learning methods, and describe some\npotential solutions that have been explored in recent work to mitigate these\nchallenges, along with recent applications, and a discussion of perspectives on\nopen problems in the field.", + "authors": "Sergey Levine, Aviral Kumar, George Tucker, Justin Fu", + "published": "2020-05-04", + "updated": "2020-11-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.02439v2", + "title": "DiffStitch: Boosting Offline Reinforcement Learning with Diffusion-based Trajectory Stitching", + "abstract": "In offline reinforcement learning (RL), the performance of the learned policy\nhighly depends on the quality of offline datasets. However, in many cases, the\noffline dataset contains very limited optimal trajectories, which poses a\nchallenge for offline RL algorithms as agents must acquire the ability to\ntransit to high-reward regions. To address this issue, we introduce\nDiffusion-based Trajectory Stitching (DiffStitch), a novel diffusion-based data\naugmentation pipeline that systematically generates stitching transitions\nbetween trajectories. DiffStitch effectively connects low-reward trajectories\nwith high-reward trajectories, forming globally optimal trajectories to address\nthe challenges faced by offline RL algorithms. Empirical experiments conducted\non D4RL datasets demonstrate the effectiveness of DiffStitch across RL\nmethodologies. 
Notably, DiffStitch demonstrates substantial enhancements in the\nperformance of one-step methods (IQL), imitation learning methods (TD3+BC), and\ntrajectory optimization methods (DT).", + "authors": "Guanghe Li, Yixiang Shan, Zhengbang Zhu, Ting Long, Weinan Zhang", + "published": "2024-02-04", + "updated": "2024-02-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.10813v2", + "title": "A Workflow for Offline Model-Free Robotic Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) enables learning control policies by\nutilizing only prior experience, without any online interaction. This can allow\nrobots to acquire generalizable skills from large and diverse datasets, without\nany costly or unsafe online data collection. Despite recent algorithmic\nadvances in offline RL, applying these methods to real-world problems has\nproven challenging. Although offline RL methods can learn from prior data,\nthere is no clear and well-understood process for making various design\nchoices, from model architecture to algorithm hyperparameters, without actually\nevaluating the learned policies online. In this paper, our aim is to develop a\npractical workflow for using offline RL analogous to the relatively\nwell-understood workflows for supervised learning problems. To this end, we\ndevise a set of metrics and conditions that can be tracked over the course of\noffline training, and can inform the practitioner about how the algorithm and\nmodel architecture should be adjusted to improve final performance. Our\nworkflow is derived from a conceptual understanding of the behavior of\nconservative offline RL algorithms and cross-validation in supervised learning.\nWe demonstrate the efficacy of this workflow in producing effective policies\nwithout any online tuning, both in several simulated robotic learning scenarios\nand for three tasks on two distinct real robots, focusing on learning\nmanipulation skills with raw image observations with sparse binary rewards.\nExplanatory video and additional results can be found at\nsites.google.com/view/offline-rl-workflow", + "authors": "Aviral Kumar, Anikait Singh, Stephen Tian, Chelsea Finn, Sergey Levine", + "published": "2021-09-22", + "updated": "2021-09-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2005.05951v3", + "title": "MOReL : Model-Based Offline Reinforcement Learning", + "abstract": "In offline reinforcement learning (RL), the goal is to learn a highly\nrewarding policy based solely on a dataset of historical interactions with the\nenvironment. The ability to train RL policies offline can greatly expand the\napplicability of RL, its data efficiency, and its experimental velocity. Prior\nwork in offline RL has been confined almost exclusively to model-free RL\napproaches. In this work, we present MOReL, an algorithmic framework for\nmodel-based offline RL. This framework consists of two steps: (a) learning a\npessimistic MDP (P-MDP) using the offline dataset; and (b) learning a\nnear-optimal policy in this P-MDP. The learned P-MDP has the property that for\nany policy, the performance in the real environment is approximately\nlower-bounded by the performance in the P-MDP. 
This enables it to serve as a\ngood surrogate for purposes of policy evaluation and learning, and overcome\ncommon pitfalls of model-based RL like model exploitation. Theoretically, we\nshow that MOReL is minimax optimal (up to log factors) for offline RL. Through\nexperiments, we show that MOReL matches or exceeds state-of-the-art results in\nwidely studied offline RL benchmarks. Moreover, the modular design of MOReL\nenables future advances in its components (e.g. generative modeling,\nuncertainty estimation, planning etc.) to directly translate into advances for\noffline RL.", + "authors": "Rahul Kidambi, Aravind Rajeswaran, Praneeth Netrapalli, Thorsten Joachims", + "published": "2020-05-12", + "updated": "2021-03-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2203.06662v1", + "title": "DARA: Dynamics-Aware Reward Augmentation in Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning algorithms promise to be applicable in\nsettings where a fixed dataset is available and no new experience can be\nacquired. However, such formulation is inevitably offline-data-hungry and, in\npractice, collecting a large offline dataset for one specific task over one\nspecific environment is also costly and laborious. In this paper, we thus 1)\nformulate the offline dynamics adaptation by using (source) offline data\ncollected from another dynamics to relax the requirement for the extensive\n(target) offline data, 2) characterize the dynamics shift problem in which\nprior offline methods do not scale well, and 3) derive a simple dynamics-aware\nreward augmentation (DARA) framework from both model-free and model-based\noffline settings. Specifically, DARA emphasizes learning from those source\ntransition pairs that are adaptive for the target environment and mitigates the\noffline dynamics shift by characterizing state-action-next-state pairs instead\nof the typical state-action distribution sketched by prior offline RL methods.\nThe experimental evaluation demonstrates that DARA, by augmenting rewards in\nthe source offline dataset, can acquire an adaptive policy for the target\nenvironment and yet significantly reduce the requirement of target offline\ndata. With only modest amounts of target offline data, our performance\nconsistently outperforms the prior offline RL methods in both simulated and\nreal-world tasks.", + "authors": "Jinxin Liu, Hongyin Zhang, Donglin Wang", + "published": "2022-03-13", + "updated": "2022-03-13", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2403.09701v2", + "title": "A Natural Extension To Online Algorithms For Hybrid RL With Limited Coverage", + "abstract": "Hybrid Reinforcement Learning (RL), leveraging both online and offline data,\nhas garnered recent interest, yet research on its provable benefits remains\nsparse. Additionally, many existing hybrid RL algorithms (Song et al., 2023;\nNakamoto et al., 2023; Amortila et al., 2024) impose coverage assumptions on\nthe offline dataset, but we show that this is unnecessary. A well-designed\nonline algorithm should \"fill in the gaps\" in the offline dataset, exploring\nstates and actions that the behavior policy did not explore. 
Unlike previous\napproaches that focus on estimating the offline data distribution to guide\nonline exploration (Li et al., 2023b), we show that a natural extension to\nstandard optimistic online algorithms -- warm-starting them by including the\noffline dataset in the experience replay buffer -- achieves similar provable\ngains from hybrid data even when the offline dataset does not have\nsingle-policy concentrability. We accomplish this by partitioning the\nstate-action space into two, bounding the regret on each partition through an\noffline and an online complexity measure, and showing that the regret of this\nhybrid RL algorithm can be characterized by the best partition -- despite the\nalgorithm not knowing the partition itself. As an example, we propose\nDISC-GOLF, a modification of an existing optimistic online algorithm with\ngeneral function approximation called GOLF used in Jin et al. (2021); Xie et\nal. (2022a), and show that it demonstrates provable gains over both online-only\nand offline-only reinforcement learning, with competitive bounds when\nspecialized to the tabular, linear and block MDP cases. Numerical simulations\nfurther validate our theory that hybrid data facilitates more efficient\nexploration, supporting the potential of hybrid RL in various scenarios.", + "authors": "Kevin Tan, Ziping Xu", + "published": "2024-03-07", + "updated": "2024-03-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2302.03086v1", + "title": "DITTO: Offline Imitation Learning with World Models", + "abstract": "We propose DITTO, an offline imitation learning algorithm which uses world\nmodels and on-policy reinforcement learning to addresses the problem of\ncovariate shift, without access to an oracle or any additional online\ninteractions. We discuss how world models enable offline, on-policy imitation\nlearning, and propose a simple intrinsic reward defined in the world model\nlatent space that induces imitation learning by reinforcement learning.\nTheoretically, we show that our formulation induces a divergence bound between\nexpert and learner, in turn bounding the difference in reward. We test our\nmethod on difficult Atari environments from pixels alone, and achieve\nstate-of-the-art performance in the offline setting.", + "authors": "Branton DeMoss, Paul Duckworth, Nick Hawes, Ingmar Posner", + "published": "2023-02-06", + "updated": "2023-02-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2201.13425v3", + "title": "Don't Change the Algorithm, Change the Data: Exploratory Data for Offline Reinforcement Learning", + "abstract": "Recent progress in deep learning has relied on access to large and diverse\ndatasets. Such data-driven progress has been less evident in offline\nreinforcement learning (RL), because offline RL data is usually collected to\noptimize specific target tasks limiting the data's diversity. In this work, we\npropose Exploratory data for Offline RL (ExORL), a data-centric approach to\noffline RL. ExORL first generates data with unsupervised reward-free\nexploration, then relabels this data with a downstream reward before training a\npolicy with offline RL. 
We find that exploratory data allows vanilla off-policy\nRL algorithms, without any offline-specific modifications, to outperform or\nmatch state-of-the-art offline RL algorithms on downstream tasks. Our findings\nsuggest that data generation is as important as algorithmic advances for\noffline RL and hence requires careful consideration from the community. Code\nand data can be found at https://github.com/denisyarats/exorl .", + "authors": "Denis Yarats, David Brandfonbrener, Hao Liu, Michael Laskin, Pieter Abbeel, Alessandro Lazaric, Lerrel Pinto", + "published": "2022-01-31", + "updated": "2022-04-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2202.11566v1", + "title": "Pessimistic Bootstrapping for Uncertainty-Driven Offline Reinforcement Learning", + "abstract": "Offline Reinforcement Learning (RL) aims to learn policies from previously\ncollected datasets without exploring the environment. Directly applying\noff-policy algorithms to offline RL usually fails due to the extrapolation\nerror caused by the out-of-distribution (OOD) actions. Previous methods tackle\nsuch problem by penalizing the Q-values of OOD actions or constraining the\ntrained policy to be close to the behavior policy. Nevertheless, such methods\ntypically prevent the generalization of value functions beyond the offline data\nand also lack precise characterization of OOD data. In this paper, we propose\nPessimistic Bootstrapping for offline RL (PBRL), a purely uncertainty-driven\noffline algorithm without explicit policy constraints. Specifically, PBRL\nconducts uncertainty quantification via the disagreement of bootstrapped\nQ-functions, and performs pessimistic updates by penalizing the value function\nbased on the estimated uncertainty. To tackle the extrapolating error, we\nfurther propose a novel OOD sampling method. We show that such OOD sampling and\npessimistic bootstrapping yields provable uncertainty quantifier in linear\nMDPs, thus providing the theoretical underpinning for PBRL. Extensive\nexperiments on D4RL benchmark show that PBRL has better performance compared to\nthe state-of-the-art algorithms.", + "authors": "Chenjia Bai, Lingxiao Wang, Zhuoran Yang, Zhihong Deng, Animesh Garg, Peng Liu, Zhaoran Wang", + "published": "2022-02-23", + "updated": "2022-02-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2010.11895v1", + "title": "What are the Statistical Limits of Offline RL with Linear Function Approximation?", + "abstract": "Offline reinforcement learning seeks to utilize offline (observational) data\nto guide the learning of (causal) sequential decision making strategies. The\nhope is that offline reinforcement learning coupled with function approximation\nmethods (to deal with the curse of dimensionality) can provide a means to help\nalleviate the excessive sample complexity burden in modern sequential decision\nmaking problems. However, the extent to which this broader approach can be\neffective is not well understood, where the literature largely consists of\nsufficient conditions.\n This work focuses on the basic question of what are necessary\nrepresentational and distributional conditions that permit provable\nsample-efficient offline reinforcement learning. 
Perhaps surprisingly, our main\nresult shows that even if: i) we have realizability in that the true value\nfunction of \\emph{every} policy is linear in a given set of features and 2) our\noff-policy data has good coverage over all features (under a strong spectral\ncondition), then any algorithm still (information-theoretically) requires a\nnumber of offline samples that is exponential in the problem horizon in order\nto non-trivially estimate the value of \\emph{any} given policy. Our results\nhighlight that sample-efficient offline policy evaluation is simply not\npossible unless significantly stronger conditions hold; such conditions include\neither having low distribution shift (where the offline data distribution is\nclose to the distribution of the policy to be evaluated) or significantly\nstronger representational conditions (beyond realizability).", + "authors": "Ruosong Wang, Dean P. Foster, Sham M. Kakade", + "published": "2020-10-22", + "updated": "2020-10-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "math.OC", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2106.09119v2", + "title": "Behavioral Priors and Dynamics Models: Improving Performance and Domain Transfer in Offline RL", + "abstract": "Offline Reinforcement Learning (RL) aims to extract near-optimal policies\nfrom imperfect offline data without additional environment interactions.\nExtracting policies from diverse offline datasets has the potential to expand\nthe range of applicability of RL by making the training process safer, faster,\nand more streamlined. We investigate how to improve the performance of offline\nRL algorithms, its robustness to the quality of offline data, as well as its\ngeneralization capabilities. To this end, we introduce Offline Model-based RL\nwith Adaptive Behavioral Priors (MABE). Our algorithm is based on the finding\nthat dynamics models, which support within-domain generalization, and\nbehavioral priors, which support cross-domain generalization, are\ncomplementary. When combined together, they substantially improve the\nperformance and generalization of offline RL policies. In the widely studied\nD4RL offline RL benchmark, we find that MABE achieves higher average\nperformance compared to prior model-free and model-based algorithms. In\nexperiments that require cross-domain generalization, we find that MABE\noutperforms prior methods. Our website is available at\nhttps://sites.google.com/berkeley.edu/mabe .", + "authors": "Catherine Cang, Aravind Rajeswaran, Pieter Abbeel, Michael Laskin", + "published": "2021-06-16", + "updated": "2021-06-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.03788v2", + "title": "d3rlpy: An Offline Deep Reinforcement Learning Library", + "abstract": "In this paper, we introduce d3rlpy, an open-sourced offline deep\nreinforcement learning (RL) library for Python. d3rlpy supports a set of\noffline deep RL algorithms as well as off-policy online algorithms via a fully\ndocumented plug-and-play API. To address a reproducibility issue, we conduct a\nlarge-scale benchmark with D4RL and Atari 2600 dataset to ensure implementation\nquality and provide experimental scripts and full tables of results. 
The d3rlpy\nsource code can be found on GitHub: \\url{https://github.com/takuseno/d3rlpy}.", + "authors": "Takuma Seno, Michita Imai", + "published": "2021-11-06", + "updated": "2022-12-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.08569v2", + "title": "Bootstrapped Transformer for Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) aims at learning policies from previously\ncollected static trajectory data without interacting with the real environment.\nRecent works provide a novel perspective by viewing offline RL as a generic\nsequence generation problem, adopting sequence models such as Transformer\narchitecture to model distributions over trajectories, and repurposing beam\nsearch as a planning algorithm. However, the training datasets utilized in\ngeneral offline RL tasks are quite limited and often suffer from insufficient\ndistribution coverage, which could be harmful to training sequence generation\nmodels yet has not drawn enough attention in the previous works. In this paper,\nwe propose a novel algorithm named Bootstrapped Transformer, which incorporates\nthe idea of bootstrapping and leverages the learned model to self-generate more\noffline data to further boost the sequence model training. We conduct extensive\nexperiments on two offline RL benchmarks and demonstrate that our model can\nlargely remedy the existing offline RL training limitations and beat other\nstrong baseline methods. We also analyze the generated pseudo data and the\nrevealed characteristics may shed some light on offline RL training. The codes\nare available at https://seqml.github.io/bootorl.", + "authors": "Kerong Wang, Hanye Zhao, Xufang Luo, Kan Ren, Weinan Zhang, Dongsheng Li", + "published": "2022-06-17", + "updated": "2022-10-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2404.02429v1", + "title": "AD4RL: Autonomous Driving Benchmarks for Offline Reinforcement Learning with Value-based Dataset", + "abstract": "Offline reinforcement learning has emerged as a promising technology by\nenhancing its practicality through the use of pre-collected large datasets.\nDespite its practical benefits, most algorithm development research in offline\nreinforcement learning still relies on game tasks with synthetic datasets. To\naddress such limitations, this paper provides autonomous driving datasets and\nbenchmarks for offline reinforcement learning research. We provide 19 datasets,\nincluding real-world human driver's datasets, and seven popular offline\nreinforcement learning algorithms in three realistic driving scenarios. We also\nprovide a unified decision-making process model that can operate effectively\nacross different scenarios, serving as a reference framework in algorithm\ndesign. Our research lays the groundwork for further collaborations in the\ncommunity to explore practical aspects of existing reinforcement learning\nmethods. 
Dataset and codes can be found in https://sites.google.com/view/ad4rl.", + "authors": "Dongsu Lee, Chanin Eom, Minhae Kwon", + "published": "2024-04-03", + "updated": "2024-04-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.13777v4", + "title": "Deep Generative Models for Offline Policy Learning: Tutorial, Survey, and Perspectives on Future Directions", + "abstract": "Deep generative models (DGMs) have demonstrated great success across various\ndomains, particularly in generating texts, images, and videos using models\ntrained from offline data. Similarly, data-driven decision-making and robotic\ncontrol also necessitate learning a generator function from the offline data to\nserve as the strategy or policy. In this case, applying deep generative models\nin offline policy learning exhibits great potential, and numerous studies have\nexplored in this direction. However, this field still lacks a comprehensive\nreview and so developments of different branches are relatively independent.\nThus, we provide the first systematic review on the applications of deep\ngenerative models for offline policy learning. In particular, we cover five\nmainstream deep generative models, including Variational Auto-Encoders,\nGenerative Adversarial Networks, Normalizing Flows, Transformers, and Diffusion\nModels, and their applications in both offline reinforcement learning (offline\nRL) and imitation learning (IL). Offline RL and IL are two main branches of\noffline policy learning and are widely-adopted techniques for sequential\ndecision-making. Specifically, for each type of DGM-based offline policy\nlearning, we distill its fundamental scheme, categorize related works based on\nthe usage of the DGM, and sort out the development process of algorithms in\nthat field. Subsequent to the main content, we provide in-depth discussions on\ndeep generative models and offline policy learning as a summary, based on which\nwe present our perspectives on future research directions. This work offers a\nhands-on reference for the research progress in deep generative models for\noffline policy learning, and aims to inspire improved DGM-based offline RL or\nIL algorithms. For convenience, we maintain a paper list on\nhttps://github.com/LucasCJYSDL/DGMs-for-Offline-Policy-Learning.", + "authors": "Jiayu Chen, Bhargav Ganguly, Yang Xu, Yongsheng Mei, Tian Lan, Vaneet Aggarwal", + "published": "2024-02-21", + "updated": "2024-02-26", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2310.08566v1", + "title": "Transformers as Decision Makers: Provable In-Context Reinforcement Learning via Supervised Pretraining", + "abstract": "Large transformer models pretrained on offline reinforcement learning\ndatasets have demonstrated remarkable in-context reinforcement learning (ICRL)\ncapabilities, where they can make good decisions when prompted with interaction\ntrajectories from unseen environments. However, when and how transformers can\nbe trained to perform ICRL have not been theoretically well-understood. In\nparticular, it is unclear which reinforcement-learning algorithms transformers\ncan perform in context, and how distribution mismatch in offline training data\naffects the learned algorithms. This paper provides a theoretical framework\nthat analyzes supervised pretraining for ICRL. 
This includes two recently\nproposed training methods -- algorithm distillation and decision-pretrained\ntransformers. First, assuming model realizability, we prove the\nsupervised-pretrained transformer will imitate the conditional expectation of\nthe expert algorithm given the observed trajectory. The generalization error\nwill scale with model capacity and a distribution divergence factor between the\nexpert and offline algorithms. Second, we show transformers with ReLU attention\ncan efficiently approximate near-optimal online reinforcement learning\nalgorithms like LinUCB and Thompson sampling for stochastic linear bandits, and\nUCB-VI for tabular Markov decision processes. This provides the first\nquantitative analysis of the ICRL capabilities of transformers pretrained from\noffline trajectories.", + "authors": "Licong Lin, Yu Bai, Song Mei", + "published": "2023-10-12", + "updated": "2023-10-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL", + "math.ST", + "stat.ML", + "stat.TH" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.07344v1", + "title": "Measurement Scheduling for ICU Patients with Offline Reinforcement Learning", + "abstract": "Scheduling laboratory tests for ICU patients presents a significant\nchallenge. Studies show that 20-40% of lab tests ordered in the ICU are\nredundant and could be eliminated without compromising patient safety. Prior\nwork has leveraged offline reinforcement learning (Offline-RL) to find optimal\npolicies for ordering lab tests based on patient information. However, new ICU\npatient datasets have since been released, and various advancements have been\nmade in Offline-RL methods. In this study, we first introduce a preprocessing\npipeline for the newly-released MIMIC-IV dataset geared toward time-series\ntasks. We then explore the efficacy of state-of-the-art Offline-RL methods in\nidentifying better policies for ICU patient lab test scheduling. Besides\nassessing methodological performance, we also discuss the overall suitability\nand practicality of using Offline-RL frameworks for scheduling laboratory tests\nin ICU settings.", + "authors": "Zongliang Ji, Anna Goldenberg, Rahul G. Krishnan", + "published": "2024-02-12", + "updated": "2024-02-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2112.02845v3", + "title": "Offline Pre-trained Multi-Agent Decision Transformer: One Big Sequence Model Tackles All SMAC Tasks", + "abstract": "Offline reinforcement learning leverages previously-collected offline\ndatasets to learn optimal policies with no necessity to access the real\nenvironment. Such a paradigm is also desirable for multi-agent reinforcement\nlearning (MARL) tasks, given the increased interactions among agents and with\nthe enviroment. Yet, in MARL, the paradigm of offline pre-training with online\nfine-tuning has not been studied, nor datasets or benchmarks for offline MARL\nresearch are available. In this paper, we facilitate the research by providing\nlarge-scale datasets, and use them to examine the usage of the Decision\nTransformer in the context of MARL. We investigate the generalisation of MARL\noffline pre-training in the following three aspects: 1) between single agents\nand multiple agents, 2) from offline pretraining to the online fine-tuning, and\n3) to that of multiple downstream tasks with few-shot and zero-shot\ncapabilities. 
We start by introducing the first offline MARL dataset with\ndiverse quality levels based on the StarCraftII environment, and then propose\nthe novel architecture of multi-agent decision transformer (MADT) for effective\noffline learning. MADT leverages transformer's modelling ability of sequence\nmodelling and integrates it seamlessly with both offline and online MARL tasks.\nA crucial benefit of MADT is that it learns generalisable policies that can\ntransfer between different types of agents under different task scenarios. On the\nStarCraft II offline dataset, MADT outperforms the state-of-the-art offline RL\nbaselines. When applied to online tasks, the pre-trained MADT significantly\nimproves sample efficiency, and enjoys strong performance in both few-shot and\nzero-shot cases. To our best knowledge, this is the first work that studies and\ndemonstrates the effectiveness of offline pre-trained models in terms of sample\nefficiency and generalisability enhancements in MARL.", + "authors": "Linghui Meng, Muning Wen, Yaodong Yang, Chenyang Le, Xiyun Li, Weinan Zhang, Ying Wen, Haifeng Zhang, Jun Wang, Bo Xu", + "published": "2021-12-06", + "updated": "2022-06-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.MA" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2011.14379v1", + "title": "Offline Reinforcement Learning Hands-On", + "abstract": "Offline Reinforcement Learning (RL) aims to turn large datasets into powerful\ndecision-making engines without any online interactions with the environment.\nThis great promise has motivated a large amount of research that hopes to\nreplicate the success RL has experienced in simulation settings. This work\nambitions to reflect upon these efforts from a practitioner viewpoint. We start\nby discussing the dataset properties that we hypothesise can characterise the\ntype of offline methods that will be the most successful. We then verify these\nclaims through a set of experiments and designed datasets generated from\nenvironments with both discrete and continuous action spaces. We experimentally\nvalidate that diversity and high-return examples in the data are crucial to the\nsuccess of offline RL and show that behavioural cloning remains a strong\ncontender compared to its contemporaries. Overall, this work stands as a\ntutorial to help people build their intuition on today's offline RL methods and\ntheir applicability.", + "authors": "Louis Monier, Jakub Kmec, Alexandre Laterre, Thomas Pierrot, Valentin Courgeau, Olivier Sigaud, Karim Beguir", + "published": "2020-11-29", + "updated": "2020-11-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2104.06294v1", + "title": "Online and Offline Reinforcement Learning by Planning with a Learned Model", + "abstract": "Learning efficiently from small amounts of data has long been the focus of\nmodel-based reinforcement learning, both for the online case when interacting\nwith the environment and the offline case when learning from a fixed dataset.\nHowever, to date no single unified algorithm could demonstrate state-of-the-art\nresults in both settings. In this work, we describe the Reanalyse algorithm\nwhich uses model-based policy and value improvement operators to compute new\nimproved training targets on existing data points, allowing efficient learning\nfor data budgets varying by several orders of magnitude.
We further show that\nReanalyse can also be used to learn entirely from demonstrations without any\nenvironment interactions, as in the case of offline Reinforcement Learning\n(offline RL). Combining Reanalyse with the MuZero algorithm, we introduce\nMuZero Unplugged, a single unified algorithm for any data budget, including\noffline RL. In contrast to previous work, our algorithm does not require any\nspecial adaptations for the off-policy or offline RL settings. MuZero Unplugged\nsets new state-of-the-art results in the RL Unplugged offline RL benchmark as\nwell as in the online RL benchmark of Atari in the standard 200 million frame\nsetting.", + "authors": "Julian Schrittwieser, Thomas Hubert, Amol Mandhane, Mohammadamin Barekatain, Ioannis Antonoglou, David Silver", + "published": "2021-04-13", + "updated": "2021-04-13", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2008.06043v3", + "title": "Offline Meta-Reinforcement Learning with Advantage Weighting", + "abstract": "This paper introduces the offline meta-reinforcement learning (offline\nmeta-RL) problem setting and proposes an algorithm that performs well in this\nsetting. Offline meta-RL is analogous to the widely successful supervised\nlearning strategy of pre-training a model on a large batch of fixed,\npre-collected data (possibly from various tasks) and fine-tuning the model to a\nnew task with relatively little data. That is, in offline meta-RL, we\nmeta-train on fixed, pre-collected data from several tasks in order to adapt to\na new task with a very small amount (less than 5 trajectories) of data from the\nnew task. By nature of being offline, algorithms for offline meta-RL can\nutilize the largest possible pool of training data available and eliminate\npotentially unsafe or costly data collection during meta-training. This setting\ninherits the challenges of offline RL, but it differs significantly because\noffline RL does not generally consider a) transfer to new tasks or b) limited\ndata from the test task, both of which we face in offline meta-RL. Targeting\nthe offline meta-RL setting, we propose Meta-Actor Critic with Advantage\nWeighting (MACAW), an optimization-based meta-learning algorithm that uses\nsimple, supervised regression objectives for both the inner and outer loop of\nmeta-training. On offline variants of common meta-RL benchmarks, we empirically\nfind that this approach enables fully offline meta-reinforcement learning and\nachieves notable gains over prior methods.", + "authors": "Eric Mitchell, Rafael Rafailov, Xue Bin Peng, Sergey Levine, Chelsea Finn", + "published": "2020-08-13", + "updated": "2021-07-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.09712v1", + "title": "Semi-Offline Reinforcement Learning for Optimized Text Generation", + "abstract": "In reinforcement learning (RL), there are two major settings for interacting\nwith the environment: online and offline. Online methods explore the\nenvironment at significant time cost, and offline methods efficiently obtain\nreward signals by sacrificing exploration capability. We propose semi-offline\nRL, a novel paradigm that smoothly transits from offline to online settings,\nbalances exploration capability and training cost, and provides a theoretical\nfoundation for comparing different RL settings. 
Based on the semi-offline\nformulation, we present the RL setting that is optimal in terms of optimization\ncost, asymptotic error, and overfitting error bound. Extensive experiments show\nthat our semi-offline approach is efficient and yields comparable or often\nbetter performance compared with state-of-the-art methods.", + "authors": "Changyu Chen, Xiting Wang, Yiqiao Jin, Victor Ye Dong, Li Dong, Jie Cao, Yi Liu, Rui Yan", + "published": "2023-06-16", + "updated": "2023-06-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2106.06860v2", + "title": "A Minimalist Approach to Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) defines the task of learning from a fixed\nbatch of data. Due to errors in value estimation from out-of-distribution\nactions, most offline RL algorithms take the approach of constraining or\nregularizing the policy with the actions contained in the dataset. Built on\npre-existing RL algorithms, modifications to make an RL algorithm work offline\ncomes at the cost of additional complexity. Offline RL algorithms introduce new\nhyperparameters and often leverage secondary components such as generative\nmodels, while adjusting the underlying RL algorithm. In this paper we aim to\nmake a deep RL algorithm work while making minimal changes. We find that we can\nmatch the performance of state-of-the-art offline RL algorithms by simply\nadding a behavior cloning term to the policy update of an online RL algorithm\nand normalizing the data. The resulting algorithm is a simple to implement and\ntune baseline, while more than halving the overall run time by removing the\nadditional computational overhead of previous methods.", + "authors": "Scott Fujimoto, Shixiang Shane Gu", + "published": "2021-06-12", + "updated": "2021-12-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.05546v1", + "title": "Offline Actor-Critic Reinforcement Learning Scales to Large Models", + "abstract": "We show that offline actor-critic reinforcement learning can scale to large\nmodels - such as transformers - and follows similar scaling laws as supervised\nlearning. We find that offline actor-critic algorithms can outperform strong,\nsupervised, behavioral cloning baselines for multi-task training on a large\ndataset containing both sub-optimal and expert behavior on 132 continuous\ncontrol tasks. We introduce a Perceiver-based actor-critic model and elucidate\nthe key model features needed to make offline RL work with self- and\ncross-attention modules. 
Overall, we find that: i) simple offline actor critic\nalgorithms are a natural choice for gradually moving away from the currently\npredominant paradigm of behavioral cloning, and ii) via offline RL it is\npossible to learn multi-task policies that master many domains simultaneously,\nincluding real robotics tasks, from sub-optimal demonstrations or\nself-generated data.", + "authors": "Jost Tobias Springenberg, Abbas Abdolmaleki, Jingwei Zhang, Oliver Groth, Michael Bloesch, Thomas Lampe, Philemon Brakel, Sarah Bechtle, Steven Kapturowski, Roland Hafner, Nicolas Heess, Martin Riedmiller", + "published": "2024-02-08", + "updated": "2024-02-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2312.05742v2", + "title": "The Generalization Gap in Offline Reinforcement Learning", + "abstract": "Despite recent progress in offline learning, these methods are still trained\nand tested on the same environment. In this paper, we compare the\ngeneralization abilities of widely used online and offline learning methods\nsuch as online reinforcement learning (RL), offline RL, sequence modeling, and\nbehavioral cloning. Our experiments show that offline learning algorithms\nperform worse on new environments than online learning ones. We also introduce\nthe first benchmark for evaluating generalization in offline learning,\ncollecting datasets of varying sizes and skill-levels from Procgen (2D video\ngames) and WebShop (e-commerce websites). The datasets contain trajectories for\na limited number of game levels or natural language instructions and at test\ntime, the agent has to generalize to new levels or instructions. Our\nexperiments reveal that existing offline learning algorithms struggle to match\nthe performance of online RL on both train and test environments. Behavioral\ncloning is a strong baseline, outperforming state-of-the-art offline RL and\nsequence modeling approaches when trained on data from multiple environments\nand tested on new ones. Finally, we find that increasing the diversity of the\ndata, rather than its size, improves performance on new environments for all\noffline learning algorithms. Our study demonstrates the limited generalization\nof current offline learning algorithms highlighting the need for more research\nin this area.", + "authors": "Ishita Mediratta, Qingfei You, Minqi Jiang, Roberta Raileanu", + "published": "2023-12-10", + "updated": "2024-03-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2005.11142v1", + "title": "Two-stage Deep Reinforcement Learning for Inverter-based Volt-VAR Control in Active Distribution Networks", + "abstract": "Model-based Vol/VAR optimization method is widely used to eliminate voltage\nviolations and reduce network losses. However, the parameters of active\ndistribution networks(ADNs) are not onsite identified, so significant errors\nmay be involved in the model and make the model-based method infeasible. To\ncope with this critical issue, we propose a novel two-stage deep reinforcement\nlearning (DRL) method to improve the voltage profile by regulating\ninverter-based energy resources, which consists of offline stage and online\nstage. In the offline stage, a highly efficient adversarial reinforcement\nlearning algorithm is developed to train an offline agent robust to the model\nmismatch. 
In the sequential online stage, we transfer the offline agent safely\nas the online agent to perform continuous learning and controlling online with\nsignificantly improved safety and efficiency. Numerical simulations on IEEE\ntest cases not only demonstrate that the proposed adversarial reinforcement\nlearning algorithm outperforms the state-of-the-art algorithm, but also show that\nour proposed two-stage method achieves much better performance than the\nexisting DRL based methods in the online application.", + "authors": "Haotian Liu, Wenchuan Wu", + "published": "2020-05-20", + "updated": "2020-05-20", + "primary_cat": "eess.SY", + "cats": [ + "eess.SY", + "cs.LG", + "cs.SY", + "J.7; C.3" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2211.08251v1", + "title": "Offline Reinforcement Learning with Adaptive Behavior Regularization", + "abstract": "Offline reinforcement learning (RL) defines a sample-efficient learning\nparadigm, where a policy is learned from static and previously collected\ndatasets without additional interaction with the environment. The major\nobstacle to offline RL is the estimation error arising from evaluating the\nvalue of out-of-distribution actions. To tackle this problem, most existing\noffline RL methods attempt to acquire a policy both ``close\" to the behaviors\ncontained in the dataset and sufficiently improved over them, which requires a\ntrade-off between two possibly conflicting targets. In this paper, we propose a\nnovel approach, which we refer to as adaptive behavior regularization (ABR), to\nbalance this critical trade-off. By simply utilizing a sample-based\nregularization, ABR enables the policy to adaptively adjust its optimization\nobjective between cloning and improving over the policy used to generate the\ndataset. In the evaluation on D4RL datasets, a widely adopted benchmark for\noffline reinforcement learning, ABR can achieve improved or competitive\nperformance compared to existing state-of-the-art algorithms.", + "authors": "Yunfan Zhou, Xijun Li, Qingyu Qu", + "published": "2022-11-15", + "updated": "2022-11-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2202.02929v2", + "title": "Model-Based Offline Meta-Reinforcement Learning with Regularization", + "abstract": "Existing offline reinforcement learning (RL) methods face a few major\nchallenges, particularly the distributional shift between the learned policy\nand the behavior policy. Offline Meta-RL is emerging as a promising approach to\naddress these challenges, aiming to learn an informative meta-policy from a\ncollection of tasks. Nevertheless, as shown in our empirical studies, offline\nMeta-RL could be outperformed by offline single-task RL methods on tasks with\ngood quality of datasets, indicating that a right balance has to be delicately\ncalibrated between \"exploring\" the out-of-distribution state-actions by\nfollowing the meta-policy and \"exploiting\" the offline dataset by staying close\nto the behavior policy. Motivated by such empirical analysis, we explore\nmodel-based offline Meta-RL with regularized Policy Optimization (MerPO), which\nlearns a meta-model for efficient task structure inference and an informative\nmeta-policy for safe exploration of out-of-distribution state-actions.
In\nparticular, we devise a new meta-Regularized model-based Actor-Critic (RAC)\nmethod for within-task policy optimization, as a key building block of MerPO,\nusing conservative policy evaluation and regularized policy improvement; and\nthe intrinsic tradeoff therein is achieved via striking the right balance\nbetween two regularizers, one based on the behavior policy and the other on the\nmeta-policy. We theoretically show that the learnt policy offers guaranteed\nimprovement over both the behavior policy and the meta-policy, thus ensuring\nthe performance improvement on new tasks via offline Meta-RL. Experiments\ncorroborate the superior performance of MerPO over existing offline Meta-RL\nmethods.", + "authors": "Sen Lin, Jialin Wan, Tengyu Xu, Yingbin Liang, Junshan Zhang", + "published": "2022-02-07", + "updated": "2022-07-13", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2203.05804v1", + "title": "Near-optimal Offline Reinforcement Learning with Linear Representation: Leveraging Variance Information with Pessimism", + "abstract": "Offline reinforcement learning, which seeks to utilize offline/historical\ndata to optimize sequential decision-making strategies, has gained surging\nprominence in recent studies. Due to the advantage that appropriate function\napproximators can help mitigate the sample complexity burden in modern\nreinforcement learning problems, existing endeavors usually enforce powerful\nfunction representation models (e.g. neural networks) to learn the optimal\npolicies. However, a precise understanding of the statistical limits with\nfunction representations, remains elusive, even when such a representation is\nlinear.\n Towards this goal, we study the statistical limits of offline reinforcement\nlearning with linear model representations. To derive the tight offline\nlearning bound, we design the variance-aware pessimistic value iteration\n(VAPVI), which adopts the conditional variance information of the value\nfunction for time-inhomogeneous episodic linear Markov decision processes\n(MDPs). VAPVI leverages estimated variances of the value functions to reweight\nthe Bellman residuals in the least-square pessimistic value iteration and\nprovides improved offline learning bounds over the best-known existing results\n(whereas the Bellman residuals are equally weighted by design). More\nimportantly, our learning bounds are expressed in terms of system quantities,\nwhich provide natural instance-dependent characterizations that previous\nresults are short of. We hope our results draw a clearer picture of what\noffline learning should look like when linear representations are provided.", + "authors": "Ming Yin, Yaqi Duan, Mengdi Wang, Yu-Xiang Wang", + "published": "2022-03-11", + "updated": "2022-03-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.14629v1", + "title": "Improving Zero-shot Generalization in Offline Reinforcement Learning using Generalized Similarity Functions", + "abstract": "Reinforcement learning (RL) agents are widely used for solving complex\nsequential decision making tasks, but still exhibit difficulty in generalizing\nto scenarios not seen during training. 
While prior online approaches\ndemonstrated that using additional signals beyond the reward function can lead\nto better generalization capabilities in RL agents, i.e. using self-supervised\nlearning (SSL), they struggle in the offline RL setting, i.e. learning from a\nstatic dataset. We show that performance of online algorithms for\ngeneralization in RL can be hindered in the offline setting due to poor\nestimation of similarity between observations. We propose a new\ntheoretically-motivated framework called Generalized Similarity Functions\n(GSF), which uses contrastive learning to train an offline RL agent to\naggregate observations based on the similarity of their expected future\nbehavior, where we quantify this similarity using \\emph{generalized value\nfunctions}. We show that GSF is general enough to recover existing SSL\nobjectives while also improving zero-shot generalization performance on a\ncomplex offline RL benchmark, offline Procgen.", + "authors": "Bogdan Mazoure, Ilya Kostrikov, Ofir Nachum, Jonathan Tompson", + "published": "2021-11-29", + "updated": "2021-11-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.06734v1", + "title": "Corruption Robust Offline Reinforcement Learning with Human Feedback", + "abstract": "We study data corruption robustness for reinforcement learning with human\nfeedback (RLHF) in an offline setting. Given an offline dataset of pairs of\ntrajectories along with feedback about human preferences, an\n$\\varepsilon$-fraction of the pairs is corrupted (e.g., feedback flipped or\ntrajectory features manipulated), capturing an adversarial attack or noisy\nhuman preferences. We aim to design algorithms that identify a near-optimal\npolicy from the corrupted data, with provable guarantees. Existing theoretical\nworks have separately studied the settings of corruption robust RL (learning\nfrom scalar rewards directly under corruption) and offline RLHF (learning from\nhuman feedback without corruption); however, they are inapplicable to our\nproblem of dealing with corrupted data in offline RLHF setting. To this end, we\ndesign novel corruption robust offline RLHF methods under various assumptions\non the coverage of the data-generating distributions. At a high level, our\nmethodology robustifies an offline RLHF framework by first learning a reward\nmodel along with confidence sets and then learning a pessimistic optimal policy\nover the confidence set. Our key insight is that learning optimal policy can be\ndone by leveraging an offline corruption-robust RL oracle in different ways\n(e.g., zero-order oracle or first-order oracle), depending on the data coverage\nassumptions. To our knowledge, ours is the first work that provides provable\ncorruption robust offline RLHF methods.", + "authors": "Debmalya Mandal, Andi Nika, Parameswaran Kamalaruban, Adish Singla, Goran Radanovi\u0107", + "published": "2024-02-09", + "updated": "2024-02-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2309.12716v1", + "title": "H2O+: An Improved Framework for Hybrid Offline-and-Online RL with Dynamics Gaps", + "abstract": "Solving real-world complex tasks using reinforcement learning (RL) without\nhigh-fidelity simulation environments or large amounts of offline data can be\nquite challenging. 
Online RL agents trained in imperfect simulation\nenvironments can suffer from severe sim-to-real issues. Offline RL approaches\nalthough bypass the need for simulators, often pose demanding requirements on\nthe size and quality of the offline datasets. The recently emerged hybrid\noffline-and-online RL provides an attractive framework that enables joint use\nof limited offline data and imperfect simulator for transferable policy\nlearning. In this paper, we develop a new algorithm, called H2O+, which offers\ngreat flexibility to bridge various choices of offline and online learning\nmethods, while also accounting for dynamics gaps between the real and\nsimulation environment. Through extensive simulation and real-world robotics\nexperiments, we demonstrate superior performance and flexibility over advanced\ncross-domain online and offline RL algorithms.", + "authors": "Haoyi Niu, Tianying Ji, Bingqi Liu, Haocheng Zhao, Xiangyu Zhu, Jianying Zheng, Pengfei Huang, Guyue Zhou, Jianming Hu, Xianyuan Zhan", + "published": "2023-09-22", + "updated": "2023-09-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2302.13493v1", + "title": "The Provable Benefits of Unsupervised Data Sharing for Offline Reinforcement Learning", + "abstract": "Self-supervised methods have become crucial for advancing deep learning by\nleveraging data itself to reduce the need for expensive annotations. However,\nthe question of how to conduct self-supervised offline reinforcement learning\n(RL) in a principled way remains unclear. In this paper, we address this issue\nby investigating the theoretical benefits of utilizing reward-free data in\nlinear Markov Decision Processes (MDPs) within a semi-supervised setting.\n Further, we propose a novel, Provable Data Sharing algorithm (PDS) to utilize\nsuch reward-free data for offline RL. PDS uses additional penalties on the\nreward function learned from labeled data to prevent overestimation, ensuring a\nconservative algorithm. Our results on various offline RL tasks demonstrate\nthat PDS significantly improves the performance of offline RL algorithms with\nreward-free data. Overall, our work provides a promising approach to leveraging\nthe benefits of unlabeled data in offline RL while maintaining theoretical\nguarantees. We believe our findings will contribute to developing more robust\nself-supervised RL methods.", + "authors": "Hao Hu, Yiqin Yang, Qianchuan Zhao, Chongjie Zhang", + "published": "2023-02-27", + "updated": "2023-02-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2403.11574v1", + "title": "Offline Multitask Representation Learning for Reinforcement Learning", + "abstract": "We study offline multitask representation learning in reinforcement learning\n(RL), where a learner is provided with an offline dataset from different tasks\nthat share a common representation and is asked to learn the shared\nrepresentation. We theoretically investigate offline multitask low-rank RL, and\npropose a new algorithm called MORL for offline multitask representation\nlearning. Furthermore, we examine downstream RL in reward-free, offline and\nonline scenarios, where a new task is introduced to the agent that shares the\nsame representation as the upstream offline tasks. 
Our theoretical results\ndemonstrate the benefits of using the learned representation from the upstream\noffline task instead of directly learning the representation of the low-rank\nmodel.", + "authors": "Haque Ishfaq, Thanh Nguyen-Tang, Songtao Feng, Raman Arora, Mengdi Wang, Ming Yin, Doina Precup", + "published": "2024-03-18", + "updated": "2024-03-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2011.13885v1", + "title": "Offline Learning from Demonstrations and Unlabeled Experience", + "abstract": "Behavior cloning (BC) is often practical for robot learning because it allows\na policy to be trained offline without rewards, by supervised learning on\nexpert demonstrations. However, BC does not effectively leverage what we will\nrefer to as unlabeled experience: data of mixed and unknown quality without\nreward annotations. This unlabeled data can be generated by a variety of\nsources such as human teleoperation, scripted policies and other agents on the\nsame robot. Towards data-driven offline robot learning that can use this\nunlabeled experience, we introduce Offline Reinforced Imitation Learning\n(ORIL). ORIL first learns a reward function by contrasting observations from\ndemonstrator and unlabeled trajectories, then annotates all data with the\nlearned reward, and finally trains an agent via offline reinforcement learning.\nAcross a diverse set of continuous control and simulated robotic manipulation\ntasks, we show that ORIL consistently outperforms comparable BC agents by\neffectively leveraging unlabeled experience.", + "authors": "Konrad Zolna, Alexander Novikov, Ksenia Konyushkova, Caglar Gulcehre, Ziyu Wang, Yusuf Aytar, Misha Denil, Nando de Freitas, Scott Reed", + "published": "2020-11-27", + "updated": "2020-11-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.13630v1", + "title": "Offline Skill Graph (OSG): A Framework for Learning and Planning using Offline Reinforcement Learning Skills", + "abstract": "Reinforcement Learning has received wide interest due to its success in\ncompetitive games. Yet, its adoption in everyday applications is limited (e.g.\nindustrial, home, healthcare, etc.). In this paper, we address this limitation\nby presenting a framework for planning over offline skills and solving complex\ntasks in real-world environments. Our framework is comprised of three modules\nthat together enable the agent to learn from previously collected data and\ngeneralize over it to solve long-horizon tasks. We demonstrate our approach by\ntesting it on a robotic arm that is required to solve complex tasks.", + "authors": "Ben-ya Halevy, Yehudit Aperstein, Dotan Di Castro", + "published": "2023-06-23", + "updated": "2023-06-23", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.AI", + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2110.09796v1", + "title": "Offline Reinforcement Learning with Value-based Episodic Memory", + "abstract": "Offline reinforcement learning (RL) shows promise of applying RL to\nreal-world problems by effectively utilizing previously collected data. Most\nexisting offline RL algorithms use regularization or constraints to suppress\nextrapolation error for actions outside the dataset. 
In this paper, we adopt a\ndifferent framework, which learns the V-function instead of the Q-function to\nnaturally keep the learning procedure within the support of an offline dataset.\nTo enable effective generalization while maintaining proper conservatism in\noffline learning, we propose Expectile V-Learning (EVL), which smoothly\ninterpolates between the optimal value learning and behavior cloning. Further,\nwe introduce implicit planning along offline trajectories to enhance learned\nV-values and accelerate convergence. Together, we present a new offline method\ncalled Value-based Episodic Memory (VEM). We provide theoretical analysis for\nthe convergence properties of our proposed VEM method, and empirical results in\nthe D4RL benchmark show that our method achieves superior performance in most\ntasks, particularly in sparse-reward tasks.", + "authors": "Xiaoteng Ma, Yiqin Yang, Hao Hu, Qihan Liu, Jun Yang, Chongjie Zhang, Qianchuan Zhao, Bin Liang", + "published": "2021-10-19", + "updated": "2021-10-19", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2302.00935v3", + "title": "Policy Expansion for Bridging Offline-to-Online Reinforcement Learning", + "abstract": "Pre-training with offline data and online fine-tuning using reinforcement\nlearning is a promising strategy for learning control policies by leveraging\nthe best of both worlds in terms of sample efficiency and performance. One\nnatural approach is to initialize the policy for online learning with the one\ntrained offline. In this work, we introduce a policy expansion scheme for this\ntask. After learning the offline policy, we use it as one candidate policy in a\npolicy set. We then expand the policy set with another policy which will be\nresponsible for further learning. The two policies will be composed in an\nadaptive manner for interacting with the environment. With this approach, the\npolicy previously learned offline is fully retained during online learning,\nthus mitigating the potential issues such as destroying the useful behaviors of\nthe offline policy in the initial stage of online learning while allowing the\noffline policy participate in the exploration naturally in an adaptive manner.\nMoreover, new useful behaviors can potentially be captured by the newly added\npolicy through learning. Experiments are conducted on a number of tasks and the\nresults demonstrate the effectiveness of the proposed approach.", + "authors": "Haichao Zhang, We Xu, Haonan Yu", + "published": "2023-02-02", + "updated": "2023-04-15", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2010.13611v3", + "title": "OPAL: Offline Primitive Discovery for Accelerating Offline Reinforcement Learning", + "abstract": "Reinforcement learning (RL) has achieved impressive performance in a variety\nof online settings in which an agent's ability to query the environment for\ntransitions and rewards is effectively unlimited. However, in many practical\napplications, the situation is reversed: an agent may have access to large\namounts of undirected offline experience data, while access to the online\nenvironment is severely limited. In this work, we focus on this offline\nsetting. 
Our main insight is that, when presented with offline data composed of\na variety of behaviors, an effective way to leverage this data is to extract a\ncontinuous space of recurring and temporally extended primitive behaviors\nbefore using these primitives for downstream task learning. Primitives\nextracted in this way serve two purposes: they delineate the behaviors that are\nsupported by the data from those that are not, making them useful for avoiding\ndistributional shift in offline RL; and they provide a degree of temporal\nabstraction, which reduces the effective horizon yielding better learning in\ntheory, and improved offline RL in practice. In addition to benefiting offline\npolicy optimization, we show that performing offline primitive learning in this\nway can also be leveraged for improving few-shot imitation learning as well as\nexploration and transfer in online RL on a variety of benchmark domains.\nVisualizations are available at https://sites.google.com/view/opal-iclr", + "authors": "Anurag Ajay, Aviral Kumar, Pulkit Agrawal, Sergey Levine, Ofir Nachum", + "published": "2020-10-26", + "updated": "2021-05-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2102.05612v1", + "title": "Personalization for Web-based Services using Offline Reinforcement Learning", + "abstract": "Large-scale Web-based services present opportunities for improving UI\npolicies based on observed user interactions. We address challenges of learning\nsuch policies through model-free offline Reinforcement Learning (RL) with\noff-policy training. Deployed in a production system for user authentication in\na major social network, it significantly improves long-term objectives. We\narticulate practical challenges, compare several ML techniques, provide\ninsights on training and evaluation of RL models, and discuss generalizations.", + "authors": "Pavlos Athanasios Apostolopoulos, Zehui Wang, Hanson Wang, Chad Zhou, Kittipat Virochsiri, Norm Zhou, Igor L. Markov", + "published": "2021-02-10", + "updated": "2021-02-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.HC", + "cs.SE" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2102.05815v1", + "title": "Representation Matters: Offline Pretraining for Sequential Decision Making", + "abstract": "The recent success of supervised learning methods on ever larger offline\ndatasets has spurred interest in the reinforcement learning (RL) field to\ninvestigate whether the same paradigms can be translated to RL algorithms. This\nresearch area, known as offline RL, has largely focused on offline policy\noptimization, aiming to find a return-maximizing policy exclusively from\noffline data. In this paper, we consider a slightly different approach to\nincorporating offline data into sequential decision-making. We aim to answer\nthe question, what unsupervised objectives applied to offline datasets are able\nto learn state representations which elevate performance on downstream tasks,\nwhether those downstream tasks be online RL, imitation learning from expert\ndemonstrations, or even offline policy optimization based on the same offline\ndataset? 
Through a variety of experiments utilizing standard offline RL\ndatasets, we find that the use of pretraining with unsupervised learning\nobjectives can dramatically improve the performance of policy learning\nalgorithms that otherwise yield mediocre performance on their own. Extensive\nablations further provide insights into what components of these unsupervised\nobjectives -- e.g., reward prediction, continuous or discrete representations,\npretraining or finetuning -- are most important and in which settings.", + "authors": "Mengjiao Yang, Ofir Nachum", + "published": "2021-02-11", + "updated": "2021-02-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.07166v1", + "title": "Regularizing a Model-based Policy Stationary Distribution to Stabilize Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) extends the paradigm of classical RL\nalgorithms to purely learning from static datasets, without interacting with\nthe underlying environment during the learning process. A key challenge of\noffline RL is the instability of policy training, caused by the mismatch\nbetween the distribution of the offline data and the undiscounted stationary\nstate-action distribution of the learned policy. To avoid the detrimental\nimpact of distribution mismatch, we regularize the undiscounted stationary\ndistribution of the current policy towards the offline data during the policy\noptimization process. Further, we train a dynamics model to both implement this\nregularization and better estimate the stationary distribution of the current\npolicy, reducing the error induced by distribution mismatch. On a wide range of\ncontinuous-control offline RL datasets, our method indicates competitive\nperformance, which validates our algorithm. The code is publicly available.", + "authors": "Shentao Yang, Yihao Feng, Shujian Zhang, Mingyuan Zhou", + "published": "2022-06-14", + "updated": "2022-06-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2107.01757v1", + "title": "The Least Restriction for Offline Reinforcement Learning", + "abstract": "Many practical applications of reinforcement learning (RL) constrain the\nagent to learn from a fixed offline dataset of logged interactions, which has\nalready been gathered, without offering further possibility for data\ncollection. However, commonly used off-policy RL algorithms, such as the Deep Q\nNetwork and the Deep Deterministic Policy Gradient, are incapable of learning\nwithout data correlated to the distribution under the current policy, making\nthem ineffective for this offline setting. As the first step towards useful\noffline RL algorithms, we analysis the reason of instability in standard\noff-policy RL algorithms. It is due to the bootstrapping error. The key to\navoiding this error, is ensuring that the agent's action space does not go out\nof the fixed offline dataset. Based on our consideration, a creative offline RL\nframework, the Least Restriction (LR), is proposed in this paper. The LR\nregards selecting an action as taking a sample from the probability\ndistribution. It merely set a little limit for action selection, which not only\navoid the action being out of the offline dataset but also remove all the\nunreasonable restrictions in earlier approaches (e.g. Batch-Constrained Deep\nQ-Learning). 
In the further, we will demonstrate that the LR, is able to learn\nrobustly from different offline datasets, including random and suboptimal\ndemonstrations, on a range of practical control tasks.", + "authors": "Zizhou Su", + "published": "2021-07-05", + "updated": "2021-07-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2210.13846v1", + "title": "Adaptive Behavior Cloning Regularization for Stable Offline-to-Online Reinforcement Learning", + "abstract": "Offline reinforcement learning, by learning from a fixed dataset, makes it\npossible to learn agent behaviors without interacting with the environment.\nHowever, depending on the quality of the offline dataset, such pre-trained\nagents may have limited performance and would further need to be fine-tuned\nonline by interacting with the environment. During online fine-tuning, the\nperformance of the pre-trained agent may collapse quickly due to the sudden\ndistribution shift from offline to online data. While constraints enforced by\noffline RL methods such as a behaviour cloning loss prevent this to an extent,\nthese constraints also significantly slow down online fine-tuning by forcing\nthe agent to stay close to the behavior policy. We propose to adaptively weigh\nthe behavior cloning loss during online fine-tuning based on the agent's\nperformance and training stability. Moreover, we use a randomized ensemble of Q\nfunctions to further increase the sample efficiency of online fine-tuning by\nperforming a large number of learning updates. Experiments show that the\nproposed method yields state-of-the-art offline-to-online reinforcement\nlearning performance on the popular D4RL benchmark. Code is available:\n\\url{https://github.com/zhaoyi11/adaptive_bc}.", + "authors": "Yi Zhao, Rinu Boney, Alexander Ilin, Juho Kannala, Joni Pajarinen", + "published": "2022-10-25", + "updated": "2022-10-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.12755v1", + "title": "Beyond OOD State Actions: Supported Cross-Domain Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) aims to learn a policy using only\npre-collected and fixed data. Although avoiding the time-consuming online\ninteractions in RL, it poses challenges for out-of-distribution (OOD) state\nactions and often suffers from data inefficiency for training. Despite many\nefforts being devoted to addressing OOD state actions, the latter (data\ninefficiency) receives little attention in offline RL. To address this, this\npaper proposes the cross-domain offline RL, which assumes offline data\nincorporate additional source-domain data from varying transition dynamics\n(environments), and expects it to contribute to the offline data efficiency. To\ndo so, we identify a new challenge of OOD transition dynamics, beyond the\ncommon OOD state actions issue, when utilizing cross-domain offline data. Then,\nwe propose our method BOSA, which employs two support-constrained objectives to\naddress the above OOD issues. 
Through extensive experiments in the cross-domain\noffline RL setting, we demonstrate BOSA can greatly improve offline data\nefficiency: using only 10\\% of the target data, BOSA could achieve {74.4\\%} of\nthe SOTA offline RL performance that uses 100\\% of the target data.\nAdditionally, we also show BOSA can be effortlessly plugged into model-based\noffline RL and noising data augmentation techniques (used for generating\nsource-domain data), which naturally avoids the potential dynamics mismatch\nbetween target-domain data and newly generated source-domain data.", + "authors": "Jinxin Liu, Ziqi Zhang, Zhenyu Wei, Zifeng Zhuang, Yachen Kang, Sibo Gai, Donglin Wang", + "published": "2023-06-22", + "updated": "2023-06-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2110.00188v2", + "title": "Offline Reinforcement Learning with Reverse Model-based Imagination", + "abstract": "In offline reinforcement learning (offline RL), one of the main challenges is\nto deal with the distributional shift between the learning policy and the given\ndataset. To address this problem, recent offline RL methods attempt to\nintroduce conservatism bias to encourage learning in high-confidence areas.\nModel-free approaches directly encode such bias into policy or value function\nlearning using conservative regularizations or special network structures, but\ntheir constrained policy search limits the generalization beyond the offline\ndataset. Model-based approaches learn forward dynamics models with conservatism\nquantifications and then generate imaginary trajectories to extend the offline\ndatasets. However, due to limited samples in offline datasets, conservatism\nquantifications often suffer from overgeneralization in out-of-support regions.\nThe unreliable conservative measures will mislead forward model-based\nimaginations to undesired areas, leading to overaggressive behaviors. To\nencourage more conservatism, we propose a novel model-based offline RL\nframework, called Reverse Offline Model-based Imagination (ROMI). We learn a\nreverse dynamics model in conjunction with a novel reverse policy, which can\ngenerate rollouts leading to the target goal states within the offline dataset.\nThese reverse imaginations provide informed data augmentation for model-free\npolicy learning and enable conservative generalization beyond the offline\ndataset. ROMI can effectively combine with off-the-shelf model-free algorithms\nto enable model-based generalization with proper conservatism. Empirical\nresults show that our method can generate more conservative behaviors and\nachieve state-of-the-art performance on offline RL benchmark tasks.", + "authors": "Jianhao Wang, Wenzhe Li, Haozhe Jiang, Guangxiang Zhu, Siyuan Li, Chongjie Zhang", + "published": "2021-10-01", + "updated": "2021-11-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2308.11336v1", + "title": "On the Opportunities and Challenges of Offline Reinforcement Learning for Recommender Systems", + "abstract": "Reinforcement learning serves as a potent tool for modeling dynamic user\ninterests within recommender systems, garnering increasing research attention\nof late. However, a significant drawback persists: its poor data efficiency,\nstemming from its interactive nature. 
The training of reinforcement\nlearning-based recommender systems demands expensive online interactions to\namass adequate trajectories, essential for agents to learn user preferences.\nThis inefficiency renders reinforcement learning-based recommender systems a\nformidable undertaking, necessitating the exploration of potential solutions.\nRecent strides in offline reinforcement learning present a new perspective.\nOffline reinforcement learning empowers agents to glean insights from offline\ndatasets and deploy learned policies in online settings. Given that recommender\nsystems possess extensive offline datasets, the framework of offline\nreinforcement learning aligns seamlessly. Despite being a burgeoning field,\nworks centered on recommender systems utilizing offline reinforcement learning\nremain limited. This survey aims to introduce and delve into offline\nreinforcement learning within recommender systems, offering an inclusive review\nof existing literature in this domain. Furthermore, we strive to underscore\nprevalent challenges, opportunities, and future pathways, poised to propel\nresearch in this evolving field.", + "authors": "Xiaocong Chen, Siyu Wang, Julian McAuley, Dietmar Jannach, Lina Yao", + "published": "2023-08-22", + "updated": "2023-08-22", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.02829v3", + "title": "RORL: Robust Offline Reinforcement Learning via Conservative Smoothing", + "abstract": "Offline reinforcement learning (RL) provides a promising direction to exploit\nmassive amount of offline data for complex decision-making tasks. Due to the\ndistribution shift issue, current offline RL algorithms are generally designed\nto be conservative in value estimation and action selection. However, such\nconservatism can impair the robustness of learned policies when encountering\nobservation deviation under realistic conditions, such as sensor errors and\nadversarial attacks. To trade off robustness and conservatism, we propose\nRobust Offline Reinforcement Learning (RORL) with a novel conservative\nsmoothing technique. In RORL, we explicitly introduce regularization on the\npolicy and the value function for states near the dataset, as well as\nadditional conservative value estimation on these states. Theoretically, we\nshow RORL enjoys a tighter suboptimality bound than recent theoretical results\nin linear MDPs. We demonstrate that RORL can achieve state-of-the-art\nperformance on the general offline RL benchmark and is considerably robust to\nadversarial observation perturbations.", + "authors": "Rui Yang, Chenjia Bai, Xiaoteng Ma, Zhaoran Wang, Chongjie Zhang, Lei Han", + "published": "2022-06-06", + "updated": "2022-10-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2303.17396v1", + "title": "Finetuning from Offline Reinforcement Learning: Challenges, Trade-offs and Practical Solutions", + "abstract": "Offline reinforcement learning (RL) allows for the training of competent\nagents from offline datasets without any interaction with the environment.\nOnline finetuning of such offline models can further improve performance. But\nhow should we ideally finetune agents obtained from offline RL training? 
While\noffline RL algorithms can in principle be used for finetuning, in practice,\ntheir online performance improves slowly. In contrast, we show that it is\npossible to use standard online off-policy algorithms for faster improvement.\nHowever, we find this approach may suffer from policy collapse, where the\npolicy undergoes severe performance deterioration during initial online\nlearning. We investigate the issue of policy collapse and how it relates to\ndata diversity, algorithm choices and online replay distribution. Based on\nthese insights, we propose a conservative policy optimization procedure that\ncan achieve stable and sample-efficient online learning from offline\npretraining.", + "authors": "Yicheng Luo, Jackie Kay, Edward Grefenstette, Marc Peter Deisenroth", + "published": "2023-03-30", + "updated": "2023-03-30", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2211.07614v1", + "title": "Towards Data-Driven Offline Simulations for Online Reinforcement Learning", + "abstract": "Modern decision-making systems, from robots to web recommendation engines,\nare expected to adapt: to user preferences, changing circumstances or even new\ntasks. Yet, it is still uncommon to deploy a dynamically learning agent (rather\nthan a fixed policy) to a production system, as it's perceived as unsafe. Using\nhistorical data to reason about learning algorithms, similar to offline policy\nevaluation (OPE) applied to fixed policies, could help practitioners evaluate\nand ultimately deploy such adaptive agents to production. In this work, we\nformalize offline learner simulation (OLS) for reinforcement learning (RL) and\npropose a novel evaluation protocol that measures both fidelity and efficiency\nof the simulation. For environments with complex high-dimensional observations,\nwe propose a semi-parametric approach that leverages recent advances in latent\nstate discovery in order to achieve accurate and efficient offline simulations.\nIn preliminary experiments, we show the advantage of our approach compared to\nfully non-parametric baselines. The code to reproduce these experiments will be\nmade available at https://github.com/microsoft/rl-offline-simulation.", + "authors": "Shengpu Tang, Felipe Vieira Frujeri, Dipendra Misra, Alex Lamb, John Langford, Paul Mineiro, Sebastian Kochman", + "published": "2022-11-14", + "updated": "2022-11-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2304.07920v2", + "title": "Causal Decision Transformer for Recommender Systems via Offline Reinforcement Learning", + "abstract": "Reinforcement learning-based recommender systems have recently gained\npopularity. However, the design of the reward function, on which the agent\nrelies to optimize its recommendation policy, is often not straightforward.\nExploring the causality underlying users' behavior can take the place of the\nreward function in guiding the agent to capture the dynamic interests of users.\nMoreover, due to the typical limitations of simulation environments (e.g., data\ninefficiency), most of the work cannot be broadly applied in large-scale\nsituations. Although some works attempt to convert the offline dataset into a\nsimulator, data inefficiency makes the learning process even slower. 
Because of\nthe nature of reinforcement learning (i.e., learning by interaction), it cannot\ncollect enough data to train during a single interaction. Furthermore,\ntraditional reinforcement learning algorithms do not have a solid capability\nlike supervised learning methods to learn from offline datasets directly. In\nthis paper, we propose a new model named the causal decision transformer for\nrecommender systems (CDT4Rec). CDT4Rec is an offline reinforcement learning\nsystem that can learn from a dataset rather than from online interaction.\nMoreover, CDT4Rec employs the transformer architecture, which is capable of\nprocessing large offline datasets and capturing both short-term and long-term\ndependencies within the data to estimate the causal relationship between\naction, state, and reward. To demonstrate the feasibility and superiority of\nour model, we have conducted experiments on six real-world offline datasets and\none online simulator.", + "authors": "Siyu Wang, Xiaocong Chen, Dietmar Jannach, Lina Yao", + "published": "2023-04-17", + "updated": "2023-08-27", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.10442v1", + "title": "Robust Task Representations for Offline Meta-Reinforcement Learning via Contrastive Learning", + "abstract": "We study offline meta-reinforcement learning, a practical reinforcement\nlearning paradigm that learns from offline data to adapt to new tasks. The\ndistribution of offline data is determined jointly by the behavior policy and\nthe task. Existing offline meta-reinforcement learning algorithms cannot\ndistinguish these factors, making task representations unstable to the change\nof behavior policies. To address this problem, we propose a contrastive\nlearning framework for task representations that are robust to the distribution\nmismatch of behavior policies in training and test. We design a bi-level\nencoder structure, use mutual information maximization to formalize task\nrepresentation learning, derive a contrastive learning objective, and introduce\nseveral approaches to approximate the true distribution of negative pairs.\nExperiments on a variety of offline meta-reinforcement learning benchmarks\ndemonstrate the advantages of our method over prior methods, especially on the\ngeneralization to out-of-distribution behavior policies. The code is available\nat https://github.com/PKU-AI-Edge/CORRO.", + "authors": "Haoqi Yuan, Zongqing Lu", + "published": "2022-06-21", + "updated": "2022-06-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2305.16217v2", + "title": "Beyond Reward: Offline Preference-guided Policy Optimization", + "abstract": "This study focuses on the topic of offline preference-based reinforcement\nlearning (PbRL), a variant of conventional reinforcement learning that\ndispenses with the need for online interaction or specification of reward\nfunctions. Instead, the agent is provided with fixed offline trajectories and\nhuman preferences between pairs of trajectories to extract the dynamics and\ntask information, respectively. Since the dynamics and task information are\northogonal, a naive approach would involve using preference-based reward\nlearning followed by an off-the-shelf offline RL algorithm. 
However, this\nrequires the separate learning of a scalar reward function, which is assumed to\nbe an information bottleneck of the learning process. To address this issue, we\npropose the offline preference-guided policy optimization (OPPO) paradigm,\nwhich models offline trajectories and preferences in a one-step process,\neliminating the need for separately learning a reward function. OPPO achieves\nthis by introducing an offline hindsight information matching objective for\noptimizing a contextual policy and a preference modeling objective for finding\nthe optimal context. OPPO further integrates a well-performing decision policy\nby optimizing the two objectives iteratively. Our empirical results demonstrate\nthat OPPO effectively models offline preferences and outperforms prior\ncompeting baselines, including offline RL algorithms performed over either true\nor pseudo reward function specifications. Our code is available on the project\nwebsite: https://sites.google.com/view/oppo-icml-2023 .", + "authors": "Yachen Kang, Diyuan Shi, Jinxin Liu, Li He, Donglin Wang", + "published": "2023-05-25", + "updated": "2023-06-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2310.11731v1", + "title": "Action-Quantized Offline Reinforcement Learning for Robotic Skill Learning", + "abstract": "The offline reinforcement learning (RL) paradigm provides a general recipe to\nconvert static behavior datasets into policies that can perform better than the\npolicy that collected the data. While policy constraints, conservatism, and\nother methods for mitigating distributional shifts have made offline\nreinforcement learning more effective, the continuous action setting often\nnecessitates various approximations for applying these techniques. Many of\nthese challenges are greatly alleviated in discrete action settings, where\noffline RL constraints and regularizers can often be computed more precisely or\neven exactly. In this paper, we propose an adaptive scheme for action\nquantization. We use a VQ-VAE to learn state-conditioned action quantization,\navoiding the exponential blowup that comes with na\\\"ive discretization of the\naction space. We show that several state-of-the-art offline RL methods such as\nIQL, CQL, and BRAC improve in performance on benchmarks when combined with our\nproposed discretization scheme. We further validate our approach on a set of\nchallenging long-horizon complex robotic manipulation tasks in the Robomimic\nenvironment, where our discretized offline RL algorithms are able to improve\nupon their continuous counterparts by 2-3x. Our project page is at\nhttps://saqrl.github.io/", + "authors": "Jianlan Luo, Perry Dong, Jeffrey Wu, Aviral Kumar, Xinyang Geng, Sergey Levine", + "published": "2023-10-18", + "updated": "2023-10-18", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2301.12876v2", + "title": "Guiding Online Reinforcement Learning with Action-Free Offline Pretraining", + "abstract": "Offline RL methods have been shown to reduce the need for environment\ninteraction by training agents using offline collected episodes. 
However, these\nmethods typically require action information to be logged during data\ncollection, which can be difficult or even impossible in some practical cases.\nIn this paper, we investigate the potential of using action-free offline\ndatasets to improve online reinforcement learning, name this problem\nReinforcement Learning with Action-Free Offline Pretraining (AFP-RL). We\nintroduce Action-Free Guide (AF-Guide), a method that guides online training by\nextracting knowledge from action-free offline datasets. AF-Guide consists of an\nAction-Free Decision Transformer (AFDT) implementing a variant of Upside-Down\nReinforcement Learning. It learns to plan the next states from the offline\ndataset, and a Guided Soft Actor-Critic (Guided SAC) that learns online with\nguidance from AFDT. Experimental results show that AF-Guide can improve sample\nefficiency and performance in online training thanks to the knowledge from the\naction-free offline dataset. Code is available at\nhttps://github.com/Vision-CAIR/AF-Guide.", + "authors": "Deyao Zhu, Yuhui Wang, J\u00fcrgen Schmidhuber, Mohamed Elhoseiny", + "published": "2023-01-30", + "updated": "2023-03-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2210.05922v1", + "title": "A Unified Framework for Alternating Offline Model Training and Policy Learning", + "abstract": "In offline model-based reinforcement learning (offline MBRL), we learn a\ndynamic model from historically collected data, and subsequently utilize the\nlearned model and fixed datasets for policy learning, without further\ninteracting with the environment. Offline MBRL algorithms can improve the\nefficiency and stability of policy learning over the model-free algorithms.\nHowever, in most of the existing offline MBRL algorithms, the learning\nobjectives for the dynamic models and the policies are isolated from each\nother. Such an objective mismatch may lead to inferior performance of the\nlearned agents. In this paper, we address this issue by developing an iterative\noffline MBRL framework, where we maximize a lower bound of the true expected\nreturn, by alternating between dynamic-model training and policy learning. With\nthe proposed unified model-policy learning framework, we achieve competitive\nperformance on a wide range of continuous-control offline reinforcement\nlearning datasets. Source code is publicly released.", + "authors": "Shentao Yang, Shujian Zhang, Yihao Feng, Mingyuan Zhou", + "published": "2022-10-12", + "updated": "2022-10-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2305.03097v1", + "title": "Federated Ensemble-Directed Offline Reinforcement Learning", + "abstract": "We consider the problem of federated offline reinforcement learning (RL), a\nscenario under which distributed learning agents must collaboratively learn a\nhigh-quality control policy only using small pre-collected datasets generated\naccording to different unknown behavior policies. Naively combining a standard\noffline RL approach with a standard federated learning approach to solve this\nproblem can lead to poorly performing policies. In response, we develop the\nFederated Ensemble-Directed Offline Reinforcement Learning Algorithm (FEDORA),\nwhich distills the collective wisdom of the clients using an ensemble learning\napproach. 
We develop the FEDORA codebase to utilize distributed compute\nresources on a federated learning platform. We show that FEDORA significantly\noutperforms other approaches, including offline RL over the combined data pool,\nin various complex continuous control environments and real world datasets.\nFinally, we demonstrate the performance of FEDORA in the real-world on a mobile\nrobot.", + "authors": "Desik Rengarajan, Nitin Ragothaman, Dileep Kalathil, Srinivas Shakkottai", + "published": "2023-05-04", + "updated": "2023-05-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.18617v1", + "title": "ELA: Exploited Level Augmentation for Offline Learning in Zero-Sum Games", + "abstract": "Offline learning has become widely used due to its ability to derive\neffective policies from offline datasets gathered by expert demonstrators\nwithout interacting with the environment directly. Recent research has explored\nvarious ways to enhance offline learning efficiency by considering the\ncharacteristics (e.g., expertise level or multiple demonstrators) of the\ndataset. However, a different approach is necessary in the context of zero-sum\ngames, where outcomes vary significantly based on the strategy of the opponent.\nIn this study, we introduce a novel approach that uses unsupervised learning\ntechniques to estimate the exploited level of each trajectory from the offline\ndataset of zero-sum games made by diverse demonstrators. Subsequently, we\nincorporate the estimated exploited level into the offline learning to maximize\nthe influence of the dominant strategy. Our method enables interpretable\nexploited level estimation in multiple zero-sum games and effectively\nidentifies dominant strategy data. Also, our exploited level augmented offline\nlearning significantly enhances the original offline learning algorithms\nincluding imitation learning and offline reinforcement learning for zero-sum\ngames.", + "authors": "Shiqi Lei, Kanghoon Lee, Linjing Li, Jinkyoo Park, Jiachen Li", + "published": "2024-02-28", + "updated": "2024-02-28", + "primary_cat": "cs.GT", + "cats": [ + "cs.GT", + "cs.AI", + "cs.LG", + "cs.MA" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.02000v1", + "title": "Hybrid Value Estimation for Off-policy Evaluation and Offline Reinforcement Learning", + "abstract": "Value function estimation is an indispensable subroutine in reinforcement\nlearning, which becomes more challenging in the offline setting. In this paper,\nwe propose Hybrid Value Estimation (HVE) to reduce value estimation error,\nwhich trades off bias and variance by balancing between the value estimation\nfrom offline data and the learned model. Theoretical analysis discloses that\nHVE enjoys a better error bound than the direct methods. HVE can be leveraged\nin both off-policy evaluation and offline reinforcement learning settings. We,\ntherefore, provide two concrete algorithms Off-policy HVE (OPHVE) and\nModel-based Offline HVE (MOHVE), respectively. Empirical evaluations on MuJoCo\ntasks corroborate the theoretical claim. OPHVE outperforms other off-policy\nevaluation methods in all three metrics measuring the estimation effectiveness,\nwhile MOHVE achieves better or comparable performance with state-of-the-art\noffline reinforcement learning algorithms. 
We hope that HVE could shed some\nlight on further research on reinforcement learning from fixed data.", + "authors": "Xue-Kun Jin, Xu-Hui Liu, Shengyi Jiang, Yang Yu", + "published": "2022-06-04", + "updated": "2022-06-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2303.07693v1", + "title": "Adaptive Policy Learning for Offline-to-Online Reinforcement Learning", + "abstract": "Conventional reinforcement learning (RL) needs an environment to collect\nfresh data, which is impractical when online interactions are costly. Offline\nRL provides an alternative solution by directly learning from the previously\ncollected dataset. However, it will yield unsatisfactory performance if the\nquality of the offline datasets is poor. In this paper, we consider an\noffline-to-online setting where the agent is first learned from the offline\ndataset and then trained online, and propose a framework called Adaptive Policy\nLearning for effectively taking advantage of offline and online data.\nSpecifically, we explicitly consider the difference between the online and\noffline data and apply an adaptive update scheme accordingly, that is, a\npessimistic update strategy for the offline dataset and an optimistic/greedy\nupdate scheme for the online dataset. Such a simple and effective method\nprovides a way to mix the offline and online RL and achieve the best of both\nworlds. We further provide two detailed algorithms for implementing the\nframework through embedding value or policy-based RL algorithms into it.\nFinally, we conduct extensive experiments on popular continuous control tasks,\nand results show that our algorithm can learn the expert policy with high\nsample efficiency even when the quality of offline dataset is poor, e.g.,\nrandom dataset.", + "authors": "Han Zheng, Xufang Luo, Pengfei Wei, Xuan Song, Dongsheng Li, Jing Jiang", + "published": "2023-03-14", + "updated": "2023-03-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2303.04268v1", + "title": "On the Sample Complexity of Vanilla Model-Based Offline Reinforcement Learning with Dependent Samples", + "abstract": "Offline reinforcement learning (offline RL) considers problems where learning\nis performed using only previously collected samples and is helpful for the\nsettings in which collecting new data is costly or risky. In model-based\noffline RL, the learner performs estimation (or optimization) using a model\nconstructed according to the empirical transition frequencies. We analyze the\nsample complexity of vanilla model-based offline RL with dependent samples in\nthe infinite-horizon discounted-reward setting. In our setting, the samples\nobey the dynamics of the Markov decision process and, consequently, may have\ninterdependencies. Under no assumption of independent samples, we provide a\nhigh-probability, polynomial sample complexity bound for vanilla model-based\noff-policy evaluation that requires partial or uniform coverage. We extend this\nresult to the off-policy optimization under uniform coverage. 
As a comparison\nto the model-based approach, we analyze the sample complexity of off-policy\nevaluation with vanilla importance sampling in the infinite-horizon setting.\nFinally, we provide an estimator that outperforms the sample-mean estimator for\nalmost deterministic dynamics that are prevalent in reinforcement learning.", + "authors": "Mustafa O. Karabag, Ufuk Topcu", + "published": "2023-03-07", + "updated": "2023-03-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2307.11620v2", + "title": "Offline Multi-Agent Reinforcement Learning with Implicit Global-to-Local Value Regularization", + "abstract": "Offline reinforcement learning (RL) has received considerable attention in\nrecent years due to its attractive capability of learning policies from offline\ndatasets without environmental interactions. Despite some success in the\nsingle-agent setting, offline multi-agent RL (MARL) remains to be a challenge.\nThe large joint state-action space and the coupled multi-agent behaviors pose\nextra complexities for offline policy optimization. Most existing offline MARL\nstudies simply apply offline data-related regularizations on individual agents,\nwithout fully considering the multi-agent system at the global level. In this\nwork, we present OMIGA, a new offline multi-agent RL algorithm with implicit\nglobal-to-local value regularization. OMIGA provides a principled framework to\nconvert global-level value regularization into equivalent implicit local value\nregularizations and simultaneously enables in-sample learning, thus elegantly\nbridging multi-agent value decomposition and policy learning with offline\nregularizations. Based on comprehensive experiments on the offline multi-agent\nMuJoCo and StarCraft II micro-management tasks, we show that OMIGA achieves\nsuperior performance over the state-of-the-art offline MARL methods in almost\nall tasks.", + "authors": "Xiangsen Wang, Haoran Xu, Yinan Zheng, Xianyuan Zhan", + "published": "2023-07-21", + "updated": "2023-11-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.MA" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2112.15578v1", + "title": "Importance of Empirical Sample Complexity Analysis for Offline Reinforcement Learning", + "abstract": "We hypothesize that empirically studying the sample complexity of offline\nreinforcement learning (RL) is crucial for the practical applications of RL in\nthe real world. Several recent works have demonstrated the ability to learn\npolicies directly from offline data. In this work, we ask the question of the\ndependency on the number of samples for learning from offline data. Our\nobjective is to emphasize that studying sample complexity for offline RL is\nimportant, and is an indicator of the usefulness of existing offline\nalgorithms. We propose an evaluation approach for sample complexity analysis of\noffline RL.", + "authors": "Samin Yeasar Arnob, Riashat Islam, Doina Precup", + "published": "2021-12-31", + "updated": "2021-12-31", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2404.12639v1", + "title": "Single-Task Continual Offline Reinforcement Learning", + "abstract": "In this paper, we study the continual learning problem of single-task offline\nreinforcement learning. 
In the past, continual reinforcement learning usually\nonly dealt with multitasking, that is, learning multiple related or unrelated\ntasks in a row, but once each learned task was learned, it was not relearned,\nbut only used in subsequent processes. However, offline reinforcement learning\ntasks require the continuously learning of multiple different datasets for the\nsame task. Existing algorithms will try their best to achieve the best results\nin each offline dataset they have learned and the skills of the network will\noverwrite the high-quality datasets that have been learned after learning the\nsubsequent poor datasets. On the other hand, if too much emphasis is placed on\nstability, the network will learn the subsequent better dataset after learning\nthe poor offline dataset, and the problem of insufficient plasticity and\nnon-learning will occur. How to design a strategy that can always preserve the\nbest performance for each state in the data that has been learned is a new\nchallenge and the focus of this study. Therefore, this study proposes a new\nalgorithm, called Ensemble Offline Reinforcement Learning Based on Experience\nReplay, which introduces multiple value networks to learn the same dataset and\njudge whether the strategy has been learned by the discrete degree of the value\nnetwork, to improve the performance of the network in single-task offline\nreinforcement learning.", + "authors": "Sibo Gai, Donglin Wang", + "published": "2024-04-19", + "updated": "2024-04-19", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "I.2.6" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2307.15690v1", + "title": "Benchmarking Offline Reinforcement Learning on Real-Robot Hardware", + "abstract": "Learning policies from previously recorded data is a promising direction for\nreal-world robotics tasks, as online learning is often infeasible. Dexterous\nmanipulation in particular remains an open problem in its general form. The\ncombination of offline reinforcement learning with large diverse datasets,\nhowever, has the potential to lead to a breakthrough in this challenging domain\nanalogously to the rapid progress made in supervised learning in recent years.\nTo coordinate the efforts of the research community toward tackling this\nproblem, we propose a benchmark including: i) a large collection of data for\noffline learning from a dexterous manipulation platform on two tasks, obtained\nwith capable RL agents trained in simulation; ii) the option to execute learned\npolicies on a real-world robotic system and a simulation for efficient\ndebugging. We evaluate prominent open-sourced offline reinforcement learning\nalgorithms on the datasets and provide a reproducible experimental setup for\noffline reinforcement learning on real systems.", + "authors": "Nico G\u00fcrtler, Sebastian Blaes, Pavel Kolev, Felix Widmaier, Manuel W\u00fcthrich, Stefan Bauer, Bernhard Sch\u00f6lkopf, Georg Martius", + "published": "2023-07-28", + "updated": "2023-07-28", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.08900v1", + "title": "Offline Multi-Agent Reinforcement Learning with Coupled Value Factorization", + "abstract": "Offline reinforcement learning (RL) that learns policies from offline\ndatasets without environment interaction has received considerable attention in\nrecent years. 
Compared with the rich literature in the single-agent case,\noffline multi-agent RL is still a relatively underexplored area. Most existing\nmethods directly apply offline RL ingredients in the multi-agent setting\nwithout fully leveraging the decomposable problem structure, leading to less\nsatisfactory performance in complex tasks. We present OMAC, a new offline\nmulti-agent RL algorithm with coupled value factorization. OMAC adopts a\ncoupled value factorization scheme that decomposes the global value function\ninto local and shared components, and also maintains the credit assignment\nconsistency between the state-value and Q-value functions. Moreover, OMAC\nperforms in-sample learning on the decomposed local state-value functions,\nwhich implicitly conducts max-Q operation at the local level while avoiding\ndistributional shift caused by evaluating out-of-distribution actions. Based on\nthe comprehensive evaluations of the offline multi-agent StarCraft II\nmicro-management tasks, we demonstrate the superior performance of OMAC over\nthe state-of-the-art offline multi-agent RL methods.", + "authors": "Xiangsen Wang, Xianyuan Zhan", + "published": "2023-06-15", + "updated": "2023-06-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.MA" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2308.14897v1", + "title": "Statistically Efficient Variance Reduction with Double Policy Estimation for Off-Policy Evaluation in Sequence-Modeled Reinforcement Learning", + "abstract": "Offline reinforcement learning aims to utilize datasets of previously\ngathered environment-action interaction records to learn a policy without\naccess to the real environment. Recent work has shown that offline\nreinforcement learning can be formulated as a sequence modeling problem and\nsolved via supervised learning with approaches such as decision transformer.\nWhile these sequence-based methods achieve competitive results over\nreturn-to-go methods, especially on tasks that require longer episodes or with\nscarce rewards, importance sampling is not considered to correct the policy\nbias when dealing with off-policy data, mainly due to the absence of behavior\npolicy and the use of deterministic evaluation policies. To this end, we\npropose DPE: an RL algorithm that blends offline sequence modeling and offline\nreinforcement learning with Double Policy Estimation (DPE) in a unified\nframework with statistically proven properties on variance reduction. We\nvalidate our method in multiple tasks of OpenAI Gym with D4RL benchmarks. Our\nmethod brings a performance improvements on selected methods which outperforms\nSOTA baselines in several tasks, demonstrating the advantages of enabling\ndouble policy estimation for sequence-modeled reinforcement learning.", + "authors": "Hanhan Zhou, Tian Lan, Vaneet Aggarwal", + "published": "2023-08-28", + "updated": "2023-08-28", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.DC" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.06871v3", + "title": "Improving Offline-to-Online Reinforcement Learning with Q-Ensembles", + "abstract": "Offline reinforcement learning (RL) is a learning paradigm where an agent\nlearns from a fixed dataset of experience. However, learning solely from a\nstatic dataset can limit the performance due to the lack of exploration. 
To\novercome it, offline-to-online RL combines offline pre-training with online\nfine-tuning, which enables the agent to further refine its policy by\ninteracting with the environment in real-time. Despite its benefits, existing\noffline-to-online RL methods suffer from performance degradation and slow\nimprovement during the online phase. To tackle these challenges, we propose a\nnovel framework called Ensemble-based Offline-to-Online (E2O) RL. By increasing\nthe number of Q-networks, we seamlessly bridge offline pre-training and online\nfine-tuning without degrading performance. Moreover, to expedite online\nperformance enhancement, we appropriately loosen the pessimism of Q-value\nestimation and incorporate ensemble-based exploration mechanisms into our\nframework. Experimental results demonstrate that E2O can substantially improve\nthe training stability, learning efficiency, and final performance of existing\noffline RL methods during online fine-tuning on a range of locomotion and\nnavigation tasks, significantly outperforming existing offline-to-online RL\nmethods.", + "authors": "Kai Zhao, Yi Ma, Jianye Hao, Jinyi Liu, Yan Zheng, Zhaopeng Meng", + "published": "2023-06-12", + "updated": "2023-12-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2110.10905v1", + "title": "Efficient Robotic Manipulation Through Offline-to-Online Reinforcement Learning and Goal-Aware State Information", + "abstract": "End-to-end learning robotic manipulation with high data efficiency is one of\nthe key challenges in robotics. The latest methods that utilize human\ndemonstration data and unsupervised representation learning has proven to be a\npromising direction to improve RL learning efficiency. The use of demonstration\ndata also allows \"warming-up\" the RL policies using offline data with imitation\nlearning or the recently emerged offline reinforcement learning algorithms.\nHowever, existing works often treat offline policy learning and online\nexploration as two separate processes, which are often accompanied by severe\nperformance drop during the offline-to-online transition. Furthermore, many\nrobotic manipulation tasks involve complex sub-task structures, which are very\nchallenging to be solved in RL with sparse reward. In this work, we propose a\nunified offline-to-online RL framework that resolves the transition performance\ndrop issue. Additionally, we introduce goal-aware state information to the RL\nagent, which can greatly reduce task complexity and accelerate policy learning.\nCombined with an advanced unsupervised representation learning module, our\nframework achieves great training efficiency and performance compared with the\nstate-of-the-art methods in multiple robotic manipulation tasks.", + "authors": "Jin Li, Xianyuan Zhan, Zixu Xiao, Guyue Zhou", + "published": "2021-10-21", + "updated": "2021-10-21", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.01474v1", + "title": "Offline Reinforcement Learning with Causal Structured World Models", + "abstract": "Model-based methods have recently shown promising for offline reinforcement\nlearning (RL), aiming to learn good policies from historical data without\ninteracting with the environment. 
Previous model-based offline RL methods learn\nfully connected nets as world-models that map the states and actions to the\nnext-step states. However, it is sensible that a world-model should adhere to\nthe underlying causal effect such that it will support learning an effective\npolicy generalizing well in unseen states. In this paper, We first provide\ntheoretical results that causal world-models can outperform plain world-models\nfor offline RL by incorporating the causal structure into the generalization\nerror bound. We then propose a practical algorithm, oFfline mOdel-based\nreinforcement learning with CaUsal Structure (FOCUS), to illustrate the\nfeasibility of learning and leveraging causal structure in offline RL.\nExperimental results on two benchmarks show that FOCUS reconstructs the\nunderlying causal structure accurately and robustly. Consequently, it performs\nbetter than the plain model-based offline RL algorithms and other causal\nmodel-based RL algorithms.", + "authors": "Zheng-Mao Zhu, Xiong-Hui Chen, Hong-Long Tian, Kun Zhang, Yang Yu", + "published": "2022-06-03", + "updated": "2022-06-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2309.16973v1", + "title": "Towards Robust Offline-to-Online Reinforcement Learning via Uncertainty and Smoothness", + "abstract": "To obtain a near-optimal policy with fewer interactions in Reinforcement\nLearning (RL), a promising approach involves the combination of offline RL,\nwhich enhances sample efficiency by leveraging offline datasets, and online RL,\nwhich explores informative transitions by interacting with the environment.\nOffline-to-Online (O2O) RL provides a paradigm for improving an offline trained\nagent within limited online interactions. However, due to the significant\ndistribution shift between online experiences and offline data, most offline RL\nalgorithms suffer from performance drops and fail to achieve stable policy\nimprovement in O2O adaptation. To address this problem, we propose the Robust\nOffline-to-Online (RO2O) algorithm, designed to enhance offline policies\nthrough uncertainty and smoothness, and to mitigate the performance drop in\nonline adaptation. Specifically, RO2O incorporates Q-ensemble for uncertainty\npenalty and adversarial samples for policy and value smoothness, which enable\nRO2O to maintain a consistent learning procedure in online adaptation without\nrequiring special changes to the learning objective. Theoretical analyses in\nlinear MDPs demonstrate that the uncertainty and smoothness lead to a tighter\noptimality bound in O2O against distribution shift. Experimental results\nillustrate the superiority of RO2O in facilitating stable offline-to-online\nlearning and achieving significant improvement with limited online\ninteractions.", + "authors": "Xiaoyu Wen, Xudong Yu, Rui Yang, Chenjia Bai, Zhen Wang", + "published": "2023-09-29", + "updated": "2023-09-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2311.03351v4", + "title": "Uni-O4: Unifying Online and Offline Deep Reinforcement Learning with Multi-Step On-Policy Optimization", + "abstract": "Combining offline and online reinforcement learning (RL) is crucial for\nefficient and safe learning. 
However, previous approaches treat offline and\nonline learning as separate procedures, resulting in redundant designs and\nlimited performance. We ask: Can we achieve straightforward yet effective\noffline and online learning without introducing extra conservatism or\nregularization? In this study, we propose Uni-o4, which utilizes an on-policy\nobjective for both offline and online learning. Owning to the alignment of\nobjectives in two phases, the RL agent can transfer between offline and online\nlearning seamlessly. This property enhances the flexibility of the learning\nparadigm, allowing for arbitrary combinations of pretraining, fine-tuning,\noffline, and online learning. In the offline phase, specifically, Uni-o4\nleverages diverse ensemble policies to address the mismatch issues between the\nestimated behavior policy and the offline dataset. Through a simple offline\npolicy evaluation (OPE) approach, Uni-o4 can achieve multi-step policy\nimprovement safely. We demonstrate that by employing the method above, the\nfusion of these two paradigms can yield superior offline initialization as well\nas stable and rapid online fine-tuning capabilities. Through real-world robot\ntasks, we highlight the benefits of this paradigm for rapid deployment in\nchallenging, previously unseen real-world environments. Additionally, through\ncomprehensive evaluations using numerous simulated benchmarks, we substantiate\nthat our method achieves state-of-the-art performance in both offline and\noffline-to-online fine-tuning learning. Our website:\nhttps://lei-kun.github.io/uni-o4/ .", + "authors": "Kun Lei, Zhengmao He, Chenhao Lu, Kaizhe Hu, Yang Gao, Huazhe Xu", + "published": "2023-11-06", + "updated": "2024-03-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2310.05422v1", + "title": "Reward-Consistent Dynamics Models are Strongly Generalizable for Offline Reinforcement Learning", + "abstract": "Learning a precise dynamics model can be crucial for offline reinforcement\nlearning, which, unfortunately, has been found to be quite challenging.\nDynamics models that are learned by fitting historical transitions often\nstruggle to generalize to unseen transitions. In this study, we identify a\nhidden but pivotal factor termed dynamics reward that remains consistent across\ntransitions, offering a pathway to better generalization. Therefore, we propose\nthe idea of reward-consistent dynamics models: any trajectory generated by the\ndynamics model should maximize the dynamics reward derived from the data. We\nimplement this idea as the MOREC (Model-based Offline reinforcement learning\nwith Reward Consistency) method, which can be seamlessly integrated into\nprevious offline model-based reinforcement learning (MBRL) methods. MOREC\nlearns a generalizable dynamics reward function from offline data, which is\nsubsequently employed as a transition filter in any offline MBRL method: when\ngenerating transitions, the dynamics model generates a batch of transitions and\nselects the one with the highest dynamics reward value. On a synthetic task, we\nvisualize that MOREC has a strong generalization ability and can surprisingly\nrecover some distant unseen transitions. On 21 offline tasks in D4RL and NeoRL\nbenchmarks, MOREC improves the previous state-of-the-art performance by a\nsignificant margin, i.e., 4.6% on D4RL tasks and 25.9% on NeoRL tasks. 
Notably,\nMOREC is the first method that can achieve above 95% online RL performance in 6\nout of 12 D4RL tasks and 3 out of 9 NeoRL tasks.", + "authors": "Fan-Ming Luo, Tian Xu, Xingchen Cao, Yang Yu", + "published": "2023-10-09", + "updated": "2023-10-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.13412v2", + "title": "CLUE: Calibrated Latent Guidance for Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) aims to learn an optimal policy from\npre-collected and labeled datasets, which eliminates the time-consuming data\ncollection in online RL. However, offline RL still bears a large burden of\nspecifying/handcrafting extrinsic rewards for each transition in the offline\ndata. As a remedy for the labor-intensive labeling, we propose to endow offline\nRL tasks with a few expert data and utilize the limited expert data to drive\nintrinsic rewards, thus eliminating the need for extrinsic rewards. To achieve\nthat, we introduce \\textbf{C}alibrated \\textbf{L}atent\ng\\textbf{U}idanc\\textbf{E} (CLUE), which utilizes a conditional variational\nauto-encoder to learn a latent space such that intrinsic rewards can be\ndirectly qualified over the latent space. CLUE's key idea is to align the\nintrinsic rewards consistent with the expert intention via enforcing the\nembeddings of expert data to a calibrated contextual representation. We\ninstantiate the expert-driven intrinsic rewards in sparse-reward offline RL\ntasks, offline imitation learning (IL) tasks, and unsupervised offline RL\ntasks. Empirically, we find that CLUE can effectively improve the sparse-reward\noffline RL performance, outperform the state-of-the-art offline IL baselines,\nand discover diverse skills from static reward-free offline data.", + "authors": "Jinxin Liu, Lipeng Zu, Li He, Donglin Wang", + "published": "2023-06-23", + "updated": "2023-10-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1907.04543v4", + "title": "An Optimistic Perspective on Offline Reinforcement Learning", + "abstract": "Off-policy reinforcement learning (RL) using a fixed offline dataset of\nlogged interactions is an important consideration in real world applications.\nThis paper studies offline RL using the DQN replay dataset comprising the\nentire replay experience of a DQN agent on 60 Atari 2600 games. We demonstrate\nthat recent off-policy deep RL algorithms, even when trained solely on this\nfixed dataset, outperform the fully trained DQN agent. To enhance\ngeneralization in the offline setting, we present Random Ensemble Mixture\n(REM), a robust Q-learning algorithm that enforces optimal Bellman consistency\non random convex combinations of multiple Q-value estimates. Offline REM\ntrained on the DQN replay dataset surpasses strong RL baselines. Ablation\nstudies highlight the role of offline dataset size and diversity as well as the\nalgorithm choice in our positive results. Overall, the results here present an\noptimistic view that robust RL algorithms trained on sufficiently large and\ndiverse offline datasets can lead to high quality policies. 
The DQN replay\ndataset can serve as an offline RL benchmark and is open-sourced.", + "authors": "Rishabh Agarwal, Dale Schuurmans, Mohammad Norouzi", + "published": "2019-07-10", + "updated": "2020-06-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.05440v1", + "title": "Dealing with the Unknown: Pessimistic Offline Reinforcement Learning", + "abstract": "Reinforcement Learning (RL) has been shown effective in domains where the\nagent can learn policies by actively interacting with its operating\nenvironment. However, if we change the RL scheme to offline setting where the\nagent can only update its policy via static datasets, one of the major issues\nin offline reinforcement learning emerges, i.e. distributional shift. We\npropose a Pessimistic Offline Reinforcement Learning (PessORL) algorithm to\nactively lead the agent back to the area where it is familiar by manipulating\nthe value function. We focus on problems caused by out-of-distribution (OOD)\nstates, and deliberately penalize high values at states that are absent in the\ntraining dataset, so that the learned pessimistic value function lower bounds\nthe true value anywhere within the state space. We evaluate the PessORL\nalgorithm on various benchmark tasks, where we show that our method gains\nbetter performance by explicitly handling OOD states, when compared to those\nmethods merely considering OOD actions.", + "authors": "Jinning Li, Chen Tang, Masayoshi Tomizuka, Wei Zhan", + "published": "2021-11-09", + "updated": "2021-11-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2107.06106v2", + "title": "Conservative Offline Distributional Reinforcement Learning", + "abstract": "Many reinforcement learning (RL) problems in practice are offline, learning\npurely from observational data. A key challenge is how to ensure the learned\npolicy is safe, which requires quantifying the risk associated with different\nactions. In the online setting, distributional RL algorithms do so by learning\nthe distribution over returns (i.e., cumulative rewards) instead of the\nexpected return; beyond quantifying risk, they have also been shown to learn\nbetter representations for planning. We propose Conservative Offline\nDistributional Actor Critic (CODAC), an offline RL algorithm suitable for both\nrisk-neutral and risk-averse domains. CODAC adapts distributional RL to the\noffline setting by penalizing the predicted quantiles of the return for\nout-of-distribution actions. We prove that CODAC learns a conservative return\ndistribution -- in particular, for finite MDPs, CODAC converges to an uniform\nlower bound on the quantiles of the return distribution; our proof relies on a\nnovel analysis of the distributional Bellman operator. 
In our experiments, on\ntwo challenging robot navigation tasks, CODAC successfully learns risk-averse\npolicies using offline data collected purely from risk-neutral agents.\nFurthermore, CODAC is state-of-the-art on the D4RL MuJoCo benchmark in terms of\nboth expected and risk-sensitive performance.", + "authors": "Yecheng Jason Ma, Dinesh Jayaraman, Osbert Bastani", + "published": "2021-07-12", + "updated": "2021-10-26", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + } + ] + ] + }, + { + "url": "http://arxiv.org/abs/2305.18290v2", + "title": "Direct Preference Optimization: Your Language Model is Secretly a Reward Model", + "abstract": "While large-scale unsupervised language models (LMs) learn broad world\nknowledge and some reasoning skills, achieving precise control of their\nbehavior is difficult due to the completely unsupervised nature of their\ntraining. Existing methods for gaining such steerability collect human labels\nof the relative quality of model generations and fine-tune the unsupervised LM\nto align with these preferences, often with reinforcement learning from human\nfeedback (RLHF). However, RLHF is a complex and often unstable procedure, first\nfitting a reward model that reflects the human preferences, and then\nfine-tuning the large unsupervised LM using reinforcement learning to maximize\nthis estimated reward without drifting too far from the original model. In this\npaper we introduce a new parameterization of the reward model in RLHF that\nenables extraction of the corresponding optimal policy in closed form, allowing\nus to solve the standard RLHF problem with only a simple classification loss.\nThe resulting algorithm, which we call Direct Preference Optimization (DPO), is\nstable, performant, and computationally lightweight, eliminating the need for\nsampling from the LM during fine-tuning or performing significant\nhyperparameter tuning. Our experiments show that DPO can fine-tune LMs to align\nwith human preferences as well as or better than existing methods. Notably,\nfine-tuning with DPO exceeds PPO-based RLHF in ability to control sentiment of\ngenerations, and matches or improves response quality in summarization and\nsingle-turn dialogue while being substantially simpler to implement and train.", + "authors": "Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D. Manning, Chelsea Finn", + "published": "2023-05-29", + "updated": "2023-12-13", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2203.02155v1", + "title": "Training language models to follow instructions with human feedback", + "abstract": "Making language models bigger does not inherently make them better at\nfollowing a user's intent. For example, large language models can generate\noutputs that are untruthful, toxic, or simply not helpful to the user. In other\nwords, these models are not aligned with their users. In this paper, we show an\navenue for aligning language models with user intent on a wide range of tasks\nby fine-tuning with human feedback. Starting with a set of labeler-written\nprompts and prompts submitted through the OpenAI API, we collect a dataset of\nlabeler demonstrations of the desired model behavior, which we use to fine-tune\nGPT-3 using supervised learning. 
We then collect a dataset of rankings of model\noutputs, which we use to further fine-tune this supervised model using\nreinforcement learning from human feedback. We call the resulting models\nInstructGPT. In human evaluations on our prompt distribution, outputs from the\n1.3B parameter InstructGPT model are preferred to outputs from the 175B GPT-3,\ndespite having 100x fewer parameters. Moreover, InstructGPT models show\nimprovements in truthfulness and reductions in toxic output generation while\nhaving minimal performance regressions on public NLP datasets. Even though\nInstructGPT still makes simple mistakes, our results show that fine-tuning with\nhuman feedback is a promising direction for aligning language models with human\nintent.", + "authors": "Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan Lowe", + "published": "2022-03-04", + "updated": "2022-03-04", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1502.05477v5", + "title": "Trust Region Policy Optimization", + "abstract": "We describe an iterative procedure for optimizing policies, with guaranteed\nmonotonic improvement. By making several approximations to the\ntheoretically-justified procedure, we develop a practical algorithm, called\nTrust Region Policy Optimization (TRPO). This algorithm is similar to natural\npolicy gradient methods and is effective for optimizing large nonlinear\npolicies such as neural networks. Our experiments demonstrate its robust\nperformance on a wide variety of tasks: learning simulated robotic swimming,\nhopping, and walking gaits; and playing Atari games using images of the screen\nas input. Despite its approximations that deviate from the theory, TRPO tends\nto give monotonic improvement, with little tuning of hyperparameters.", + "authors": "John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel", + "published": "2015-02-19", + "updated": "2017-04-20", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1707.06347v2", + "title": "Proximal Policy Optimization Algorithms", + "abstract": "We propose a new family of policy gradient methods for reinforcement\nlearning, which alternate between sampling data through interaction with the\nenvironment, and optimizing a \"surrogate\" objective function using stochastic\ngradient ascent. Whereas standard policy gradient methods perform one gradient\nupdate per data sample, we propose a novel objective function that enables\nmultiple epochs of minibatch updates. The new methods, which we call proximal\npolicy optimization (PPO), have some of the benefits of trust region policy\noptimization (TRPO), but they are much simpler to implement, more general, and\nhave better sample complexity (empirically). 
Our experiments test PPO on a\ncollection of benchmark tasks, including simulated robotic locomotion and Atari\ngame playing, and we show that PPO outperforms other online policy gradient\nmethods, and overall strikes a favorable balance between sample complexity,\nsimplicity, and wall-time.", + "authors": "John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov", + "published": "2017-07-20", + "updated": "2017-08-28", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2404.14367v2", + "title": "Preference Fine-Tuning of LLMs Should Leverage Suboptimal, On-Policy Data", + "abstract": "Learning from preference labels plays a crucial role in fine-tuning large\nlanguage models. There are several distinct approaches for preference\nfine-tuning, including supervised learning, on-policy reinforcement learning\n(RL), and contrastive learning. Different methods come with different\nimplementation tradeoffs and performance differences, and existing empirical\nfindings present different conclusions, for instance, some results show that\nonline RL is quite important to attain good fine-tuning results, while others\nfind (offline) contrastive or even purely supervised methods sufficient. This\nraises a natural question: what kind of approaches are important for\nfine-tuning with preference data and why? In this paper, we answer this\nquestion by performing a rigorous analysis of a number of fine-tuning\ntechniques on didactic and full-scale LLM problems. Our main finding is that,\nin general, approaches that use on-policy sampling or attempt to push down the\nlikelihood on certain responses (i.e., employ a \"negative gradient\") outperform\noffline and maximum likelihood objectives. We conceptualize our insights and\nunify methods that use on-policy sampling or negative gradient under a notion\nof mode-seeking objectives for categorical distributions. Mode-seeking\nobjectives are able to alter probability mass on specific bins of a categorical\ndistribution at a fast rate compared to maximum likelihood, allowing them to\nrelocate masses across bins more effectively. Our analysis prescribes\nactionable insights for preference fine-tuning of LLMs and informs how data\nshould be collected for maximal improvement.", + "authors": "Fahim Tajwar, Anikait Singh, Archit Sharma, Rafael Rafailov, Jeff Schneider, Tengyang Xie, Stefano Ermon, Chelsea Finn, Aviral Kumar", + "published": "2024-04-22", + "updated": "2024-04-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2402.08848v1", + "title": "Hybrid Inverse Reinforcement Learning", + "abstract": "The inverse reinforcement learning approach to imitation learning is a\ndouble-edged sword. On the one hand, it can enable learning from a smaller\nnumber of expert demonstrations with more robustness to error compounding than\nbehavioral cloning approaches. On the other hand, it requires that the learner\nrepeatedly solve a computationally expensive reinforcement learning (RL)\nproblem. Often, much of this computation is wasted searching over policies very\ndissimilar to the expert's. In this work, we propose using hybrid RL --\ntraining on a mixture of online and expert data -- to curtail unnecessary\nexploration. Intuitively, the expert data focuses the learner on good states\nduring training, which reduces the amount of exploration required to compute a\nstrong policy. 
Notably, such an approach doesn't need the ability to reset the\nlearner to arbitrary states in the environment, a requirement of prior work in\nefficient inverse RL. More formally, we derive a reduction from inverse RL to\nexpert-competitive RL (rather than globally optimal RL) that allows us to\ndramatically reduce interaction during the inner policy search loop while\nmaintaining the benefits of the IRL approach. This allows us to derive both\nmodel-free and model-based hybrid inverse RL algorithms with strong policy\nperformance guarantees. Empirically, we find that our approaches are\nsignificantly more sample efficient than standard inverse RL and several other\nbaselines on a suite of continuous control tasks.", + "authors": "Juntao Ren, Gokul Swamy, Zhiwei Steven Wu, J. Andrew Bagnell, Sanjiban Choudhury", + "published": "2024-02-13", + "updated": "2024-02-13", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2103.03236v2", + "title": "Of Moments and Matching: A Game-Theoretic Framework for Closing the Imitation Gap", + "abstract": "We provide a unifying view of a large family of previous imitation learning\nalgorithms through the lens of moment matching. At its core, our classification\nscheme is based on whether the learner attempts to match (1) reward or (2)\naction-value moments of the expert's behavior, with each option leading to\ndiffering algorithmic approaches. By considering adversarially chosen\ndivergences between learner and expert behavior, we are able to derive bounds\non policy performance that apply for all algorithms in each of these classes,\nthe first to our knowledge. We also introduce the notion of moment\nrecoverability, implicit in many previous analyses of imitation learning, which\nallows us to cleanly delineate how well each algorithmic family is able to\nmitigate compounding errors. We derive three novel algorithm templates (AdVIL,\nAdRIL, and DAeQuIL) with strong guarantees, simple implementation, and\ncompetitive empirical performance.", + "authors": "Gokul Swamy, Sanjiban Choudhury, J. Andrew Bagnell, Zhiwei Steven Wu", + "published": "2021-03-04", + "updated": "2021-06-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2404.03715v1", + "title": "Direct Nash Optimization: Teaching Language Models to Self-Improve with General Preferences", + "abstract": "This paper studies post-training large language models (LLMs) using\npreference feedback from a powerful oracle to help a model iteratively improve\nover itself. The typical approach for post-training LLMs involves Reinforcement\nLearning from Human Feedback (RLHF), which traditionally separates reward\nlearning and subsequent policy optimization. However, such a reward\nmaximization approach is limited by the nature of \"point-wise\" rewards (such as\nBradley-Terry model), which fails to express complex intransitive or cyclic\npreference relations. While advances on RLHF show reward learning and policy\noptimization can be merged into a single contrastive objective for stability,\nthey yet still remain tethered to the reward maximization framework. Recently,\na new wave of research sidesteps the reward maximization presumptions in favor\nof directly optimizing over \"pair-wise\" or general preferences. 
In this paper,\nwe introduce Direct Nash Optimization (DNO), a provable and scalable algorithm\nthat marries the simplicity and stability of contrastive learning with\ntheoretical generality from optimizing general preferences. Because DNO is a\nbatched on-policy algorithm using a regression-based objective, its\nimplementation is straightforward and efficient. Moreover, DNO enjoys monotonic\nimprovement across iterations that help it improve even over a strong teacher\n(such as GPT-4). In our experiments, a resulting 7B parameter Orca-2.5 model\naligned by DNO achieves the state-of-the-art win-rate against GPT-4-Turbo of\n33% on AlpacaEval 2.0 (even after controlling for response length), an absolute\ngain of 26% (7% to 33%) over the initializing model. It outperforms models with\nfar more parameters, including Mistral Large, Self-Rewarding LM (70B\nparameters), and older versions of GPT-4.", + "authors": "Corby Rosset, Ching-An Cheng, Arindam Mitra, Michael Santacroce, Ahmed Awadallah, Tengyang Xie", + "published": "2024-04-04", + "updated": "2024-04-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2404.14367v2", + "title": "Preference Fine-Tuning of LLMs Should Leverage Suboptimal, On-Policy Data", + "abstract": "Learning from preference labels plays a crucial role in fine-tuning large\nlanguage models. There are several distinct approaches for preference\nfine-tuning, including supervised learning, on-policy reinforcement learning\n(RL), and contrastive learning. Different methods come with different\nimplementation tradeoffs and performance differences, and existing empirical\nfindings present different conclusions, for instance, some results show that\nonline RL is quite important to attain good fine-tuning results, while others\nfind (offline) contrastive or even purely supervised methods sufficient. This\nraises a natural question: what kind of approaches are important for\nfine-tuning with preference data and why? In this paper, we answer this\nquestion by performing a rigorous analysis of a number of fine-tuning\ntechniques on didactic and full-scale LLM problems. Our main finding is that,\nin general, approaches that use on-policy sampling or attempt to push down the\nlikelihood on certain responses (i.e., employ a \"negative gradient\") outperform\noffline and maximum likelihood objectives. We conceptualize our insights and\nunify methods that use on-policy sampling or negative gradient under a notion\nof mode-seeking objectives for categorical distributions. Mode-seeking\nobjectives are able to alter probability mass on specific bins of a categorical\ndistribution at a fast rate compared to maximum likelihood, allowing them to\nrelocate masses across bins more effectively. 
Our analysis prescribes\nactionable insights for preference fine-tuning of LLMs and informs how data\nshould be collected for maximal improvement.", + "authors": "Fahim Tajwar, Anikait Singh, Archit Sharma, Rafael Rafailov, Jeff Schneider, Tengyang Xie, Stefano Ermon, Chelsea Finn, Aviral Kumar", + "published": "2024-04-22", + "updated": "2024-04-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2007.08459v2", + "title": "PC-PG: Policy Cover Directed Exploration for Provable Policy Gradient Learning", + "abstract": "Direct policy gradient methods for reinforcement learning are a successful\napproach for a variety of reasons: they are model free, they directly optimize\nthe performance metric of interest, and they allow for richly parameterized\npolicies. Their primary drawback is that, by being local in nature, they fail\nto adequately explore the environment. In contrast, while model-based\napproaches and Q-learning directly handle exploration through the use of\noptimism, their ability to handle model misspecification and function\napproximation is far less evident. This work introduces the the Policy\nCover-Policy Gradient (PC-PG) algorithm, which provably balances the\nexploration vs. exploitation tradeoff using an ensemble of learned policies\n(the policy cover). PC-PG enjoys polynomial sample complexity and run time for\nboth tabular MDPs and, more generally, linear MDPs in an infinite dimensional\nRKHS. Furthermore, PC-PG also has strong guarantees under model\nmisspecification that go beyond the standard worst case $\\ell_{\\infty}$\nassumptions; this includes approximation guarantees for state aggregation under\nan average case error assumption, along with guarantees under a more general\nassumption where the approximation error under distribution shift is\ncontrolled. We complement the theory with empirical evaluation across a variety\nof domains in both reward-free and reward-driven settings.", + "authors": "Alekh Agarwal, Mikael Henaff, Sham Kakade, Wen Sun", + "published": "2020-07-16", + "updated": "2020-08-13", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2302.12192v1", + "title": "Aligning Text-to-Image Models using Human Feedback", + "abstract": "Deep generative models have shown impressive results in text-to-image\nsynthesis. However, current text-to-image models often generate images that are\ninadequately aligned with text prompts. We propose a fine-tuning method for\naligning such models using human feedback, comprising three stages. First, we\ncollect human feedback assessing model output alignment from a set of diverse\ntext prompts. We then use the human-labeled image-text dataset to train a\nreward function that predicts human feedback. Lastly, the text-to-image model\nis fine-tuned by maximizing reward-weighted likelihood to improve image-text\nalignment. Our method generates objects with specified colors, counts and\nbackgrounds more accurately than the pre-trained model. We also analyze several\ndesign choices and find that careful investigations on such design choices are\nimportant in balancing the alignment-fidelity tradeoffs. 
Our results\ndemonstrate the potential for learning from human feedback to significantly\nimprove text-to-image models.", + "authors": "Kimin Lee, Hao Liu, Moonkyung Ryu, Olivia Watkins, Yuqing Du, Craig Boutilier, Pieter Abbeel, Mohammad Ghavamzadeh, Shixiang Shane Gu", + "published": "2023-02-23", + "updated": "2023-02-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2312.11456v4", + "title": "Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-Constraint", + "abstract": "This paper studies the alignment process of generative models with\nReinforcement Learning from Human Feedback (RLHF). We first identify the\nprimary challenges of existing popular methods like offline PPO and offline DPO\nas lacking in strategical exploration of the environment. Then, to understand\nthe mathematical principle of RLHF, we consider a standard mathematical\nformulation, the reverse-KL regularized contextual bandit for RLHF. Despite its\nwidespread practical application, a rigorous theoretical analysis of this\nformulation remains open. We investigate its behavior in three distinct\nsettings -- offline, online, and hybrid -- and propose efficient algorithms\nwith finite-sample theoretical guarantees.\n Moving towards practical applications, our framework, with a robust\napproximation of the information-theoretical policy improvement oracle,\nnaturally gives rise to several novel RLHF algorithms. This includes an\niterative version of the Direct Preference Optimization (DPO) algorithm for\nonline settings, and a multi-step rejection sampling strategy for offline\nscenarios. Our empirical evaluations on real-world alignment experiment of\nlarge language model demonstrate that these proposed methods significantly\nsurpass existing strong baselines, such as DPO and Rejection Sampling\nOptimization (RSO), showcasing the connections between solid theoretical\nfoundations and their potent practical implementations.", + "authors": "Wei Xiong, Hanze Dong, Chenlu Ye, Ziqi Wang, Han Zhong, Heng Ji, Nan Jiang, Tong Zhang", + "published": "2023-12-18", + "updated": "2024-05-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2401.04056v1", + "title": "A Minimaximalist Approach to Reinforcement Learning from Human Feedback", + "abstract": "We present Self-Play Preference Optimization (SPO), an algorithm for\nreinforcement learning from human feedback. Our approach is minimalist in that\nit does not require training a reward model nor unstable adversarial training\nand is therefore rather simple to implement. Our approach is maximalist in that\nit provably handles non-Markovian, intransitive, and stochastic preferences\nwhile being robust to the compounding errors that plague offline approaches to\nsequential prediction. To achieve the preceding qualities, we build upon the\nconcept of a Minimax Winner (MW), a notion of preference aggregation from the\nsocial choice theory literature that frames learning from preferences as a\nzero-sum game between two policies. By leveraging the symmetry of this game, we\nprove that rather than using the traditional technique of dueling two policies\nto compute the MW, we can simply have a single agent play against itself while\nmaintaining strong convergence guarantees. 
Practically, this corresponds to\nsampling multiple trajectories from a policy, asking a rater or preference\nmodel to compare them, and then using the proportion of wins as the reward for\na particular trajectory. We demonstrate that on a suite of continuous control\ntasks, we are able to learn significantly more efficiently than reward-model\nbased approaches while maintaining robustness to the intransitive and\nstochastic preferences that frequently occur in practice when aggregating human\njudgments.", + "authors": "Gokul Swamy, Christoph Dann, Rahul Kidambi, Zhiwei Steven Wu, Alekh Agarwal", + "published": "2024-01-08", + "updated": "2024-01-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2311.08384v1", + "title": "Offline Data Enhanced On-Policy Policy Gradient with Provable Guarantees", + "abstract": "Hybrid RL is the setting where an RL agent has access to both offline data\nand online data by interacting with the real-world environment. In this work,\nwe propose a new hybrid RL algorithm that combines an on-policy actor-critic\nmethod with offline data. On-policy methods such as policy gradient and natural\npolicy gradient (NPG) have shown to be more robust to model misspecification,\nthough sometimes it may not be as sample efficient as methods that rely on\noff-policy learning. On the other hand, offline methods that depend on\noff-policy training often require strong assumptions in theory and are less\nstable to train in practice. Our new approach integrates a procedure of\noff-policy training on the offline data into an on-policy NPG framework. We\nshow that our approach, in theory, can obtain a best-of-both-worlds type of\nresult -- it achieves the state-of-art theoretical guarantees of offline RL\nwhen offline RL-specific assumptions hold, while at the same time maintaining\nthe theoretical guarantees of on-policy NPG regardless of the offline RL\nassumptions' validity. Experimentally, in challenging rich-observation\nenvironments, we show that our approach outperforms a state-of-the-art hybrid\nRL baseline which only relies on off-policy policy optimization, demonstrating\nthe empirical benefit of combining on-policy and off-policy learning. Our code\nis publicly available at https://github.com/YifeiZhou02/HNPG.", + "authors": "Yifei Zhou, Ayush Sekhari, Yuda Song, Wen Sun", + "published": "2023-11-14", + "updated": "2023-11-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2010.10436v2", + "title": "VarGrad: A Low-Variance Gradient Estimator for Variational Inference", + "abstract": "We analyse the properties of an unbiased gradient estimator of the ELBO for\nvariational inference, based on the score function method with leave-one-out\ncontrol variates. We show that this gradient estimator can be obtained using a\nnew loss, defined as the variance of the log-ratio between the exact posterior\nand the variational approximation, which we call the $\\textit{log-variance\nloss}$. Under certain conditions, the gradient of the log-variance loss equals\nthe gradient of the (negative) ELBO. We show theoretically that this gradient\nestimator, which we call $\\textit{VarGrad}$ due to its connection to the\nlog-variance loss, exhibits lower variance than the score function method in\ncertain settings, and that the leave-one-out control variate coefficients are\nclose to the optimal ones. 
We empirically demonstrate that VarGrad offers a\nfavourable variance versus computation trade-off compared to other\nstate-of-the-art estimators on a discrete VAE.", + "authors": "Lorenz Richter, Ayman Boustati, Nikolas N\u00fcsken, Francisco J. R. Ruiz, \u00d6mer Deniz Akyildiz", + "published": "2020-10-20", + "updated": "2020-10-29", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG", + "math.ST", + "stat.TH" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2210.06718v3", + "title": "Hybrid RL: Using Both Offline and Online Data Can Make RL Efficient", + "abstract": "We consider a hybrid reinforcement learning setting (Hybrid RL), in which an\nagent has access to an offline dataset and the ability to collect experience\nvia real-world online interaction. The framework mitigates the challenges that\narise in both pure offline and online RL settings, allowing for the design of\nsimple and highly effective algorithms, in both theory and practice. We\ndemonstrate these advantages by adapting the classical Q learning/iteration\nalgorithm to the hybrid setting, which we call Hybrid Q-Learning or Hy-Q. In\nour theoretical results, we prove that the algorithm is both computationally\nand statistically efficient whenever the offline dataset supports a\nhigh-quality policy and the environment has bounded bilinear rank. Notably, we\nrequire no assumptions on the coverage provided by the initial distribution, in\ncontrast with guarantees for policy gradient/iteration methods. In our\nexperimental results, we show that Hy-Q with neural network function\napproximation outperforms state-of-the-art online, offline, and hybrid RL\nbaselines on challenging benchmarks, including Montezuma's Revenge.", + "authors": "Yuda Song, Yifei Zhou, Ayush Sekhari, J. Andrew Bagnell, Akshay Krishnamurthy, Wen Sun", + "published": "2022-10-13", + "updated": "2023-03-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2104.06294v1", + "title": "Online and Offline Reinforcement Learning by Planning with a Learned Model", + "abstract": "Learning efficiently from small amounts of data has long been the focus of\nmodel-based reinforcement learning, both for the online case when interacting\nwith the environment and the offline case when learning from a fixed dataset.\nHowever, to date no single unified algorithm could demonstrate state-of-the-art\nresults in both settings. In this work, we describe the Reanalyse algorithm\nwhich uses model-based policy and value improvement operators to compute new\nimproved training targets on existing data points, allowing efficient learning\nfor data budgets varying by several orders of magnitude. We further show that\nReanalyse can also be used to learn entirely from demonstrations without any\nenvironment interactions, as in the case of offline Reinforcement Learning\n(offline RL). Combining Reanalyse with the MuZero algorithm, we introduce\nMuZero Unplugged, a single unified algorithm for any data budget, including\noffline RL. In contrast to previous work, our algorithm does not require any\nspecial adaptations for the off-policy or offline RL settings. 
MuZero Unplugged\nsets new state-of-the-art results in the RL Unplugged offline RL benchmark as\nwell as in the online RL benchmark of Atari in the standard 200 million frame\nsetting.", + "authors": "Julian Schrittwieser, Thomas Hubert, Amol Mandhane, Mohammadamin Barekatain, Ioannis Antonoglou, David Silver", + "published": "2021-04-13", + "updated": "2021-04-13", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.05440v1", + "title": "Dealing with the Unknown: Pessimistic Offline Reinforcement Learning", + "abstract": "Reinforcement Learning (RL) has been shown effective in domains where the\nagent can learn policies by actively interacting with its operating\nenvironment. However, if we change the RL scheme to offline setting where the\nagent can only update its policy via static datasets, one of the major issues\nin offline reinforcement learning emerges, i.e. distributional shift. We\npropose a Pessimistic Offline Reinforcement Learning (PessORL) algorithm to\nactively lead the agent back to the area where it is familiar by manipulating\nthe value function. We focus on problems caused by out-of-distribution (OOD)\nstates, and deliberately penalize high values at states that are absent in the\ntraining dataset, so that the learned pessimistic value function lower bounds\nthe true value anywhere within the state space. We evaluate the PessORL\nalgorithm on various benchmark tasks, where we show that our method gains\nbetter performance by explicitly handling OOD states, when compared to those\nmethods merely considering OOD actions.", + "authors": "Jinning Li, Chen Tang, Masayoshi Tomizuka, Wei Zhan", + "published": "2021-11-09", + "updated": "2021-11-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2210.00750v2", + "title": "Offline Reinforcement Learning with Differentiable Function Approximation is Provably Efficient", + "abstract": "Offline reinforcement learning, which aims at optimizing sequential\ndecision-making strategies with historical data, has been extensively applied\nin real-life applications. State-Of-The-Art algorithms usually leverage\npowerful function approximators (e.g. neural networks) to alleviate the sample\ncomplexity hurdle for better empirical performances. Despite the successes, a\nmore systematic understanding of the statistical complexity for function\napproximation remains lacking. Towards bridging the gap, we take a step by\nconsidering offline reinforcement learning with differentiable function class\napproximation (DFA). This function class naturally incorporates a wide range of\nmodels with nonlinear/nonconvex structures. Most importantly, we show offline\nRL with differentiable function approximation is provably efficient by\nanalyzing the pessimistic fitted Q-learning (PFQL) algorithm, and our results\nprovide the theoretical basis for understanding a variety of practical\nheuristics that rely on Fitted Q-Iteration style design. In addition, we\nfurther improve our guarantee with a tighter instance-dependent\ncharacterization. 
We hope our work could draw interest in studying\nreinforcement learning with differentiable function approximation beyond the\nscope of current research.", + "authors": "Ming Yin, Mengdi Wang, Yu-Xiang Wang", + "published": "2022-10-03", + "updated": "2022-11-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2212.08232v1", + "title": "Offline Robot Reinforcement Learning with Uncertainty-Guided Human Expert Sampling", + "abstract": "Recent advances in batch (offline) reinforcement learning have shown\npromising results in learning from available offline data and proved offline\nreinforcement learning to be an essential toolkit in learning control policies\nin a model-free setting. An offline reinforcement learning algorithm applied to\na dataset collected by a suboptimal non-learning-based algorithm can result in\na policy that outperforms the behavior agent used to collect the data. Such a\nscenario is frequent in robotics, where existing automation is collecting\noperational data. Although offline learning techniques can learn from data\ngenerated by a sub-optimal behavior agent, there is still an opportunity to\nimprove the sample complexity of existing offline reinforcement learning\nalgorithms by strategically introducing human demonstration data into the\ntraining process. To this end, we propose a novel approach that uses\nuncertainty estimation to trigger the injection of human demonstration data and\nguide policy training towards optimal behavior while reducing overall sample\ncomplexity. Our experiments show that this approach is more sample efficient\nwhen compared to a naive way of combining expert data with data collected from\na sub-optimal agent. We augmented an existing offline reinforcement learning\nalgorithm Conservative Q-Learning with our approach and performed experiments\non data collected from MuJoCo and OffWorld Gym learning environments.", + "authors": "Ashish Kumar, Ilya Kuzovkin", + "published": "2022-12-16", + "updated": "2022-12-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2403.11574v1", + "title": "Offline Multitask Representation Learning for Reinforcement Learning", + "abstract": "We study offline multitask representation learning in reinforcement learning\n(RL), where a learner is provided with an offline dataset from different tasks\nthat share a common representation and is asked to learn the shared\nrepresentation. We theoretically investigate offline multitask low-rank RL, and\npropose a new algorithm called MORL for offline multitask representation\nlearning. Furthermore, we examine downstream RL in reward-free, offline and\nonline scenarios, where a new task is introduced to the agent that shares the\nsame representation as the upstream offline tasks. 
Our theoretical results\ndemonstrate the benefits of using the learned representation from the upstream\noffline task instead of directly learning the representation of the low-rank\nmodel.", + "authors": "Haque Ishfaq, Thanh Nguyen-Tang, Songtao Feng, Raman Arora, Mengdi Wang, Ming Yin, Doina Precup", + "published": "2024-03-18", + "updated": "2024-03-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.02000v1", + "title": "Hybrid Value Estimation for Off-policy Evaluation and Offline Reinforcement Learning", + "abstract": "Value function estimation is an indispensable subroutine in reinforcement\nlearning, which becomes more challenging in the offline setting. In this paper,\nwe propose Hybrid Value Estimation (HVE) to reduce value estimation error,\nwhich trades off bias and variance by balancing between the value estimation\nfrom offline data and the learned model. Theoretical analysis discloses that\nHVE enjoys a better error bound than the direct methods. HVE can be leveraged\nin both off-policy evaluation and offline reinforcement learning settings. We,\ntherefore, provide two concrete algorithms Off-policy HVE (OPHVE) and\nModel-based Offline HVE (MOHVE), respectively. Empirical evaluations on MuJoCo\ntasks corroborate the theoretical claim. OPHVE outperforms other off-policy\nevaluation methods in all three metrics measuring the estimation effectiveness,\nwhile MOHVE achieves better or comparable performance with state-of-the-art\noffline reinforcement learning algorithms. We hope that HVE could shed some\nlight on further research on reinforcement learning from fixed data.", + "authors": "Xue-Kun Jin, Xu-Hui Liu, Shengyi Jiang, Yang Yu", + "published": "2022-06-04", + "updated": "2022-06-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2307.11620v2", + "title": "Offline Multi-Agent Reinforcement Learning with Implicit Global-to-Local Value Regularization", + "abstract": "Offline reinforcement learning (RL) has received considerable attention in\nrecent years due to its attractive capability of learning policies from offline\ndatasets without environmental interactions. Despite some success in the\nsingle-agent setting, offline multi-agent RL (MARL) remains to be a challenge.\nThe large joint state-action space and the coupled multi-agent behaviors pose\nextra complexities for offline policy optimization. Most existing offline MARL\nstudies simply apply offline data-related regularizations on individual agents,\nwithout fully considering the multi-agent system at the global level. In this\nwork, we present OMIGA, a new offline multi-agent RL algorithm with implicit\nglobal-to-local value regularization. OMIGA provides a principled framework to\nconvert global-level value regularization into equivalent implicit local value\nregularizations and simultaneously enables in-sample learning, thus elegantly\nbridging multi-agent value decomposition and policy learning with offline\nregularizations.
Based on comprehensive experiments on the offline multi-agent\nMuJoCo and StarCraft II micro-management tasks, we show that OMIGA achieves\nsuperior performance over the state-of-the-art offline MARL methods in almost\nall tasks.", + "authors": "Xiangsen Wang, Haoran Xu, Yinan Zheng, Xianyuan Zhan", + "published": "2023-07-21", + "updated": "2023-11-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.MA" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2404.10393v1", + "title": "Offline Trajectory Generalization for Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) aims to learn policies from static\ndatasets of previously collected trajectories. Existing methods for offline RL\neither constrain the learned policy to the support of offline data or utilize\nmodel-based virtual environments to generate simulated rollouts. However, these\nmethods suffer from (i) poor generalization to unseen states; and (ii) trivial\nimprovement from low-qualified rollout simulation. In this paper, we propose\noffline trajectory generalization through world transformers for offline\nreinforcement learning (OTTO). Specifically, we use causal Transformers, a.k.a.\nWorld Transformers, to predict state dynamics and the immediate reward. Then we\npropose four strategies to use World Transformers to generate high-rewarded\ntrajectory simulation by perturbing the offline data. Finally, we jointly use\noffline data with simulated data to train an offline RL algorithm. OTTO serves\nas a plug-in module and can be integrated with existing offline RL methods to\nenhance them with better generalization capability of transformers and\nhigh-rewarded data augmentation. Conducting extensive experiments on D4RL\nbenchmark datasets, we verify that OTTO significantly outperforms\nstate-of-the-art offline RL methods.", + "authors": "Ziqi Zhao, Zhaochun Ren, Liu Yang, Fajie Yuan, Pengjie Ren, Zhumin Chen, jun Ma, Xin Xin", + "published": "2024-04-16", + "updated": "2024-04-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2303.07693v1", + "title": "Adaptive Policy Learning for Offline-to-Online Reinforcement Learning", + "abstract": "Conventional reinforcement learning (RL) needs an environment to collect\nfresh data, which is impractical when online interactions are costly. Offline\nRL provides an alternative solution by directly learning from the previously\ncollected dataset. However, it will yield unsatisfactory performance if the\nquality of the offline datasets is poor. In this paper, we consider an\noffline-to-online setting where the agent is first learned from the offline\ndataset and then trained online, and propose a framework called Adaptive Policy\nLearning for effectively taking advantage of offline and online data.\nSpecifically, we explicitly consider the difference between the online and\noffline data and apply an adaptive update scheme accordingly, that is, a\npessimistic update strategy for the offline dataset and an optimistic/greedy\nupdate scheme for the online dataset. Such a simple and effective method\nprovides a way to mix the offline and online RL and achieve the best of both\nworlds.
We further provide two detailed algorithms for implementing the\nframework through embedding value or policy-based RL algorithms into it.\nFinally, we conduct extensive experiments on popular continuous control tasks,\nand results show that our algorithm can learn the expert policy with high\nsample efficiency even when the quality of offline dataset is poor, e.g.,\nrandom dataset.", + "authors": "Han Zheng, Xufang Luo, Pengfei Wei, Xuan Song, Dongsheng Li, Jing Jiang", + "published": "2023-03-14", + "updated": "2023-03-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.08569v2", + "title": "Bootstrapped Transformer for Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) aims at learning policies from previously\ncollected static trajectory data without interacting with the real environment.\nRecent works provide a novel perspective by viewing offline RL as a generic\nsequence generation problem, adopting sequence models such as Transformer\narchitecture to model distributions over trajectories, and repurposing beam\nsearch as a planning algorithm. However, the training datasets utilized in\ngeneral offline RL tasks are quite limited and often suffer from insufficient\ndistribution coverage, which could be harmful to training sequence generation\nmodels yet has not drawn enough attention in the previous works. In this paper,\nwe propose a novel algorithm named Bootstrapped Transformer, which incorporates\nthe idea of bootstrapping and leverages the learned model to self-generate more\noffline data to further boost the sequence model training. We conduct extensive\nexperiments on two offline RL benchmarks and demonstrate that our model can\nlargely remedy the existing offline RL training limitations and beat other\nstrong baseline methods. We also analyze the generated pseudo data and the\nrevealed characteristics may shed some light on offline RL training. The codes\nare available at https://seqml.github.io/bootorl.", + "authors": "Kerong Wang, Hanye Zhao, Xufang Luo, Kan Ren, Weinan Zhang, Dongsheng Li", + "published": "2022-06-17", + "updated": "2022-10-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2305.03360v1", + "title": "A Survey on Offline Model-Based Reinforcement Learning", + "abstract": "Model-based approaches are becoming increasingly popular in the field of\noffline reinforcement learning, with high potential in real-world applications\ndue to the model's capability of thoroughly utilizing the large historical\ndatasets available with supervised learning techniques. This paper presents a\nliterature review of recent work in offline model-based reinforcement learning,\na field that utilizes model-based approaches in offline reinforcement learning.\nThe survey provides a brief overview of the concepts and recent developments in\nboth offline reinforcement learning and model-based reinforcement learning, and\ndiscusses the intersection of the two fields. We then present key relevant\npapers in the field of offline model-based reinforcement learning and discuss\ntheir methods, particularly their approaches in solving the issue of\ndistributional shift, the main problem faced by all current offline model-based\nreinforcement learning methods.
We further discuss key challenges faced by the\nfield, and suggest possible directions for future work.", + "authors": "Haoyang He", + "published": "2023-05-05", + "updated": "2023-05-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.SY", + "eess.SY", + "I.2.6; I.2.8" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2211.08016v1", + "title": "Contextual Transformer for Offline Meta Reinforcement Learning", + "abstract": "The pretrain-finetuning paradigm in large-scale sequence models has made\nsignificant progress in natural language processing and computer vision tasks.\nHowever, such a paradigm is still hindered by several challenges in\nReinforcement Learning (RL), including the lack of self-supervised pretraining\nalgorithms based on offline data and efficient fine-tuning/prompt-tuning over\nunseen downstream tasks. In this work, we explore how prompts can improve\nsequence modeling-based offline reinforcement learning (offline-RL) algorithms.\nFirstly, we propose prompt tuning for offline RL, where a context vector\nsequence is concatenated with the input to guide the conditional policy\ngeneration. As such, we can pretrain a model on the offline dataset with\nself-supervised loss and learn a prompt to guide the policy towards desired\nactions. Secondly, we extend our framework to Meta-RL settings and propose\nContextual Meta Transformer (CMT); CMT leverages the context among different\ntasks as the prompt to improve generalization on unseen tasks. We conduct\nextensive experiments across three different offline-RL settings: offline\nsingle-agent RL on the D4RL dataset, offline Meta-RL on the MuJoCo benchmark,\nand offline MARL on the SMAC benchmark. Superior results validate the strong\nperformance, and generality of our methods.", + "authors": "Runji Lin, Ye Li, Xidong Feng, Zhaowei Zhang, Xian Hong Wu Fung, Haifeng Zhang, Jun Wang, Yali Du, Yaodong Yang", + "published": "2022-11-15", + "updated": "2022-11-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.03383v2", + "title": "On the Role of Discount Factor in Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) enables effective learning from\npreviously collected data without exploration, which shows great promise in\nreal-world applications when exploration is expensive or even infeasible. The\ndiscount factor, $\\gamma$, plays a vital role in improving online RL sample\nefficiency and estimation accuracy, but the role of the discount factor in\noffline RL is not well explored. This paper examines two distinct effects of\n$\\gamma$ in offline RL with theoretical analysis, namely the regularization\neffect and the pessimism effect. On the one hand, $\\gamma$ is a regulator to\ntrade-off optimality with sample efficiency upon existing offline techniques.\nOn the other hand, lower guidance $\\gamma$ can also be seen as a way of\npessimism where we optimize the policy's performance in the worst possible\nmodels. We empirically verify the above theoretical observation with tabular\nMDPs and standard D4RL tasks. 
The results show that the discount factor plays\nan essential role in the performance of offline RL algorithms, both under small\ndata regimes upon existing offline methods and in large data regimes without\nother conservative methods.", + "authors": "Hao Hu, Yiqin Yang, Qianchuan Zhao, Chongjie Zhang", + "published": "2022-06-07", + "updated": "2022-06-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.03788v2", + "title": "d3rlpy: An Offline Deep Reinforcement Learning Library", + "abstract": "In this paper, we introduce d3rlpy, an open-sourced offline deep\nreinforcement learning (RL) library for Python. d3rlpy supports a set of\noffline deep RL algorithms as well as off-policy online algorithms via a fully\ndocumented plug-and-play API. To address a reproducibility issue, we conduct a\nlarge-scale benchmark with D4RL and Atari 2600 dataset to ensure implementation\nquality and provide experimental scripts and full tables of results. The d3rlpy\nsource code can be found on GitHub: \\url{https://github.com/takuseno/d3rlpy}.", + "authors": "Takuma Seno, Michita Imai", + "published": "2021-11-06", + "updated": "2022-12-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.13630v1", + "title": "Offline Skill Graph (OSG): A Framework for Learning and Planning using Offline Reinforcement Learning Skills", + "abstract": "Reinforcement Learning has received wide interest due to its success in\ncompetitive games. Yet, its adoption in everyday applications is limited (e.g.\nindustrial, home, healthcare, etc.). In this paper, we address this limitation\nby presenting a framework for planning over offline skills and solving complex\ntasks in real-world environments. Our framework is comprised of three modules\nthat together enable the agent to learn from previously collected data and\ngeneralize over it to solve long-horizon tasks. We demonstrate our approach by\ntesting it on a robotic arm that is required to solve complex tasks.", + "authors": "Ben-ya Halevy, Yehudit Aperstein, Dotan Di Castro", + "published": "2023-06-23", + "updated": "2023-06-23", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.AI", + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2302.03086v1", + "title": "DITTO: Offline Imitation Learning with World Models", + "abstract": "We propose DITTO, an offline imitation learning algorithm which uses world\nmodels and on-policy reinforcement learning to addresses the problem of\ncovariate shift, without access to an oracle or any additional online\ninteractions. We discuss how world models enable offline, on-policy imitation\nlearning, and propose a simple intrinsic reward defined in the world model\nlatent space that induces imitation learning by reinforcement learning.\nTheoretically, we show that our formulation induces a divergence bound between\nexpert and learner, in turn bounding the difference in reward. 
We test our\nmethod on difficult Atari environments from pixels alone, and achieve\nstate-of-the-art performance in the offline setting.", + "authors": "Branton DeMoss, Paul Duckworth, Nick Hawes, Ingmar Posner", + "published": "2023-02-06", + "updated": "2023-02-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2303.04268v1", + "title": "On the Sample Complexity of Vanilla Model-Based Offline Reinforcement Learning with Dependent Samples", + "abstract": "Offline reinforcement learning (offline RL) considers problems where learning\nis performed using only previously collected samples and is helpful for the\nsettings in which collecting new data is costly or risky. In model-based\noffline RL, the learner performs estimation (or optimization) using a model\nconstructed according to the empirical transition frequencies. We analyze the\nsample complexity of vanilla model-based offline RL with dependent samples in\nthe infinite-horizon discounted-reward setting. In our setting, the samples\nobey the dynamics of the Markov decision process and, consequently, may have\ninterdependencies. Under no assumption of independent samples, we provide a\nhigh-probability, polynomial sample complexity bound for vanilla model-based\noff-policy evaluation that requires partial or uniform coverage. We extend this\nresult to the off-policy optimization under uniform coverage. As a comparison\nto the model-based approach, we analyze the sample complexity of off-policy\nevaluation with vanilla importance sampling in the infinite-horizon setting.\nFinally, we provide an estimator that outperforms the sample-mean estimator for\nalmost deterministic dynamics that are prevalent in reinforcement learning.", + "authors": "Mustafa O. Karabag, Ufuk Topcu", + "published": "2023-03-07", + "updated": "2023-03-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.13464v3", + "title": "When to Trust Your Simulator: Dynamics-Aware Hybrid Offline-and-Online Reinforcement Learning", + "abstract": "Learning effective reinforcement learning (RL) policies to solve real-world\ncomplex tasks can be quite challenging without a high-fidelity simulation\nenvironment. In most cases, we are only given imperfect simulators with\nsimplified dynamics, which inevitably lead to severe sim-to-real gaps in RL\npolicy learning. The recently emerged field of offline RL provides another\npossibility to learn policies directly from pre-collected historical data.\nHowever, to achieve reasonable performance, existing offline RL algorithms need\nimpractically large offline data with sufficient state-action space coverage\nfor training. This brings up a new question: is it possible to combine learning\nfrom limited real data in offline RL and unrestricted exploration through\nimperfect simulators in online RL to address the drawbacks of both approaches?\nIn this study, we propose the Dynamics-Aware Hybrid Offline-and-Online\nReinforcement Learning (H2O) framework to provide an affirmative answer to this\nquestion. H2O introduces a dynamics-aware policy evaluation scheme, which\nadaptively penalizes the Q function learning on simulated state-action pairs\nwith large dynamics gaps, while also simultaneously allowing learning from a\nfixed real-world dataset. 
Through extensive simulation and real-world tasks, as\nwell as theoretical analysis, we demonstrate the superior performance of H2O\nagainst other cross-domain online and offline RL algorithms. H2O provides a\nbrand new hybrid offline-and-online RL paradigm, which can potentially shed\nlight on future RL algorithm design for solving practical real-world tasks.", + "authors": "Haoyi Niu, Shubham Sharma, Yiwen Qiu, Ming Li, Guyue Zhou, Jianming Hu, Xianyuan Zhan", + "published": "2022-06-27", + "updated": "2023-01-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.08331v1", + "title": "Accelerating Offline Reinforcement Learning Application in Real-Time Bidding and Recommendation: Potential Use of Simulation", + "abstract": "In recommender systems (RecSys) and real-time bidding (RTB) for online\nadvertisements, we often try to optimize sequential decision making using\nbandit and reinforcement learning (RL) techniques. In these applications,\noffline reinforcement learning (offline RL) and off-policy evaluation (OPE) are\nbeneficial because they enable safe policy optimization using only logged data\nwithout any risky online interaction. In this position paper, we explore the\npotential of using simulation to accelerate practical research of offline RL\nand OPE, particularly in RecSys and RTB. Specifically, we discuss how\nsimulation can help us conduct empirical research of offline RL and OPE. We\ntake a position to argue that we should effectively use simulations in the\nempirical research of offline RL and OPE. To refute the counterclaim that\nexperiments using only real-world data are preferable, we first point out the\nunderlying risks and reproducibility issue in real-world experiments. Then, we\ndescribe how these issues can be addressed by using simulations. Moreover, we\nshow how to incorporate the benefits of both real-world and simulation-based\nexperiments to defend our position. Finally, we also present an open challenge\nto further facilitate practical research of offline RL and OPE in RecSys and\nRTB, with respect to public simulation platforms. As a possible solution for\nthe issue, we show our ongoing open source project and its potential use case.\nWe believe that building and utilizing simulation-based evaluation platforms\nfor offline RL and OPE will be of great interest and relevance for the RecSys\nand RTB community.", + "authors": "Haruka Kiyohara, Kosuke Kawakami, Yuta Saito", + "published": "2021-09-17", + "updated": "2021-09-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2308.11336v1", + "title": "On the Opportunities and Challenges of Offline Reinforcement Learning for Recommender Systems", + "abstract": "Reinforcement learning serves as a potent tool for modeling dynamic user\ninterests within recommender systems, garnering increasing research attention\nof late. However, a significant drawback persists: its poor data efficiency,\nstemming from its interactive nature. 
The training of reinforcement\nlearning-based recommender systems demands expensive online interactions to\namass adequate trajectories, essential for agents to learn user preferences.\nThis inefficiency renders reinforcement learning-based recommender systems a\nformidable undertaking, necessitating the exploration of potential solutions.\nRecent strides in offline reinforcement learning present a new perspective.\nOffline reinforcement learning empowers agents to glean insights from offline\ndatasets and deploy learned policies in online settings. Given that recommender\nsystems possess extensive offline datasets, the framework of offline\nreinforcement learning aligns seamlessly. Despite being a burgeoning field,\nworks centered on recommender systems utilizing offline reinforcement learning\nremain limited. This survey aims to introduce and delve into offline\nreinforcement learning within recommender systems, offering an inclusive review\nof existing literature in this domain. Furthermore, we strive to underscore\nprevalent challenges, opportunities, and future pathways, poised to propel\nresearch in this evolving field.", + "authors": "Xiaocong Chen, Siyu Wang, Julian McAuley, Dietmar Jannach, Lina Yao", + "published": "2023-08-22", + "updated": "2023-08-22", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2302.00935v3", + "title": "Policy Expansion for Bridging Offline-to-Online Reinforcement Learning", + "abstract": "Pre-training with offline data and online fine-tuning using reinforcement\nlearning is a promising strategy for learning control policies by leveraging\nthe best of both worlds in terms of sample efficiency and performance. One\nnatural approach is to initialize the policy for online learning with the one\ntrained offline. In this work, we introduce a policy expansion scheme for this\ntask. After learning the offline policy, we use it as one candidate policy in a\npolicy set. We then expand the policy set with another policy which will be\nresponsible for further learning. The two policies will be composed in an\nadaptive manner for interacting with the environment. With this approach, the\npolicy previously learned offline is fully retained during online learning,\nthus mitigating the potential issues such as destroying the useful behaviors of\nthe offline policy in the initial stage of online learning while allowing the\noffline policy participate in the exploration naturally in an adaptive manner.\nMoreover, new useful behaviors can potentially be captured by the newly added\npolicy through learning. Experiments are conducted on a number of tasks and the\nresults demonstrate the effectiveness of the proposed approach.", + "authors": "Haichao Zhang, We Xu, Haonan Yu", + "published": "2023-02-02", + "updated": "2023-04-15", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2310.05422v1", + "title": "Reward-Consistent Dynamics Models are Strongly Generalizable for Offline Reinforcement Learning", + "abstract": "Learning a precise dynamics model can be crucial for offline reinforcement\nlearning, which, unfortunately, has been found to be quite challenging.\nDynamics models that are learned by fitting historical transitions often\nstruggle to generalize to unseen transitions. 
In this study, we identify a\nhidden but pivotal factor termed dynamics reward that remains consistent across\ntransitions, offering a pathway to better generalization. Therefore, we propose\nthe idea of reward-consistent dynamics models: any trajectory generated by the\ndynamics model should maximize the dynamics reward derived from the data. We\nimplement this idea as the MOREC (Model-based Offline reinforcement learning\nwith Reward Consistency) method, which can be seamlessly integrated into\nprevious offline model-based reinforcement learning (MBRL) methods. MOREC\nlearns a generalizable dynamics reward function from offline data, which is\nsubsequently employed as a transition filter in any offline MBRL method: when\ngenerating transitions, the dynamics model generates a batch of transitions and\nselects the one with the highest dynamics reward value. On a synthetic task, we\nvisualize that MOREC has a strong generalization ability and can surprisingly\nrecover some distant unseen transitions. On 21 offline tasks in D4RL and NeoRL\nbenchmarks, MOREC improves the previous state-of-the-art performance by a\nsignificant margin, i.e., 4.6% on D4RL tasks and 25.9% on NeoRL tasks. Notably,\nMOREC is the first method that can achieve above 95% online RL performance in 6\nout of 12 D4RL tasks and 3 out of 9 NeoRL tasks.", + "authors": "Fan-Ming Luo, Tian Xu, Xingchen Cao, Yang Yu", + "published": "2023-10-09", + "updated": "2023-10-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2202.11566v1", + "title": "Pessimistic Bootstrapping for Uncertainty-Driven Offline Reinforcement Learning", + "abstract": "Offline Reinforcement Learning (RL) aims to learn policies from previously\ncollected datasets without exploring the environment. Directly applying\noff-policy algorithms to offline RL usually fails due to the extrapolation\nerror caused by the out-of-distribution (OOD) actions. Previous methods tackle\nsuch problem by penalizing the Q-values of OOD actions or constraining the\ntrained policy to be close to the behavior policy. Nevertheless, such methods\ntypically prevent the generalization of value functions beyond the offline data\nand also lack precise characterization of OOD data. In this paper, we propose\nPessimistic Bootstrapping for offline RL (PBRL), a purely uncertainty-driven\noffline algorithm without explicit policy constraints. Specifically, PBRL\nconducts uncertainty quantification via the disagreement of bootstrapped\nQ-functions, and performs pessimistic updates by penalizing the value function\nbased on the estimated uncertainty. To tackle the extrapolating error, we\nfurther propose a novel OOD sampling method. We show that such OOD sampling and\npessimistic bootstrapping yields provable uncertainty quantifier in linear\nMDPs, thus providing the theoretical underpinning for PBRL. 
Extensive\nexperiments on D4RL benchmark show that PBRL has better performance compared to\nthe state-of-the-art algorithms.", + "authors": "Chenjia Bai, Lingxiao Wang, Zhuoran Yang, Zhihong Deng, Animesh Garg, Peng Liu, Zhaoran Wang", + "published": "2022-02-23", + "updated": "2022-02-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2211.08251v1", + "title": "Offline Reinforcement Learning with Adaptive Behavior Regularization", + "abstract": "Offline reinforcement learning (RL) defines a sample-efficient learning\nparadigm, where a policy is learned from static and previously collected\ndatasets without additional interaction with the environment. The major\nobstacle to offline RL is the estimation error arising from evaluating the\nvalue of out-of-distribution actions. To tackle this problem, most existing\noffline RL methods attempt to acquire a policy both ``close\" to the behaviors\ncontained in the dataset and sufficiently improved over them, which requires a\ntrade-off between two possibly conflicting targets. In this paper, we propose a\nnovel approach, which we refer to as adaptive behavior regularization (ABR), to\nbalance this critical trade-off. By simply utilizing a sample-based\nregularization, ABR enables the policy to adaptively adjust its optimization\nobjective between cloning and improving over the policy used to generate the\ndataset. In the evaluation on D4RL datasets, a widely adopted benchmark for\noffline reinforcement learning, ABR can achieve improved or competitive\nperformance compared to existing state-of-the-art algorithms.", + "authors": "Yunfan Zhou, Xijun Li, Qingyu Qu", + "published": "2022-11-15", + "updated": "2022-11-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2312.09844v2", + "title": "Small Dataset, Big Gains: Enhancing Reinforcement Learning by Offline Pre-Training with Model Based Augmentation", + "abstract": "Offline reinforcement learning leverages pre-collected datasets of\ntransitions to train policies. It can serve as effective initialization for\nonline algorithms, enhancing sample efficiency and speeding up convergence.\nHowever, when such datasets are limited in size and quality, offline\npre-training can produce sub-optimal policies and lead to degraded online\nreinforcement learning performance. In this paper we propose a model-based data\naugmentation strategy to maximize the benefits of offline reinforcement\nlearning pre-training and reduce the scale of data needed to be effective. Our\napproach leverages a world model of the environment trained on the offline\ndataset to augment states during offline pre-training. We evaluate our approach\non a variety of MuJoCo robotic tasks and our results show it can jump-start\nonline fine-tuning and substantially reduce - in some cases by an order of\nmagnitude - the required number of environment interactions.", + "authors": "Girolamo Macaluso, Alessandro Sestini, Andrew D. 
Bagdanov", + "published": "2023-12-15", + "updated": "2023-12-19", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.07344v1", + "title": "Measurement Scheduling for ICU Patients with Offline Reinforcement Learning", + "abstract": "Scheduling laboratory tests for ICU patients presents a significant\nchallenge. Studies show that 20-40% of lab tests ordered in the ICU are\nredundant and could be eliminated without compromising patient safety. Prior\nwork has leveraged offline reinforcement learning (Offline-RL) to find optimal\npolicies for ordering lab tests based on patient information. However, new ICU\npatient datasets have since been released, and various advancements have been\nmade in Offline-RL methods. In this study, we first introduce a preprocessing\npipeline for the newly-released MIMIC-IV dataset geared toward time-series\ntasks. We then explore the efficacy of state-of-the-art Offline-RL methods in\nidentifying better policies for ICU patient lab test scheduling. Besides\nassessing methodological performance, we also discuss the overall suitability\nand practicality of using Offline-RL frameworks for scheduling laboratory tests\nin ICU settings.", + "authors": "Zongliang Ji, Anna Goldenberg, Rahul G. Krishnan", + "published": "2024-02-12", + "updated": "2024-02-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2106.06860v2", + "title": "A Minimalist Approach to Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) defines the task of learning from a fixed\nbatch of data. Due to errors in value estimation from out-of-distribution\nactions, most offline RL algorithms take the approach of constraining or\nregularizing the policy with the actions contained in the dataset. Built on\npre-existing RL algorithms, modifications to make an RL algorithm work offline\ncomes at the cost of additional complexity. Offline RL algorithms introduce new\nhyperparameters and often leverage secondary components such as generative\nmodels, while adjusting the underlying RL algorithm. In this paper we aim to\nmake a deep RL algorithm work while making minimal changes. We find that we can\nmatch the performance of state-of-the-art offline RL algorithms by simply\nadding a behavior cloning term to the policy update of an online RL algorithm\nand normalizing the data. The resulting algorithm is a simple to implement and\ntune baseline, while more than halving the overall run time by removing the\nadditional computational overhead of previous methods.", + "authors": "Scott Fujimoto, Shixiang Shane Gu", + "published": "2021-06-12", + "updated": "2021-12-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.10442v1", + "title": "Robust Task Representations for Offline Meta-Reinforcement Learning via Contrastive Learning", + "abstract": "We study offline meta-reinforcement learning, a practical reinforcement\nlearning paradigm that learns from offline data to adapt to new tasks. The\ndistribution of offline data is determined jointly by the behavior policy and\nthe task. 
Existing offline meta-reinforcement learning algorithms cannot\ndistinguish these factors, making task representations unstable to the change\nof behavior policies. To address this problem, we propose a contrastive\nlearning framework for task representations that are robust to the distribution\nmismatch of behavior policies in training and test. We design a bi-level\nencoder structure, use mutual information maximization to formalize task\nrepresentation learning, derive a contrastive learning objective, and introduce\nseveral approaches to approximate the true distribution of negative pairs.\nExperiments on a variety of offline meta-reinforcement learning benchmarks\ndemonstrate the advantages of our method over prior methods, especially on the\ngeneralization to out-of-distribution behavior policies. The code is available\nat https://github.com/PKU-AI-Edge/CORRO.", + "authors": "Haoqi Yuan, Zongqing Lu", + "published": "2022-06-21", + "updated": "2022-06-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2310.08566v1", + "title": "Transformers as Decision Makers: Provable In-Context Reinforcement Learning via Supervised Pretraining", + "abstract": "Large transformer models pretrained on offline reinforcement learning\ndatasets have demonstrated remarkable in-context reinforcement learning (ICRL)\ncapabilities, where they can make good decisions when prompted with interaction\ntrajectories from unseen environments. However, when and how transformers can\nbe trained to perform ICRL have not been theoretically well-understood. In\nparticular, it is unclear which reinforcement-learning algorithms transformers\ncan perform in context, and how distribution mismatch in offline training data\naffects the learned algorithms. This paper provides a theoretical framework\nthat analyzes supervised pretraining for ICRL. This includes two recently\nproposed training methods -- algorithm distillation and decision-pretrained\ntransformers. First, assuming model realizability, we prove the\nsupervised-pretrained transformer will imitate the conditional expectation of\nthe expert algorithm given the observed trajectory. The generalization error\nwill scale with model capacity and a distribution divergence factor between the\nexpert and offline algorithms. Second, we show transformers with ReLU attention\ncan efficiently approximate near-optimal online reinforcement learning\nalgorithms like LinUCB and Thompson sampling for stochastic linear bandits, and\nUCB-VI for tabular Markov decision processes. This provides the first\nquantitative analysis of the ICRL capabilities of transformers pretrained from\noffline trajectories.", + "authors": "Licong Lin, Yu Bai, Song Mei", + "published": "2023-10-12", + "updated": "2023-10-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL", + "math.ST", + "stat.ML", + "stat.TH" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2404.02429v1", + "title": "AD4RL: Autonomous Driving Benchmarks for Offline Reinforcement Learning with Value-based Dataset", + "abstract": "Offline reinforcement learning has emerged as a promising technology by\nenhancing its practicality through the use of pre-collected large datasets.\nDespite its practical benefits, most algorithm development research in offline\nreinforcement learning still relies on game tasks with synthetic datasets. 
To\naddress such limitations, this paper provides autonomous driving datasets and\nbenchmarks for offline reinforcement learning research. We provide 19 datasets,\nincluding real-world human driver's datasets, and seven popular offline\nreinforcement learning algorithms in three realistic driving scenarios. We also\nprovide a unified decision-making process model that can operate effectively\nacross different scenarios, serving as a reference framework in algorithm\ndesign. Our research lays the groundwork for further collaborations in the\ncommunity to explore practical aspects of existing reinforcement learning\nmethods. Dataset and codes can be found in https://sites.google.com/view/ad4rl.", + "authors": "Dongsu Lee, Chanin Eom, Minhae Kwon", + "published": "2024-04-03", + "updated": "2024-04-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2210.13846v1", + "title": "Adaptive Behavior Cloning Regularization for Stable Offline-to-Online Reinforcement Learning", + "abstract": "Offline reinforcement learning, by learning from a fixed dataset, makes it\npossible to learn agent behaviors without interacting with the environment.\nHowever, depending on the quality of the offline dataset, such pre-trained\nagents may have limited performance and would further need to be fine-tuned\nonline by interacting with the environment. During online fine-tuning, the\nperformance of the pre-trained agent may collapse quickly due to the sudden\ndistribution shift from offline to online data. While constraints enforced by\noffline RL methods such as a behaviour cloning loss prevent this to an extent,\nthese constraints also significantly slow down online fine-tuning by forcing\nthe agent to stay close to the behavior policy. We propose to adaptively weigh\nthe behavior cloning loss during online fine-tuning based on the agent's\nperformance and training stability. Moreover, we use a randomized ensemble of Q\nfunctions to further increase the sample efficiency of online fine-tuning by\nperforming a large number of learning updates. Experiments show that the\nproposed method yields state-of-the-art offline-to-online reinforcement\nlearning performance on the popular D4RL benchmark. Code is available:\n\\url{https://github.com/zhaoyi11/adaptive_bc}.", + "authors": "Yi Zhao, Rinu Boney, Alexander Ilin, Juho Kannala, Joni Pajarinen", + "published": "2022-10-25", + "updated": "2022-10-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2011.14379v1", + "title": "Offline Reinforcement Learning Hands-On", + "abstract": "Offline Reinforcement Learning (RL) aims to turn large datasets into powerful\ndecision-making engines without any online interactions with the environment.\nThis great promise has motivated a large amount of research that hopes to\nreplicate the success RL has experienced in simulation settings. This work\nambitions to reflect upon these efforts from a practitioner viewpoint. We start\nby discussing the dataset properties that we hypothesise can characterise the\ntype of offline methods that will be the most successful. We then verify these\nclaims through a set of experiments and designed datasets generated from\nenvironments with both discrete and continuous action spaces. 
We experimentally\nvalidate that diversity and high-return examples in the data are crucial to the\nsuccess of offline RL and show that behavioural cloning remains a strong\ncontender compared to its contemporaries. Overall, this work stands as a\ntutorial to help people build their intuition on today's offline RL methods and\ntheir applicability.", + "authors": "Louis Monier, Jakub Kmec, Alexandre Laterre, Thomas Pierrot, Valentin Courgeau, Olivier Sigaud, Karim Beguir", + "published": "2020-11-29", + "updated": "2020-11-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.04779v3", + "title": "Challenges and Opportunities in Offline Reinforcement Learning from Visual Observations", + "abstract": "Offline reinforcement learning has shown great promise in leveraging large\npre-collected datasets for policy learning, allowing agents to forgo\noften-expensive online data collection. However, offline reinforcement learning\nfrom visual observations with continuous action spaces remains under-explored,\nwith a limited understanding of the key challenges in this complex domain. In\nthis paper, we establish simple baselines for continuous control in the visual\ndomain and introduce a suite of benchmarking tasks for offline reinforcement\nlearning from visual observations designed to better represent the data\ndistributions present in real-world offline RL problems and guided by a set of\ndesiderata for offline RL from visual observations, including robustness to\nvisual distractions and visually identifiable changes in dynamics. Using this\nsuite of benchmarking tasks, we show that simple modifications to two popular\nvision-based online reinforcement learning algorithms, DreamerV2 and DrQ-v2,\nsuffice to outperform existing offline RL methods and establish competitive\nbaselines for continuous control in the visual domain. We rigorously evaluate\nthese algorithms and perform an empirical evaluation of the differences between\nstate-of-the-art model-based and model-free offline RL methods for continuous\ncontrol from visual observations. All code and data used in this evaluation are\nopen-sourced to facilitate progress in this domain.", + "authors": "Cong Lu, Philip J. Ball, Tim G. J. Rudner, Jack Parker-Holder, Michael A. Osborne, Yee Whye Teh", + "published": "2022-06-09", + "updated": "2023-07-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2201.05433v1", + "title": "Comparing Model-free and Model-based Algorithms for Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) Algorithms are often designed with\nenvironments such as MuJoCo in mind, in which the planning horizon is extremely\nlong and no noise exists. We compare model-free, model-based, as well as hybrid\noffline RL approaches on various industrial benchmark (IB) datasets to test the\nalgorithms in settings closer to real world problems, including complex noise\nand partially observable states. 
We find that on the IB, hybrid approaches face\nsevere difficulties and that simpler algorithms, such as rollout based\nalgorithms or model-free algorithms with simpler regularizers perform best on\nthe datasets.", + "authors": "Phillip Swazinna, Steffen Udluft, Daniel Hein, Thomas Runkler", + "published": "2022-01-14", + "updated": "2022-01-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2210.05922v1", + "title": "A Unified Framework for Alternating Offline Model Training and Policy Learning", + "abstract": "In offline model-based reinforcement learning (offline MBRL), we learn a\ndynamic model from historically collected data, and subsequently utilize the\nlearned model and fixed datasets for policy learning, without further\ninteracting with the environment. Offline MBRL algorithms can improve the\nefficiency and stability of policy learning over the model-free algorithms.\nHowever, in most of the existing offline MBRL algorithms, the learning\nobjectives for the dynamic models and the policies are isolated from each\nother. Such an objective mismatch may lead to inferior performance of the\nlearned agents. In this paper, we address this issue by developing an iterative\noffline MBRL framework, where we maximize a lower bound of the true expected\nreturn, by alternating between dynamic-model training and policy learning. With\nthe proposed unified model-policy learning framework, we achieve competitive\nperformance on a wide range of continuous-control offline reinforcement\nlearning datasets. Source code is publicly released.", + "authors": "Shentao Yang, Shujian Zhang, Yihao Feng, Mingyuan Zhou", + "published": "2022-10-12", + "updated": "2022-10-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.09712v1", + "title": "Semi-Offline Reinforcement Learning for Optimized Text Generation", + "abstract": "In reinforcement learning (RL), there are two major settings for interacting\nwith the environment: online and offline. Online methods explore the\nenvironment at significant time cost, and offline methods efficiently obtain\nreward signals by sacrificing exploration capability. We propose semi-offline\nRL, a novel paradigm that smoothly transits from offline to online settings,\nbalances exploration capability and training cost, and provides a theoretical\nfoundation for comparing different RL settings. Based on the semi-offline\nformulation, we present the RL setting that is optimal in terms of optimization\ncost, asymptotic error, and overfitting error bound. Extensive experiments show\nthat our semi-offline approach is efficient and yields comparable or often\nbetter performance compared with state-of-the-art methods.", + "authors": "Changyu Chen, Xiting Wang, Yiqiao Jin, Victor Ye Dong, Li Dong, Jie Cao, Yi Liu, Rui Yan", + "published": "2023-06-16", + "updated": "2023-06-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2404.12639v1", + "title": "Single-Task Continual Offline Reinforcement Learning", + "abstract": "In this paper, we study the continual learning problem of single-task offline\nreinforcement learning. 
In the past, continual reinforcement learning usually\nonly dealt with multitasking, that is, learning multiple related or unrelated\ntasks in a row, but once each learned task was learned, it was not relearned,\nbut only used in subsequent processes. However, offline reinforcement learning\ntasks require the continuously learning of multiple different datasets for the\nsame task. Existing algorithms will try their best to achieve the best results\nin each offline dataset they have learned and the skills of the network will\noverwrite the high-quality datasets that have been learned after learning the\nsubsequent poor datasets. On the other hand, if too much emphasis is placed on\nstability, the network will learn the subsequent better dataset after learning\nthe poor offline dataset, and the problem of insufficient plasticity and\nnon-learning will occur. How to design a strategy that can always preserve the\nbest performance for each state in the data that has been learned is a new\nchallenge and the focus of this study. Therefore, this study proposes a new\nalgorithm, called Ensemble Offline Reinforcement Learning Based on Experience\nReplay, which introduces multiple value networks to learn the same dataset and\njudge whether the strategy has been learned by the discrete degree of the value\nnetwork, to improve the performance of the network in single-task offline\nreinforcement learning.", + "authors": "Sibo Gai, Donglin Wang", + "published": "2024-04-19", + "updated": "2024-04-19", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "I.2.6" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.08900v1", + "title": "Offline Multi-Agent Reinforcement Learning with Coupled Value Factorization", + "abstract": "Offline reinforcement learning (RL) that learns policies from offline\ndatasets without environment interaction has received considerable attention in\nrecent years. Compared with the rich literature in the single-agent case,\noffline multi-agent RL is still a relatively underexplored area. Most existing\nmethods directly apply offline RL ingredients in the multi-agent setting\nwithout fully leveraging the decomposable problem structure, leading to less\nsatisfactory performance in complex tasks. We present OMAC, a new offline\nmulti-agent RL algorithm with coupled value factorization. OMAC adopts a\ncoupled value factorization scheme that decomposes the global value function\ninto local and shared components, and also maintains the credit assignment\nconsistency between the state-value and Q-value functions. Moreover, OMAC\nperforms in-sample learning on the decomposed local state-value functions,\nwhich implicitly conducts max-Q operation at the local level while avoiding\ndistributional shift caused by evaluating out-of-distribution actions. 
Based on\nthe comprehensive evaluations of the offline multi-agent StarCraft II\nmicro-management tasks, we demonstrate the superior performance of OMAC over\nthe state-of-the-art offline multi-agent RL methods.", + "authors": "Xiangsen Wang, Xianyuan Zhan", + "published": "2023-06-15", + "updated": "2023-06-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.MA" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.07166v1", + "title": "Regularizing a Model-based Policy Stationary Distribution to Stabilize Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) extends the paradigm of classical RL\nalgorithms to purely learning from static datasets, without interacting with\nthe underlying environment during the learning process. A key challenge of\noffline RL is the instability of policy training, caused by the mismatch\nbetween the distribution of the offline data and the undiscounted stationary\nstate-action distribution of the learned policy. To avoid the detrimental\nimpact of distribution mismatch, we regularize the undiscounted stationary\ndistribution of the current policy towards the offline data during the policy\noptimization process. Further, we train a dynamics model to both implement this\nregularization and better estimate the stationary distribution of the current\npolicy, reducing the error induced by distribution mismatch. On a wide range of\ncontinuous-control offline RL datasets, our method indicates competitive\nperformance, which validates our algorithm. The code is publicly available.", + "authors": "Shentao Yang, Yihao Feng, Shujian Zhang, Mingyuan Zhou", + "published": "2022-06-14", + "updated": "2022-06-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2110.00188v2", + "title": "Offline Reinforcement Learning with Reverse Model-based Imagination", + "abstract": "In offline reinforcement learning (offline RL), one of the main challenges is\nto deal with the distributional shift between the learning policy and the given\ndataset. To address this problem, recent offline RL methods attempt to\nintroduce conservatism bias to encourage learning in high-confidence areas.\nModel-free approaches directly encode such bias into policy or value function\nlearning using conservative regularizations or special network structures, but\ntheir constrained policy search limits the generalization beyond the offline\ndataset. Model-based approaches learn forward dynamics models with conservatism\nquantifications and then generate imaginary trajectories to extend the offline\ndatasets. However, due to limited samples in offline datasets, conservatism\nquantifications often suffer from overgeneralization in out-of-support regions.\nThe unreliable conservative measures will mislead forward model-based\nimaginations to undesired areas, leading to overaggressive behaviors. To\nencourage more conservatism, we propose a novel model-based offline RL\nframework, called Reverse Offline Model-based Imagination (ROMI). We learn a\nreverse dynamics model in conjunction with a novel reverse policy, which can\ngenerate rollouts leading to the target goal states within the offline dataset.\nThese reverse imaginations provide informed data augmentation for model-free\npolicy learning and enable conservative generalization beyond the offline\ndataset. 
ROMI can effectively combine with off-the-shelf model-free algorithms\nto enable model-based generalization with proper conservatism. Empirical\nresults show that our method can generate more conservative behaviors and\nachieve state-of-the-art performance on offline RL benchmark tasks.", + "authors": "Jianhao Wang, Wenzhe Li, Haozhe Jiang, Guangxiang Zhu, Siyuan Li, Chongjie Zhang", + "published": "2021-10-01", + "updated": "2021-11-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.13412v2", + "title": "CLUE: Calibrated Latent Guidance for Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) aims to learn an optimal policy from\npre-collected and labeled datasets, which eliminates the time-consuming data\ncollection in online RL. However, offline RL still bears a large burden of\nspecifying/handcrafting extrinsic rewards for each transition in the offline\ndata. As a remedy for the labor-intensive labeling, we propose to endow offline\nRL tasks with a few expert data and utilize the limited expert data to drive\nintrinsic rewards, thus eliminating the need for extrinsic rewards. To achieve\nthat, we introduce \\textbf{C}alibrated \\textbf{L}atent\ng\\textbf{U}idanc\\textbf{E} (CLUE), which utilizes a conditional variational\nauto-encoder to learn a latent space such that intrinsic rewards can be\ndirectly qualified over the latent space. CLUE's key idea is to align the\nintrinsic rewards consistent with the expert intention via enforcing the\nembeddings of expert data to a calibrated contextual representation. We\ninstantiate the expert-driven intrinsic rewards in sparse-reward offline RL\ntasks, offline imitation learning (IL) tasks, and unsupervised offline RL\ntasks. Empirically, we find that CLUE can effectively improve the sparse-reward\noffline RL performance, outperform the state-of-the-art offline IL baselines,\nand discover diverse skills from static reward-free offline data.", + "authors": "Jinxin Liu, Lipeng Zu, Li He, Donglin Wang", + "published": "2023-06-23", + "updated": "2023-10-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.10813v2", + "title": "A Workflow for Offline Model-Free Robotic Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) enables learning control policies by\nutilizing only prior experience, without any online interaction. This can allow\nrobots to acquire generalizable skills from large and diverse datasets, without\nany costly or unsafe online data collection. Despite recent algorithmic\nadvances in offline RL, applying these methods to real-world problems has\nproven challenging. Although offline RL methods can learn from prior data,\nthere is no clear and well-understood process for making various design\nchoices, from model architecture to algorithm hyperparameters, without actually\nevaluating the learned policies online. In this paper, our aim is to develop a\npractical workflow for using offline RL analogous to the relatively\nwell-understood workflows for supervised learning problems. To this end, we\ndevise a set of metrics and conditions that can be tracked over the course of\noffline training, and can inform the practitioner about how the algorithm and\nmodel architecture should be adjusted to improve final performance. 
Our\nworkflow is derived from a conceptual understanding of the behavior of\nconservative offline RL algorithms and cross-validation in supervised learning.\nWe demonstrate the efficacy of this workflow in producing effective policies\nwithout any online tuning, both in several simulated robotic learning scenarios\nand for three tasks on two distinct real robots, focusing on learning\nmanipulation skills with raw image observations with sparse binary rewards.\nExplanatory video and additional results can be found at\nsites.google.com/view/offline-rl-workflow", + "authors": "Aviral Kumar, Anikait Singh, Stephen Tian, Chelsea Finn, Sergey Levine", + "published": "2021-09-22", + "updated": "2021-09-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2202.02929v2", + "title": "Model-Based Offline Meta-Reinforcement Learning with Regularization", + "abstract": "Existing offline reinforcement learning (RL) methods face a few major\nchallenges, particularly the distributional shift between the learned policy\nand the behavior policy. Offline Meta-RL is emerging as a promising approach to\naddress these challenges, aiming to learn an informative meta-policy from a\ncollection of tasks. Nevertheless, as shown in our empirical studies, offline\nMeta-RL could be outperformed by offline single-task RL methods on tasks with\ngood quality of datasets, indicating that a right balance has to be delicately\ncalibrated between \"exploring\" the out-of-distribution state-actions by\nfollowing the meta-policy and \"exploiting\" the offline dataset by staying close\nto the behavior policy. Motivated by such empirical analysis, we explore\nmodel-based offline Meta-RL with regularized Policy Optimization (MerPO), which\nlearns a meta-model for efficient task structure inference and an informative\nmeta-policy for safe exploration of out-of-distribution state-actions. In\nparticular, we devise a new meta-Regularized model-based Actor-Critic (RAC)\nmethod for within-task policy optimization, as a key building block of MerPO,\nusing conservative policy evaluation and regularized policy improvement; and\nthe intrinsic tradeoff therein is achieved via striking the right balance\nbetween two regularizers, one based on the behavior policy and the other on the\nmeta-policy. We theoretically show that the learnt policy offers guaranteed\nimprovement over both the behavior policy and the meta-policy, thus ensuring\nthe performance improvement on new tasks via offline Meta-RL. Experiments\ncorroborate the superior performance of MerPO over existing offline Meta-RL\nmethods.", + "authors": "Sen Lin, Jialin Wan, Tengyu Xu, Yingbin Liang, Junshan Zhang", + "published": "2022-02-07", + "updated": "2022-07-13", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2106.09119v2", + "title": "Behavioral Priors and Dynamics Models: Improving Performance and Domain Transfer in Offline RL", + "abstract": "Offline Reinforcement Learning (RL) aims to extract near-optimal policies\nfrom imperfect offline data without additional environment interactions.\nExtracting policies from diverse offline datasets has the potential to expand\nthe range of applicability of RL by making the training process safer, faster,\nand more streamlined. 
We investigate how to improve the performance of offline\nRL algorithms, their robustness to the quality of offline data, as well as their\ngeneralization capabilities. To this end, we introduce Offline Model-based RL\nwith Adaptive Behavioral Priors (MABE). Our algorithm is based on the finding\nthat dynamics models, which support within-domain generalization, and\nbehavioral priors, which support cross-domain generalization, are\ncomplementary. When combined together, they substantially improve the\nperformance and generalization of offline RL policies. In the widely studied\nD4RL offline RL benchmark, we find that MABE achieves higher average\nperformance compared to prior model-free and model-based algorithms. In\nexperiments that require cross-domain generalization, we find that MABE\noutperforms prior methods. Our website is available at\nhttps://sites.google.com/berkeley.edu/mabe .", + "authors": "Catherine Cang, Aravind Rajeswaran, Pieter Abbeel, Michael Laskin", + "published": "2021-06-16", + "updated": "2021-06-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.08128v1", + "title": "Conservative Data Sharing for Multi-Task Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) algorithms have shown promising results\nin domains where abundant pre-collected data is available. However, prior\nmethods focus on solving individual problems from scratch with an offline\ndataset without considering how an offline RL agent can acquire multiple\nskills. We argue that a natural use case of offline RL is in settings where we\ncan pool large amounts of data collected in various scenarios for solving\ndifferent tasks, and utilize all of this data to learn behaviors for all the\ntasks more effectively rather than training each one in isolation. However,\nsharing data across all tasks in multi-task offline RL performs surprisingly\npoorly in practice. Through thorough empirical analysis, we find that sharing data can\nactually exacerbate the distributional shift between the learned policy and the\ndataset, which in turn can lead to divergence of the learned policy and poor\nperformance. To address this challenge, we develop a simple technique for\ndata-sharing in multi-task offline RL that routes data based on the improvement\nover the task-specific data. We call this approach conservative data sharing\n(CDS), and it can be applied with multiple single-task offline RL methods. On a\nrange of challenging multi-task locomotion, navigation, and vision-based\nrobotic manipulation problems, CDS achieves the best or comparable performance\ncompared to prior offline multi-task RL methods and previous data sharing\napproaches.", + "authors": "Tianhe Yu, Aviral Kumar, Yevgen Chebotar, Karol Hausman, Sergey Levine, Chelsea Finn", + "published": "2021-09-16", + "updated": "2021-09-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2106.10411v2", + "title": "Boosting Offline Reinforcement Learning with Residual Generative Modeling", + "abstract": "Offline reinforcement learning (RL) tries to learn the near-optimal policy\nwith recorded offline experience without online exploration. Current offline RL\nresearch includes: 1) generative modeling, i.e., approximating a policy using\nfixed data; and 2) learning the state-action value function. 
While most\nresearch focuses on the state-action function part through reducing the\nbootstrapping error in value function approximation induced by the distribution\nshift of training data, the effects of error propagation in generative modeling\nhave been neglected. In this paper, we analyze the error in generative\nmodeling. We propose AQL (action-conditioned Q-learning), a residual generative\nmodel to reduce policy approximation error for offline RL. We show that our\nmethod can learn more accurate policy approximations in different benchmark\ndatasets. In addition, we show that the proposed offline RL method can learn\nmore competitive AI agents in complex control tasks under the multiplayer\nonline battle arena (MOBA) game Honor of Kings.", + "authors": "Hua Wei, Deheng Ye, Zhao Liu, Hao Wu, Bo Yuan, Qiang Fu, Wei Yang, Zhenhui Li", + "published": "2021-06-19", + "updated": "2021-06-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "68T01", + "I.2.8; I.2.1" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.06871v3", + "title": "Improving Offline-to-Online Reinforcement Learning with Q-Ensembles", + "abstract": "Offline reinforcement learning (RL) is a learning paradigm where an agent\nlearns from a fixed dataset of experience. However, learning solely from a\nstatic dataset can limit the performance due to the lack of exploration. To\novercome it, offline-to-online RL combines offline pre-training with online\nfine-tuning, which enables the agent to further refine its policy by\ninteracting with the environment in real-time. Despite its benefits, existing\noffline-to-online RL methods suffer from performance degradation and slow\nimprovement during the online phase. To tackle these challenges, we propose a\nnovel framework called Ensemble-based Offline-to-Online (E2O) RL. By increasing\nthe number of Q-networks, we seamlessly bridge offline pre-training and online\nfine-tuning without degrading performance. Moreover, to expedite online\nperformance enhancement, we appropriately loosen the pessimism of Q-value\nestimation and incorporate ensemble-based exploration mechanisms into our\nframework. Experimental results demonstrate that E2O can substantially improve\nthe training stability, learning efficiency, and final performance of existing\noffline RL methods during online fine-tuning on a range of locomotion and\nnavigation tasks, significantly outperforming existing offline-to-online RL\nmethods.", + "authors": "Kai Zhao, Yi Ma, Jianye Hao, Jinyi Liu, Yan Zheng, Zhaopeng Meng", + "published": "2023-06-12", + "updated": "2023-12-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.06734v1", + "title": "Corruption Robust Offline Reinforcement Learning with Human Feedback", + "abstract": "We study data corruption robustness for reinforcement learning with human\nfeedback (RLHF) in an offline setting. Given an offline dataset of pairs of\ntrajectories along with feedback about human preferences, an\n$\\varepsilon$-fraction of the pairs is corrupted (e.g., feedback flipped or\ntrajectory features manipulated), capturing an adversarial attack or noisy\nhuman preferences. We aim to design algorithms that identify a near-optimal\npolicy from the corrupted data, with provable guarantees. 
Existing theoretical\nworks have separately studied the settings of corruption robust RL (learning\nfrom scalar rewards directly under corruption) and offline RLHF (learning from\nhuman feedback without corruption); however, they are inapplicable to our\nproblem of dealing with corrupted data in offline RLHF setting. To this end, we\ndesign novel corruption robust offline RLHF methods under various assumptions\non the coverage of the data-generating distributions. At a high level, our\nmethodology robustifies an offline RLHF framework by first learning a reward\nmodel along with confidence sets and then learning a pessimistic optimal policy\nover the confidence set. Our key insight is that learning optimal policy can be\ndone by leveraging an offline corruption-robust RL oracle in different ways\n(e.g., zero-order oracle or first-order oracle), depending on the data coverage\nassumptions. To our knowledge, ours is the first work that provides provable\ncorruption robust offline RLHF methods.", + "authors": "Debmalya Mandal, Andi Nika, Parameswaran Kamalaruban, Adish Singla, Goran Radanovi\u0107", + "published": "2024-02-09", + "updated": "2024-02-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2005.05951v3", + "title": "MOReL : Model-Based Offline Reinforcement Learning", + "abstract": "In offline reinforcement learning (RL), the goal is to learn a highly\nrewarding policy based solely on a dataset of historical interactions with the\nenvironment. The ability to train RL policies offline can greatly expand the\napplicability of RL, its data efficiency, and its experimental velocity. Prior\nwork in offline RL has been confined almost exclusively to model-free RL\napproaches. In this work, we present MOReL, an algorithmic framework for\nmodel-based offline RL. This framework consists of two steps: (a) learning a\npessimistic MDP (P-MDP) using the offline dataset; and (b) learning a\nnear-optimal policy in this P-MDP. The learned P-MDP has the property that for\nany policy, the performance in the real environment is approximately\nlower-bounded by the performance in the P-MDP. This enables it to serve as a\ngood surrogate for purposes of policy evaluation and learning, and overcome\ncommon pitfalls of model-based RL like model exploitation. Theoretically, we\nshow that MOReL is minimax optimal (up to log factors) for offline RL. Through\nexperiments, we show that MOReL matches or exceeds state-of-the-art results in\nwidely studied offline RL benchmarks. Moreover, the modular design of MOReL\nenables future advances in its components (e.g. generative modeling,\nuncertainty estimation, planning etc.) to directly translate into advances for\noffline RL.", + "authors": "Rahul Kidambi, Aravind Rajeswaran, Praneeth Netrapalli, Thorsten Joachims", + "published": "2020-05-12", + "updated": "2021-03-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2107.06106v2", + "title": "Conservative Offline Distributional Reinforcement Learning", + "abstract": "Many reinforcement learning (RL) problems in practice are offline, learning\npurely from observational data. A key challenge is how to ensure the learned\npolicy is safe, which requires quantifying the risk associated with different\nactions. 
In the online setting, distributional RL algorithms do so by learning\nthe distribution over returns (i.e., cumulative rewards) instead of the\nexpected return; beyond quantifying risk, they have also been shown to learn\nbetter representations for planning. We propose Conservative Offline\nDistributional Actor Critic (CODAC), an offline RL algorithm suitable for both\nrisk-neutral and risk-averse domains. CODAC adapts distributional RL to the\noffline setting by penalizing the predicted quantiles of the return for\nout-of-distribution actions. We prove that CODAC learns a conservative return\ndistribution -- in particular, for finite MDPs, CODAC converges to an uniform\nlower bound on the quantiles of the return distribution; our proof relies on a\nnovel analysis of the distributional Bellman operator. In our experiments, on\ntwo challenging robot navigation tasks, CODAC successfully learns risk-averse\npolicies using offline data collected purely from risk-neutral agents.\nFurthermore, CODAC is state-of-the-art on the D4RL MuJoCo benchmark in terms of\nboth expected and risk-sensitive performance.", + "authors": "Yecheng Jason Ma, Dinesh Jayaraman, Osbert Bastani", + "published": "2021-07-12", + "updated": "2021-10-26", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2112.15578v1", + "title": "Importance of Empirical Sample Complexity Analysis for Offline Reinforcement Learning", + "abstract": "We hypothesize that empirically studying the sample complexity of offline\nreinforcement learning (RL) is crucial for the practical applications of RL in\nthe real world. Several recent works have demonstrated the ability to learn\npolicies directly from offline data. In this work, we ask the question of the\ndependency on the number of samples for learning from offline data. Our\nobjective is to emphasize that studying sample complexity for offline RL is\nimportant, and is an indicator of the usefulness of existing offline\nalgorithms. We propose an evaluation approach for sample complexity analysis of\noffline RL.", + "authors": "Samin Yeasar Arnob, Riashat Islam, Doina Precup", + "published": "2021-12-31", + "updated": "2021-12-31", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2205.09550v1", + "title": "Data Valuation for Offline Reinforcement Learning", + "abstract": "The success of deep reinforcement learning (DRL) hinges on the availability\nof training data, which is typically obtained via a large number of environment\ninteractions. In many real-world scenarios, costs and risks are associated with\ngathering these data. The field of offline reinforcement learning addresses\nthese issues through outsourcing the collection of data to a domain expert or a\ncarefully monitored program and subsequently searching for a batch-constrained\noptimal policy. With the emergence of data markets, an alternative to\nconstructing a dataset in-house is to purchase external data. However, while\nstate-of-the-art offline reinforcement learning approaches have shown a lot of\npromise, they currently rely on carefully constructed datasets that are well\naligned with the intended target domains. This raises questions regarding the\ntransferability and robustness of an offline reinforcement learning agent\ntrained on externally acquired data. 
In this paper, we empirically evaluate the\nability of the current state-of-the-art offline reinforcement learning\napproaches to cope with the source-target domain mismatch within two MuJoCo\nenvironments, finding that current state-of-the-art offline reinforcement\nlearning algorithms underperform in the target domain. To address this, we\npropose data valuation for offline reinforcement learning (DVORL), which allows\nus to identify relevant and high-quality transitions, improving the performance\nand transferability of policies learned by offline reinforcement learning\nalgorithms. The results show that our method outperforms offline reinforcement\nlearning baselines on two MuJoCo environments.", + "authors": "Amir Abolfazli, Gregory Palmer, Daniel Kudenko", + "published": "2022-05-19", + "updated": "2022-05-19", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2302.13493v1", + "title": "The Provable Benefits of Unsupervised Data Sharing for Offline Reinforcement Learning", + "abstract": "Self-supervised methods have become crucial for advancing deep learning by\nleveraging data itself to reduce the need for expensive annotations. However,\nthe question of how to conduct self-supervised offline reinforcement learning\n(RL) in a principled way remains unclear. In this paper, we address this issue\nby investigating the theoretical benefits of utilizing reward-free data in\nlinear Markov Decision Processes (MDPs) within a semi-supervised setting.\n Further, we propose a novel, Provable Data Sharing algorithm (PDS) to utilize\nsuch reward-free data for offline RL. PDS uses additional penalties on the\nreward function learned from labeled data to prevent overestimation, ensuring a\nconservative algorithm. Our results on various offline RL tasks demonstrate\nthat PDS significantly improves the performance of offline RL algorithms with\nreward-free data. Overall, our work provides a promising approach to leveraging\nthe benefits of unlabeled data in offline RL while maintaining theoretical\nguarantees. We believe our findings will contribute to developing more robust\nself-supervised RL methods.", + "authors": "Hao Hu, Yiqin Yang, Qianchuan Zhao, Chongjie Zhang", + "published": "2023-02-27", + "updated": "2023-02-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2011.13885v1", + "title": "Offline Learning from Demonstrations and Unlabeled Experience", + "abstract": "Behavior cloning (BC) is often practical for robot learning because it allows\na policy to be trained offline without rewards, by supervised learning on\nexpert demonstrations. However, BC does not effectively leverage what we will\nrefer to as unlabeled experience: data of mixed and unknown quality without\nreward annotations. This unlabeled data can be generated by a variety of\nsources such as human teleoperation, scripted policies and other agents on the\nsame robot. Towards data-driven offline robot learning that can use this\nunlabeled experience, we introduce Offline Reinforced Imitation Learning\n(ORIL). 
ORIL first learns a reward function by contrasting observations from\ndemonstrator and unlabeled trajectories, then annotates all data with the\nlearned reward, and finally trains an agent via offline reinforcement learning.\nAcross a diverse set of continuous control and simulated robotic manipulation\ntasks, we show that ORIL consistently outperforms comparable BC agents by\neffectively leveraging unlabeled experience.", + "authors": "Konrad Zolna, Alexander Novikov, Ksenia Konyushkova, Caglar Gulcehre, Ziyu Wang, Yusuf Aytar, Misha Denil, Nando de Freitas, Scott Reed", + "published": "2020-11-27", + "updated": "2020-11-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2212.08302v1", + "title": "Safe Evaluation For Offline Learning: Are We Ready To Deploy?", + "abstract": "The world currently offers an abundance of data in multiple domains, from\nwhich we can learn reinforcement learning (RL) policies without further\ninteraction with the environment. RL agents learning offline from such data is\npossible but deploying them while learning might be dangerous in domains where\nsafety is critical. Therefore, it is essential to find a way to estimate how a\nnewly-learned agent will perform if deployed in the target environment before\nactually deploying it and without the risk of overestimating its true\nperformance. To achieve this, we introduce a framework for safe evaluation of\noffline learning using approximate high-confidence off-policy evaluation\n(HCOPE) to estimate the performance of offline policies during learning. In our\nsetting, we assume a source of data, which we split into a train-set, to learn\nan offline policy, and a test-set, to estimate a lower-bound on the offline\npolicy using off-policy evaluation with bootstrapping. A lower-bound estimate\ntells us how good a newly-learned target policy would perform before it is\ndeployed in the real environment, and therefore allows us to decide when to\ndeploy our learned policy.", + "authors": "Hager Radi, Josiah P. Hanna, Peter Stone, Matthew E. Taylor", + "published": "2022-12-16", + "updated": "2022-12-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.13777v4", + "title": "Deep Generative Models for Offline Policy Learning: Tutorial, Survey, and Perspectives on Future Directions", + "abstract": "Deep generative models (DGMs) have demonstrated great success across various\ndomains, particularly in generating texts, images, and videos using models\ntrained from offline data. Similarly, data-driven decision-making and robotic\ncontrol also necessitate learning a generator function from the offline data to\nserve as the strategy or policy. In this case, applying deep generative models\nin offline policy learning exhibits great potential, and numerous studies have\nexplored in this direction. However, this field still lacks a comprehensive\nreview and so developments of different branches are relatively independent.\nThus, we provide the first systematic review on the applications of deep\ngenerative models for offline policy learning. 
In particular, we cover five\nmainstream deep generative models, including Variational Auto-Encoders,\nGenerative Adversarial Networks, Normalizing Flows, Transformers, and Diffusion\nModels, and their applications in both offline reinforcement learning (offline\nRL) and imitation learning (IL). Offline RL and IL are two main branches of\noffline policy learning and are widely-adopted techniques for sequential\ndecision-making. Specifically, for each type of DGM-based offline policy\nlearning, we distill its fundamental scheme, categorize related works based on\nthe usage of the DGM, and sort out the development process of algorithms in\nthat field. Subsequent to the main content, we provide in-depth discussions on\ndeep generative models and offline policy learning as a summary, based on which\nwe present our perspectives on future research directions. This work offers a\nhands-on reference for the research progress in deep generative models for\noffline policy learning, and aims to inspire improved DGM-based offline RL or\nIL algorithms. For convenience, we maintain a paper list on\nhttps://github.com/LucasCJYSDL/DGMs-for-Offline-Policy-Learning.", + "authors": "Jiayu Chen, Bhargav Ganguly, Yang Xu, Yongsheng Mei, Tian Lan, Vaneet Aggarwal", + "published": "2024-02-21", + "updated": "2024-02-26", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2311.03351v4", + "title": "Uni-O4: Unifying Online and Offline Deep Reinforcement Learning with Multi-Step On-Policy Optimization", + "abstract": "Combining offline and online reinforcement learning (RL) is crucial for\nefficient and safe learning. However, previous approaches treat offline and\nonline learning as separate procedures, resulting in redundant designs and\nlimited performance. We ask: Can we achieve straightforward yet effective\noffline and online learning without introducing extra conservatism or\nregularization? In this study, we propose Uni-o4, which utilizes an on-policy\nobjective for both offline and online learning. Owing to the alignment of\nobjectives in two phases, the RL agent can transfer between offline and online\nlearning seamlessly. This property enhances the flexibility of the learning\nparadigm, allowing for arbitrary combinations of pretraining, fine-tuning,\noffline, and online learning. In the offline phase, specifically, Uni-o4\nleverages diverse ensemble policies to address the mismatch issues between the\nestimated behavior policy and the offline dataset. Through a simple offline\npolicy evaluation (OPE) approach, Uni-o4 can achieve multi-step policy\nimprovement safely. We demonstrate that by employing the method above, the\nfusion of these two paradigms can yield superior offline initialization as well\nas stable and rapid online fine-tuning capabilities. Through real-world robot\ntasks, we highlight the benefits of this paradigm for rapid deployment in\nchallenging, previously unseen real-world environments. Additionally, through\ncomprehensive evaluations using numerous simulated benchmarks, we substantiate\nthat our method achieves state-of-the-art performance in both offline and\noffline-to-online fine-tuning learning. 
Our website:\nhttps://lei-kun.github.io/uni-o4/ .", + "authors": "Kun Lei, Zhengmao He, Chenhao Lu, Kaizhe Hu, Yang Gao, Huazhe Xu", + "published": "2023-11-06", + "updated": "2024-03-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2211.04974v2", + "title": "Leveraging Offline Data in Online Reinforcement Learning", + "abstract": "Two central paradigms have emerged in the reinforcement learning (RL)\ncommunity: online RL and offline RL. In the online RL setting, the agent has no\nprior knowledge of the environment, and must interact with it in order to find\nan $\\epsilon$-optimal policy. In the offline RL setting, the learner instead\nhas access to a fixed dataset to learn from, but is unable to otherwise\ninteract with the environment, and must obtain the best policy it can from this\noffline data. Practical scenarios often motivate an intermediate setting: if we\nhave some set of offline data and, in addition, may also interact with the\nenvironment, how can we best use the offline data to minimize the number of\nonline interactions necessary to learn an $\\epsilon$-optimal policy?\n In this work, we consider this setting, which we call the \\textsf{FineTuneRL}\nsetting, for MDPs with linear structure. We characterize the necessary number\nof online samples needed in this setting given access to some offline dataset,\nand develop an algorithm, \\textsc{FTPedel}, which is provably optimal, up to\n$H$ factors. We show through an explicit example that combining offline data\nwith online interactions can lead to a provable improvement over either purely\noffline or purely online RL. Finally, our results illustrate the distinction\nbetween \\emph{verifiable} learning, the typical setting considered in online\nRL, and \\emph{unverifiable} learning, the setting often considered in offline\nRL, and show that there is a formal separation between these regimes.", + "authors": "Andrew Wagenmaker, Aldo Pacchiano", + "published": "2022-11-09", + "updated": "2023-07-20", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2008.06043v3", + "title": "Offline Meta-Reinforcement Learning with Advantage Weighting", + "abstract": "This paper introduces the offline meta-reinforcement learning (offline\nmeta-RL) problem setting and proposes an algorithm that performs well in this\nsetting. Offline meta-RL is analogous to the widely successful supervised\nlearning strategy of pre-training a model on a large batch of fixed,\npre-collected data (possibly from various tasks) and fine-tuning the model to a\nnew task with relatively little data. That is, in offline meta-RL, we\nmeta-train on fixed, pre-collected data from several tasks in order to adapt to\na new task with a very small amount (less than 5 trajectories) of data from the\nnew task. By nature of being offline, algorithms for offline meta-RL can\nutilize the largest possible pool of training data available and eliminate\npotentially unsafe or costly data collection during meta-training. This setting\ninherits the challenges of offline RL, but it differs significantly because\noffline RL does not generally consider a) transfer to new tasks or b) limited\ndata from the test task, both of which we face in offline meta-RL. 
Targeting\nthe offline meta-RL setting, we propose Meta-Actor Critic with Advantage\nWeighting (MACAW), an optimization-based meta-learning algorithm that uses\nsimple, supervised regression objectives for both the inner and outer loop of\nmeta-training. On offline variants of common meta-RL benchmarks, we empirically\nfind that this approach enables fully offline meta-reinforcement learning and\nachieves notable gains over prior methods.", + "authors": "Eric Mitchell, Rafael Rafailov, Xue Bin Peng, Sergey Levine, Chelsea Finn", + "published": "2020-08-13", + "updated": "2021-07-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2005.11142v1", + "title": "Two-stage Deep Reinforcement Learning for Inverter-based Volt-VAR Control in Active Distribution Networks", + "abstract": "Model-based Vol/VAR optimization method is widely used to eliminate voltage\nviolations and reduce network losses. However, the parameters of active\ndistribution networks(ADNs) are not onsite identified, so significant errors\nmay be involved in the model and make the model-based method infeasible. To\ncope with this critical issue, we propose a novel two-stage deep reinforcement\nlearning (DRL) method to improve the voltage profile by regulating\ninverter-based energy resources, which consists of offline stage and online\nstage. In the offline stage, a highly efficient adversarial reinforcement\nlearning algorithm is developed to train an offline agent robust to the model\nmismatch. In the sequential online stage, we transfer the offline agent safely\nas the online agent to perform continuous learning and controlling online with\nsignificantly improved safety and efficiency. Numerical simulations on IEEE\ntest cases not only demonstrate that the proposed adversarial reinforcement\nlearning algorithm outperforms the state-of-art algorithm, but also show that\nour proposed two-stage method achieves much better performance than the\nexisting DRL based methods in the online application.", + "authors": "Haotian Liu, Wenchuan Wu", + "published": "2020-05-20", + "updated": "2020-05-20", + "primary_cat": "eess.SY", + "cats": [ + "eess.SY", + "cs.LG", + "cs.SY", + "J.7; C.3" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.14629v1", + "title": "Improving Zero-shot Generalization in Offline Reinforcement Learning using Generalized Similarity Functions", + "abstract": "Reinforcement learning (RL) agents are widely used for solving complex\nsequential decision making tasks, but still exhibit difficulty in generalizing\nto scenarios not seen during training. While prior online approaches\ndemonstrated that using additional signals beyond the reward function can lead\nto better generalization capabilities in RL agents, i.e. using self-supervised\nlearning (SSL), they struggle in the offline RL setting, i.e. learning from a\nstatic dataset. We show that performance of online algorithms for\ngeneralization in RL can be hindered in the offline setting due to poor\nestimation of similarity between observations. We propose a new\ntheoretically-motivated framework called Generalized Similarity Functions\n(GSF), which uses contrastive learning to train an offline RL agent to\naggregate observations based on the similarity of their expected future\nbehavior, where we quantify this similarity using \\emph{generalized value\nfunctions}. 
We show that GSF is general enough to recover existing SSL\nobjectives while also improving zero-shot generalization performance on a\ncomplex offline RL benchmark, offline Procgen.", + "authors": "Bogdan Mazoure, Ilya Kostrikov, Ofir Nachum, Jonathan Tompson", + "published": "2021-11-29", + "updated": "2021-11-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2307.02752v2", + "title": "Offline Reinforcement Learning with Imbalanced Datasets", + "abstract": "The prevalent use of benchmarks in current offline reinforcement learning\n(RL) research has led to a neglect of the imbalance of real-world dataset\ndistributions in the development of models. The real-world offline RL dataset\nis often imbalanced over the state space due to the challenge of exploration or\nsafety considerations. In this paper, we specify properties of imbalanced\ndatasets in offline RL, where the state coverage follows a power law\ndistribution characterized by skewed policies. Theoretically and empirically,\nwe show that typical offline RL methods based on distributional constraints,\nsuch as conservative Q-learning (CQL), are ineffective in extracting policies\nunder the imbalanced dataset. Inspired by natural intelligence, we propose a\nnovel offline RL method that utilizes the augmentation of CQL with a retrieval\nprocess to recall past related experiences, effectively alleviating the\nchallenges posed by imbalanced datasets. We evaluate our method on several\ntasks in the context of imbalanced datasets with varying levels of imbalance,\nutilizing the variant of D4RL. Empirical results demonstrate the superiority of\nour method over other baselines.", + "authors": "Li Jiang, Sijie Chen, Jielin Qiu, Haoran Xu, Wai Kin Chan, Zhao Ding", + "published": "2023-07-06", + "updated": "2023-07-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.01474v1", + "title": "Offline Reinforcement Learning with Causal Structured World Models", + "abstract": "Model-based methods have recently shown promise for offline reinforcement\nlearning (RL), aiming to learn good policies from historical data without\ninteracting with the environment. Previous model-based offline RL methods learn\nfully connected nets as world-models that map the states and actions to the\nnext-step states. However, it is sensible that a world-model should adhere to\nthe underlying causal effect such that it will support learning an effective\npolicy generalizing well in unseen states. In this paper, we first provide\ntheoretical results that causal world-models can outperform plain world-models\nfor offline RL by incorporating the causal structure into the generalization\nerror bound. We then propose a practical algorithm, oFfline mOdel-based\nreinforcement learning with CaUsal Structure (FOCUS), to illustrate the\nfeasibility of learning and leveraging causal structure in offline RL.\nExperimental results on two benchmarks show that FOCUS reconstructs the\nunderlying causal structure accurately and robustly. 
Consequently, it performs\nbetter than the plain model-based offline RL algorithms and other causal\nmodel-based RL algorithms.", + "authors": "Zheng-Mao Zhu, Xiong-Hui Chen, Hong-Long Tian, Kun Zhang, Yang Yu", + "published": "2022-06-03", + "updated": "2022-06-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2305.16217v2", + "title": "Beyond Reward: Offline Preference-guided Policy Optimization", + "abstract": "This study focuses on the topic of offline preference-based reinforcement\nlearning (PbRL), a variant of conventional reinforcement learning that\ndispenses with the need for online interaction or specification of reward\nfunctions. Instead, the agent is provided with fixed offline trajectories and\nhuman preferences between pairs of trajectories to extract the dynamics and\ntask information, respectively. Since the dynamics and task information are\northogonal, a naive approach would involve using preference-based reward\nlearning followed by an off-the-shelf offline RL algorithm. However, this\nrequires the separate learning of a scalar reward function, which is assumed to\nbe an information bottleneck of the learning process. To address this issue, we\npropose the offline preference-guided policy optimization (OPPO) paradigm,\nwhich models offline trajectories and preferences in a one-step process,\neliminating the need for separately learning a reward function. OPPO achieves\nthis by introducing an offline hindsight information matching objective for\noptimizing a contextual policy and a preference modeling objective for finding\nthe optimal context. OPPO further integrates a well-performing decision policy\nby optimizing the two objectives iteratively. Our empirical results demonstrate\nthat OPPO effectively models offline preferences and outperforms prior\ncompeting baselines, including offline RL algorithms performed over either true\nor pseudo reward function specifications. Our code is available on the project\nwebsite: https://sites.google.com/view/oppo-icml-2023 .", + "authors": "Yachen Kang, Diyuan Shi, Jinxin Liu, Li He, Donglin Wang", + "published": "2023-05-25", + "updated": "2023-06-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.12755v1", + "title": "Beyond OOD State Actions: Supported Cross-Domain Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning (RL) aims to learn a policy using only\npre-collected and fixed data. Although avoiding the time-consuming online\ninteractions in RL, it poses challenges for out-of-distribution (OOD) state\nactions and often suffers from data inefficiency for training. Despite many\nefforts being devoted to addressing OOD state actions, the latter (data\ninefficiency) receives little attention in offline RL. To address this, this\npaper proposes the cross-domain offline RL, which assumes offline data\nincorporate additional source-domain data from varying transition dynamics\n(environments), and expects it to contribute to the offline data efficiency. To\ndo so, we identify a new challenge of OOD transition dynamics, beyond the\ncommon OOD state actions issue, when utilizing cross-domain offline data. Then,\nwe propose our method BOSA, which employs two support-constrained objectives to\naddress the above OOD issues. 
Through extensive experiments in the cross-domain\noffline RL setting, we demonstrate BOSA can greatly improve offline data\nefficiency: using only 10\\% of the target data, BOSA could achieve {74.4\\%} of\nthe SOTA offline RL performance that uses 100\\% of the target data.\nAdditionally, we also show BOSA can be effortlessly plugged into model-based\noffline RL and noising data augmentation techniques (used for generating\nsource-domain data), which naturally avoids the potential dynamics mismatch\nbetween target-domain data and newly generated source-domain data.", + "authors": "Jinxin Liu, Ziqi Zhang, Zhenyu Wei, Zifeng Zhuang, Yachen Kang, Sibo Gai, Donglin Wang", + "published": "2023-06-22", + "updated": "2023-06-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2305.03097v1", + "title": "Federated Ensemble-Directed Offline Reinforcement Learning", + "abstract": "We consider the problem of federated offline reinforcement learning (RL), a\nscenario under which distributed learning agents must collaboratively learn a\nhigh-quality control policy only using small pre-collected datasets generated\naccording to different unknown behavior policies. Naively combining a standard\noffline RL approach with a standard federated learning approach to solve this\nproblem can lead to poorly performing policies. In response, we develop the\nFederated Ensemble-Directed Offline Reinforcement Learning Algorithm (FEDORA),\nwhich distills the collective wisdom of the clients using an ensemble learning\napproach. We develop the FEDORA codebase to utilize distributed compute\nresources on a federated learning platform. We show that FEDORA significantly\noutperforms other approaches, including offline RL over the combined data pool,\nin various complex continuous control environments and real world datasets.\nFinally, we demonstrate the performance of FEDORA in the real-world on a mobile\nrobot.", + "authors": "Desik Rengarajan, Nitin Ragothaman, Dileep Kalathil, Srinivas Shakkottai", + "published": "2023-05-04", + "updated": "2023-05-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.02439v2", + "title": "DiffStitch: Boosting Offline Reinforcement Learning with Diffusion-based Trajectory Stitching", + "abstract": "In offline reinforcement learning (RL), the performance of the learned policy\nhighly depends on the quality of offline datasets. However, in many cases, the\noffline dataset contains very limited optimal trajectories, which poses a\nchallenge for offline RL algorithms as agents must acquire the ability to\ntransit to high-reward regions. To address this issue, we introduce\nDiffusion-based Trajectory Stitching (DiffStitch), a novel diffusion-based data\naugmentation pipeline that systematically generates stitching transitions\nbetween trajectories. DiffStitch effectively connects low-reward trajectories\nwith high-reward trajectories, forming globally optimal trajectories to address\nthe challenges faced by offline RL algorithms. Empirical experiments conducted\non D4RL datasets demonstrate the effectiveness of DiffStitch across RL\nmethodologies. 
Notably, DiffStitch demonstrates substantial enhancements in the\nperformance of one-step methods (IQL), imitation learning methods (TD3+BC), and\ntrajectory optimization methods (DT).", + "authors": "Guanghe Li, Yixiang Shan, Zhengbang Zhu, Ting Long, Weinan Zhang", + "published": "2024-02-04", + "updated": "2024-02-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2303.17396v1", + "title": "Finetuning from Offline Reinforcement Learning: Challenges, Trade-offs and Practical Solutions", + "abstract": "Offline reinforcement learning (RL) allows for the training of competent\nagents from offline datasets without any interaction with the environment.\nOnline finetuning of such offline models can further improve performance. But\nhow should we ideally finetune agents obtained from offline RL training? While\noffline RL algorithms can in principle be used for finetuning, in practice,\ntheir online performance improves slowly. In contrast, we show that it is\npossible to use standard online off-policy algorithms for faster improvement.\nHowever, we find this approach may suffer from policy collapse, where the\npolicy undergoes severe performance deterioration during initial online\nlearning. We investigate the issue of policy collapse and how it relates to\ndata diversity, algorithm choices and online replay distribution. Based on\nthese insights, we propose a conservative policy optimization procedure that\ncan achieve stable and sample-efficient online learning from offline\npretraining.", + "authors": "Yicheng Luo, Jackie Kay, Edward Grefenstette, Marc Peter Deisenroth", + "published": "2023-03-30", + "updated": "2023-03-30", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.05546v1", + "title": "Offline Actor-Critic Reinforcement Learning Scales to Large Models", + "abstract": "We show that offline actor-critic reinforcement learning can scale to large\nmodels - such as transformers - and follows similar scaling laws as supervised\nlearning. We find that offline actor-critic algorithms can outperform strong,\nsupervised, behavioral cloning baselines for multi-task training on a large\ndataset containing both sub-optimal and expert behavior on 132 continuous\ncontrol tasks. We introduce a Perceiver-based actor-critic model and elucidate\nthe key model features needed to make offline RL work with self- and\ncross-attention modules. 
Overall, we find that: i) simple offline actor critic\nalgorithms are a natural choice for gradually moving away from the currently\npredominant paradigm of behavioral cloning, and ii) via offline RL it is\npossible to learn multi-task policies that master many domains simultaneously,\nincluding real robotics tasks, from sub-optimal demonstrations or\nself-generated data.", + "authors": "Jost Tobias Springenberg, Abbas Abdolmaleki, Jingwei Zhang, Oliver Groth, Michael Bloesch, Thomas Lampe, Philemon Brakel, Sarah Bechtle, Steven Kapturowski, Roland Hafner, Nicolas Heess, Martin Riedmiller", + "published": "2024-02-08", + "updated": "2024-02-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2303.17156v2", + "title": "MAHALO: Unifying Offline Reinforcement Learning and Imitation Learning from Observations", + "abstract": "We study a new paradigm for sequential decision making, called offline policy\nlearning from observations (PLfO). Offline PLfO aims to learn policies using\ndatasets with substandard qualities: 1) only a subset of trajectories is\nlabeled with rewards, 2) labeled trajectories may not contain actions, 3)\nlabeled trajectories may not be of high quality, and 4) the data may not have\nfull coverage. Such imperfection is common in real-world learning scenarios,\nand offline PLfO encompasses many existing offline learning setups, including\noffline imitation learning (IL), offline IL from observations (ILfO), and\noffline reinforcement learning (RL). In this work, we present a generic\napproach to offline PLfO, called $\\textbf{M}$odality-agnostic\n$\\textbf{A}$dversarial $\\textbf{H}$ypothesis $\\textbf{A}$daptation for\n$\\textbf{L}$earning from $\\textbf{O}$bservations (MAHALO). Built upon the\npessimism concept in offline RL, MAHALO optimizes the policy using a\nperformance lower bound that accounts for uncertainty due to the dataset's\ninsufficient coverage. We implement this idea by adversarially training\ndata-consistent critic and reward functions, which forces the learned policy to\nbe robust to data deficiency. We show that MAHALO consistently outperforms or\nmatches specialized algorithms across a variety of offline PLfO tasks in theory\nand experiments. Our code is available at https://github.com/AnqiLi/mahalo.", + "authors": "Anqi Li, Byron Boots, Ching-An Cheng", + "published": "2023-03-30", + "updated": "2023-08-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2304.07920v2", + "title": "Causal Decision Transformer for Recommender Systems via Offline Reinforcement Learning", + "abstract": "Reinforcement learning-based recommender systems have recently gained\npopularity. However, the design of the reward function, on which the agent\nrelies to optimize its recommendation policy, is often not straightforward.\nExploring the causality underlying users' behavior can take the place of the\nreward function in guiding the agent to capture the dynamic interests of users.\nMoreover, due to the typical limitations of simulation environments (e.g., data\ninefficiency), most of the work cannot be broadly applied in large-scale\nsituations. Although some works attempt to convert the offline dataset into a\nsimulator, data inefficiency makes the learning process even slower. 
Because of\nthe nature of reinforcement learning (i.e., learning by interaction), it cannot\ncollect enough data to train during a single interaction. Furthermore,\ntraditional reinforcement learning algorithms do not have a solid capability\nlike supervised learning methods to learn from offline datasets directly. In\nthis paper, we propose a new model named the causal decision transformer for\nrecommender systems (CDT4Rec). CDT4Rec is an offline reinforcement learning\nsystem that can learn from a dataset rather than from online interaction.\nMoreover, CDT4Rec employs the transformer architecture, which is capable of\nprocessing large offline datasets and capturing both short-term and long-term\ndependencies within the data to estimate the causal relationship between\naction, state, and reward. To demonstrate the feasibility and superiority of\nour model, we have conducted experiments on six real-world offline datasets and\none online simulator.", + "authors": "Siyu Wang, Xiaocong Chen, Dietmar Jannach, Lina Yao", + "published": "2023-04-17", + "updated": "2023-08-27", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2309.12716v1", + "title": "H2O+: An Improved Framework for Hybrid Offline-and-Online RL with Dynamics Gaps", + "abstract": "Solving real-world complex tasks using reinforcement learning (RL) without\nhigh-fidelity simulation environments or large amounts of offline data can be\nquite challenging. Online RL agents trained in imperfect simulation\nenvironments can suffer from severe sim-to-real issues. Offline RL approaches\nalthough bypass the need for simulators, often pose demanding requirements on\nthe size and quality of the offline datasets. The recently emerged hybrid\noffline-and-online RL provides an attractive framework that enables joint use\nof limited offline data and imperfect simulator for transferable policy\nlearning. In this paper, we develop a new algorithm, called H2O+, which offers\ngreat flexibility to bridge various choices of offline and online learning\nmethods, while also accounting for dynamics gaps between the real and\nsimulation environment. Through extensive simulation and real-world robotics\nexperiments, we demonstrate superior performance and flexibility over advanced\ncross-domain online and offline RL algorithms.", + "authors": "Haoyi Niu, Tianying Ji, Bingqi Liu, Haocheng Zhao, Xiangyu Zhu, Jianying Zheng, Pengfei Huang, Guyue Zhou, Jianming Hu, Xianyuan Zhan", + "published": "2023-09-22", + "updated": "2023-09-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2201.13425v3", + "title": "Don't Change the Algorithm, Change the Data: Exploratory Data for Offline Reinforcement Learning", + "abstract": "Recent progress in deep learning has relied on access to large and diverse\ndatasets. Such data-driven progress has been less evident in offline\nreinforcement learning (RL), because offline RL data is usually collected to\noptimize specific target tasks limiting the data's diversity. In this work, we\npropose Exploratory data for Offline RL (ExORL), a data-centric approach to\noffline RL. ExORL first generates data with unsupervised reward-free\nexploration, then relabels this data with a downstream reward before training a\npolicy with offline RL. 
We find that exploratory data allows vanilla off-policy\nRL algorithms, without any offline-specific modifications, to outperform or\nmatch state-of-the-art offline RL algorithms on downstream tasks. Our findings\nsuggest that data generation is as important as algorithmic advances for\noffline RL and hence requires careful consideration from the community. Code\nand data can be found at https://github.com/denisyarats/exorl .", + "authors": "Denis Yarats, David Brandfonbrener, Hao Liu, Michael Laskin, Pieter Abbeel, Alessandro Lazaric, Lerrel Pinto", + "published": "2022-01-31", + "updated": "2022-04-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.02829v3", + "title": "RORL: Robust Offline Reinforcement Learning via Conservative Smoothing", + "abstract": "Offline reinforcement learning (RL) provides a promising direction to exploit\nmassive amount of offline data for complex decision-making tasks. Due to the\ndistribution shift issue, current offline RL algorithms are generally designed\nto be conservative in value estimation and action selection. However, such\nconservatism can impair the robustness of learned policies when encountering\nobservation deviation under realistic conditions, such as sensor errors and\nadversarial attacks. To trade off robustness and conservatism, we propose\nRobust Offline Reinforcement Learning (RORL) with a novel conservative\nsmoothing technique. In RORL, we explicitly introduce regularization on the\npolicy and the value function for states near the dataset, as well as\nadditional conservative value estimation on these states. Theoretically, we\nshow RORL enjoys a tighter suboptimality bound than recent theoretical results\nin linear MDPs. We demonstrate that RORL can achieve state-of-the-art\nperformance on the general offline RL benchmark and is considerably robust to\nadversarial observation perturbations.", + "authors": "Rui Yang, Chenjia Bai, Xiaoteng Ma, Zhaoran Wang, Chongjie Zhang, Lei Han", + "published": "2022-06-06", + "updated": "2022-10-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2010.11895v1", + "title": "What are the Statistical Limits of Offline RL with Linear Function Approximation?", + "abstract": "Offline reinforcement learning seeks to utilize offline (observational) data\nto guide the learning of (causal) sequential decision making strategies. The\nhope is that offline reinforcement learning coupled with function approximation\nmethods (to deal with the curse of dimensionality) can provide a means to help\nalleviate the excessive sample complexity burden in modern sequential decision\nmaking problems. However, the extent to which this broader approach can be\neffective is not well understood, where the literature largely consists of\nsufficient conditions.\n This work focuses on the basic question of what are necessary\nrepresentational and distributional conditions that permit provable\nsample-efficient offline reinforcement learning. 
Perhaps surprisingly, our main\nresult shows that even if: i) we have realizability in that the true value\nfunction of \\emph{every} policy is linear in a given set of features and 2) our\noff-policy data has good coverage over all features (under a strong spectral\ncondition), then any algorithm still (information-theoretically) requires a\nnumber of offline samples that is exponential in the problem horizon in order\nto non-trivially estimate the value of \\emph{any} given policy. Our results\nhighlight that sample-efficient offline policy evaluation is simply not\npossible unless significantly stronger conditions hold; such conditions include\neither having low distribution shift (where the offline data distribution is\nclose to the distribution of the policy to be evaluated) or significantly\nstronger representational conditions (beyond realizability).", + "authors": "Ruosong Wang, Dean P. Foster, Sham M. Kakade", + "published": "2020-10-22", + "updated": "2020-10-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "math.OC", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2010.13611v3", + "title": "OPAL: Offline Primitive Discovery for Accelerating Offline Reinforcement Learning", + "abstract": "Reinforcement learning (RL) has achieved impressive performance in a variety\nof online settings in which an agent's ability to query the environment for\ntransitions and rewards is effectively unlimited. However, in many practical\napplications, the situation is reversed: an agent may have access to large\namounts of undirected offline experience data, while access to the online\nenvironment is severely limited. In this work, we focus on this offline\nsetting. Our main insight is that, when presented with offline data composed of\na variety of behaviors, an effective way to leverage this data is to extract a\ncontinuous space of recurring and temporally extended primitive behaviors\nbefore using these primitives for downstream task learning. Primitives\nextracted in this way serve two purposes: they delineate the behaviors that are\nsupported by the data from those that are not, making them useful for avoiding\ndistributional shift in offline RL; and they provide a degree of temporal\nabstraction, which reduces the effective horizon yielding better learning in\ntheory, and improved offline RL in practice. In addition to benefiting offline\npolicy optimization, we show that performing offline primitive learning in this\nway can also be leveraged for improving few-shot imitation learning as well as\nexploration and transfer in online RL on a variety of benchmark domains.\nVisualizations are available at https://sites.google.com/view/opal-iclr", + "authors": "Anurag Ajay, Aviral Kumar, Pulkit Agrawal, Sergey Levine, Ofir Nachum", + "published": "2020-10-26", + "updated": "2021-05-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2301.12876v2", + "title": "Guiding Online Reinforcement Learning with Action-Free Offline Pretraining", + "abstract": "Offline RL methods have been shown to reduce the need for environment\ninteraction by training agents using offline collected episodes. 
However, these\nmethods typically require action information to be logged during data\ncollection, which can be difficult or even impossible in some practical cases.\nIn this paper, we investigate the potential of using action-free offline\ndatasets to improve online reinforcement learning, name this problem\nReinforcement Learning with Action-Free Offline Pretraining (AFP-RL). We\nintroduce Action-Free Guide (AF-Guide), a method that guides online training by\nextracting knowledge from action-free offline datasets. AF-Guide consists of an\nAction-Free Decision Transformer (AFDT) implementing a variant of Upside-Down\nReinforcement Learning. It learns to plan the next states from the offline\ndataset, and a Guided Soft Actor-Critic (Guided SAC) that learns online with\nguidance from AFDT. Experimental results show that AF-Guide can improve sample\nefficiency and performance in online training thanks to the knowledge from the\naction-free offline dataset. Code is available at\nhttps://github.com/Vision-CAIR/AF-Guide.", + "authors": "Deyao Zhu, Yuhui Wang, J\u00fcrgen Schmidhuber, Mohamed Elhoseiny", + "published": "2023-01-30", + "updated": "2023-03-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.08066v5", + "title": "Exploiting Action Impact Regularity and Exogenous State Variables for Offline Reinforcement Learning", + "abstract": "Offline reinforcement learning -- learning a policy from a batch of data --\nis known to be hard for general MDPs. These results motivate the need to look\nat specific classes of MDPs where offline reinforcement learning might be\nfeasible. In this work, we explore a restricted class of MDPs to obtain\nguarantees for offline reinforcement learning. The key property, which we call\nAction Impact Regularity (AIR), is that actions primarily impact a part of the\nstate (an endogenous component) and have limited impact on the remaining part\nof the state (an exogenous component). AIR is a strong assumption, but it\nnonetheless holds in a number of real-world domains including financial\nmarkets. We discuss algorithms that exploit the AIR property, and provide a\ntheoretical analysis for an algorithm based on Fitted-Q Iteration. Finally, we\ndemonstrate that the algorithm outperforms existing offline reinforcement\nlearning algorithms across different data collection policies in simulated and\nreal world environments where the regularity holds.", + "authors": "Vincent Liu, James R. Wright, Martha White", + "published": "2021-11-15", + "updated": "2023-05-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2203.05804v1", + "title": "Near-optimal Offline Reinforcement Learning with Linear Representation: Leveraging Variance Information with Pessimism", + "abstract": "Offline reinforcement learning, which seeks to utilize offline/historical\ndata to optimize sequential decision-making strategies, has gained surging\nprominence in recent studies. Due to the advantage that appropriate function\napproximators can help mitigate the sample complexity burden in modern\nreinforcement learning problems, existing endeavors usually enforce powerful\nfunction representation models (e.g. neural networks) to learn the optimal\npolicies. 
However, a precise understanding of the statistical limits with\nfunction representations, remains elusive, even when such a representation is\nlinear.\n Towards this goal, we study the statistical limits of offline reinforcement\nlearning with linear model representations. To derive the tight offline\nlearning bound, we design the variance-aware pessimistic value iteration\n(VAPVI), which adopts the conditional variance information of the value\nfunction for time-inhomogeneous episodic linear Markov decision processes\n(MDPs). VAPVI leverages estimated variances of the value functions to reweight\nthe Bellman residuals in the least-square pessimistic value iteration and\nprovides improved offline learning bounds over the best-known existing results\n(whereas the Bellman residuals are equally weighted by design). More\nimportantly, our learning bounds are expressed in terms of system quantities,\nwhich provide natural instance-dependent characterizations that previous\nresults are short of. We hope our results draw a clearer picture of what\noffline learning should look like when linear representations are provided.", + "authors": "Ming Yin, Yaqi Duan, Mengdi Wang, Yu-Xiang Wang", + "published": "2022-03-11", + "updated": "2022-03-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.18617v1", + "title": "ELA: Exploited Level Augmentation for Offline Learning in Zero-Sum Games", + "abstract": "Offline learning has become widely used due to its ability to derive\neffective policies from offline datasets gathered by expert demonstrators\nwithout interacting with the environment directly. Recent research has explored\nvarious ways to enhance offline learning efficiency by considering the\ncharacteristics (e.g., expertise level or multiple demonstrators) of the\ndataset. However, a different approach is necessary in the context of zero-sum\ngames, where outcomes vary significantly based on the strategy of the opponent.\nIn this study, we introduce a novel approach that uses unsupervised learning\ntechniques to estimate the exploited level of each trajectory from the offline\ndataset of zero-sum games made by diverse demonstrators. Subsequently, we\nincorporate the estimated exploited level into the offline learning to maximize\nthe influence of the dominant strategy. Our method enables interpretable\nexploited level estimation in multiple zero-sum games and effectively\nidentifies dominant strategy data. Also, our exploited level augmented offline\nlearning significantly enhances the original offline learning algorithms\nincluding imitation learning and offline reinforcement learning for zero-sum\ngames.", + "authors": "Shiqi Lei, Kanghoon Lee, Linjing Li, Jinkyoo Park, Jiachen Li", + "published": "2024-02-28", + "updated": "2024-02-28", + "primary_cat": "cs.GT", + "cats": [ + "cs.GT", + "cs.AI", + "cs.LG", + "cs.MA" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2110.09796v1", + "title": "Offline Reinforcement Learning with Value-based Episodic Memory", + "abstract": "Offline reinforcement learning (RL) shows promise of applying RL to\nreal-world problems by effectively utilizing previously collected data. Most\nexisting offline RL algorithms use regularization or constraints to suppress\nextrapolation error for actions outside the dataset. 
In this paper, we adopt a\ndifferent framework, which learns the V-function instead of the Q-function to\nnaturally keep the learning procedure within the support of an offline dataset.\nTo enable effective generalization while maintaining proper conservatism in\noffline learning, we propose Expectile V-Learning (EVL), which smoothly\ninterpolates between the optimal value learning and behavior cloning. Further,\nwe introduce implicit planning along offline trajectories to enhance learned\nV-values and accelerate convergence. Together, we present a new offline method\ncalled Value-based Episodic Memory (VEM). We provide theoretical analysis for\nthe convergence properties of our proposed VEM method, and empirical results in\nthe D4RL benchmark show that our method achieves superior performance in most\ntasks, particularly in sparse-reward tasks.", + "authors": "Xiaoteng Ma, Yiqin Yang, Hao Hu, Qihan Liu, Jun Yang, Chongjie Zhang, Qianchuan Zhao, Bin Liang", + "published": "2021-10-19", + "updated": "2021-10-19", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1907.04543v4", + "title": "An Optimistic Perspective on Offline Reinforcement Learning", + "abstract": "Off-policy reinforcement learning (RL) using a fixed offline dataset of\nlogged interactions is an important consideration in real world applications.\nThis paper studies offline RL using the DQN replay dataset comprising the\nentire replay experience of a DQN agent on 60 Atari 2600 games. We demonstrate\nthat recent off-policy deep RL algorithms, even when trained solely on this\nfixed dataset, outperform the fully trained DQN agent. To enhance\ngeneralization in the offline setting, we present Random Ensemble Mixture\n(REM), a robust Q-learning algorithm that enforces optimal Bellman consistency\non random convex combinations of multiple Q-value estimates. Offline REM\ntrained on the DQN replay dataset surpasses strong RL baselines. Ablation\nstudies highlight the role of offline dataset size and diversity as well as the\nalgorithm choice in our positive results. Overall, the results here present an\noptimistic view that robust RL algorithms trained on sufficiently large and\ndiverse offline datasets can lead to high quality policies. The DQN replay\ndataset can serve as an offline RL benchmark and is open-sourced.", + "authors": "Rishabh Agarwal, Dale Schuurmans, Mohammad Norouzi", + "published": "2019-07-10", + "updated": "2020-06-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2110.10905v1", + "title": "Efficient Robotic Manipulation Through Offline-to-Online Reinforcement Learning and Goal-Aware State Information", + "abstract": "End-to-end learning robotic manipulation with high data efficiency is one of\nthe key challenges in robotics. The latest methods that utilize human\ndemonstration data and unsupervised representation learning has proven to be a\npromising direction to improve RL learning efficiency. 
The use of demonstration\ndata also allows \"warming-up\" the RL policies using offline data with imitation\nlearning or the recently emerged offline reinforcement learning algorithms.\nHowever, existing works often treat offline policy learning and online\nexploration as two separate processes, which are often accompanied by severe\nperformance drop during the offline-to-online transition. Furthermore, many\nrobotic manipulation tasks involve complex sub-task structures, which are very\nchallenging to be solved in RL with sparse reward. In this work, we propose a\nunified offline-to-online RL framework that resolves the transition performance\ndrop issue. Additionally, we introduce goal-aware state information to the RL\nagent, which can greatly reduce task complexity and accelerate policy learning.\nCombined with an advanced unsupervised representation learning module, our\nframework achieves great training efficiency and performance compared with the\nstate-of-the-art methods in multiple robotic manipulation tasks.", + "authors": "Jin Li, Xianyuan Zhan, Zixu Xiao, Guyue Zhou", + "published": "2021-10-21", + "updated": "2021-10-21", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.AI" + ], + "category": "Offline AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2307.15690v1", + "title": "Benchmarking Offline Reinforcement Learning on Real-Robot Hardware", + "abstract": "Learning policies from previously recorded data is a promising direction for\nreal-world robotics tasks, as online learning is often infeasible. Dexterous\nmanipulation in particular remains an open problem in its general form. The\ncombination of offline reinforcement learning with large diverse datasets,\nhowever, has the potential to lead to a breakthrough in this challenging domain\nanalogously to the rapid progress made in supervised learning in recent years.\nTo coordinate the efforts of the research community toward tackling this\nproblem, we propose a benchmark including: i) a large collection of data for\noffline learning from a dexterous manipulation platform on two tasks, obtained\nwith capable RL agents trained in simulation; ii) the option to execute learned\npolicies on a real-world robotic system and a simulation for efficient\ndebugging. We evaluate prominent open-sourced offline reinforcement learning\nalgorithms on the datasets and provide a reproducible experimental setup for\noffline reinforcement learning on real systems.", + "authors": "Nico G\u00fcrtler, Sebastian Blaes, Pavel Kolev, Felix Widmaier, Manuel W\u00fcthrich, Stefan Bauer, Bernhard Sch\u00f6lkopf, Georg Martius", + "published": "2023-07-28", + "updated": "2023-07-28", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO" + ], + "category": "Offline AND Reinforcement AND Learning" + } +] \ No newline at end of file