{ "url": "http://arxiv.org/abs/2404.16399v1", "title": "Offline Reinforcement Learning with Behavioral Supervisor Tuning", "abstract": "Offline reinforcement learning (RL) algorithms are applied to learn\nperformant, well-generalizing policies when provided with a static dataset of\ninteractions. Many recent approaches to offline RL have seen substantial\nsuccess, but with one key caveat: they demand substantial per-dataset\nhyperparameter tuning to achieve reported performance, which requires policy\nrollouts in the environment to evaluate; this can rapidly become cumbersome.\nFurthermore, substantial tuning requirements can hamper the adoption of these\nalgorithms in practical domains. In this paper, we present TD3 with Behavioral\nSupervisor Tuning (TD3-BST), an algorithm that trains an uncertainty model and\nuses it to guide the policy to select actions within the dataset support.\nTD3-BST can learn more effective policies from offline datasets compared to\nprevious methods and achieves the best performance across challenging\nbenchmarks without requiring per-dataset tuning.", "authors": "Padmanaba Srinivasan, William Knottenbelt", "published": "2024-04-25", "updated": "2024-04-25", "primary_cat": "cs.LG", "cats": [ "cs.LG", "cs.AI" ], "label": "Original Paper", "paper_cat": "Offline AND Reinforcement AND Learning", "gt": "Reinforcement learning (RL) is a method of learning where an agent interacts with an environment to collect experiences and seeks to maximize the reward provided by the environ- ment. This typically follows a repeating cycle of experience collecting and improvement [Sutton and Barto, 2018]. This is termed online RL due to the need for policy rollouts in the environment. Both on-policy and off-policy RL require some schedule of online interaction which, in some domains, can be infeasible due to experimental or environmental lim- itations [Mirowski et al., 2018; Yu et al., 2021]. With such constraints, a dataset may instead be collected that consists of demonstrations by arbitrary (potentially multiple, unknown) behavior policies [Lange et al., 2012] that may be subopti- mal. Offline reinforcement learning algorithms are designed to recover optimal policies from such static datasets. The primary challenge in offline RL is the evaluation of out-of-distribution (OOD) actions; offline datasets rarely of- fer support over the entire state-action space and neural net- works overestimate values when extrapolating to OOD ac- tions [Fujimoto et al., 2018; Gulcehre et al., 2020; Kumar et 3. Figure 1: An illustration of our method versus typical, TD3-BC- like actor-constraint methods. TD3-BC: a) A policy selecting an OOD action is constrained to select in-dataset actions. b) A policy selecting the optimal action may be penalized for not selecting an in-dataset, but not in-batch, inferior action. Our method: c) A pol- icy selecting OOD actions is drawn towards in-dataset actions with decreasing constraint coefficient as it moves closer to any supported action. d) An optimal policy is not penalized for selecting an in- dataset action when the action is not contained in the current batch. al., 2020; Kumar et al., 2019]. If trained using standard off- policy methods, a policy will select any actions that maximize reward, which includes OOD actions. The difference between the rewards implied by the value function and the environ- ment results in a distribution shift that can result in failure in real-world policy rollouts. 
Thus, offline RL algorithms must both maximize the reward and follow the behavioral policy, while having to potentially "stitch" together several suboptimal trajectories. The latter requirement is usually satisfied by introducing a constraint on the actor that penalizes either deviation from the behavior policy or the epistemic uncertainty of the value function, or by regularizing the value function to directly minimize OOD action-values. Many recent approaches to offline RL [Tarasov et al., 2023; Zhu et al., 2022; Li et al., 2022; Nikulin et al., 2023; Xu et al., 2023] demonstrate success on D4RL benchmarks [Fu et al., 2020], but demand the onerous task of per-dataset hyperparameter tuning [Zhang and Jiang, 2021]. Algorithms that require substantial offline fine-tuning can be infeasible in real-world applications [Tang and Wiens, 2021], hampering their adoption in favor of simpler, older algorithms [Emerson et al., 2023; Zhu et al., 2023]. These older methods [Fujimoto and Gu, 2021; Kumar et al., 2020; Kostrikov et al., 2021b] provide excellent "bang-for-buck" as their hyperparameters work well across a range of D4RL datasets. Contributions In this paper, we show how a trained uncertainty model can be incorporated into the regularized policy objective as a behavioral supervisor to yield TD3 with behavioral supervisor tuning (TD3-BST). The key advantage of our method is the dynamic regularization weighting performed by the uncertainty network, which allows the learned policy to maximize Q-values around dataset modes. Evaluation on D4RL datasets demonstrates that TD3-BST achieves SOTA performance, and ablation experiments analyze the performance of the uncertainty model and the sensitivity of the parameters of the BST objective.", "main_content": "Reinforcement learning is a framework for sequential decision making, often formulated as a Markov decision process (MDP), $M = \{S, A, R, p, p_0, \gamma\}$, with state space $S$, action space $A$, a scalar reward $R(s, a)$ dependent on state and action, transition dynamics $p$, initial state distribution $p_0$ and discount factor $\gamma \in [0, 1)$ [Sutton and Barto, 2018]. RL aims to learn a policy $\pi \in \Pi$ that executes actions $a = \pi(s)$ to maximize the expected discounted return $J(\pi) = \mathbb{E}_{\tau \sim P_\pi(\tau)}\left[\sum_{t=0}^{T} \gamma^t R(s_t, a_t)\right]$, where $P_\pi(\tau) = p_0(s_0) \prod_{t=0}^{T} \pi(a_t \mid s_t)\, p(s_{t+1} \mid s_t, a_t)$ is the distribution over trajectories under $\pi$. Rather than rolling out an entire trajectory, a state-action value function (Q function) is often used: $Q^\pi(s, a) = \mathbb{E}_{\tau \sim P_\pi(\tau)}\left[\sum_{t=0}^{T} \gamma^t r(s_t, a_t) \mid s_0 = s, a_0 = a\right]$. 2.1 Offline Reinforcement Learning Offline RL algorithms are presented with a static dataset $D$ that consists of tuples $\{s, a, r, s'\}$ where $r \sim R(s, a)$ and $s' \sim p(\cdot \mid s, a)$. $D$ has limited coverage over $S \times A$; hence, offline RL algorithms must constrain the policy to select actions within the dataset support.
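To make the preceding definitions concrete, the following minimal sketch shows the kind of object D is and how the discounted return of a recorded trajectory is computed; the array names and shapes are illustrative and not taken from any particular benchmark or library.

```python
import numpy as np

def discounted_return(rewards: np.ndarray, gamma: float = 0.99) -> float:
    """Compute sum_t gamma^t * r_t for one recorded trajectory."""
    discounts = gamma ** np.arange(len(rewards))
    return float(np.sum(discounts * rewards))

# A static offline dataset D: (s, a, r, s') tuples logged by unknown behavior
# policies; no further environment interaction is available during training.
rng = np.random.default_rng(0)
dataset = {
    "observations": rng.normal(size=(1000, 17)),         # states s
    "actions": rng.uniform(-1.0, 1.0, size=(1000, 6)),   # actions a
    "rewards": rng.normal(size=(1000,)),                  # rewards r
    "next_observations": rng.normal(size=(1000, 17)),     # next states s'
}
print(discounted_return(dataset["rewards"][:100]))
```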
To constrain the policy in this way, algorithms employ one of three approaches: 1) policy constraints; 2) critic regularization; or 3) uncertainty penalization. Policy constraint Policy constraints modify only the actor's objective, to minimize divergence from the behavior policy. Most simply, this adds a constraint term [Fujimoto and Gu, 2021; Tarasov et al., 2023] to the policy objective: $\arg\max_{\pi} \mathbb{E}_{\{s,a\} \sim D}\left[Q(s, \pi(s)) - \alpha D(\pi, \pi_\beta)\right]$, (1) where α is a scalar controlling the strength of regularization and D(·, ·) is a divergence function between the policy π and the behavior policy π_β. In offline RL, we do not have access to π_β; some prior methods attempt to estimate it empirically [Kostrikov et al., 2021a; Li et al., 2023], which is challenging when the dataset is generated by a mixture of policies. Furthermore, selecting the constraint strength can be challenging, and a chosen value may not generalize even across datasets from similar environments [Tarasov et al., 2023; Kostrikov et al., 2021a]. Other policy constraint approaches use weighted BC [Nair et al., 2020; Kostrikov et al., 2021b; Xu et al., 2023] or (surrogate) BC constraints [Li et al., 2022; Wu et al., 2019; Li et al., 2023]. The former methods may be too restrictive as they do not allow OOD action selection, which is crucial for improving performance [Fu et al., 2022]. The latter methods may still require substantial tuning and, when model-based score functions are used, additional training. Other methods impose architectural constraints [Kumar et al., 2019; Fujimoto et al., 2019] that parameterize separate BC and reward-maximizing policy models. Critic Regularization Critic regularization methods directly address the OOD action-value overestimation problem by penalizing large values for adversarially sampled actions [Kostrikov et al., 2021a]. Ensembles Employing an ensemble of neural network estimators is a commonly used technique for prediction with a measure of epistemic uncertainty [Kondratyuk et al., 2020]. A family of offline RL methods employ large ensembles of value functions [An et al., 2021] and use the diversity of randomly initialized ensembles to implicitly reduce the selection of OOD actions, or directly penalize the variance of value estimates within the ensemble [Ghasemipour et al., 2022; Sutton and Barto, 2018]. Model-Based Uncertainty Estimation Learning an uncertainty model of the dataset is often devised analogously to the exploration-encouraging methods used in online RL, but employs them for anti-exploration instead [Rezaeifar et al., 2022]. An example is SAC-RND, which directly adopts such an approach [Nikulin et al., 2023]. Other algorithms include DOGE [Li et al., 2022], which trains a model to estimate uncertainty as a distance to the dataset actions, and DARL [Zhang et al., 2023], which uses the distance to random projections of state-action pairs as an uncertainty measure. As a whole, these methods optimize a distance d(·, ·) ≥ 0 that represents the uncertainty of an action. 2.2 Uncertainty Estimation Neural networks are known to predict confidently even when presented with OOD samples [Nguyen et al., 2015; Goodfellow et al., 2014; Lakshminarayanan et al., 2017]. A classical approach to OOD detection is to fit a generative model to the dataset that produces a high probability for in-dataset samples and a low probability for OOD ones. These methods work well for simple, unimodal data but can become computationally demanding for more complex data with multiple modes.
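As a hedged illustration of the classical generative-model approach described above (and not of the method developed later in this paper), one can fit a simple Gaussian mixture to in-dataset samples and flag low-likelihood points as OOD:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# In-dataset samples drawn from two well-separated modes; OOD samples are uniform.
in_data = np.concatenate([rng.normal(-2.0, 0.3, size=(500, 2)),
                          rng.normal(+2.0, 0.3, size=(500, 2))])
ood_data = rng.uniform(-6.0, 6.0, size=(200, 2))

gmm = GaussianMixture(n_components=2, random_state=0).fit(in_data)

# Threshold on log-likelihood: anything below the 1st percentile of in-dataset
# scores is treated as out-of-distribution.
threshold = np.percentile(gmm.score_samples(in_data), 1)
flagged = gmm.score_samples(ood_data) < threshold
print(f"flagged {flagged.mean():.0%} of uniform samples as OOD")
```

With many modes or high-dimensional state-action pairs, fitting and evaluating such a model becomes the computational burden noted above, which motivates the Morse-network alternative adopted in this work.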
Another approach trains classifiers that are leveraged to become finer-grained OOD detectors [Lee et al., 2018]. In this work, we focus on Morse neural networks [Dherin et al., 2023], an approach that trains a generative model to produce an unnormalized density that takes the value 1 at the dataset modes. 3 Preliminaries A Morse neural network produces an unnormalized density $M(x) \in [0, 1]$ on an embedding space $\mathbb{R}^e$ [Dherin et al., 2023]. A Morse network can produce a density in $\mathbb{R}^e$ that attains a value of 1 at mode submanifolds and decreases towards 0 when moving away from the mode. The rate at which the value decreases is controlled by a Morse Kernel. Definition 1 (Morse Kernel). A Morse Kernel is a positive definite kernel K. When applied in a space $Z = \mathbb{R}^k$, the kernel $K(z_1, z_2)$ takes values in the interval [0, 1], where $K(z_1, z_2) = 1$ iff $z_1 = z_2$. All kernels of the form $K(z_1, z_2) = e^{-D(z_1, z_2)}$, where D(·, ·) is a divergence [Amari, 2016], are Morse Kernels. Examples include common kernels such as the Radial Basis Function (RBF) kernel, $K_{RBF}(z_1, z_2) = e^{-\frac{\lambda^2}{2} \|z_1 - z_2\|^2}$. (2) The RBF kernel and its derivatives decay exponentially, causing learning signals to vanish rapidly. An alternative is the ubiquitous Rational Quadratic (RQ) kernel: $K_{RQ}(z_1, z_2) = \left(1 + \frac{\lambda^2}{2\kappa} \|z_1 - z_2\|^2\right)^{-\kappa}$, (3) where λ is a scale parameter in each kernel. The RQ kernel is a scaled mixture of RBF kernels controlled by κ and, for small κ, decays much more slowly [Williams and Rasmussen, 2006]. Consider a neural network that maps from a feature space into a latent space, $f_\phi : X \to Z$, with parameters φ, $X = \mathbb{R}^d$ and $Z = \mathbb{R}^k$. A Morse Kernel can impose structure on the latent space. Definition 2 (Morse Neural Network). A Morse neural network is a function $f_\phi : X \to Z$ in combination with a Morse Kernel $K(z, t)$, where $t \subset Z$ is a target, chosen as a hyperparameter of the model. The Morse neural network is defined as $M_\phi(x) = K(f_\phi(x), t)$. Using Definition 1 we see that $M_\phi(x) \in [0, 1]$, and when $M_\phi(x) = 1$, x corresponds to a mode that coincides with the level set of the submanifold of the Morse neural network. Furthermore, $M_\phi(x)$ corresponds to the certainty of the sample x being from the training dataset, so $1 - M_\phi(x)$ is a measure of the epistemic uncertainty of x. The function $-\log M_\phi(x)$ measures a squared distance, d(·, ·), between $f_\phi(x)$ and the closest mode in the latent space at m: $d(z) = \min_{m \in \mathcal{M}} d(z, m)$, (4) where $\mathcal{M}$ is the set of all modes. This encodes information about the topology of the submanifold and satisfies the Morse-Bott non-degeneracy condition [Basu and Prasad, 2020]. The Morse neural network offers the following properties: 1) $M_\phi(x) \in [0, 1]$; 2) $M_\phi(x) = 1$ at its mode submanifolds; 3) $-\log M_\phi(x) \ge 0$ is a squared distance that satisfies the Morse-Bott non-degeneracy condition on the mode submanifolds; 4) as $M_\phi(x)$ is an exponentiated squared distance, the function is also distance aware in the sense that as $f_\phi(x) \to t$, $M_\phi(x) \to 1$. A proof of each property is provided in the appendix. 4 Policy Constraint with a Behavioral Supervisor We now describe the constituent components of our algorithm, building on the Morse network and showing how it can be incorporated into a policy-regularized objective. 4.1 Morse Networks for Offline RL The target t is a hyperparameter that must be chosen.
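To make Definitions 1 and 2 concrete, the sketch below writes the RBF and RQ kernels of Equations 2 and 3 and assembles an (untrained) Morse certainty M_φ(x) = K(f_φ(x), t); the small MLP standing in for f_φ, its sizes, and the zero target t are illustrative assumptions rather than choices prescribed by the paper.

```python
import torch
import torch.nn as nn

def rbf_kernel(z1, z2, lam=1.0):
    # K_RBF(z1, z2) = exp(-(lam^2 / 2) * ||z1 - z2||^2)                  (Equation 2)
    return torch.exp(-0.5 * lam ** 2 * ((z1 - z2) ** 2).sum(dim=-1))

def rq_kernel(z1, z2, lam=1.0, kappa=1.0):
    # K_RQ(z1, z2) = (1 + (lam^2 / (2*kappa)) * ||z1 - z2||^2)^(-kappa)  (Equation 3)
    return (1.0 + (lam ** 2) / (2.0 * kappa) * ((z1 - z2) ** 2).sum(dim=-1)) ** (-kappa)

# A stand-in embedding network f_phi and a fixed target t in the latent space Z.
f_phi = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 4))
t = torch.zeros(4)

x = torch.randn(32, 8)
certainty = rq_kernel(f_phi(x), t)             # M_phi(x) in [0, 1]
uncertainty = 1.0 - certainty                   # epistemic uncertainty (property 1)
sq_distance = -torch.log(certainty + 1e-8)      # squared distance to the nearest mode (property 3)
```

Because the RQ kernel decays polynomially rather than exponentially, its gradients remain informative far from the modes, in line with the remark above about vanishing RBF learning signals.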
Returning to the choice of target, the experiments in [Dherin et al., 2023] use simple, toy classification datasets, for which a categorical t performs well. We find that using a static label for the Morse network yields poor performance; rather than a labeling model, we treat $f_\phi$ as a perturbation model that produces an action $f_\phi(s, a) = \hat{a}$ such that $\hat{a} = a$ if and only if $(s, a) \sim D$. An offline RL dataset D consists of tuples $\{s, a, r, s'\}$, where we assume the $\{s, a\}$ pairs are sampled i.i.d. from an unknown distribution. The Morse network must be fitted on N state-action pairs $\{(s_1, a_1), \dots, (s_N, a_N)\}$ such that $M_\phi(s_i, a_j) = 1$, for $i, j \in \{1, \dots, N\}$, only when $i = j$. We fit a Morse neural network to minimize the KL divergence between unnormalized measures [Amari, 2016] following [Dherin et al., 2023], $D_{KL}(D(s, a) \,\|\, M_\phi(s, a))$: $\min_\phi \mathbb{E}_{s,a \sim D}\left[\log \frac{D(s, a)}{M_\phi(s, a)}\right] + \int M_\phi(s, a) - D(s, a) \, da$. (5) With respect to φ, this amounts to minimizing the empirical loss: $\mathcal{L}(\phi) = -\frac{1}{N} \sum_{s,a \sim D} \log K(f_\phi(s, a), a) + \frac{1}{N} \sum_{s \sim D,\, a_u \sim D_{uni}} K(f_\phi(s, a_u), a_u)$, (6) where $a_u$ is an action sampled from a uniform distribution $D_{uni}$ over the action space. A learned Morse density is well suited to modeling ensembles of behavior policies [Lei et al., 2023], is more flexible than prior density models [Dherin et al., 2023; Kostrikov et al., 2021a; Li et al., 2023], and does not down-weight good, in-support actions that have low density under the behavior policy [Singh et al., 2022], since all modes have unnormalized density value 1. A Morse neural network can be expressed as an energy-based model (EBM) [Goodfellow et al., 2016]: Proposition 1. A Morse neural network can be expressed as an energy-based model: $E_\phi(x) = e^{-\log M_\phi(x)}$, where $M_\phi : \mathbb{R}^d \to \mathbb{R}$. Note that the EBM $E_\phi$ is itself unnormalized. Representing the Morse network as an EBM allows analysis analogous to [Florence et al., 2022]. Theorem 1. For a set-valued function $F(x) : x \in \mathbb{R}^m \to \mathbb{R}^n \setminus \{\emptyset\}$, there exists a continuous function $g : \mathbb{R}^{m+n} \to \mathbb{R}$ that is approximated by a continuous function approximator $g_\phi$ with arbitrarily small bounded error ε. This ensures that any point on the graph $F_\phi(x) = \arg\min_y g_\phi(x, y)$ is within distance ε of F. We refer the reader to [Florence et al., 2022] for a detailed proof. The theorem assumes that F(x) is an implicit function and states that the error at the level set (i.e., the modes) of F(x) is small. 4.2 TD3-BST We can use the Morse network to design a regularized policy objective. Recall that policy regularization consists of Q-value maximization and minimization of a distance to the behavior policy (Equation 1). We reconsider the policy regularization term and train a policy that minimizes uncertainty while selecting actions close to the behavior policy. Let $C^\pi(s, a)$ denote a measure of the uncertainty of the policy action. We solve the following optimization problem: $\pi_{i+1} = \arg\min_{\pi \in \Pi} \mathbb{E}_{a \sim \pi(\cdot|s)}\left[C^\pi(s, a)\right]$ (7) s.t. $D_{KL}(\pi(\cdot \mid s) \,\|\, \pi_\beta(\cdot \mid s)) \le \epsilon$. (8) This optimization problem requires an explicit behavior model, which is difficult to estimate, and using an estimated model has historically produced mixed results [Kumar et al., 2019; Fujimoto et al., 2019]. Furthermore, it requires direct optimization through $C^\pi$, which may be subject to exploitation.
Instead, we enforce this implicitly by deriving the solution to the constrained optimization, obtaining a closed-form solution for the actor [Peng et al., 2019; Nair et al., 2020]. Enforcing the KKT conditions, we obtain the Lagrangian: $L(\pi, \mu) = \mathbb{E}_{a \sim \pi(\cdot|s)}\left[C^\pi(s, a)\right] + \mu\left(\epsilon - D_{KL}(\pi \,\|\, \pi_\beta)\right)$. (9) Computing $\frac{\partial L}{\partial \pi}$ and solving for π yields the uncertainty-minimizing solution $\pi^{C*}(a \mid s) \propto \pi_\beta(a \mid s)\, e^{\frac{1}{\mu} C^\pi(s, a)}$. When learning the parametric policy $\pi_\psi$, we project the nonparametric solution into the policy space as a (reverse) KL divergence minimization of $\pi_\psi$ under the data distribution D: $\arg\min_\psi \mathbb{E}_{s \sim D}\left[D_{KL}\left(\pi^{C*}(\cdot \mid s) \,\|\, \pi_\psi(\cdot \mid s)\right)\right]$ (10) $= \arg\min_\psi \mathbb{E}_{s \sim D}\left[D_{KL}\left(\pi_\beta(a \mid s)\, e^{\frac{1}{\mu} C^\pi(s, a)} \,\|\, \pi_\psi(\cdot \mid s)\right)\right]$ (11) $= \arg\min_\psi \mathbb{E}_{s,a \sim D}\left[-\log \pi_\psi(a \mid s)\, e^{\frac{1}{\mu} C^\pi(s, a)}\right]$, (12) which is a weighted maximum likelihood update where the supervised target is sampled from the dataset D and $C^\pi(s, a) = 1 - M_\phi(s, \pi_\psi(s))$. This avoids explicitly modeling the behavior policy and uses the Morse network uncertainty as a behavioral supervisor to dynamically adjust the strength of behavioral cloning. We provide a more detailed derivation in the appendix. Interpretation Our regularization method shares similarities with other weighted regression algorithms [Nair et al., 2020; Peng et al., 2019; Kostrikov et al., 2021b], which weight actions by their advantage relative to the dataset/replay buffer action. Our weighting can be thought of as a measure of the disadvantage of a policy action, in the sense of how OOD it is. We make modifications to the behavioral cloning objective. From Morse network property 1 we know $M_\phi \in [0, 1]$, hence $1 \le e^{\frac{1}{\mu} C^\pi} \le e^{\frac{1}{\mu}}$, i.e., the lowest possible disadvantage coefficient is 1. Because we want the coefficient to be minimized at a mode, we require it to approach 0 near a mode. We adjust the weighted behavioral cloning term and add Q-value maximization to yield the regularized policy update: $\pi_{i+1} \leftarrow \arg\max_\pi \mathbb{E}_{s,a \sim D,\, a_\pi \sim \pi_i(s)}\left[\frac{1}{Z_Q} Q_{i+1}(s, a_\pi) - \left(e^{\frac{1}{\mu} C^\pi(s, a)} - 1\right)(a_\pi - a)^2\right]$, (13) where μ is the Lagrangian multiplier that controls the magnitude of the disadvantage weight and $Z_Q = \frac{1}{N} \sum_{n=1}^{N} |Q(s, a_\pi)|$ is a scaling term detached from the gradient update [Fujimoto and Gu, 2021], necessary because Q(s, a) can be arbitrarily large while the BC coefficient is upper-bounded at $e^{\frac{1}{\mu}}$. The value function update is given by: $Q_{i+1} \leftarrow \arg\min_Q \mathbb{E}_{s,a,s' \sim D}\left[(y - Q_i(s, a))^2\right]$, (14) with $y = r(s, a) + \gamma\, \bar{Q}(s', a')$, $a' \sim \bar{\pi}(s')$, where $\bar{Q}$ and $\bar{\pi}$ are the target value and policy functions, respectively. 4.3 Controlling the Tradeoff Constraint. Tuning TD3-BST is straightforward; the primary hyperparameters of the Morse network consist of the choice and scale of the kernel, and the temperature μ. Increasing λ for higher-dimensional actions ensures that the high-certainty region around modes remains tight. Prior empirical work has demonstrated the importance of allowing some degree of OOD actions [An et al., 2021]; in the TD3-BST framework, this is dependent on λ. In Figure 2 we provide a didactic example of the effect of λ.
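Before turning to that example, the fitting step itself is compact. The following is a hedged sketch of one gradient step on the empirical loss of Equation 6, with the contrastive structure made explicit: dataset actions are pulled onto the mode submanifold and uniformly sampled actions are pushed away. The perturbation network, its sizes, and the optimizer settings are illustrative assumptions; the kernel matches Equation 3.

```python
import torch
import torch.nn as nn

def rq_kernel(z1, z2, lam=1.0, kappa=1.0):
    # Rational Quadratic Morse kernel (Equation 3).
    return (1.0 + (lam ** 2) / (2.0 * kappa) * ((z1 - z2) ** 2).sum(dim=-1)) ** (-kappa)

state_dim, act_dim = 4, 2
# Perturbation model f_phi(s, a) -> a_hat, intended so that a_hat = a only on-support.
f_phi = nn.Sequential(nn.Linear(state_dim + act_dim, 256), nn.ReLU(),
                      nn.Linear(256, 256), nn.ReLU(),
                      nn.Linear(256, act_dim))
optimizer = torch.optim.Adam(f_phi.parameters(), lr=3e-4)

def morse_loss(s, a, lam=1.0):
    # First term of Equation 6: -log K(f_phi(s, a), a) for dataset actions.
    positive = rq_kernel(f_phi(torch.cat([s, a], dim=-1)), a, lam)
    # Second term: K(f_phi(s, a_u), a_u) for uniform actions a_u ~ D_uni.
    a_u = torch.empty_like(a).uniform_(-1.0, 1.0)
    negative = rq_kernel(f_phi(torch.cat([s, a_u], dim=-1)), a_u, lam)
    return (-torch.log(positive + 1e-8) + negative).mean()

s = torch.randn(128, state_dim)                       # stand-in minibatch (s, a) ~ D
a = torch.empty(128, act_dim).uniform_(-1.0, 1.0)
loss = morse_loss(s, a)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```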
For the example in Figure 2, we construct a dataset of 2-dimensional actions in [-1, 1] with means at the four locations {[0.0, 0.8], [0.0, -0.8], [0.8, 0.0], [-0.8, 0.0]}, each with standard deviation 0.05. We sample M = 128 points, train a Morse network, and plot the density produced by the Morse network for λ ∈ {0.1, 0.5, 1.0, 2.0}. A behavioral cloning policy learned using vanilla MLE, where all targets are weighted equally, results in an OOD action being selected. Training using Morse-weighted BC down-weights the behavioral cloning loss for faraway modes, enabling the policy to select, and minimize its error to, a single mode. Figure 2 (panels a-d: λ = 0.1, 0.5, 1.0, 2.0; e: ground truth; f: density for λ = 1.0): a-d: Contour plots of unnormalized densities produced by a Morse network for increasing λ, with ground truth actions included as × marks. e: Ground truth actions in the synthetic dataset and the MLE action (red). A Morse-certainty-weighted MLE model can select actions in a single mode, in this case the mode centred at [0.8, 0.0] (orange). Weighting a divergence constraint using a Morse (un)certainty will encourage the policy to select actions near the modes of M_φ that maximize reward. f: Plot of the 3D unnormalized Morse density for λ = 1.0.
Algorithm 1: TD3-BST Training Procedure Outline. The policy is updated once for every m = 2 critic updates, as is the default in TD3.
Input: dataset D = {s, a, r, s'}.
Phase 1 (fit the Morse network): initialize the Morse network M_φ; for t = 1 to T_M: sample a minibatch (s, a) ∼ D; sample random actions a_u ∼ D_uni for each state s; update φ by minimizing Equation 6. Output: trained Morse network M_φ.
Phase 2 (actor-critic training): initialize the policy network π_ψ, critic Q_θ, target policy ψ̄ ← ψ and target critic θ̄ ← θ; for t = 1 to T_AC: sample a minibatch (s, a, r, s') ∼ D; update θ using Equation 14; if t mod m = 0, obtain a_π = π(s), update ψ using Equation 13, and update the target networks θ̄ ← ρθ + (1 - ρ)θ̄, ψ̄ ← ρψ + (1 - ρ)ψ̄. Output: trained policy π.
4.4 Algorithm Summary Fitting the Morse Network The TD3-BST training procedure is described in Algorithm 1. The first phase fits the Morse network for T_M gradient steps. Actor-Critic Training In the second phase of training, a modified TD3-BC procedure is used for T_AC iterations, with alterations highlighted in red. We provide full hyperparameter details in the appendix. 5 Experiments In this section, we conduct experiments that aim to answer the following questions:
• How does TD3-BST compare to other baselines, with a focus on comparing to newer baselines that use per-dataset tuning?
• Can the BST objective improve performance when used with one-step methods (IQL) that perform in-sample policy evaluation?
• How well does the Morse network learn to discriminate between in-dataset and OOD actions?
• How does changing the kernel scale parameter λ affect performance?
• Does using independent ensembles, a second method of uncertainty estimation, improve performance?
We evaluate our algorithm on the D4RL benchmark [Fu et al., 2020], including the Gym Locomotion and challenging Antmaze navigation tasks.
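For reference before the comparisons below, the actor update of Equation 13 reduces to a Q-normalized, TD3-BC-style loss whose behavioral-cloning weight is set per sample by the Morse uncertainty. The sketch below is a hedged rendering of that loss only; the actor, critic, and morse_certainty callables are assumed interfaces, and whether gradients are propagated through the weight is an implementation detail not specified here.

```python
import torch

def bst_actor_loss(actor, critic, morse_certainty, s, a_data, mu=0.5):
    """TD3-BST actor loss (Equation 13) for a batch (s, a_data) ~ D.

    morse_certainty(s, a) is assumed to return M_phi(s, a) in [0, 1].
    """
    a_pi = actor(s)                                   # policy actions
    q = critic(s, a_pi)                               # Q(s, a_pi)
    z_q = q.abs().mean().detach()                     # scale Z_Q, detached from gradients
    c = 1.0 - morse_certainty(s, a_pi)                # uncertainty C_pi(s, a)
    bc_weight = torch.exp(c / mu) - 1.0               # disadvantage coefficient in [0, e^{1/mu} - 1]
    bc_term = ((a_pi - a_data) ** 2).sum(dim=-1)      # squared deviation from the dataset action
    return (-(q / z_q) + bc_weight * bc_term).mean()  # minimize the negative of Equation 13
```

The critic step of Equation 14 is the usual TD3 target regression and needs no modification beyond using the target policy to select a'.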
5.1 Comparison with SOTA Methods We evaluate TD3-BST against the older, well-known baselines TD3-BC [Fujimoto and Gu, 2021], CQL [Kumar et al., 2020], and IQL [Kostrikov et al., 2021b]. There are more recent methods that consistently outperform these baselines; of these, we include SQL [Xu et al., 2023], SAC-RND [Nikulin et al., 2023], DOGE [Li et al., 2022], VMG [Zhu et al., 2022], ReBRAC [Tarasov et al., 2023], CFPI [Li et al., 2023] and MSG [Ghasemipour et al., 2022] (to our knowledge, the best-performing ensemble-based method). It is interesting to note that most of these baselines implement policy constraints, except for VMG (graph-based planning) and MSG (policy constraint using a large, independent ensemble). We note that all the aforementioned SOTA methods (except SQL) report scores with per-dataset tuned hyperparameters, in stark contrast with the older TD3-BC, CQL, and IQL algorithms, which use the same set of hyperparameters in each D4RL domain. All scores are reported with 10 evaluations in Locomotion and 100 in Antmaze, across five seeds. We present scores for D4RL Gym Locomotion in Table 1. TD3-BST achieves best or near-best results compared to all previous methods and recovers expert performance on five of nine datasets. The best-performing prior methods include SAC-RND and ReBRAC, both of which require per-dataset tuning of BRAC-variant algorithms [Wu et al., 2019]. We evaluate TD3-BST on the more challenging Antmaze tasks, which contain a high degree of suboptimal trajectories and follow a sparse reward scheme that requires algorithms to stitch together several trajectories to perform well. TD3-BST achieves the best scores overall in Table 2, especially as the maze becomes more complex. VMG and MSG are the best-performing prior baselines, and TD3-BST is far simpler and more efficient in its design as a variant of TD3-BC. The authors of VMG report the best scores from checkpoints rather than from the final policy. MSG reports scores from ensembles with both 4 and 64 critics, of which the best scores, included here, are from the 64-critic variant. We pay close attention to SAC-RND, which, among all baselines, is most similar in its inception to TD3-BST. SAC-RND uses a random and trained network pair to produce a dataset-constraining penalty. SAC-RND achieves consistent SOTA scores on locomotion datasets, but fails to deliver commensurate performance on Antmaze tasks. TD3-BST performs similarly to SAC-RND in locomotion and achieves SOTA scores in Antmaze. 5.2 Improving One-Step Methods One-step algorithms learn a policy from an offline dataset while remaining on-policy [Rummery and Niranjan, 1994; Sutton and Barto, 2018], typically using weighted behavioral cloning [Brandfonbrener et al., 2021; Kostrikov et al., 2021b]. Empirical evaluation by [Fu et al., 2022] suggests that advantage-weighted BC is too restrictive and that relaxing the policy objective to Equation 1 can lead to performance improvements. We use the BST objective as a drop-in replacement for the policy improvement step in IQL [Kostrikov et al., 2021b] to learn an optimal policy while retaining in-sample policy evaluation. We reproduce IQL results and report scores for IQL-BST, both times using a deterministic policy [Tarasov et al., 2022] and hyperparameters identical to the original work, in Table 3. Reproduced IQL closely matches the original results, with slight performance reductions on the -large datasets.
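To spell out the drop-in replacement: only IQL's policy improvement step changes, with the advantage-weighted behavioral cloning loss swapped for the BST objective of Equation 13, while the expectile value and critic updates are left untouched. The sketch below is illustrative; the function signatures are assumptions, and the advantage-weighted variant is written for the deterministic-policy setting used in our reproduction.

```python
import torch

def iql_policy_loss(actor, q, v, s, a_data, beta=3.0):
    # Original IQL improvement step: advantage-weighted behavioral cloning.
    adv = (q(s, a_data) - v(s)).detach()
    weight = torch.clamp(torch.exp(beta * adv), max=100.0)
    return (weight * ((actor(s) - a_data) ** 2).sum(dim=-1)).mean()

def iql_bst_policy_loss(actor, q, morse_certainty, s, a_data, mu=0.5):
    # IQL-BST improvement step: Equation 13 in place of weighted BC; the
    # in-sample expectile evaluation of IQL is unchanged.
    a_pi = actor(s)
    q_val = q(s, a_pi)
    z_q = q_val.abs().mean().detach()
    c = 1.0 - morse_certainty(s, a_pi)
    bc_term = ((a_pi - a_data) ** 2).sum(dim=-1)
    return (-(q_val / z_q) + (torch.exp(c / mu) - 1.0) * bc_term).mean()
```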
Relaxing weighted BC with the BST objective leads to improvements in performance, especially on the more difficult -medium and -large datasets. To isolate the effect of the BST objective, we do not perform any additional tuning. 5.3 Ablation Experiments Morse Network Analysis We analyze how well the Morse network can distinguish between dataset tuples, samples from D_perm (permutations of dataset actions), and samples from D_uni. We plot both certainty (M_φ) densities and t-SNEs [Van der Maaten and Hinton, 2008] in Figure 3, which show that the unsupervised Morse network is effective at distinguishing between D_perm and D_uni and at assigning high certainty to dataset tuples. Ablating kernel scale We examine sensitivity to the kernel scale λ. Recall that k = dim(A). We see in Figure 4 that the scale λ = k/2 is a performance sweet spot on the challenging Antmaze tasks. We further illustrate this by plotting policy deviations from dataset actions in Figure 5. The scale λ = 1.0 is potentially too lax a behavioral constraint, while λ = k is too strong, resulting in reduced performance. However, performance at all scales remains strong and compares well with most prior algorithms. Performance may be further improved by tuning λ, possibly with separate scales for each input dimension. Figure 3: M_φ densities and t-SNE for hopper-medium-expert (top row) and Antmaze-large-diverse (bottom row). Density plots are clipped at 10.0 as the density for D is large. 10 actions are sampled from D_uni and D_perm each, per state. t-SNE is plotted from the per-dimension perturbation |f_φ(s, a) - a|. Figure 4: Ablations of λ on Antmaze datasets. Recall k = dim(A). Independent or Shared Targets? Standard TD3 employs Clipped Double Q-learning (CDQ) [Hasselt, 2010; Fujimoto et al., 2018] to prevent value overestimation. On tasks with sparse rewards, this may be too conservative [Moskovitz et al., 2021]. MSG [Ghasemipour et al., 2022] uses large ensembles of fully independent Q functions to learn offline. We examine how independent double Q functions perform compared to the standard CDQ setup on Antmaze with 2 and 10 critics. The results in Figure 6 show that disabling CDQ with 2 critics is consistently detrimental to performance. Using a larger 10-critic ensemble leads to moderate improvements. This suggests that combining policy regularization with an efficient, independent ensemble could bring further performance benefits with minimal changes to the algorithm. 6 Discussion Morse Network In [Dherin et al., 2023], deeper architectures are required even when training on simple datasets. This rings true for our application of Morse networks in this work, with low-capacity networks performing poorly. Training the Morse network for each locomotion and Antmaze dataset typically takes 10 minutes for 100,000 gradient steps using a batch size of 1,024. When training the policy, using the Morse network increases training time by approximately 15%.
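The modest overhead reflects how the fitted Morse network is used in the second phase: its parameters are frozen and it is queried once per actor update to produce the behavioral-cloning weight. The wiring below is a hedged sketch; the helper and the commented usage are illustrative rather than a prescribed implementation.

```python
import torch
import torch.nn as nn

def freeze(module: nn.Module) -> nn.Module:
    # Fix the Morse network's parameters after phase 1. The frozen module can
    # still be differentiated through with respect to its inputs, so the
    # actor's action may receive a gradient from the weight if desired.
    for p in module.parameters():
        p.requires_grad_(False)
    return module.eval()

# morse_certainty = freeze(fitted_morse_network)      # output of phase 1
# In each actor update of phase 2 (see Equation 13):
#   c = 1.0 - morse_certainty(s, actor(s))            # one extra forward pass
#   w = torch.exp(c / mu) - 1.0                       # per-sample BC weight
```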
Optimal Datasets On Gym Locomotion tasks, TD3-BST performance is comparable to newer methods, all of which rarely outperform older baselines. This can be attributed to a significant proportion of high-return-yielding trajectories that are easier to improve.
Dataset | TD3-BC | CQL | IQL | SQL | SAC-RND1 | DOGE | ReBRAC | CFPI | TD3-BST (ours)
halfcheetah-m | 48.3 | 44.0 | 47.4 | 48.3 | 66.6 | 45.3 | 65.6 | 52.1 | 62.1 ± 0.8
hopper-m | 59.3 | 58.5 | 66.3 | 75.5 | 97.8 | 98.6 | 102.0 | 86.8 | 102.9 ± 1.3
walker2d-m | 83.7 | 72.5 | 78.3 | 84.2 | 91.6 | 86.8 | 82.5 | 88.3 | 90.7 ± 2.5
halfcheetah-m-r | 44.6 | 45.5 | 44.2 | 44.8 | 42.8 | 54.9 | 51.0 | 44.5 | 53.0 ± 0.7
hopper-m-r | 60.9 | 95.0 | 94.7 | 99.7 | 100.5 | 76.2 | 98.1 | 93.6 | 101.2 ± 4.9
walker2d-m-r | 81.8 | 77.2 | 73.9 | 81.2 | 88.7 | 87.3 | 77.3 | 78.2 | 90.4 ± 8.3
halfcheetah-m-e | 90.7 | 91.6 | 86.7 | 94.0 | 107.6 | 78.7 | 101.1 | 97.3 | 100.7 ± 1.1
hopper-m-e | 98.0 | 105.4 | 91.5 | 111.8 | 109.8 | 102.7 | 107.0 | 104.2 | 110.3 ± 0.9
walker2d-m-e | 110.1 | 108.8 | 109.6 | 110.0 | 105.0 | 110.4 | 111.6 | 111.9 | 109.4 ± 0.2
Table 1: Normalized scores on D4RL Gym Locomotion datasets. VMG scores are excluded because this method performs poorly, and the authors of MSG do not report numerical results on locomotion tasks. Prior methods are grouped by those that do not perform per-dataset tuning and those that do. 1 SAC-RND, in addition to per-dataset tuning, is trained for 3 million gradient steps. Though not included here, ensemble methods may perform better than the best non-ensemble methods on some datasets, albeit still requiring per-dataset tuning to achieve their reported performance. Top scores are in bold and second-best are underlined.
Dataset | TD3-BC | CQL | IQL | SQL | SAC-RND1 | DOGE | VMG2 | ReBRAC | CFPI | MSG3 | TD3-BST (ours)
-umaze | 78.6 | 74.0 | 87.5 | 92.2 | 97.0 | 97.0 | 93.7 | 97.8 | 90.2 | 98.6 | 97.8 ± 1.0
-umaze-d | 71.4 | 84.0 | 62.2 | 74.0 | 66.0 | 63.5 | 94.0 | 88.3 | 58.6 | 81.8 | 91.7 ± 3.2
-medium-p | 10.6 | 61.2 | 71.2 | 80.2 | 74.7 | 80.6 | 82.7 | 84.0 | 75.2 | 89.6 | 90.2 ± 1.8
-medium-d | 3.0 | 53.7 | 70.0 | 79.1 | 74.7 | 77.6 | 84.3 | 76.3 | 72.2 | 88.6 | 92.0 ± 3.8
-large-p | 0.2 | 15.8 | 39.6 | 53.2 | 43.9 | 48.2 | 67.3 | 60.4 | 51.4 | 72.6 | 79.7 ± 7.6
-large-d | 0.0 | 14.9 | 47.5 | 52.3 | 45.7 | 36.4 | 74.3 | 54.4 | 52.4 | 71.4 | 76.1 ± 4.7
Table 2: Normalized scores on D4RL Antmaze datasets. 1 SAC-RND is trained for three million gradient steps. 2 VMG reports scores from the best-performing checkpoint rather than from the final policy; despite this, TD3-BST still outperforms VMG on all datasets except -umaze-diverse. 3 For MSG we report the best score among the reported scores of all configurations; also, MSG is trained for two million steps. Prior methods are grouped by those that do not perform per-dataset tuning and those that do. Other ensemble-based methods are not included, as MSG achieves higher performance. Top scores are in bold and second-best are underlined.
Dataset | IQL (reproduced) | IQL-BST
-umaze | 87.6 ± 4.6 | 90.8 ± 2.1
-umaze-d | 64.0 ± 5.2 | 63.1 ± 3.7
-medium-p | 70.7 ± 4.3 | 80.3 ± 1.3
-medium-d | 73.8 ± 5.9 | 84.7 ± 2.0
-large-p | 35.2 ± 8.4 | 55.4 ± 3.2
-large-d | 40.7 ± 9.2 | 51.6 ± 2.6
Table 3: Normalized scores on D4RL Antmaze datasets for IQL and IQL-BST. We use hyperparameters identical to the original IQL paper and use Equation 13 as a drop-in replacement for the policy objective.
7 Conclusion In this paper, we introduce TD3-BST, an algorithm that uses an uncertainty model to dynamically adjust the strength of regularization. Dynamic weighting allows the policy to maximize reward around individual dataset modes.
Our algorithm compares well against prior methods on Gym Locomotion tasks and achieves the best scores on the more challenging Antmaze tasks, demonstrating strong performance when learning from suboptimal data. In addition, our experiments show that combining our policy regularization with an ensemble-based source of uncertainty can improve performance. Future work can explore other methods of estimating uncertainty, alternative uncertainty measures, and how best to combine multiple sources of uncertainty. Figure 5: Histograms of deviation from dataset actions for (a) hopper-medium and (b) antmaze-large-play. Figure 6: % change in Antmaze scores without CDQ for critic ensembles consisting of 2 and 10 Q functions." }
D has limited coverage over S \u00d7A; hence, offline RL algorithms must constrain the policy to select actions within the dataset support. To this end, algorithms employ one of three approaches: 1) policy constraints; 2) critic regularization; or 3) uncertainty penalization. Policy constraint Policy constraints modify the actor\u2019s objective only to minimize divergence from the behavior policy. Most simply, this adds a constraint term [Fujimoto and Gu, 2021; Tarasov et al., 2023] to the policy objective: arg max \u03c0 E{s,a}\u223cD [Q(s, \u03c0(s)) \u2212\u03b1D(\u03c0, \u03c0\u03b2)] , (1) where \u03b1 is a scalar controlling the strength of regularization, D(\u00b7, \u00b7) is a divergence function between the policy \u03c0 and the behavior policy \u03c0\u03b2. In offline RL, we do not have access to \u03c0\u03b2; some prior methods attempt to estimate it empirically [Kostrikov et al., 2021a; Li et al., 2023] which is challenging when the dataset is generated by a mixture of policies. Furthermore, selecting the constraint strength can be challenging and difficult to generalize across datasets with similar environments [Tarasov et al., 2023; Kostrikov et al., 2021a]. Other policy constraint approaches use weighted BC [Nair et al., 2020; Kostrikov et al., 2021b; Xu et al., 2023] or (surrogate) BC constraints [Li et al., 2022; Wu et al., 2019; Li et al., 2023]. The former methods may be too restrictive as they do not allow OOD action selection, which is crucial to improve performance [Fu et al., 2022]. The latter methods may still require substantial tuning and in addition to training if using model-based score methods. Other methods impose architectural constraints [Kumar et al., 2019; Fujimoto et al., 2019] that parameterize separate BC and reward-maximizing policy models. Critic Regularization Critic regularization methods directly address the OOD action-value overestimation problem by penalizing large values for adversarially sampled actions [Kostrikov et al., 2021a]. Ensembles Employing an ensemble of neural network estimators is a commonly used technique for prediction with a measure of epistemic uncertainty [Kondratyuk et al., 2020]. A family of offline RL methods employ large ensembles of value functions [An et al., 2021] and make use of the diversity of randomly initialized ensembles to implicitly reduce the selection of OOD actions or directly penalize the variance of the reward in the ensemble [Ghasemipour et al., 2022; Sutton and Barto, 2018]. Model-Based Uncertainty Estimation Learning an uncertainty model of the dataset is often devised analogously to exploration-encouraging methods used in online RL, but, employing these for anti-exploration instead [Rezaeifar et al., 2022]. An example is SAC-RND which directly adopts such an approach [Nikulin et al., 2023]. Other algorithms include DOGE [Li et al., 2022] which trains a model to estimate uncertainty as a distance to dataset action and DARL [Zhang et al., 2023] which uses distance to random projections of stateaction pairs as an uncertainty measure. As a whole, these methods optimize a distance d(\u00b7, \u00b7) \u22650 that represents the uncertainty of an action. 2.2 Uncertainty Estimation Neural networks are known to predict confidently even when presented with OOD samples [Nguyen et al., 2015; Goodfellow et al., 2014; Lakshminarayanan et al., 2017]. 
A classical approach to OOD detection is to fit a generative model to the dataset that produces a high probability for in-dataset samples and a low probability for OOD ones. These methods work well for simple, unimodal data but can become computationally demanding for more complex data with multiple modes. Another approach trains classifiers that are leveraged to become finer-grained OOD detectors [Lee et al., 2018]. In this work, we focus on Morse neural networks [Dherin et al., 2023], an approach that trains a generative model to produce an unnormalized density that takes on value 1 at the dataset modes. 3 Preliminaries A Morse neural network produces an unnormalized density M(x) \u2208[0, 1] on an embedding space Re [Dherin et al., 2023]. A Morse network can produce a density in Re that attains a value of 1 at mode submanifolds and decreases towards 0 when moving away from the mode. The rate at which the value decreases is controlled by a Morse Kernel. Definition 1 (Morse Kernel). A Morse Kernel is a positive definite kernel K. When applied in a space Z = Rk, the kernel K(z1, z2) takes values in the interval [0, 1] where K(z1, z2) = 1 iff z1 = z2. All kernels of the form K(z1, z2) = e\u2212D(z1,z2) where D(\u00b7, \u00b7) is a divergence [Amari, 2016] are Morse Kernels. Examples include common kernels such as the Radial Basis Function (RBF) Kernel, KRBF (z1, z2) = e\u2212\u03bb2 2 ||z1\u2212z2||2. (2) The RBF kernel and its derivatives decay exponentially, leading learning signals to vanish rapidly. An alternative is the ubiquitous Rational Quadratic (RQ) kernel: KRQ(z1, z2) = \u0012 1 + \u03bb2 2\u03ba || z1 \u2212z2 ||2 \u0013\u2212\u03ba (3) where \u03bb is a scale parameter in each kernel. The RQ kernel is a scaled mixture of RBF kernels controlled by \u03ba and, for small \u03ba, decays much more slowly [Williams and Rasmussen, 2006]. Consider a neural network that maps from a feature space into a latent space f\u03d5 : X \u2192Z, with parameters \u03d5, X \u2208Rd and Z \u2208Rk. A Morse Kernel can impose structure on the latent space. Definition 2 (Morse Neural Network). A Morse neural network is a function f\u03d5 : X \u2192Z in combination with a Morse Kernel on K(z, t) where t \u2282Z is a target, chosen as a hyperparameter of the model. The Morse neural network is defined as M\u03d5(x) = K(f\u03d5(x), t). Using Definition 1 we see that M\u03d5(x) \u2208[0, 1], and when M\u03d5(x) = 1, x corresponds to a mode that coincides with the level set of the submanifold of the Morse neural network. Furthermore, M\u03d5(x) corresponds to the certainty of the sample x being from the training dataset, so 1 \u2212M\u03d5(x) is a measure of the epistemic uncertainty of x. The function \u2212log M\u03d5(x) measures a squared distance, d(\u00b7, \u00b7), between f\u03d5(x) and the closest mode in the latent space at m: d(z) = min m\u2208M d(z, m), (4) where M is the set of all modes. This encodes information about the topology of the submanifold and satisfies the Morse\u2013Bott non-degeneracy condition [Basu and Prasad, 2020]. The Morse neural network offers the following properties: 1 M\u03d5(x) \u2208[0, 1]. 2 M\u03d5(x) = 1 at its mode submanifolds. 3 \u2212log M\u03d5(x) \u22650 is a squared distance that satisfies the Morse\u2013Bott non-degeneracy condition on the mode submanifolds. 4 As M\u03d5(x) is an exponentiated squared distance, the function is also distance aware in the sense that as f\u03d5(x) \u2192t, M\u03d5(x) \u21921. Proof of each property is provided in the appendix. 
4 Policy Constraint with a Behavioral Supervisor We now describe the constituent components of our algorithm, building on the Morse network and showing how it can be incorporated into a policy-regularized objective. 4.1 Morse Networks for Offline RL The target t is a hyperparameter that must be chosen. Experiments in [Dherin et al., 2023] use simple, toy datasets with classification problems that perform well for categorical t. We find that using a static label for the Morse network yields poor performance; rather than a labeling model, we treat f\u03d5 as a perturbation model that produces an action f\u03d5(s, a) = \u02c6 a such that \u02c6 a = a if and only if s, a \u223cD. An offline RL dataset D consists of tuples {s, a, r, s\u2032} where we assume {s, a} pairs are i.i.d. sampled from an unknown distribution. The Morse network must be fitted on N state-action pairs [{s1, a1, }, ..., {sN, aN}] such that M\u03d5(si, aj) = 1, \u2200i, j \u22081, ..., N ] only when i = j. We fit a Morse neural network to minimize the KL divergence between unnormalized measures [Amari, 2016] following [Dherin et al., 2023], DKL(D(s, a) || M\u03d5(s, a)): min \u03d5 Es,a\u223cD \u0014 log D(s, a) M\u03d5(s, a) \u0015 + Z M\u03d5(s, a) \u2212D(s, a) da. (5) With respect to \u03d5, this amounts to minimizing the empirical loss: L(\u03d5) = \u22121 N X s,a\u223cD log K(f\u03d5(s, a), a) + 1 N X s\u223cD a \u00af D\u223cDuni K(f\u03d5(s, au), au), (6) where au is an action sampled from a uniform distribution over the action space Duni. A learned Morse density is well suited to modeling ensemble policies [Lei et al., 2023], more flexibly [Dherin et al., 2023; Kostrikov et al., 2021a; Li et al., 2023] and without down-weighting good, in-support actions that have low density under the behavior policy [Singh et al., 2022] as all modes have unnormalized density value 1. A Morse neural network can be expressed as an energybased model (EBM) [Goodfellow et al., 2016]: Proposition 1. A Morse neural network can be expressed as an energy-based model: E\u03d5(x) = e\u2212log M\u03d5(x) where M\u03d5 : Rd \u2192R. Note that the EBM E\u03d5 is itself unnormalized. Representing the Morse network as an EBM allows analysis analogous to [Florence et al., 2022]. Theorem 1. For a set-valued function F(x) : x \u2208Rm \u2192 Rn\\{\u2205}, there exists a continuous function g : Rm+n \u2192R that is approximated by a continuous function approximator g\u03d5 with arbitrarily small bounded error \u03f5. This ensures that any point on the graph F\u03d5(x) = arg miny g\u03d5(x, y) is within distance \u03f5 of F. We refer the reader to [Florence et al., 2022] for a detailed proof. The theorem assumes that F(x) is an implicit function and states that the error at the level-set (i.e. the modes) of F(x) is small. 4.2 TD3-BST We can use the Morse network to design a regularized policy objective. Recall that policy regularization consists of Q-value maximization and minimization of a distance to the behavior policy (Equation 1). We reconsider the policy regularization term and train a policy that minimizes uncertainty while selecting actions close to the behavior policy. Let C\u03c0(s, a) denote a measure of uncertainty of the policy action. We solve the following optimization problem: \u03c0i+1 = arg min \u03c0\u2208\u03a0 Ea\u223c\u03c0(\u00b7|s) [C\u03c0(s, a)] (7) s.t. DKL (\u03c0(\u00b7 | s) || \u03c0\u03b2(\u00b7 | s)) \u2264\u03f5. 
(8) This optimization problem requires an explicit behavior model, which is difficult to estimate and using an estimated model has historically returned mixed results [Kumar et al., 2019; Fujimoto et al., 2019]. Furthermore, this requires direct optimization through C\u03c0 which may be subject to exploitation. Instead, we enforce this implicitly by deriving the solution to the constrained optimization to obtain a closed-form solution for the actor [Peng et al., 2019; Nair et al., 2020]. Enforcing the KKT conditions we obtain the Lagrangian: L(\u03c0, \u00b5) = Ea\u223c\u03c0(\u00b7|s) [C\u03c0(s, a)] + \u00b5(\u03f5 \u2212DKL(\u03c0 || \u03c0\u03b2)). (9) Computing \u2202L \u2202\u03c0 and solving for \u03c0 yields the uncertainty minimizing solution \u03c0C\u2217(a | s) \u221d\u03c0\u03b2(a | s)e 1 \u00b5 C\u03c0(s,a). When learning the parametric policy \u03c0\u03c8, we project the nonparametric solution into the policy space as a (reverse) KL divergence minimization of \u03c0\u03c8 under the data distribution D: arg min \u03c8 Es\u223cD h DKL \u0010 \u03c0C\u2217(\u00b7 | s) || \u03c0\u03c8(\u00b7 | s) \u0011i (10) = arg min \u03c8 Es\u223cD h DKL \u0010 \u03c0\u03b2(a | s)e 1 \u00b5 C\u03c0(s,a) || \u03c0\u03c8(\u00b7 | s) \u0011i (11) = arg min \u03c8 Es,a\u223cD h \u2212log \u03c0\u03c8(a | s)e 1 \u00b5 C\u03c0(s,a)i , (12) which is a weighted maximum likelihood update where the supervised target is sampled from the dataset D and C\u03c0(s, a) = 1 \u2212M\u03d5(s, \u03c0\u03c8(s)). This avoids explicitly modeling the behavior policy and uses the Morse network uncertainty as a behavior supervisor to dynamically adjust the strength of behavioral cloning. We provide a more detailed derivation in the appendix. Interpretation Our regularization method shares similarities with other weighted regression algorithms [Nair et al., 2020; Peng et al., 2019; Kostrikov et al., 2021b] which weight the advantage of an action compared to the dataset/replay buffer action. Our weighting can be thought of as a measure of disadvantage of a policy action in the sense of how OOD it is. We make modifications to the behavioral cloning objective. From Morse network property 1 we know M\u03d5 \u2208[0, 1], hence 1 \u2264e 1 \u00b5 C\u03c0 \u2264e 1 \u00b5 , i.e. the lowest possible disadvantage coefficient is 1. To minimize the coefficient in the mode, we require it to approach 0 when near a mode. We adjust the weighted behavioral cloning term and add Q-value maximization to yield the regularized policy update: \u03c0i+1 \u2190arg max \u03c0 E s,a\u223cD, a\u03c0\u223c\u03c0i(s) [ 1 ZQ Qi+1(s, a\u03c0) \u2212(e 1 \u00b5 C\u03c0(s,a) \u22121)(a\u03c0 \u2212a)2], (13) where \u00b5 is the Lagrangian multiplier that controls the magnitude of the disadvantage weight and ZQ = 1 N PN n=1|Q(s, a\u03c0)| is a scaling term detached from the gradient update process [Fujimoto and Gu, 2021], necessary as Q(s, a) can be arbitrarily large and the BC-coefficient is upper-bounded at e 1 \u00b5 . The value function update is given by: Qi+1 \u2190arg min Q Es,a,s\u2032\u223cD[(y \u2212Qi(s, a))2], (14) with y = r(s, a) + \u03b3Es\u2032\u223c\u00af \u03c0(s\u2032) \u00af Q(s\u2032, a\u2032) where \u00af Q and \u00af \u03c0 are target value and policy functions, respectively. 4.3 Controlling the Tradeoff Constraint Tuning TD3-BST is straightforward; the primary hyperparameters of the Morse network consist of the choice and scale of the kernel, and the temperature \u00b5. 
Increasing \u03bb for higher dimensional actions ensures that the high certainty region around modes remains tight. Prior empirical work has demonstrated the importance of allowing some degree of OOD actions [An et al., 2021]; in the TD3-BST framework, this is dependent on \u03bb. In Figure 2 we provide a didactic example of the effect of \u03bb. We construct a dataset consisting of 2-dimensional actions in [\u22121, 1] with means at the four locations {[0.0, 0.8], [0.0, \u22120.8], [0.8, 0.0], [\u22120.8, 0.0]} and each with standard deviation 0.05. We sample M = 128 points, train a Morse network and plot the density produced by the Morse network for \u03bb = { 1 10, 1 2, 1.0, 2.0}. A behavioral cloning policy learned using vanilla MLE where all targets are weighted equally results in an OOD action being selected. Training using Morse-weighted BC downweights the behavioral cloning loss for far away modes enabling the policy to select and minimize error to a single mode. (a) \u03bb = 0.1 (b) \u03bb = 0.5 (c) \u03bb = 1.0 (d) \u03bb = 2.0 (e) Ground Truth (f) Density \u03bb = 1.0 Figure 2: a-d: Contour plots of unnormalized densities produced by a Morse network for increasing \u03bb with ground truth actions included as \u00d7 marks. e: Ground truth actions in the synthetic dataset, the MLE action (red). A Morse certainty weighted MLE model can select actions in a single mode, in this case, the mode centred at [0.8, 0.0] (orange). Weighting a divergence constraint using a Morse (un)certainty will encourage the policy to select actions near the modes of M\u03d5 that maximize reward. f: Plot of the 3D unnormalized Morse density for \u03bb = 1.0. Algorithm 1 TD3-BST Training Procedure Outline. The policy is updated once for every m = 2 critic updates, as is the default in TD3. Input: Dataset D = {s, a, r, s\u2032} Initialize: Initialize Morse network M\u03d5. Output: Trained Morse network M\u03d5. Let t = 0. for t = 1 to TM do Sample minibatch (s, a) \u223cD Sample random actions a \u00af D \u223cDuni for each state s Update \u03d5 by minimizing Equation 6 end for Initialize: Initialize policy network \u03c0\u03c8, critic Q\u03b8, target policy \u00af \u03c8 \u2190\u03c8 and target critic \u00af \u03b8 \u2190\u03b8. Output: Trained policy \u03c0. Let t = 0. for t = 1 to TAC do Sample minibatch (s, a, r, s\u2032) \u223cD Update \u03b8 using Equation 14 if t mod m = 0 then Obtain a\u03c0 = \u03c0(s) Update \u03c8 using Equation 13 Update target networks \u00af \u03b8 \u2190\u03c1\u03b8 + (1 \u2212\u03c1)\u00af \u03b8, \u00af \u03c8 \u2190 \u03c1\u03c8 + (1 \u2212\u03c1) \u00af \u03c8 end if end for return \u03c0 4.4 Algorithm Summary Fitting the Morse Network The TD3-BST training procedure is described in Algorithm 1. The first phase fits the Morse network for TM gradient steps. Actor\u2013Critic Training In the second phase of training, a modified TD3-BC procedure is used for TAC iterations with alterations highlighted in red. We provide full hyperparameter details in the appendix. 5 Experiments In this section, we conduct experiments that aim to answer the following questions: \u2022 How does TD3-BST compare to other baselines, with a focus on comparing to newer baselines that use perdataset tuning? \u2022 Can the BST objective improve performance when used with one-step methods (IQL) that perform in-sample policy evaluation? \u2022 How well does the Morse network learn to discriminate between in-dataset and OOD actions? 
\u2022 How does changing the kernel scale parameter \u03bb affect performance? \u2022 Does using independent ensembles, a second method of uncertainty estimation, improve performance? We evaluate our algorithm on the D4RL benchmark [Fu et al., 2020], including the Gym Locomotion and challenging Antmaze navigation tasks. 5.1 Comparison with SOTA Methods We evaluate TD3-BST against the older, well known baselines of TD3-BC [Fujimoto and Gu, 2021], CQL [Kumar et al., 2020], and IQL [Kostrikov et al., 2021b]. There are more recent methods that consistently outperform these baselines; of these, we include SQL [Xu et al., 2023], SAC-RND [Nikulin et al., 2023], DOGE [Li et al., 2022], VMG [Zhu et al., 2022], ReBRAC [Tarasov et al., 2023], CFPI [Li et al., 2023] and MSG [Ghasemipour et al., 2022] (to our knowledge, the best-performing ensemble-based method). It is interesting to note that most of these baselines implement policy constraints, except for VMG (graph-based planning) and MSG (policy constraint using a large, independent ensemble). We note that all the aforementioned SOTA methods (except SQL) report scores with per-dataset tuned parameters in stark contrast with the older TD3-BC, CQL, and IQL algorithms, which use the same set of hyperparameters in each D4RL domain. All scores are reported with 10 evaluations in Locomotion and 100 in Antmaze across five seeds. We present scores for D4RL Gym Locomotion in Table 1. TD3-BST achieves best or near-best results compared to all previous methods and recovers expert performance on five of nine datasets. The best performing prior methods include SAC-RND and ReBRAC, both of which require per-dataset tuning of BRAC-variant algorithms [Wu et al., 2019]. We evaluate TD3-BST on the more challenging Antmaze tasks which contain a high degree of suboptimal trajectories and follow a sparse reward scheme that requires algorithms to stitch together several trajectories to perform well. TD3-BST achieves the best scores overall in Table 2, especially as the maze becomes more complex. VMG and MSG are the bestperforming prior baselines and TD3-BST is far simpler and more efficient in its design as a variant of TD3-BC. The authors of VMG report the best scores from checkpoints rather than from the final policy. MSG report scores from ensembles with both 4 and 64 critics of which the best scores included here are from the 64-critic variant. We pay close attention to SAC-RND, which, among all baselines, is most similar in its inception to TD3-BST. SACRND uses a random and trained network pair to produce a dataset-constraining penalty. SAC-RND achieves consistent SOTA scores on locomotion datasets, but fails to deliver commensurate performance on Antmaze tasks. TD3-BST performs similarly to SAC-RND in locomotion and achieves SOTA scores in Antmaze. 5.2 Improving One-Step Methods One-step algorithms learn a policy from an offline dataset, thus remaining on-policy [Rummery and Niranjan, 1994; Sutton and Barto, 2018], and using weighted behavioral cloning [Brandfonbrener et al., 2021; Kostrikov et al., 2021b]. Empirical evaluation by [Fu et al., 2022] suggests that advantageweighted BC is too restrictive and relaxing the policy objective to Equation 1 can lead to performance improvement. We use the BST objective as a drop-in replacement for the policy improvement step in IQL [Kostrikov et al., 2021b] to learn an optimal policy while retaining in-sample policy evaluation. 
We reproduce IQL results and report scores for IQL-BST, in both cases using a deterministic policy [Tarasov et al., 2022] and hyperparameters identical to the original work; results are shown in Table 3. Reproduced IQL closely matches the original results, with slight performance reductions on the -large datasets. Relaxing weighted BC with the BST objective leads to improvements in performance, especially on the more difficult -medium and -large datasets. To isolate the effect of the BST objective, we do not perform any additional tuning.

5.3 Ablation Experiments
Morse Network Analysis We analyze how well the Morse network can distinguish between dataset tuples and samples from D_perm (permutations of dataset actions) and D_uni. We plot both certainty (M_φ) density and t-SNEs [Van der Maaten and Hinton, 2008] in Figure 3, which show that the unsupervised Morse network is effective in distinguishing between D_perm and D_uni and in assigning high certainty to dataset tuples.
Ablating kernel scale We examine sensitivity to the kernel scale λ. Recall that k = dim(A). We see in Figure 4 that the scale λ = k/2 is a performance sweet spot on the challenging Antmaze tasks. We further illustrate this by plotting policy deviations from dataset actions in Figure 5. The scale λ = 1.0 is potentially too lax a behavioral constraint, while λ = k is too strong, resulting in performance reduction. However, performance at all scales remains strong and compares well with most prior algorithms. Performance may be further improved by tuning λ, possibly with separate scales for each input dimension.

Figure 3: M_φ densities and t-SNE for hopper-medium-expert (top row) and Antmaze-large-diverse (bottom row). Density plots are clipped at 10.0 as the density for D is large. 10 actions are sampled from D_uni and D_perm each, per state. t-SNE is plotted from the per-dimension perturbation |f_φ(s, a) − a|.
Figure 4: Ablations of λ on Antmaze datasets. Recall k = dim(A).

Independent or Shared Targets? Standard TD3 employs Clipped Double Q-learning (CDQ) [Hasselt, 2010; Fujimoto et al., 2018] to prevent value overestimation. On tasks with sparse rewards, this may be too conservative [Moskovitz et al., 2021]. MSG [Ghasemipour et al., 2022] uses large ensembles of fully independent Q functions to learn offline. We examine how independent double Q functions perform compared to the standard CDQ setup in Antmaze with 2 and 10 critics. The results in Figure 6 show that disabling CDQ with 2 critics is consistently detrimental to performance. Using a larger 10-critic ensemble leads to moderate improvements. This suggests that combining policy regularization with an efficient, independent ensemble could bring further performance benefits with minimal changes to the algorithm.

6 Discussion
Morse Network In [Dherin et al., 2023], deeper architectures are required even when training on simple datasets. This rings true for our application of Morse networks in this work, with low-capacity networks performing poorly. Training the Morse network for each locomotion and Antmaze dataset typically takes 10 minutes for 100 000 gradient steps using a batch size of 1 024. When training the policy, using the Morse network increases training time by approximately 15%.
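For the "Independent or Shared Targets?" ablation in Section 5.3 above, the practical difference between the two settings lies only in how the TD target is formed for each critic. The sketch below illustrates that difference under assumed interfaces (a list of target critic networks; target-policy smoothing and termination handling are omitted for brevity).

```python
import torch

def td_targets(target_critics, r, s_next, a_next, gamma=0.99, clipped=True):
    """Contrast Clipped Double Q-learning (CDQ) with fully independent per-critic targets.
    `target_critics` is an assumed list of target Q-networks mapping (s, a) -> value."""
    qs = torch.stack([q(s_next, a_next) for q in target_critics], dim=0)
    if clipped:
        # CDQ: every critic regresses toward the same pessimistic (minimum) target.
        target = r + gamma * qs.min(dim=0).values
        return [target for _ in target_critics]
    # Independent targets: critic i regresses toward its own target network only,
    # which is the ingredient MSG-style ensembles rely on for diverse value estimates.
    return [r + gamma * qs[i] for i in range(len(target_critics))]
```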
Optimal Datasets On Gym Locomotion tasks, TD3-BST performance is comparable to newer methods, all of which rarely outperform older baselines. This can be attributed to a significant proportion of high-return-yielding trajectories that are easier to improve.

Table 1: Normalized scores on D4RL Gym Locomotion datasets. VMG scores are excluded because this method performs poorly and the authors of MSG do not report numerical results on locomotion tasks. Prior methods are grouped by those that do not perform per-dataset tuning and those that do. ¹ SAC-RND, in addition to per-dataset tuning, is trained for 3 million gradient steps. Though not included here, ensemble methods may perform better than the best non-ensemble methods on some datasets, albeit still requiring per-dataset tuning to achieve their reported performance. Top scores are in bold and second-best are underlined.

Dataset | TD3-BC | CQL | IQL | SQL | SAC-RND¹ | DOGE | ReBRAC | CFPI | TD3-BST (ours)
halfcheetah-m | 48.3 | 44.0 | 47.4 | 48.3 | 66.6 | 45.3 | 65.6 | 52.1 | 62.1 ± 0.8
hopper-m | 59.3 | 58.5 | 66.3 | 75.5 | 97.8 | 98.6 | 102.0 | 86.8 | 102.9 ± 1.3
walker2d-m | 83.7 | 72.5 | 78.3 | 84.2 | 91.6 | 86.8 | 82.5 | 88.3 | 90.7 ± 2.5
halfcheetah-m-r | 44.6 | 45.5 | 44.2 | 44.8 | 42.8 | 54.9 | 51.0 | 44.5 | 53.0 ± 0.7
hopper-m-r | 60.9 | 95.0 | 94.7 | 99.7 | 100.5 | 76.2 | 98.1 | 93.6 | 101.2 ± 4.9
walker2d-m-r | 81.8 | 77.2 | 73.9 | 81.2 | 88.7 | 87.3 | 77.3 | 78.2 | 90.4 ± 8.3
halfcheetah-m-e | 90.7 | 91.6 | 86.7 | 94.0 | 107.6 | 78.7 | 101.1 | 97.3 | 100.7 ± 1.1
hopper-m-e | 98.0 | 105.4 | 91.5 | 111.8 | 109.8 | 102.7 | 107.0 | 104.2 | 110.3 ± 0.9
walker2d-m-e | 110.1 | 108.8 | 109.6 | 110.0 | 105.0 | 110.4 | 111.6 | 111.9 | 109.4 ± 0.2

Table 2: Normalized scores on D4RL Antmaze datasets. ¹ SAC-RND is trained for three million gradient steps. ² VMG reports scores from the best-performing checkpoint rather than from the final policy; despite this, TD3-BST still outperforms VMG on all datasets except -umaze-diverse. ³ For MSG we report the best score among the reported scores of all configurations; MSG is also trained for two million steps. Prior methods are grouped by those that do not perform per-dataset tuning and those that do. Other ensemble-based methods are not included, as MSG achieves higher performance. Top scores are in bold and second-best are underlined.

Dataset | TD3-BC | CQL | IQL | SQL | SAC-RND¹ | DOGE | VMG² | ReBRAC | CFPI | MSG³ | TD3-BST (ours)
-umaze | 78.6 | 74.0 | 87.5 | 92.2 | 97.0 | 97.0 | 93.7 | 97.8 | 90.2 | 98.6 | 97.8 ± 1.0
-umaze-d | 71.4 | 84.0 | 62.2 | 74.0 | 66.0 | 63.5 | 94.0 | 88.3 | 58.6 | 81.8 | 91.7 ± 3.2
-medium-p | 10.6 | 61.2 | 71.2 | 80.2 | 74.7 | 80.6 | 82.7 | 84.0 | 75.2 | 89.6 | 90.2 ± 1.8
-medium-d | 3.0 | 53.7 | 70.0 | 79.1 | 74.7 | 77.6 | 84.3 | 76.3 | 72.2 | 88.6 | 92.0 ± 3.8
-large-p | 0.2 | 15.8 | 39.6 | 53.2 | 43.9 | 48.2 | 67.3 | 60.4 | 51.4 | 72.6 | 79.7 ± 7.6
-large-d | 0.0 | 14.9 | 47.5 | 52.3 | 45.7 | 36.4 | 74.3 | 54.4 | 52.4 | 71.4 | 76.1 ± 4.7

Table 3: Normalized scores on D4RL Antmaze datasets for IQL and IQL-BST. We use hyperparameters identical to the original IQL paper and use Equation 13 as a drop-in replacement for the policy objective.

Dataset | IQL (reproduced) | IQL-BST
-umaze | 87.6 ± 4.6 | 90.8 ± 2.1
-umaze-d | 64.0 ± 5.2 | 63.1 ± 3.7
-medium-p | 70.7 ± 4.3 | 80.3 ± 1.3
-medium-d | 73.8 ± 5.9 | 84.7 ± 2.0
-large-p | 35.2 ± 8.4 | 55.4 ± 3.2
-large-d | 40.7 ± 9.2 | 51.6 ± 2.6

7 Conclusion
In this paper, we introduce TD3-BST, an algorithm that uses an uncertainty model to dynamically adjust the strength of regularization. Dynamic weighting allows the policy to maximize reward around individual dataset modes.
Our algorithm compares well against prior methods on Gym Locomotion tasks and achieves the best scores on the more challenging Antmaze tasks, demonstrating strong performance when learning from suboptimal data. In addition, our experiments show that combining our policy regularization with an ensemble-based source of uncertainty can improve performance. Future work can explore other methods of estimating uncertainty, alternative uncertainty measures, and how best to combine multiple sources of uncertainty.

Figure 5: Histograms of deviation from dataset actions for (a) hopper-medium and (b) antmaze-large-play.
Figure 6: % change in Antmaze scores without CDQ for critic ensembles consisting of 2 and 10 Q functions."
}, { "url": "http://arxiv.org/abs/2205.11164v1", "title": "Time-series Transformer Generative Adversarial Networks", "abstract": "Many real-world tasks are plagued by limitations on data: in some instances\nvery little data is available and in others, data is protected by privacy\nenforcing regulations (e.g. GDPR). We consider limitations posed specifically\non time-series data and present a model that can generate synthetic time-series\nwhich can be used in place of real data. A model that generates synthetic\ntime-series data has two objectives: 1) to capture the stepwise conditional\ndistribution of real sequences, and 2) to faithfully model the joint\ndistribution of entire real sequences. Autoregressive models trained via\nmaximum likelihood estimation can be used in a system where previous\npredictions are fed back in and used to predict future ones; in such models,\nerrors can accrue over time. Furthermore, a plausible initial value is required\nmaking MLE based models not really generative. Many downstream tasks learn to\nmodel conditional distributions of the time-series, hence, synthetic data drawn\nfrom a generative model must satisfy 1) in addition to performing 2). We\npresent TsT-GAN, a framework that capitalises on the Transformer architecture\nto satisfy the desiderata and compare its performance against five\nstate-of-the-art models on five datasets and show that TsT-GAN achieves higher\npredictive performance on all datasets.", "authors": "Padmanaba Srinivasan, William J. Knottenbelt", "published": "2022-05-23", "updated": "2022-05-23", "primary_cat": "cs.LG", "cats": [ "cs.LG" ], "main_content": "Generative Adversarial Networks A GAN model consists of two components: firstly, a generator that maps noise to a posterior distribution; and a discriminator whose job it is to distinguish between samples produced by the generator and samples drawn from the dataset. Some popular applications of GANs focus on generating high-dimensional data, such as images [2, 31, 38] and videos [36, 10, 44]. Developments have also been made to make the notoriously unstable process of training GANs more stable [2, 27, 33, 37, 41].
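For readers less familiar with the adversarial setup sketched above, the following minimal alternating update shows the generic generator/discriminator game. It uses a plain binary cross-entropy loss purely for illustration, whereas TsT-GAN itself uses the LS-GAN objectives given later (Equation 4); all module and optimiser names are placeholders.

```python
import torch
import torch.nn.functional as F

def gan_step(generator, discriminator, g_opt, d_opt, real_batch, noise_dim):
    """One generic alternating GAN update (illustrative only, not TsT-GAN's objective)."""
    z = torch.randn(real_batch.size(0), noise_dim)
    fake = generator(z)
    ones = torch.ones(real_batch.size(0), 1)
    zeros = torch.zeros(real_batch.size(0), 1)

    # Discriminator: push real samples toward 1 and synthetic samples toward 0.
    d_opt.zero_grad()
    d_loss = F.binary_cross_entropy_with_logits(discriminator(real_batch), ones) \
           + F.binary_cross_entropy_with_logits(discriminator(fake.detach()), zeros)
    d_loss.backward()
    d_opt.step()

    # Generator: try to make the discriminator label synthetic samples as real.
    g_opt.zero_grad()
    g_loss = F.binary_cross_entropy_with_logits(discriminator(fake), ones)
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```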
The output of a generator can be controlled by conditioning on additional information. This class of GANs is called conditional GANs (cGANs) [34], where the generator takes as an additional input some information to direct the generative process. cGANs have been applied to generate sequential data in a number of fields, such as video clips [10, 36, 44], time-series data [17, 35, 51], natural language tasks [39, 50, 53] and tabular data [9, 18, 28, 47]. Generating new data is a task well suited to GANs – the generator is trained to generate data by learning the underlying generating distribution, guided by the discriminator. Once trained, a generator can be used to generate synthetic data samples for cases where limited real data is available.
Time-Series Generative Models Generating synthetic time-series data extends the generation of synthetic tabular data by incorporating an additional temporal dependency. This tasks generative models with learning features of the data within each time step and also with relating features across time. One approach to designing generative models for time-series generation is Professor Forcing [30], which combines a GAN framework with a supervised learning approach: a Recurrent Neural Network (RNN) based generator [25, 8] alternates between teacher forcing [22, 42] and generative training, and the discriminator distinguishes between hidden states produced in teacher-forcing mode and free-running mode. This encourages the generator to match the conditional distributions between the two modes. The C-RNN-GAN framework [35] directly applies Long Short-Term Memory (LSTM) [22] networks in both the generator and discriminator to generate sequential data. The generator receives noise inputs at each time step and generates some data, conditioned on previous outputs. RCGAN [17] extends this approach by allowing conditioning on additional information while also removing the dependence on previous outputs. TimeGAN [51] modifies the standard GAN framework and adopts aspects of Professor Forcing. The framework uses four components: an embedder and decoder (trained via teacher forcing as a joint autoencoder network), a generator and a discriminator. The generator receives noise input and produces hidden states, which are passed to the discriminator along with the hidden states of the embedder; the discriminator distinguishes between the embedder and generator latent distributions. An additional supervised loss penalises differences between the two distributions. COT-GAN [48] presents a flexible GAN architecture trained using a novel adversarial loss that builds on the Sinkhorn divergence [20].

3 Method
3.1 Problem Formulation
We denote a multivariate sequence of length T as x_{1:T} = x_1, ..., x_T, with N such sequences forming the training dataset D = {x_{1:T}}_{n=1}^{N}. The aim of the generator is to learn a distribution p̂(x_{1:T}) as an approximation to the generating distribution p(x_{1:T}). Learning the conditional distribution ∏_t p(x_t | x_1, ..., x_{t−1}) is a far simpler objective, learned by single-time-step autoregressive models. Any synthetic data generated using a trained generator is likely to be used downstream to train autoregressive models. As a result, in addition to learning the joint distribution p(x_{1:T}), the generator must also learn ∏_t p(x_t | x_1, ..., x_{t−1}). The joint distribution suggests that the generative model can be bidirectional, whereas an autoregressive constraint is explicit in the conditional distribution.
To this end, we propose two objectives. The first is to learn the joint distribution:

\min_{\hat{p}} \; D\big( p(x_{1:T}) \,\|\, \hat{p}(x_{1:T}) \big) \tag{1}

which we interpret as a global objective: the learned joint distribution must match the true joint distribution. We also incorporate the conditional distribution:

\min_{\hat{p}} \; D\big( p(x_t \mid x_{1:t-1}) \,\|\, \hat{p}(x_t \mid x_{1:t-1}) \big) \tag{2}

where D represents a suitable distance metric for each of the two objectives. This represents a local (autoregressive) objective where each item is conditioned on previous ones.

3.2 Proposed Model
We present our TsT-GAN model, which consists of four components, each designed with respect to the objectives in Section 3.1.

3.2.1 Embedder–Predictor
The embedder–predictor network consists of a transformer network that takes as input real multivariate sequences x_{1:T} and predicts the next item in the sequence at each position. This network consists of a linear projection of the input vector into the model dimension. The projected sequence is passed through the embedder network E_θ to produce the set of final embeddings ĥ_1, ..., ĥ_T. The predictor network P_θ maps the embeddings back into the original input dimension, P_θ : R^d → R^m, and is implemented by a separate neural network. Applying the predictor network to all the embeddings produced by the embedding network generates one-step-ahead predictions for all positions in the input sequence, x̂_1, ..., x̂_T. The embedding network is parameterised by a transformer decoder network, which uses an autoregressive mask, and we realise the predictor network as a simple linear layer that projects the embedding back into the original data input dimension. The embedder–predictor network is trained using a supervised loss that penalises one-step-ahead prediction error:

\mathcal{L}_S(x_{1:T}, \hat{x}_{1:T}) = \frac{1}{T-1} \sum_{t=1}^{T-1} \lVert x_{t+1} - \hat{x}_t \rVert^2 \tag{3}

where x̂_t denotes the prediction by the embedder–predictor network at position t, which corresponds to the predicted value of the time-series at time t + 1. This allows learning of the true conditional distribution ∏_t p(x_t | x_{1:t−1}), with the embedder modelling the latent conditional distribution ∏_t p(ĥ_t | ĥ_{1:t−1}).

3.2.2 Generator and Discriminator
The generator model G_θ takes a sequence of random vectors z_1, ..., z_T and projects these into the model dimension. The projected noise vectors are passed through G_θ, which outputs a set of latent embeddings h̃_1, ..., h̃_T. Each latent embedding h̃_t ∈ R^d is then transformed back into the original input space by way of the predictor network from Section 3.2.1 to produce a synthetic sequence x̃_{1:T} = x̃_1, ..., x̃_T. Parameters of the predictor network are shared between the generator and embedder. We construct G_θ in a similar way to the embedding network; it consists of a transformer encoder that makes use of bidirectional attention. To enforce the autoregressive property, we allow the parameters of the predictor network to be updated only when performing backpropagation through the embedder–predictor network. When backpropagating through the generator–predictor network, gradients are calculated but the parameters of the predictor network are frozen. This forces the generator to learn the latent conditional distributions of the embedder, producing valid synthetic data while also allowing full treatment of the joint probability.
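The parameter-sharing and freezing scheme described above can be realised in several ways; the sketch below shows one PyTorch-style possibility in which the predictor's parameters are temporarily excluded from gradient accumulation on the generator path. Module constructors, tensor shapes, and optimiser setup are assumptions, and an equally valid alternative is simply to register the predictor's parameters only with the embedder's optimiser.

```python
import torch

def embedder_step(embedder, predictor, opt_emb, x):            # x: (B, T, m)
    """One-step-ahead supervised update in the spirit of Equation 3;
    opt_emb is assumed to hold BOTH embedder and predictor parameters."""
    h = embedder(x)                                            # causal (masked) attention
    x_hat = predictor(h)                                       # (B, T, m) predictions
    loss_s = (x[:, 1:] - x_hat[:, :-1]).pow(2).mean()          # position t predicts x_{t+1}
    opt_emb.zero_grad()
    loss_s.backward()
    opt_emb.step()
    return loss_s.item()

def generator_forward(generator, predictor, z):                # z: (B, T, d_noise)
    """Generator path: gradients flow *through* the predictor back to the generator,
    but the predictor's own parameters are frozen for this pass."""
    h_tilde = generator(z)                                     # bidirectional attention
    for p in predictor.parameters():
        p.requires_grad_(False)                                # exclude P from this graph
    x_tilde = predictor(h_tilde)
    for p in predictor.parameters():
        p.requires_grad_(True)                                 # restore for the embedder step
    return x_tilde
```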
The random vectors fed into the generator can be drawn from any distribution; we draw them from a standard Gaussian. The discriminator model D_θ is constructed in a similar way to BERT [14], as a transformer encoder with bidirectional attention. A linear projection is used to map input sequences to the model dimension, following which a [CLS] embedding is prepended to the beginning of the sequence. This sequence is passed through the discriminator and the embedding corresponding to the [CLS] position is projected into R^1 for classification. The discriminator receives as input real sequences drawn from the dataset, which it is tasked with classifying as true, and synthetic sequences from the generator, which it must classify as false. Our discriminator design focuses on a global classification of the quality of a sequence, which differs from previous RNN-based approaches that classify on a per-time-step basis. By performing global sequence classification with the discriminator, we address our first objective in Equation 1, while the stepwise objective in Equation 2 is handled indirectly via the embedder–predictor system. We apply the LS-GAN adversarial loss [33], which uses separate objectives for the discriminator and generator:

\mathcal{L}_{GAN}(D_\theta) = \min_{D_\theta} \; \mathbb{E}_{x_{1:T} \sim p}\!\left[ (D_\theta(x_{1:T}) - 1)^2 \right] + \mathbb{E}_{\tilde{x}_{1:T} \sim \hat{p}}\!\left[ D_\theta(\tilde{x}_{1:T})^2 \right]
\mathcal{L}_{GAN}(G_\theta) = \min_{G_\theta} \; \tfrac{1}{2}\, \mathbb{E}_{\tilde{x}_{1:T} \sim \hat{p}}\!\left[ (D_\theta(\tilde{x}_{1:T}) - 1)^2 \right] \tag{4}

The GAN objective alone is likely to be insufficient to fully capture the temporal dependencies across long time periods [26]. Applying a second, supervised autoregressive loss to the generator is not possible as the generator is bidirectional, so we turn to unsupervised masked training. Following a similar method to masked language modelling (MLM) in BERT [14], we randomly mask out items in a sequence with probability p_mask. Masked-out positions are replaced with a learnable [MASK] embedding and the generator is tasked with predicting the true values at the masked positions. Transformers have been applied to autoregressive time-series tasks [52, 45, 46] and have been shown to benefit from masked-modelling pre-training [52]. We add the following masked modelling objective:

\mathcal{L}_{MM}(x_{1:T}, \bar{x}_{1:T}) = \frac{1}{|M|} \sum_{t \in M} \lVert x_t - \bar{x}_t \rVert^2 \tag{5}

where M denotes the masked positions and x̄_t is the output of the generator at position t when performing masked modelling. We perform full generation and masked modelling in an alternating fashion, using separate learnable linear projections for the two tasks.

Figure 1: Block diagram of TsT-GAN showing data and gradient flows. Gradients that propagate back to the generator pass through the predictor network but are not allowed to change the parameters of the predictor. Predictor parameters are only updated with respect to the gradient ∂L_S/∂P_θ.
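A compact sketch of the two generator-side objectives defined above (Equations 4 and 5) follows. It assumes the discriminator maps a whole sequence to a single score and, for brevity, applies the mask in data space before the generator's input projection, whereas the paper replaces masked positions with a learnable [MASK] embedding after projection; names and shapes are illustrative.

```python
import torch

def lsgan_losses(d, x_real, x_fake):
    """LS-GAN objectives in the spirit of Equation 4: the discriminator pushes D(real) -> 1
    and D(fake) -> 0, while the generator pushes D(fake) -> 1. `d` maps (B, T, m) -> (B, 1)."""
    d_loss = (d(x_real) - 1).pow(2).mean() + d(x_fake.detach()).pow(2).mean()
    g_loss = 0.5 * (d(x_fake) - 1).pow(2).mean()
    return d_loss, g_loss

def masked_modelling_loss(generator, head, x, p_mask=0.3):
    """Masked-modelling objective in the spirit of Equation 5: hide positions with
    probability p_mask and reconstruct the true values there. `head` stands in for the
    separate learnable linear projection used for this task."""
    B, T, m = x.shape
    mask = torch.rand(B, T) < p_mask          # True where the input is hidden
    x_in = x.clone()
    x_in[mask] = 0.0                          # simplification; the paper uses a [MASK] embedding
    x_bar = head(generator(x_in))             # (B, T, m) reconstruction
    return (x_bar[mask] - x[mask]).pow(2).mean()
```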
3.3 Architecture Overview
Given that x_{1:T} = x_1, ..., x_T denotes a true time-series of length T and z_{1:T} = z_1, ..., z_T denotes a sequence of random vectors drawn from an isotropic Gaussian, we provide formal motivation for our model, with an accompanying block diagram in Figure 1. The embedder, trained using maximum likelihood estimation (MLE), takes as input x_{1:T} and produces conditional latent embeddings ĥ_{1:T} = ĥ_1, ..., ĥ_T, which the predictor maps to values x̂_{1:T}. The generator takes as input a sequence of random vectors z_{1:T} and produces a corresponding set of latent embeddings h̃_{1:T} = h̃_1, ..., h̃_T. To maximise the downstream utility of synthetic data, the generator aims to learn the conditional latent distribution produced by the embedder, such that p(h̃_t) ≈ p(ĥ_t) for all t. We achieve this by sharing predictor parameters between the generator and the embedder, but allowing the predictor's parameters to be changed only when updating with respect to the gradient ∂L_S/∂P_θ, thereby forcing the condition.
The discriminator is tasked with differentiating between real sequences x_{1:T} and synthetic sequences x̃_{1:T}. The discriminator operates over entire sequences, producing only one true/false classification; it therefore inspects synthetic sequences on a global scale and encourages the generator to learn the joint distribution of entire sequences. The objective L_MM reinforces this while also exposing the generator to real samples and further encouraging bidirectional learning of the joint distribution.
Some approaches to time-series generation [26] have shown that explicit moment matching can improve the quality of synthetic data. We introduce an auxiliary moment loss to promote matching of first and second moments:

\mathcal{L}_{ML}(x_{1:T}, \tilde{x}_{1:T}) = \lvert f_\mu(x_{1:T}) - f_\mu(\tilde{x}_{1:T}) \rvert + \lvert f_\sigma(x_{1:T}) - f_\sigma(\tilde{x}_{1:T}) \rvert \tag{6}

where f_μ and f_σ are functions that compute the mean and standard deviation of a time-series.

3.4 Optimisation
We train our model in three stages. We begin by training the embedder–predictor components independently using the objective L_S, followed by training the generator using only the masked modelling objective L_MM. The final stage consists of joint training using all objectives. All of our transformer model components are feed-forward architectures that are insensitive to the ordering of the sequence; hence, after the initial projection to the model dimension we add sinusoidal position embeddings to the projected embeddings. For all transformer components, we use a model dimension of d = 32, with H = 8 attention heads and a hidden layer dimension of D = 4 × d = 128, with 3 encoder layers for each component. We use the GELU activation function [24] for the non-linearity with LayerNorm [3] normalisation. For optimisation, we use the Adam optimiser [29] with a learning rate of 0.001 in the first two training stages for the embedder, predictor and generator, followed by a learning rate of 0.00002 for all components during joint training, with betas of (0.5, 0.999). For masked modelling, we use p_mask = 0.3 for all datasets and a mini-batch size of 128.

4 Experiments
4.1 Evaluation Methodology
We compare the performance of our model on five datasets against several baselines, including TimeGAN [51], RCGAN [17], C-RNN-GAN [35], COT-GAN [48] and Professor Forcing (P-Forcing) [30]. We also perform ablation experiments, removing components of our TsT-GAN to identify sources of performance gain. We consider the following evaluation metrics:
1. Predictive Score We generate a synthetic dataset using a trained generative model and train a post-hoc network on the synthetic data, after which we evaluate the post-hoc regression network on real data.
If a generative model has captured the conditional distribution correctly, then we expect the test mean absolute error (MAE) to be low and similar to that obtained when the post-hoc network is trained on real data. The predictive score follows the TS-TR evaluation methodology [17]. An ideal generator will produce synthetic samples that, under the TS-TR framework, yield a predictive score no worse than when the post-hoc model is trained on real data. (A minimal sketch of this protocol is given after the dataset descriptions below.)
2. Discriminative Score A post-hoc network is trained to distinguish between real and synthetic data. The training set has an equal number of real and synthetic samples. We report the classification error on the held-out test set. In the case of the ideal generator, synthetic samples are indistinguishable from real samples, resulting in a discriminative score of 0.
3. Visualisation We use t-SNE [43] to reduce the dimensionality of real and synthetic datasets, flattening across the temporal dimension. This allows comparison of how well the synthetic data distribution matches the original, indicating any areas of the original distribution not captured, as well as out-of-distribution samples in the synthetic data. In addition, to evaluate how well the joint distribution is captured, we calculate the first difference for real and synthetic time-series and plot t-SNEs.
The code for each of the aforementioned models is available in public repositories published by the corresponding authors. We parameterise these models with all RNN-based components consisting of three layers of size 32. We use a sequence length of 24 for all datasets. The code associated with each of the models is used to train and generate synthetic data, which is evaluated using code similar to [51]. For both the predictive and discriminative evaluations, we use a two-layer Gated Recurrent Unit (GRU) [8] with hidden size equal to the input dimension.

4.2 Datasets
We evaluate TsT-GAN across a range of datasets with different properties. All datasets we use are available online or can be generated.
1. Sines The quality of sine waves can be evaluated easily by inspection; this dataset consists of a number of sine waves with random shifts in phase and frequency. Phase and frequency shifts are random variables: phase shift φ ∼ Uniform[−π, π] and frequency shift λ ∼ Uniform[0, 1]. We generate a multivariate Sines dataset consisting of 5 sine waves per sample. This synthetic dataset provides continuous-valued, periodic functions with no correlations between features. (5 features and 10 000 rows.)
2. Stocks The Stocks dataset consists of daily data collected between 2004 and 2019 for the Google ticker (obtained from https://github.com/jsyoon0823/TimeGAN). (5 features and 3685 rows.)
3. Energy The UCI Appliances Energy Prediction dataset [7, 16] is high-dimensional and consists of features such as energy consumption, humidity and temperature collected by sensors. The data is complex, with samples logged every 10 minutes for around four and a half months. (28 features and 19 735 rows.)
4. Chickenpox The UCI Hungarian Chickenpox Cases dataset [40, 16] consists of weekly records of chickenpox cases in 20 counties in Hungary. This dataset represents a realistic situation where generative models may be trained on small amounts of data and then generate synthetic samples to train other models. (20 features and 521 rows.)
5. Air The UCI Air Quality dataset [12, 16] consists of levels of different gases recorded hourly in an Italian city. We remove the date and time columns as part of preprocessing. (13 features and 9358 rows.)
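The evaluation protocol of Section 4.1 can be made concrete with the following sketch of the predictive (TS-TR) score: a two-layer GRU with hidden size equal to the input dimension is fit on synthetic sequences to predict the next step, then scored by MAE on held-out real sequences. The training epochs and learning rate are illustrative assumptions; the discriminative score follows the same pattern with a classification head instead of a regression head.

```python
import torch
import torch.nn as nn

class PostHocGRU(nn.Module):
    """Two-layer GRU regressor used for the predictive score (Section 4.1):
    it reads x_{1:t} and predicts x_{t+1}; hidden size equals the input dimension."""
    def __init__(self, dim):
        super().__init__()
        self.rnn = nn.GRU(dim, dim, num_layers=2, batch_first=True)
        self.out = nn.Linear(dim, dim)

    def forward(self, x):                      # x: (B, T, dim)
        h, _ = self.rnn(x)
        return self.out(h)

def predictive_score(synthetic, real_test, epochs=50, lr=1e-3):
    """Train-on-synthetic, test-on-real: fit the post-hoc model on synthetic sequences,
    then report next-step MAE on held-out real sequences."""
    model = PostHocGRU(synthetic.size(-1))
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        pred = model(synthetic[:, :-1])
        loss = (pred - synthetic[:, 1:]).abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        mae = (model(real_test[:, :-1]) - real_test[:, 1:]).abs().mean()
    return mae.item()
```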
4.3 Visualisation, Predictive and Discriminative Scores
Table 1 shows that TsT-GAN consistently creates more useful data than the baseline models, achieving a lower predictive score across all datasets. Our predictive score for all datasets is remarkably close to the original score, and the scores on the synthetic Sines and Stocks datasets outperform the original. TsT-GAN outperforms the next best-performing baseline on the Sines, Energy and Stocks datasets by 32%, 22% and 19%, respectively. TsT-GAN performs consistently in the discriminative tests, achieving the best performance on two datasets. COT-GAN achieves an incredible 0.6% discriminative score on Chickenpox while exhibiting predictive-score performance competitive with TsT-GAN.
We visualise t-SNEs in Figure 2 and see that samples from TsT-GAN overlap real data samples extremely well for all datasets. The Chickenpox dataset is especially difficult to model due to the small number of samples; nevertheless, TsT-GAN is able to achieve significant coverage. TimeGAN seems to learn specific modes, while also producing some out-of-distribution samples. RCGAN, C-RNN-GAN, COT-GAN and P-Forcing produce several out-of-distribution samples. COT-GAN in particular produces impressive-looking plots, commensurate with its predictive scores. From Figure 3 we see that TsT-GAN captures the first differences well in all datasets. Of particular interest is the Sines row. Sines is a toy dataset with known generating distributions where first differences follow specific patterns (the differences are themselves sinusoidal); had TsT-GAN fully learned the generating distribution, we would expect to see distinct regions of high and low density for synthetic samples overlapping the true samples.

4.4 Ablation Experiments
We perform ablation experiments to evaluate the importance of each component of TsT-GAN. Our experiments are as follows:
• ML Removes only the auxiliary moment-matching objective. All subsequent ablations remove the moment-matching objective as well.
• MM + Auto Makes the generator autoregressive and removes the masked modelling objective, replacing it with a one-step-ahead prediction objective.
• Embedding Removes the embedding network, resulting in a generator that is trained with the LS-GAN objective and MM loss, with the parameters of the predictor network being updated jointly with the generator.
• MM Removes only the masked modelling objective but retains the bidirectional generator.
• Base Is a standard transformer GAN made by removing MM and the embedding network.
From the latter two sections of Table 1 we see TsT-GAN outperforming all ablations. The autoregressive generator outperforms TsT-GAN in the discriminative score for Stocks and Chickenpox, although the difference is small. Removing the embedding network in particular has a significant detrimental effect on predictive performance on all but the Sines dataset, suggesting that our enforcement of the conditional distribution plays an important role in capturing useful temporal correlations across time.

Table 1: Predictive and discriminative scores with standard deviations, for both the comparison with baselines and the ablations. Lower scores are better and the best performance is indicated in bold. Predictive scores include the score on the original data for comparison.

Predictive Score (MAE)
Model | Sines | Stocks | Energy | Chickenpox | Air
Original | .009 ± .000 | .010 ± .001 | .032 ± .001 | .089 ± .002 | .034 ± .001
TsT-GAN | .008 ± .000 | .009 ± .000 | .039 ± .001 | .091 ± .001 | .042 ± .002
TimeGAN | .024 ± .004 | .011 ± .001 | .050 ± .001 | .101 ± .002 | .114 ± .005
RCGAN | .012 ± .000 | .021 ± .001 | .068 ± .001 | .106 ± .002 | .072 ± .001
C-RNN-GAN | .017 ± .000 | .027 ± .001 | .069 ± .001 | .207 ± .002 | .095 ± .002
COT-GAN | .016 ± .000 | .012 ± .000 | .056 ± .001 | .094 ± .003 | .044 ± .000
P-Forcing | .014 ± .001 | .018 ± .000 | .059 ± .003 | .319 ± .002 | .190 ± .041

Discriminative Score (proportion classified as synthetic)
Model | Sines | Stocks | Energy | Chickenpox | Air
TsT-GAN | .026 ± .018 | .122 ± .020 | .442 ± .007 | .053 ± .044 | .243 ± .009
TimeGAN | .231 ± .117 | .191 ± .029 | .496 ± .004 | .046 ± .044 | .479 ± .025
RCGAN | .174 ± .033 | .260 ± .010 | .500 ± .000 | .050 ± .003 | .478 ± .004
C-RNN-GAN | .274 ± .040 | .290 ± .032 | .547 ± .004 | .209 ± .001 | .473 ± .024
COT-GAN | .302 ± .089 | .260 ± .068 | .500 ± .000 | .006 ± .053 | .441 ± .052
P-Forcing | .253 ± .036 | .088 ± .007 | .553 ± .044 | .587 ± .009 | .484 ± .018

Ablations: Predictive Score (MAE)
Model | Sines | Stocks | Energy | Chickenpox | Air
TsT-GAN | .008 ± .000 | .009 ± .000 | .039 ± .001 | .091 ± .001 | .042 ± .002
ML | .008 ± .007 | .011 ± .001 | .047 ± .006 | .101 ± .002 | .045 ± .002
MM + Auto | .008 ± .001 | .012 ± .000 | .054 ± .001 | .111 ± .002 | .046 ± .001
Embedding | .009 ± .000 | .016 ± .000 | .081 ± .001 | .145 ± .004 | .056 ± .003
MM | .009 ± .001 | .013 ± .001 | .057 ± .001 | .095 ± .002 | .051 ± .002
Base | .010 ± .001 | .020 ± .000 | .089 ± .001 | .196 ± .006 | .068 ± .003

Ablations: Discriminative Score (proportion classified as synthetic)
Model | Sines | Stocks | Energy | Chickenpox | Air
TsT-GAN | .026 ± .018 | .122 ± .020 | .442 ± .007 | .053 ± .044 | .243 ± .009
ML | .028 ± .010 | .140 ± .092 | .465 ± .003 | .069 ± .031 | .302 ± .003
MM + Auto | .029 ± .011 | .118 ± .089 | .488 ± .004 | .048 ± .017 | .452 ± .004
Embedding | .145 ± .079 | .254 ± .032 | .497 ± .003 | .342 ± .030 | .514 ± .005
MM | .166 ± .047 | .171 ± .095 | .498 ± .001 | .480 ± .069 | .477 ± .002
Base | .113 ± .042 | .200 ± .035 | .529 ± .0120 | .462 ± .126 | .613 ± .015

5 Conclusion
We have presented TsT-GAN, a new framework for training time-series generative models. The unconditional generator network in TsT-GAN is guided by unsupervised masked modelling to produce high-quality synthetic sequences that capture both the global distribution and the conditional time-series dynamics. We evaluate and benchmark our model using the TS-TR framework and show that TsT-GAN consistently outperforms existing methods. Future work could explore how to better incorporate moment matching in a unified framework, rather than as an auxiliary loss. Furthermore, TsT-GAN's discriminative scores still show scope for improvement, suggesting that there still exists some discrepancy between the true and learned distributions.
A major limitation of our model is architecturally rooted: the Transformer architecture's self-attention mechanism has a computational complexity of O(N²) for a sequence of length N. As three out of four components of TsT-GAN consist of Transformers, this results in a significant computational cost when training and performing inference with longer time-series.

Figure 2: t-SNE plots for each model (TsT-GAN, TimeGAN, RCGAN, C-RNN-GAN, COT-GAN, P-Forcing), with Sines on the first row, Stocks on the second row, Energy on the third row, Chickenpox on the fourth row and Air on the fifth. Red indicates real data and blue indicates synthetic data. Best viewed in colour.
Figure 3: t-SNE plots of first differences for each model (TsT-GAN, TimeGAN, RCGAN, C-RNN-GAN, COT-GAN, P-Forcing), with Sines on the first row, Stocks on the second row, Energy on the third row, Chickenpox on the fourth row and Air on the fifth. Red indicates real data and blue indicates synthetic data. Best viewed in colour.

Our model can also contribute to data compression. As data increases in resolution and demand for data increases, it is crucial to ensure that data remains accessible. We have shown that our model is able to learn meaningful representations of several time-series datasets, as well as its utility in downstream tasks. In future applications, a trained model could be disseminated instead of a much larger dataset.", "introduction": "Many real-world applications are reliant on time-series data; however, not all of these applications necessarily have available the data they need. For some tasks, data is available only in small quantities – often too small to base detailed analysis on – and for others, data is protected by regulatory or ethical concerns. Medicine is one field that is notorious for suffering from such problems, and the ability to generate synthetic data is an avenue through which analysis can continue [32, 49, 13, 5]. Another field with similar issues is data from human-internet interactions; as world governments have increasingly adopted GDPR-like legislation, the availability of such data is limited, and methods that can generate synthetic data as a stand-in for user data will increase in importance. Although we do not expressly use privacy-preserving methods in training, this property can be achieved using methods such as differential privacy-preserving stochastic gradient descent [15, 1]. Good-quality synthetic time-series data should respect the conditional distribution of a time-series, ∏_{t=1}^{T} p(x_t | x_1, ..., x_{t−1}), so as to maximise utility for downstream models. Generated synthetic data should also capture well the joint distribution p(x_1, ..., x_T), such that the synthetic data is indistinguishable from real data. A straightforward, if naïve, approach to generating synthetic data is to use autoregressive models trained using teacher forcing [22, 42], repeatedly feeding past predictions back into the model. Conditioning on previous outputs is prone to errors adding up over the course of a sequence, and techniques to overcome this [6, 19, 30] have not entirely solved the problem [51]. Generative models are statistical models that learn from a set of data instances, X, and their corresponding labels, Y, to capture the joint probability distribution p(X, Y). Learning the joint distribution allows generative models to generate new data instances.
Generative Adversarial Networks (GANs) are one method of training generative models [21] that typically map some noise, z, to a posterior distribution. Specifically, for time-series, GANs incorporate an additional temporal dimension to model the joint distribution of all elements of the time-series. Synthetically generated sequences have a wide array of applications [11, 23, 4], yet in many existing approaches the true time-series dynamics are not explicitly learned, and how well these dynamics are learned is not explored in detail. Contributions We use the two characteristics of good generative models to guide the development of a new architecture that contains a generator that can model the full joint distribution while also respecting the need for accurate conditional distributions. We develop a training framework for our model that can be applied to any time-series dataset and benchmark our method quantitatively using the train-on-synthetic, test-on-real (TS-TR) [17] approach, and qualitatively using t-SNE [43]. We compare our model against five state-of-the-art baselines on five datasets and show that TsT-GAN consistently achieves superior performance, achieving the best predictive scores across the board while also demonstrating the best discriminative performance on three out of five datasets. We also perform ablation experiments on TsT-GAN to identify sources of performance gain." } ] }, "edge_feat": {} } }