"main_content": "Reinforcement learning is a framework for sequential decision making often formulated as a Markov decision process (MDP), M = {S, A, R, p, p0, \u03b3} with state space S, action space A, a scalar reward dependent on state and action R(s, a), transition dynamics p, initial state distribution p0 and discount factor \u03b3 \u2208 [0, 1) [Sutton and Barto, 2018]. RL aims to learn a policy \u03c0 \u2208\u03a0 that executes action a = \u03c0(s) that will maximize the expected \ufffd\ufffd \ufffd \u2208 executes action a = \u03c0(s) that will maximize the expected discounted reward J(\u03c0) = E\u03c4\u223cP\u03c0(\u03c4) \ufffd\ufffdT t=0 \u03b3tR(st, at) \ufffd where P\u03c0(\u03c4) = p0(s0) \ufffdT t=0 \u03c0(at | st)p(st+1 | st, at) is the trajectory under \u03c0. Rather than rolling out an entire trajecdiscounted reward J(\u03c0) = E\u03c4\u223cP\u03c0(\u03c4) \ufffd\ufffd t=0 \u03b3tR(st, at) \ufffd where P\u03c0(\u03c4) = p0(s0) \ufffdT t=0 \u03c0(at | st)p(st+1 | st, at) is the trajectory under \u03c0. Rather than rolling out an entire trajectory, a state-action value function (Q function) is often used: Q\u03c0(s, a) = E\u03c4\u223cP\u03c0(\u03c4) \ufffd\ufffdT t=0 \u03b3tr(st, at) | s0 = s, a0 = a \ufffd . 2.1 Offline Reinforcement Learning \ufffd\ufffd \ufffd 2.1 Offline Reinforcement Learning Offline RL algorithms are presented with a static dataset D that consists of tuples {s, a, r, s\u2032} where r \u223cR(s, a) and s\u2032 \u223cp(\u00b7 | s, a). D has limited coverage over S \u00d7A; hence, offline RL algorithms must constrain the policy to select actions within the dataset support. To this end, algorithms employ one of three approaches: 1) policy constraints; 2) critic regularization; or 3) uncertainty penalization. Policy constraint Policy constraints modify the actor\u2019s objective only to minimize divergence from the behavior policy. Most simply, this adds a constraint term [Fujimoto and Gu, 2021; Tarasov et al., 2023] to the policy objective: arg max \u03c0 E{s,a}\u223cD [Q(s, \u03c0(s)) \u2212\u03b1D(\u03c0, \u03c0\u03b2)] , (1) where \u03b1 is a scalar controlling the strength of regularization, D(\u00b7, \u00b7) is a divergence function between the policy \u03c0 and the behavior policy \u03c0\u03b2. In offline RL, we do not have access to \u03c0\u03b2; some prior methods attempt to estimate it empirically [Kostrikov et al., 2021a; Li et al., 2023] which is challenging when the dataset is generated by a mixture of policies. Furthermore, selecting the constraint strength can be challenging and difficult to generalize across datasets with similar environments [Tarasov et al., 2023; Kostrikov et al., 2021a]. Other policy constraint approaches use weighted BC [Nair et al., 2020; Kostrikov et al., 2021b; Xu et al., 2023] or (surrogate) BC constraints [Li et al., 2022; Wu et al., 2019; Li et al., 2023]. The former methods may be too restrictive as they do not allow OOD action selection, which is crucial to improve performance [Fu et al., 2022]. The latter methods may still require substantial tuning and in addition to training if using model-based score methods. Other methods impose architectural constraints [Kumar et al., 2019; Fujimoto et al., 2019] that parameterize separate BC and reward-maximizing policy models. Critic Regularization Critic regularization methods directly address the OOD action-value overestimation problem by penalizing large values for adversarially sampled actions [Kostrikov et al., 2021a]. 
Critic Regularization Critic regularization methods directly address the OOD action-value overestimation problem by penalizing large values for adversarially sampled actions [Kostrikov et al., 2021a].

Ensembles Employing an ensemble of neural network estimators is a commonly used technique for prediction with a measure of epistemic uncertainty [Kondratyuk et al., 2020]. A family of offline RL methods employ large ensembles of value functions [An et al., 2021] and make use of the diversity of randomly initialized ensembles to implicitly reduce the selection of OOD actions or to directly penalize the variance of the reward in the ensemble [Ghasemipour et al., 2022; Sutton and Barto, 2018].

Model-Based Uncertainty Estimation Learning an uncertainty model of the dataset is often devised analogously to the exploration-encouraging methods used in online RL, but employing them for anti-exploration instead [Rezaeifar et al., 2022]. An example is SAC-RND, which directly adopts such an approach [Nikulin et al., 2023]. Other algorithms include DOGE [Li et al., 2022], which trains a model to estimate uncertainty as a distance to the dataset actions, and DARL [Zhang et al., 2023], which uses distances to random projections of state-action pairs as an uncertainty measure. As a whole, these methods optimize a distance d(·, ·) ≥ 0 that represents the uncertainty of an action.

2.2 Uncertainty Estimation

Neural networks are known to predict confidently even when presented with OOD samples [Nguyen et al., 2015; Goodfellow et al., 2014; Lakshminarayanan et al., 2017]. A classical approach to OOD detection is to fit a generative model to the dataset that produces a high probability for in-dataset samples and a low probability for OOD ones. These methods work well for simple, unimodal data but can become computationally demanding for more complex data with multiple modes. Another approach trains classifiers that are leveraged to become finer-grained OOD detectors [Lee et al., 2018]. In this work, we focus on Morse neural networks [Dherin et al., 2023], an approach that trains a generative model to produce an unnormalized density that takes the value 1 at the dataset modes.

3 Preliminaries

A Morse neural network produces an unnormalized density M(x) ∈ [0, 1] on an embedding space R^e [Dherin et al., 2023]. A Morse network can produce a density in R^e that attains a value of 1 at mode submanifolds and decreases towards 0 when moving away from a mode. The rate at which the value decreases is controlled by a Morse Kernel.

Definition 1 (Morse Kernel). A Morse Kernel is a positive definite kernel K. When applied in a space Z = R^k, the kernel K(z1, z2) takes values in the interval [0, 1], with K(z1, z2) = 1 iff z1 = z2.

All kernels of the form K(z1, z2) = e^{−D(z1, z2)}, where D(·, ·) is a divergence [Amari, 2016], are Morse Kernels. Examples include common kernels such as the Radial Basis Function (RBF) kernel,

KRBF(z1, z2) = e^{−(λ²/2) ||z1 − z2||²}.  (2)

The RBF kernel and its derivatives decay exponentially, leading learning signals to vanish rapidly. An alternative is the ubiquitous Rational Quadratic (RQ) kernel:

KRQ(z1, z2) = ( 1 + (λ²/(2κ)) ||z1 − z2||² )^{−κ},  (3)

where λ is a scale parameter in each kernel. The RQ kernel is a scaled mixture of RBF kernels controlled by κ and, for small κ, decays much more slowly [Williams and Rasmussen, 2006].

Consider a neural network fϕ : X → Z that maps from a feature space into a latent space, with parameters ϕ, X ⊆ R^d and Z ⊆ R^k. A Morse Kernel can impose structure on this latent space.
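Both kernels are straightforward to implement; the sketch below assumes batched PyTorch tensors whose last dimension is the embedding dimension, with `lam` and `kappa` standing in for λ and κ in Equations 2 and 3.

```python
import torch

def rbf_kernel(z1, z2, lam=1.0):
    """Equation 2: K_RBF(z1, z2) = exp(-(lam^2 / 2) * ||z1 - z2||^2)."""
    sq_dist = ((z1 - z2) ** 2).sum(dim=-1)
    return torch.exp(-0.5 * (lam ** 2) * sq_dist)

def rq_kernel(z1, z2, lam=1.0, kappa=1.0):
    """Equation 3: K_RQ(z1, z2) = (1 + (lam^2 / (2 kappa)) * ||z1 - z2||^2)^(-kappa)."""
    sq_dist = ((z1 - z2) ** 2).sum(dim=-1)
    return (1.0 + (lam ** 2) / (2.0 * kappa) * sq_dist) ** (-kappa)
```

Both functions return 1 exactly when z1 = z2 and decay towards 0 with squared distance, so each defines a Morse Kernel; the RQ kernel's polynomial tails are what keep its gradients from vanishing far from the target.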
Definition 2 (Morse Neural Network). A Morse neural network is a function fϕ : X → Z combined with a Morse Kernel K(z, t), where t ∈ Z is a target chosen as a hyperparameter of the model. The Morse neural network is defined as Mϕ(x) = K(fϕ(x), t).

Using Definition 1 we see that Mϕ(x) ∈ [0, 1], and when Mϕ(x) = 1, x corresponds to a mode that coincides with the level set of the submanifold of the Morse neural network. Furthermore, Mϕ(x) corresponds to the certainty of the sample x being from the training dataset, so 1 − Mϕ(x) is a measure of the epistemic uncertainty of x. The function −log Mϕ(x) measures a squared distance d(·, ·) between z = fϕ(x) and the closest mode in the latent space:

d(z) = min_{m∈M} d(z, m),  (4)

where M is the set of all modes. This encodes information about the topology of the submanifold and satisfies the Morse–Bott non-degeneracy condition [Basu and Prasad, 2020]. The Morse neural network offers the following properties:
1. Mϕ(x) ∈ [0, 1].
2. Mϕ(x) = 1 at its mode submanifolds.
3. −log Mϕ(x) ≥ 0 is a squared distance that satisfies the Morse–Bott non-degeneracy condition on the mode submanifolds.
4. As Mϕ(x) is the exponential of a negative squared distance, the function is also distance aware in the sense that as fϕ(x) → t, Mϕ(x) → 1.
Proofs of each property are provided in the appendix.

4 Policy Constraint with a Behavioral Supervisor

We now describe the constituent components of our algorithm, building on the Morse network and showing how it can be incorporated into a policy-regularized objective.

4.1 Morse Networks for Offline RL

The target t is a hyperparameter that must be chosen. The experiments in [Dherin et al., 2023] use simple, toy classification datasets for which a categorical t performs well. We find that using a static label for the Morse network yields poor performance; rather than a labeling model, we treat fϕ as a perturbation model that produces an action fϕ(s, a) = â such that â = a if and only if s, a ∼ D. An offline RL dataset D consists of tuples {s, a, r, s′} where we assume the {s, a} pairs are i.i.d. samples from an unknown distribution. The Morse network must be fitted on N state-action pairs [{s1, a1}, ..., {sN, aN}] such that Mϕ(si, aj) = 1, for i, j ∈ {1, ..., N}, only when i = j. We fit a Morse neural network to minimize the KL divergence between unnormalized measures [Amari, 2016], DKL(D(s, a) || Mϕ(s, a)), following [Dherin et al., 2023]:

min_ϕ E_{s,a∼D}[ log( D(s, a) / Mϕ(s, a) ) ] + ∫ Mϕ(s, a) − D(s, a) da.  (5)

With respect to ϕ, this amounts to minimizing the empirical loss:

L(ϕ) = −(1/N) Σ_{s,a∼D} log K(fϕ(s, a), a) + (1/N) Σ_{s∼D, au∼Duni} K(fϕ(s, au), au),  (6)

where au is an action sampled from a uniform distribution Duni over the action space. A learned Morse density is well suited to modeling ensemble policies [Lei et al., 2023], more flexibly than the approaches in [Dherin et al., 2023; Kostrikov et al., 2021a; Li et al., 2023], and without down-weighting good, in-support actions that have low density under the behavior policy [Singh et al., 2022], since all modes have an unnormalized density value of 1.
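A minimal sketch of fitting the Morse network with the empirical loss of Equation 6 follows, reusing `rq_kernel` from the sketch above. The MLP architecture, the small constant inside the log, and actions normalized to [−1, 1] are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class MorseNet(nn.Module):
    """Perturbation model f_phi(s, a) -> a_hat; the density is M_phi(s, a) = K(f_phi(s, a), a)."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

def morse_loss(morse_net, s, a, lam=1.0, kappa=1.0):
    """Empirical loss of Equation 6."""
    # Attractive term: -log K(f_phi(s, a), a) for dataset state-action pairs.
    pos = -torch.log(rq_kernel(morse_net(s, a), a, lam, kappa) + 1e-8).mean()
    # Repulsive term: K(f_phi(s, a_u), a_u) for uniform actions a_u in [-1, 1]^k.
    a_u = 2.0 * torch.rand_like(a) - 1.0
    neg = rq_kernel(morse_net(s, a_u), a_u, lam, kappa).mean()
    return pos + neg
```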
A Morse neural network can be expressed as an energy-based model (EBM) [Goodfellow et al., 2016]:

Proposition 1. A Morse neural network can be expressed as an energy-based model Eϕ(x) = e^{−log Mϕ(x)}, where Mϕ : R^d → R.

Note that the EBM Eϕ is itself unnormalized. Representing the Morse network as an EBM allows analysis analogous to [Florence et al., 2022].

Theorem 1. For a set-valued function F(x) : x ∈ R^m → R^n \ {∅}, there exists a continuous function g : R^{m+n} → R that is approximated by a continuous function approximator gϕ with arbitrarily small bounded error ϵ. This ensures that any point on the graph Fϕ(x) = arg min_y gϕ(x, y) is within distance ϵ of F.

We refer the reader to [Florence et al., 2022] for a detailed proof. The theorem assumes that F(x) is an implicit function and states that the error at the level set (i.e. the modes) of F(x) is small.

4.2 TD3-BST

We can use the Morse network to design a regularized policy objective. Recall that policy regularization consists of Q-value maximization and minimization of a distance to the behavior policy (Equation 1). We reconsider the policy regularization term and train a policy that minimizes uncertainty while selecting actions close to the behavior policy. Let Cπ(s, a) denote a measure of the uncertainty of the policy action. We solve the following optimization problem:

πi+1 = arg min_{π∈Π} E_{a∼π(·|s)}[ Cπ(s, a) ]  (7)
s.t. DKL(π(· | s) || πβ(· | s)) ≤ ϵ.  (8)

This optimization problem requires an explicit behavior model, which is difficult to estimate, and using an estimated model has historically returned mixed results [Kumar et al., 2019; Fujimoto et al., 2019]. Furthermore, it requires direct optimization through Cπ, which may be subject to exploitation. Instead, we enforce the constraint implicitly by deriving the solution of the constrained optimization problem to obtain a closed-form target for the actor [Peng et al., 2019; Nair et al., 2020]. Enforcing the KKT conditions, we obtain the Lagrangian:

L(π, µ) = E_{a∼π(·|s)}[ Cπ(s, a) ] + µ(ϵ − DKL(π || πβ)).  (9)

Computing ∂L/∂π and solving for π yields the uncertainty-minimizing solution π^{C*}(a | s) ∝ πβ(a | s) e^{(1/µ) Cπ(s,a)}. When learning the parametric policy πψ, we project the non-parametric solution into the policy space via a (reverse) KL divergence minimization under the data distribution D:

arg min_ψ E_{s∼D}[ DKL( π^{C*}(· | s) || πψ(· | s) ) ]  (10)
= arg min_ψ E_{s∼D}[ DKL( πβ(a | s) e^{(1/µ) Cπ(s,a)} || πψ(· | s) ) ]  (11)
= arg min_ψ E_{s,a∼D}[ −e^{(1/µ) Cπ(s,a)} log πψ(a | s) ],  (12)

which is a weighted maximum likelihood update where the supervised target is sampled from the dataset D and Cπ(s, a) = 1 − Mϕ(s, πψ(s)). This avoids explicitly modeling the behavior policy and uses the Morse network uncertainty as a behavioral supervisor that dynamically adjusts the strength of behavioral cloning. We provide a more detailed derivation in the appendix.
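A sketch of the resulting disadvantage weight for a deterministic policy follows, reusing the `MorseNet` and `rq_kernel` sketches above; the temperature value and the choice to detach the weight from the gradient are assumptions for illustration, not a statement of the reference implementation.

```python
import torch

def morse_certainty(morse_net, s, a, lam=1.0, kappa=1.0):
    """M_phi(s, a) = K(f_phi(s, a), a), a certainty in [0, 1]."""
    return rq_kernel(morse_net(s, a), a, lam, kappa)

def disadvantage_weight(morse_net, policy, s, mu=0.5):
    """Weight exp(C_pi / mu) from Equation 12, with C_pi(s, a) = 1 - M_phi(s, pi(s)).

    Detached here so the actor is not optimized directly through C_pi; whether
    gradients should flow through the Morse network is an implementation choice.
    """
    with torch.no_grad():
        c_pi = 1.0 - morse_certainty(morse_net, s, policy(s))
    return torch.exp(c_pi / mu)
```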
Interpretation Our regularization method shares similarities with other weighted regression algorithms [Nair et al., 2020; Peng et al., 2019; Kostrikov et al., 2021b], which weight by the advantage of an action compared to the dataset/replay buffer action. Our weighting can instead be thought of as a measure of the disadvantage of a policy action, in the sense of how OOD it is. We make modifications to the behavioral cloning objective. From Morse network property 1 we know Mϕ ∈ [0, 1]; hence 1 ≤ e^{(1/µ) Cπ} ≤ e^{1/µ}, i.e. the lowest possible disadvantage coefficient is 1. Since the coefficient should approach 0 near a mode, we subtract 1 from the exponential weight. Adding Q-value maximization then yields the regularized policy update:

πi+1 ← arg max_π E_{s,a∼D, aπ∼πi(s)}[ (1/ZQ) Qi+1(s, aπ) − (e^{(1/µ) Cπ(s,a)} − 1)(aπ − a)² ],  (13)

where µ is the Lagrange multiplier that controls the magnitude of the disadvantage weight and ZQ = (1/N) Σ_{n=1}^{N} |Q(s, aπ)| is a scaling term detached from the gradient update [Fujimoto and Gu, 2021], necessary because Q(s, a) can be arbitrarily large while the BC coefficient is upper-bounded by e^{1/µ}. The value function update is:

Qi+1 ← arg min_Q E_{s,a,s′∼D}[ (y − Qi(s, a))² ],  (14)

with y = r(s, a) + γ Q̄(s′, a′), a′ = π̄(s′), where Q̄ and π̄ are the target value and policy functions, respectively.

4.3 Controlling the Tradeoff Constraint

Tuning TD3-BST is straightforward; the primary hyperparameters are the choice and scale of the Morse network's kernel and the temperature µ. Increasing λ for higher-dimensional actions ensures that the high-certainty region around modes remains tight. Prior empirical work has demonstrated the importance of allowing some degree of OOD actions [An et al., 2021]; in the TD3-BST framework, this is controlled by λ. In Figure 2 we provide a didactic example of the effect of λ. We construct a dataset of 2-dimensional actions in [−1, 1] with means at the four locations {[0.0, 0.8], [0.0, −0.8], [0.8, 0.0], [−0.8, 0.0]}, each with standard deviation 0.05. We sample M = 128 points, train a Morse network, and plot the density produced by the Morse network for λ ∈ {0.1, 0.5, 1.0, 2.0}. A behavioral cloning policy learned with vanilla MLE, where all targets are weighted equally, selects an OOD action. Training with Morse-weighted BC down-weights the behavioral cloning loss for far-away modes, enabling the policy to select and minimize error to a single mode.

Figure 2: (a-d) Contour plots of the unnormalized densities produced by a Morse network for λ = 0.1, 0.5, 1.0, 2.0, with ground-truth actions shown as × marks. (e) Ground-truth actions in the synthetic dataset and the MLE action (red); a Morse-certainty-weighted MLE model can select actions in a single mode, in this case the mode centred at [0.8, 0.0] (orange). Weighting a divergence constraint using a Morse (un)certainty encourages the policy to select reward-maximizing actions near the modes of Mϕ. (f) The 3D unnormalized Morse density for λ = 1.0.
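Before summarizing the full procedure, here is a sketch of the actor update in Equation 13 and the critic target of Equation 14, reusing the helpers above. Standard TD3 machinery (twin critics, target-policy smoothing) and module interfaces are assumptions and are simplified or omitted.

```python
import torch

def actor_loss(critic, policy, morse_net, s, a, mu=0.5):
    """Equation 13: normalized Q maximization plus disadvantage-weighted BC."""
    a_pi = policy(s)
    q = critic(s, a_pi)
    z_q = q.abs().mean().detach()                                  # Z_Q, detached scale
    coeff = disadvantage_weight(morse_net, policy, s, mu) - 1.0    # 0 at a mode
    bc = ((a_pi - a) ** 2).sum(dim=-1)
    return -(q / z_q).mean() + (coeff * bc).mean()

def critic_target(target_critic, target_policy, r, s_next, gamma=0.99):
    """Target y = r + gamma * Q_bar(s', pi_bar(s')) used in Equation 14."""
    with torch.no_grad():
        return r + gamma * target_critic(s_next, target_policy(s_next))
```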
Algorithm 1: TD3-BST training procedure outline. The policy is updated once for every m = 2 critic updates, as is the default in TD3.
Input: dataset D = {s, a, r, s′}
Initialize: Morse network Mϕ
Output: trained Morse network Mϕ
for t = 1 to TM do
    Sample minibatch (s, a) ∼ D
    Sample random actions au ∼ Duni for each state s
    Update ϕ by minimizing Equation 6
end for
Initialize: policy network πψ, critic Qθ, target policy ψ̄ ← ψ, target critic θ̄ ← θ
Output: trained policy π
for t = 1 to TAC do
    Sample minibatch (s, a, r, s′) ∼ D
    Update θ using Equation 14
    if t mod m = 0 then
        Obtain aπ = π(s)
        Update ψ using Equation 13
        Update target networks: θ̄ ← ρθ + (1 − ρ)θ̄, ψ̄ ← ρψ + (1 − ρ)ψ̄
    end if
end for
return π

4.4 Algorithm Summary

Fitting the Morse Network The TD3-BST training procedure is described in Algorithm 1. The first phase fits the Morse network for TM gradient steps.

Actor-Critic Training In the second phase, a modified TD3-BC procedure is run for TAC iterations, with the alterations highlighted in red. We provide full hyperparameter details in the appendix.
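The sketch below is a compact Python rendering of Algorithm 1 under an assumed `dataset.sample` interface; the optimizer choice, learning rate, batch size, and Polyak rate are illustrative, and termination handling is omitted for brevity.

```python
import copy
import torch

def train_td3_bst(dataset, morse_net, policy, critic,
                  t_morse=100_000, t_ac=1_000_000, m=2, rho=0.005,
                  batch_size=1024, mu=0.5, lr=3e-4):
    """Two-phase training loop mirroring Algorithm 1 (sketch)."""
    # Phase 1: fit the Morse network by minimizing Equation 6.
    opt_morse = torch.optim.Adam(morse_net.parameters(), lr=lr)
    for _ in range(t_morse):
        s, a, _, _ = dataset.sample(batch_size)
        loss = morse_loss(morse_net, s, a)
        opt_morse.zero_grad(); loss.backward(); opt_morse.step()

    # Phase 2: actor-critic training with delayed policy and Polyak target updates.
    target_policy, target_critic = copy.deepcopy(policy), copy.deepcopy(critic)
    opt_pi = torch.optim.Adam(policy.parameters(), lr=lr)
    opt_q = torch.optim.Adam(critic.parameters(), lr=lr)
    for t in range(1, t_ac + 1):
        s, a, r, s_next = dataset.sample(batch_size)
        y = critic_target(target_critic, target_policy, r, s_next)      # Equation 14
        q_loss = ((critic(s, a) - y) ** 2).mean()
        opt_q.zero_grad(); q_loss.backward(); opt_q.step()
        if t % m == 0:
            pi_loss = actor_loss(critic, policy, morse_net, s, a, mu)   # Equation 13
            opt_pi.zero_grad(); pi_loss.backward(); opt_pi.step()
            # Polyak averaging of target networks.
            for net, target in ((critic, target_critic), (policy, target_policy)):
                for p, p_bar in zip(net.parameters(), target.parameters()):
                    p_bar.data.mul_(1 - rho).add_(rho * p.data)
    return policy
```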
5 Experiments

In this section, we conduct experiments that aim to answer the following questions:
• How does TD3-BST compare to other baselines, with a focus on newer baselines that use per-dataset tuning?
• Can the BST objective improve performance when used with one-step methods (IQL) that perform in-sample policy evaluation?
• How well does the Morse network learn to discriminate between in-dataset and OOD actions?
• How does changing the kernel scale parameter λ affect performance?
• Does using independent ensembles, a second method of uncertainty estimation, improve performance?

We evaluate our algorithm on the D4RL benchmark [Fu et al., 2020], including the Gym Locomotion and the challenging Antmaze navigation tasks.

5.1 Comparison with SOTA Methods

We evaluate TD3-BST against the older, well-known baselines TD3-BC [Fujimoto and Gu, 2021], CQL [Kumar et al., 2020], and IQL [Kostrikov et al., 2021b]. There are more recent methods that consistently outperform these baselines; of these, we include SQL [Xu et al., 2023], SAC-RND [Nikulin et al., 2023], DOGE [Li et al., 2022], VMG [Zhu et al., 2022], ReBRAC [Tarasov et al., 2023], CFPI [Li et al., 2023] and MSG [Ghasemipour et al., 2022] (to our knowledge, the best-performing ensemble-based method). It is interesting to note that most of these baselines implement policy constraints, except for VMG (graph-based planning) and MSG (policy constraint using a large, independent ensemble). We note that all of the aforementioned SOTA methods (except SQL) report scores with per-dataset tuned hyperparameters, in stark contrast to the older TD3-BC, CQL, and IQL algorithms, which use the same set of hyperparameters within each D4RL domain. All scores are reported over five seeds, with 10 evaluations per seed in Locomotion and 100 in Antmaze.

We present scores for D4RL Gym Locomotion in Table 1. TD3-BST achieves best or near-best results compared to all previous methods and recovers expert performance on five of nine datasets. The best-performing prior methods include SAC-RND and ReBRAC, both of which require per-dataset tuning of BRAC-variant algorithms [Wu et al., 2019]. We also evaluate TD3-BST on the more challenging Antmaze tasks, which contain a high proportion of suboptimal trajectories and follow a sparse reward scheme that requires algorithms to stitch together several trajectories to perform well.

TD3-BST achieves the best scores overall in Table 2, especially as the maze becomes more complex. VMG and MSG are the best-performing prior baselines, and TD3-BST is far simpler and more efficient in its design as a variant of TD3-BC. The authors of VMG report the best scores from checkpoints rather than from the final policy. MSG reports scores from ensembles with both 4 and 64 critics; the best scores included here are from the 64-critic variant. We pay close attention to SAC-RND, which, among all baselines, is most similar in its inception to TD3-BST. SAC-RND uses a random and trained network pair to produce a dataset-constraining penalty. SAC-RND achieves consistent SOTA scores on Locomotion datasets but fails to deliver commensurate performance on Antmaze tasks. TD3-BST performs similarly to SAC-RND on Locomotion and achieves SOTA scores on Antmaze.

5.2 Improving One-Step Methods

One-step algorithms learn a policy from an offline dataset while remaining on-policy [Rummery and Niranjan, 1994; Sutton and Barto, 2018], typically using weighted behavioral cloning [Brandfonbrener et al., 2021; Kostrikov et al., 2021b]. Empirical evaluation by [Fu et al., 2022] suggests that advantage-weighted BC is too restrictive and that relaxing the policy objective to Equation 1 can lead to performance improvement. We use the BST objective as a drop-in replacement for the policy improvement step in IQL [Kostrikov et al., 2021b] to learn an optimal policy while retaining in-sample policy evaluation. We reproduce IQL results and report scores for IQL-BST in Table 3, in both cases using a deterministic policy [Tarasov et al., 2022] and hyperparameters identical to the original work. Reproduced IQL closely matches the original results, with slight performance reductions on the -large datasets. Relaxing weighted BC with the BST objective leads to improvements in performance, especially on the more difficult -medium and -large datasets. To isolate the effect of the BST objective, we do not perform any additional tuning.

5.3 Ablation Experiments

Morse Network Analysis We analyze how well the Morse network can distinguish between dataset tuples and samples from Dperm (permutations of dataset actions) and Duni. We plot both certainty (Mϕ) densities and t-SNEs [Van der Maaten and Hinton, 2008] in Figure 3, which show that the unsupervised Morse network is effective in distinguishing between Dperm and Duni and in assigning high certainty to dataset tuples.

Figure 3: Mϕ densities and t-SNE for hopper-medium-expert (top row) and antmaze-large-diverse (bottom row). Density plots are clipped at 10.0 as the density for D is large. 10 actions are sampled from Duni and Dperm each, per state. t-SNE is plotted from the per-dimension perturbation |fϕ(s, a) − a|.

Ablating kernel scale We examine sensitivity to the kernel scale λ. Recall that k = dim(A). We see in Figure 4 that the scale λ = k/2 is a performance sweet spot on the challenging Antmaze tasks. We further illustrate this by plotting policy deviations from dataset actions in Figure 5. The scale λ = 1.0 is potentially too lax a behavioral constraint, while λ = k is too strong, resulting in reduced performance. However, performance at all scales remains strong and compares well with most prior algorithms. Performance may be further improved by tuning λ, possibly with separate scales for each input dimension.

Figure 4: Ablations of λ on Antmaze datasets. Recall k = dim(A).
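As a sketch of the Morse-network discrimination analysis above: given a trained Morse network and a batch of dataset pairs, Dperm can be emulated by shuffling actions within the batch and Duni by uniform sampling; the certainty gap between the three sources is what Figure 3 visualizes. The helper names reuse the sketches from earlier sections.

```python
import torch

def discrimination_check(morse_net, s, a, lam=1.0, kappa=1.0):
    """Mean Morse certainty on dataset, permuted, and uniform actions (cf. Figure 3)."""
    with torch.no_grad():
        m_data = morse_certainty(morse_net, s, a, lam, kappa)
        a_perm = a[torch.randperm(a.shape[0])]        # D_perm: shuffled dataset actions
        m_perm = morse_certainty(morse_net, s, a_perm, lam, kappa)
        a_uni = 2.0 * torch.rand_like(a) - 1.0        # D_uni: uniform random actions
        m_uni = morse_certainty(morse_net, s, a_uni, lam, kappa)
    return m_data.mean(), m_perm.mean(), m_uni.mean()
```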
Independent or Shared Targets? Standard TD3 employs Clipped Double Q-learning (CDQ) [Hasselt, 2010; Fujimoto et al., 2018] to prevent value overestimation. On tasks with sparse rewards, this may be too conservative [Moskovitz et al., 2021]. MSG [Ghasemipour et al., 2022] uses large ensembles of fully independent Q functions to learn offline. We examine how independent Q functions perform compared to the standard CDQ setup on Antmaze with 2 and 10 critics. The results in Figure 6 show that disabling CDQ with 2 critics is consistently detrimental to performance, while using a larger 10-critic ensemble leads to moderate improvements. This suggests that combining policy regularization with an efficient, independent ensemble could bring further performance benefits with minimal changes to the algorithm.

6 Discussion

Morse Network In [Dherin et al., 2023], deeper architectures are required even when training on simple datasets. This also holds for our application of Morse networks in this work, with low-capacity networks performing poorly. Training the Morse network for each Locomotion and Antmaze dataset typically takes 10 minutes for 100,000 gradient steps using a batch size of 1,024. When training the policy, using the Morse network increases training time by approximately 15%.

Optimal Datasets On Gym Locomotion tasks, TD3-BST performance is comparable to newer methods, all of which rarely outperform older baselines. This can be attributed to a significant proportion of high-return trajectories, which are easier to improve upon.

Dataset | TD3-BC | CQL | IQL | SQL | SAC-RND¹ | DOGE | ReBRAC | CFPI | TD3-BST (ours)
halfcheetah-m | 48.3 | 44.0 | 47.4 | 48.3 | 66.6 | 45.3 | 65.6 | 52.1 | 62.1 ± 0.8
hopper-m | 59.3 | 58.5 | 66.3 | 75.5 | 97.8 | 98.6 | 102.0 | 86.8 | 102.9 ± 1.3
walker2d-m | 83.7 | 72.5 | 78.3 | 84.2 | 91.6 | 86.8 | 82.5 | 88.3 | 90.7 ± 2.5
halfcheetah-m-r | 44.6 | 45.5 | 44.2 | 44.8 | 42.8 | 54.9 | 51.0 | 44.5 | 53.0 ± 0.7
hopper-m-r | 60.9 | 95.0 | 94.7 | 99.7 | 100.5 | 76.2 | 98.1 | 93.6 | 101.2 ± 4.9
walker2d-m-r | 81.8 | 77.2 | 73.9 | 81.2 | 88.7 | 87.3 | 77.3 | 78.2 | 90.4 ± 8.3
halfcheetah-m-e | 90.7 | 91.6 | 86.7 | 94.0 | 107.6 | 78.7 | 101.1 | 97.3 | 100.7 ± 1.1
hopper-m-e | 98.0 | 105.4 | 91.5 | 111.8 | 109.8 | 102.7 | 107.0 | 104.2 | 110.3 ± 0.9
walker2d-m-e | 110.1 | 108.8 | 109.6 | 110.0 | 105.0 | 110.4 | 111.6 | 111.9 | 109.4 ± 0.2

Table 1: Normalized scores on D4RL Gym Locomotion datasets. VMG scores are excluded because the method performs poorly, and the authors of MSG do not report numerical results on Locomotion tasks. Prior methods are grouped by those that do not perform per-dataset tuning and those that do. ¹SAC-RND, in addition to per-dataset tuning, is trained for 3 million gradient steps. Though not included here, ensemble methods may perform better than the best non-ensemble methods on some datasets, albeit still requiring per-dataset tuning to achieve their reported performance. Top scores are in bold and second-best are underlined.

Dataset | TD3-BC | CQL | IQL | SQL | SAC-RND¹ | DOGE | VMG² | ReBRAC | CFPI | MSG³ | TD3-BST (ours)
-umaze | 78.6 | 74.0 | 87.5 | 92.2 | 97.0 | 97.0 | 93.7 | 97.8 | 90.2 | 98.6 | 97.8 ± 1.0
-umaze-d | 71.4 | 84.0 | 62.2 | 74.0 | 66.0 | 63.5 | 94.0 | 88.3 | 58.6 | 81.8 | 91.7 ± 3.2
-medium-p | 10.6 | 61.2 | 71.2 | 80.2 | 74.7 | 80.6 | 82.7 | 84.0 | 75.2 | 89.6 | 90.2 ± 1.8
-medium-d | 3.0 | 53.7 | 70.0 | 79.1 | 74.7 | 77.6 | 84.3 | 76.3 | 72.2 | 88.6 | 92.0 ± 3.8
-large-p | 0.2 | 15.8 | 39.6 | 53.2 | 43.9 | 48.2 | 67.3 | 60.4 | 51.4 | 72.6 | 79.7 ± 7.6
-large-d | 0.0 | 14.9 | 47.5 | 52.3 | 45.7 | 36.4 | 74.3 | 54.4 | 52.4 | 71.4 | 76.1 ± 4.7

Table 2: Normalized scores on D4RL Antmaze datasets. ¹SAC-RND is trained for three million gradient steps. ²VMG reports scores from the best-performing checkpoint rather than from the final policy; despite this, TD3-BST still outperforms VMG on all datasets except -umaze-diverse. ³For MSG we report the best score among the reported scores of all configurations; MSG is also trained for two million steps. Prior methods are grouped by those that do not perform per-dataset tuning and those that do. Other ensemble-based methods are not included, as MSG achieves higher performance. Top scores are in bold and second-best are underlined.
Dataset | IQL (reproduced) | IQL-BST
-umaze | 87.6 ± 4.6 | 90.8 ± 2.1
-umaze-d | 64.0 ± 5.2 | 63.1 ± 3.7
-medium-p | 70.7 ± 4.3 | 80.3 ± 1.3
-medium-d | 73.8 ± 5.9 | 84.7 ± 2.0
-large-p | 35.2 ± 8.4 | 55.4 ± 3.2
-large-d | 40.7 ± 9.2 | 51.6 ± 2.6

Table 3: Normalized scores on D4RL Antmaze datasets for IQL and IQL-BST. We use hyperparameters identical to the original IQL paper and use Equation 13 as a drop-in replacement for the policy objective.

Figure 5: Histograms of deviation from dataset actions for (a) hopper-medium and (b) antmaze-large-play.

Figure 6: Percentage change in Antmaze scores without CDQ for critic ensembles consisting of 2 and 10 Q functions.

7 Conclusion

In this paper, we introduce TD3-BST, an algorithm that uses an uncertainty model to dynamically adjust the strength of regularization. Dynamic weighting allows the policy to maximize reward around individual dataset modes. Our algorithm compares well against prior methods on Gym Locomotion tasks and achieves the best scores on the more challenging Antmaze tasks, demonstrating strong performance when learning from suboptimal data. In addition, our experiments show that combining our policy regularization with an ensemble-based source of uncertainty can improve performance. Future work can explore other methods of estimating uncertainty, alternative uncertainty measures, and how best to combine multiple sources of uncertainty.