diff --git "a/abs_29K_G/test_abstract_long_2405.00987v1.json" "b/abs_29K_G/test_abstract_long_2405.00987v1.json" new file mode 100644--- /dev/null +++ "b/abs_29K_G/test_abstract_long_2405.00987v1.json" @@ -0,0 +1,261 @@ +{ + "url": "http://arxiv.org/abs/2405.00987v1", + "title": "S$^2$AC: Energy-Based Reinforcement Learning with Stein Soft Actor Critic", + "abstract": "Learning expressive stochastic policies instead of deterministic ones has\nbeen proposed to achieve better stability, sample complexity, and robustness.\nNotably, in Maximum Entropy Reinforcement Learning (MaxEnt RL), the policy is\nmodeled as an expressive Energy-Based Model (EBM) over the Q-values. However,\nthis formulation requires the estimation of the entropy of such EBMs, which is\nan open problem. To address this, previous MaxEnt RL methods either implicitly\nestimate the entropy, resulting in high computational complexity and variance\n(SQL), or follow a variational inference procedure that fits simplified actor\ndistributions (e.g., Gaussian) for tractability (SAC). We propose Stein Soft\nActor-Critic (S$^2$AC), a MaxEnt RL algorithm that learns expressive policies\nwithout compromising efficiency. Specifically, S$^2$AC uses parameterized Stein\nVariational Gradient Descent (SVGD) as the underlying policy. We derive a\nclosed-form expression of the entropy of such policies. Our formula is\ncomputationally efficient and only depends on first-order derivatives and\nvector products. Empirical results show that S$^2$AC yields more optimal\nsolutions to the MaxEnt objective than SQL and SAC in the multi-goal\nenvironment, and outperforms SAC and SQL on the MuJoCo benchmark. Our code is\navailable at:\nhttps://github.com/SafaMessaoud/S2AC-Energy-Based-RL-with-Stein-Soft-Actor-Critic", + "authors": "Safa Messaoud, Billel Mokeddem, Zhenghai Xue, Linsey Pang, Bo An, Haipeng Chen, Sanjay Chawla", + "published": "2024-05-02", + "updated": "2024-05-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Original Paper", + "paper_cat": "Model AND Based AND Reinforcement AND Learning", + "gt": "Learning expressive stochastic policies instead of deterministic ones has\nbeen proposed to achieve better stability, sample complexity, and robustness.\nNotably, in Maximum Entropy Reinforcement Learning (MaxEnt RL), the policy is\nmodeled as an expressive Energy-Based Model (EBM) over the Q-values. However,\nthis formulation requires the estimation of the entropy of such EBMs, which is\nan open problem. To address this, previous MaxEnt RL methods either implicitly\nestimate the entropy, resulting in high computational complexity and variance\n(SQL), or follow a variational inference procedure that fits simplified actor\ndistributions (e.g., Gaussian) for tractability (SAC). We propose Stein Soft\nActor-Critic (S$^2$AC), a MaxEnt RL algorithm that learns expressive policies\nwithout compromising efficiency. Specifically, S$^2$AC uses parameterized Stein\nVariational Gradient Descent (SVGD) as the underlying policy. We derive a\nclosed-form expression of the entropy of such policies. Our formula is\ncomputationally efficient and only depends on first-order derivatives and\nvector products. Empirical results show that S$^2$AC yields more optimal\nsolutions to the MaxEnt objective than SQL and SAC in the multi-goal\nenvironment, and outperforms SAC and SQL on the MuJoCo benchmark. 
Our code is\navailable at:\nhttps://github.com/SafaMessaoud/S2AC-Energy-Based-RL-with-Stein-Soft-Actor-Critic", + "main_content": "INTRODUCTION Figure 1: Comparing S2AC to SQL and SAC. S2AC with a parameterized policy is reduced to SAC if the number of SVGD steps is 0. SQL becomes equivalent to S2AC if the entropy is evaluated explicitly with our derived formula. MaxEnt RL (Todorov, 2006; Ziebart, 2010; Haarnoja et al., 2017; Kappen, 2005; Toussaint, 2009; Theodorou et al., 2010; Abdolmaleki et al., 2018; Haarnoja et al., 2018a; Vieillard et al., 2020) has been proposed to address challenges hampering the deployment of RL to real-world applications, including stability, sample efficiency (Gu et al., 2017), and robustness (Eysenbach & Levine, 2022). Instead of learning a deterministic policy, as in classical RL (Sutton et al., 1999; Schulman et al., 2017; Silver et al., 2014; Lillicrap et al., 2015), MaxEnt RL learns a stochastic policy that captures the intricacies of the action space. This enables better exploration during training and eventually better robustness to environmental perturbations at test time, i.e., the agent learns multimodal action space distributions which enables picking the next best action in case a perturbation prevents the execution of the optimal one. To achieve this, MaxEnt RL models the policy using the expressive family of EBMs (LeCun et al., 2006). This translates into learning policies that maximize the sum of expected future reward and expected future entropy. However, estimating the entropy of such complex distributions remains an open problem. To address this, existing approaches either use tricks to go around the entropy computation or make limiting assumptions on the policy. This results in either poor scalability or convergence to suboptimal solutions. For example, SQL (Haarnoja et al., 2017) implicitly incorporates entropy in the Q-function computation. This requires using importance sampling, which results in high variability and hence poor training stability and limited scalability to high dimensional action spaces. Figure 2: S2AC learns a more optimal solution to the MaxEnt RL objective than SAC and SQL. We design a multigoal environment where an agent starts from the center of the 2-d map and tries to reach one of the three goals (G1, G2, and G3). The maximum expected future reward (level curves) is the same for all the goals but the expected future entropy is different (higher on the path to G2/G3): the action distribution \u03c0(a|s) is bi-modal on the path to the left (G2 and G3) and unimodal to the right (G1). Hence, we expect the optimal policy for the MaxEnt RL objective to assign more weights to G2 and G3. We visualize trajectories (in blue) sampled from the policies learned using SAC, SQL, and S2AC. SAC quickly commits to a single mode due to its actor being tied to a Gaussian policy. Though SQL also recovers the three modes, the trajectories are evenly distributed. S2AC recovers all the modes and approaches the left two goals more frequently. This indicates that it successfully maximizes not only the expected future reward but also the expected future entropy. SAC (Haarnoja 
et al., 2018a), on the other hand, follows a variational inference procedure by fitting a Gaussian distribution to the EBM policy. This enables a closed-form evaluation of the entropy but results in a suboptimal solution. For instance, SAC fails in environments characterized by multimodal action distributions. Similar to SAC, IAPO (Marino et al., 2021) models the policy as a uni-modal Gaussian. Instead of optimizing a MaxEnt objective, it achieves multimodal policies by learning a collection of parameter estimates (mean, variance) through different initializations for different policies. To improve the expressiveness of SAC, SSPG (Cetin & Celiktutan, 2022) and SAC-NF (Mazoure et al., 2020) model the policy as a Markov chain with Gaussian transition probabilities and as a normalizing flow (Rezende & Mohamed, 2015), respectively. However, due to training stability issues, the reported results in Cetin & Celiktutan (2022) show that though both models learn multi-modal policies, they fail to maximize the expected future entropy in positive rewards setups. We propose a new algorithm, S2AC, that yields a more optimal solution to the MaxEnt RL objective. To achieve expressivity, S2AC models the policy as a Stein Variational Gradient Descent (SVGD) (Liu, 2017) sampler from an EBM over Q-values (target distribution). SVGD proceeds by first sampling a set of particles from an initial distribution, and then iteratively transforming these particles via a sequence of updates to fit the target distribution. To compute a closed-form estimate of the entropy of such policies, we use the change-of-variable formula for pdfs (Devore et al., 2012). We prove that this is only possible due to the invertibility of the SVGD update rule, which does not necessarily hold for other popular samplers (e.g., Langevin Dynamics (Welling & Teh, 2011)). While normalizing flow models (Rezende & Mohamed, 2015) are also invertible, SVGD-based policy is more expressive as it encodes the inductive bias about the unnormalized density and incorporates a dispersion term to encourage multi-modality, whereas normalizing flows encode a restrictive class of invertible transformations (with easy-to-estimate Jacobian determinants). Moreover, our formula is computationally efficient and only requires evaluating first-order derivatives and vector products. To improve scalability, we model the initial distribution of the SVGD sampler as an isotropic Gaussian and learn its parameters, i.e., mean and standard deviation, end-to-end. We show that this results in faster convergence to the target distribution, i.e., fewer SVGD steps. Intuitively, the initial distribution learns to contour the high-density region of the target distribution while the SVGD updates result in better and faster convergence to the modes within that region. Hence, our approach is as parameter efficient as SAC, since the SVGD updates do not introduce additional trainable parameters. Note that S2AC can be reduced to SAC when the number of SVGD steps is zero. Also, SQL becomes equivalent to S2AC if the entropy is computed explicitly using our formula (the policy in SQL is an amortized SVGD sampler). Beyond RL, the backbone of S2AC is a new variational inference algorithm with a more expressive and scalable distribution characterized by a closed-form entropy estimate. We believe that this variational distribution can have a wider range of exciting applications. We conduct extensive empirical evaluations of S2AC from three aspects. 
We start with a sanity check on the merit of our derived SVGD-based entropy estimate on target distributions with known entropy values (e.g., Gaussian) or log-likelihoods (e.g., Gaussian Mixture Models) and assess its sensitivity to different SVGD parameters (kernel, initial distribution, number of steps and number of particles). We observe that its performance depends on the choice of the kernel and is robust to variations of the remaining parameters. In particular, we find that the kernel should be chosen to guarantee inter-dependencies between the particles, which turns out to be essential for invertibility. Next, we assess the performance of S2AC on a multi-goal environment (Haarnoja et al., 2017) where different goals are associated with the same positive (maximum) expected future reward but different (maximum) expected future entropy. We show that S2AC learns multimodal policies and effectively maximizes the entropy, leading to better robustness to obstacles placed at test time. Finally, we test S2AC on the MuJoCo benchmark (Duan et al., 2016). S2AC yields better performance than the baselines on four out of the five environments. Moreover, S2AC shows higher sample efficiency as it tends to converge with fewer training steps. These results were obtained from running SVGD for only three steps, which results in a small overhead compared to SAC during training. Furthermore, to maximize the run-time efficiency during testing, we train an amortized SVGD version of the policy to mimic the SVGD-based policy. This reduces inference to a forward pass through the policy network without compromising the performance. 2 PRELIMINARIES 2.1 SAMPLERS FOR ENERGY-BASED MODELS In this work, we study three representative methods for sampling from EBMs: (1) Stochastic Gradient Langevin Dynamics (SGLD) & Deterministic Langevin Dynamics (DLD) (Welling & Teh, 2011), (2) Hamiltonian Monte Carlo (HMC) (Neal et al., 2011), and (3) Stein Variational Gradient Descent (SVGD) (Liu & Wang, 2016). We review SVGD here since it is the sampler we eventually use in S2AC, and leave the rest to Appendix C.1. SVGD is a particle-based Bayesian inference algorithm. Compared to SGLD and HMC, which have a single particle in their dynamics, SVGD operates on a set of particles. Specifically, SVGD samples a set of m particles {aj}m j=1 from an initial distribution q0, which it then transforms through a sequence of updates to fit the target distribution. Formally, at every iteration l, SVGD applies a form of functional gradient descent \u2206f that minimizes the KL-divergence between the target distribution p and the proposal distribution ql induced by the particles, i.e., the update rule for the ith particle is: al+1 i = al i + \u03f5\u2206f(al i) with \u2206f(al i) = Eal j\u223cql [ k(al i, al j)\u2207al j log p(al j) + \u2207al j k(al i, al j) ]. (1) Here, \u03f5 is the step size and k(\u00b7, \u00b7) is the kernel function, e.g., the RBF kernel: k(ai, aj) = exp(\u2212||ai \u2212 aj||2/(2\u03c32)). The first term within the gradient drives the particles toward the high probability regions of p, while the second term serves as a repulsive force to encourage dispersion. 
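To make Eq (1) concrete, here is a minimal NumPy sketch of SVGD with an RBF kernel on a toy 2-D Gaussian target; the function names and the analytic score are illustrative assumptions of ours, not part of the released S2AC code.

```python
import numpy as np

def rbf_kernel(A, sigma=1.0):
    # Pairwise RBF kernel k(a_i, a_j) = exp(-||a_i - a_j||^2 / (2 sigma^2)).
    diff = A[:, None, :] - A[None, :, :]                    # (m, m, d): a_i - a_j
    K = np.exp(-np.sum(diff ** 2, axis=-1) / (2 * sigma ** 2))
    grad_K = diff / sigma ** 2 * K[:, :, None]              # gradient of k w.r.t. a_j
    return K, grad_K

def svgd_step(A, score_fn, eps=0.1, sigma=1.0):
    # One SVGD update (Eq (1)): the kernel-weighted score pulls particles toward
    # high-density regions; the kernel-gradient term pushes them apart.
    K, grad_K = rbf_kernel(A, sigma)
    phi = (K @ score_fn(A) + grad_K.sum(axis=1)) / A.shape[0]
    return A + eps * phi

score_fn = lambda A: -A                   # toy target p = N(0, I), so grad log p(a) = -a
particles = 3.0 * np.random.randn(16, 2)  # m = 16 particles from a broad initial q0
for _ in range(200):                      # L SVGD steps
    particles = svgd_step(particles, score_fn)
```

Dropping the `grad_K.sum(axis=1)` term turns the update into plain kernel-smoothed gradient ascent and the particles collapse onto a single mode, which is exactly the dispersion effect described above.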
2.2 MAXIMUM-ENTROPY RL We consider an infinite horizon Markov Decision Process (MDP) defined by a tuple (S, A, p, r), where S is the state space, A is the action space and p : S \u00d7 A \u00d7 S \u2192[0, \u221e] is the state transition probability modeling the density of the next state st+1 \u2208S given the current state st \u2208S and action at \u2208A. Additionally, we assume that the environment emits a bounded reward function r \u2208[rmin, rmax] at every iteration. We use \u03c1\u03c0(st) and \u03c1\u03c0(st, at) to denote the state and state-action marginals of the trajectory distribution induced by a policy \u03c0(at|st). We consider the setup of continuous action spaces Lazaric et al. (2007); Lee et al. (2018); Zhou & Lu (2023). MaxEnt RL (Todorov, 2006; Ziebart, 2010; Rawlik et al., 2012) learns a policy \u03c0\u2217(at|st), that instead of maximizing the expected future reward, maximizes the sum of the expected future reward and entropy: \u03c0\u2217= arg max\u03c0 X t \u03b3tE(st,at)\u223c\u03c1\u03c0 \u0002 r(st, at) + \u03b1H(\u03c0(\u00b7|st)) \u0003 , (2) where \u03b1 is a temperature parameter controlling the stochasticity of the policy and H(\u03c0(\u00b7|st)) is the entropy of the policy at state st. The conventional RL objective can be recovered for \u03b1 = 0. Note that the MaxEnt RL objective above is equivalent to approximating the policy, modeled as an EBM over Q-values, by a variational distribution \u03c0(at|st) (see proof of equivalence in Appendix D), i.e., \u03c0\u2217= arg min\u03c0 X t Est\u223c\u03c1\u03c0 \u0002 DKL \u0000\u03c0(\u00b7|st)\u2225exp(Q(st, \u00b7)/\u03b1)/Z \u0001\u0003 , (3) where DKL is the KL-divergence and Z is the normalizing constant. We now review two landmark MaxEnt RL algorithms: SAC (Haarnoja et al., 2018a) and SQL (Haarnoja et al., 2017). SAC is an actor-critic algorithm that alternates between policy evaluation, i.e., evaluating the Q-values for a policy \u03c0\u03b8(at|st): Q\u03d5(st, at) \u2190r(st, at) + \u03b3 Est+1,at+1\u223c\u03c1\u03c0\u03b8 \u0002 Q\u03d5(st+1, at+1) + \u03b1H(\u03c0\u03b8(\u00b7|st+1)) \u0003 (4) 3 \fPublished as a conference paper at ICLR 2024 and policy improvement, i.e., using the updated Q-values to compute a better policy: \u03c0\u03b8 = arg max\u03b8 X t Est,at\u223c\u03c1\u03c0\u03b8 \u0002 Q\u03d5(at, st) + \u03b1H(\u03c0\u03b8(\u00b7|st)) \u0003 . (5) SAC models \u03c0\u03b8 as an isotropic Gaussian, i.e., \u03c0\u03b8(\u00b7|s) = N(\u00b5\u03b8, \u03c3\u03b8I). While this enables computing a closed-form expression of the entropy, it incurs an over-simplification of the true action distribution, and thus cannot represent complex distributions, e.g., multimodal distributions. SQL goes around the entropy computation, by defining a soft version of the value function V\u03d5 = \u03b1 log \u0000 R A exp \u0000 1 \u03b1Q\u03d5(st, a\u2032) \u0001 da\u2032\u0001 . This enables expressing the Q-value (Eq (4)) independently from the entropy, i.e., Q\u03d5(st, at) = r(st, at) + \u03b3Est+1\u223cp[V\u03d5(st+1)]. Hence, SQL follows a soft value iteration which alternates between the updates of the \u201csoft\u201d versions of Q and value functions: Q\u03d5(st, at) \u2190r(st, at) + \u03b3Est+1\u223cp[V\u03d5(st+1)], \u2200(st, at) (6) V\u03d5(st) \u2190\u03b1 log \u0000 R A exp \u0000 1 \u03b1Q\u03d5(st, a\u2032) \u0001 da\u2032\u0001 , \u2200st. 
(7) Once the Q\u03d5 and V\u03d5 functions converge, SQL uses amortized SVGD Wang & Liu (2016) to learn a stochastic sampling network f\u03b8(\u03be, st) that maps noise samples \u03be into the action samples from the EBM policy distribution \u03c0\u2217(at|st) = exp \u0000 1 \u03b1(Q\u2217(st, at) \u2212V \u2217(st)) \u0001 . The parameters \u03b8 are obtained by minimizing the loss J\u03b8(st) = DKL \u0000\u03c0\u03b8(\u00b7|st)|| exp \u0000 1 \u03b1(Q\u2217 \u03d5(st, \u00b7) \u2212V \u2217 \u03d5 (st)) \u0001 with respect to \u03b8. Here, \u03c0\u03b8 denotes the policy induced by f\u03b8. SVGD is designed to minimize such KL-divergence without explicitly computing \u03c0\u03b8. In particular, SVGD provides the most greedy direction as a functional \u2206f\u03b8(\u00b7, st) (Eq (1)) which can be used to approximate the gradient \u2202J\u03b8/\u2202at. Hence, the gradient of the loss J\u03b8 with respect to \u03b8 is: \u2202J\u03b8(st)/\u2202\u03b8 \u221dE\u03be \u0002 \u2206f\u03b8(\u03be, st)\u2202f\u03b8(\u03be, st)/\u2202\u03b8 \u0003 . Note that the integral in Eq (7) is approximated via importance sampling, which is known to result in high variance estimates and hence poor scalability to high dimensional action spaces. Moreover, amortized generation is usually unstable and prone to mode collapse, an issue similar to GANs. Therefore, SQL is outperformed by SAC Haarnoja et al. (2018a) on benchmark tasks like MuJoCo. 3 APPROACH We introduce S2AC, a new actor-critic MaxEnt RL algorithm that uses SVGD as the underlying actor to generate action samples from policies represented using EBMs. This choice is motivated by the expressivity of distributions that can be fitted via SVGD. Additionally, we show that we can derive a closed-form entropy estimate of the SVGD-induced distribution, thanks to the invertibility of the update rule, which does not necessarily hold for other EBM samplers. Besides, we propose a parameterized version of SVGD to enable scalability to high-dimensional action spaces and nonsmooth Q-function landscapes. S2AC is hence capable of learning a more optimal solution to the MaxEnt RL objective (Eq (2)) as illustrated in Figure 2. 3.1 STEIN SOFT ACTOR CRITIC Like SAC, S2AC performs soft policy iteration which alternates between policy evaluation and policy improvement. The difference is that we model the actor as a parameterized sampler from an EBM. Hence, the policy distribution corresponds to an expressive EBM as opposed to a Gaussian. Critic. The critic\u2019s parameters \u03d5 are obtained by minimizing the Bellman loss as traditionally: \u03d5\u2217= arg min\u03d5 E(st,at)\u223c\u03c1\u03c0\u03b8 \u0002 (Q\u03d5(st, at) \u2212\u02c6 y)2\u0003 , (8) with the target \u02c6 y = rt(st, at) + \u03b3E(st+1,at+1)\u223c\u03c1\u03c0 \u0002 Q \u00af \u03d5(st+1, at+1) + \u03b1H(\u03c0(\u00b7|st+1)) \u0003 . Here \u00af \u03d5 is an exponentially moving average of the value network weights (Mnih et al., 2015). Actor as an EBM sampler. The actor is modeled as a sampler from an EBM over the Q-values. To generate a set of valid actions, the actor first samples a set of particles {a0} from an initial distribution q0 (e.g., Gaussian). These particles are then updated over several iterations l \u2208[1, L], i.e., {al+1} \u2190{al} + \u03f5h({al}, s) following the sampler dynamics characterized by a transformation h (e.g., for SVGD, h = \u2206f in Eq (1)). 
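In pseudocode, the sampling procedure of such an actor can be sketched as follows (a hedged illustration of this paragraph only; `grad_q`, `sample_q0`, and the plain-gradient placeholder for h are hypothetical stand-ins, not the actual S2AC implementation):

```python
import numpy as np

def actor_sample(state, grad_q, sample_q0, h, alpha=1.0, eps=0.1, L=3, m=16):
    # Draw m particles {a^0} from the initial distribution q0(.|s), then apply
    # {a^{l+1}} <- {a^l} + eps * h({a^l}, s) so that the final particles
    # approximate the EBM policy pi(a|s) proportional to exp(Q(s, a) / alpha).
    A = sample_q0(state, m)                        # (m, action_dim)
    score = lambda X: grad_q(state, X) / alpha     # grad_a log p(a|s) = grad_a Q(s, a) / alpha
    for _ in range(L):
        A = A + eps * h(A, score)
    return A

# Toy wiring with stand-ins (all hypothetical):
grad_q = lambda s, A: -A                           # pretend Q(s, a) = -||a||^2 / 2
sample_q0 = lambda s, m: np.random.randn(m, 2)
h = lambda A, score: score(A)                      # placeholder dynamics; SVGD's Delta f from Eq (1) in practice
actions = actor_sample(state=None, grad_q=grad_q, sample_q0=sample_q0, h=h)
```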
If q0 is tractable and h is invertible, it\u2019s possible to compute a closed-form expression of the distribution of the particles at the lth iteration via the change-of-variable formula Devore et al. (2012): ql(al|s) = ql\u22121(al\u22121|s) |det(I + \u03f5\u2207al h(al, s))|\u22121, \u2200l \u2208 [1, L]. In this case, the policy is represented using the particle distribution at the final step L of the sampler dynamics, i.e., \u03c0(a|s) = qL(aL|s), and the entropy can be estimated by averaging log qL(aL|s) over a set of particles (Section 3.2). We study the invertibility of popular EBM samplers in Section 3.3. Figure 3: S2AC(\u03d5, \u03b8) achieves faster convergence to the target distribution (in orange) than S2AC(\u03d5) by parameterizing the initial distribution N(\u00b5\u03b8, \u03c3\u03b8) of the SVGD sampler. Parameterized initialization. To reduce the number of steps required to converge to the target distribution (hence reducing computation cost), we further propose modeling the initial distribution as a parameterized isotropic Gaussian, i.e., a0 \u223cN(\u00b5\u03b8(s), \u03c3\u03b8(s)). The reparameterization trick is then used to express a0 as a function of \u03b8. Intuitively, the actor would learn \u03b8 such that the initial distribution is close to the target distribution. Hence, fewer steps are required to converge, as illustrated in Figure 3. Note that if the number of steps L = 0, S2AC is reduced to SAC. Besides, to deal with the non-smooth nature of deep Q-function landscapes, which might lead to particle divergence in the sampling process, we bound the particle updates to be within a few standard deviations (t) from the mean of the learned initial distribution, i.e., \u2212t\u03c3\u03b8 \u2264al \u03b8 \u2264t\u03c3\u03b8, \u2200l \u2208 [1, L]. Eventually, the initial distribution q0 \u03b8 learns to contour the high-density region of the target distribution and the following updates refine it by converging to the spanned modes. Formally, the parameters \u03b8 are computed by minimizing the expected KL-divergence between the policy qL \u03b8 induced by the particles from the sampler and the EBM of the Q-values: \u03b8\u2217= arg max\u03b8 Est\u223cD, aL\u03b8\u223c\u03c0\u03b8 [ Q\u03d5(st, aL \u03b8) ] + \u03b1Est\u223cD [H(\u03c0\u03b8(\u00b7|st))] s.t. \u2212t\u03c3\u03b8 \u2264al \u03b8 \u2264t\u03c3\u03b8, \u2200l \u2208 [1, L]. (9) Here, D is the replay buffer. The derivation is in Appendix E. Note that we do not enforce the constraint by truncating the particles, since truncation is not an invertible transformation and would violate the assumptions of the change-of-variable formula. Instead, we sample more particles than we need and select the ones that stay within the range. We refer to the two versions of S2AC with and without the parameterized initial distribution as S2AC(\u03d5, \u03b8) and S2AC(\u03d5), respectively. The complete S2AC algorithm is in Algorithm 1 of Appendix A. 3.2 A CLOSED-FORM EXPRESSION OF THE POLICY\u2019S ENTROPY A critical challenge in MaxEnt RL is how to efficiently compute the entropy term H(\u03c0(\u00b7|st+1)) in Eq (2). We show that, if we model the policy as an iterative sampler from the EBM, under certain conditions, we can derive a closed-form estimate of the entropy at convergence. Theorem 3.1. 
Let F : Rn \u2192 Rn be an invertible transformation of the form F(a) = a + \u03f5h(a). We denote by qL(aL) the distribution obtained from repeatedly applying F to a set of samples {a0} from an initial distribution q0(a0) over L steps, i.e., aL = F \u25e6 F \u25e6 \u00b7\u00b7\u00b7 \u25e6 F(a0). Under the condition \u03f5||\u2207a_i^l h(a_i^l)||\u221e \u226a 1, \u2200l \u2208 [1, L], the distribution of the particles at the Lth step is: log qL(aL) \u2248 log q0(a0) \u2212 \u03f5 \u03a3_{l=0}^{L\u22121} Tr(\u2207_{a^l} h(a^l)) + O(\u03f52dL). (10) Here, d is the dimensionality of a, i.e., a \u2208 Rd, and O(\u03f52dL) is the order of the approximation error. Proof Sketch: As F is invertible, we apply the change-of-variable formula (Appendix C.2) on the transformation F \u25e6 F \u25e6 \u00b7\u00b7\u00b7 \u25e6 F and obtain: log qL(aL) = log q0(a0) \u2212 \u03a3_{l=0}^{L\u22121} log |det(I + \u03f5\u2207_{a^l} h(a^l))|. Under the assumption \u03f5||\u2207a_i h(a_i)||\u221e \u226a 1, we apply the corollary of Jacobi\u2019s formula (Appendix C.3) and get Eq. (10). The detailed proof is in Appendix F. Note that the condition \u03f5||\u2207a_i h(a_i)||\u221e \u226a 1 can always be satisfied when we choose a sufficiently small step size \u03f5, or when the gradient of h(a) is small, i.e., h(a) is Lipschitz continuous with a sufficiently small constant. It follows from the theorem above that the entropy of a policy modeled as an EBM sampler (Eq (9)) can be expressed analytically as: H(\u03c0\u03b8(\u00b7|s)) = \u2212E_{a0\u03b8\u223cq0\u03b8}[ log qL\u03b8(aL\u03b8|s) ] \u2248 \u2212E_{a0\u03b8\u223cq0\u03b8}[ log q0\u03b8(a0\u03b8|s) \u2212 \u03f5 \u03a3_{l=0}^{L\u22121} Tr(\u2207_{a^l_\u03b8} h(a^l_\u03b8, s)) ]. (11) In the following, we drop the dependency of the action on \u03b8 for simplicity of the notation. 3.3 INVERTIBLE POLICIES Next, we study the invertibility of three popular EBM samplers: SVGD, SGLD, and HMC, as well as the efficiency of computing the trace, i.e., Tr(\u2207_{a^l} h(a^l, s)) in Eq (10), for the ones that are invertible. Proposition 3.2 (SVGD invertibility). Given the SVGD learning rate \u03f5 and RBF kernel k(\u00b7, \u00b7) with variance \u03c3, if \u03f5 \u226a \u03c3, the update rule of SVGD dynamics defined in Eq (1) is invertible. Figure 4: Entropy evaluation results. (a) Recovering the GT entropy (estimated H(qL): SVGD 3.5, DLD \u221225.93, SGLD \u221211.57, HMC \u221254.5; initial distribution q0 = N(0, 6I)); (b) effect of the kernel variance \u03c3 on H(qL); (c) effect of m and L on H(qL). Proof Sketch: We use the explicit function theorem to show that the Jacobian \u2207aF(a, s) of the update rule F(a, s) is diagonally dominant and hence invertible. This yields invertibility of F(a, s). See detailed proof in Appendix G.3. Theorem 3.3. The closed-form estimate of log qL(aL|s) for the SVGD-based sampler with an RBF kernel k(\u00b7, \u00b7) is log qL(aL|s) \u2248 log q0(a0|s) + (\u03f5/(m\u03c32)) \u03a3_{l=0}^{L\u22121} \u03a3_{j=1, a^l \u0338= a^l_j}^{m} k(a^l_j, a^l) ( (a^l \u2212 a^l_j)\u22a4 \u2207_{a^l_j} Q(s, a^l_j) + (\u03b1/\u03c32) \u2225a^l \u2212 a^l_j\u22252 \u2212 d\u03b1 ). Here, (\u00b7)\u22a4 denotes the transpose of a matrix/vector. 
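As a sanity-check illustration of Eq (10)-(11), the sketch below tracks the particle log-densities of an SVGD sampler on a toy 2-D Gaussian target, estimating Tr(\u2207h) by finite differences rather than the closed-form RBF expression of Theorem 3.3; all names and constants are our own assumptions, not the paper's released code.

```python
import numpy as np

def svgd_phi(A, score_fn, sigma=1.0):
    # SVGD direction h = Delta f (Eq (1)) with an RBF kernel.
    diff = A[:, None, :] - A[None, :, :]
    K = np.exp(-np.sum(diff ** 2, axis=-1) / (2 * sigma ** 2))
    grad_K = diff / sigma ** 2 * K[:, :, None]
    return (K @ score_fn(A) + grad_K.sum(axis=1)) / len(A)

def trace_jacobian(h, A, i, delta=1e-4):
    # Finite-difference estimate of Tr(grad_{a_i} h(a_i)) for particle i.
    base, tr = h(A), 0.0
    for d in range(A.shape[1]):
        Ap = A.copy()
        Ap[i, d] += delta
        tr += (h(Ap)[i, d] - base[i, d]) / delta
    return tr

score_fn = lambda A: -A                       # target p = N(0, I); its entropy is 1 + log(2*pi) ~ 2.84
m, dim, eps, L = 32, 2, 0.05, 200
A = 3.0 * np.random.randn(m, dim)             # a^0 ~ q0 = N(0, 9 I)
log_q = (-0.5 * np.sum(A ** 2, axis=1) / 9.0
         - dim * np.log(3.0) - 0.5 * dim * np.log(2 * np.pi))   # log q0(a^0)
h = lambda X: svgd_phi(X, score_fn)
for _ in range(L):
    # Eq (10): accumulate -eps * Tr(grad_a h(a)) along the particle trajectory.
    log_q -= eps * np.array([trace_jacobian(h, A, i) for i in range(m)])
    A = A + eps * h(A)
entropy_estimate = -np.mean(log_q)            # Eq (11): H ~ -E[log q^L(a^L)]
```

At convergence the estimate should approach the target's entropy, mirroring the Gaussian sanity check reported in Section 4.1.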
Note that the entropy does not depend on any matrix computation, but only on vector dot products and first-order vector derivatives. The proof is in Appendix H.1. Intuitively, the derived likelihood is proportional to (1) the concavity of the curvature of the Q-landscape, captured by a weighted average of the neighboring particles\u2019 Q-value gradients and (2) pairwise-distances between the neighboring particles (\u223c\u2225al i\u2212al j\u22252 \u00b7 exp (\u2225al i\u2212al j\u22252)), i.e., the larger the distance the higher is the entropy. We elaborate on the connection between this formula and non-parametric entropy estimators in Appendix B. Proposition 3.4 (SGLD, HMC). The SGLD and HMC updates are not invertible w.r.t. a. Proof Sketch: SGLD is stochastic (noise term) and thus not injective. HMC is only invertible if conditioned on the velocity v. Detailed proofs are in Appendices G.1-G.2. From the above theoretic analysis, we can see that SGLD update is not invertible and hence is not suitable as a sampler for S2AC. While the HMC update is invertible, its derived closed-form entropy involves calculating Hessian and hence computationally more expensive. Due to these considerations, we choose to use SVGD with an RBF kernel as the underlying sampler of S2AC. 4 RESULTS We first evaluate the correctness of our proposed closed-form entropy formula. Then we present the results of different RL algorithms on multigoal and MuJoCo environments. 4.1 ENTROPY EVALUATION This experiment tests the correctness of our entropy formula. We compare the estimated entropy for distributions (with known ground truth entropy or log-likelihoods) using different samplers and study the sensitivity of the formula to different samplers\u2019 parameters. (1) Recovering the ground truth entropy. In Figure 4a, we plot samples (black dots) obtained by SVGD, SGLD, DLD and HMC at convergence to a Gaussian with ground truth entropy H(p) = 3.41, starting from the same initial distribution (leftmost sub-figure). We also report the entropy values computed via Eq.(11). Unlike SGLD, DLD, and HMC, SVGD recovers the ground truth entropy. This empirically supports Proposition 3.4 that SGLD, DLD, and HMC are not invertible. (2) Effect of the kernel variance. Figure 4b shows the effect of different SVGD kernel variances \u03c3, where we use the same initial Gaussian from Figure 4a. We also visualize the particle distributions after L SVGD steps for the different configurations in Figure 9 of Appendix I. We can see that when the kernel variance is too small (e.g., \u03c3 = 0.1), the invertibility is violated, and thus the estimated entropy is wrong even at convergence. On the other extreme when the kernel variance is too large (e.g., \u03c3=100), i.e., when the particles are too scattered initially, the particles do not converge to the target Gaussian due to noisy gradients in the first term of Eq.(1). The best configurations hence lie somewhere in between (e.g., \u03c3\u2208{3, 5, 7}). (3) Effect of SVGD steps and particles. Figure 4c and Figure 10b (Appendix. I) show the behavior of our entropy formula under different configurations of the number of SVGD steps and particles, on two settings: (i) GMM M with an increasing number of components M, and (ii) distributions with increasing ground truth entropy values, i.e., Gaussians with increasing variances \u03c3. 
Results show that our entropy consistently grows with an increasing M (Figure 4c) and increasing \u03c3 (Figure 10b), even when a small number of SVGD steps and particles is used (e.g., L = 10, m = 10). 4.2 MULTI-GOAL EXPERIMENTS Figure 5: Multigoal environment. To check if S2AC learns a better solution to the max-entropy objective (Eq (2)), we design a new multi-goal environment as shown in Figure 5. The agent is a 2D point mass at the origin trying to reach one of the goals (in red). Q-landscapes are depicted by level curves. Actions are bounded in [\u22121, 1] along both axes. Critical states for the analysis are marked with blue crosses. It is built on the multi-goal environment in Haarnoja et al. (2017) with modifications such that all the goals have (i) the same maximum expected future reward (positive) but (ii) different maximum expected future entropy. This is achieved by asymmetrically placing the goals (two goals on the left side and one on the right, leading to a higher expected future entropy on the left side) while assigning the same final rewards to all the goals. The problem setup and hyperparameters are detailed in Appendix J. (1) Multi-modality. Figure 6 visualizes trajectories (blue lines) collected from 20 episodes of S2AC(\u03d5, \u03b8), S2AC(\u03d5), SAC, SQL and SAC-NF (SAC with a normalizing flow policy, Mazoure et al. (2020)) agents (rows) at test time for increasing entropy weights \u03b1 (columns). S2AC and SQL consistently cover all the modes for all \u03b1 values, while this is only achieved by SAC and SAC-NF for large \u03b1 values. Note that, in the case of SAC, this comes at the expense of accuracy. Although normalizing flows are expressive enough in theory, they are known to quickly collapse to local optima in practice (Kobyzev et al., 2020). The dispersion term in S2AC encodes an inductive bias to mitigate this issue. (2) Maximizing the expected future entropy. We also see that with increasing \u03b1, more S2AC and SAC-NF trajectories converge to the left goals (G2/G3). This shows both models learn to maximize the expected future entropy. This is not the case for SQL, whose trajectory distribution remains uniform across the goals. SAC results do not show a consistent trend. This validates the hypothesis that the entropy term in SAC only helps exploration but does not lead to maximizing future entropy. The quantified distribution over reached goals is in Figure 12 of Appendix J. (3) Robustness/adaptability. To assess the robustness of the learned policies, we place an obstacle (red bar in Figure 7) on the path to G2. We show the test time trajectories of 20 episodes using S2AC, SAC, SQL and SAC-NF agents trained with different \u03b1\u2019s. We observe that, for S2AC and SAC-NF, with increasing \u03b1, more trajectories reach the goal after hitting the obstacle. This is not the case for SAC, where many trajectories hit the obstacle without reaching the goal. SQL does not manage to escape the barrier even with higher \u03b1. Additional results on the (4) effect of parameterization of q0, and the (5) entropy\u2019s effect on the learned Q-landscapes are respectively reported in Figure 11 and Figure 14 of Appendix J. 
Figure 6: S2AC and SAC-NF learn to maximize the expected future entropy (biased towards G2/G3) while SAC and SQL do not. S2AC consistently recovers all modes, while SAC-NF with smaller \u03b1\u2019s does not, indicating its instability. (Rows: S2AC(\u03d5), SAC, SQL, S2AC(\u03d5, \u03b8), SAC-NF; columns: \u03b1 = 0.2, 1, 10, 20.) Figure 7: S2AC and SAC-NF are more robust to perturbations. Obstacle O is placed diagonally at [\u22121, 1]. Trajectories that did and did not reach the goal after hitting O are in green and red, respectively. (a) Hopper-v2 (b) Walker2d-v2 (c) HalfCheetah-v2 (d) Ant-v2 (e) Humanoid-v2 (f) Median (g) IQM (h) Mean (i) Optimality Gap (j) P(X>Y) Figure 8: (a)-(e): Performance curves on the MuJoCo benchmark (training). S2AC outperforms SQL and SAC-NF on all environments and SAC on 4 out of 5 environments. (f)-(i): Comparison of Median, IQM, Mean, and Optimality Gap between S2AC and baseline algorithms. (j): The probabilities of S2AC outperforming baseline algorithms. 4.3 MUJOCO EXPERIMENTS We evaluate S2AC on five environments from MuJoCo (Brockman et al., 2016): Hopper-v2, Walker2d-v2, HalfCheetah-v2, Ant-v2, and Humanoid-v2. As baselines, we use (1) DDPG (Gu et al., 2017), (2) PPO (Schulman et al., 2015), (3) SQL (Haarnoja et al., 2017), (4) SAC-NF (Mazoure et al., 2020), and (5) SAC (Haarnoja et al., 2018a). Hyperparameters are in Appendix K. (1) Performance and sample efficiency. We train five different instances of each algorithm with different random seeds, with each performing 100 evaluation rollouts every 1000 environment steps. Performance results are in Figure 8(a)-(e). The solid curves correspond to the mean returns over the five trials and the shaded region represents the minimum and maximum. S2AC(\u03d5, \u03b8) is consistently better than SQL and SAC-NF across all the environments and has superior performance to SAC in four out of five environments. Results also show that the initial parameterization was key to ensuring scalability (S2AC(\u03d5) has poor performance compared to S2AC(\u03d5, \u03b8)). Figure 8(f)-(j) demonstrate the statistical significance of these gains by leveraging statistics from the rliable library (Agarwal et al., 2021), which we detail in Appendix K. Table 1: Action selection run-time on MuJoCo (columns: Hopper, Walker2d, HalfCheetah, Ant). Action dim: 3, 6, 6, 8. State dim: 11, 17, 17, 111. SAC: 0.723, 0.714, 0.731, 0.708. SQL: 0.839, 0.828, 0.815, 0.836. S2AC(\u03d5, \u03b8): 3.267, 4.622, 4.583, 5.917. S2AC(\u03d5, \u03b8, \u03c8): 0.850, 0.817, 0.830, 0.837. (2) Run-time. We report the run-time of action selection of SAC, SQL, and S2AC algorithms in Table 1. S2AC(\u03d5, \u03b8) run-time increases linearly with the action space. 
To improve the scalability, we train an amortized version that we deploy at test-time, following (Haarnoja et al., 2017). Specifically, we train a feed-forward deepnet f\u03c8(s, z) to mimic the SVGD dynamics during testing, where z is a random vector that allows mapping the same state to different particles. Note that we cannot use f\u03c8(s, z) during training as we need to estimate the entropy in Eq (11), which depends on the unrolled SVGD dynamics (details in Appendix K). The amortized version S2AC(\u03d5, \u03b8, \u03c8) has a similar run-time to SAC and SQL with a slight tradeoff in performance (Figure 8). 5 RELATED WORK MaxEnt RL (Todorov, 2006; Ziebart, 2010; Rawlik et al., 2012) aims to learn a policy that gets high rewards while acting as randomly as possible. To achieve this, it maximizes the sum of expected future reward and expected future entropy. It is different from entropy regularization (Schulman et al., 2015; O\u2019Donoghue et al., 2016; Schulman et al., 2017) which maximizes entropy at the current time step. It is also different from multi-modal RL approaches (Tang & Agrawal, 2018) which recover different modes with equal frequencies without considering their future entropy. MaxEnt RL has been broadly incorporated in various RL domains, including inverse RL (Ziebart et al., 2008; Finn et al., 2016), stochastic control (Rawlik et al., 2012; Toussaint, 2009), guided policy search (Levine & Koltun, 2013), and off-policy learning (Haarnoja et al., 2018a;b). MaxEnt RL is shown to maximize a lower bound of the robust RL objective (Eysenbach & Levine, 2022) and is hence less sensitive 8 \fPublished as a conference paper at ICLR 2024 to perturbations in state and reward functions. From the variational inference lens, MaxEnt RL aims to find the policy distribution that minimizes the KL-divergence to an EBM over Q-function. The desired family of variational distributions is (1) expressive enough to capture the intricacies of the Q-value landscape (e.g., multimodality) and (2) has a tractable entropy estimate. These two requirements are hard to satisfy. SAC (Haarnoja et al., 2018a) uses a Gaussian policy. Despite having a tractable entropy, it fails to capture arbitrary Q-value landscapes. SAC-GMM (Haarnoja, 2018) extends SAC by modeling the policy as a Gaussian Mixture Model, but it requires an impractical grid search over the number of components. Other extensions include IAPO (Marino et al., 2021) which also models the policy as a uni-modal Gaussian but learns a collection of parameter estimates (mean, variance) through different initializations. While this yields multi-modality, it does not optimize a MaxEnt objective. SSPG (Cetin & Celiktutan, 2022) and SAC-NF (Mazoure et al., 2020) respectively improve the policy expressivity by modeling the policy as a Markov chain with Gaussian transition probabilities and as a normalizing flow. Due to training instability, the reported multi-goal experiments in (Cetin & Celiktutan, 2022) show that, though both models capture multimodality, they fail to maximize the expected future entropy in positive reward setups. SQL (Haarnoja et al., 2017), on the other hand, bypasses the explicit entropy computation altogether via a soft version of value iteration. It then trains an amortized SVGD (Wang & Liu, 2016) sampler from the EBM over the learned Q-values. However, estimating soft value functions requires approximating integrals via importance sampling which is known to have high variance and poor scalability. 
We propose a new family of variational distributions induced by a parameterized SVGD sampler from the EBM over Q-values. Our policy is expressive and captures multi-modal distributions while being characterized by a tractable entropy estimate. EBMs (LeCun et al., 2006; Wu et al., 2018) are represented as Gibbs densities p(x) = exp E(x)/Z, where E(x) \u2208R is an energy function describing inter-variable dependencies and Z = R exp E(x) is the partition function. Despite their expressiveness, EBMs are not tractable as the partition function requires integrating over an exponential number of configurations. Markov Chain Monte Carlo (MCMC) methods (Van Ravenzwaaij et al., 2018) (e.g., HMC (Hoffman & Gelman, 2014), SGLD (Welling & Teh, 2011)) are frequently used to approximate the partition function via sampling. There have been recent efforts to parameterize these samplers via deepnets (Levy et al., 2017; Gong et al., 2018; Feng et al., 2017) to improve scalability. Similarly to these methods, we propose a parameterized variant of SVGD (Liu & Wang, 2016) as an EBM sampler to enable scalability to highdimensional action spaces. Beyond sampling, we derive a closed-form expression of the sampling distribution as an estimate of the EBM. This yields a tractable estimate of the entropy. This is opposed to previous methods for estimating EBM entropy which mostly rely on heuristic approximation, lower bounds Dai et al. (2017; 2019a), or neural estimators of mutual information (Kumar et al., 2019). The idea of approximating the entropy of EBMs via MCMC sampling by leveraging the change of variable formula was first proposed in Dai et al. (2019b). The authors apply the formula to HMC and LD, which, as we show previously, violate the invertibility assumption. To go around this, they augment the EBM family with the noise or velocity variable for LD and HMC respectively. But the derived log-likelihood of the sampling distribution turns out to be \u2013counter-intuitively\u2013 independent of the sampler\u2019s dynamics and equal to the initial distribution, which is then parameterized using a flow model (details in Appendix B.2). We show that SVGD is invertible, and hence we sample from the original EBM, so that our derived entropy is more intuitive as it depends on the SVGD dynamics. SVGD-augmented RL (Liu & Wang, 2016) has been explored under other RL contexts. Liu et al. (2017) use SVGD to learn a distribution over policy parameters. While this leads to learning diverse policies, it is fundamentally different from our approach as we are interested in learning a single multi-modal policy with a closed-form entropy formula. Castanet et al. (2023); Chen et al. (2021) use SVGD to sample from multimodal distributions over goals/tasks. We go beyond sampling and use SVGD to derive a closed-form entropy formula of an expressive variational distribution. 6", + "additional_graph_info": { + "graph": [ + [ + "Safa Messaoud", + "Alexander G. 
Schwing" + ], + [ + "Safa Messaoud", + "Zhenghai Xue" + ], + [ + "Zhenghai Xue", + "Qingpeng Cai" + ], + [ + "Zhenghai Xue", + "Kun Gai" + ] + ], + "node_feat": { + "Safa Messaoud": [ + { + "url": "http://arxiv.org/abs/2405.00987v1", + "title": "S$^2$AC: Energy-Based Reinforcement Learning with Stein Soft Actor Critic", + "abstract": "Learning expressive stochastic policies instead of deterministic ones has\nbeen proposed to achieve better stability, sample complexity, and robustness.\nNotably, in Maximum Entropy Reinforcement Learning (MaxEnt RL), the policy is\nmodeled as an expressive Energy-Based Model (EBM) over the Q-values. However,\nthis formulation requires the estimation of the entropy of such EBMs, which is\nan open problem. To address this, previous MaxEnt RL methods either implicitly\nestimate the entropy, resulting in high computational complexity and variance\n(SQL), or follow a variational inference procedure that fits simplified actor\ndistributions (e.g., Gaussian) for tractability (SAC). We propose Stein Soft\nActor-Critic (S$^2$AC), a MaxEnt RL algorithm that learns expressive policies\nwithout compromising efficiency. Specifically, S$^2$AC uses parameterized Stein\nVariational Gradient Descent (SVGD) as the underlying policy. We derive a\nclosed-form expression of the entropy of such policies. Our formula is\ncomputationally efficient and only depends on first-order derivatives and\nvector products. Empirical results show that S$^2$AC yields more optimal\nsolutions to the MaxEnt objective than SQL and SAC in the multi-goal\nenvironment, and outperforms SAC and SQL on the MuJoCo benchmark. Our code is\navailable at:\nhttps://github.com/SafaMessaoud/S2AC-Energy-Based-RL-with-Stein-Soft-Actor-Critic", + "authors": "Safa Messaoud, Billel Mokeddem, Zhenghai Xue, Linsey Pang, Bo An, Haipeng Chen, Sanjay Chawla", + "published": "2024-05-02", + "updated": "2024-05-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "main_content": "INTRODUCTION S!AC (ours) SQL (Haarnoja et al., ICML 17) SAC (Haarnoja et al., ICML 18) Explicit entropy evaluation Num. SVGD steps = 0 Figure 1: Comparing S2AC to SQL and SAC. S2AC with a parameterized policy is reduced to SAC if the number of SVGD steps is 0. SQL becomes equivalent to S2AC if the entropy is evaluated explicitly with our derived formula. MaxEnt RL (Todorov, 2006; Ziebart, 2010; Haarnoja et al., 2017; Kappen, 2005; Toussaint, 2009; Theodorou et al., 2010; Abdolmaleki et al., 2018; Haarnoja et al., 2018a; Vieillard et al., 2020) has been proposed to address challenges hampering the deployment of RL to real-world applications, including stability, sample efficiency (Gu et al., 2017), and robustness (Eysenbach & Levine, 2022). Instead of learning a deterministic policy, as in classical RL (Sutton et al., 1999; Schulman et al., 2017; Silver et al., 2014; Lillicrap et al., 2015), MaxEnt RL learns a stochastic policy that captures the intricacies of the action space. This enables better exploration during training and eventually better robustness to environmental perturbations at test time, i.e., the agent learns multimodal action space distributions which enables picking the next best action in case a perturbation prevents the execution of the optimal one. To achieve this, MaxEnt RL models the policy using the expressive family of EBMs (LeCun et al., 2006). This translates into learning policies that maximize the sum of expected future reward and expected future entropy. 
However, estimating the entropy of such complex distributions remains an open problem. To address this, existing approaches either use tricks to go around the entropy computation or make limiting assumptions on the policy. This results in either poor scalability or convergence to suboptimal solutions. For example, SQL (Haarnoja et al., 2017) implicitly incorporates entropy in the Q-function computation. This requires using importance sampling, which results in high variability and hence poor training stability and limited scalability to high dimensional action spaces. SAC (Haarnoja 1 arXiv:2405.00987v1 [cs.LG] 2 May 2024 \fPublished as a conference paper at ICLR 2024 \u03c0(\"|$!) \u03c0(\"|$\") STAC SQL SAC !! !\" S!AC Figure 2: S2AC learns a more optimal solution to the MaxEnt RL objective than SAC and SQL. We design a multigoal environment where an agent starts from the center of the 2-d map and tries to reach one of the three goals (G1, G2, and G3). The maximum expected future reward (level curves) is the same for all the goals but the expected future entropy is different (higher on the path to G2/G3): the action distribution \u03c0(a|s) is bi-modal on the path to the left (G2 and G3) and unimodal to the right (G1). Hence, we expect the optimal policy for the MaxEnt RL objective to assign more weights to G2 and G3. We visualize trajectories (in blue) sampled from the policies learned using SAC, SQL, and S2AC. SAC quickly commits to a single mode due to its actor being tied to a Gaussian policy. Though SQL also recovers the three modes, the trajectories are evenly distributed. S2AC recovers all the modes and approaches the left two goals more frequently. This indicates that it successfully maximizes not only the expected future reward but also the expected future entropy. et al., 2018a), on the other hand, follows a variational inference procedure by fitting a Gaussian distribution to the EBM policy. This enables a closed-form evaluation of the entropy but results in a suboptimal solution. For instance, SAC fails in environments characterized by multimodal action distributions. Similar to SAC, IAPO (Marino et al., 2021) models the policy as a uni-modal Gaussian. Instead of optimizing a MaxEnt objective, it achieves multimodal policies by learning a collection of parameter estimates (mean, variance) through different initializations for different policies. To improve the expressiveness of SAC, SSPG (Cetin & Celiktutan, 2022) and SAC-NF (Mazoure et al., 2020) model the policy as a Markov chain with Gaussian transition probabilities and as a normalizing flow (Rezende & Mohamed, 2015), respectively. However, due to training stability issues, the reported results in Cetin & Celiktutan (2022) show that though both models learn multi-modal policies, they fail to maximize the expected future entropy in positive rewards setups. We propose a new algorithm, S2AC, that yields a more optimal solution to the MaxEnt RL objective. To achieve expressivity, S2AC models the policy as a Stein Variational Gradient Descent (SVGD) (Liu, 2017) sampler from an EBM over Q-values (target distribution). SVGD proceeds by first sampling a set of particles from an initial distribution, and then iteratively transforming these particles via a sequence of updates to fit the target distribution. To compute a closed-form estimate of the entropy of such policies, we use the change-of-variable formula for pdfs (Devore et al., 2012). 
We prove that this is only possible due to the invertibility of the SVGD update rule, which does not necessarily hold for other popular samplers (e.g., Langevin Dynamics (Welling & Teh, 2011)). While normalizing flow models (Rezende & Mohamed, 2015) are also invertible, SVGD-based policy is more expressive as it encodes the inductive bias about the unnormalized density and incorporates a dispersion term to encourage multi-modality, whereas normalizing flows encode a restrictive class of invertible transformations (with easy-to-estimate Jacobian determinants). Moreover, our formula is computationally efficient and only requires evaluating first-order derivatives and vector products. To improve scalability, we model the initial distribution of the SVGD sampler as an isotropic Gaussian and learn its parameters, i.e., mean and standard deviation, end-to-end. We show that this results in faster convergence to the target distribution, i.e., fewer SVGD steps. Intuitively, the initial distribution learns to contour the high-density region of the target distribution while the SVGD updates result in better and faster convergence to the modes within that region. Hence, our approach is as parameter efficient as SAC, since the SVGD updates do not introduce additional trainable parameters. Note that S2AC can be reduced to SAC when the number of SVGD steps is zero. Also, SQL becomes equivalent to S2AC if the entropy is computed explicitly using our formula (the policy in SQL is an amortized SVGD sampler). Beyond RL, the backbone of S2AC is a new variational inference algorithm with a more expressive and scalable distribution characterized by a closed-form entropy estimate. We believe that this variational distribution can have a wider range of exciting applications. We conduct extensive empirical evaluations of S2AC from three aspects. We start with a sanity check on the merit of our derived SVGD-based entropy estimate on target distributions with known entropy values (e.g., Gaussian) or log-likelihoods (e.g., Gaussian Mixture Models) and assess its 2 \fPublished as a conference paper at ICLR 2024 sensitivity to different SVGD parameters (kernel, initial distribution, number of steps and number of particles). We observe that its performance depends on the choice of the kernel and is robust to variations of the remaining parameters. In particular, we find out that the kernel should be chosen to guarantee inter-dependencies between the particles, which turns out to be essential for invertibility. Next, we assess the performance of S2AC on a multi-goal environment (Haarnoja et al., 2017) where different goals are associated with the same positive (maximum) expected future reward but different (maximum) expected future entropy. We show that S2AC learns multimodal policies and effectively maximizes the entropy, leading to better robustness to obstacles placed at test time. Finally, we test S2AC on the MuJoCo benchmark (Duan et al., 2016). S2AC yields better performances than the baselines on four out of the five environments. Moreover, S2AC shows higher sample efficiency as it tends to converge with fewer training steps. These results were obtained from running SVGD for only three steps, which results in a small overhead compared to SAC during training. Furthermore, to maximize the run-time efficiency during testing, we train an amortized SVGD version of the policy to mimic the SVGD-based policy. Hence, this reduces inference to a forward pass through the policy network without compromising the performance. 
2 PRELIMINARIES 2.1 SAMPLERS FOR ENERGY-BASED MODELS In this work, we study three representative methods for sampling from EBMs: (1) Stochastic Gradient Langevin Dynamics (SGLD) & Deterministic Langevin Dynamics (DLD) (Welling & Teh, 2011), (2) Hamiltonian Monte Carlo (HMC) (Neal et al., 2011), and (3) Stein Variational Gradient Descent (SVGD) (Liu & Wang, 2016). We review SVGD here since it is the sampler we eventually use in S2AC, and leave the rest to Appendix C.1. SVGD is a particle-based Bayesian inference algorithm. Compared to SGLD and HMC which have a single particle in their dynamics, SVGD operates on a set of particles. Specifically, SVGD samples a set of m particles {aj}m j=1 from an initial distribution q0 which it then transforms through a sequence of updates to fit the target distribution. Formally, at every iteration l, SVGD applies a form of functional gradient descent \u2206f that minimizes the KL-divergence between the target distribution p and the proposal distribution ql induced by the particles, i.e., the update rule for the ith particles is: al+1 i = al i + \u03f5\u2206f(al i) with \u2206f(al i) = Eal j\u223cql \u0002 k(al i, al j)\u2207al j log p(al j) + \u2207al jk(al i, al j) \u0003 . (1) Here, \u03f5 is the step size and k(\u00b7, \u00b7) is the kernel function, e.g., the RBF kernel: k(ai, aj) = exp(||ai \u2212 aj||2/2\u03c32). The first term within the gradient drives the particles toward the high probability regions of p, while the second term serves as a repulsive force to encourage dispersion. 2.2 MAXIMUM-ENTROPY RL We consider an infinite horizon Markov Decision Process (MDP) defined by a tuple (S, A, p, r), where S is the state space, A is the action space and p : S \u00d7 A \u00d7 S \u2192[0, \u221e] is the state transition probability modeling the density of the next state st+1 \u2208S given the current state st \u2208S and action at \u2208A. Additionally, we assume that the environment emits a bounded reward function r \u2208[rmin, rmax] at every iteration. We use \u03c1\u03c0(st) and \u03c1\u03c0(st, at) to denote the state and state-action marginals of the trajectory distribution induced by a policy \u03c0(at|st). We consider the setup of continuous action spaces Lazaric et al. (2007); Lee et al. (2018); Zhou & Lu (2023). MaxEnt RL (Todorov, 2006; Ziebart, 2010; Rawlik et al., 2012) learns a policy \u03c0\u2217(at|st), that instead of maximizing the expected future reward, maximizes the sum of the expected future reward and entropy: \u03c0\u2217= arg max\u03c0 X t \u03b3tE(st,at)\u223c\u03c1\u03c0 \u0002 r(st, at) + \u03b1H(\u03c0(\u00b7|st)) \u0003 , (2) where \u03b1 is a temperature parameter controlling the stochasticity of the policy and H(\u03c0(\u00b7|st)) is the entropy of the policy at state st. The conventional RL objective can be recovered for \u03b1 = 0. Note that the MaxEnt RL objective above is equivalent to approximating the policy, modeled as an EBM over Q-values, by a variational distribution \u03c0(at|st) (see proof of equivalence in Appendix D), i.e., \u03c0\u2217= arg min\u03c0 X t Est\u223c\u03c1\u03c0 \u0002 DKL \u0000\u03c0(\u00b7|st)\u2225exp(Q(st, \u00b7)/\u03b1)/Z \u0001\u0003 , (3) where DKL is the KL-divergence and Z is the normalizing constant. We now review two landmark MaxEnt RL algorithms: SAC (Haarnoja et al., 2018a) and SQL (Haarnoja et al., 2017). 
SAC is an actor-critic algorithm that alternates between policy evaluation, i.e., evaluating the Q-values for a policy \u03c0\u03b8(at|st): Q\u03d5(st, at) \u2190r(st, at) + \u03b3 Est+1,at+1\u223c\u03c1\u03c0\u03b8 \u0002 Q\u03d5(st+1, at+1) + \u03b1H(\u03c0\u03b8(\u00b7|st+1)) \u0003 (4) 3 \fPublished as a conference paper at ICLR 2024 and policy improvement, i.e., using the updated Q-values to compute a better policy: \u03c0\u03b8 = arg max\u03b8 X t Est,at\u223c\u03c1\u03c0\u03b8 \u0002 Q\u03d5(at, st) + \u03b1H(\u03c0\u03b8(\u00b7|st)) \u0003 . (5) SAC models \u03c0\u03b8 as an isotropic Gaussian, i.e., \u03c0\u03b8(\u00b7|s) = N(\u00b5\u03b8, \u03c3\u03b8I). While this enables computing a closed-form expression of the entropy, it incurs an over-simplification of the true action distribution, and thus cannot represent complex distributions, e.g., multimodal distributions. SQL goes around the entropy computation, by defining a soft version of the value function V\u03d5 = \u03b1 log \u0000 R A exp \u0000 1 \u03b1Q\u03d5(st, a\u2032) \u0001 da\u2032\u0001 . This enables expressing the Q-value (Eq (4)) independently from the entropy, i.e., Q\u03d5(st, at) = r(st, at) + \u03b3Est+1\u223cp[V\u03d5(st+1)]. Hence, SQL follows a soft value iteration which alternates between the updates of the \u201csoft\u201d versions of Q and value functions: Q\u03d5(st, at) \u2190r(st, at) + \u03b3Est+1\u223cp[V\u03d5(st+1)], \u2200(st, at) (6) V\u03d5(st) \u2190\u03b1 log \u0000 R A exp \u0000 1 \u03b1Q\u03d5(st, a\u2032) \u0001 da\u2032\u0001 , \u2200st. (7) Once the Q\u03d5 and V\u03d5 functions converge, SQL uses amortized SVGD Wang & Liu (2016) to learn a stochastic sampling network f\u03b8(\u03be, st) that maps noise samples \u03be into the action samples from the EBM policy distribution \u03c0\u2217(at|st) = exp \u0000 1 \u03b1(Q\u2217(st, at) \u2212V \u2217(st)) \u0001 . The parameters \u03b8 are obtained by minimizing the loss J\u03b8(st) = DKL \u0000\u03c0\u03b8(\u00b7|st)|| exp \u0000 1 \u03b1(Q\u2217 \u03d5(st, \u00b7) \u2212V \u2217 \u03d5 (st)) \u0001 with respect to \u03b8. Here, \u03c0\u03b8 denotes the policy induced by f\u03b8. SVGD is designed to minimize such KL-divergence without explicitly computing \u03c0\u03b8. In particular, SVGD provides the most greedy direction as a functional \u2206f\u03b8(\u00b7, st) (Eq (1)) which can be used to approximate the gradient \u2202J\u03b8/\u2202at. Hence, the gradient of the loss J\u03b8 with respect to \u03b8 is: \u2202J\u03b8(st)/\u2202\u03b8 \u221dE\u03be \u0002 \u2206f\u03b8(\u03be, st)\u2202f\u03b8(\u03be, st)/\u2202\u03b8 \u0003 . Note that the integral in Eq (7) is approximated via importance sampling, which is known to result in high variance estimates and hence poor scalability to high dimensional action spaces. Moreover, amortized generation is usually unstable and prone to mode collapse, an issue similar to GANs. Therefore, SQL is outperformed by SAC Haarnoja et al. (2018a) on benchmark tasks like MuJoCo. 3 APPROACH We introduce S2AC, a new actor-critic MaxEnt RL algorithm that uses SVGD as the underlying actor to generate action samples from policies represented using EBMs. This choice is motivated by the expressivity of distributions that can be fitted via SVGD. Additionally, we show that we can derive a closed-form entropy estimate of the SVGD-induced distribution, thanks to the invertibility of the update rule, which does not necessarily hold for other EBM samplers. 
Besides, we propose a parameterized version of SVGD to enable scalability to high-dimensional action spaces and nonsmooth Q-function landscapes. S2AC is hence capable of learning a more optimal solution to the MaxEnt RL objective (Eq (2)) as illustrated in Figure 2. 3.1 STEIN SOFT ACTOR CRITIC Like SAC, S2AC performs soft policy iteration which alternates between policy evaluation and policy improvement. The difference is that we model the actor as a parameterized sampler from an EBM. Hence, the policy distribution corresponds to an expressive EBM as opposed to a Gaussian. Critic. The critic\u2019s parameters \u03d5 are obtained by minimizing the Bellman loss as traditionally: \u03d5\u2217= arg min\u03d5 E(st,at)\u223c\u03c1\u03c0\u03b8 \u0002 (Q\u03d5(st, at) \u2212\u02c6 y)2\u0003 , (8) with the target \u02c6 y = rt(st, at) + \u03b3E(st+1,at+1)\u223c\u03c1\u03c0 \u0002 Q \u00af \u03d5(st+1, at+1) + \u03b1H(\u03c0(\u00b7|st+1)) \u0003 . Here \u00af \u03d5 is an exponentially moving average of the value network weights (Mnih et al., 2015). Actor as an EBM sampler. The actor is modeled as a sampler from an EBM over the Q-values. To generate a set of valid actions, the actor first samples a set of particles {a0} from an initial distribution q0 (e.g., Gaussian). These particles are then updated over several iterations l \u2208[1, L], i.e., {al+1} \u2190{al} + \u03f5h({al}, s) following the sampler dynamics characterized by a transformation h (e.g., for SVGD, h = \u2206f in Eq (1)). If q0 is tractable and h is invertible, it\u2019s possible to compute a closed-form expression of the distribution of the particles at the lth iteration via the change of variable formula Devore et al. (2012): ql(al|s) = ql\u22121(al\u22121|s) \f \fdet(I + \u03f5\u2207alh(al, s)) \f \f\u22121 , \u2200l \u2208[1, L]. In this case, the policy is represented using the particle distribution at the final step L of the sampler dynamics, i.e., \u03c0(a|s) = qL(aL|s) and the entropy can be estimated by averaging log qL(aL|s) over a set of particles (Section 3.2). We study the invertibility of popular EBM samplers in Section 3.3. 4 \fPublished as a conference paper at ICLR 2024 \ud835\udc4e! ! \ud835\udc4e! \"! \ud835\udc5e!(\ud835\udc4e|\ud835\udc60) \ud835\udc4e! ! \ud835\udc4e! #! \ud835\udc5e!(\ud835\udc4e|\ud835\udc60) S!AC(\ud835\udf19, \ud835\udf03) S!AC(\ud835\udf19) Figure 3: S2AC(\u03d5, \u03b8) achieves faster convergence to the target distribution (in orange) than S2AC(\u03d5) by parameterizing the initial distribution N(\u00b5\u03b8, \u03c3\u03b8) of the SVGD sampler. Parameterized initialization. To reduce the number of steps required to converge to the target distribution (hence reducing computation cost), we further propose modeling the initial distribution as a parameterized isotropic Gaussian, i.e., a0 \u223cN(\u00b5\u03b8(s), \u03c3\u03b8(s)). The parameterization trick is then used to express a0 as a function of \u03b8. Intuitively, the actor would learn \u03b8 such that the initial distribution is close to the target distribution. Hence, fewer steps are required to converge, as illustrated in Figure 3. Note that if the number of steps L = 0, S2AC is reduced to SAC. 
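As a rough illustration of how such an actor can be realized, the sketch below samples particles from the learned Gaussian initialization and runs a few SVGD steps on the energy $Q_\phi(s,\cdot)$. It is a simplified PyTorch-style sketch, not the released implementation: the names `q_net`, `mu`, and `log_std` are placeholders, and out-of-range particles are clamped here only for brevity, whereas, as discussed next, S2AC instead over-samples and discards particles that leave the allowed range.

```python
import torch

def svgd_actor_sample(state, q_net, mu, log_std, n_particles=32, n_steps=3,
                      eps=0.1, sigma=1.0, t_clip=3.0, alpha=1.0):
    """Sketch of the S2AC actor: draw a^0 ~ N(mu, sigma_theta) with the
    reparameterization trick, then apply a few SVGD steps on exp(Q/alpha)."""
    std = log_std.exp()
    a = mu + std * torch.randn(n_particles, mu.shape[-1])            # reparameterized a^0
    s = state.expand(n_particles, -1)
    for _ in range(n_steps):
        a = a.detach().requires_grad_(True)
        score = torch.autograd.grad(q_net(s, a).sum(), a)[0] / alpha  # grad_a log p(a|s)
        diffs = a.unsqueeze(1) - a.unsqueeze(0)                       # (m, m, d)
        k = torch.exp(-(diffs ** 2).sum(-1) / (2 * sigma ** 2))       # RBF kernel
        grad_k = diffs / sigma ** 2 * k.unsqueeze(-1)
        phi = (k @ score + grad_k.sum(1)) / n_particles               # Eq. (1)
        a = a + eps * phi
        a = torch.clamp(a, mu - t_clip * std, mu + t_clip * std)      # simplification; see text
    return a
```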
Besides, to deal with the non-smooth nature of deep Q-function landscapes which might lead to particle divergence in the sampling process, we bound the particle updates to be within a few standard deviations (t) from the mean of the learned initial distribution, i.e., \u2212t\u03c3\u03b8 \u2264al \u03b8 \u2264t\u03c3\u03b8, \u2200l \u2208[1, L]. Eventually, the initial distribution q0 \u03b8 learns to contour the high-density region of the target distribution and the following updates refine it by converging to the spanned modes. Formally, the parameters \u03b8 are computed by minimizing the expected KL-divergence between the policy qL \u03b8 induced by the particles from the sampler and the EBM of the Q-values: \u03b8\u2217=arg max\u03b8Est\u223cD,aL \u03b8 \u223c\u03c0\u03b8 \u0002 Q\u03d5(st, aL \u03b8 ) \u0003 + \u03b1Est\u223cD [H(\u03c0\u03b8(\u00b7|st))] s.t. \u2212t\u03c3\u03b8 \u2264al \u03b8 \u2264t\u03c3\u03b8, \u2200l \u2208[1, L]. (9) Here, D is the replay buffer. The derivation is in Appendix E. Note that the constraint does not truncate the particles as it is not an invertible transformation which then violates the assumptions of the change of variable formula. Instead, we sample more particles than we need and select the ones that stay within the range. We call S2AC(\u03d5, \u03b8) and S2AC(\u03d5) as two versions of S2AC with/without the parameterized initial distribution. The complete S2AC algorithm is in Algorithm 1 of Appendix A. 3.2 A CLOSED-FORM EXPRESSION OF THE POLICY\u2019S ENTROPY A critical challenge in MaxEnt RL is how to efficiently compute the entropy term H(\u03c0(\u00b7|st+1)) in Eq (2). We show that, if we model the policy as an iterative sampler from the EBM, under certain conditions, we can derive a closed-form estimate of the entropy at convergence. Theorem 3.1. Let F : Rn \u2192Rn be an invertible transformation of the form F(a) = a + \u03f5h(a). We denote by qL(aL) the distribution obtained from repeatedly applying F to a set of samples {a0} from an initial distribution q0(a0) over L steps, i.e., aL = F \u25e6F \u25e6\u00b7 \u00b7 \u00b7 \u25e6F(a0). Under the condition \u03f5||\u2207al ih(ai)||\u221e\u226a1, \u2200l \u2208[1, L], the distribution of the particles at the Lth step is: log qL(aL) \u2248log q0(a0) \u2212\u03f5 XL\u22121 l=0 Tr(\u2207alh(al)) + O(\u03f52dL). (10) Here, d is the dimensionality of a, i.e., a \u2208Rd and O(\u03f52dL) is the order of approximation error. Proof Sketch: As F is invertible, we apply the change of variable formula (Appendix C.2) on the transformation F \u25e6F \u25e6\u00b7 \u00b7 \u00b7 F and obtain: log qL(aL) = log q0(a0)\u2212PL\u22121 l=0 log \f \fdet(I + \u03f5\u2207alh(al)) \f \f. Under the assumption \u03f5||\u2207aih(ai)||\u221e\u226a1, we apply the corollary of Jacobi\u2019s formula (Appendix C.3) and get Eq. (10). The detailed proof is in Appendix F. Note that the condition \u03f5||\u2207aih(ai)||\u221e\u226a1 can always be satisfied when we choose a sufficiently small step size \u03f5, or the gradient of h(a) is small, i.e., h(a) is Lipschitz continuous with a sufficiently small constant. It follows from the theorem above, that the entropy of a policy modeled as an EBM sampler (Eq (9)) can be expressed analytically as: H(\u03c0\u03b8(\u00b7|s))=\u2212Ea0 \u03b8\u223cq0 \u03b8 h log qL \u03b8 (aL \u03b8 |s) i \u2248\u2212Ea0 \u03b8\u223cq0 \u03b8 h log q0 \u03b8(a0|s)\u2212\u03f5 XL\u22121 l=0 Tr \u0010 \u2207al \u03b8h(al \u03b8, s) \u0011 i . 
(11) In the following, we drop the dependency of the action on $\theta$ to simplify the notation. 3.3 INVERTIBLE POLICIES Next, we study the invertibility of three popular EBM samplers, SVGD, SGLD, and HMC, as well as the efficiency of computing the trace $\mathrm{Tr}(\nabla_{a^l} h(a^l, s))$ in Eq (10) for the ones that are invertible. Proposition 3.2 (SVGD invertibility). Given the SVGD learning rate $\epsilon$ and an RBF kernel $k(\cdot, \cdot)$ with variance $\sigma$, if $\epsilon \ll \sigma$, the update rule of the SVGD dynamics defined in Eq (1) is invertible. Proof Sketch: We use the explicit function theorem to show that the Jacobian $\nabla_a F(a, s)$ of the update rule $F(a, s)$ is diagonally dominant and hence invertible. This yields the invertibility of $F(a, s)$. See the detailed proof in Appendix G.3. [Figure 4: Entropy evaluation results. (a) Recovering the ground-truth entropy from the initial distribution $q^0 = \mathcal{N}(0, 6I)$: SVGD $\mathcal{H}(q^L) = 3.5$, DLD $\mathcal{H}(q^L) = -25.93$, SGLD $\mathcal{H}(q^L) = -11.57$, HMC $\mathcal{H}(q^L) = -54.5$; (b) effect of the kernel variance $\sigma$ on $\mathcal{H}(q^L)$; (c) effect of the number of particles $m$ and steps $L$ on $\mathcal{H}(q^L)$.] Theorem 3.3. The closed-form estimate of $\log q^L(a^L|s)$ for the SVGD-based sampler with an RBF kernel $k(\cdot, \cdot)$ is $\log q^L(a^L|s) \approx \log q^0(a^0|s) + \frac{\epsilon}{m\sigma^2} \sum_{l=0}^{L-1} \sum_{j=1, a^l_j \neq a^l}^{m} k(a^l_j, a^l) \Big( (a^l - a^l_j)^\top \nabla_{a^l_j} Q(s, a^l_j) + \frac{\alpha}{\sigma^2} \|a^l - a^l_j\|^2 - d\alpha \Big)$. Here, $(\cdot)^\top$ denotes the transpose of a matrix/vector. Note that the entropy does not depend on any matrix computation, but only on vector dot products and first-order vector derivatives. The proof is in Appendix H.1. Intuitively, the derived likelihood is proportional to (1) the concavity of the curvature of the Q-landscape, captured by a weighted average of the neighboring particles' Q-value gradients, and (2) the pairwise distances between neighboring particles ($\sim \|a^l_i - a^l_j\|^2 \cdot \exp(\|a^l_i - a^l_j\|^2)$), i.e., the larger the distances, the higher the entropy. We elaborate on the connection between this formula and non-parametric entropy estimators in Appendix B. Proposition 3.4 (SGLD, HMC). The SGLD and HMC updates are not invertible w.r.t. $a$. Proof Sketch: SGLD is stochastic (noise term) and thus not injective. HMC is only invertible if conditioned on the velocity $v$. Detailed proofs are in Appendices G.1-G.2. From the above theoretical analysis, we can see that the SGLD update is not invertible and hence not suitable as a sampler for S2AC. While the HMC update is invertible, its derived closed-form entropy involves computing a Hessian and is hence computationally more expensive. Due to these considerations, we choose SVGD with an RBF kernel as the underlying sampler of S2AC. 4 RESULTS We first evaluate the correctness of our proposed closed-form entropy formula. Then we present the results of different RL algorithms on the multi-goal and MuJoCo environments. 4.1 ENTROPY EVALUATION This experiment tests the correctness of our entropy formula.
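Concretely, the quantity under test is the Theorem 3.3 estimate. The sketch below shows one way it could be computed (a NumPy illustration under our reading of the formula, not the authors' code); it assumes access to the particle trajectory $\{a^l\}$ and the gradients $\nabla_a Q(s, a)$ at each step, and returns the per-particle $\log q^L$, whose negative average gives the entropy estimate.

```python
import numpy as np

def logq_closed_form(a_traj, q_grads, log_q0, eps, sigma, alpha):
    """Theorem 3.3 estimate of log q^L(a^L | s) for an RBF kernel.
    a_traj:  list of L arrays, each (m, d), the particles a^0 ... a^{L-1}
    q_grads: list of L arrays, each (m, d), grad_a Q(s, a) evaluated at a^l
    log_q0:  (m,) log-density of the initial Gaussian at a^0
    """
    m, d = a_traj[0].shape
    log_q = log_q0.copy()
    for a, g in zip(a_traj, q_grads):
        diffs = a[:, None, :] - a[None, :, :]          # a^l_i - a^l_j
        sq = (diffs ** 2).sum(-1)
        k = np.exp(-sq / (2 * sigma ** 2))
        np.fill_diagonal(k, 0.0)                       # exclude j with a^l_j = a^l_i
        inner = (diffs * g[None, :, :]).sum(-1)        # (a^l_i - a^l_j)^T grad Q(s, a^l_j)
        log_q += eps / (m * sigma ** 2) * (k * (inner + alpha / sigma ** 2 * sq - d * alpha)).sum(1)
    return log_q                                       # entropy estimate: -log_q.mean()
```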
We compare the estimated entropy for distributions with known ground-truth entropy or log-likelihoods using different samplers, and study the sensitivity of the formula to the samplers' parameters. (1) Recovering the ground truth entropy. In Figure 4a, we plot samples (black dots) obtained by SVGD, SGLD, DLD and HMC at convergence to a Gaussian with ground-truth entropy $\mathcal{H}(p) = 3.41$, starting from the same initial distribution (leftmost sub-figure). We also report the entropy values computed via Eq. (11). Unlike SGLD, DLD, and HMC, SVGD recovers the ground-truth entropy. This empirically supports Proposition 3.4, i.e., that SGLD, DLD, and HMC are not invertible. (2) Effect of the kernel variance. Figure 4b shows the effect of different SVGD kernel variances $\sigma$, where we use the same initial Gaussian as in Figure 4a. We also visualize the particle distributions after $L$ SVGD steps for the different configurations in Figure 9 of Appendix I. When the kernel variance is too small (e.g., $\sigma = 0.1$), the invertibility condition is violated, and thus the estimated entropy is wrong even at convergence. At the other extreme, when the kernel variance is too large (e.g., $\sigma = 100$), i.e., when the particles are too scattered initially, the particles do not converge to the target Gaussian due to noisy gradients in the first term of Eq. (1). The best configurations hence lie in between (e.g., $\sigma \in \{3, 5, 7\}$). (3) Effect of SVGD steps and particles. Figure 4c and Figure 10b (Appendix I) show the behavior of our entropy formula under different numbers of SVGD steps and particles, in two settings: (i) GMMs with an increasing number of components $M$, and (ii) distributions with increasing ground-truth entropy, i.e., Gaussians with increasing variances $\sigma$. Results show that our entropy estimate consistently grows with increasing $M$ (Figure 4c) and increasing $\sigma$ (Figure 10b), even when a small number of SVGD steps and particles is used (e.g., $L = 10$, $m = 10$). 4.2 MULTI-GOAL EXPERIMENTS [Figure 5: The multi-goal environment.] To check whether S2AC learns a better solution to the MaxEnt objective (Eq (2)), we design a new multi-goal environment as shown in Figure 5. The agent is a 2D point mass at the origin trying to reach one of the goals (in red). Q-landscapes are depicted by level curves. Actions are bounded in $[-1, 1]$ along both axes. Critical states for the analysis are marked with blue crosses. It is built on the multi-goal environment of Haarnoja et al. (2017), with modifications such that all the goals have (i) the same maximum expected future reward (positive) but (ii) different maximum expected future entropy. This is achieved by asymmetrically placing the goals (two goals on the left side and one on the right, leading to a higher expected future entropy on the left side) while assigning the same final reward to all the goals. The problem setup and hyperparameters are detailed in Appendix J. (1) Multi-modality. Figure 6 visualizes trajectories (blue lines) collected from 20 test-time episodes of S2AC($\phi$, $\theta$), S2AC($\phi$), SAC, SQL and SAC-NF (SAC with a normalizing flow policy, Mazoure et al. (2020)) agents (rows) for increasing entropy weights $\alpha$ (columns). S2AC and SQL consistently cover all the modes for all $\alpha$ values, while this is only achieved by SAC and SAC-NF for large $\alpha$ values.
Note that, in the case of SAC, this comes at the expense of accuracy. Although normalizing flows are expressive enough in theory, they are known to quickly collapse to local optima in practice (Kobyzev et al., 2020). The dispersion term in S2AC encodes an inductive bias that mitigates this issue. (2) Maximizing the expected future entropy. We also see that with increasing $\alpha$, more S2AC and SAC-NF trajectories converge to the left goals (G2/G3). This shows that both models learn to maximize the expected future entropy. This is not the case for SQL, whose trajectory distribution remains uniform across the goals. SAC results do not show a consistent trend. This validates the hypothesis that the entropy term in SAC only helps exploration but does not lead to maximizing future entropy. The quantified distribution over reached goals is in Figure 12 of Appendix J. (3) Robustness/adaptability. To assess the robustness of the learned policies, we place an obstacle (red bar in Figure 7) on the path to G2. We show the test-time trajectories of 20 episodes using S2AC, SAC, SQL and SAC-NF agents trained with different $\alpha$'s. We observe that, for S2AC and SAC-NF, with increasing $\alpha$, more trajectories reach the goal after hitting the obstacle. This is not the case for SAC, where many trajectories hit the obstacle without reaching the goal. SQL does not manage to escape the barrier even with higher $\alpha$. Additional results on (4) the effect of the parameterization of $q^0$ and (5) the entropy's effect on the learned Q-landscapes are reported in Figure 11 and Figure 14 of Appendix J, respectively. [Figure 6: Trajectories of S2AC($\phi$), S2AC($\phi$, $\theta$), SAC, SQL and SAC-NF for $\alpha \in \{0.2, 1, 10, 20\}$. S2AC and SAC-NF learn to maximize the expected future entropy (biased towards G2/G3) while SAC and SQL do not. S2AC consistently recovers all modes, while SAC-NF with smaller $\alpha$'s does not, indicating its instability.] [Figure 7: S2AC and SAC-NF are more robust to perturbations. Obstacle O is placed diagonally at [-1, 1]. Trajectories that did and did not reach the goal after hitting O are shown in green and red, respectively.] [Figure 8: (a)-(e) Performance curves (average return vs. training steps) on the MuJoCo benchmark for Hopper-v2, Walker2d-v2, HalfCheetah-v2, Ant-v2 and Humanoid-v2, comparing PPO, DDPG, SQL, SAC, SAC-NF, S2AC($\phi$), S2AC($\phi$, $\theta$) and S2AC($\phi$, $\theta$, $\psi$). S2AC outperforms SQL and SAC-NF on all environments and SAC on 4 out of 5 environments. (f)-(i) Comparison of Median, IQM, Mean, and Optimality Gap between S2AC and baseline algorithms. (j) The probabilities of S2AC outperforming baseline algorithms.]
4.3 MUJOCO EXPERIMENTS We evaluate S2AC on five environments from MuJoCo (Brockman et al., 2016): Hopper-v2, Walker2d-v2, HalfCheetah-v2, Ant-v2, and Humanoid-v2. As baselines, we use (1) DDPG (Gu et al., 2017), (2) PPO (Schulman et al., 2015), (3) SQL (Haarnoja et al., 2017), (4) SAC-NF (Mazoure et al., 2020), and (5) SAC (Haarnoja et al., 2018a). Hyperparameters are in Appendix K. (1) Performance and sample efficiency. We train five instances of each algorithm with different random seeds, each performing 100 evaluation rollouts every 1000 environment steps. Performance results are in Figure 8(a)-(e). The solid curves correspond to the mean returns over the five trials and the shaded regions represent the minimum and maximum. S2AC($\phi$, $\theta$) is consistently better than SQL and SAC-NF across all environments and outperforms SAC in four out of five environments. The results also show that the parameterized initialization is key to scalability (S2AC($\phi$) performs poorly compared to S2AC($\phi$, $\theta$)). Figure 8(f)-(j) demonstrates the statistical significance of these gains using statistics from the rliable library (Agarwal et al., 2021), which we detail in Appendix K. (2) Run-time. We report the action-selection run-time of SAC, SQL, and the S2AC variants in Table 1.
Table 1: Action selection run-time on MuJoCo.
                      Hopper   Walker2d   HalfCheetah   Ant
  Action dim          3        6          6             8
  State dim           11       17         17            111
  SAC                 0.723    0.714      0.731         0.708
  SQL                 0.839    0.828      0.815         0.836
  S2AC(ϕ, θ)          3.267    4.622      4.583         5.917
  S2AC(ϕ, θ, ψ)       0.850    0.817      0.830         0.837
The run-time of S2AC($\phi$, $\theta$) increases linearly with the action-space dimension. To improve scalability, we train an amortized version that we deploy at test time, following Haarnoja et al. (2017). Specifically, we train a feed-forward deepnet $f_\psi(s, z)$ to mimic the SVGD dynamics during testing, where $z$ is a random vector that allows mapping the same state to different particles. Note that we cannot use $f_\psi(s, z)$ during training, as we need to estimate the entropy in Eq (11), which depends on the unrolled SVGD dynamics (details in Appendix K). The amortized version S2AC($\phi$, $\theta$, $\psi$) has a run-time similar to SAC and SQL, with a slight tradeoff in performance (Figure 8). 5 RELATED WORK MaxEnt RL (Todorov, 2006; Ziebart, 2010; Rawlik et al., 2012) aims to learn a policy that obtains high rewards while acting as randomly as possible. To achieve this, it maximizes the sum of the expected future reward and the expected future entropy. It is different from entropy regularization (Schulman et al., 2015; O'Donoghue et al., 2016; Schulman et al., 2017), which maximizes the entropy at the current time step. It is also different from multi-modal RL approaches (Tang & Agrawal, 2018), which recover different modes with equal frequencies without considering their future entropy. MaxEnt RL has been broadly incorporated in various RL domains, including inverse RL (Ziebart et al., 2008; Finn et al., 2016), stochastic control (Rawlik et al., 2012; Toussaint, 2009), guided policy search (Levine & Koltun, 2013), and off-policy learning (Haarnoja et al., 2018a;b). MaxEnt RL is shown to maximize a lower bound of the robust RL objective (Eysenbach & Levine, 2022) and is hence less sensitive to perturbations in state and reward functions.
From the variational inference lens, MaxEnt RL aims to find the policy distribution that minimizes the KL-divergence to an EBM over Q-function. The desired family of variational distributions is (1) expressive enough to capture the intricacies of the Q-value landscape (e.g., multimodality) and (2) has a tractable entropy estimate. These two requirements are hard to satisfy. SAC (Haarnoja et al., 2018a) uses a Gaussian policy. Despite having a tractable entropy, it fails to capture arbitrary Q-value landscapes. SAC-GMM (Haarnoja, 2018) extends SAC by modeling the policy as a Gaussian Mixture Model, but it requires an impractical grid search over the number of components. Other extensions include IAPO (Marino et al., 2021) which also models the policy as a uni-modal Gaussian but learns a collection of parameter estimates (mean, variance) through different initializations. While this yields multi-modality, it does not optimize a MaxEnt objective. SSPG (Cetin & Celiktutan, 2022) and SAC-NF (Mazoure et al., 2020) respectively improve the policy expressivity by modeling the policy as a Markov chain with Gaussian transition probabilities and as a normalizing flow. Due to training instability, the reported multi-goal experiments in (Cetin & Celiktutan, 2022) show that, though both models capture multimodality, they fail to maximize the expected future entropy in positive reward setups. SQL (Haarnoja et al., 2017), on the other hand, bypasses the explicit entropy computation altogether via a soft version of value iteration. It then trains an amortized SVGD (Wang & Liu, 2016) sampler from the EBM over the learned Q-values. However, estimating soft value functions requires approximating integrals via importance sampling which is known to have high variance and poor scalability. We propose a new family of variational distributions induced by a parameterized SVGD sampler from the EBM over Q-values. Our policy is expressive and captures multi-modal distributions while being characterized by a tractable entropy estimate. EBMs (LeCun et al., 2006; Wu et al., 2018) are represented as Gibbs densities p(x) = exp E(x)/Z, where E(x) \u2208R is an energy function describing inter-variable dependencies and Z = R exp E(x) is the partition function. Despite their expressiveness, EBMs are not tractable as the partition function requires integrating over an exponential number of configurations. Markov Chain Monte Carlo (MCMC) methods (Van Ravenzwaaij et al., 2018) (e.g., HMC (Hoffman & Gelman, 2014), SGLD (Welling & Teh, 2011)) are frequently used to approximate the partition function via sampling. There have been recent efforts to parameterize these samplers via deepnets (Levy et al., 2017; Gong et al., 2018; Feng et al., 2017) to improve scalability. Similarly to these methods, we propose a parameterized variant of SVGD (Liu & Wang, 2016) as an EBM sampler to enable scalability to highdimensional action spaces. Beyond sampling, we derive a closed-form expression of the sampling distribution as an estimate of the EBM. This yields a tractable estimate of the entropy. This is opposed to previous methods for estimating EBM entropy which mostly rely on heuristic approximation, lower bounds Dai et al. (2017; 2019a), or neural estimators of mutual information (Kumar et al., 2019). The idea of approximating the entropy of EBMs via MCMC sampling by leveraging the change of variable formula was first proposed in Dai et al. (2019b). 
The authors apply the formula to HMC and LD, which, as we show previously, violate the invertibility assumption. To go around this, they augment the EBM family with the noise or velocity variable for LD and HMC respectively. But the derived log-likelihood of the sampling distribution turns out to be \u2013counter-intuitively\u2013 independent of the sampler\u2019s dynamics and equal to the initial distribution, which is then parameterized using a flow model (details in Appendix B.2). We show that SVGD is invertible, and hence we sample from the original EBM, so that our derived entropy is more intuitive as it depends on the SVGD dynamics. SVGD-augmented RL (Liu & Wang, 2016) has been explored under other RL contexts. Liu et al. (2017) use SVGD to learn a distribution over policy parameters. While this leads to learning diverse policies, it is fundamentally different from our approach as we are interested in learning a single multi-modal policy with a closed-form entropy formula. Castanet et al. (2023); Chen et al. (2021) use SVGD to sample from multimodal distributions over goals/tasks. We go beyond sampling and use SVGD to derive a closed-form entropy formula of an expressive variational distribution. 6" + }, + { + "url": "http://arxiv.org/abs/2105.06441v1", + "title": "DeepQAMVS: Query-Aware Hierarchical Pointer Networks for Multi-Video Summarization", + "abstract": "The recent growth of web video sharing platforms has increased the demand for\nsystems that can efficiently browse, retrieve and summarize video content.\nQuery-aware multi-video summarization is a promising technique that caters to\nthis demand. In this work, we introduce a novel Query-Aware Hierarchical\nPointer Network for Multi-Video Summarization, termed DeepQAMVS, that jointly\noptimizes multiple criteria: (1) conciseness, (2) representativeness of\nimportant query-relevant events and (3) chronological soundness. We design a\nhierarchical attention model that factorizes over three distributions, each\ncollecting evidence from a different modality, followed by a pointer network\nthat selects frames to include in the summary. DeepQAMVS is trained with\nreinforcement learning, incorporating rewards that capture representativeness,\ndiversity, query-adaptability and temporal coherence. We achieve\nstate-of-the-art results on the MVS1K dataset, with inference time scaling\nlinearly with the number of input video frames.", + "authors": "Safa Messaoud, Ismini Lourentzou, Assma Boughoula, Mona Zehni, Zhizhen Zhao, Chengxiang Zhai, Alexander G. Schwing", + "published": "2021-05-13", + "updated": "2021-05-13", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.IR" + ], + "main_content": "INTRODUCTION From Snapchat and Youtube to Twitter, Facebook and ByteDance, video sharing has influenced social media significantly over the past years. Video views increased over 99% on YouTube and 258% on Facebook, in just a single year1. To date, more than 5 billion videos have been shared on Youtube, where users daily spend 1 billion hours watching the uploaded content2. Facebook also reached 100 million hours of video watching every day3. Given a query, current video search engines return hundreds of videos, often redundant and difficult for the user to comprehend without spending a significant amount of time and effort to find the information of interest. 
To effectively tackle this issue, Query-Aware Multi-Video Summarization (QAMVS) methods select a subset of frames from the retrieved videos and form a concise topic-related summary conditioned on the user search intent [22, 25]. A compelling summary should be (1) concise, (2) representative of the query-relevant events, and (3) chronologically sound. Naively applying traditional single video summarization (SVS) techniques 1https://www.wyzowl.com/video-social-media-2020/ 2https://www.omnicoreagency.com/youtube-statistics/ 3https://99firms.com/blog/facebook-video-statistics/ arXiv:2105.06441v1 [cs.CV] 13 May 2021 \fresults in suboptimal summaries, as SVS methods fail to capture all aforementioned criteria. Overall, QAMVS is more challenging than SVS. First, QAMVS needs to ensure temporal coherence, a non-trivial task since the frames are selected from multiple different videos. In contrast, for SVS the chronological order is given by the video frame order. Secondly, QAMVS methods need to filter large noisy content as videos contain a lot of query-irrelevant information. Hence, QAMVS involves modeling the interactions between two or more modalities, i.e., the set of videos and the query contents. In contrast, a clustering formulation optimizing for the summary diversity yields good results for SVS. Prior work relies on multi-stage pipelines to sequentially optimize for the aforementioned criteria. First, a set of candidate frames is selected following graph-based [7, 24, 28], decomposition-based [22, 25, 46] or learning-based [41, 65] methods. Next, the list of frames is refined to be query-adaptive by ignoring frames that are dissimilar to a set of web-images retrieved with the same query [22, 24, 25, 28]. Finally, the selected frames are ordered to form a coherent summary, either based on importance scores assigned at the video level [46, 65] or by topic-closeness [22, 24, 25]. Due to the sequential nature of these methods we observe significant shortcomings: (1) multi-stage procedures result in error propagation; (2) existing methods have polynomial complexity with respect to the size of the video set and the video lengths, and (3) the use of multi-modal meta-data is often limited to candidate frame selection instead of guiding the summarization in every step. To address these shortcomings, in this work, we propose a unified end-to-end trainable model for the QAMVS task. Our architecture (summarized in Figure 1) is a hierarchical attention-based sequenceto-sequence model which significantly reduces the computational complexity from polynomial to linear compared to the current state-of-the-art methods and alleviates error propagation due to being a unified approach. We achieve this via a pointer network, which selects the frames to include in the summary, thus removing the burden of rearranging the frames in a separate subsequent step. The attention of the pointer network factorizes over three distributions, each collecting evidence from a different modality, guiding the summarization process in every step. To address the challenge of limited ground truth supervision, we train our model using reinforcement learning, incorporating representativeness, diversity, query-adaptability and temporal coherence rewards. 
The key contributions of this work are summarized as follows: (1) We design a novel end-to-end Query-Aware Multi-Video Summarization (DeepQAMVS) framework that jointly optimizes multiple crucial criteria of this challenging task: (i) conciseness, (ii) chronological soundness and (iii) representativeness of all query-related events. (2) We adopt pointer networks to remove the burden of rearranging the selected frames towards forming a chronologically coherent summary and design a hierarchical attention mechanism that models the cross-modal semantic dependencies between the videos and the query, achieving state-of-the-art performance. (3) We employ reinforcement learning to avoid over-fitting to the limited ground-truth data. We introduce two novel rewards that capture query-adaptability and temporal coherence. We conduct extensive experiments on the challenging MVS1K dataset. Quantitative and qualitative analysis shows that our model achieves state-of-the-art results and generates visually coherent summaries. 2 RELATED WORK We cover related work on single video summarization (SVS), multivideo summarization (MVS) and pointer networks (PN). 2.1 Single Video Summarization Both supervised and unsupervised methods have been proposed for the SVS task. On the supervised side, methods involve categoryspecific classifiers for importance scoring of different video segments [51, 62], sequential determinantal point processes [16, 57, 58], LSTMs [37, 55, 71], encoder-decoder architectures [6, 23], memory networks [14] and semantic aware techniques which include video descriptors [67], vision-language embeddings [50, 63] and textsummarization metrics [69]. Instead, unsupervised methods rely on low-level visual features to determine the important parts of a video. Strategies include clustering [10, 17, 45], maximal bi-clique finding [7], energy minimization [52] and sparse-coding [8, 12, 13, 80]. Recently, convolutional models [54], generative adversarial networks [15, 39, 53, 75, 76] and reinforcement learning [29, 48, 74, 81] have shown compelling results on the SVS task. Using queries to guide the summary has been explored in SVS. Proposed methods condition the summary generation on the textual query embedding [74, 76], learn common textual-visual coembeddings for both the query and the frames [63], or enrich the visual features with textual ones obtained from dense textual shot annotations [58, 59]. As current multi-video datasets contain video-level titles/descriptions and abstract queries (e.g., retirement, wedding, terror attack), the aforementioned methods are not applicable. Instead, we use the query to retrieve a set of web-images that represent its major sub-concepts/sub-events and use these images to condition the summary generation process. Note that query-adaptability is more critical in the case of MVS due to large irrelevant content across different videos. In general, SVS methods that operate on a single long video obtained by concatenating all videos to be summarized, such as \ud835\udc58-means [10] and Dominant Set Clustering (DSC) [3], also result in lower performance than methods designed specifically for the QAMVS problem. These methods first form clusters of frames, select centroids as candidate frames and then compute diversity to eliminate similar keyframes before generating the final summary. 
Due to the lack of an ordering mechanism, SVS methods result in low consistency across selected frames that reduces readability and smoothness of the overall summary, affecting significantly the user viewing experience [11]. Nevertheless, to emphasize the importance of designing techniques that tackle QAMVS specifically, we also report results for SVS approaches in our evaluation. 2.2 Multi-Video Summarization Applications range from multi-view summarization aiming at summarizing videos captured for the same scene with several dynamically moving cameras (e.g., in surveillance) [21, 38, 43, 44, 82], and summarizing of user-devices\u2019 videos [1, 41, 72, 73, 77\u201379] (e.g., for cities hotspot preview [78] or city navigation [77]) to topicrelated MVS (QAMVS) [22, 25, 28, 41, 46, 66]. Early attempts to solve the QAMVS task applied techniques optimizing for diversity [9, 20, 26, 31\u201335, 46, 47, 65, 66]. However, methods that advocate for these metrics cannot solve the QAMVS task satisfactorily, as \f1 2 3 4 Frame Distribution LSTM LSTM LSTM ... LSTM Final Summary Hierarchical Attention Web Images Text Data 1. William and Kate leave Buckingham Palace 2. William and Kate kiss on the balcony 3. William and Kate walk down aisle Query: Prince William Wedding Videos' Titles & Descriptions Video 2 Video 3 Videos Video 1 Figure 2: Overview of the policy network. DeepQAMVS is modeled as a Pointer Network with Hierarchical Attention (Figure 3). The policy \ud835\udf0b\ud835\udf03(\ud835\udc4e\ud835\udc61|Y \ud835\udc61\u22121,\ud835\udc5e, D, I) is constructed by gathering evidence from the videos, the query images and the textual data. During inference, the frame with the highest probability from the video collection is copied into the final summary Y \ud835\udc3f. (1) unimportant yet diverse frames are selected due to the high amount of irrelevant information across the different videos and (2) frames are not ordered chronologically to make a coherent story. Nevertheless, to emphasize the importance of designing techniques that tackle QAMVS challenges specifically, we also report results for diversity-oriented approaches in our evaluation. More recent QAMVS methods can be divided into three categories: (1) graphbased, (2) decomposition-based, and (3) learning-based. Graph-based methods construct a graph of relationships between frames of different videos, from which the most representative ones are selected. For example, Kim et al. [28] summarized query related videos by performing diversity ranking on top of the similarity graphs between query web-images and video frames, in order to reconstruct a storyline graph of query-relevant events. Ji et al. [24] proposed a clustering-based procedure using a hyper-graph dominant set, followed by a refinement step to filter frames that are most dissimilar to the query web-images, and a final step where the remaining candidates are ordered based on topic closeness. Decomposition-based approaches subsume weighted archetypal analysis and sparse-coding. Ji et al. [25] proposed a two-stage approach, where the frames are first extracted using multimodal Weighted Archetypal Analysis (MWAA). Here, the weights are obtained from a graph fusing information from video frames, textual meta-data and query-dependent web-images. Next, the frames are chronologically ordered based on upload time and topic-closeness. Panda et al. 
[46] formulated QAMVS as a sparse coding program regularized with interestingness and diversity metrics, followed by ordering the frames using a video-relevance score. While Panda et al. [46] did not account for query-adaptability, Ji et al. [22] extended the latter with an additional regularization term enforcing the selected frames to be similar to the query web-images. To form the final summary, frames are then ordered chronologically by grouping them into events based on textual and visual similarity. For learning-based methods, Wang et al. [66] proposed a multipleinstance learning approach to localize the tags into video shots and select the query-aware frames in accordance with the tags. Nie et al. [41] selected frames from semantically important regions and then use a probabilistic model to jointly optimize for multiple attributes such as aesthetics, coherence, and stability. In contrast to previous approaches that propose modularized solutions, we design a unified end-to-end model for QAMVS to generate visually coherent summaries in an end-to-end fashion. 2.3 Pointer Networks Pointer Networks (PNs) have been applied to solve combinatorial optimization problems, e.g., traveling-salesman [2] and language modeling tasks [64]. At every time step, the output is constructed by iteratively copying an input item that is chosen by the pointer. This property is uniquely convenient for the QAMVS task. Our model is the first to use a Pointer Network for QAMVS. PNs, unlike other Seq2Seq models (e.g., LSTM [71] or seqDPP [16]), enable attending to any frame in any video at any time point. Hence, they naturally generate an ordered sequence of frames, while the attention mechanism fuses the multi-modal information to select the next best frame satisfying diversity, query-relevance and visual coherence (Figure 3). We train the Pointer Network in our model using reinforcement learning, as it is useful for tasks with limited labeled data [4, 5, 19, 30, 36, 40, 56], as in the case of QAMVS. 3 PROPOSED DEEPQAMVS MODEL Given a collection of videos and images retrieved by searching with a common text query that encodes user preferences, the goal is to generate a topic-related summary for the videos. DeepQAMVS utilizes both web-images and textual meta-data. Web-images are particularly useful as they guide the summarization towards discarding irrelevant information (image attention). However, they \fVideo 1 Video 2 Video 3 Attention Operator Attention Operator Attention Operator Attention Operator Attention Operator Attention Operator Videos Text Data 1. William and Kate leave Buckingham Palace 2. William and Kate kiss on the balcony 3. William and Kate walk down aisle Query: Prince William Wedding Videos' Titles & Descriptions Web Images Figure 3: Illustration of DeepQAMVS\u2019s Hierarchical Attention \ud835\udf0b\ud835\udf03(\ud835\udc4e\ud835\udc61|Y \ud835\udc61\u22121,\ud835\udc5e, D, I). might contain irrelevant information. To robustly ensure queryrelevance, DeepQAMVS leverages multi-modal attention which enables the web-images and textual meta-data (query attention) to act as complementary information that guides the summarization process. In the following, we first formally define the problem, then introduce the proposed DeepQAMVS model. 3.1 Problem Formulation Let \ud835\udc5ebe the semantic embedding of the textual query and let I = {\ud835\udc3c1, \u00b7 \u00b7 \u00b7 , \ud835\udc3c|I |} refer to the set of web-image embeddings. 
We denote by X(\ud835\udc63) = {\ud835\udc65(\ud835\udc63) 1 , \u00b7 \u00b7 \u00b7 ,\ud835\udc65(\ud835\udc63) |X(\ud835\udc63) |} the set of frame embeddings from video \ud835\udc63\u2208{1, \u00b7 \u00b7 \u00b7 , \ud835\udc41}. Let D = {\ud835\udc51(1), \u00b7 \u00b7 \u00b7 ,\ud835\udc51(\ud835\udc41)} be the text embeddings of the videos\u2019 textual data, constructed by averaging the embeddings of the title and description for every video. The goal is to generate a summary Y \ud835\udc3f= {\ud835\udc661, \u00b7 \u00b7 \u00b7 ,\ud835\udc66\ud835\udc3f} of \ud835\udc3fframes selected from the input video frames, i.e., Y \ud835\udc3f\u2282X = \u00d0 \ud835\udc63X(\ud835\udc63). Due to the sequential nature of the problem, i.e., selecting the next candidate frame based on what has been selected so far, we formulate the QAMVS problem as a Markov Decision Process (MDP). Specifically, an agent operates in \ud835\udc61\u2208{1, \u00b7 \u00b7 \u00b7 , \ud835\udc3f} time-steps according to a policy \ud835\udf0b\ud835\udf03(\ud835\udc4e\ud835\udc61|Y \ud835\udc61\u22121,\ud835\udc5e, D, I) with trainable parameters \ud835\udf03. The policy encodes the probability of selecting an action \ud835\udc4e\ud835\udc61given the state Y \ud835\udc61\u22121, the query \ud835\udc5e, the text meta-data D and the webimages I. The state Y \ud835\udc61\u22121 denotes the set of frames that are already selected in the summary up to time step \ud835\udc61. Note that Y 0 = \u2205. The set of possible actions is the set of input frames after eliminating the ones that have already been selected in the summary, i.e., \ud835\udc4e\ud835\udc61\u2208A\ud835\udc61= X \\ Y \ud835\udc61\u22121. We denote by A(\ud835\udc63) \ud835\udc61 the set of valid actions corresponding to frames from video \ud835\udc63. We model the policy function \ud835\udf0b\ud835\udf03(\ud835\udc4e\ud835\udc61|Y \ud835\udc61\u22121,\ud835\udc5e, D, I) as a pointer network with hierarchical attention, as illustrated in Figure 2. At inference step \ud835\udc61, the inputs (X, D and I), together with the state Y \ud835\udc61\u22121, are used to compute the distribution \ud835\udf0b\ud835\udf03(\ud835\udc4e\ud835\udc61|Y \ud835\udc61\u22121,\ud835\udc5e, D, I) over possible actions \ud835\udc4e\ud835\udc61, i.e., over possible frames. The frame with the highest probability is then copied to the summary Y \ud835\udc61. The process continues until a summary of length \ud835\udc3fis reached. Next, we describe the policy \ud835\udf0b\ud835\udf03(\ud835\udc4e\ud835\udc61|Y \ud835\udc61\u22121,\ud835\udc5e, D, I). 3.2 DeepQAMVS Policy Network Our proposed policy function models the cross-modal semantic dependencies between the videos, the text query and the webimages. More specifically, the policy network \ud835\udf0b\ud835\udf03(\ud835\udc4e\ud835\udc61|Y \ud835\udc61\u22121,\ud835\udc5e, D, I) is the weighted combination of three distributions, video frame attention \ud835\udf0b\ud835\udf03(\ud835\udc4e\ud835\udc61|Y \ud835\udc61\u22121), image attention \ud835\udf0b\ud835\udf03(\ud835\udc4e\ud835\udc61|Y \ud835\udc61\u22121, I), and query attention \ud835\udf0b\ud835\udf03(\ud835\udc4e\ud835\udc61|Y \ud835\udc61\u22121,\ud835\udc5e, D). 
Formally, \ud835\udf0b\ud835\udf03(\ud835\udc4e\ud835\udc61|Y \ud835\udc61\u22121,\ud835\udc5e, D, I) = \ud835\udf07(1) \ud835\udc61 \ud835\udf0b\ud835\udf03(\ud835\udc4e\ud835\udc61|Y \ud835\udc61\u22121)+ \ud835\udf07(2) \ud835\udc61 \ud835\udf0b\ud835\udf03(\ud835\udc4e\ud835\udc61|Y \ud835\udc61\u22121, I) + \ud835\udf07(3) \ud835\udc61 \ud835\udf0b\ud835\udf03(\ud835\udc4e\ud835\udc61|Y \ud835\udc61\u22121,\ud835\udc5e, D), (1) where \ud835\udf07(1) \ud835\udc61 , \ud835\udf07(2) \ud835\udc61 and \ud835\udf07(3) \ud835\udc61 are learnable interpolation terms satisfying \ud835\udf07(1) \ud835\udc61 + \ud835\udf07(2) \ud835\udc61 + \ud835\udf07(3) \ud835\udc61 = 1. An illustration of the hierarchical attention is provided in Figure 3. In the following, we introduce each of these three distributions. The video frame attention \ud835\udf0b\ud835\udf03(\ud835\udc4e\ud835\udc61|Y \ud835\udc61\u22121) is modeled as a two-level attention, i.e., at each time step \ud835\udc61, video attention selects video \ud835\udc63 and then selects a frame \ud835\udc4e\ud835\udc61from video \ud835\udc63: \ud835\udf0b\ud835\udf03(\ud835\udc4e\ud835\udc61|Y \ud835\udc61\u22121) = \ud835\udc41 \u2211\ufe01 \ud835\udc63=1 \ud835\udc5d(\ud835\udc63|Y \ud835\udc61\u22121)\ud835\udc5d(\ud835\udc4e\ud835\udc61|\ud835\udc63, Y \ud835\udc61\u22121), (2) where \ud835\udc5d(\ud835\udc4e\ud835\udc61|\ud835\udc63, Y \ud835\udc61\u22121) is the probability of selecting a frame \ud835\udc4e\ud835\udc61from video \ud835\udc63, and \ud835\udc5d(\ud835\udc63|Y \ud835\udc61\u22121) is the distribution over the collection of videos. We compute both probabilities via \ud835\udc5d(\ud835\udc4e\ud835\udc61|\ud835\udc63, Y \ud835\udc61\u22121),\ud835\udc50(\ud835\udc63) \ud835\udc61 = Attention(A(\ud835\udc63) \ud835\udc61 , Y \ud835\udc61\u22121), (3) \ud835\udc5d(\ud835\udc63|Y \ud835\udc61\u22121),\ud835\udc50\ud835\udc61= Attention \u0010 {\ud835\udc50(1) \ud835\udc61 , \u00b7 \u00b7 \u00b7 ,\ud835\udc50(\ud835\udc41) \ud835\udc61 }, Y \ud835\udc61\u22121 \u0011 . (4) The Attention operator, as well as the context vectors \ud835\udc50\ud835\udc61and {\ud835\udc50(\ud835\udc63) \ud835\udc61 }\ud835\udc41 \ud835\udc63=1 are defined below. Intuitively, the two-level attention enables scaling to a large number of videos and video lengths since it decomposes a joint distribution into the product of two conditional distributions. The image attention \ud835\udf0b\ud835\udf03(\ud835\udc4e\ud835\udc61|Y \ud835\udc61\u22121, I) reflects the correlation between video frames and web-images. We first generate a context \f... ... Figure 4: The Attention operator. vector \u02c6 \ud835\udc50\ud835\udc61encoding the most relevant information in the web-images at time \ud835\udc61given the current summary Y \ud835\udc61\u22121: \ud835\udc5d(\ud835\udc3c|Y \ud835\udc61\u22121), \u02c6 \ud835\udc50\ud835\udc61= Attention(I, Y \ud835\udc61\u22121). (5) \ud835\udf0b\ud835\udf03(\ud835\udc4e\ud835\udc61|Y \ud835\udc61\u22121, I) is then obtained by transforming the dot product between \u02c6 \ud835\udc50\ud835\udc61and the action representations, i.e., representations from not previously selected frames, into a distribution via a softmax. The query attention \ud835\udf0b\ud835\udf03(\ud835\udc4e\ud835\udc61|Y \ud835\udc61\u22121,\ud835\udc5e, D) captures the correlation between the query\ud835\udc5e, the text data D and the summary at time \ud835\udc61. For this, we first weigh every video\u2019s text embedding by its similarity to the query. 
Next, we compute an attention over the weighted embeddings, given the current summary Y \ud835\udc61\u22121, via \ud835\udf0b(\ud835\udc4e\ud835\udc61|Y \ud835\udc61\u22121,\ud835\udc5e, D), \u02dc \ud835\udc50\ud835\udc61=Attention \u0010 (\ud835\udc5e\ud835\udc47\ud835\udc51(\ud835\udc63))\ud835\udc51(\ud835\udc63), Y \ud835\udc61\u22121 \u0011 . (6) The interpolation weights in Eq. (1) can be obtained by attending over the modalities\u2019 context vectors, \ud835\udc50\ud835\udc61, \u02c6 \ud835\udc50\ud835\udc61and \u02dc \ud835\udc50\ud835\udc61: [\ud835\udf07(1) \ud835\udc61 , \ud835\udf07(2) \ud835\udc61 , \ud835\udf07(3) \ud835\udc61 ], \u00b7 = Attention({\ud835\udc50\ud835\udc61, \u02c6 \ud835\udc50\ud835\udc61, MLP( \u02dc \ud835\udc50\ud835\udc61)}, Y \ud835\udc61\u22121), (7) where MLP is a multi-layer perceptron used to unify the dimensions of the three context vectors. We observe that if \ud835\udc50\ud835\udc61and \u02c6 \ud835\udc50\ud835\udc61are similar, their weights \ud835\udf07(1) \ud835\udc61 and \ud835\udf07(2) \ud835\udc61 are close, else more weight is given to video attention. The Attention operator, illustrated in Figure 4 and used multiple times above, takes as input a sequence of vectors U = {\ud835\udc62\ud835\udc56}|U | \ud835\udc56=1 with \ud835\udc62\ud835\udc56\u2208R\ud835\udc5aand the summary Y \ud835\udc61\u22121, embedded by an LSTM into a hidden state \u210e\ud835\udc61\u2208R\ud835\udc5b. The Attention operator provides as output a distribution \ud835\udc5d(\ud835\udc62\ud835\udc56) over the vectors {\ud835\udc62\ud835\udc56}|U | \ud835\udc56=1 and a context vector \ud835\udc50as a linear combination of elements in U by conditioning them on \u210e\ud835\udc61: \ud835\udc52\ud835\udc56= \ud835\udc64\ud835\udc47 1 tanh(\ud835\udc4a2[\ud835\udc62\ud835\udc56;\u210e\ud835\udc61]), \ud835\udc5d(\ud835\udc621), \u00b7 \u00b7 \u00b7 , \ud835\udc5d(\ud835\udc62|U |) = Softmax([\ud835\udc521, \u00b7 \u00b7 \u00b7 ,\ud835\udc52|U |]), (8) \ud835\udc50= |U | \u2211\ufe01 \ud835\udc56=1 \ud835\udc5d(\ud835\udc62\ud835\udc56)\ud835\udc62\ud835\udc56, (9) where \ud835\udc641 \u2208R\ud835\udc5band \ud835\udc4a2 \u2208R\ud835\udc5b\u00d7(\ud835\udc5b+\ud835\udc5a) are trainable weight parameters. The outputs of the Attention operator are the probabilities given in Eq. (8) and the context vector \ud835\udc50given in Eq. (9). Embeddings: The video frames X are embedded with a pre-trained CNN followed by a BiLSTM network. Web-images I are encoded with the same CNN. Textual embeddings D are computed for every video by averaging Glove word embeddings [49] from its associated title and description. Note that we normalize all embeddings. 3.3 Training with Policy Gradient Due to the limited annotated data and the subjectivity of the ground truth summaries, we train our model via reinforcement learning. The goal is to learn the policy \ud835\udf0b\ud835\udf03(\ud835\udc4e\ud835\udc61|Y \ud835\udc61\u22121,\ud835\udc5e, D, I) by maximizing the expected reward \ud835\udc3d(\ud835\udf03) = E\ud835\udf0b\ud835\udf03[\ud835\udc45(Y \ud835\udc3f)] during training, where \ud835\udc45(Y \ud835\udc3f) denotes the reward function computed for a summary Y \ud835\udc3f. Following REINFORCE [68], we approximate the expectation by running the agent for \ud835\udc40episodes for a batch of videos and then taking the average gradient. To reduce variance, we use a moving average of the rewards as a computationally efficient baseline. 
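A minimal sketch of this update is given below. It assumes a `policy` object whose `sample` method returns a summary together with the log-probabilities of the pointer choices, and a scalar `reward_fn`; these names and the moving-average coefficient are illustrative placeholders, not the paper's exact implementation.

```python
import torch

def reinforce_update(policy, optimizer, batch, reward_fn,
                     n_episodes=5, baseline=0.0, momentum=0.9):
    """One policy-gradient step: run M episodes per input, weight the summed
    log-probabilities by (reward - moving-average baseline), and backpropagate."""
    optimizer.zero_grad()
    loss = 0.0
    for sample in batch:                                    # sample = (frames X, query q, texts D, images I)
        for _ in range(n_episodes):
            summary, log_probs = policy.sample(sample)      # pointer choices and their log-probs
            reward = float(reward_fn(summary, sample))      # scalar combined reward R(Y_L)
            loss = loss - (reward - baseline) * torch.stack(log_probs).sum()
            baseline = momentum * baseline + (1 - momentum) * reward
    (loss / (len(batch) * n_episodes)).backward()
    optimizer.step()
    return baseline                                         # carry the baseline across updates
```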
The reward $R = \beta_1 R_{\text{div}} + \beta_2 R_{\text{rep}} + \beta_3 R_{\text{query}} + \beta_4 R_{\text{coh}}$ is composed of four terms, measuring diversity ($R_{\text{div}}$), representativeness ($R_{\text{rep}}$), query-adaptability ($R_{\text{query}}$) and temporal coherence ($R_{\text{coh}}$). The hyperparameters $\{\beta_i\}_{i=1}^4$ are the weights associated with the different rewards. Note that we use the same diversity and representativeness rewards as Zhou et al. [81]. In addition, we introduce two novel rewards, query-adaptability and temporal coherence, to accommodate the QAMVS task. To keep the rewards in the same range, we use (1) the dot product as the similarity metric in $R_{\text{coh}}$ to balance out $R_{\text{div}}$, and (2) a form similar to $R_{\text{rep}}$ for $R_{\text{query}}$. The Diversity Reward measures the dissimilarity among the selected frames in the feature space via $R_{\text{div}}(Y_L) = \frac{1}{L(L-1)} \sum_{y_t, y_{t'} \in Y_L, t \neq t'} (1 - y_t^\top y_{t'})$ (10). Intuitively, the more dissimilar the selected frames are to each other, the higher the diversity reward the agent receives. The Representativeness Reward measures how well the generated summary represents the main events occurring in the collection of videos. Thus, the reward is higher when the selected frames are closer to the cluster centers. Formally, $R_{\text{rep}}(Y_L) = \exp\big(-\frac{1}{|X|} \sum_{x \in X} \min_{y_t \in Y_L} \|x - y_t\|_2\big)$ (11). The Query-Adaptability Reward encourages the model to select summary frames that are similar to the web-images $\mathcal{I}$ via $R_{\text{query}}(Y_L) = \exp\big(-\frac{1}{L} \sum_{y_t \in Y_L} \min_{I \in \mathcal{I}} \|y_t - I\|_2\big)$ (12). Other forms we explored include $R_{\text{query}}(Y_L) = -\frac{1}{|\mathcal{I}|} \sum_{I \in \mathcal{I}} \min_{y_t \in Y_L} \|y_t - I\|_2$ and $R_{\text{query}}(Y_L) = -\frac{1}{L} \sum_{y_t \in Y_L} \|y_t - \frac{1}{|\mathcal{I}|} \sum_{I \in \mathcal{I}} I\|_2$; we found the formulation in Eq. (12) to work best. The Temporal Coherence Reward encourages the visual coherence of the generated summary via $R_{\text{coh}}(Y_L) = \frac{1}{L} \sum_{y_t \in Y_L} \rho(y_t)$ (13), where $\rho(y_t)$ adds up the correlation between consecutive frames: $\rho(y_t) = \frac{1}{2} \sum_{k \in \{\pm 1\}} y_t^\top y_{t+k}$ (14). Hence, the more correlated the neighboring frames, the higher the temporal coherence reward. Note that optimizing for visual/temporal coherence, i.e., smoothness of the transitions, is only a proxy for chronological soundness, which is a much harder problem.
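For illustration, the four rewards could be computed roughly as follows on $\ell_2$-normalized embeddings. This is a sketch under our reading of Eqs. (10)-(14), with the boundary terms of $R_{\text{coh}}$ simplified; it is not the authors' code.

```python
import torch

def summary_reward(Y, X, I, betas=(0.25, 0.25, 0.25, 0.25)):
    """Combined reward for a summary. Y: (L, d) selected frame embeddings,
    X: (n, d) all input frame embeddings, I: (k, d) web-image embeddings."""
    L = Y.shape[0]
    sim = Y @ Y.t()                                                   # pairwise dot products
    off_diag = ~torch.eye(L, dtype=torch.bool)
    r_div = (1.0 - sim[off_diag]).mean()                              # Eq. (10)
    r_rep = torch.exp(-torch.cdist(X, Y).min(dim=1).values.mean())    # Eq. (11)
    r_query = torch.exp(-torch.cdist(Y, I).min(dim=1).values.mean())  # Eq. (12)
    r_coh = sim.diagonal(offset=1).mean()                             # Eqs. (13)-(14), simplified
    b1, b2, b3, b4 = betas
    return b1 * r_div + b2 * r_rep + b3 * r_query + b4 * r_coh
```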
4 EXPERIMENTAL SETUP
We describe experimental details, such as the evaluation dataset and metrics, and present quantitative and qualitative results comparing the proposed DeepQAMVS model with several baselines. Our experiments aim to show that (1) SVS methods cannot properly address QAMVS and multi-stage MVS procedures result in lower performance than a unified system (Sections 4.2 and 4.5), (2) the use of multi-modal information is crucial in guiding the summarization process (Section 4.3), (3) the introduced novel temporal coherence reward generates more visually coherent summaries (Section 4.4), and (4) our method reduces the computational complexity compared to the current state-of-the-art methods (Section 4.6).
4.1 Experimental Settings
Dataset: We perform our experiments on the MVS1K dataset [22]. MVS1K is a collection of 1000 videos on 10 queries (events), with associated web-images, video titles, and text descriptions. Each query has 4 different user summaries, serving as ground truth. Table 1 lists the events, the query used to retrieve them, the number of videos and query web-images for each event, as well as the total number of input frames across all videos. Each video is associated with a title and a text description. We use the features introduced by Ji et al. [22]. The dimensionality of the video frame and web-image embeddings is 4352. The embeddings are composed of a 4096-dimensional VGGNet-19 [60] (trained on ImageNet) CNN feature vector concatenated with a 256-dimensional HSV color histogram feature vector. These embeddings are reduced to a vector of length 256 through a fully connected layer. The input frames to the model are selected such that they represent the segment centers obtained using the shot boundary detection algorithm [70]. The textual features (titles and descriptions) and the query are GloVe embeddings [49] of dimension 100. We set the hidden state dimensions of the LSTM and the pointer network to 256 and 32, respectively.
Table 1: Dataset Characteristics
Query ID | Query | # Videos | # Frames | # Images
1 | Britains Prince William wedding 2011 | 90 | 1124 | 324
2 | Prince death 2016 | 104 | 1549 | 142
3 | NASA discovers Earth-like planet | 100 | 1349 | 226
4 | American government shut-down 2013 | 82 | 962 | 177
5 | Malaysia Airline MH370 | 109 | 1330 | 435
6 | FIFA corruption scandal 2015 | 90 | 785 | 177
7 | Obama re-election 2012 | 85 | 1263 | 207
8 | Alpha go vs Lee Sedo | 84 | 976 | 118
9 | Kobe Bryant retirement | 109 | 1140 | 221
10 | Paris terror attacks | 83 | 857 | 651
Total | | 936 | | 2678
Evaluation Metrics: To compare with previous work, generated summaries are assessed using the F1-score, averaged over the ground-truth user summaries. Following prior work [22, 24, 25], two frames are considered to match when their pixel-level Euclidean distance is smaller than a predefined threshold of 0.6.
Training Details: We train using a 10-fold cross-validation scheme. Specifically, for evaluating each event, we use the remaining 9 events as training data. During training, we use a batch size of 32, where each sample consists of 10 randomly sampled videos per event. We limit the number of video combinations to 4000 for every event. This large number of random combinations allowed us to avoid overfitting despite the small number of events (besides the proposed reinforcement learning framework, we also experimented with training in a supervised fashion; however, we could not avoid overfitting). We optimize with Adam, a learning rate of 0.01 and ℓ2 regularization. During testing, we use all the videos associated with an event in the test set.
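The REINFORCE training loop with the moving-average baseline described in Section 3.3 can be sketched as follows. The policy interface, the episode count and the weight-decay value are our assumptions; only the optimizer settings (Adam, learning rate 0.01, ℓ2 regularization) and the moving-average baseline are taken from the text.

```python
import torch

def reinforce_step(policy, optimizer, batch, baseline, num_episodes=5, momentum=0.9):
    """One policy-gradient update. `policy(sample)` is assumed to sample a summary
    and return (log_probs, reward), with reward a Python float; this interface is
    illustrative, not the authors' exact one."""
    optimizer.zero_grad()
    losses, rewards = [], []
    for sample in batch:                      # batch of video collections
        for _ in range(num_episodes):         # M episodes per sample
            log_probs, reward = policy(sample)
            advantage = reward - baseline     # moving-average baseline reduces variance
            losses.append(-(log_probs.sum() * advantage))
            rewards.append(reward)
    torch.stack(losses).mean().backward()
    optimizer.step()
    # update the moving-average baseline with this batch's mean reward
    return momentum * baseline + (1 - momentum) * (sum(rewards) / len(rewards))

# optimizer = torch.optim.Adam(policy.parameters(), lr=0.01, weight_decay=1e-5)
# (the weight-decay value is a placeholder; the paper only states l2 regularization)
```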
Since the diversity ($R_{\text{div}}$) and representativeness ($R_{\text{rep}}$) rewards on one side and the coherence reward ($R_{\text{coh}}$) on the other are contradictory, i.e., $R_{\text{div}}$ and $R_{\text{rep}}$ encourage the selection of diverse frames while $R_{\text{coh}}$ is high when the summary is smooth, as measured by the similarity of neighboring frames, we use a training schedule: (1) we set $\beta_1 = \beta_2 = \beta_3 = 1/3$ and $\beta_4 = 0$ for 60 epochs; (2) we then set $\beta_1 = \beta_2 = \beta_3 = \beta_4 = 1/4$ for 30 additional epochs (a short sketch of this schedule is given after the baseline list below). We also experiment with different summary lengths $L \in \{30, 50, 60\}$.
4.2 Experimental Results
We compare DeepQAMVS to five SVS baselines operating on the concatenated videos. We chose the SVS baselines such that they represent the main trends in unsupervised summarization:
• K-means [10]: an SVS method that clusters all video frames and then selects the frames closest to the cluster centers as summary frames (k = 9).
• DSC [3]: Dominant Set Clustering (DSC) is a graph-based clustering method where a dominant set algorithm is used to extract the summary frames.
• MSR [3]: Minimum Sparse Reconstruction (MSR) is a decomposition-based approach which formulates video summarization as a minimum sparse reconstruction problem.
• SUM-GAN [39]: an adversarial LSTM model, where the generator is an autoencoder LSTM that first selects the summary frames and then reconstructs the original video based on them, and the discriminator is trained to distinguish between the reconstructed video and the original one.
• DSN [81]: an RNN trained with deep reinforcement learning using diversity and representativeness rewards.
Moreover, we compare with four state-of-the-art QAMVS baselines:
• QUASC [22]: a sparse coding program regularized with interestingness, diversity and query-relevance metrics, followed by ordering the frames chronologically by grouping them into events based on textual and visual similarity.
• MWAA [25]: a two-stage approach, where the frames are first extracted using multi-modal Weighted Archetypal Analysis (MWAA) and then ordered chronologically based on upload time and topic-closeness.
• MVS-HDS [24]: a clustering-based procedure using a Hyper-graph Dominant Set, followed by a refinement step that filters the frames most dissimilar to the query web-images, and a final step where the remaining candidates are ordered based on topic-closeness.
• Random-50: we also compare our method against a randomly generated summary of length 50.
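Returning to the reward-weight schedule described at the start of this subsection, a minimal sketch is given below; the epoch thresholds and weights are as stated in the text, while the helper names are ours.

```python
def reward_weights(epoch: int):
    """Phase 1 (first 60 epochs): beta_1 = beta_2 = beta_3 = 1/3, beta_4 = 0.
    Phase 2 (next 30 epochs): all four weights equal to 1/4."""
    return (1/3, 1/3, 1/3, 0.0) if epoch < 60 else (0.25, 0.25, 0.25, 0.25)

def total_reward(r_div, r_rep, r_query, r_coh, epoch):
    # R = beta_1 R_div + beta_2 R_rep + beta_3 R_query + beta_4 R_coh
    b1, b2, b3, b4 = reward_weights(epoch)
    return b1 * r_div + b2 * r_rep + b3 * r_query + b4 * r_coh
```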
We present quantitative results of our approach in Table 2 and the number of summary frames selected by each approach in Table 3. More specifically, in Table 2, the reported numbers represent the mean and standard deviation obtained from 5 rounds of experiments. We report the F1-scores for summaries of length 30 (Ours-30), 50 (Ours-50) and 60 (Ours-60), as well as the best obtained score (Ours-best) when selecting the best summary length for every event. We observe that SVS methods have in general lower performance than MVS methods. In addition, our proposed end-to-end DeepQAMVS model, on average, outperforms all baselines.
Table 2: Comparison of our approach against baselines (F1 score).
Method (Query ID) | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | AVG
SVS:
k-means | .576 | .552 | .568 | .336 | .457 | .525 | .651 | .278 | .384 | .337 | .466
DSC | .578 | .472 | .399 | .530 | .407 | .494 | .533 | .485 | .529 | .471 | .490
MSR | .472 | .391 | .370 | .414 | .396 | .355 | .418 | .234 | .384 | .288 | .372
SUM-GAN | .620±.035 | .481±.028 | .519±.034 | .501±.038 | .413±.022 | .455±.048 | .458±.059 | .459±.041 | .510±.021 | .395±.056 | .486±.075
DSN | .529±.019 | .327±.062 | .478±.036 | .407±.026 | .325±.042 | .453±.033 | .616±.028 | .375±.022 | .469±.021 | .384±.016 | .436±.093
MVS:
QUASC | .520 | .513 | .400 | .570 | .513 | .538 | .623 | .439 | .709 | .588 | .544
MVS-HDS | .660 | .552 | .475 | .526 | .495 | .520 | .642 | .469 | .633 | .581 | .555
MWAA | .705 | .610 | .553 | .511 | .563 | .466 | .664 | .483 | .611 | .379 | .555
Random-50 | .600±.070 | .349±.088 | .288±.047 | .492±.131 | .255±.074 | .352±.096 | .265±.099 | .429±.109 | .326±.109 | .284±.064 | .364±.089
Ours-30 | .570±.013 | .491±.037 | .421±.084 | .519±.017 | .458±.054 | .476±.030 | .369±.036 | .372±.014 | .403±.017 | .368±.041 | .446±.022
Ours-50 | .706±.018 | .563±.035 | .525±.017 | .553±.026 | .549±.014 | .486±.032 | .524±.015 | .486±.022 | .690±.015 | .542±.022 | .561±.005
Ours-60 | .722±.019 | .530±.046 | .495±.009 | .508±.015 | .541±.036 | .487±.014 | .614±.026 | .474±.015 | .674±.025 | .573±.019 | .562±.004
Ours-best | .722±.019 | .563±.035 | .525±.017 | .553±.026 | .549±.014 | .487±.014 | .614±.026 | .486±.022 | .690±.015 | .573±.019 | .576±.017
Table 3: Summary length (# frames) across methods.
Method (Query ID) | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | AVG
SVS:
k-means | 48 | 51 | 59 | 51 | 63 | 47 | 48 | 36 | 39 | 28 | 47.0
DSC | 42 | 47 | 34 | 39 | 52 | 46 | 55 | 41 | 41 | 41 | 43.8
MSR | 48 | 51 | 59 | 51 | 63 | 47 | 48 | 36 | 39 | 28 | 47.0
SUM-GAN | 60 | 60 | 60 | 60 | 60 | 60 | 60 | 60 | 60 | 60 | 60.0
DSN | 60 | 60 | 60 | 60 | 60 | 60 | 60 | 60 | 60 | 60 | 60.0
MVS:
QUASC | 33 | 57 | 21 | 55 | 48 | 41 | 59 | 52 | 51 | 56 | 47.3
MVS-HDS | 51 | 60 | 48 | 48 | 60 | 44 | 58 | 54 | 60 | 50 | 53.3
MWAA | 49 | 75 | 46 | 39 | 49 | 37 | 60 | 36 | 39 | 39 | 46.9
Ours-best | 60 | 50 | 50 | 50 | 50 | 60 | 60 | 50 | 50 | 60 | 53.0
4.3 Ablation Study
We present an ablation study, examining the effect of the different rewards and attention mechanisms, in Table 4.
We evaluate the average F1-score across all the events for the following combinations of the attention modalities: (1) only video frame attention ($\mu^{(2)}_t = \mu^{(3)}_t = 0$); (2) video frame and image attention ($\mu^{(3)}_t = 0$); (3) video frame and query attention ($\mu^{(2)}_t = 0$); and (4) video frame, image and query attention ($\mu^{(i)}_t \neq 0$, $\forall i \in \{1, 2, 3\}$). We also investigate the effect of the different rewards by incrementally adding the reward terms: (1) $R_{\text{div}}$ ($\beta_2 = \beta_3 = \beta_4 = 0$); (2) $R_{\text{div}}$ and $R_{\text{rep}}$ ($\beta_3 = \beta_4 = 0$); (3) $R_{\text{div}}$, $R_{\text{rep}}$ and $R_{\text{query}}$ ($\beta_4 = 0$); and (4) all the rewards, i.e., $\beta_i \neq 0$, $\forall i \in \{1, \cdots, 4\}$.
Table 4: Ablation study on attention and rewards ($L = 60$).
Rewards \ Attention | $\mu^{(2)}_t = \mu^{(3)}_t = 0$ | $\mu^{(3)}_t = 0$ | $\mu^{(2)}_t = 0$ | $\mu^{(i)}_t \neq 0$
$\beta_2 = \beta_3 = \beta_4 = 0$ | .323±.013 | .559±.008 | .374±.015 | .560±.008
$\beta_3 = \beta_4 = 0$ | .321±.011 | .557±.002 | .373±.002 | .559±.001
$\beta_4 = 0$ | − | .561±.003 | − | .562±.006
$\beta_i \neq 0$ | .330±.020 | .559±.005 | .375±.017 | .562±.004
Note that we do not add the query reward $R_{\text{query}}$ when testing with attention terms that do not include the image attention (− in Table 4). When considering all forms of attention (last column), we found that $R_{\text{div}}$ barely improves the F1-score. In contrast, including $R_{\text{query}}$ helped improve the quality of the summary, while adding the coherence reward $R_{\text{coh}}$ did not lead to a consistent increase of the F1-score. This is expected, as the ground-truth summary consists of an unordered set of frames. However, as demonstrated by the user study below, $R_{\text{coh}}$ helped generate more visually coherent summaries. Across the different combinations of rewards, we observe that the combination of video frame attention and image attention (column 3) yields an overall higher F1-score than the combination of video frame attention and query attention (column 4). This is because the video descriptions are noisy and associated with the whole video, unlike the web-images, which are embedded in the same space as the frames and hence better capture query-adaptability. The best results are obtained by using all the attention terms (last column), demonstrating the complementary properties of the multimodal information.
4.4 Temporal Coherence User Study
Since the provided ground-truth summaries are composed of an unordered collection of frames, we resort to a user study to assess the visual coherence of our generated summaries.
In total, 21 participants are presented with 3 summaries generated from (1) DeepQAMVS, (2) a random permutation of the video segments in the DeepQAMVS summary (Random), and (3) DeepQAMVS trained without the temporal coherence reward (DeepQAMVSwo). The participants are asked to select the most coherent summary, paying special attention to the transitions between different segments in each video. From Figure 6, we can see that users preferred our DeepQAMVS summary in 8 out of 10 events. For events 5 'Malaysian Airline MH370' and 10 'Paris Attack', users preferred the summaries generated by DeepQAMVSwo. Note that these two events deal with major news incidents and consequently mostly consist of visually similar newscaster segments. In this case, users most likely prefer the resulting summaries from DeepQAMVSwo, as it produces more visually varied summaries due to the higher importance of the diversity reward.
Figure 5: Qualitative results for event 1 (Prince William Wedding) by K-means [10], DSC [3], MSR [3], QUASC [22], MVS-HDS [24] and DeepQAMVS, respectively. Frames outlined in red indicate unimportant keyframes, while yellow ones show redundant ones. The number of unimportant and redundant frames is reported on top of every summary.
Figure 6: Temporal coherence user study; x-axis: Query IDs, y-axis: percentage of participants preferring DeepQAMVS, Random, or DeepQAMVSwo.
Figure 7: Run-time analysis in seconds for L = 30, 50 and 60; Query IDs (x-axis) ordered by total number of input frames.
4.5 Qualitative Results
Figure 5 illustrates the summaries generated by different methods for the query Prince William Wedding (event 1). Visually, we observe that SVS methods choose many irrelevant frames. This is expected, as these methods just optimize for diversity and do not take the query information into account. QUASC, MWAA and MVS-HDS, on the other hand, have fewer irrelevant frames, as they use the web-images to further guide the summarization. Compared to the baselines, our method generates summaries with high diversity and selects fewer unimportant (red bounding box) or redundant (yellow bounding box) frames.
4.6 Run-Time Analysis
For completeness, we report the run-time of our model in Figure 7 for summary lengths 30, 50 and 60. We observe that we scale linearly with the number of input frames and summary length. We do not have access to any QAMVS baseline implementations to measure their run-times, but complexity-wise, they all scale polynomially with the number of input frames.
Figure 8: Failure case from event 7 (Obama Re-election). (a) F1 = .39, R_div = .68, R_rep = .60, R_query = .65, R_coh = .29; (b) F1 = .31, R_div = .67, R_rep = .63, R_query = .67, R_coh = .43. Although the ground-truth summary (left) and the DeepQAMVS-generated one (right) are visually and reward-wise comparable, there is a remarkable difference in their corresponding F1-scores.
4.7 Limitations and Future Work
Figure 8 presents a comparison of two summaries: the ground-truth summary (left, (a)) and the summary generated by our DeepQAMVS (right, (b)). While both summaries have high diversity, representativeness and query-adaptability rewards, (b) has a lower F1-score than (a). This showcases the limitations of (1) the F1-score as a metric to assess the summary and (2) the subjectivity of the ground-truth summaries. The F1-score relies solely on the visual overlap between the selected frames and the ground truth using pixel-level distances, which are highly sensitive to zooming, shifting and camera angle. In fact, Otani et al. [42] showed that randomly generated summaries achieve comparable or better performance than state-of-the-art methods when evaluated using the F1-score on two SVS datasets, SumMe [18] and TVSum [61]. Note that the ground truth in their case consists of importance scores associated with every frame. Otani et al. [42] proposed a new evaluation protocol based on the correlation between the ranking of the estimated scores and the human-annotated ones (Kendall [27] and Spearman [83] correlation coefficients). This metric shows the expected behavior: across human-annotated summaries, the correlation is high, while the correlation between randomly generated summaries and those of state-of-the-art methods is small. Unfortunately, this metric is not applicable to QAMVS. To see this, consider the following: if the ground truth consists of importance scores, redundant frames representing an important event will have high scores across videos. Hence, a ranked list of ground-truth scores contains redundant frames, which leads to a sub-optimal summary that nevertheless attains high Spearman/Kendall scores. To fix this, we believe that a metric combining visual, textual and temporal-order overlap would lead to a better evaluation protocol. A few papers have proposed metrics based on textual overlap in the past. In particular, Yeung et al. [69] annotated segments in videos with sentences; the ground-truth and selected segments are then compared using a similarity metric for text summarization (ROUGE). Textual annotation could be very expensive for QAMVS. However, recent advances in image captioning could be leveraged to automate the process. In this paper, we designed a user study to assess the coherence of the produced summaries. However, user studies are expensive, subjective and not reproducible. Instead, a ranking correlation measure between a list of textual concepts from the ordered ground-truth frames and those from the proposed summary may serve as a better metric, similar to [42]. Beyond the evaluation metric, training to optimize for temporal coherence still has room for improvement. Although using the proposed reward results in visually smoother transitions, it did not lead to an overall clear story in the final summary. Embedding frames/web-images in a shared vision-language domain [50] would permit leveraging advances in text summarization. Also, the field could benefit from new benchmarks with more events and shot-level text annotations to enable a wider range of techniques and evaluation metrics.
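For reference, the rank-correlation protocol of Otani et al. [42] discussed above reduces to a few lines with SciPy; the per-annotator averaging in the comment is our assumption about how such scores are typically aggregated.

```python
from scipy.stats import kendalltau, spearmanr

def rank_correlation(pred_scores, human_scores):
    """Compare predicted frame-importance scores against one human annotation
    via Kendall's tau and Spearman's rho (the protocol discussed above)."""
    tau, _ = kendalltau(pred_scores, human_scores)
    rho, _ = spearmanr(pred_scores, human_scores)
    return tau, rho

# Typically averaged over annotators:
# taus = [rank_correlation(model_scores, h)[0] for h in annotator_scores]
```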
5" + }, + { + "url": "http://arxiv.org/abs/2005.01508v2", + "title": "Can We Learn Heuristics For Graphical Model Inference Using Reinforcement Learning?", + "abstract": "Combinatorial optimization is frequently used in computer vision. For\ninstance, in applications like semantic segmentation, human pose estimation and\naction recognition, programs are formulated for solving inference in\nConditional Random Fields (CRFs) to produce a structured output that is\nconsistent with visual features of the image. However, solving inference in\nCRFs is in general intractable, and approximation methods are computationally\ndemanding and limited to unary, pairwise and hand-crafted forms of higher order\npotentials. In this paper, we show that we can learn program heuristics, i.e.,\npolicies, for solving inference in higher order CRFs for the task of semantic\nsegmentation, using reinforcement learning. Our method solves inference tasks\nefficiently without imposing any constraints on the form of the potentials. We\nshow compelling results on the Pascal VOC and MOTS datasets.", + "authors": "Safa Messaoud, Maghav Kumar, Alexander G. Schwing", + "published": "2020-04-27", + "updated": "2020-05-05", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "I.4.6, I.2.6" + ], + "main_content": "Introduction Graphical model inference is an important combinatorial optimization task for robotics and autonomous systems. Despite signi\ufb01cant progress in recent years due to increasingly accurate deep net models, challenges such as inconsistent bounding box detection, segmentation or image classi\ufb01cation remain. Those inconsistencies can be addressed with Conditional Random Fields (CRFs), albeit requiring to solve an inference task which is of combinatorial complexity. Classical algorithms to address combinatorial problems come in three paradigms: exact, approximate and heuristic. Exact algorithms are often based on solving an Integer Linear Program (ILP) using a combination of a Linear Programming (LP) relaxation and a branch-and-bound framework. Particularly for large problems, repeated solving of linear programs is computationally expensive and therefore prohibitive. Approximation algorithms address this concern, however, often at the expense of weak optimality guarantees. Moreover, approximation algorithms often involve manual construction for each problem. Seemingly easier to develop are heuristics which are generally computationally fast but guarantees are hardly provided. In addition, tuning of hyperparameters for a particular problem instance may be required. A fourth paradigm has been considered since the early 2000s and gained popularity again recently [93, 6, 85, 5, 27, 18]: learned algorithms. This fourth paradigm is based on the intuition that data governs the properties of the combinatorial algorithm. For instance, semantic image segmentation always deals with similarly sized problem structures or semantic patterns. It is therefore conceivable that learning to solve the problem on a given dataset uncovers strategies which are close to optimal but hard to \ufb01nd manually, since it is much more effective for a learning algorithm to sift through large amounts of sample problems. To achieve this, in a series of work, reinforcement learning techniques were developed [93, 6, 85, 5, 27, 18] and shown to perform well on a variety of combinatorial tasks from the traveling salesman problem and the knapsack formulation to maximum cut and minimum vertex cover. 
While the aforementioned learning based techniques have been shown to perform extremely well on classical benchmarks, we are not aware of results for inference algorithms in CRFs for semantic segmentation. We hence wonder: can we learn heuristics to address graphical model inference in semantic segmentation problems? To study this, we develop a new framework for higher order CRF inference for the task of semantic segmentation using a Markov Decision Process (MDP). To solve the MDP, we assess two reinforcement learning algorithms: a Deep Q-Net (DQN) [58] and a deep net guided Monte Carlo Tree Search (MCTS) [82]. The proposed approach has two main advantages: (1) Unlike traditional approaches, it does not impose any constraints on the form of the CRF terms to facilitate effective inference. We demonstrate this by designing detection-based higher order potentials for which classical inference approaches are computationally intractable. (2) Our method is more efficient than traditional approaches, as its inference complexity is linear in arbitrary potential orders while classical methods, in general, depend exponentially on the largest clique size. This is due to the fact that semantic segmentation is reduced to sequentially inferring the labels of every variable based on a learned policy, without use of any iterative or search procedure. We evaluate the proposed approach on two benchmarks: (1) the Pascal VOC semantic segmentation dataset [19], and (2) the MOTS multi-object tracking and segmentation dataset [86]. We demonstrate that our method outperforms traditional inference algorithms while being more efficient.
Figure 1: Pipeline of the proposed approach. Inference in a higher order CRF is solved using reinforcement learning for the task of semantic segmentation. For Pascal VOC, unaries are obtained from PSPNet [94], pairwise potentials are computed using hypercolumns from VGG16 [30] and higher order potentials are based on detection bounding boxes from YoloV2 [69]. The policy network is modeled as a graph embedding network [17] following the CRF graph structure. It sequentially produces the labeling of every node (superpixel).
2. Related Work
We first review work on semantic segmentation before discussing learning of combinatorial optimizers.
Semantic Segmentation: In the early 2000s, classifiers were applied locally to images to generate segmentations [42], which resulted in noisy outputs. To address this concern, as early as 2004, He et al. [33] applied Conditional Random Fields (CRFs) [43] and multi-layer perceptron features. For inference, Gibbs sampling was used, since MAP inference is NP-hard due to the combinatorial nature of the program. Progress in combinatorial optimization for flow-based problems in the 1990s and early 2000s [21, 23, 26, 9, 7, 8, 10, 40] showed that min-cut solvers can find the MAP solution of sub-modular energy functions of graphical models for binary segmentation. Approximation algorithms like swap moves and α-expansion [10] were developed to extend the applicability of min-cut solvers to more than two labels. Semantic segmentation was further popularized by combining random forests with CRFs [81].
Recently, the performance on standard semantic segmentation benchmarks like Pascal VOC 2012 [19] has been dramatically boosted by convolutional networks. Both deeper [48] and wider [61, 71, 92] network architectures have been proposed. Advances like spatial pyramid pooling [94] and atrous spatial pyramid pooling [15] emerged to remedy limited receptive fields. Other approaches jointly train deep nets with CRFs [16, 78, 28, 79, 52, 14, 96] to better capture the rich structure present in natural scenes.
CRF Inference: Algorithmically, to find the MAP configuration, LP relaxations have been extensively studied in the 2000s [74, 13, 41, 39, 22, 88, 34, 83, 35, 68, 89, 54, 53, 36, 75, 76, 77, 55, 56]. Also, CRF inference was studied as a differentiable module within a deep net [95, 51, 57, 24, 25]. However, both directions remain computationally demanding, particularly if high order potentials are involved. We therefore wonder whether recent progress in learning based combinatorial optimization yields effective algorithms for high order CRF inference in semantic segmentation.
Figure 2: Illustration of one iteration of reinforcement learning for the inference task. The policy network samples an action $a_1 = (i_1^*, y_{i_1^*})$ from the learned distribution $\pi(a_1|s_1) \in \mathbb{R}^{N \times |\mathcal{L}|}$ at iteration $t = 1$.
Learning-based Combinatorial Optimization: Decades of research on combinatorial optimization, often also referred to as discrete optimization, uncovered a large amount of valuable exact, approximation and heuristic algorithms. Already in the early 2000s, but more prominently recently [93, 6, 85, 5, 27, 18], learning based algorithms have been suggested for combinatorial optimization. They are based on the intuition that instances of similar problems are often solved repeatedly. While humans have uncovered impressive heuristics, data driven techniques are likely to uncover even more compelling mechanisms. It is beyond the scope of this paper to review the vast literature on combinatorial optimization. Instead, we subsequently focus on learning based methods. Among the first is work by Boyan and Moore [6], discussing how to learn to predict the outcome of a local search algorithm in order to bias future search trajectories. Around the same time, reinforcement learning techniques were used to solve resource-constrained scheduling tasks [93]. Reinforcement learning is also the technique of choice for recent approaches addressing NP-hard tasks [5, 27, 18, 45] like the traveling salesman, knapsack, maximum cut, and minimum vertex cover problems. Similarly, promising results exist for structured prediction problems like dialog generation [46, 90, 31], program synthesis [12, 50, 65], semantic parsing [49], architecture search [97], chunking and parsing [80], machine translation [67, 62, 4], summarization [63], image captioning [70], knowledge graph reasoning [91], query rewriting [60, 11] and information extraction [59, 66].
Instead of directly learning to solve a given program, machine learning techniques have also been applied to parts of combinatorial solvers, e.g., to speed up branch-and-bound rules [44, 73, 32, 38]. We also want to highlight recent work on learning to optimize for continuous problems [47, 2]. Given those impressive results on challenging real-world problems, we wonder: can we learn programs for solving higher order CRFs for semantic image segmentation? Since CRF inference is typically formulated as a combinatorial optimization problem, we want to know how recent advances in learning based combinatorial optimization can be leveraged.
3. Approach
We first present an overview of our approach before we discuss the individual components in greater detail.
3.1. Overview
Graphical models factorize a global energy function as a sum of local functions of two types: (1) local evidence; and (2) co-occurrence information. Both cues are typically obtained from deep net classifiers, which are combined in a joint energy formulation. Finding the optimal semantic segmentation configuration, i.e., finding the minimizing argument of the energy, generally involves solving an NP-hard combinatorial optimization problem. Notable exceptions include energies with sub-modular co-occurrence terms. Instead of using classical directions, i.e., heuristics, exhaustive search, or relaxations, here we assess the suitability of learning based combinatorial optimization. Intuitively, we argue that CRF inference for the task of semantic segmentation exhibits an inherent similarity which can be exploited by learning based algorithms. In spirit, this mimics the design of heuristic rules. However, different from hand-crafting those rules, we use a learning based approach. To the best of our knowledge, this is the first work to successfully apply learning based combinatorial optimization to CRF inference for semantic segmentation. We therefore first provide an overview of the developed approach, outlined in Fig. 1. Just like classical approaches, we also use local evidence and co-occurrence information, obtained from deep nets. This information is consequently used to form an energy function defined over a Conditional Random Field (CRF). An example of a CRF with variables corresponding to superpixels (circles), pairwise potentials (edges) and higher order potentials obtained from object detections (fully connected cliques) is illustrated in Fig. 1. However, different from classical methods, we find the minimizing configuration of the energy by repeatedly applying a learned policy network. In every iteration, the policy network selects a random variable, i.e., a pixel, and its label, by computing a probability distribution over all currently unlabeled pixels and their labels. Specifically, the pixel and label are determined by choosing the highest scoring entry in a matrix whose rows and columns correspond to the currently unlabeled pixels and the available labels, respectively, as illustrated in Fig. 9.
3.2. Problem Formulation
Formally, given an image $x$, we are interested in predicting the semantic segmentation $y = (y_1, \ldots, y_N) \in \mathcal{Y}$. Hereby, $N$ denotes the total number of pixels or superpixels, and the semantic segmentation of a superpixel $i \in \{1, \ldots, N\}$ is referred to via $y_i \in \mathcal{L} = \{1, \ldots, |\mathcal{L}|\}$, i.e., it can be assigned one out of $|\mathcal{L}|$ possible discrete labels from the set of possible labels $\mathcal{L}$. The output space is denoted $\mathcal{Y} = \mathcal{L}^N$.
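As described in the overview, each inference step reduces to an argmax over a score matrix whose rows index the currently unlabeled superpixels and whose columns index the labels. The following is a minimal sketch of that selection step; the masking scheme and names are ours.

```python
import numpy as np

def select_action(scores, unlabeled_mask):
    """Pick the highest-scoring (superpixel, label) pair among unlabeled nodes.
    scores: (N, |L|) policy scores; unlabeled_mask: boolean vector of length N."""
    masked = np.where(unlabeled_mask[:, None], scores, -np.inf)  # exclude labeled nodes
    i_star, y_star = np.unravel_index(np.argmax(masked), masked.shape)
    return int(i_star), int(y_star)
```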
Classical techniques obtain local evidence $f_i(y_i)$ for every pixel or superpixel, and co-occurrence information in the form of pairwise potentials $f_{ij}(y_i, y_j)$ and higher order potentials $f_c(y_c)$. The latter assigns an energy to a clique $c \subseteq \{1, \ldots, N\}$ of variables $y_c = (y_i)_{i \in c}$. For readability, we drop the dependence of the energies $f_i$, $f_{ij}$ and $f_c$ on the image $x$ and the parameters of the employed deep nets. The goal of energy based semantic segmentation is to find the configuration $y^*$ which has the lowest energy $E(y)$, i.e.,
$y^* = \arg\min_{y \in \mathcal{Y}} E(y) \triangleq \sum_{i=1}^{N} f_i(y_i) + \sum_{(i,j) \in \mathcal{E}} f_{ij}(y_i, y_j) + \sum_{c \in \mathcal{C}} f_c(y_c)$. (1)
Hereby, the sets $\mathcal{E}$ and $\mathcal{C}$ subsume respectively the captured set of pairwise and higher order co-occurrence patterns. Details about the potentials are presented in Sec. 3.6.
Solving the combinatorial program given in Eq. (1), i.e., inferring the optimal configuration $y^*$, is generally NP-hard. Different from existing methods, we develop a learning based combinatorial optimization heuristic for semantic segmentation with the intention to better capture the intricacies of energy minimization than can be done by hand-crafting rules. The developed heuristic sequentially labels one variable $y_i$, $i \in \{1, \ldots, N\}$, at a time. Formally, selection of one superpixel at a time can be formulated in a reinforcement learning context, as shown in Fig. 9. Specifically, an agent operates in $t \in \{1, \ldots, N\}$ time-steps according to a policy $\pi(a_t|s_t)$ which encodes a probability distribution over actions $a_t \in \mathcal{A}_t$ given the current state $s_t$. The current state subsumes, in selection order, the indices of all currently labeled variables $I_t \subseteq \{1, \ldots, N\}$ as well as their labels $y_{I_t} = (y_i)_{i \in I_t}$, i.e., $s_t \in \{(I_t, y_{I_t}) : I_t \subseteq \{1, \ldots, N\},\ y_{I_t} \in \mathcal{L}^{|I_t|}\}$. We start with $s_1 = \emptyset$. The set of possible actions $\mathcal{A}_t$ is the concatenation of the label spaces $\mathcal{L}$ for all currently unlabeled pixels $j \in \{1, \ldots, N\} \setminus I_t$, i.e., $\mathcal{A}_t = \bigsqcup_{j \in \{1, \ldots, N\} \setminus I_t} \mathcal{L}$. We emphasize the difference between the concatenation operator and the product operator used to obtain the semantic segmentation output space $\mathcal{Y} = \mathcal{L}^N$, i.e., the proposed approach does not operate in the product space. As mentioned before, the policy $\pi(a_t|s_t)$ results in a probability distribution over actions $a_t \in \mathcal{A}_t$, from which we greedily select the most probable action $a_t^* = \arg\max_{a_t \in \mathcal{A}_t} \pi(a_t|s_t)$. The most probable action $a_t^*$ can be decomposed into the index of the selected variable, i.e., $i_t^*$, and its state $y_{i_t^*} \in \mathcal{L}$. We obtain the subsequent state $s_{t+1}$ by combining the extracted variable index $i_t^*$ and its labeling with the previous state $s_t$. Specifically, we obtain $s_{t+1} = s_t \oplus (i_t^*, y_{i_t^*})$ by slightly abusing the $\oplus$-operator to mean concatenation to a set and a list maintained within a state. Formally, we summarize the developed reinforcement learning based semantic segmentation algorithm used for inferring a labeling $\hat{y}$ in Alg. 1.
Algorithm 1: Inference Procedure
1: $s_1 = \emptyset$
2: for $t = 1$ to $N$ do
3:   $a_t^* = \arg\max_{a_t \in \mathcal{A}_t} \pi(a_t|s_t)$
4:   $(i_t^*, y_{i_t^*}) \leftarrow a_t^*$
5:   $s_{t+1} = s_t \oplus (i_t^*, y_{i_t^*})$
6: end for
7: Return: $\hat{y} \leftarrow s_{N+1}$
In the following, we describe the policy function $\pi_\theta(a_t|s_t)$, which we found to work well for semantic segmentation, and different variants to learn its parameters $\theta$.
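Before turning to the policy network, note that the energy of Eq. (1), which the cumulative reward in Sec. 3.4 is built from, is cheap to evaluate for a fixed labeling. A minimal sketch is given below; the container formats for the potentials are our own assumptions, not the paper's data structures.

```python
def crf_energy(y, unary, pairwise, higher_order):
    """Energy E(y) of Eq. (1) for a labeling y (sequence of N label indices).
    unary[i][y_i]              -> f_i(y_i)
    pairwise[(i, j)][y_i][y_j] -> f_ij(y_i, y_j) for every edge (i, j) in E
    higher_order: list of (clique, f_c) pairs, with f_c(labels_tuple) -> clique energy."""
    energy = sum(unary[i][y[i]] for i in range(len(y)))
    energy += sum(table[y[i]][y[j]] for (i, j), table in pairwise.items())
    energy += sum(f_c(tuple(y[i] for i in clique)) for clique, f_c in higher_order)
    return energy
```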
3.3. Policy Function
We model the policy function $\pi_\theta(a_t|s_t)$ using a graph embedding network [17]. The input to the network is a weighted graph $G(V, E, w)$, where the nodes $V = \{1, \ldots, N\}$ correspond to variables, i.e., in our case superpixels, $E$ is a set of edges connecting neighboring superpixels, as illustrated in Fig. 1, and $w : E \rightarrow \mathbb{R}^+$ is the edge weight function. The weights $\{w(i, j)\}_{j:(i,j) \in E}$ on the edges between a given node $i$ and its neighbors $\{j : (i, j) \in E\}$ form a distribution, obtained by normalizing the dot product between the hypercolumns [30] $g_i$ and $g_j$ via a softmax across neighbors. At every iteration, the state $s_t$ is encoded in the graph $G$ by tagging node $i \in V$ with a scalar $h_i = 1$ if the node is part of the already labeled set $I_t$, i.e., if $i \in I_t$, and 0 otherwise. Moreover, a one-hot encoding $\tilde{y}_i \in \{0, 1\}^{|\mathcal{L}|}$ encodes the selected label of nodes $i \in I_t$. We set $\tilde{y}_i$ to the all-zeros vector if node $i$ has not been selected yet. Every node $i \in V$ is represented by a $p$-dimensional embedding, where $p$ is a hyperparameter. The embedding is composed of $\tilde{y}_i$, $h_i$ as well as superpixel features $b_i \in \mathbb{R}^F$ which encode appearance and bounding box characteristics that we discuss in detail in Sec. 4. The output of the network is a $|\mathcal{L}|$-dimensional vector $\pi_i$ for each node $i \in V$, representing the scores of the $|\mathcal{L}|$ different labels for variable $i$. The network iteratively generates a new representation $\mu_i^{(k+1)}$ for every node $i \in V$ by aggregating the current embeddings $\mu_i^{(k)}$ according to the graph structure $E$, starting from $\mu_i^{(0)} = 0$, $\forall i \in V$. After $K$ steps, the embedding captures long range interactions between the graph features as well as the graph properties necessary to minimize the energy function $E$. Formally, the update rule for node $i$ is
$\mu_i^{(k+1)} \leftarrow \text{Relu}\Big(\theta_1^{(k)} h_i + \theta_2^{(k)} \tilde{y}_i + \theta_3^{(k)} b_i + \theta_4^{(k)} \sum_{j:(i,j) \in E} w(i, j)\, \mu_j^{(k)}\Big)$, (2)
where $\theta_1^{(k)} \in \mathbb{R}^p$, $\theta_2^{(k)} \in \mathbb{R}^{p \times |\mathcal{L}|}$, $\theta_3^{(k)} \in \mathbb{R}^{p \times F}$ and $\theta_4^{(k)} \in \mathbb{R}^{p \times p}$ are trainable parameters. After $K$ steps, $\pi_i$ for every unlabeled node $i \in \{1, \ldots, N\} \setminus I_t$ is obtained via
$\pi_i = \theta_5 \mu_i^{(K)} \quad \forall i \in \{1, \ldots, N\} \setminus I_t$, (3)
where $\theta_5 \in \mathbb{R}^{|\mathcal{L}| \times p}$ is another trainable model parameter. We illustrate the policy function $\pi_\theta(a_t|s_t)$ and one iteration of inference in Fig. 9.
3.4. Reward Function
To train the policy, ideally, the reward function $r_t(s_t, a_t)$ is designed such that the cumulative reward coincides exactly with the objective function that we aim at maximizing, i.e., $\sum_{t=1}^{N} r_t(s_t, a_t) = -E(\hat{y})$, where $\hat{y}$ is extracted from
Table 1: Illustration of the energy reward computation following the two proposed reward schemes on a fully connected graph with 3 nodes (columns: $t$, $i_t$, $E_t$, $r_t = -(E_t - E_{t-1})$, $r_t = \pm 1$, Graph).
0 | − | 0 | − | − | (nodes 1, 2, 3)
1 | 1 | $f_1(y_1)$ | $-f_1(y_1)$ | $-1 + 2 \cdot 1\{(E_t(y_1)$