diff --git "a/abs_29K_G/test_abstract_long_2405.03962v1.json" "b/abs_29K_G/test_abstract_long_2405.03962v1.json" new file mode 100644--- /dev/null +++ "b/abs_29K_G/test_abstract_long_2405.03962v1.json" @@ -0,0 +1,655 @@ +{ + "url": "http://arxiv.org/abs/2405.03962v1", + "title": "AdsorbDiff: Adsorbate Placement via Conditional Denoising Diffusion", + "abstract": "Determining the optimal configuration of adsorbates on a slab (adslab) is\npivotal in the exploration of novel catalysts across diverse applications.\nTraditionally, the quest for the lowest energy adslab configuration involves\nplacing the adsorbate onto the slab followed by an optimization process. Prior\nmethodologies have relied on heuristics, problem-specific intuitions, or\nbrute-force approaches to guide adsorbate placement. In this work, we propose a\nnovel framework for adsorbate placement using denoising diffusion. The model is\ndesigned to predict the optimal adsorbate site and orientation corresponding to\nthe lowest energy configuration. Further, we have an end-to-end evaluation\nframework where diffusion-predicted adslab configuration is optimized with a\npretrained machine learning force field and finally evaluated with Density\nFunctional Theory (DFT). Our findings demonstrate an acceleration of up to 5x\nor 3.5x improvement in accuracy compared to the previous best approach. Given\nthe novelty of this framework and application, we provide insights into the\nimpact of pre-training, model architectures, and conduct extensive experiments\nto underscore the significance of this approach.", + "authors": "Adeesh Kolluru, John R Kitchin", + "published": "2024-05-07", + "updated": "2024-05-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "physics.chem-ph" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Determining the optimal configuration of adsorbates on a slab (adslab) is\npivotal in the exploration of novel catalysts across diverse applications.\nTraditionally, the quest for the lowest energy adslab configuration involves\nplacing the adsorbate onto the slab followed by an optimization process. Prior\nmethodologies have relied on heuristics, problem-specific intuitions, or\nbrute-force approaches to guide adsorbate placement. In this work, we propose a\nnovel framework for adsorbate placement using denoising diffusion. The model is\ndesigned to predict the optimal adsorbate site and orientation corresponding to\nthe lowest energy configuration. Further, we have an end-to-end evaluation\nframework where diffusion-predicted adslab configuration is optimized with a\npretrained machine learning force field and finally evaluated with Density\nFunctional Theory (DFT). Our findings demonstrate an acceleration of up to 5x\nor 3.5x improvement in accuracy compared to the previous best approach. Given\nthe novelty of this framework and application, we provide insights into the\nimpact of pre-training, model architectures, and conduct extensive experiments\nto underscore the significance of this approach.", + "main_content": "Introduction Heterogenous catalysis plays an important role in developing chemicals in industries, environmental protection through converters, and the synthesis of alternative fuels (Liu & Li, 2017; Zitnick et al., 2020). Modeling these chemical reactions involve an intermediate adsorbate on a catalyst slab which determines the efficacy of the catalyst for that particular reaction. 
Discovering a novel catalyst computationally involves screening through billions of candidates and finding the lowest energy configuration. Finding the lowest energy configuration for an adsorbate and slab requires a global (non-convex) optimization across different sites on the slab. Conventional approaches solve this in two steps: (1) heuristically place the adsorbate on certain important sites, and (2) perform optimization with quantum mechanical calculators like Density Functional Theory (DFT) at each of these sites. The lowest energy site out of these is used for calculating the adsorption energy, a thermodynamic descriptor of how good that catalyst is.

With recent advances in machine learning methods for predicting forces, it has become possible to perform optimization with ML force fields (MLFFs) instead of DFT, making this process faster and making it easier to test many sites and find better minima. These ML force fields are trained on DFT data to predict energies and forces corresponding to different adslab configurations.

The recent release of the OC20-Dense dataset (Lan et al., 2023) marks a significant advancement in the computation of the lowest energy adslab configuration. That work employs a blend of heuristic and random adsorbate placements across 100 sites, with subsequent optimization of each site using DFT to calculate the adsorption energy. The study further introduces AdsorbML, a paradigm characterized by a brute-force exploration of initial adsorbate placements. Employing ML force fields pretrained on OC20, AdsorbML streamlines the optimization process, culminating in the determination of the lowest energy adsorbate-slab (adslab) configuration. The predictive accuracy of these configurations is rigorously validated against DFT single-points or complete DFT optimizations. This hybrid approach yields a 2000-fold computational acceleration in adsorption energy calculations compared to relying solely on DFT.

Recent developments in graph neural network (GNN) based ML architectures have significantly increased the accuracy of adsorption energy prediction by encoding the geometric information of atoms more explicitly. However, there is little to no work on improving adsorption site prediction itself, which could eliminate the currently used brute-force approach.

In this work, we develop a novel conditional denoising diffusion framework for adsorbate placement. We first formulate a diffusion framework over the space of 2D translations and 3D rigid rotations of an adsorbate molecule over the slab, considering the periodic boundary conditions (PBC) of the slab. Through the learned diffusion process, we sample the most stable site by iteratively updating the adsorbate's center of mass and rigid orientation.

Performing naive unconditional diffusion on only the most optimal adsorbate site and orientation (the one corresponding to the lowest energy adslab configuration out of the 100 densely sampled calculations in OC20-Dense) would throw away 99% of the DFT optimization data. Therefore, we modify the diffusion training to be conditional on relative energies (relative across the densely sampled sites of an adslab combination). This leads to significant improvements in accuracy and sample efficiency during diffusion training.
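The following is a minimal sketch of what one such conditional denoising training step could look like for the translation component; the log-linear noise schedule, the tensor shapes, and the score_model interface are our illustrative assumptions, not the released implementation:

import torch

def training_step(score_model, adslab, x_opt, e_rel, sigma_min=0.1, sigma_max=10.0):
    # One denoising score-matching step (Song & Ermon, 2019) for the 2D
    # translation, conditioned on the relative energy e_rel of this placement.
    t = torch.rand(())                                # random diffusion time in [0, 1]
    sigma = sigma_min * (sigma_max / sigma_min) ** t  # assumed log-linear noise schedule
    noise = torch.randn(2) * sigma                    # perturb the xy center of mass
    x_noisy = x_opt + noise                           # PBC wrapping omitted for brevity
    target = -noise / sigma**2                        # score of the Gaussian kernel
    pred = score_model(adslab, x_noisy, e_rel, sigma) # energy-conditioned prediction
    return ((pred - target) ** 2).sum() * sigma**2    # standard sigma^2-weighted loss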
After sampling the optimal site and orientation of the adsorbate on the slab, we perform ML force field (MLFF) optimization and DFT single-point verification, similar to AdsorbML. This comprehensive end-to-end evaluation enables a robust assessment of the practical impact of the learned diffusion model.

There have been significant advances in diffusion generative models for molecular and material discovery, and for the analogous problem of molecular docking on proteins. However, this is the first work to frame the adsorbate placement problem, with all its symmetries with respect to the slab, in a diffusion framework. Intuitively, the reverse diffusion process of AdsorbDiff helps skip multiple minima sites through its energy-conditioned sampling, after which a local optimization with a DFT-trained MLFF finds the global optimum.

To facilitate further research on this problem, we provide comprehensive results on the importance of GNN architectures for the diffusion task, show the importance of pretraining, and demonstrate the success of our approach on in-distribution (ID) and out-of-distribution (OOD) splits. The contributions of this work are:

- We propose AdsorbDiff, a novel conditional denoising diffusion framework designed to leverage the translation, rotation, and periodic symmetries inherent in adsorbate-slab interactions. The framework efficiently predicts the lowest energy site through conditional training on relative energies.
- We present our results in a comprehensive end-to-end evaluation framework, integrated with DFT, to accurately gauge the true capability of our approach in predicting optimal adsorption energies.
- We achieve a 31.8% success rate with a single site prediction, 3.5x higher than the naive AdsorbML baseline of 9.1%. Equivalently, AdsorbML needs 5x more placements to reach comparable accuracy.
- We demonstrate that pretraining on large-scale local optimization data can significantly improve results on the search for global optima.
- We show that diffusion results exhibit insignificant dependence on GNN architectures, in contrast to the notable differences observed for the same architectures when trained on DFT forces.
- We highlight the model's generalization to previously unseen adsorbates and slabs.

2. Background and Related Work

Force fields: Energy and forces (the negative gradient of energy with respect to atomic positions) are calculated using ab initio quantum mechanical methods like Density Functional Theory (DFT). ML models trained to predict these energies and forces are called ML force fields (MLFFs). These force fields can be used to perform structure optimization to obtain the lowest energy structures.

Optimization: For adsorption energy prediction, we start with an optimized adsorbate and slab, place the adsorbate on the slab, and perform optimization to get the adslab configuration with the lowest energy. Usually, second-order optimizers like BFGS, L-BFGS, or conjugate gradient descent are used to solve this optimization problem. Since the problem is non-convex, the initial guess of the adsorbate placement, and the optimization strategy, are critical to finding an adslab configuration corresponding to the global optimum.
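As a concrete illustration of this local-relaxation step, the following ASE sketch relaxes a toy adslab with a pretrained MLFF; PretrainedMLFF is a placeholder for any ASE calculator wrapping an OC20-trained force field, and the structure and settings are illustrative:

from ase.build import fcc111, add_adsorbate
from ase.optimize import BFGS

# Build a toy adslab: an O atom placed on an fcc hollow site of a Cu(111) slab.
slab = fcc111("Cu", size=(3, 3, 4), vacuum=10.0)
add_adsorbate(slab, "O", height=2.0, position="fcc")

# PretrainedMLFF is a hypothetical stand-in for an MLFF calculator trained on OC20.
slab.calc = PretrainedMLFF()

opt = BFGS(slab, trajectory="relax.traj")  # quasi-Newton local optimizer
opt.run(fmax=0.01, steps=300)              # stop when the max force < 0.01 eV/A

energy = slab.get_potential_energy()       # MLFF energy of the relaxed adslab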
The AdsorbML method (Lan et al., 2023) combines heuristic and random initial placements, a brute-force approach to finding better minima. "Easy Potential" (Schaarschmidt et al., 2022) trains a simple harmonic potential to guess the initial placement. Learn2Hop (Merchant et al., 2021) likewise learns the optimization landscape in order to navigate it and hop through local minima. Approaches like minima hopping navigate the entire optimization landscape with a force field (Jung et al., 2023) and can find better minima, but they can be computationally expensive.

GNNs: Message-passing neural networks (MPNNs) are a class of graph neural networks (GNNs) used across material property prediction tasks. Different architectures encode geometric information in different ways. SchNet (Schütt et al., 2018) encodes only distance information. Including more explicit geometric features has improved model predictions: DimeNet (Gasteiger et al., 2020b;a) incorporates triplets, while SphereNet (Liu et al., 2021) and GemNet (Gasteiger et al., 2021; 2022) incorporate complete geometric information explicitly via triplet and quadruplet terms. PaiNN (Schütt et al., 2021) incorporates directional information and applies only linear operations to those features. Equivariant models like NequIP (Batzner et al., 2022), Allegro (Musaelian et al., 2023), MACE (Batatia et al., 2022), SCN (Zitnick et al., 2022), and Equiformer (Liao & Smidt, 2022; Liao et al., 2023) utilize spherical harmonics to represent geometric features.

Diffusion models: Diffusion models are a class of generative models that have shown impressive results across domains, from computer vision (Dhariwal & Nichol, 2021; Croitoru et al., 2023) and language models (Gong et al., 2022) to temporal data modeling, and on to applications in molecules (Xu et al., 2022; 2023; Arts et al., 2023; Hoogeboom et al., 2022; Jing et al., 2022), proteins (Wu et al., 2022; Trippe et al., 2022; Watson et al., 2022; 2023), and materials (Xie et al., 2021; Fu et al., 2023; Zeni et al., 2023; Merchant et al., 2023; Yang et al., 2023b). Different formulations have been proposed, such as denoising diffusion probabilistic models (DDPMs), score-based generative models (SGMs), and stochastic differential equations (Score SDEs) (Yang et al., 2023a). Many of these formulations have been adapted to problems in molecular and material discovery; for example, CDVAE (Xie et al., 2021) adapts concepts from noise-conditioned score networks (NCSNs) for bulk discovery. Conditional diffusion has also recently been utilized for proteins (Krishna et al., 2024) and for catalysts and materials (Zheng et al., 2023) to generate structures with desired properties. Diffusion models have also recently been applied to molecular docking on proteins (Corso et al., 2022).
Although that problem is somewhat analogous to placing an adsorbate on a slab, to our knowledge there has been no previous work formulating adsorbate placement in a diffusion framework. AdsorbDiff also differs from molecular docking in several key aspects: the 2D translation formulation, periodic boundary conditions, the conditional denoising formulation, and the requirement of DFT-level accuracy (as opposed to the simpler force fields used for proteins), which makes our end-to-end evaluation with DFT critical.

3. AdsorbDiff

3.1. Overview

The objective of this work is to improve the efficiency of calculating the adsorption energy, i.e., the lowest energy configuration of an adsorbate on a slab. The method begins with the placement of an adsorbate at a random site on the 2D surface of the slab, followed by reverse diffusion to predict the optimal adsorption site and orientation. Using machine learning force field optimization, the structure then undergoes iterative updates with an optimizer until the forces converge close to zero. Subsequently, the final structure is checked for compliance with the constraints essential for defining adsorption energy. On the optimized structure, a single Density Functional Theory (DFT) calculation is conducted to obtain the predicted energy (E_Pred). A successful outcome is one where the predicted energy is within 0.1 eV of, or lower than, the DFT benchmark adsorption energy in the OC20-Dense data, indicating that the model provides a comparable or superior estimate of the adsorption energy (shown in Figure 1).

Figure 1. Overview of AdsorbDiff: A random initial site and orientation for the adsorbate are selected, followed by sampling over 2D translations and 3D rigid rotations, considering periodic boundary conditions (PBC), to predict the optimal site and orientation. MLFF optimization is then conducted from the predicted site with a fixed interstitial gap until convergence. The final prediction undergoes constraint verification, and DFT verification is performed on valid structures to calculate success rates.

The code is open-sourced with an MIT License (https://github.com/AdeeshKolluru/AdsorbDiff).

3.2. Adsorbate placement

Various adsorbate placement strategies were explored for the OC20-Dense dataset, combining heuristic and random approaches: 100 sites were selected for each adslab configuration using a blend of the two. The heuristic placement strategically situates the adsorbate's binding site on an on-top, hollow, or bridge site, with a specified interstitial gap denoting the distance between the connecting atom of the slab and the corresponding adsorbate atom. Additional random sites are introduced through random rotation of the adsorbate about the slab normal, accompanied by a slight translational wobble along the surface away from the heuristic site.

3.3. Diffusion for adsorbate placement

In this work, our objective is to develop a diffusion model that predicts the adsorbate site and orientation corresponding to the lowest energy, as established through benchmarking against the OC20-Dense dataset. The adsorbate motion is constrained within a manifold (Mc) and utilizes the combined action group (A), as described in DiffDock (Corso et al., 2022). This manifold permits the adsorbate to navigate towards low-energy adslab configurations through a combination of translations, rotations, and torsion-angle adjustments. Note that, for fair comparison with our baselines, torsion-angle alterations are disregarded in our analysis owing to the small adsorbates employed in this study; this aligns with AdsorbML, which does not introduce randomness in torsion angles as part of its benchmark.

In our framework, we specifically consider translations in the 2D plane parallel to the slab while accounting for periodic boundary conditions (PBC). The z-coordinate is aligned with the normal direction of the slab, and the diffusion process acts on the xy-coordinates. The adsorbate movements are therefore associated with the 2D translation group T(2), and rigid rotations are modeled with the SO(3) group. The translation operation, A_tr : T(2) × R^{2n} → R^{2n}, is defined as A_tr(r, x)_i = x_i + r, employing the isomorphism T(2) ≅ R^2, where x_i ∈ R^2 is the position of the i-th adsorbate atom. Similarly, the rotation operation, A_rot : SO(3) × R^{3n} → R^{3n}, is defined by A_rot(R, x)_i = R(x_i − x̄) + x̄, where x̄ = (1/n) Σ_i x_i, signifying rotations around the center of mass of the adsorbate.
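A minimal NumPy sketch of these two group actions (assuming a 3×3 cell matrix whose third lattice vector is along the slab normal; the function names are ours, not from the released code):

import numpy as np
from scipy.spatial.transform import Rotation

def translate_pbc(pos, r, cell):
    # A_tr: shift all adsorbate atoms (pos, shape [n, 3]) by r in the xy-plane,
    # then wrap back into the periodic cell via fractional coordinates.
    pos = pos + np.array([r[0], r[1], 0.0])
    frac = pos @ np.linalg.inv(cell)
    frac[:, :2] %= 1.0                      # wrap only in the surface plane
    return frac @ cell

def rotate_about_com(pos, omega):
    # A_rot: rigid rotation about the adsorbate center of mass,
    # with omega an axis-angle (Euler) vector in so(3).
    com = pos.mean(axis=0)                  # x_bar = (1/n) * sum_i x_i
    R = Rotation.from_rotvec(omega).as_matrix()
    return (pos - com) @ R.T + com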
For the initial coordinates of the adsorbate, we select a random point on the slab and take it as the center of mass of the adsorbate in fractional coordinates. We then convert from fractional to real coordinates and run the reverse diffusion process to reach the lowest energy site (as shown in Algorithm 1).

De Bortoli et al. (2022) and Corso et al. (2022) have demonstrated the applicability of the diffusion framework to Riemannian manifolds. In this setting, the score model lives in the tangent space, and a geodesic random walk serves as the reverse stochastic differential equation (SDE) solver. The score model is trained using denoising score matching (Song & Ermon, 2019), wherein a score function s_θ(x) is learned to approximate the gradient of the log probability density, ∇_x log p(x), at varying noise levels (as shown in Algorithm 2). The learned scores for translations and rotations are treated as independent, assuming the tangent space is a direct sum of the individual tangent spaces, with contributions from torsion neglected. The forward SDE for both translation and rotation is defined as

dx = \sqrt{\frac{d\sigma^2(t)}{dt}} \, dw,

where w is the corresponding Wiener process. In the translational case on T(2), the model learns a score for a standard Gaussian distribution with variance σ²(t). For rotations in SO(3), the diffusion kernel is governed by the IGSO(3) distribution, which can be sampled in the axis-angle parameterization by sampling a unit vector ω̂ ∈ so(3) uniformly and a random angle ω from the interval [0, π], as given by Equations 1 and 2. The score of the diffusion kernel is given in Equation 3. The computation of R′ = R(ωω̂)R, where R(ωω̂) is the rotation corresponding to the Euler vector ωω̂, follows prior work by Yim et al. (2023). To carry out score computation and sampling efficiently, the truncated infinite series can be precomputed and the cumulative distribution function (CDF) of p(ω) interpolated.

p(\omega) = \frac{1 - \cos\omega}{\pi} f(\omega)    (1)

f(\omega) = \sum_{l=0}^{\infty} (2l + 1) \exp\!\left(-\frac{l(l+1)\sigma^2}{2}\right) \frac{\sin\!\left(\left(l + \tfrac{1}{2}\right)\omega\right)}{\sin\!\left(\tfrac{\omega}{2}\right)}    (2)

\nabla \ln p_t(R'|R) = \left(\frac{d}{d\omega} \log f(\omega)\right) \hat{\omega}    (3)
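A small NumPy sketch of Equations 1-3, with the infinite series truncated, the score obtained by numerical differentiation, and sampling done by interpolating the precomputed CDF (the truncation level, grid, and finite-difference step are illustrative choices):

import numpy as np

def igso3_f(omega, sigma, l_max=200):
    # Truncated series f(omega) from Eq. (2).
    omega = np.asarray(omega, dtype=float)
    l = np.arange(l_max + 1)
    coeff = (2 * l + 1) * np.exp(-l * (l + 1) * sigma**2 / 2)
    ratio = np.sin((l + 0.5) * omega[..., None]) / np.sin(omega / 2)[..., None]
    return (coeff * ratio).sum(axis=-1)

def igso3_pdf(omega, sigma):
    # Angle marginal p(omega) from Eq. (1).
    return (1 - np.cos(omega)) / np.pi * igso3_f(omega, sigma)

def igso3_score_mag(omega, sigma, eps=1e-5):
    # (d/domega) log f(omega), the magnitude of the score in Eq. (3).
    return (np.log(igso3_f(omega + eps, sigma)) -
            np.log(igso3_f(omega - eps, sigma))) / (2 * eps)

# Sample an axis-angle rotation from IGSO(3) by inverting the CDF of p(omega):
sigma = 0.5
grid = np.linspace(1e-3, np.pi - 1e-3, 2048)  # avoid sin(omega/2) = 0 at omega = 0
cdf = np.cumsum(igso3_pdf(grid, sigma))
cdf /= cdf[-1]
omega_sample = np.interp(np.random.rand(), cdf, grid)
axis = np.random.randn(3)
axis /= np.linalg.norm(axis)                  # uniform unit vector omega_hat
rotvec = omega_sample * axis                  # Euler vector for R(omega * omega_hat)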
3.4. Conditional denoising diffusion for adsorbate placement

While the OC Challenge set provides densely calculated adsorption energies for 244 systems, a total of 244 × 100 DFT optimizations were performed to benchmark them: 100 different random placements for each system. The naive denoising diffusion setup above would be trained on only the 244 lowest energy configurations. To leverage the entirety of the DFT optimization data, we instead employ a conditional diffusion model, in which each optimized position is conditioned on its relative energy, i.e., its energy relative to that of the lowest energy configuration (E^c_{rel-i} = E^c_{min} − E^c_i). This allows a far more comprehensive utilization of the available DFT optimization data.

3.5. Graph Neural Network (GNN) architecture

The inputs to the ML model are the 3D positions of all atoms in the adslab configuration and their atomic numbers. The outputs are per-atom 3D vectors: forces in the case of force fields, and the score function in the case of diffusion. To predict multiple score functions (for translation and rotation), multiple output heads are trained, each predicting an independent score function.

All architectures used in this work fall under the message-passing neural network (MPNN) framework of graph neural networks (GNNs). MPNNs operate by passing messages between nodes in the graph, allowing information to be exchanged and aggregated iteratively. The key components of an MPNN are message passing, node-state updates, and a global readout. In the message-passing step, nodes exchange information based on their local context, and this information is then used to update the node states, as shown in Equation 4:

h_v^{(t+1)} = \mathrm{Update}\left(h_v^{(t)}, \mathrm{Aggregate}\left(\{m_{u \to v}^{(t)} \mid u \in \mathcal{N}(v)\}\right)\right)    (4)

Here, h_v^{(t)} is the embedding of node v at iteration t, m_{u→v}^{(t)} is the message from node u to node v at iteration t, N(v) is the neighborhood of node v, and Update and Aggregate are differentiable functions for updating node states and aggregating messages, respectively.

We systematically investigate diverse architectures for training diffusion models to discern the significance of architectural decisions in this context. Specifically, we assess the performance of PaiNN, GemNet-OC, and EquiformerV2, each distinguished by its treatment of explicit geometric information and rotational symmetries (Duval et al., 2023). We also benchmark these architectures on OC20 force-field evaluation, facilitating a comparative analysis of architectural significance for force fields versus diffusion.
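For concreteness, the following is a minimal PyTorch instance of Equation 4 with a sum aggregator and a GRU update; these are our illustrative choices, not the layers used by the paper's backbones (PaiNN, GemNet-OC, EquiformerV2):

import torch
import torch.nn as nn

class MPNNLayer(nn.Module):
    # One message-passing iteration of Eq. (4) with sum aggregation.
    def __init__(self, dim):
        super().__init__()
        self.message = nn.Linear(2 * dim, dim)  # m_{u->v} from the two endpoint states
        self.update = nn.GRUCell(dim, dim)      # Update(h_v, aggregated messages)

    def forward(self, h, edge_index):
        src, dst = edge_index                   # directed edges u -> v
        m = self.message(torch.cat([h[src], h[dst]], dim=-1))
        agg = torch.zeros_like(h).index_add_(0, dst, m)  # Aggregate over N(v)
        return self.update(agg, h)

# Per-atom 3D outputs (force or score heads) are then simple maps on the final
# node embeddings, one head per score function, e.g. nn.Linear(dim, 3).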
4. Results

In this section, we present results demonstrating the impact of AdsorbDiff in accelerating the search for adsorption energies, i.e., better global optima. Specifically, we demonstrate the impact of conditional denoising training over unconditional training and over a randomly placed adsorbate baseline; this random baseline is equivalent to performing AdsorbML on a single site (Nsite=1). Additionally, we demonstrate the impact of pretraining and of model architectures, and the generalization of this approach to new adsorbates and slabs.

4.1. Datasets

We utilize two publicly available datasets in this work: OC20-Dense (Lan et al., 2023) and OC20 (Chanussot et al., 2021).

OC20: Open Catalyst 2020 (OC20) is a large-scale dataset that contains converged DFT optimization trajectories of 460k unique adslab configurations, encompassing 55 unique elements and 74 adsorbates. Note that these are local optimizations performed from a single heuristic placement. ML force field models are trained on the forces derived from these DFT trajectories, and the optimized structures from OC20 are additionally used for pretraining the diffusion model.

OC20-Dense: The OC20-Dense dataset serves as a DFT benchmark for adsorption energies, employing dense placement on 100 random sites per adslab configuration, followed by DFT optimization. The dataset provides both in-distribution (ID) and out-of-distribution (OOD) splits relative to OC20: the ID data uses adsorbates and slabs from OC20's training set, but in different combinations and configurations, while OOD introduces new adsorbates and/or slabs not found in the OC20 training set. A subset of OC20-Dense ID and OOD was used in the Open Catalyst Challenge 2023, hosted at the AI for Science Workshop during NeurIPS 2023 (https://opencatalystproject.org/challenge.html). We split the ID data in an 80/20 ratio for training the diffusion model and validating the sampling process; these smaller subsets make it computationally cheaper to perform end-to-end iterations.

4.2. Metric and constraints

Our success metric is defined by the final energy calculated through DFT. For real-world applications, this total energy (E^DFT_Total) is used to calculate the adsorption energy as E^DFT_Adsorption = E^DFT_Total − E^DFT_Slab − E^DFT_Adsorbate, where E^DFT_Slab and E^DFT_Adsorbate are the independent energies of the slab and the adsorbate, respectively. This adsorption energy acts as a thermodynamic descriptor of how good a catalyst is for a downstream application.

The DFT Success Rate (SR) is the percentage of valid structures within 0.1 eV of, or lower than, the DFT-computed adsorption energy benchmark in the OC20-Dense data (as described in AdsorbML). This metric is computationally expensive to calculate but accurate; metrics calculated from ML predictions are inexpensive but inaccurate, as discussed further in Appendix C.

Since we calculate adsorption energies, the adsorbate and slab must not change during optimization. A structure is therefore considered an anomaly in the case of (1) adsorbate desorption, where the adsorbate moves far away from the slab; (2) adsorbate dissociation, where the adsorbate breaks apart into multiple adsorbates; (3) slab mismatch/reconstruction, where the slab reconstructs into a completely different structure during optimization; or (4) adsorbate intercalation, where any adsorbate atom detaches and gets into the slab.

Experimental setup: All presented results are based on the DFT success rate metric defined above. Throughout the diffusion process we employ the EquiformerV2 architecture, unless explicitly stated otherwise, owing to its state-of-the-art performance in AdsorbML. For MLFF optimization we use GemNet-OC pretrained on OC20, chosen for its lower inference cost. Further model and training hyperparameters are given in Appendix D. All results are reported on the val ID split, apart from the OOD section.
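In code, the adsorption energy and success criterion defined above reduce to a few lines (a sketch; all energies in eV):

def adsorption_energy(e_total, e_slab, e_adsorbate):
    # E_ads = E_total - E_slab - E_adsorbate, all computed with DFT
    return e_total - e_slab - e_adsorbate

def dft_success(e_pred, e_benchmark, tol=0.1):
    # Success: a valid structure within 0.1 eV of, or below, the
    # OC20-Dense DFT benchmark adsorption energy.
    return e_pred <= e_benchmark + tol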
4.3. Conditional vs unconditional diffusion

Figure 2. Comparison of conditional and unconditional diffusion against a random-placement baseline (Nsite=1): DFT success rates of 9.1% (Random), 11.4% (Unconditional), and 31.8% (Conditional). Conditional diffusion training on relative energies of adslab configurations significantly improves success rates over unconditional training and the AdsorbML baseline.

We demonstrate the importance of conditional training on relative energies (Section 3.4) over unconditional diffusion training in Figure 2. We compare both approaches to a naive baseline of AdsorbML with a single site (Nsite=1), where MLFF optimization is performed on a random adsorbate placement. Notably, the performance of unconditional training is suboptimal, which may be ascribed to the unexploited potential of the additional data made available through conditional training.

4.4. AdsorbDiff vs AdsorbML

AdsorbML conducts MLFF optimization and DFT evaluations on adsorption sites randomly placed within the system. We compare this with AdsorbDiff, where the adsorption sites are predicted with the diffusion model. As depicted in Figure 3, AdsorbDiff exhibits notably superior performance at lower Nsites. However, as the number of sites (Nsites) increases, AdsorbDiff tends to converge to, or underperform, the brute-force approach of AdsorbML. Adsorbate sites sampled from AdsorbDiff have less diversity by design, since the model is trained to predict the global optimum: averaging the standard deviation of the points sampled at Nsites=10 gives 8.1 Å for AdsorbML versus 2.7 Å for AdsorbDiff. AdsorbML's brute-force placements have more randomness, which leads to fewer anomalies after the MLFF optimization process, as shown in Figure 4.

Figure 3. DFT Success Rates (%) for AdsorbDiff and AdsorbML across a varying number of site predictions. AdsorbDiff performs 3.5x better than AdsorbML when utilizing a single site prediction; at higher numbers of sites, AdsorbML performs better due to the brute-force nature of its site selection, which reduces anomalies.

Figure 4. Anomalies in AdsorbDiff and AdsorbML with respect to Nsites. A system is labeled anomalous if all of its predicted sites result in anomalies. AdsorbML has fewer anomalies than AdsorbDiff at higher Nsites due to more randomness in the initial sites.

4.5. Impact of pretraining

Conditional diffusion benefits from training on a dataset 100 times more extensive than the unconditional approach, a consequence of leveraging multiple local optima within each unique adslab configuration. This substantial increase in training data manifests as a notable improvement in the success rate of the conditional approach. The OC20 IS2RE dataset, containing optimization data for 460,000 distinct adslab combinations, serves as a valuable resource for pretraining the diffusion model.
It is important to acknowledge that this pretraining yields a model that learns the local optima of adslab combinations, with the caveat that it may not capture the global optimum for a given adslab combination.

Figure 5. Impact of pretraining on 460k OC20 local-optima data on DFT Success Rate (Nsite=1): 9.1% (Random), 29.6% (PT Zero-shot), and 31.8% (PT Conditional). PT Zero-shot measures the zero-shot generalization of the OC20-pretrained model to OC20-Dense data; PT Conditional is finetuned on OC20-Dense data, conditioned on the relative energies of adslab configurations. The Random baseline corresponds to a randomly placed adsorbate.

IS2RS Pretraining (PT) Zero-shot: Taking advantage of the diffusion model pretrained on OC20 IS2RE data, we conduct a zero-shot validation on the OC20-Dense ID val split. This setup assesses the model's ability to predict better global optima after training on a large dataset of local optima. Notably, we observe a substantial increase in DFT success rate in the zero-shot setting (Figure 5).

IS2RS Pretraining (PT) Conditional: Here we finetune the pretrained model on the OC20-Dense data as described in Section 3.4. Although this gives a 2% improvement over zero-shot, it converges to the same result as training conditionally on OC20-Dense alone (Figure 5).

4.6. Impact of architectures

Architectures with richer geometric information and extensive many-body interactions, such as eSCN and EquiformerV2, have demonstrated superior performance in force evaluations on OC20 compared to simpler models like PaiNN, which primarily encodes directional information and applies linear transformations. Our benchmark evaluates three architectures that exhibit progressively better OC20 Force MAE, with significant differences among them. This evaluation is conducted in the zero-shot setting after pretraining (PT Zero-shot) on the 460,000 OC20 instances, a choice inspired by insights from the GemNet-OC paper (Gasteiger et al., 2022) suggesting that certain architectural choices manifest optimal performance only at larger data scales.

Figure 6. Impact of GNN architectures on the diffusion process for DFT Success Rate, keeping the rest of the framework fixed: 27.3% (PaiNN), 27.3% (GemNet-OC), and 29.6% (EquiformerV2). Different architectures perform similarly on the diffusion sampling task.

Interestingly, for the diffusion task we find that the disparity in success rates among these architectures is marginal (Figure 6), as has also recently been observed in molecular generation tasks (Wang et al., 2023). The intuition behind this result is that the diffusion model's score function can be thought of as learning a harmonic potential (Xie et al., 2021). Harmonic potentials are far simpler force fields than the ab initio DFT calculations underlying OC20 forces, which could explain why simpler architectures are able to capture the underlying complexity of the diffusion task defined in our work.
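To make that intuition concrete (our illustration, not a derivation from the paper): for a harmonic potential centered at the optimal placement x_0,

E(x) = \tfrac{1}{2} k \lVert x - x_0 \rVert^2, \qquad p(x) \propto e^{-E(x)} \;\Rightarrow\; \nabla_x \log p(x) = -k (x - x_0),

i.e., the score is a linear restoring force toward x_0, a much simpler regression target than DFT-quality forces, consistent with the architecture mattering little here.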
4.7. OOD generalization

We measure the success of AdsorbDiff in out-of-distribution (OOD) cases, where the model has not seen the adsorbate or the slab even during pretraining on OC20. We pick 50 random samples from the 200-system validation OOD split defined in the Open Catalyst Challenge 2023. We observe a marginal decrease of only 3.8% relative to the ID case, and we consistently observe a significant improvement over the AdsorbML (Nsite=1) baseline.

Figure 7. DFT Success Rate on the Out-of-Distribution (OOD) split: 8.4% (Random) versus 28% (AdsorbDiff). The Random baseline corresponds to a randomly placed adsorbate.

4.8. Inference cost

In the case of conditional diffusion, our approach caps sampling at 100 steps, with adsorbate placement converging, on average, within 98 steps. In contrast, MLFF optimization with a maximum of 300 steps and an Fmax criterion of 0.01 eV/Å (consistent with AdsorbML) converges in approximately 286 steps. Consequently, for a single adsorption site (Nsite=1), AdsorbDiff incurs approximately 34% more inference cost than AdsorbML, given the same GNN architecture for diffusion and MLFF optimization. This end-to-end ML framework is O(10^4) times faster than conventional DFT pipelines (Lan et al., 2023). In Section 4.6 we showed that simpler and faster models such as PaiNN yield performance comparable to more intricate and slower models like EquiformerV2. This further improves the efficiency of the diffusion-based approach, as its computational cost becomes negligible compared to MLFF optimization, which requires more computationally intensive ML architectures (details in Appendix B). 5.",
+    "additional_graph_info": {
+        "graph": [
+            ["Adeesh Kolluru", "Muhammed Shuaibi"],
+            ["Adeesh Kolluru", "Nima Shoghi"],
+            ["Adeesh Kolluru", "Abhishek Das"],
+            ["Muhammed Shuaibi", "Abhishek Das"],
+            ["Muhammed Shuaibi", "Aditya Grover"],
+            ["Muhammed Shuaibi", "Anuroop Sriram"],
+            ["Nima Shoghi", "C. Lawrence Zitnick"],
+            ["Abhishek Das", "Devi Parikh"],
+            ["Abhishek Das", "Dhruv Batra"]
+        ]
+    }
+}
Given\nthe novelty of this framework and application, we provide insights into the\nimpact of pre-training, model architectures, and conduct extensive experiments\nto underscore the significance of this approach.", + "authors": "Adeesh Kolluru, John R Kitchin", + "published": "2024-05-07", + "updated": "2024-05-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "physics.chem-ph" + ], + "main_content": "Introduction Heterogenous catalysis plays an important role in developing chemicals in industries, environmental protection through converters, and the synthesis of alternative fuels (Liu & Li, 2017; Zitnick et al., 2020). Modeling these chemical reactions involve an intermediate adsorbate on a catalyst slab which determines the efficacy of the catalyst for that particular reaction. Discovering a novel catalyst computationally involves screening through billions of candidates and finding the lowest energy configuration. 1Department of Chemical Engineering, Carnegie Mellon University. Correspondence to: Adeesh Kolluru , John R. Kitchin . Finding the lowest energy configuration for an adsorbate and slab requires a global optimum (which is non-convex) search across different sites on the slab. Conventional approaches solve this in two steps (1) heuristically place the adsorbate on certain important sites and (2) perform optimization with quantum mechanical calculators like Density Functional Theory (DFT) on each of these sites. The lowest energy site out of these is considered for calculating adsorption energy, which is a thermodynamic descriptor for how good that catalyst is. With recent advances in machine learning methods for predicting forces, it has become possible to perform optimization with ML force fields (MLFFs) instead of Density Functional Theory (DFT) making this process faster and easier to test many sites and find better minima. These ML force fields are trained on DFT data to predict energies and forces corresponding to different adslab configurations. The recent release of the OC20-Dense dataset (Lan et al., 2023) signifies a significant advancement in the computation of the lowest energy adslab configuration. This work employs a blend of heuristic and random adsorbate placements across 100 sites, with subsequent optimizations across each site using Density Functional Theory (DFT) to calculate adsorption energy. The study further introduces AdsorbML, a paradigm characterized by a brute-force exploration of initial adsorbate placements. Employing pre-trained machine learning (ML) force fields from OC20, AdsorbML streamlines the optimization process, culminating in the determination of the lowest energy adsorbate-slab (adslab) configuration. The predictive accuracy of these configurations is rigorously validated against DFT single-points or complete DFT optimization. This hybrid approach results in a computational acceleration of 2000-fold in adsorption energy calculations compared to the sole reliance on DFT calculations. Recent developments in graph neural network (GNN) based ML architectures have increased the accuracies of adsorption energy prediction significantly by encoding geometric information of atoms in more explicit ways. However, there\u2019s little to no work done on improving the adsorption site prediction which could help us get away with the currently used brute-force approach. 
In this work, we develop a novel conditional denoising diffu1 arXiv:2405.03962v1 [cs.LG] 7 May 2024 \fAdsorbate placement via conditional denoising diffusion sion framework for adsorbate placement. We first formulate a diffusion framework over the space of the 2D translation and 3D rigid rotation of an adsorbate molecule over the slab considering periodic boundary conditions (PBC) of the slab. Through the learned diffusion process, we sample the most stable site by iteratively updating the center of mass of adsorbate and rigid orientation. Performing a naive unconditional diffusion framework on the most optimal adsorbate site and orientation \u2014 corresponding to the lowest energy adslab configuration out of 100 densely sampled calculations in OC20-Dense \u2014 leads to throwing away 99% of DFT optimal energy data. Therefore, we modify the diffusion training to be conditional on relative energies (relative across densely sampled sites of an adslab combination). This leads to significant improvements in accuracies and sample efficiency during diffusion training. After sampling for the optimal site and orientation of adsorbate on the slab, we perform ML force field (MLFF) optimization and DFT single-point verification similar to AdsorbML. This comprehensive end-to-end evaluation helps in robust assessment of the practical impact of the learned diffusion model. There have been significant advances in diffusion generative models in molecular and material discovery, and analogous problems in molecular docking on proteins. However, this is the first work to frame the adsorbate placement problem considering all its symmetries with the slab in a diffusion framework. Intuitively, the reverse diffusion process of AdsorbDiff helps in skipping multiple minima sites due to its energy-based conditional sampling which is followed by a local optimization with a DFT-learned MLFF to find a global optimum. To facilitate further research on this problem, we provide comprehensive results on the importance of GNN architectures for the diffusion task, show the importance of pretraining, and demonstrate the success of our approach to in-distribution (ID) and out-of-distribution (OOD) splits. The summary of contributions of this work are \u2022 We propose AdsorbDiff, a novel conditional denoising diffusion framework designed to leverage the translation, rotation, and periodic symmetries inherent in adsorbate and slab interactions. Additionally, this framework is adept at efficiently predicting the lowest energy site by conditional training on relative energies. \u2022 We present our results in a comprehensive end-to-end evaluation framework, integrated with DFT, to accurately gauge the true capability of our approach in predicting optimal adsorption energies. \u2022 We achieve a 31.8% success rate, 3.5x higher than the naive AdsorbML baseline of 9.1% with a single site prediction. Alternatively, we demonstrate that a comparable level of accuracy could be achieved by AdsorbML by employing 5x more placements. \u2022 We demonstrate that pretraining on large-scale local optimization data can significantly improve the results on the search for global optima. \u2022 We show that diffusion results exhibit insignificant dependence on GNN architectures, in contrast to the notable differences observed for the same architectures when trained on DFT forces. \u2022 We highlight the model\u2019s generalization capabilities to previously unseen adsorbates and slabs. 2. 
Background and Related Work Force-fields: Energy and forces (as a gradient of energy with respect to positions) are calculated using ab initio quantum mechanical methods like Density Functional Theory (DFT). ML models can be trained to predict these energies and forces, and are called ML force-fields (MLFFs). These force fields can be utilized to perform structure optimization to get the lowest energy structures. Optimization: For adsorption energy prediction, we start with an optimized adsorbate and slab, place the adsorbate on a slab, and perform optimization to get an adslab configuration with the lowest energy. Usually, second-order optimizers like BFGS, L-BFGS, Conjugate gradient descent, etc are used to solve this optimization problem. Since this is non-convex, the initial guess of adsorbate placement or the strategy of optimization is critical to finding an adslab configuration corresponding to the global optimum. AdsorbML (Lan et al., 2023) method starts with combining heuristic and random initial placements which is a brute-force approach to finding better minima. \u201dEasy Potential\u201d from (Schaarschmidt et al., 2022) trains a simple harmonic potential to guess this initial placement. Learn2Hop (Merchant et al., 2021) also learns the optimization landscape to navigate through better and hop through local minima. There are approaches like minima hopping that help in navigating through the entire optimization landscape with a force-field (Jung et al., 2023) and help in finding better minima, but these could be computationally expensive. GNNs: Message-Passing Neural Networks (MPNN) are a class of graph neural networks (GNN) that are utilized across material property prediction tasks. Different architectures encode the geometric information in different ways. SchNet (Sch\u00a8 utt et al., 2018) only encodes the distance information. Including more explicit geometric features have improved the model prediction as DimeNet (Gasteiger et al., 2020b;a) incorporates triplets. SphereNet (Liu et al., 2021), GemNet (Gasteiger et al., 2021; 2022) incorporates complete geometric information explicitly by giving triplets and quadruplets information. PaiNN (Sch\u00a8 utt et al., 2021) incorporates directional information and applies only linear operations on those features. Equivariant models like NequIP (Batzner et al., 2022), Allegro (Musaelian et al., 2023), MACE (Batatia et al., 2022), SCN (Zitnick et al., 2 \fAdsorbate placement via conditional denoising diffusion Figure 1. Overview of AdsorbDiff: Random initial site and orientation for the adsorbate are selected, followed by sampling over 2D translation, 3D rigid rotations, and considering periodic boundary conditions (PBC) to predict the optimal site and orientation. MLFF optimization is then conducted from the predicted site with a fixed interstitial gap until convergence. The final prediction undergoes constraint verification, and DFT verification is performed on valid structures to calculate success rates. 2022), Equiformer (Liao & Smidt, 2022; Liao et al., 2023) utilize spherical harmonics in representing the geometric features. 
Diffusion Models: Diffusion models are a class of generative models that have shown impressive results across different domains starting from computer vision (Dhariwal & Nichol, 2021; Croitoru et al., 2023), language models (Gong et al., 2022), temporal data modeling, to applications in molecules (Xu et al., 2022; 2023; Arts et al., 2023; Hoogeboom et al., 2022; Jing et al., 2022), proteins (Wu et al., 2022; Trippe et al., 2022; Watson et al., 2022; 2023) and materials (Xie et al., 2021; Fu et al., 2023; Zeni et al., 2023; Merchant et al., 2023; Yang et al., 2023b). There are different kinds of formulations proposed for diffusion models like denoising diffusion probabilistic models (DDPMs), score-based generative models (SGMs), and stochastic differential equations (Score SDEs) (Yang et al., 2023a). Many of these formulations have been adapted to problems in molecular and material discovery. For example, CDVAE (Xie et al., 2021) adapts concepts from noise-conditioned score networks (NCSN) for bulk discovery. Conditional diffusion has also been recently utilized across proteins (Krishna et al., 2024), catalyst and materials (Zheng et al., 2023) for generating structures with required properties. Diffusion models have also been recently utilized for molecular docking on proteins (Corso et al., 2022). Although this problem is somewhat analogous to placing adsorbate on a slab, as far as we know there hasn\u2019t been previous work on formulating adsorbate placement in a diffusion framework. AdsorbDiff also differs from molecular docking in several key aspects \u2013 2D translation formulation, periodic boundary conditions, conditional denoising formulation, and the requirement of DFT level accuracy as opposed to simple force-fields for proteins making our end-to-end evaluation with DFT critical. 3. AdsorbDiff 3.1. Overview The objective of this research is to enhance the efficiency of adsorption energy calculation, representing the lowest energy configuration of an adsorbate on a slab. The methodology of this work involves the initial placement of an adsorbate on a random site within the 2D surface of the slab, followed by reverse diffusion to predict the optimal adsorption site and orientation. Employing machine learning force field optimization, the structure undergoes iterative updates with an optimizer until forces converge close to 0. Subsequently, the final structure is verified for compliance with constraints essential for defining adsorption energy. On the optimized structure, a single Density Functional Theory (DFT) calculation is conducted to obtain the predicted energy (EP red). A successful outcome is determined by the predicted energy being within 0.1 eV or lower than the DFT baseline of adsorption energy in OC20-Dense data, indicating the model\u2019s ability to provide a comparable or superior estimate of adsorption energy (shown in Figure 1). 3 \fAdsorbate placement via conditional denoising diffusion The code is open-sourced with MIT License1. 3.2. Adsorbate placement Various adsorbate placement strategies were explored for the OC20-Dense dataset, incorporating a combination of heuristic and random approaches. Specifically, 100 sites were selected for each adslab configuration, utilizing a blend of heuristic and random placements. 
The heuristic placement involved strategically situating the adsorbate\u2019s binding site on either an on-top site, hollow site, or bridge site, with a specified interstitial gap denoting the distance between the connecting atom of the slab and the corresponding adsorbate atom. Additional random sites are introduced through the random rotation of the adsorbate along the normal of the slab, accompanied by a slight translational wobble along the surface from the heuristic site. 3.3. Diffusion for adsorbate placement In this work, our objective is to develop a diffusion model aimed at predicting the adsorbate orientation and site corresponding to the lowest energy, as established through benchmarking with the OC20-Dense dataset. The adsorbate motion is constrained within a manifold (Mc) and utilizes the combined action group (A), as described in DiffDock (Corso et al., 2022). This manifold permits the adsorbate to navigate towards configurations with lowenergy adslab states through a combination of translations, rotations, and torsion angle adjustments. Note, for fair comparisons with our baselines, torsion angle alterations are disregarded in our analysis due to the smaller size of the adsorbate employed in this study. This approach aligns with the methodology of AdsorbML, which does not introduce randomness in torsion angles as part of its benchmark. In our framework, we specifically consider translations in the 2D plane parallel to the slab while accounting for periodic boundary conditions (PBC). The z-coordinate is meticulously aligned to denote the normal direction of the slab and the diffusion process is executed across the xycoordinates. Therefore, the adsorbate movements are associated with the 2D translation group T(2), and rigid rotations are modeled using the SO(3) group. The translation operation, denoted as Atr : T(2) \u00d7 R2n \u2192R2n, is defined as Atr(r, x)i = xi + r, employing the isomorphism T(2) \u223c = R2, where xi \u2208R2 represents the position of the i-th adsorbate atom. Similarly, the rotation operation, denoted as Arot : SO(3) \u00d7 R3n \u2192R3n, is defined by Arot(R, x)i = R(xi \u2212\u00af x) + \u00af x, where \u00af x = 1 n P i xi, signifying rotations around the center-of-mass of the adsorbate. For the initial coordinates of adsorbate, we select a random 1https://github.com/AdeeshKolluru/ AdsorbDiff point on the slab. This point is considered as the center-ofmass of the adsorbate in fractional coordinates. We then convert from fractional coordinates to real coordinates and perform a reverse diffusion process to get to the lowest energy site (as shown in Algorithm 1). The work conducted by De et al. (De Bortoli et al., 2022) and Corso et al. (Corso et al., 2022) has demonstrated the applicability of the diffusion framework to Riemannian manifolds. In this context, the score model constitutes the tangent space, and a geodesic random walk serves as the reverse stochastic differential equation (SDE) solver. The score model is trained using denoising score matching (Song & Ermon, 2019), wherein a score function s\u03b8(x) is learned to approximate the gradient of the probability density \u2207xp(x) at varying noise levels (as shown in Algorithm 2). The learned scores for translations and rotations are treated as independent entities, assuming the tangent space is a direct sum of individual tangent spaces, with contributions from torsion being neglected. 
The forward SDE for both translation and rotation is defined as dx = q d\u03c32(t) dt dw, 4 \fAdsorbate placement via conditional denoising diffusion where w represents the corresponding Wiener process. In the translational scenario within T(2), the model learns a score for a standard Gaussian distribution with variance \u03c32(t). For rotations in SO(3), the diffusion kernel is governed by the IGSO(3) distribution, which can be sampled in the axis-angle parameterization. This involves sampling a unit vector \u03c9\u2032 \u2208so(3) uniformly and a random angle \u03c9 from the interval [0, \u03c0], as outlined by Equations 1 and 2. The score of diffusion kernel is defined in Equation 3. The computation of R\u2032 = R(\u03c9\u02c6 \u03c9)R, where R is the result of applying the Euler vector \u03c9\u02c6 \u03c9 to R, has been established in prior work by Yim et al. (Yim et al., 2023). To efficiently carry out the score computation and sampling processes, it is feasible to precompute the truncated infinite series and interpolate the cumulative distribution function (CDF) of p(\u03c9). p(\u03c9) = 1 \u2212cos(\u03c9) \u03c0 f(\u03c9) (1) f(\u03c9) = \u221e X l=0 (2l + 1) exp \u0012 \u2212l(l + 1)\u03c32 2 \u0013 \u00d7 sin \u0012\u0012 l + 1 2 \u0013 \u03c9 \u0013 sin \u0010\u03c9 2 \u0011 (2) \u2207ln pt(R\u2032|R) = \u0012 d d\u03c9 log f(\u03c9) \u0013 \u02c6 \u03c9 (3) 3.4. Conditional denoising diffusion for adsorbate placement While the OC Challenge set provides densely calculated adsorption energies for 244 systems, a total of 244 * 100 DFT optimization benchmarks were conducted. This involved performing 100 different random placements for each configuration. Notably, the naive denoising diffusion setup was exclusively trained on the 244 lowest energy configurations. To leverage the entirety of the DFT optimization data, a conditional diffusion model is employed. In this model, the optimized position is conditioned on the relative energy, specifically relative to the energy of the lowest energy configuration (Ec rel-i = Ec min \u2212Ec i ). This approach allows for a more comprehensive utilization of the available DFT optimization data. 3.5. Graph Neural Network (GNN) architecture The inputs to the ML model are the 3D positions of all input atoms from the adslab configuration and their corresponding atomic numbers. The outputs predict per-atom 3D vectors. These vectors are forces in the case of force fields and the score function in the case of diffusion. To predict multiple score functions (for translation and rotation), multiple output heads are trained each predicting independent score functions. All architectures used in this work come under the messagepassing neural network (MPNN) framework of graph neural networks (GNNs). MPNNs operate by passing messages between nodes in the graph, allowing information to be exchanged and aggregated iteratively. The key components of an MPNN include message passing, updating node states, and global readout. In the message-passing step, nodes exchange information based on their local context, and this information is then used to update the states of the nodes (as shown in Equation 4). 
3.4. Conditional denoising diffusion for adsorbate placement

While the OC Challenge set provides densely calculated adsorption energies for 244 systems, a total of 244 x 100 DFT optimizations were conducted, corresponding to 100 different random placements for each adslab combination. Notably, the naive denoising diffusion setup is trained exclusively on the 244 lowest energy configurations. To leverage the entirety of the DFT optimization data, a conditional diffusion model is employed, in which the optimized position is conditioned on the relative energy with respect to the lowest energy configuration, $E^c_{\mathrm{rel},i} = E^c_{\min} - E^c_i$ (relative across the densely sampled sites of an adslab combination). This allows a far more comprehensive utilization of the available DFT optimization data.

3.5. Graph Neural Network (GNN) architecture

The inputs to the ML model are the 3D positions of all atoms in the adslab configuration and their corresponding atomic numbers. The outputs are per-atom 3D vectors: forces in the case of force fields, and score functions in the case of diffusion. To predict multiple score functions (for translation and rotation), multiple output heads are trained, each predicting an independent score.

All architectures used in this work fall under the message-passing neural network (MPNN) framework of graph neural networks (GNNs). MPNNs operate by passing messages between nodes in the graph, allowing information to be exchanged and aggregated iteratively. The key components of an MPNN are message passing, node-state updates, and a global readout. In the message-passing step, nodes exchange information based on their local context, and this information is used to update the node states, as shown in Equation 4:

$h_v^{(t+1)} = \mathrm{Update}\left(h_v^{(t)}, \mathrm{Aggregate}\left(\{m_{u \to v}^{(t)} \mid u \in N(v)\}\right)\right) \quad (4)$

Here, $h_v^{(t)}$ is the embedding of node $v$ at iteration $t$, $m_{u \to v}^{(t)}$ denotes the message from node $u$ to node $v$ at iteration $t$, $N(v)$ is the neighborhood of node $v$, and Update and Aggregate are differentiable functions for updating node states and aggregating messages, respectively.

In our study, we systematically investigate diverse architectures for training diffusion models to discern the significance of architectural decisions in this context. Specifically, we assess the performance of PaiNN, GemNet-OC, and EquiformerV2, each distinguished by its treatment of explicit geometric information and rotational symmetries (Duval et al., 2023). This selection is grounded in the diverse characteristics they bring to the table. Furthermore, we employ these architectures in benchmarking against OC20 force-field evaluation, thereby facilitating a comparative analysis of architectural significance in the realms of force fields and diffusion.
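To ground Equation 4, here is a minimal PyTorch sketch of one message-passing round. The sum aggregation and two-layer MLPs are our own placeholder choices, not the exact blocks used in PaiNN, GemNet-OC, or EquiformerV2.

```python
import torch
import torch.nn as nn

class MPNNLayer(nn.Module):
    """One round of Eq. 4 with sum aggregation over neighbors."""
    def __init__(self, dim):
        super().__init__()
        self.message = nn.Sequential(nn.Linear(2 * dim, dim), nn.SiLU())
        self.update = nn.Sequential(nn.Linear(2 * dim, dim), nn.SiLU())

    def forward(self, h, edge_index):
        src, dst = edge_index                            # edges u -> v
        m = self.message(torch.cat([h[src], h[dst]], dim=-1))
        agg = torch.zeros_like(h).index_add_(0, dst, m)  # sum over N(v)
        return h + self.update(torch.cat([h, agg], dim=-1))
```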
4. Results

In this section, we present results demonstrating the impact of AdsorbDiff in accelerating the search for adsorption energy, i.e., better global optima. Specifically, we demonstrate the impact of conditional denoising training over unconditional training and a randomly placed adsorbate baseline; this random baseline is equivalent to performing AdsorbML on a single site (Nsite=1). Additionally, we demonstrate the impact of pretraining, model architectures, and the generalization of this approach to new adsorbates and slabs.

4.1. Datasets

We utilize two publicly available datasets in this work: OC20-Dense (Lan et al., 2023) and OC20 (Chanussot et al., 2021).

OC20: Open Catalyst 2020 (OC20) is a large-scale dataset containing converged DFT optimization trajectories for 460k unique adslab configurations, encompassing 55 unique elements and 74 adsorbates. Note that these are local optimizations performed from a single heuristic placement. ML force field models are trained on the forces derived from these DFT trajectories. Additionally, the optimized structures from OC20 are utilized for pre-training the diffusion model.

OC20-Dense: The OC20-Dense dataset serves as a DFT benchmark for adsorption energies, employing dense placement on 100 random sites per adslab configuration, followed by DFT optimization. The dataset provides both in-distribution (ID) and out-of-distribution (OOD) splits relative to OC20. The ID data incorporates adsorbates and slabs from OC20's training set but presents different combinations and configurations, while OOD introduces new adsorbates and/or slabs not found in the OC20 training set. A subset of OC20-Dense ID and OOD was used in the Open Catalyst Challenge 2023, hosted at the AI for Science Workshop during NeurIPS 2023 (https://opencatalystproject.org/challenge.html). We split the ID data 80/20 for training the diffusion model and validating the sampling process; these smaller subsets make end-to-end iterations computationally cheaper.

4.2. Metric and constraints

Our success metric is defined by the final energy calculated through DFT. For real-world applications, this total energy $E^{DFT}_{Total}$ is used to calculate the adsorption energy as

$E^{DFT}_{Adsorption} = E^{DFT}_{Total} - E^{DFT}_{Slab} - E^{DFT}_{Adsorbate},$

where $E^{DFT}_{Slab}$ and $E^{DFT}_{Adsorbate}$ are the independent energies of the slab and the adsorbate, respectively. This adsorption energy acts as a thermodynamic descriptor of how good a catalyst is for a downstream application. The DFT Success Rate (SR) is defined as the percentage of valid structures within 0.1 eV of, or lower than, the DFT-computed adsorption energy benchmark in the OC20-Dense data (as described in AdsorbML). This metric is computationally expensive but accurate; metrics calculated from ML predictions are inexpensive but also inaccurate, as discussed further in Appendix C.

Since we calculate adsorption energies, the adsorbate and slab must not change during optimization. A structure is therefore considered an anomaly due to (1) adsorbate desorption: the adsorbate moves far away from the slab; (2) adsorbate dissociation: the adsorbate breaks apart into multiple adsorbates; (3) slab mismatch/reconstruction: the slab reconstructs into a completely different structure during optimization; or (4) adsorbate intercalation: an adsorbate atom detaches and gets into the slab.

Experimental setup: All presented results are based on the DFT success rate metric defined above. Throughout the diffusion process, we employ the EquiformerV2 architecture unless explicitly stated otherwise, owing to its state-of-the-art performance in AdsorbML. For MLFF optimization, we utilize GemNet-OC pre-trained on OC20, chosen for its lower inference cost. Further specifics regarding model and training hyperparameters are available in Appendix D. All results are reported on the val ID split, apart from the OOD section.
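As a concrete illustration of the metric in Section 4.2, a minimal sketch of the adsorption-energy and success-rate computation is shown below. Anomaly detection is assumed to be provided as a per-system flag, and all function names are our own.

```python
def adsorption_energy(e_total, e_slab, e_adsorbate):
    """E_ads = E_total - E_slab - E_adsorbate (all DFT energies, eV)."""
    return e_total - e_slab - e_adsorbate

def dft_success_rate(pred_e_ads, ref_e_ads, is_anomalous, tol=0.1):
    """Fraction of systems whose valid predicted adsorption energy is
    within `tol` eV of the benchmark value, or below it."""
    hits = 0
    for pred, ref, bad in zip(pred_e_ads, ref_e_ads, is_anomalous):
        if not bad and pred <= ref + tol:
            hits += 1
    return 100.0 * hits / len(ref_e_ads)
```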
4.3. Conditional vs unconditional diffusion

[Figure 2. Comparison of conditional and unconditional diffusion with a baseline of random placement (Nsite=1): Random 9.1%, Unconditional 11.4%, Conditional 31.8% DFT success rate. Conditional diffusion training on relative energies of adslab configurations significantly improves success rates over unconditional training and the AdsorbML baseline.]

We demonstrate the importance of conditional training on relative energies (Section 3.4) over unconditional diffusion training in Figure 2. We compare both approaches to a naive baseline of AdsorbML with a single site (Nsite=1), where MLFF optimization is performed on a random adsorbate placement. The performance of unconditional training is suboptimal, which may be ascribed to the unexploited potential of the additional data made available through conditional training.

4.4. AdsorbDiff vs AdsorbML

AdsorbML conducts MLFF optimization and DFT evaluations on adsorption sites randomly placed within the system. We compare this against AdsorbDiff, where adsorption sites are predicted by the diffusion model. As depicted in Figure 3, AdsorbDiff exhibits notably superior performance at lower Nsites. However, as the number of sites (Nsites) increases, AdsorbDiff tends to converge to, or underperform, the brute-force approach employed by AdsorbML.

[Figure 3. DFT Success Rates (%) for AdsorbDiff and AdsorbML across a varying number of site predictions. AdsorbDiff performs 3.5x better than AdsorbML when utilizing a single site prediction. At higher Nsites, AdsorbML performs better due to the brute-force nature of its site selection, which reduces anomalies.]

Adsorbate sites sampled from AdsorbDiff are less diverse by design, as the model is trained to predict the global optimum. Averaging the standard deviation of the points sampled at 10 Nsites, we obtain 8.1 Å for AdsorbML and 2.7 Å for AdsorbDiff. AdsorbML's brute-force placements have more randomness, which leads to fewer anomalies after the MLFF optimization process, as shown in Figure 4.

[Figure 4. Anomalies in AdsorbDiff and AdsorbML with respect to Nsites. A system is labeled anomalous if all of its predicted sites result in anomalies. AdsorbML has fewer anomalies than AdsorbDiff at higher Nsites due to more randomness in initial sites.]

4.5. Impact of pretraining

Conditional diffusion benefits from training on a dataset that is 100 times larger than that of the unconditional approach, a consequence of leveraging multiple local optima within each unique adslab configuration. This substantial increase in training data manifests as a notable enhancement in the success rate of the conditional approach. The OC20 IS2RE dataset, containing optimization data for 460,000 distinct adslab combinations, serves as a valuable resource for pretraining the diffusion model. It is important to note that this pretraining yields a model that learns the local optima of an adslab combination, with the caveat that it may not capture the global optima.

[Figure 5. Impact of pretraining on 460k OC20 local-optima data on DFT Success Rate (Nsite=1): Random 9.1%, PT Zero-shot 29.6%, PT Conditional 31.8%. PT Zero-shot measures zero-shot generalization of the OC20 pre-trained model to OC20-Dense data. PT Conditional is finetuned on OC20-Dense data conditioned on relative energies of adslab configurations. The random baseline corresponds to a randomly placed adsorbate.]

IS2RS Pretraining (PT) Zero-shot: Taking advantage of the diffusion model pre-trained on OC20 IS2RE data, we conduct a zero-shot validation on the OC20-Dense ID val split. This setup assesses the model's ability to predict better global optima having been trained on a large dataset of local optima. Notably, we observe a substantial increase in DFT success rate in the zero-shot setting (Figure 5).

IS2RS Pretraining (PT) Conditional: Here, the pre-trained model is finetuned on the OC20-Dense data as described in Section 3.4. Although this gives a 2% improvement over zero-shot, it converges to the same result as training conditionally on OC20-Dense alone (Figure 5).

4.6. Impact of architectures

Architectures with richer geometric information and extensive many-body interactions, such as eSCN and EquiformerV2, have demonstrated superior performance in force evaluations on the OC20 dataset compared to simpler models like PaiNN, which primarily encodes directional information and applies linear transformations.
Our benchmarking evaluates three architectures that exhibit progressively improved OC20 Force MAE, revealing significant differences among them. This evaluation is conducted in the zero-shot setting following pretraining (PT zero-shot) on the 460,000-instance OC20 dataset. This choice is inspired by insights from the GemNet-OC paper (Gasteiger et al., 2022), which suggests that certain architectural choices manifest optimal performance only at higher data scales.

[Figure 6. Impact of GNN architectures on the diffusion process for DFT Success Rate, keeping the rest of the framework the same: PaiNN 27.3%, GemNet-OC 27.3%, EquiformerV2 29.6%. Different architectures perform similarly on the task of diffusion sampling.]

Interestingly, for the diffusion task, the disparity in success rates among these architectures is marginal (Figure 6), which has recently been demonstrated for molecular generation tasks as well (Wang et al., 2023). The intuition behind this result is that the diffusion model's score function can be thought of as learning a harmonic potential (Xie et al., 2021). Harmonic potentials are simpler force fields than the ab initio DFT calculations underlying OC20 forces, so simpler architectures may suffice to capture the underlying complexity of the diffusion task defined in our work.

4.7. OOD generalization

We measure the success of AdsorbDiff in out-of-distribution (OOD) cases, where the model has not seen the adsorbate or the slab even during pre-training on OC20. We pick 50 random samples from the 200-system validation OOD split defined in the Open Catalyst Challenge 2023. We observe a marginal decrease of only 3.8% for the OOD case compared to the ID scenario, and consistently observe significant improvement over the AdsorbML (Nsite=1) baseline.

[Figure 7. Comparison of DFT Success Rate for the Out-of-Distribution (OOD) split using the AdsorbDiff method: Random 8.4%, AdsorbDiff 28%. The random baseline corresponds to a randomly placed adsorbate.]

4.8. Inference cost

For conditional diffusion, our approach maintains a maximum step limit of 100, with adsorbate placement converging within 98 steps on average. In contrast, MLFF optimization with a maximum step limit of 300 and an Fmax criterion of 0.01 eV/Å (consistent with AdsorbML) converges in approximately 286 steps. Consequently, for a single adsorption site (Nsite=1), AdsorbDiff incurs approximately 34% more inference cost than AdsorbML ((98 + 286)/286 = 1.34), given the same GNN architecture for diffusion and MLFF optimization. This end-to-end ML framework is O(10^4) times faster than conventional DFT pipelines (Lan et al., 2023). In Section 4.6, we showed that simpler and faster models such as PaiNN yield performance comparable to more intricate and slower models like EquiformerV2. This further improves the efficiency of our diffusion-based approach, as its computational burden becomes negligible compared to MLFF optimization, which requires more computationally intensive ML architectures (details in Appendix B).

5.
+ }, + { + "url": "http://arxiv.org/abs/2206.02005v2", + "title": "Open Challenges in Developing Generalizable Large Scale Machine Learning Models for Catalyst Discovery", + "abstract": "The development of machine learned potentials for catalyst discovery has\npredominantly been focused on very specific chemistries and material\ncompositions. While effective in interpolating between available materials,\nthese approaches struggle to generalize across chemical space. The recent\ncuration of large-scale catalyst datasets has offered the opportunity to build\na universal machine learning potential, spanning chemical and composition\nspace. If accomplished, said potential could accelerate the catalyst discovery\nprocess across a variety of applications (CO2 reduction, NH3 production, etc.)\nwithout additional specialized training efforts that are currently required.\nThe release of the Open Catalyst 2020 (OC20) has begun just that, pushing the\nheterogeneous catalysis and machine learning communities towards building more\naccurate and robust models. In this perspective, we discuss some of the\nchallenges and findings of recent developments on OC20. We examine the\nperformance of current models across different materials and adsorbates to\nidentify notably underperforming subsets. We then discuss some of the modeling\nefforts surrounding energy-conservation, approaches to finding and evaluating\nthe local minima, and augmentation of off-equilibrium data. To complement the\ncommunity's ongoing developments, we end with an outlook to some of the\nimportant challenges that have yet to be thoroughly explored for large-scale\ncatalyst discovery.", + "authors": "Adeesh Kolluru, Muhammed Shuaibi, Aini Palizhati, Nima Shoghi, Abhishek Das, Brandon Wood, C. Lawrence Zitnick, John R Kitchin, Zachary W Ulissi", + "published": "2022-06-04", + "updated": "2022-06-13", + "primary_cat": "physics.chem-ph", + "cats": [ + "physics.chem-ph", + "cond-mat.mtrl-sci" + ], + "main_content": "Introduction

Catalysts have played a key role in the synthesis of everyday chemicals and fuels necessary for a 21st-century society. As renewable energy prices continue to decrease, traditional chemical synthesis processes are being revisited for more sustainable alternatives. At the center of this, catalyst discovery plays a key role in the advancement of renewable energy processes and sustainable chemical production, e.g., ammonia for fertilizer and hydrogen production. Unfortunately, the search space for catalyst materials is enormous even for high-throughput experiments [2]. This presents a need for computational tools to simulate systems through quantum mechanical (QM) models like Density Functional Theory (DFT). QM approaches have made notable advancements in bridging computational results to experimental findings [3-8]. While effective, QM tools scale very poorly, O(N^3) or worse in the number of electrons.

[Figure 1: Summary of challenges associated with training large ML potentials on large datasets, discussed in this paper. Top left: trade-offs in direct and gradient GNN force predictions. Top right: an example system where the distance metrics are relatively good for the direct approach but the force metrics are worse. Bottom left: demonstration of inconsistent error across a metallic surface and a non-metal through an example. Bottom right: augmenting existing relaxation datasets with off-equilibrium data can aid in relaxation performance.]
The computational cost associated with QM tools renders them infeasible at the scale of the systems and search space desired for catalyst discovery. As a result, the catalysis community has moved towards a more data-driven approach [9-13]. With the QM data available, researchers are often interested in building machine learning surrogates for a particular chemical property [14-17]. Such efforts, however, were limited by the finite data available, often for a very specific chemistry or system, limiting the generalizability of such models [10, 18]. Fortunately, as the community continues to curate larger and more diverse datasets, machine learning models will continue to improve as they move towards larger and more sophisticated architectures.

In the field of small molecules, a vast collection of datasets has been developed for varying use cases, including molecular dynamics simulations (MD17 [19], ANI-1 [20], COLL [21]) and quantum mechanical properties (QM9 [22], Alchemy [23]). These datasets are often limited to a few (5-10) unique elements, on average 10-20 atoms per system, and training set sizes in the range of 10k-1M samples. In the field of heterogeneous catalysis, datasets are often much more limited, with training set sizes between 100 and 50k [24-27]. These datasets were often created for very specific applications involving a handful of small adsorbates (e.g., hydrogen-containing adsorbates on transition metal surfaces, CO2 reduction catalysts, etc.). The release of OC20 marks a push towards a large, sparse collection of the material space. OC20 spans 55 unique elements and 82 adsorbates and includes a collection of unary, binary, and ternary materials. A total of 1.28 million DFT relaxations were performed, comprising ~260M single-point evaluations of system energy and per-atom forces.

OC20 presented several practical tasks for the community to work towards. The most general of the tasks, Structure to Energy and Forces (S2EF), evaluates a model's ability to serve as a surrogate to DFT by predicting a configuration's energy and per-atom forces. Initial Structure to Relaxed Energy (IS2RE) asks to predict the relaxed state energy given only the initial structure. Initial Structure to Relaxed Structure (IS2RS) explores how well the relaxed structure can be predicted given only the initial configuration. In the scope of OC20, all energies were referenced to represent adsorption energy. For more details, we refer readers to the original manuscript [1].

In this perspective we shed light on the challenges of training Graph Neural Networks (GNNs) on large-scale datasets spanning material and composition space, illustrated in Figure 1. We begin with a quick overview of the current state of the community's progress and share some takeaways from what we have observed. We then discuss some telling trends in the performance of models across different adsorbates and material types. We discuss how different approaches and modeling decisions impact the prediction tasks and highlight the challenges associated with each. Further, we explain what the accuracies in the various proposed metrics mean and some of the challenges in analyzing them. Finally, we share our outlook on the direction the community is headed and what still remains to achieve a large-scale, generalizable potential for catalyst discovery.

Community progress in developing ML models for catalysis

Molecular modeling has progressed at an incredible rate over the past few decades.
Simple linear models, neural networks, and kernel methods were originally developed relying on hand-crafted atomic representations, or descriptors [28-32], as inputs to the models. Descriptors capture invariant geometric information in the form of bonds and angles of the local environment of an atom. While effective, the parameterization of such descriptors has been a challenging and non-trivial task. The past few years have seen a shift towards deep learning approaches. Rather than relying on hand-crafted representations, models are being developed to learn similar or more expressive representations, specifically by exploiting the graphical nature of molecules using Graph Neural Networks (GNNs) [33-38]. Such models take in only 3D atomic coordinates and atomic numbers. A graph is then generated, where atoms are treated as nodes and the distances between them as edges (a minimal sketch of this cutoff-based graph construction appears at the end of this overview). Once a graph has been constructed, GNNs undergo several rounds of message passing in which node representations are updated based on messages sent between neighboring nodes. While models may differ in their exact architecture, the update and message functions often comprise a series of multilayer perceptrons and nonlinearities. Unlike traditional descriptor-based models, GNNs learn node representations as part of the training process. Learned representations proceed through a final output block where a prediction is made. In recent years, GNNs have come to surpass traditional descriptor-based models [33-38]. While typically data hungry, recent models like NequIP [36] demonstrate great performance with as little as 100 samples. GNNs continue to gain traction as models continue to demonstrate state-of-the-art performance on molecular datasets.

[Figure 2: Community progress on the OC20 dataset since release. Left: IS2RE performance for both direct and relaxation-based approaches. The current error target of 0.10 eV would make these models more practically useful for researchers' applications. Right: S2EF performance as evaluated by mean absolute error of the forces. IS2RE and S2EF MAEs for their median baselines are 1.756 eV and 0.084 eV/Å, respectively.]

Since the release of OC20, the community has been rapidly developing new approaches to improve existing baselines. Models being developed range from traditional descriptor-style models [39] to complex and large GNN architectures [35, 40-43]. Godwin, et al. present a simple but effective GNN regularization technique to improve graph-level predictions, namely IS2RE. Liu, et al. use a similar technique in addition to a graph-based transformer to win 1st place in the NeurIPS 2021 Open Catalyst Challenge [44] for direct IS2RE predictions. Klicpera, et al. [35, 45] and Shuaibi, et al. [40] explore various higher-order representations (i.e., triplets and quadruplets) and leverage training on the entirety of OC20 to achieve impressive performance on the S2EF task, with GemNet-OC [45] holding the current state of the art across all tasks. Sriram, et al. [41] introduce Graph Parallelism, allowing them to scale GemNet to nearly a billion parameters across multiple GPUs. The scale and diversity of OC20 have additionally enabled transfer learning approaches to smaller datasets. Kolluru, et al. [46] propose a transfer learning technique that uses OC20-pretrained models to improve performance on smaller, out-of-distribution datasets.
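As referenced above, here is a minimal NumPy sketch of the cutoff-based graph construction; it is our own illustration and ignores periodic images for brevity, which production catalyst codes must include.

```python
import numpy as np

def radius_graph(pos, cutoff=6.0):
    """Build edges between all atom pairs closer than `cutoff` (Å).
    pos: (n, 3) Cartesian positions. Returns a (2, E) edge index."""
    diff = pos[:, None, :] - pos[None, :, :]        # (n, n, 3) offsets
    dist = np.linalg.norm(diff, axis=-1)            # (n, n) distances
    src, dst = np.nonzero((dist < cutoff) & (dist > 0))  # drop self-loops
    return np.stack([src, dst])                     # messages src -> dst
```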
Similar work has also been demonstrated for other large material datasets [47]. As the community continues to improve performance (Figure 2), it is important to understand some of the challenges, trends, and pitfalls in developing a generalizable potential.

Where are molecular GNNs still erroneous?

Most of the independent work on developing ML potentials has been confined to datasets built for specific applications. For example, ML potentials for CO2RR applications are usually trained with only CO and H adsorbates [27, 48-50]. While this approach might interpolate well across materials, extrapolation to different adsorbates or more complicated materials will likely suffer. A universal ML potential, if possible, would first require a large, diverse dataset that spans material and chemical space. The OC20 dataset was created to build ML potentials that cover a large and diverse space of heterogeneous catalysts.

Errors across material types: With over 300k unique surfaces, OC20 spans a vast range of material compositions. When training large GNNs on the entire OC20 dataset, we observe that accuracies are not uniform across element and adsorbate types. To analyze this, we divide the validation set into four different material types: intermetallics, metalloids, nonmetals, and halides (Figure 3(a)). The distribution of data across these classes is not uniform; there are significantly more intermetallics and relatively fewer halides.

[Figure 3: Analysis of GemNet-dT errors on the OC20 validation sets. (a) The categorization of OC20 elements into intermetallics, nonmetals, metalloids, and halides. (b) Model performance across the different distributions and material types. (c) Errors averaged across all validation splits for systems containing specific adsorbates. (d) Errors averaged across all validation splits for adsorbates containing certain elements.]

We observe that performance on nonmetals is significantly worse, although both nonmetals and metalloids contribute a similar percentage of the training data (Figure 3(b)). On the other hand, models tend to do much better across the board for intermetallics. Inaccuracies from nonmetals disproportionately contribute to the overall errors, leading to worse performance for both force and energy predictions.

Errors across adsorbates: Large adsorbates are inherently more complicated, as the degrees of freedom increase with the number of atoms. However, we observe no correlation between our model's performance and the size of the adsorbate. Model accuracies are poor for bidentate adsorbates like *CH*COH, *N*NO, and *CH2*O, as shown in Figure 3(c). Figure 3(d) also shows that adsorbates containing N and O are generally more erroneous.

Modeling trade-offs

Energy-conserving forces

Force predictions play an important role in the applications of ML models for catalyst discovery. While some tasks may only be interested in property predictions like adsorption or formation energy [48, 52, 53], forces are necessary to study dynamics such as structural relaxations, molecular dynamics, and transition state calculations [1, 33, 36, 54]. Physically, energy-conserving forces are derived as the negative gradient of the energy with respect to atomic positions:

$F_i = -\frac{dE}{dx_i} \quad (1)$

Energy conservation is critical for studying molecular dynamics accurately.
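Equation 1 is typically realized by differentiating a predicted energy with automatic differentiation. A minimal PyTorch sketch follows; it is our own illustration, with `energy_model` standing in for any differentiable GNN.

```python
import torch

def conservative_forces(energy_model, pos, atomic_numbers):
    """Forces as the negative gradient of predicted energy (Eq. 1).
    `energy_model` maps a structure to a scalar energy;
    `pos` is an (n, 3) tensor of atom positions."""
    pos = pos.detach().requires_grad_(True)
    energy = energy_model(pos, atomic_numbers)           # scalar E
    forces = -torch.autograd.grad(energy, pos, create_graph=True)[0]
    return energy, forces
```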
ML models estimating energy-conserving forces must ensure the architecture is continuous and differentiable, often satisfied by appropriate non-linear activation functions [33-35]. Geometrically, deriving forces in an energy-conserving manner ensures they are rotationally equivariant, a necessary physical relation of molecular systems [55]. Unfortunately, the gradient calculation increases model overhead in both memory usage and computational time by a factor of 2-4 [40, 56]. For datasets like MD17, calculating forces as a gradient is known to help model accuracy, as it is an important physical prior [35, 36, 40]. Models trained on MD17 are often used to run molecular dynamics, further necessitating energy conservation [36]. However, for the OC20 dataset, particularly in the task of geometric optimization, we observe the gradient approach to force calculation performs worse than direct force prediction for GemNet-dT [35] and SpinConv [40]. DimeNet++ [34] and ForceNet [56] were built for the gradient and direct approaches, respectively.

Table 1: Results on the OC20 S2EF task via gradient-derived or direct force predictions. All models were trained on the OC20 S2EF All dataset; results reported on the validation set. Energy metrics are unavailable for the gradient-based SpinConv model, which was optimized only on forces. Columns: Energy MAE (eV, lower is better) and Force MAE (eV/Å, lower is better), each over ID / OOD Ads. / OOD Cat. / OOD Both.

Median | 2.04 / 2.42 / 1.99 / 2.58 | 0.081 / 0.080 / 0.079 / 0.098
Gradient forces, SpinConv [40] | - | 0.031 / 0.035 / 0.032 / 0.042
Gradient forces, GemNet-dT [35] | 0.36 / 0.39 / 0.48 / 0.58 | 0.030 / 0.034 / 0.033 / 0.042
Direct forces, SpinConv [40] | 0.26 / 0.29 / 0.38 / 0.47 | 0.027 / 0.030 / 0.029 / 0.037
Direct forces, GemNet-dT [35] | 0.23 / 0.25 / 0.35 / 0.41 | 0.021 / 0.024 / 0.025 / 0.032

Table 2: Results on the OC20 IS2RE task using one of two approaches. Direct: directly predicting the relaxed state energy. Relaxation: training a model for energy and force predictions, followed by an iterative ML-based geometry optimization to arrive at a relaxed structure and energy. Relaxation results on the 2M subset suggest that competitive results are still possible with a limited compute budget. Results reported on the test set. Columns: Model, Approach, Dataset Size | Energy MAE (eV, lower is better) | Energy within Threshold (EwT, higher is better), each over ID / OOD Ads / OOD Cat / OOD Both.

Median baseline | 1.75 / 1.88 / 1.71 / 1.66 | 0.71% / 0.72% / 0.89% / 0.74%
DimeNet++ [34], Direct, 460,328 | 0.56 / 0.73 / 0.58 / 0.66 | 4.25% / 2.07% / 4.10% / 2.41%
SpinConv [40], Direct, 460,328 | 0.56 / 0.72 / 0.57 / 0.67 | 4.08% / 2.26% / 3.82% / 2.33%
NoisyNodes [43], Direct, 460,328 | 0.42 / 0.57 / 0.44 / 0.47 | 9.12% / 3.49% / 8.01% / 4.64%
Graphormer [42], Direct, 460,328 | 0.40 / 0.57 / 0.42 / 0.50 | 8.97% / 3.45% / 8.18% / 3.79%
DimeNet++ - LF + LE [1, 34, 51], Relaxation, 2,000,000 | 0.53 / 0.57 / 0.56 / 0.52 | 6.79% / 4.71% / 6.49% / 4.54%
SpinConv [40, 51], Relaxation, 2,000,000 | 0.46 / 0.51 / 0.47 / 0.44 | 7.38% / 4.82% / 7.05% / 5.31%
GemNet-dT [35], Relaxation, 2,000,000 | 0.44 / 0.44 / 0.45 / 0.42 | 9.37% / 6.59% / 8.42% / 6.40%
GemNet-OC [45], Relaxation, 2,000,000 | 0.41 / 0.42 / 0.42 / 0.39 | 11.02% / 8.68% / 10.10% / 7.82%
DimeNet++ - LF + LE [1, 34], Relaxation, 133,934,018 | 0.50 / 0.54 / 0.58 / 0.61 | 6.57% / 4.34% / 5.09% / 3.93%
SpinConv [40], Relaxation, 133,934,018 | 0.42 / 0.44 / 0.46 / 0.42 | 9.37% / 7.47% / 8.16% / 6.56%
GemNet-dT [35], Relaxation, 133,934,018 | 0.39 / 0.39 / 0.43 / 0.38 | 12.37% / 9.11% / 10.09% / 7.87%
GemNet-OC [45], Relaxation, 133,934,018 | 0.35 / 0.35 / 0.38 / 0.34 | 16.06% / 12.62% / 13.17% / 11.06%
The gradient approach can also make training unstable in certain cases, as has been observed for ForceNet [56] and GemNet-OC [45]. Table 1 compares performance on the S2EF task for two recent top-performing models, GemNet-dT [35] and SpinConv [40]. Not only are the force accuracies worse for the gradient approach, but the corresponding relaxed structure and relaxed energy metrics calculated via optimization are also significantly worse [40]. While energy conservation plays a critical role in many molecular applications, we observe that direct force computation brings efficiency and performance advantages [40, 56]. Models trained for direct force prediction are limited to applications where strict enforcement of energy conservation can reasonably be ignored, e.g., OC20's structural relaxations, where atomic positions are updated solely from force estimates [1, 57]. If necessary, DFT, or a subsequent ML model, can then be used to make reliable energy predictions on the ML-optimized structure. Transition states or saddle points can be derived in a similar manner with direct-force models. We want to emphasize that although unorthodox, direct-force models still prove useful in certain catalyst applications, i.e., OC20-like tasks.

Prediction of relaxed energy and structure

Adsorption energy is one of many properties that help inform catalyst performance [58]. Computationally, it is computed via a series of QM structural relaxations; the relaxed energy is then referenced to represent the adsorption energy (see Chanussot et al. [1] and García-Muelas et al. [59] for more details). From a data-driven perspective, we can predict the relaxed energy or the relaxed structure of an atomic system via two methods. First, we can build a surrogate to DFT, approximating system energy and per-atom forces, and run ML optimizations to find the minimum energy, a common approach within the field. Alternatively, given a large enough dataset of relaxed structures and energies, we can try to predict these properties directly with an ML model instead of optimizing via an iterative loop. The advantage of the direct method over the relaxation approach is that it requires only a single call to the ML model, whereas the relaxation approach requires on average 200-300 calls per relaxation. Direct approaches are particularly advantageous for large-scale inference on the order of hundreds of millions to billions of systems.

The community has made tremendous progress in predicting adsorption energy as evaluated by the OC20 IS2RE task (Figure 2). Direct approaches, despite using 300x less data, are approaching the competitive relaxation-based approaches of GemNet-XL and GemNet-OC. Inference time aside, models trained on the full 133M dataset for the relaxation-based approaches are typically compute intensive, using between 128 and 512 GPUs [1, 35, 40, 56]. While this is a small price to pay if the resulting models accelerate the discovery process, it does make it difficult for the community to engage in and aid development. This was particularly evident in the NeurIPS 2021 Open Catalyst Challenge [44], where of the 30 submissions, none were made via the relaxation approach. Here, we show that models trained on a 2M subset of the full dataset are still able to provide competitive results and even, averaged across all splits, outperform direct approaches.
Given that trends on the 2M dataset correlate well with the full 133M dataset [45], this should help incentivize the community to explore other approaches even with resource limitations. Although the relaxation approach is computationally expensive for both training and inference, we have observed that models trained through this approach tend to generalize better on out-of-distribution (OOD) data (Table 2). Direct relaxed energy prediction is an easier ML problem than direct structure prediction: for a system of size N, energy prediction requires a single scalar output, while structure prediction requires 3N components. We find that for relaxed energy prediction, metrics are closer between the direct and relaxation approaches, whereas for structure prediction the direct metrics are worse. The OC20 paper provides a baseline for relaxed structure prediction only via the relaxation approach [1]. In Table 3 we provide baselines for direct relaxed structure prediction. A considerable gap exists between the direct and relaxation-based approaches (especially in the DFT-based metrics).

Metrics for finding local minima

Relaxed structure prediction is less straightforward than some of the other common energy and force prediction tasks. Given a dataset like OC20, where relaxed structures are not necessarily global minima, a model trained on such a dataset could (1) predict and arrive at the same local minimum, (2) arrive at a different but still suitable minimum, or (3) fail to arrive at any sort of minimum. To account for this, two main metrics were presented in the OC20 paper. Average Distance within Threshold (ADwT) is a distance-based metric that measures how close the predicted structure is to the reference structure; it is similar to the Global Distance Test (GDT) metric in protein folding [60, 61]. ADwT averages across thresholds varying from 0.1 to 0.5 Å to ensure a signal is captured. For the OC20 dataset, evaluating this metric on the input initial structures gives an accuracy of 21.18% on the in-domain validation set [1]; models, at a bare minimum, should perform better than this baseline. To ensure invariance to arbitrary coordinate reference frames, we predict the difference between initial and final positions instead of the final Cartesian coordinates. Predicting this delta simplifies the task and results in improved ADwT accuracies.

Table 3: Baseline metrics for the IS2RS direct task in comparison with the relaxation approach. Metrics are reported on a 2k subset of the validation set, across all splits. DwT is evaluated at a threshold of 0.04 Å. For compute reasons, DFT-based metrics were evaluated on a 200-system subset of the 2k, 50 systems from each split. Columns: DwT (at 0.04 Å) ↑ | ADwT ↑ | FbT* ↑ | AFbT* ↑.

Direct, ForceNet [56] | 0.70 | 45.69% | 0.00% | 0.00%
Direct, SpinConv [40] | 1.05 | 47.76% | 0.00% | 0.00%
Direct, GemNet-dT [35] | 1.75 | 45.87% | 0.00% | 0.08%
Relaxation, ForceNet [56] | 1.45 | 46.51% | 0.00% | 7.64%
Relaxation, SpinConv [40] | 8.20 | 55.81% | 0.00% | 12.55%
Relaxation, GemNet-dT [35] | 13.95 | 60.88% | 0.00% | 20.35%

A model that predicts a relaxed structure that is not identical to its DFT reference may still be considered successful for two reasons: (1) the model could have predicted a symmetrically identical site on the surface, or (2) the model predicted a different but still suitable local minimum.
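A rough sketch of ADwT as described above is given below. Note this is our own simplification: it counts a structure as within threshold t if its mean per-atom displacement is below t, and the exact per-structure criterion and PBC handling in the official OCP evaluation code may differ.

```python
import numpy as np

def adwt(pred_list, ref_list, thresholds=(0.1, 0.2, 0.3, 0.4, 0.5)):
    """Average Distance within Threshold over a set of structures.
    pred_list, ref_list: lists of (n_i, 3) position arrays (Å)."""
    mean_disp = np.array([
        np.linalg.norm(p - r, axis=1).mean()     # mean per-atom distance
        for p, r in zip(pred_list, ref_list)
    ])
    # Fraction of structures within each threshold, averaged over thresholds.
    return np.mean([(mean_disp < t).mean() for t in thresholds])
```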
The former is more a concern for the distance-based metric: ADwT, although it accounts for periodic boundary conditions, does not consider symmetrically identical sites. While it is rather unlikely that an adsorbate initialized over a particular site will hop several sites over to a symmetrically identical one, it is worth raising awareness of the possibility. On the other hand, a model that arrives at an entirely different relaxed structure will fail according to ADwT. To verify whether the model has predicted a different but suitable minimum, we can evaluate the DFT forces on the ML-predicted structures. This metric, called Average Force below Threshold (AFbT), measures the percentage of structures whose forces are close to zero [1]. Since models are expected to predict relaxed structures, the DFT forces should be near zero. This is a stricter metric than ADwT, but far more expensive due to the additional DFT calculations. An even more practically useful metric would be the number of DFT steps required to reach the relaxed structure starting from the ML-relaxed structure; this would indicate what fraction of DFT calculations current ML models can eliminate. Although useful, this is significantly more expensive than even AFbT. While the Open Catalyst Project (OCP) does not track it on the public leaderboard, we bring awareness to it, as there could be instances where models do poorly on ADwT and AFbT but the resulting structures are only a few DFT steps away from the relaxed structure.

In Table 3 we compare relaxed structure prediction via the direct and relaxation approaches. We observe that direct methods, although having competitive ADwT metrics, have significantly worse AFbT metrics. This suggests that direct models do a reasonable job of getting close to the relaxed structure but land in high-force configurations, failing to capture repulsive physical interactions [43]. We speculate that models struggle here because small perturbations in distance can have large consequences on forces, e.g., moving two atoms at an equilibrium bond length a fraction of an angstrom towards each other.
Table 4 presents results for GemNet-OC [45] models trained on S2EF, Rattled, and MD data, compared against a similar analysis from the OC20 paper for DimeNet++ [21, 34]. First, on the force MAE metric, the addition of MD data hurts DimeNet++ while it improves GemNet-OC. We speculate this to be another artifact of modeling forces as negative gradients of energy (as in DimeNet++) vs. direct prediction (as in GemNet-OC). Second, consistent with the OC20 paper, adding MD data to the training set provides a useful signal for IS2RS structure relaxations as measured by the AFbT metric. Finally, adding Rattled data helps the IS2RS metrics, but did not help or marginally hurt the S2EF force MAE. This could be due to a variety of reasons: random perturbations being too large or too small to be useful, intermediate structures along a trajectory being less useful than those closer to the local minimum (as in MD initial structures), etc. A promising direction here could be active learning approaches that optimally query additional training data points.

Table 4: Results with DimeNet++ (DN++) and GemNet-OC (GN-OC) trained on MD and Rattled data. S2EF results reported for the validation in-distribution set (Force MAE, lower is better); IS2RS results reported on the test set (ADwT and AFbT, higher is better). Columns: Training Data (# samples) | Force MAE | ADwT | AFbT.

DN++, 20M (20M) | 0.0511 | 34.37% | 2.67%
DN++, 20M + MD (58M) | 0.0594 | 47.69% | 17.09%
DN++, 20M + Rattled (37M) | 0.0614 | 43.94% | 12.51%
GN-OC, All (133M) | 0.0179 | 60.33% | 35.27%
GN-OC, All + MD (172M) | 0.0173 | 60.77% | 38.05%
GN-OC, All + MD + Rattled (189M) | 0.0174 | |

Summary and Outlook

The development of generalizable or universal ML models has only recently been seriously considered with the emergence of large-scale datasets like OC20 [1]. Since its release, the catalysis and ML communities have both made tremendous progress in developing models for catalyst applications. As the community continues to grow and more datasets emerge that span material and composition space, the prospect of large-scale generalizable models is within reason. Progress thus far has demonstrated several challenges in accomplishing this feat: classes of materials and adsorbates with inconsistent errors, energy-conserving forces, relaxed vs. direct approaches, DFT metrics, and data augmentation strategies. In this perspective, we discussed these challenges in detail and provided some insights as to how and why they are important. Although these challenges were discussed in the context of OC20, we anticipate similar challenges for future datasets of its kind. Datasets like OC20 have offered new ways of thinking about building large, generalizable, and reliable models. While model performance has been the focal point of community progress thus far, we provide an outlook on other important challenges that we hope the community will engage in.

Training strategies. OC20 was released with predefined training, validation, and test sets. Its splits were curated to tackle the problem of building a single generalizable model for catalysis. However, it could be the case that multiple models for different subsets of the data, e.g., adsorbates, compositions, or materials, do better. In the case of nonmetals, for instance, we have shown that this actually hurts performance, a possible consequence of the reduced dataset size.

Uncertainty and active learning. While model performance is a necessary step for the discovery process, it is not always sufficient.
A practical ML-aided catalyst discovery pipeline will ultimately turn to experiments to validate whether the ML-predicted "great" catalyst is at all effective. Having confidence in these predictions is particularly important to avoid wasting expensive experiments. Uncertainty quantification has been a particularly popular topic within the catalysis community, often focused on the small-data regime and active learning [62-68]. Assessing the effectiveness of traditional uncertainty estimation techniques on large datasets like OC20 is a necessary and important step for the future of this work. Similarly, how to best leverage active learning, for either dataset generation and/or augmentation [69] or online active learning [64, 65], at the scale of OC20 will be an exciting future direction.

Model efficiency. In addition to model performance and reliability, model efficiency will continue to be critical for all applications. For training, faster, more data-efficient models can help attract the community to tackle some of the bigger challenges, like a surrogate to DFT, i.e., OC20's S2EF task. Progress so far has shown that the best models are also the largest models. From an inference perspective, this poses obvious challenges of slower speeds and ultimately reduced screening throughput. While models remain orders of magnitude faster than DFT, when considering the possibility of screening billions of systems, computational costs add up. Recent models encoding equivariant representations [36, 70] have shown incredible scaling and efficiency gains that could be promising to explore. Moving forward, efficient architectures and model distillation [71] will be important contributions to reducing the computational cost of large-scale inference, even if it means sacrificing some accuracy.

Data augmentation. The scale of OC20 makes data augmentation a non-trivial challenge. With 130M+ training data points, randomly adding 10-100k data points will likely have negligible impact on the models. We observed that models using the additional MD data perform best, while the rattled data has little impact. Identifying strategies to combine and train on large molecular and material datasets like ANI-1 [20] and OQMD [72] alongside OC20 could help improve models even further. The biggest challenge here comes from combining datasets computed at varying levels of DFT theory.

Energy-conserving forces. In the context of OC20, we have observed that the best-performing models make direct force predictions. While this may be suitable for some applications, the more physically motivated gradient approach to force prediction is desired for other applications like MD. The same direct models applied to MD17 show the opposite effect: better performance via the gradient method [45]. It remains an open question why this is the case, and we encourage others to investigate this observation.

Physics-based modeling. The majority of models submitted to OC20 have followed a purely data-driven approach, taking in only atomic numbers and positions as inputs. Exploring ways to leverage OC20 charge density or Bader charge data (to be made publicly available at https://github.com/Open-Catalyst-Project/ocp/blob/main/DATASET.md) could prove useful, particularly in the low-data regime. Additionally, models like UNiTE [73] or OrbNet [74] that leverage tight-binding DFT [75] for featurization could be interesting to explore for catalyst applications.

Glossary

ADwT: Average Distance within Threshold
AFbT: Average Force below Threshold
DFT: Density Functional Theory
DwT: Distance within Threshold
EwT: Energy within Threshold
GNNs: Graph Neural Networks
IS2RE: Initial Structure to Relaxed Energy
IS2RS: Initial Structure to Relaxed Structure
OC20: Open Catalyst 2020 Dataset
OCP: Open Catalyst Project
S2EF: Structure to Energy and Forces" + } + ], + "Muhammed Shuaibi": [ + { + "url": "http://arxiv.org/abs/2106.09575v1", + "title": "Rotation Invariant Graph Neural Networks using Spin Convolutions", + "abstract": "Progress towards the energy breakthroughs needed to combat climate change can\nbe significantly accelerated through the efficient simulation of atomic\nsystems. Simulation techniques based on first principles, such as Density\nFunctional Theory (DFT), are limited in their practical use due to their high\ncomputational expense. Machine learning approaches have the potential to\napproximate DFT in a computationally efficient manner, which could dramatically\nincrease the impact of computational simulations on real-world problems.\nApproximating DFT poses several challenges. These include accurately modeling\nthe subtle changes in the relative positions and angles between atoms, and\nenforcing constraints such as rotation invariance or energy conservation. We\nintroduce a novel approach to modeling angular information between sets of\nneighboring atoms in a graph neural network. Rotation invariance is achieved\nfor the network's edge messages through the use of a per-edge local coordinate\nframe and a novel spin convolution over the remaining degree of freedom. Two\nmodel variants are proposed for the applications of structure relaxation and\nmolecular dynamics. State-of-the-art results are demonstrated on the\nlarge-scale Open Catalyst 2020 dataset. Comparisons are also performed on the\nMD17 and QM9 datasets.", + "authors": "Muhammed Shuaibi, Adeesh Kolluru, Abhishek Das, Aditya Grover, Anuroop Sriram, Zachary Ulissi, C. Lawrence Zitnick", + "published": "2021-06-17", + "updated": "2021-06-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CE", + "I.2.6; J.2" + ], + "main_content": "Introduction

Many of the world's challenges, such as finding energy solutions to address climate change [35, 3] and drug discovery [22, 28], are fundamentally problems of atomic-scale design. A notable example is the discovery of new catalyst materials to drive chemical reactions that are essential for addressing energy scarcity, renewable energy storage, and more broadly climate change [35, 23]. Potential catalyst materials are typically modeled using Density Functional Theory (DFT), which estimates the forces exerted on each atom and the energy of a system or structure of atoms. Unfortunately, the computational complexity of DFT limits the scale at which it can be applied. Efficient machine learning approximations to DFT calculations hold the potential to significantly increase the discovery rate of new materials for these important global problems.

Graph Neural Networks (GNNs) [10, 34] are a common approach to modeling atomic structures, where each node represents an atom and the edges represent the atom's neighbors [26, 9, 13, 25, 27, 33, 20, 15]. A significant challenge in designing models is utilizing relative angular information between atoms while maintaining the model's invariance to system rotations.
Numerous approaches have been proposed, such as only using the distance between atoms [25, 27, 33], or limiting equivariant angular representations to linear transformations to maintain equivariance [31, 2, 1, 29]. One promising approach is the use of triplets of neighboring atoms to define local coordinate frames that are invariant to system rotations [15, 14]. The relative angles between the three atoms may be used to update the GNN's messages while maintaining the network's invariance to rotations. It has been shown that this additional angular information results in significantly improved accuracies on several tasks [15, 14, 3].

[Figure 1: Illustration of projecting an atom s′ in the neighborhood of s onto a sphere in a local coordinate frame defined by atoms s and t (left). For each projected atom, a corresponding latitude φ (inclination) and longitude θ (azimuth) is computed for its projection onto a 2D reference frame (middle). The spin convolution is done in the longitudinal direction, corresponding to a roll in 3D space. (right) Example channel filters learned using the grid-based approach for the first through third message blocks and the force block.]

We propose encoding angular information using a local reference frame defined by only two atoms: the source and target atoms of each edge in a GNN. Using this reference frame, a spherical representation of the incoming messages to the source atom is created (Figure 1). The representation has the benefit of encoding all neighboring atom information, not just information between atom triplets, which may allow higher-order information to be captured. The complication is that a reference frame defined by two atoms (or two 3D points) still has one remaining degree of freedom: the roll rotation about the axis defined by the two points. If this final degree of freedom is not accounted for, the model will not be invariant to system rotations. Our solution is to perform a convolution on the spherical representation across this final rotation, called a "spin convolution". By globally pooling the convolution's features, the resulting SpinConv model maintains rotation invariance while enabling the capture of rich angular information.

We describe two model variations, used depending on the importance of energy conservation in the final application. We propose an energy-centric model that enforces energy conservation by calculating the forces as the negative partial derivative of the energy with respect to the atoms' positions [4]. Our second approach is a force-centric model that directly estimates the atomic forces and is not energy conserving. While the force-centric model's energy estimate is rotation invariant, its final force estimation layer is not strictly rotation equivariant; through its architectural design, however, it is encouraged to learn rotation equivariance during training.

Results are demonstrated on the Open Catalyst 2020 (OC20) dataset [3], aimed at simulating catalyst materials useful for climate-change-related applications. The OC20 dataset contains over 130M training examples for approximating the DFT-estimated forces and energies. Our SpinConv model achieves state-of-the-art performance for both energy and force estimation.
Notably, the force-centric variant, which is not energy conserving, outperforms the energy-centric models. Significant gains in accuracy are achieved for predicting relaxed energies from initial structures by using the force-centric approach to predict the relaxed structure followed by its energy. Ablation studies are performed on numerous architectural choices, such as the choice of spherical representation and the size of the model. For completeness, we also evaluate our model on the MD17 [4, 5] and QM9 [22] datasets, which measure accuracy on molecular dynamics and property prediction tasks for small molecules, respectively. Results compare favorably with respect to state-of-the-art methods.

2 Approach

We model a system or structure of atoms using a Graph Neural Network (GNN) [10, 16, 34], where the nodes represent atoms and the edges represent the atoms' neighbors. In this section, we describe both an energy-centric and a force-centric model for estimating atomic forces, which vary in how they estimate forces and whether they are energy conserving. We begin by describing the components shared by the two approaches, followed by how these components are used. Code will be released upon acceptance under a permissive open-source license.

[Figure 2: (left) Overall model diagram for the energy-centric model, taking atom positions x and atomic numbers a as input and estimating the energy E. (right) Diagram of the embedding and force blocks. The force block is only used in the force-centric model to estimate the per-atom forces after the message blocks.]

2.1 Inputs and Outputs

The inputs to the network are the 3D positions $x_i$ and the atomic numbers $a_i$ for all $i \in \{1, \dots, n\}$ atoms. The outputs are the per-atom forces $f_i \in \mathbb{R}^3$ and the overall structure's energy $E$. The 3D distance offset between a pair of source and target atoms $s$ and $t$ is $x_{st} = x_s - x_t$, with distance $d_{st} = \|x_{st}\|_2$. Directional information is encoded using the normalized unit vector $\hat{x}_{st} = x_{st}/d_{st}$. The graph neural network is constructed with each atom $t$ as a node and edges representing the atom's neighbors $s \in N_t$, where $N_t$ contains all atoms $s$ with $d_{st} < \delta$. Each edge has a corresponding message $m_{st}$ that passes information from atom $s$ to $t$. The output forces and energy are computed as a function of the edge messages $m_{st}$, which we describe next.

2.2 Energy and force estimation

Both the energy-centric and force-centric models compute the structure's energy $E$ as an output. Our GNN updates, for each edge, an M-dimensional hidden message $h^{(k)}_{st} \in \mathbb{R}^M$ for $K$ iterations. The structure's energy $E \in \mathbb{R}$ is computed as a function of the final layer of edge messages:

$E(x, a) = \sum_t F_e\left(a_t, \sum_s h^{(K)}_{st}\right) \quad (1)$

where $F_e$ is a single embedding block described later. As we also discuss later, the edge messages $h_{st}$ are invariant to system rotations, so the estimated energy $E$ is also invariant.

The estimation of forces differs between the energy-centric and force-centric models. The energy-centric model estimates the forces as the negative partial derivative of the energy with respect to the atom positions. This approach has the benefit of enforcing energy conservation [4], i.e., the forces along any closed path sum to zero.
2.3 Messages

The edge messages are iteratively updated to allow information from increasingly distant atoms to be captured. Each message is represented by a tuple $m_{st} = \{\hat{x}_{st}, d_{st}, h_{st}^{(k)}\}$, where $h_{st}^{(k)}$ is the message's hidden state at iteration k. Both $\hat{x}_{st}$ and $d_{st}$ are used to update the message's hidden state $h_{st}$, which is itself rotation invariant due to the spin convolution that we describe later. The hidden state $h_{st} \in \mathbb{R}^M$ is updated using:

$h_{st}^{(k+1)} = h_{st}^{(k)} + F_h\big(a_s, a_t, m_{st}^{(k)}, m_s^{(k)}\big)$    (4)

where $m_s^{(k)}$ is the set of messages coming into node s, i.e., all $m_{\acute{s}s}$ with $\acute{s} \in N_s$. The form of $F_h$ is illustrated in Figure 2. It contains three parts: the spin convolution, which transforms a spherical projection of the messages into a rotation invariant representation; the distance block, which encodes the distance $d_{st}$ between atoms; and the embedding block, which incorporates information about the atoms' atomic numbers. The output of the spin convolution is passed through an embedding block, added to the output of the distance block, and finally passed through another embedding block. We describe each of these parts in turn. The hidden messages are initialized using just a distance block followed by an embedding block (Figure 2).
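A minimal rendering of the residual update in Equation (4) is sketched below. Here $F_h$ is collapsed into a single MLP over the current hidden state and a pooled summary of the incoming messages; the real block instead chains the spin convolution, distance block, and embedding blocks described next, so this is an assumption-laden simplification.

```python
import torch

class MessageUpdate(torch.nn.Module):
    def __init__(self, hidden_dim=256):
        super().__init__()
        # Simplified stand-in for F_h; the actual block composes the spin
        # convolution, distance block, and embedding blocks (Figure 2).
        self.f_h = torch.nn.Sequential(
            torch.nn.Linear(2 * hidden_dim, hidden_dim),
            torch.nn.SiLU(),
            torch.nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, h_st, incoming):
        # h_st: (hidden_dim,) hidden state of edge s -> t at iteration k
        # incoming: (num_neighbors, hidden_dim) messages coming into node s
        pooled = incoming.mean(dim=0)
        return h_st + self.f_h(torch.cat([h_st, pooled], dim=-1))  # Eq. (4)
```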
2.3.1 Spin Convolution

The spin convolution captures information about the neighbors $\acute{s} \in N_s$ of atom s when updating the message hidden state $h_{st}$. It has three stages that we describe in turn: projection, convolution, and pooling. The convolution captures the relative angular information between the neighboring atoms, and the pooling ensures the output D-dimensional feature representation is invariant to system rotations.

An important feature is the angular information of the neighboring atoms in $N_s$ relative to s and t. This information is encoded by creating a local reference frame in which atom s is the center (0, 0, 0) and the z-axis points from atom s to atom t. As shown in Figure 1 (left), this fixes all degrees of freedom except the roll rotation about the vector from s to t. The spin convolution is performed across a discretized set of rotations about the roll rotation axis. At each rotation, the atoms $\acute{s}$ are projected onto a sphere centered on s and used to create a spherical representation of the hidden states $h_{\acute{s}s}$. Each atom $\acute{s} \in N_s$ is projected using a polar coordinate frame (φ, θ), where φ may be viewed as the latitude (inclination) and θ as the longitude (azimuth). The polar coordinates are computed in the local edge coordinate frame using $\bar{x}_{\acute{s}s} = R_{st}\,\hat{x}_{\acute{s}s}$, where $R_{st}$ is a 3D rotation matrix that satisfies $R_{st}\,\hat{x}_{st} = (0, 0, 1)$.

To capture the rich information encoded in the relative angular information between atoms, a set of filters is applied to the spherical representation (Figure 1 (right)), similar to how a filter is applied to an image patch with traditional CNNs. We explore two potential spherical representations: spherical harmonics and a grid-based approach. Spherical harmonics represent a spherical function using a set of basis functions that are equivariant to rotations. The degree ℓ indicates the number of basis functions $L = (\ell + 1)^2$ used. The spherical representation of the incoming messages for each atom is $\mathbb{R}^L \times \mathbb{R}^M$, where M is the size of the message hidden states in h. The second approach uses the computed polar coordinates (φ, θ) for all $\acute{s} \in N_s$ to create a grid-based representation (Figure 1 (middle)). The polar coordinates are discretized, creating an $\mathbb{R}^\varphi \times \mathbb{R}^\theta \times \mathbb{R}^M$ feature representation. Each message hidden state $h_{\acute{s}s}^{(k)} \in \mathbb{R}^M$ is added to the 3D feature representation using bilinear interpolation with its corresponding (φ, θ).

A 1D convolution is performed with either spherical representation in the longitudinal direction. Filters have the same size as the feature representation, $\mathbb{R}^L \times \mathbb{R}^M$ or $\mathbb{R}^\varphi \times \mathbb{R}^\theta \times \mathbb{R}^M$ for spherical harmonics and the grid-based approach respectively. Full-coverage filters are used since the angular relationship between atoms at distant angles is important; e.g., the forces of atoms at exactly 180° from each other may cancel out. Large filters also enable the network to learn the complex relationships between numerous neighboring atoms. Rotations are performed using Wigner D-matrices for the spherical harmonic representation, while a simple translation is used for the grid-based representation. The result of the convolution is an $\mathbb{R}^\theta \times \mathbb{R}^D$ feature vector corresponding to D filters applied to each longitudinal orientation. To make the representation invariant to rotations, average pooling is performed in the longitudinal direction, resulting in a final $\mathbb{R}^D$ feature vector.
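The grid-based variant of the projection, convolution, and pooling stages can be sketched as follows. This is a simplified illustration under stated assumptions: it uses nearest-cell binning instead of bilinear interpolation, randomly initialized weights stand in for learned filters, and all function names are hypothetical rather than from the released code.

```python
import torch
import torch.nn.functional as F

def local_frame(x_st):
    # Build a rotation R_st with R_st @ x_st_hat = (0, 0, 1). The roll about
    # the s -> t axis is left arbitrary; the spin convolution pools it away.
    z = F.normalize(x_st, dim=0)
    helper = torch.tensor([1.0, 0.0, 0.0])
    if (z @ helper).abs() > 0.9:  # nearly parallel: pick another helper axis
        helper = torch.tensor([0.0, 1.0, 0.0])
    x_axis = F.normalize(torch.cross(helper, z, dim=0), dim=0)
    y_axis = torch.cross(z, x_axis, dim=0)
    return torch.stack([x_axis, y_axis, z])

def spin_convolution(x_st, nbr_vecs, nbr_h, n_phi=12, n_theta=8, n_filters=16):
    # 1) Projection: polar coordinates of neighbors in the local edge frame.
    v = F.normalize(nbr_vecs @ local_frame(x_st).T, dim=-1)
    phi = torch.acos(v[:, 2].clamp(-1.0, 1.0))               # inclination
    theta = torch.atan2(v[:, 1], v[:, 0]) % (2 * torch.pi)   # azimuth
    M = nbr_h.shape[-1]
    cell = (phi / torch.pi * n_phi).long().clamp(max=n_phi - 1) * n_theta \
        + (theta / (2 * torch.pi) * n_theta).long().clamp(max=n_theta - 1)
    grid = torch.zeros(M, n_phi * n_theta).scatter_add_(
        1, cell.expand(M, -1), nbr_h.T).view(1, M, n_phi, n_theta)
    # 2) Convolution: a full-coverage filter slid along theta (the roll),
    #    with circular padding so angles 0 and 2*pi wrap around.
    weight = torch.randn(n_filters, M, n_phi, n_theta)  # learned in practice
    padded = torch.cat([grid, grid[..., : n_theta - 1]], dim=-1)
    out = F.conv2d(padded, weight)       # (1, n_filters, 1, n_theta)
    # 3) Pooling: averaging over theta makes the feature roll-invariant.
    return out.mean(dim=-1).flatten()    # R^D with D = n_filters
```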
2.3.2 Distance Block

The distance block encodes the distance between two atoms. The distance is encoded using a set of evenly distributed Gaussian basis functions G with means $\mu_i$ and standard deviation σ. The means of the basis functions are evenly distributed from 0 to δ angstroms. Since the atomic radius of each element varies, the relative position of two atoms s and t is highly dependent on their atomic numbers $a_s$ and $a_t$. To account for this, a gain $v_{a_s a_t}$ and an offset $u_{a_s a_t}$ scalar for the distance $d_{st}$ are learned for each potential pair of atomic numbers:

$b_i = G_i\big(v_{a_s a_t} d_{st} + u_{a_s a_t} - \mu_i, \sigma\big)$    (5)

The resulting feature b is passed through a linear transformation to create a D-dimensional feature vector that is passed to the next block.

2.3.3 Embedding Block

The embedding block incorporates the atomic number information $a_s$ and $a_t$ into the update of the message's hidden state. The embedding operation may be interpreted as a mixture-of-experts [18] approach that computes B different variations of the input, which are weighted by an embedding computed from the atoms' atomic numbers. The block's inputs are used to compute B sets of hidden values $V_{st} \in \mathbb{R}^D \times \mathbb{R}^B$. One-hot embeddings for the atomic numbers $a_s$ and $a_t$ are concatenated and used to compute a B-dimensional vector $v_{st} \in \mathbb{R}^B$ for weighting the B different sets of hidden values. An illustration of the learned embeddings is shown in Figure 3. $v_{st}$ is computed using a two-layer network and a softmax. The matrix $V_{st}$ is multiplied by the vector $v_{st}$, resulting in a vector of length D. As shown in Figure 2, the result is passed through an additional fully connected layer before being passed to the next block. The output of the block is of size D when it is used in the message update. If the embedding block is used to compute the final energy, only the atomic number $a_t$ embedding is used, the input dimension is M instead of D, and the output is of size 1.

Figure 3: Illustration of learned embeddings (weights on the one-hot embeddings) for the source $a_s$ and target $a_t$ atomic numbers plotted on a periodic table. A random sample of 12 values from each embedding is shown. Embeddings are from the first embedding block in the first message update. Note that neighboring elements in the periodic table with similar properties have similar weights. Elements not in the OC20 dataset are marked with a light grey checkerboard pattern.

2.4 Force Block

The force block computes the per-atom 3D forces f from $a_t$, $\hat{x}_t$, and $h_t^{(K)}$ using Equation (3). The force block uses a similar spin convolution as the message block, except the sphere is centered on the target atom t and is oriented along the x, y and z axes to compute $f_x$, $f_y$ and $f_z$ respectively. That is, the force block is used three times to compute the force magnitude in each orthogonal direction for each atom. The force block uses the same embedding blocks as message passing (Figure 2). The same weights are used to compute forces in each of the three directions; only the orientation of the sphere used to create the convolutional features changes. To add more robustness to the force estimation and encourage rotational equivariance, the overall structure may be randomly rotated several times and the forces estimated. The multiple estimates may then be rotated back to the original reference frame and averaged. For both training and testing, five random rotations are used. Empirically, this approach encourages the network to learn an approximately rotation equivariant representation even though rotation equivariance is not strictly enforced.
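The random-rotation averaging can be sketched as below, where force_model is any direct force predictor mapping positions and atomic numbers to per-atom forces. The uniform rotation sampling via QR decomposition is one standard choice and an assumption here, not necessarily the paper's exact sampling procedure; the toy model at the end is purely for illustration.

```python
import torch

def rotation_averaged_forces(force_model, x, a, n_rotations=5):
    estimates = []
    for _ in range(n_rotations):
        # Sample a random proper rotation (QR of a Gaussian matrix,
        # sign-corrected so the distribution is approximately uniform).
        q, r = torch.linalg.qr(torch.randn(3, 3))
        q = q * torch.sign(torch.diagonal(r))
        if torch.det(q) < 0:
            q[:, 0] = -q[:, 0]   # flip one axis to make det(q) = +1
        f_rot = force_model(x @ q.T, a)  # forces in the rotated frame
        estimates.append(f_rot @ q)      # rotate estimates back
    return torch.stack(estimates).mean(dim=0)

toy_model = lambda x, a: -x  # toy central force field, for illustration
forces = rotation_averaged_forces(toy_model, torch.randn(6, 3),
                                  torch.tensor([1, 1, 6, 7, 8, 1]))
```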
Model | Hidden dim | #Msg layers | #Params | Train time | Inference time | Energy MAE [eV] ↓ | Force MAE [eV/Å] ↓ | Force Cos ↑ | EFwT [%] ↑
Median | – | – | – | – | – | 2.258 | 0.08438 | 0.0156 | 0.005
SchNet [27, 3] | 1024 | 5 | 9.1M | 194d | 0.8h | – | 0.04903 | 0.3413 | 0
DimeNet++ [14, 3] | 192 | 3 | 1.8M | 587d | 8.5h | 0.5343 | 0.04758 | 0.3560 | 0.05
DimeNet++ energy-only [14, 3] | 192 | 3 | 1.8M | 587d | 8.5h | 0.4802 | 0.3459 | 0.1021 | 0.0
DimeNet++ force-only [14, 3] | 192 | 3 | 1.8M | 587d | 8.5h | – | 0.03573 | 0.4785 | –
DimeNet++-large [14, 3] | 512 | 3 | 10.7M | 1600d | 27.0h | – | 0.03275 | 0.5408 | –
ForceNet [12] | 512 | 5 | 11.3M | 31d | 1.3h | – | 0.03432 | 0.4770 | –
ForceNet-large [12] | 768 | 7 | 34.8M | 194d | 3.5h | – | 0.03113 | 0.5195 | –
SpinConv (energy-centric) | 256 | 3 | 6.1M | 275d | 22.7h | 0.4114 | 0.03888 | 0.4299 | 0.16
SpinConv (energy-centric) force-only | 256 | 3 | 6.1M | 380d | 22.7h | – | 0.03258 | 0.4976 | –
SpinConv (force-centric) | 256 | 3 | 8.5M | 275d | 9.1h | 0.3363 | 0.02966 | 0.5391 | 0.45

Table 1: Comparison of SpinConv to existing GNN models on the S2EF task. Average results across all four test splits are reported. We mark in bold the best performance and close ones, i.e., within 0.0005 MAE, which according to our preliminary experiments is a good threshold to meaningfully distinguish model performance. Training time is in GPU days, and inference time is in GPU hours. Median represents the trivial baseline of always predicting the median training force across all the validation atoms.

Model | Energy MAE (eV) ↓ (ID / OOD Ads. / OOD Cat. / OOD Both) | Force MAE (eV/Å) ↓ (ID / OOD Ads. / OOD Cat. / OOD Both)
Median | 2.043 / 2.420 / 1.992 / 2.577 | 0.0809 / 0.0801 / 0.0787 / 0.0978
Energy loss only:
SchNet | 0.395 / 0.446 / 0.551 / 0.703 | –
DimeNet++ | 0.359 / 0.402 / 0.506 / 0.654 | –
Force loss only:
SchNet | – | 0.0443 / 0.0469 / 0.0459 / 0.0590
DimeNet++ | – | 0.0331 / 0.0341 / 0.0340 / 0.0417
DimeNet++-large | – | 0.0281 / 0.0289 / 0.0312 / 0.0371
ForceNet | – | 0.0313 / 0.0320 / 0.0331 / 0.0409
ForceNet-large | – | 0.0278 / 0.0283 / 0.0309 / 0.0375
SpinConv (energy-centric) | – | 0.0309 / 0.0321 / 0.0315 / 0.0393
Energy and force loss:
SchNet | 0.443 / 0.491 / 0.529 / 0.716 | 0.0493 / 0.0527 / 0.0508 / 0.0652
DimeNet++ | 0.486 / 0.470 / 0.533 / 0.648 | 0.0443 / 0.0458 / 0.0444 / 0.0558
SpinConv (energy-centric) | 0.351 / 0.367 / 0.411 / 0.517 | 0.0358 / 0.0374 / 0.0364 / 0.0460
SpinConv (force-centric) | 0.261 / 0.275 / 0.350 / 0.459 | 0.0269 / 0.0277 / 0.0285 / 0.0356

Table 2: Comparison of SpinConv to existing GNN models on the four OC20 test splits. We mark in bold the best performance and close ones, i.e., within 0.0005 MAE. Median represents the trivial baseline of always predicting the median training force across all the validation atoms.

3 Experiments

In this section, we begin by presenting our primary results on the Open Catalyst 2020 (OC20) dataset [3] and compare against state-of-the-art models. This is followed by results on the smaller datasets of MD17 [4, 5] and QM9 [22] for additional model comparison.

Implementation details. For all models, the edge messages have size M = 32 with K = 3 layers, the hidden dimension is D = 256, and the embedding dimension is B = 8. Unless otherwise stated, the convolutional filters are of size 16x12 and 12x8 for the force-centric and energy-centric models respectively. A smaller filter size was used for the energy-centric model due to memory constraints. GroupNorm [32] is applied after the spin convolution with group size 4. An L1 loss is used for all experiments. The force loss was weighted by 100 with respect to the energy loss, except for the force-only models, where the energy loss is set to 0. All models were trained with Adam (amsgrad) to convergence, with the learning rate multiplied by 0.8 when the validation error plateaus. Training was performed using batch sizes ranging from 64 to 96 samples across 32 Volta 32GB GPUs. The Swish [21] function is used for all non-linear activations. The neighbors $s \in N_t$ of each atom t are found using a distance threshold of δ = 6Å. If more than 30 atoms are within the distance threshold, only the closest 30 are used. The distance block uses 256 to 512 Gaussian basis functions with σ's equal to three times the distance between Gaussian means.
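The radial encoding these hyperparameters refer to can be sketched as follows, combining the evenly spaced Gaussian basis with the per-element-pair gain and offset of Equation (5). The class name and the max_z embedding size are illustrative assumptions, not the released implementation.

```python
import torch

class GaussianDistanceBlock(torch.nn.Module):
    def __init__(self, num_basis=256, cutoff=6.0, max_z=100):
        super().__init__()
        means = torch.linspace(0.0, cutoff, num_basis)
        self.register_buffer("means", means)
        self.sigma = 3.0 * (means[1] - means[0])  # sigma = 3x basis spacing
        self.max_z = max_z
        # Learned gain v and offset u for every (a_s, a_t) pair, Equation (5).
        self.gain = torch.nn.Embedding(max_z * max_z, 1)
        self.offset = torch.nn.Embedding(max_z * max_z, 1)
        torch.nn.init.ones_(self.gain.weight)
        torch.nn.init.zeros_(self.offset.weight)

    def forward(self, d_st, a_s, a_t):
        # d_st: (E,) edge distances; a_s, a_t: (E,) atomic numbers per edge
        pair = a_s * self.max_z + a_t
        x = self.gain(pair) * d_st.unsqueeze(-1) + self.offset(pair)
        return torch.exp(-0.5 * ((x - self.means) / self.sigma) ** 2)
```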
Model | Hidden dim | #Msg layers | #Params | Train time | Energy MAE [eV] ↓ | Force MAE [eV/Å] ↓ | Force Cos ↑ | EFwT [%] ↑
Median | – | – | – | – | – | – | – | –
Energy-centric:
SpinConv (grid 12x8) | 128 | 2 | 1.3M | 54d | – | 0.0417 | 0.401 | –
SpinConv (spherical harmonics, ℓ = 5) | 256 | 3 | 6.4M | 119d | – | 0.0405 | 0.411 | –
SpinConv (grid 12x8) | 256 | 3 | 6.1M | 87d | – | 0.0406 | 0.426 | –
Force-centric:
SpinConv (grid 12x8) | 128 | 2 | 1.8M | 54d | 0.376 | 0.0370 | 0.436 | 0.15%
SpinConv (grid no conv 16x12) | 256 | 3 | 8.5M | 56d | 0.341 | 0.0348 | 0.462 | 0.20%
SpinConv (spherical harmonics, ℓ = 5) | 256 | 3 | 8.1M | 113d | 0.321 | 0.0328 | 0.484 | 0.22%
SpinConv (grid 16x12) | 256 | 3 | 8.5M | 76d | 0.317 | 0.0326 | 0.484 | 0.20%

Table 3: Ablation studies for SpinConv model variations trained for 560k steps (32-48 batch size, 0.2 epochs) with 16 Volta 32 GB GPUs. Training time is in GPU days, and the validation set is a 30k random sample of the OC20 ID validation set.

Figure 4: Performance of SpinConv ablations on OC20 Val ID 30k (Table 3). All models are trained for 560k steps and plotted against wall-clock training time. Note that force-centric models and grid-based approaches converge more quickly than energy-centric models and those using spherical harmonics.

3.1 OC20

The OC20 dataset [3] contains over 130 million structures used to train models for predicting forces and energies during structure relaxations; it is released under a CC Attribution 4.0 License. Since the goal of a structure relaxation is to find a local energy minimum, energy conservation is optional for this task. We report results for the Structure to Energy and Forces (S2EF), the Initial Structure to Relaxed Energy (IS2RE) and the Initial Structure to Relaxed Structure (IS2RS) tasks.

3.1.1 Structure to Energy and Forces (S2EF)

There are four metrics for the S2EF task: the energy and force Mean Absolute Error (MAE), the Force Cosine similarity, and the Energy and Forces within a Threshold (EFwT). The EFwT metric is meant to indicate the percentage of energy and force predictions that would be useful in practice. Results for three model variants are shown in Table 1 on the test set. The SpinConv force-centric approach has the lowest energy MAE and force MAE of all models. While still low in absolute terms, the SpinConv models improve over other models on the EFwT metric. DimeNet++-large slightly outperforms SpinConv on the force cosine metric. The training time of SpinConv is significantly faster than DimeNet++, while being a little slower than ForceNet [12] or SchNet [27]. In Table 2 we examine the performance of SpinConv across the different test splits. Note that the energy prediction of SpinConv is significantly better than that of SchNet or DimeNet++.
Across all models, the accuracy for the in-domain split is highest and declines for the three out-of-domain (OOD Adsorbate, OOD Catalyst, OOD Both) splits. SpinConv outperforms all models on each of the different domain splits. When comparing energy-centric approaches trained with both force and energy losses (bottom rows), the SpinConv model does significantly better at predicting both. In fact, the energy-centric approach trained on forces and energy outperforms the DimeNet++ [14] model when trained on only energy, or on energy and forces.

We examine variations of the SpinConv model in Table 3 and Figure 4 through ablation studies. We trained three variants of the energy-centric model and four variants of the force-centric model. The grid-based and spherical harmonic approaches produced similar accuracies. However, the grid-based approach was significantly faster to train, so it was used in the remaining experiments. Smaller models lead to reduced performance on the OC20 dataset, but we found that for smaller datasets such as MD17 or QM9, smaller model sizes can be beneficial to avoid overfitting. Finally, we test the impact of not performing the convolution (no conv) and only applying the filter at a single rotation. Rotation invariance was maintained by orienting the filter based on the mean angle of the neighboring atoms, weighted by distance. The result of not performing the convolution is significantly reduced accuracy.
However, its faster training time may make it suitable for some applications. Finally, for the force-centric SpinConv model we explore results when varying the number of random rotations used in the force block. The force MAE when using a single random rotation is 0.0276, and it improves slightly to 0.0270 when using 5 random rotations. Increasing the number of rotations beyond 5 leads to negligible gains. The standard deviation of the force estimates at different random rotations is 0.004 eV/Å. This is equal to 15% of the force MAE, which indicates that the amount of error due to the model not being strictly rotation equivariant is small relative to the overall error of the model.

3.1.2 Initial Structure to Relaxed Energy (IS2RE)

The Initial Structure to Relaxed Energy (IS2RE) task takes an initial atomic structure and attempts to predict the energy of the structure after it has been relaxed. Two approaches may be taken to address this problem: the direct and relaxation approaches [3]. The direct approach treats the task as a standard regression problem and directly estimates the relaxed energy from the initial structure. The relaxation approach computes the relaxed structure using the ML-predicted forces to update the atom positions; then, given the ML-relaxed structure, the energy is estimated. We show results for both approaches on the OC20 dataset using SpinConv in Table 4. The SpinConv model significantly outperforms all previous approaches using the relaxation approach for both the energy MAE and Energy within Threshold (EwT) metrics. DimeNet++ also shows improved results for the relaxation approach, with the best variant using two models: DimeNet++-large for force estimation and DimeNet++ (energy-only) for the energy estimation. Note that, in contrast to other approaches, SpinConv shows good results across all test splits, including those with out-of-domain adsorbates and catalysts. Using the direct approach, SpinConv is comparable to DimeNet++'s direct approach.

Model | Approach | Energy MAE (eV) ↓ (ID / OOD Ads / OOD Cat / OOD Both) | EwT ↑ (ID / OOD Ads / OOD Cat / OOD Both)
Median | baseline | 1.7499 / 1.8793 / 1.7090 / 1.6636 | 0.71% / 0.72% / 0.89% / 0.74%
CGCNN [33] | Direct | 0.6149 / 0.9155 / 0.6219 / 0.8511 | 3.40% / 1.93% / 3.10% / 2.00%
SchNet [25] | Direct | 0.6387 / 0.7342 / 0.6616 / 0.7037 | 2.96% / 2.33% / 2.94% / 2.21%
DimeNet++ [15] | Direct | 0.5620 / 0.7252 / 0.5756 / 0.6613 | 4.25% / 2.07% / 4.10% / 2.41%
SpinConv | Direct | 0.5583 / 0.7230 / 0.5687 / 0.6738 | 4.08% / 2.26% / 3.82% / 2.33%
DimeNet++ | Relaxation | 0.6908 / 0.6842 / 0.7027 / 0.6834 | 4.25% / 3.36% / 3.76% / 3.52%
DimeNet++ force-only + energy-only | Relaxation | 0.5124 / 0.5744 / 0.5935 / 0.6126 | 6.12% / 4.29% / 5.07% / 3.85%
DimeNet++-large force-only + energy-only | Relaxation | 0.5034 / 0.5430 / 0.5789 / 0.6113 | 6.57% / 4.34% / 5.09% / 3.93%
SpinConv (force-centric) | Relaxation | 0.4235 / 0.4415 / 0.4572 / 0.4245 | 9.37% / 6.75% / 8.49% / 6.76%

Table 4: Initial Structure to Relaxed Energy (IS2RE) results on the OC20 test split as evaluated by the Energy MAE (eV) and Energy within Threshold (EwT) [3] (see OC20 discussion board). Comparisons are made for the direct and relaxation approaches using various models.

3.1.3 Initial Structure to Relaxed Structure (IS2RS)

Our final results on the OC20 dataset are on the IS2RS task, where predicted forces are used to relax an atom structure to a local energy minimum. This is performed by iteratively estimating the forces, which are in turn used to update the atoms' positions. This process is repeated until convergence or 200 iterations. Results are shown in Table 5. The suggested metrics are Average Distance within Threshold (ADwT), which measures whether the atom positions are close to those found using DFT, and Average Forces below Threshold (AFbT), which measures whether a true energy minimum was found (i.e., forces are close to zero). On the ADwT metric, SpinConv outperforms other approaches (53.62% averaged across splits). On the AFbT metric, DimeNet++-large outperforms SpinConv (21.82% vs. 16.67%), but is more than ∼3x slower (814.6h vs. 263.2h) during inference. SpinConv outperforms all other models.

Model | Inference time ↓ | AFbT (%) ↑ (ID / OOD Ads. / OOD Cat. / OOD Both / Average) | ADwT (%) ↑ (ID / OOD Ads. / OOD Cat. / OOD Both / Average)
SchNet [25] | 54.1h | 5.28 / 2.82 / 2.62 / 2.73 / 3.36 | 32.49 / 28.59 / 30.99 / 35.08 / 31.79
DimeNet++ [14] | 407.6h | 17.52 / 14.67 / 14.32 / 14.43 / 15.23 | 48.76 / 45.19 / 48.59 / 53.14 / 48.92
DimeNet++-large [14] | 814.6h | 25.65 / 20.73 / 20.24 / 20.67 / 21.82 | 52.45 / 48.47 / 50.99 / 54.82 / 51.68
ForceNet [12] | 75.1h | 10.75 / 7.74 / 7.54 / 7.78 / 8.45 | 46.83 / 41.26 / 46.45 / 49.60 / 46.04
ForceNet-large [12] | 186.9h | 14.77 / 12.23 / 12.16 / 11.46 / 12.66 | 50.59 / 45.16 / 49.80 / 52.94 / 49.62
SpinConv (force-centric) | 263.2h | 21.10 / 15.70 / 15.86 / 14.01 / 16.67 | 53.68 / 48.87 / 53.92 / 58.03 / 53.62

Table 5: Relaxed structure from initial structure (IS2RS) results on the OC20 test split, as evaluated by Average Distance within Threshold (ADwT) and Average Forces below Threshold (AFbT). All values are in percentages; higher is better. Results are computed via the OCP evaluation server. Inference times are totals across the 4 splits.

3.2 MD17

The MD17 dataset [4, 5] contains molecular dynamics simulations for eight small molecules. Two training datasets are commonly used, one containing 1k examples and another containing 50k examples. We found the 1k training dataset to be too small for the SpinConv model; it may be more appropriate for approaches that incorporate prior chemistry knowledge, such as hand-coded features or force fields [4, 30]. The 50k dataset provides significantly more training data, but the remaining validation and test data are highly similar to those found in training, and may not guarantee independent samples in the test set [6]. Nevertheless, we report results on MD17 for comparison to prior work on the molecular dynamics task. Research in this domain would greatly benefit from the generation of a larger dataset. Results are shown in Table 6. SpinConv is on par or better for 7 of the 8 molecules when compared to DimeNet [15]. Both SpinConv and DimeNet perform well with respect to the GDML [4] and PhysNet [30] models, which take advantage of domain-specific information. Given the smaller dataset size, the SpinConv model uses a reduced 8x8 grid-based spherical representation. Other model parameters are the same as previously described.

Molecule | GDML | PhysNet | PhysNet-ens5 | SchNet | DimeNet* | SpinConv
Aspirin | 0.02 | 0.06 | 0.04 | 0.33 | 0.09 | 0.07
Benzene | 0.24 | 0.15 | 0.14 | 0.17 | 0.15 | 0.17
Ethanol | 0.09 | 0.03 | 0.02 | 0.05 | 0.03 | 0.02
Malonaldehyde | 0.09 | 0.04 | 0.03 | 0.08 | 0.04 | 0.04
Naphthalene | 0.03 | 0.04 | 0.03 | 0.11 | 0.06 | 0.04
Salicylic | 0.03 | 0.04 | 0.03 | 0.19 | 0.09 | 0.05
Toluene | 0.05 | 0.03 | 0.03 | 0.09 | 0.05 | 0.03
Uracil | 0.03 | 0.03 | 0.03 | 0.11 | 0.04 | 0.03
Mean | 0.073 | 0.053 | 0.044 | 0.141 | 0.069 | 0.058

Table 6: Forces MAE (kcal/mol·Å) on MD17 for models trained using 50k samples. Best results for models not using domain-specific information are in bold. *The DimeNet results were trained in-house, as the original authors did not use the 50k dataset. DimeNet was found to outperform DimeNet++ on this task.

3.3 QM9

Our final set of results is on the popular QM9 dataset [22], which tests the prediction of numerous properties of small molecules. While the SpinConv model was designed to estimate energies and per-atom forces, we may use the same model to predict other properties. Results are shown in Table 7 on a random test split for an energy-centric 8x8 grid-based SpinConv model. The results of DimeNet++ and the recent SphereNet [17] outperform the others. However, DimeNet++, SphereNet and SpinConv perform well with respect to other approaches across many properties.

Task | α (a_0^3) | ∆ϵ (meV) | ϵHOMO (meV) | ϵLUMO (meV) | µ (D) | Cν (cal/mol K) | G (meV) | H (meV) | R2 (a_0^2) | U (meV) | U0 (meV) | ZPVE (meV)
NMP [9] | .092 | 69 | 43 | 38 | .030 | .040 | 19 | 17 | .180 | 20 | 20 | 1.50
SchNet [25] | .235 | 63 | 41 | 34 | .033 | .033 | 14 | 14 | .073 | 19 | 14 | 1.70
Cormorant [1] | .085 | 61 | 34 | 38 | .038 | .026 | 20 | 21 | .961 | 21 | 22 | 2.03
L1Net [19] | .088 | 68 | 46 | 35 | .043 | .031 | 14 | 14 | .354 | 14 | 13 | 1.56
LieConv [7] | .084 | 49 | 30 | 25 | .032 | .038 | 22 | 24 | .800 | 19 | 19 | 2.28
TFN [29] | .223 | 58 | 40 | 38 | .064 | .101 | – | – | – | – | – | –
SE(3)-Tr. [8] | .142 | 53 | 35 | 33 | .051 | .054 | – | – | – | – | – | –
EGNN [24] | .071 | 48 | 29 | 25 | .029 | .031 | 12 | 12 | .106 | 12 | 11 | 1.55
DimeNet++ [14] | .044 | 33 | 25 | 20 | .030 | .023 | 8 | 7 | .331 | 6 | 6 | 1.21
SphereNet [17] | .047 | 32 | 24 | 19 | .027 | .022 | 8 | 6 | .292 | 7 | 6 | 1.12
SpinConv | .058 | 47 | 26 | 22 | .027 | .028 | 12 | 12 | .156 | 12 | 12 | 1.50

Table 7: Mean absolute error results on the QM9 dataset [22] for 12 properties of small molecules.

4 Related work

A common approach to estimating molecular and atomic properties is the use of GNNs [26, 9, 13, 25, 27, 33, 20, 15], where nodes represent atoms and edges connect neighboring atoms. One of the first approaches for force estimation was SchNet [25], which computed forces using only the distance between atoms without the use of angular information. Unlike previous approaches that used discrete distance filters [33], SchNet proposed the use of differentiable edge filters.
This enabled the construction of an energy-conserving model for molecular dynamics that estimates forces by taking the negative gradient of the energy with respect to the atom positions [4]. DimeNet extended this approach to also represent the angular information between triplets of atoms [15, 14]. The more recent SphereNet further extends this by capturing dihedral angles [17]. SpinConv is able to model relative angular relationships between all neighboring atoms, and not just triplets of atoms, due to the use of the spin convolutional filter. In parallel to invariant models, rotation-equivariant networks have been explored in depth by [31, 2, 1, 29, 24]. This was accomplished by decoupling the invariant information fed to the network (distances) from the equivariant information (distance vectors), followed by their careful combination via tensor products. The energy-centric SpinConv model is invariant to rotations due to the use of global pooling after the spin convolution. The final force block of the force-centric model is not strictly rotation equivariant, but it is encouraged to learn rotation equivariance during training. Another approach to force estimation is to directly regress the forces as an output of the network. This does not enforce energy conservation or rotational equivariance, but as shown by ForceNet [12], such models can still produce accurate force estimates. Numerous approaches incorporate more domain-specific information into machine learning models. These include GDML [4] and PhysNet [30], which use handcrafted features and force fields respectively. OrbNet [20] is a hybrid approach that utilizes proprietary orbital features to improve accuracy while achieving significant efficiency gains over DFT. While these approaches can lead to improved accuracy, they typically incur increased computational expense over ML models.

5 Discussion

While the SpinConv model demonstrates improved performance, it still has significant limitations. Most notably, the accuracies of the force and energy estimates are still significantly lower than desired for practical applications. Further research is needed to improve accuracies so that machine learning models can be widely adopted. Currently, the SpinConv model does not take advantage of domain-specific information. Results could be significantly improved, especially for smaller datasets (e.g., MD17 1k), if more domain information were integrated into the model [4, 30, 20]. The use of the spin convolution becomes increasingly expensive as the size of the filter increases, since the number of convolutions is equal to the longitudinal dimension of the filter. If filters of higher resolution are needed, more computationally efficient approaches may be required. In conclusion, we propose the SpinConv model, which effectively captures the relative angular information of neighboring atoms while maintaining the invariance of the energy estimation with respect to system rotations. This is enabled by utilizing a spin convolution over a spherical representation in a per-edge local reference frame, followed by global pooling. Two model variants are proposed based on whether energy conservation is enforced. Results demonstrate state-of-the-art performance on the OC20 dataset, and strong results on both the MD17 and QM9 datasets.
6 Societal Impact

This work is motivated by the problems we face due to climate change [35], many of which require innovative solutions to reduce energy usage and replace traditional chemical feedstocks with renewable alternatives. For example, an important target is the development of new electrochemical catalysts for ammonia fertilizer production, one of the most energy-intensive chemical processes, which helped to feed the world's growing population during the 20th century [11]. This is also an illustrative example of possible unintended consequences, as advancements in chemistry and materials may be used for numerous purposes. As ammonia fertilization increased in use, its overuse in today's farming has led to ocean \u201cdead zones\u201d, and its production is very carbon intensive. Knowledge and techniques used to create ammonia were also transferred to the creation of explosives during wartime. We hope to steer the use of ML for atomic simulations to societally-beneficial uses by training and testing our approaches on datasets, such as OC20, that were specifically designed to address chemical reactions useful for addressing climate change." + } + ], + "Nima Shoghi": [ + { + "url": "http://arxiv.org/abs/2310.16802v2", + "title": "From Molecules to Materials: Pre-training Large Generalizable Models for Atomic Property Prediction", + "abstract": "Foundation models have been transformational in machine learning fields such\nas natural language processing and computer vision. Similar success in atomic\nproperty prediction has been limited due to the challenges of training\neffective models across multiple chemical domains. To address this, we\nintroduce Joint Multi-domain Pre-training (JMP), a supervised pre-training\nstrategy that simultaneously trains on multiple datasets from different\nchemical domains, treating each dataset as a unique pre-training task within a\nmulti-task framework. Our combined training dataset consists of $\\sim$120M\nsystems from OC20, OC22, ANI-1x, and Transition-1x. We evaluate performance and\ngeneralization by fine-tuning over a diverse set of downstream tasks and\ndatasets including: QM9, rMD17, MatBench, QMOF, SPICE, and MD22. JMP\ndemonstrates an average improvement of 59% over training from scratch, and\nmatches or sets state-of-the-art on 34 out of 40 tasks. Our work highlights the\npotential of pre-training strategies that utilize diverse data to advance\nproperty prediction across chemical domains, especially for low-data tasks.\nPlease visit https://nima.sh/jmp for further information.", + "authors": "Nima Shoghi, Adeesh Kolluru, John R. Kitchin, Zachary W. Ulissi, C. Lawrence Zitnick, Brandon M. Wood", + "published": "2023-10-25", + "updated": "2024-05-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "main_content": "1 INTRODUCTION

Computing atomic properties accurately and efficiently for a vast array of molecules and materials is crucial for a range of applications, from drug discovery (Chan et al., 2019; Deng et al., 2022) to catalyst design (Zitnick et al., 2020). Currently, the quantum chemistry method Density Functional Theory (DFT) is commonly employed for atomic property calculations. Unfortunately, DFT's use is limited by its significant computational expense, which can range from hours to days for certain calculations. Machine learning (ML) potentials, which approximate or augment DFT, are capable of reducing the computational cost by orders of magnitude (Behler, 2016).
In recent years, much progress has been made towards this goal (Kolluru et al., 2022b), fueled in part by the release of large and diverse DFT-generated datasets for training ML models. While these datasets are incredibly useful, they are also extremely expensive to generate, e.g., ∼400 million CPU hours for the Open Catalyst 2020 dataset (OC20) (Chanussot et al., 2021). As a consequence, it is impractical to create a large dataset for every specific chemistry problem of interest. Similarly, it is non-ideal to train a model from scratch for every use case, which is common practice currently.

Foundation models (FMs), large pre-trained models that can be fine-tuned for various tasks, have achieved remarkable success in domains such as natural language processing (NLP) and computer vision (CV), especially when fine-tuned on low-resource downstream tasks. Several key factors have enabled this effectiveness: (1) the availability of massive datasets, (2) the development of widely adopted pre-training strategies, and (3) the establishment of diverse benchmarks to rigorously assess the performance of these fine-tuned models. Despite the availability of large DFT-labeled datasets (e.g., OC20) and the existence of a wide and diverse range of downstream tasks (e.g., QM9 (Ruddigkeit et al., 2012b), MatBench (Dunn et al., 2020)), the adoption of pre-training in ML for atomic property prediction has been noticeably less prevalent.

Figure 1: An overview of the Joint Multi-domain Pre-training (JMP) method. Left: JMP's pre-training setup, where a single model is simultaneously trained on a set of diverse pre-training datasets using multi-task learning. Center: JMP's fine-tuning process, where the pre-trained JMP backbone is equipped with new prediction heads and trained on downstream tasks. Right: t-SNE visualizations of JMP's node-level embeddings for randomly selected structures from all datasets.

This under-utilization becomes evident when noting that most of the state-of-the-art (SOTA) results on downstream tasks come from models trained from scratch. More specifically, prior to our work, all previous SOTA results on the rMD17, MD22, SPICE, and MatBench datasets came from models trained from scratch. For QM9, models trained from scratch hold SOTA status on 7 of the 12 targets. In total, out of the 40 tasks explored in this work's evaluation benchmark, models trained from scratch hold the previous SOTA on 34 tasks.

At its core, the challenge of pre-training for atomic property prediction lies in the complexity and diversity of the underlying chemical space. Target applications vary from drug design to catalysis, the data ranges from small molecules with only 4 atoms to periodic crystals with hundreds, and even the properties of interest vary, from energies to forces to phonon peaks. Furthermore, the nature of atomic properties imposes a unique set of challenges.
Unlike in NLP or CV, where the data is often discrete and finite, atomic properties are continuous and can span several orders of magnitude. This requires models to be robust to outliers and capable of predicting highly variable outputs. Further, existing pre-training strategies (e.g., Zaidi et al. (2022); Zhou et al. (2023b)) are designed with equilibrium systems in mind and are not directly applicable to non-equilibrium systems, which are common in DFT datasets (e.g., over 99.7% of OC20's training data consists of non-equilibrium structures). These challenges motivate the need for a flexible and generalizable pre-training strategy that can be adapted to different applications and datasets.

In this work, we introduce Joint Multi-domain Pre-training (JMP), a supervised pre-training strategy tailored to the challenges and opportunities of machine learning for atomic modeling. JMP concurrently trains on over 120 million diverse equilibrium and non-equilibrium atomic structures by framing each chemical domain as a separate pre-training task in a multi-task framework. This large-scale pre-training enables learning generalizable representations of atomic interactions. The contributions of our work are summarized as follows. First, we introduce the JMP method, shown in Figure 1, and demonstrate its powerful generalization ability by evaluating its fine-tuning performance across a diverse benchmark suite spanning small molecules, large molecules, and materials. Our results show that JMP consistently outperforms training from scratch and sets or matches the state-of-the-art on 34 out of the 40 fine-tuning benchmarks. Second, we show that JMP enables efficient scaling to larger models that would normally overfit if trained from scratch on small datasets. Pre-training acts as a strong regularizer, allowing us to train a 235M parameter model that sets new state-of-the-art performance on multiple low-data benchmarks. Finally, we conduct a detailed analysis of JMP's computational requirements. While expensive upfront, we show that JMP's pre-training cost is recovered by enabling over 12x faster fine-tuning compared to training from scratch. By pre-training large models on diverse chemical data, we believe JMP represents an important step towards the goal of a universal ML potential, and that the continued growth of available data and compute power will only improve JMP's ability to learn transferable atomic representations.

2 RELATED WORK

Machine learning potentials: There has been significant progress in developing ML models for atomic property prediction. Initial approaches focused on descriptor-based methods, where the descriptors were hand-fitted, physically meaningful analytical functions (González, 2011; Sundius, 2002; Dinur and Hagler, 1991). These functions were incorporated into Gaussian process models (Chmiela et al., 2017) or neural networks (Behler and Parrinello, 2007). More recently, graph neural networks (GNNs) have proven to be a promising approach for these tasks, surpassing descriptor-based methods (Gasteiger et al., 2020; Schütt et al., 2017; Batzner et al., 2021; Batatia et al., 2022) on multiple benchmarks across the atomic domains of small molecules, catalysts, and bulk materials. While much progress has been made, it remains difficult for a single model to perform well across all chemical domains.
Pretraining and transfer learning on 3D atomic systems: The concept of transfer learning, where representations are learned on one dataset and transferred to another, has been successfully applied to a number of atomic modeling tasks (Kolluru et al., 2022a; Cai et al., 2020; Tsubaki and Mizoguchi, 2021; Smith et al., 2018). However, most of the focus in this area has been on transferring representations within the same chemical domain with a limited amount of pre-training data (Smith et al., 2019; Yamada et al., 2019; Pesciullesi et al., 2020). A growing number of works are dedicated to pre-training (Zhu et al., 2022; Liu et al., 2021; Jiao et al., 2022; Zhou et al., 2023b), but most do not explore generalization across multiple chemical domains. Many of these works focus on self-supervised pre-training on molecular graphs and/or 3D atomic structures. Recent self-supervised methods have focused on denoising (Song et al., 2020) applied to equilibrium structures, i.e., structures whose per-atom forces are close to zero (Zaidi et al., 2022; Feng et al., 2023b; Liu et al., 2022). The original formulation of denoising equilibrium structures is applicable to less than 1% of our training data, because most of the atomic property data is non-equilibrium. This is an active area of research, and since the beginning of our present work, alternative formulations that could apply to non-equilibrium data have started to emerge (Feng et al., 2023a; Zheng et al., 2023).

3 DATASETS

We separate the atomic space into four domains for the purposes of this manuscript: small molecules (1-20 atoms), large molecules (more than 20 atoms), materials, and catalysis (material surfaces with adsorbed molecules). Each dataset sample contains a 3D atomic structure (positions and atomic numbers) and a set of atomic properties. The atomic properties can be either node-level (e.g., forces) or graph-level (e.g., energy). The datasets are summarized in Table 1, with additional information, including details on train, validation, and test splits, in Appendix H. To study the ability of pre-trained models to generalize across domains and tasks, we only pre-train on small molecule and catalysis datasets, and fine-tune on small molecule, large molecule, and materials datasets.

Our pre-training datasets include the ANI-1x (Smith et al., 2020) and Transition-1x (Schreiner et al., 2022) small molecule datasets and the OC20 (Chanussot et al., 2021) and OC22 (Tran et al., 2022) catalysis datasets. These datasets were chosen due to their diversity and large size. The combined pre-training dataset contains over 120M training examples with energy and force labels, with the majority of the data (> 99%) coming from non-equilibrium structures. Due to the difference in underlying DFT theory and software used across the datasets, we utilize different prediction heads for each dataset. We also use a per-dataset linear referencing scheme for the energies. For fine-tuning, we use smaller datasets from three domains to evaluate how pre-trained models perform in similar (small molecule) and unseen domains (large molecule and materials). These datasets may contain in-distribution (ID) labels (i.e., energies and forces) or out-of-distribution (OOD) labels (e.g., QM9's ∆ϵ).
Dataset | Domain | Labels | Elements | Avg size | Train Set | Description
Pretraining datasets:
OC20 | Catalyst | E, F | 55 | ∼73 (7-225) | 100M | Catalyst relaxations
OC22 | Catalyst | E, F | 51 | ∼80 (17-228) | 8M | Oxide catalyst relaxations
ANI-1x | Small Molecule | E, F | H, C, N, O | ∼15 (4-63) | 2M | MD simulations
Transition-1x | Small Molecule | E, F | H, C, N, O | ∼14 (4-23) | 10M | Reactions database
Finetuning datasets:
Matbench | Materials (OOD) | ID / OOD | 84 | ∼30 (4-444) | ∼600–130k | Material properties
QMOF | Materials (OOD) | OOD | 77 | ∼109 (17-500) | 10k | MOF properties
MD17 | Small Mols. (ID) | ID | H, C, N, O | ∼13 (9-21) | 1k | MD simulation
QM9 | Small Mols. (ID) | ID / OOD | H, C, N, O | ∼18 (3-29) | ∼130k | QM properties
SPICE | Large Mols. (OOD) | ID | H, C, N, O, S | ∼46 (26-96) | 1300, ∼34k | MD simulations
MD22 | Large Mols. (OOD) | ID | H, C, N, O | ∼67 (42-370) | ∼600–8k | MD simulations

Table 1: Summary of datasets and their properties, including the domain, target labels, atomic elements present, their sizes, and a brief description.

4 JOINT MULTI-DOMAIN PRE-TRAINING

Joint Multi-domain Pre-training (JMP), shown in Figure 1, is based on the intuition that pre-training on a diverse set of chemical domains should lead to better representation learning and thus better generalization through fine-tuning. The pre-training task is framed as a multi-task supervised learning problem, where each label of each pre-training dataset is treated as a separate task. This allows us to pre-train a single backbone model on multiple chemical domains and labels simultaneously.

Notation: We use the following notation throughout this section. Let $D = \{D_1, \dots, D_M\}$ be the set of M datasets that we pre-train on. Each dataset $D_i$ is a set of systems (e.g., molecules or crystals), where each system is a tuple of atomic numbers (Z), atomic positions (R), and target (i.e., ground-truth) energy ($\hat{E}$) and forces ($\hat{F}$). For a given mini-batch of B systems, $W_b$ is the index of the dataset that system $b \in B$ belongs to, and $N_b$ is the number of atoms in system b.

Model Architecture: Our goal in this work is to design model-agnostic strategies for supervised pre-training. For our backbone model architecture, we chose GemNet-OC (Gasteiger et al., 2022) for its effectiveness across a wide spectrum of chemical domains as well as at large scales (Sriram et al., 2022). GemNet-OC is a message-passing neural network that computes a node representation $h_i$ for each atom i and an edge representation $m_{ij}$ for pairs of nearby atoms i and j. Using these representations, prediction heads compute the desired target properties. System-level scalar predictions, such as energy, are computed by summing the node representations, $E = \sum_{i=1}^{N} \text{MLP}(h_i)$. Node-level vector predictions, such as forces, are computed by summing the edge direction unit vectors weighted by the edge representations, $F_i = \sum_{j=1}^{N} (\text{MLP}(m_{ij}) \cdot \hat{r}_{ij})$. During pre-training, we compute forces using a direct equivariant block, similar to Klicpera et al. (2021)'s model setup for the OC20 dataset. This is for two reasons: (1) direct force prediction is much more computationally efficient than gradient-based force prediction, as the latter needs a second backward pass to compute the gradient of the energy with respect to the atomic positions, and (2) previous works (Gasteiger et al., 2022) have shown that for larger datasets, direct force prediction converges much faster while producing similar converged accuracies to gradient-based force prediction.
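A sketch of these two readouts is given below. The class name, MLP widths, and the index_add-based scatter are illustrative assumptions rather than GemNet-OC's exact implementation.

```python
import torch

class PredictionHeads(torch.nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.energy_mlp = torch.nn.Sequential(
            torch.nn.Linear(dim, dim), torch.nn.SiLU(), torch.nn.Linear(dim, 1))
        self.force_mlp = torch.nn.Sequential(
            torch.nn.Linear(dim, dim), torch.nn.SiLU(), torch.nn.Linear(dim, 1))

    def forward(self, h, m, r_hat, edge_dst, num_atoms):
        # h: (n, dim) node embeddings h_i; m: (e, dim) edge embeddings m_ij
        # r_hat: (e, 3) edge unit vectors; edge_dst: (e,) receiving atom index
        energy = self.energy_mlp(h).sum()            # E = sum_i MLP(h_i)
        contrib = self.force_mlp(m) * r_hat          # MLP(m_ij) * r_hat_ij
        forces = torch.zeros(num_atoms, 3).index_add_(0, edge_dst, contrib)
        return energy, forces                        # F_i sums over edges into i
```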
4.1 MULTI-TASK PRE-TRAINING

In the multi-task setting, each dataset has its own energy and force prediction heads, as shown in Figure 1 (left). This allows us to train a single model on multiple datasets simultaneously. In the following paragraphs, we describe the resulting data and loss imbalances and our proposed solutions in detail.

Data Normalization: When pre-training on multiple datasets, we first need to normalize the targets to make sure they are on a common scale across datasets. Since our pre-training task is energy and force prediction, for each dataset we first linearly reference the total energies and then normalize them to unit Gaussian distributions. We normalize the forces by dividing them by the component-wise RMS force. This puts the energies and forces for each dataset on a common scale.

Dataset Size Imbalance: Our pre-training datasets vary greatly in size, from 2 million to 100 million training samples, for a total of 120M samples. To maintain a proper balance between the total contributions of large, high-resource and small, low-resource pre-training datasets, and to prevent overfitting on high-resource datasets and underfitting on low-resource datasets, we use temperature sampling (Devlin et al., 2018) during batch construction. Specifically, we sample each dataset i with probability $p_i \propto (|D_i| / \sum_j |D_j|)^{1/T}$, where $|D_i|$ is the number of samples in dataset i and T is the temperature hyperparameter. Inspired by Shaham et al. (2023), which shows that T = 2 optimizes model performance on high- and low-resource languages for large models, we use T = 2.
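The sampling probabilities are straightforward to compute; the short sketch below uses the four pre-training set sizes from Table 1 as an example, with a hypothetical function name.

```python
import torch

def temperature_sampling_probs(dataset_sizes, T=2.0):
    # p_i proportional to (|D_i| / sum_j |D_j|)^(1/T). T = 1 recovers
    # proportional sampling; larger T flattens the distribution and
    # up-weights low-resource datasets.
    sizes = torch.tensor(dataset_sizes, dtype=torch.float64)
    weights = (sizes / sizes.sum()) ** (1.0 / T)
    return weights / weights.sum()

# OC20, OC22, ANI-1x, Transition-1x training-set sizes (approximate):
probs = temperature_sampling_probs([100e6, 8e6, 2e6, 10e6], T=2.0)
```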
System Size Imbalance: The number of atoms per system varies greatly across our pre-training datasets. For example, Transition-1x has 14 atoms per system on average, while OC22 has 80 atoms per system on average. The naive loss reduction, which is the default behavior of most machine learning libraries, computes an atom-level force loss and then averages the force loss across all atoms in the batch. This leads to datasets with more atoms per system dominating the force loss. To address this issue, we propose a structure-wise loss reduction strategy, which first computes the average force loss for each system and then averages the force loss across all systems. This ensures that the relative importance of the force loss is roughly equal across datasets, regardless of the number of atoms per system. Equation (1) shows the resulting loss; relative to the naive formulation, the force term's normalization over all atoms in the batch, $\frac{1}{\sum_b N_b}$, is replaced by the per-system average $\frac{1}{N_b}$ inside a mean over the B systems. This simple change leads to a significant improvement in model performance, as shown in Section 5.1.

$\mathcal{L} = \underbrace{\frac{1}{B} \sum_{b=0}^{B} \left[ \lambda_E^{(W_b)} \left| \hat{E}_b - E_b \right| \right]}_{\text{Energy Loss } (\mathcal{L}_E)} + \underbrace{\frac{1}{B} \sum_{b=0}^{B} \left[ \frac{1}{N_b} \lambda_F^{(W_b)} \sum_{i=0}^{N_b} \left\| \hat{F}_{b,i} - F_{b,i} \right\|_2 \right]}_{\text{Force Loss } (\mathcal{L}_F)}$    (1)

Loss Imbalance Within a Single Dataset: In the single-dataset setting, $\lambda_E$ and $\lambda_F$ are typically tuned by grid search, but this approach is not feasible in the multi-dataset setting, as there are 2·M hyperparameters to tune, and changing one hyperparameter affects the optimal values of the others. Therefore, we need a simple heuristic that determines the loss coefficients for each dataset and provides a reasonable balance between the energy and force losses. Inspired by Tran et al. (2022)'s size-invariant force loss, which computes a dynamic $\lambda_F$ based on the number of atoms in each system of the input batch, we fix $\lambda_E^{(i)} = 1$ and $\lambda_F^{(i)} = \langle N \rangle_{D_i}$, where $\langle N \rangle_{D_i}$ is the average number of atoms per system in the i-th dataset $D_i$. This provides a reasonable balance between the energy and force losses within each dataset.
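A compact sketch of Equation (1) with both heuristics applied is given below. Here system_idx maps each atom to its system within the batch, and lambda_F carries the per-dataset $\langle N \rangle$ coefficient for each system; the function name and argument layout are illustrative assumptions.

```python
import torch

def multi_task_loss(E_pred, E_true, F_pred, F_true, system_idx, lambda_F):
    # Energy term: mean absolute error over the B systems (lambda_E = 1).
    energy_loss = (E_pred - E_true).abs().mean()
    # Force term with structure-wise reduction: average the per-atom L2
    # errors within each system first, then across systems.
    per_atom = (F_pred - F_true).norm(dim=-1)          # (total_atoms,)
    B = E_pred.shape[0]
    sums = torch.zeros(B).index_add_(0, system_idx, per_atom)
    counts = torch.zeros(B).index_add_(0, system_idx, torch.ones_like(per_atom))
    force_loss = (lambda_F * sums / counts).mean()     # lambda_F = <N>_{D_i}
    return energy_loss + force_loss
```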
More information on the datasets used for 5 \fPublished as a conference paper at ICLR 2024 (c) Fine-tuned Large vs Scratch Large (b) Fine-tuned Large vs Fine-tuned Small 75 50 25 0 25 50 75 Relative Improvement (%) Fine-tuning T asks (a) Scrach Large vs Scratch Small QM9 MD17 MD22 SPICE Matbench QMOF Figure 2: Relative performance improvement across all tasks of all fine-tuning datasets, in percentages, of (a) Scratch Large (GN-OC-L) over Scratch Small (GN-OC-S), (b) Fine-tuned Large (JMP-L) over Fine-tuned Small (JMP-S), and (c) Fine-tuned Large (JMP-L) over Scratch Large (GN-OC-L). GN-OC shows poor scaling to large models, a clear sign of overfitting, whereasJMP reverses this, exhibiting much improved scaling dynamics. JMP also consistently outperforms GN-OC across all domains, datasets, and targets. The shaded rectangles indicate the average relative performance across all tasks for each dataset. The exact percentages can be found in Appendix C.1 pre-training and fine-tuning can be found in Section 3. Details on the pre-training and fine-tuning setup, such as the optimizers, learning rate schedules, and early stopping information, can be found in Appendix F. Exact hyperparameters can be found in Appendix J. 0 20 40 60 Relative Improvement (%) HOMO LUMO GNS-TAT-NN ET-NN ET-OREO JMP-S JMP-L Figure 3: Relative improvement, over training from scratch, of different pre-training methods on QM9\u2019s \u03f5LUMO and \u03f5HOMO. Common Observations: We begin by highlighting some common observations across all experiments. First, when training from scratch, GN-OC-L performs 8% worse on average than GN-OC-S, as shown in Figure 2 (a). This is a clear indication of overfitting and has been consistently observed in low-data regimes (Gasteiger et al., 2022). Second, this problem of overfitting is nearly eliminated by JMP, illustrated in Figure 2 (b). On average, JMP-L exhibits an impressive 21% relative performance gain over JMP-S. This indicates that the JMP training procedure is able to effectively leverage the additional capacity of the large model, even in low-data regimes. Third, we observe that pre-training with JMP elevates performance across all domains, datasets, and tasks (Figure 2 (c)), with an average relative improvement of 59% for JMP-L over GN-OC-L. Results on Small Molecules QM9 and rMD17: For each target of QM9 (Wu et al., 2018), we fine-tune a dedicated model using a simple prediction head with sum pooling for all targets. For R2, we use the same prediction head formulation as Th\u00f6lke and De Fabritiis (2022). Our results can be found in Table 2 compared against previous SOTA works (Liao and Smidt, 2022; Batatia et al., 2022; Musaelian et al., 2023; Feng et al., 2023a; Zaidi et al., 2022). With the sole exception of R2, our JMP-L model achieves SOTA results on all QM9 targets. For the R2 target, a similar phenomenon has been observed in previous pre-training works (Zaidi et al., 2022) where the benefits of using pre-trained models are not as pronounced. In addition to their impressive performance, our JMP-S and JMP-L models demonstrate a large improvement relative to their scratch-trained counterparts. Figure 3 compares this relative improvement \u2014 measured on the \u03f5LUMO and \u03f5HOMO targets \u2014 to other SOTA pre-training and transfer learning methods for QM9. As shown, JMP outperforms all previous methods by a significant margin. This is a strong signal that our pre-training approach is effective at learning generalizable representations for small molecules. 
We also report additional pre-training comparisons on all our fine-tuning benchmarks with a pre-trained model from Zaidi et al. (2022) and demonstrate significant improvements on all tasks in Appendix A.

Data overlap: Due to the limited complexity of small molecules, there is some data overlap between our pre-training datasets (ANI-1x and Transition-1x) and QM9. To check the impact of this overlap on our results, we evaluate the fine-tuning performance of JMP-L on a QM9 dataset that excludes the overlapping molecules. Using molecular compositions to identify overlaps, we observe that the exclusion of overlapping molecules has a negligible impact on our results (see Appendix I).

Table 2: MAE test-split results on all targets of the QM9 dataset (SOTA results are bolded in the original). Columns, in order: TorchMD-Net, Equiformer, MACE, Allegro, Pretrained ET-OREO, Pretrained GNS+TAT+NN, GN-OC-S, GN-OC-L, JMP-S, JMP-L. Rows list the available values in column order; a few baselines do not report every target, so some rows have fewer entries.
μ (D): 0.011, 0.011, 0.015, 0.016, 0.020, 0.023, 0.010, 0.008
α (a₀³): 0.059, 0.046, 0.038, 0.040, 0.052, 0.056, 0.037, 0.032
εHOMO (meV): 20.3, 15.0, 22.0, 16.8, 14.9, 21.8, 22.7, 11.1, 8.8
εLUMO (meV): 18.6, 14.0, 19.0, 14.5, 14.7, 17.3, 18.6, 10.8, 8.6
Δε (meV): 36.1, 30.0, 42.0, 26.4, 22.0, 38.5, 40.6, 23.1, 19.1
R² (a₀²): 0.033, 0.251, 0.210, 0.440, 0.210, 0.171, 0.200, 0.163
ZPVE (meV): 1.8, 1.3, 1.2, 1.0, 1.2, 1.2, 1.0, 0.9
U₀ (meV): 6.2, 6.6, 4.1, 4.7, 5.8, 7.2, 9.4, 3.3, 2.9
U (meV): 6.4, 6.7, 4.1, 4.4, 5.8, 6.9, 9.7, 3.3, 2.8
H (meV): 6.2, 6.6, 4.7, 4.4, 5.8, 7.3, 8.7, 3.3, 2.8
G (meV): 8.3, 7.6, 5.5, 5.7, 6.9, 8.1, 9.2, 4.5, 4.3
Cν (cal/(mol·K)): 0.026, 0.023, 0.021, 0.020, 0.024, 0.024, 0.018, 0.017

For rMD17, we compute forces by taking the negative gradient of the energy with respect to the atomic positions (a short autograd sketch of this convention follows at the end of this subsection). Table 3 shows our force prediction results on the rMD17 dataset. Similarly to QM9, we observe that JMP consistently outperforms GN-OC across all rMD17 targets. Our JMP-L model achieves state-of-the-art performance on 5 molecules and is very competitive on the rest. Appendix B.1 also shows that JMP achieves SOTA on 6/10 targets on the few-shot 50-sample subset of rMD17.

Table 3: Force MAE results in meV/Å on the test split of the rMD17 dataset (SOTA bolded in the original).
Molecule | MACE | Allegro | GN-OC-S | GN-OC-L | JMP-S | JMP-L
Aspirin | 6.6 | 7.3 | 24.3 | 24.7 | 6.7 | 5.1
Benzene | 0.3 | 0.2 | 1.0 | 1.0 | 0.7 | 0.3
Ethanol | 2.1 | 2.1 | 13.0 | 13.3 | 2.8 | 2.0
Malonaldehyde | 4.1 | 4.1 | 21.1 | 25.7 | 5.3 | 4.0
Naphthalene | 1.6 | 0.9 | 5.6 | 5.7 | 2.2 | 1.4
Salicylic acid | 3.1 | 2.9 | 14.7 | 15.1 | 4.6 | 3.4
Toluene | 1.5 | 1.8 | 6.8 | 7.2 | 2.3 | 1.5
Uracil | 2.1 | 1.8 | 12.0 | 12.9 | 4.0 | 2.5
Paracetamol | 4.8 | 4.9 | 17.3 | 18.4 | 5.3 | 4.0
Azobenzene | 3.0 | 2.6 | 11.1 | 11.4 | 4.5 | 3.3

Results on Materials MatBench and QMOF: In the materials domain, we fine-tune on the MatBench (Dunn et al., 2020) and QMOF (Rosen et al., 2021) datasets. For MatBench, we evaluated all regression tasks that take a 3D structure as input and compared them with competitive models on the leaderboard (De Breuck et al., 2021; Ruff et al., 2023). For QMOF, we predict the band-gap target on a 10k split, similarly to Kang et al. (2022) and Cao et al. (2023). We use mean pooling for all experiments except MatBench's phonons task, which measures the frequency of the highest-frequency optical phonon mode peak and thus uses max pooling. Our results can be found in Table 4. We observe that JMP-L achieves SOTA performance on QMOF and on all MatBench tasks. These two datasets contain diverse out-of-domain chemical structures (materials) and out-of-domain target labels (i.e., not energies and forces) relative to the pre-training datasets.
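As promised above, here is a minimal sketch of the conservative-force convention used for rMD17 (forces as the negative gradient of a predicted energy); `model` stands in for any positions-to-energy module and is an assumption of the sketch, not the paper's code.

```python
import torch

def energy_and_forces(model, pos):
    """Forces as F = -dE/dx via autograd; `model` maps (N, 3) -> scalar energy."""
    pos = pos.detach().requires_grad_(True)
    energy = model(pos)
    (grad,) = torch.autograd.grad(energy, pos, create_graph=True)
    return energy, -grad
```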
JMP's impressive performance is yet another positive signal indicating that JMP is learning generalizable representations.

Table 4: MAE test-split results on different targets in the materials domain (SOTA bolded in the original). MatBench rows report MODNet (fold0/mean), coGN (fold0/mean), GN-OC-S (fold0), GN-OC-L (fold0), JMP-S (fold0/mean), and JMP-L (fold0/mean); the QMOF row compares against PT CGCNN and PT MOFTransformer in place of MODNet and coGN.
Material (Units) | MODNet | coGN | GN-OC-S | GN-OC-L | JMP-S | JMP-L
JDFT2D (meV/atom) | 25.55 / 33.20 | 22.25 / 37.17 | 26.19 | 25.34 | 20.72 / 30.16 | 23.12 / 29.94
Phonons (cm⁻¹) | 34.77 / 34.28 | 32.12 / 29.71 | 93.45 | 88.74 | 26.6 / 22.77 | 21.28 / 20.57
Dielectric (unitless) | 0.169 / 0.271 | 0.178 / 0.309 | 0.225 | 0.211 | 0.133 / 0.252 | 0.119 / 0.249
Log GVRH (log₁₀(GPa)) | 0.073 / 0.073 | 0.068 / 0.069 | 0.082 | 0.082 | 0.06 / 0.062 | 0.057 / 0.059
Log KVRH (log₁₀(GPa)) | 0.054 / 0.055 | 0.052 / 0.054 | 0.061 | 0.063 | 0.044 / 0.046 | 0.045 / 0.045
Perovskites (eV/unit cell) | 0.093 / 0.091 | 0.027 / 0.027 | 0.045 | 0.045 | 0.029 / 0.028 | 0.026 / 0.026
MP Gap (eV) | 0.215 / 0.220 | 0.153 / 0.156 | 0.228 | 0.235 | 0.119 / 0.121 | 0.089 / 0.091
MP E Form (meV/atom) | 40.2 / 44.8 | 17.4 / 17 | 31.4 | 33.1 | 13.6 / 13.3 | 10.3 / 10.1
QMOF | 0.28 (PT CGCNN) | 0.27 (PT MOFTransformer) | 0.25 | 0.24 | 0.18 | 0.16

Results on Large Molecules MD22 and SPICE: To further investigate the impact of pre-training on unseen domains, we evaluate two large-molecule datasets, MD22 (Chmiela et al., 2023) and SPICE (Eastman et al., 2023), and compare our results to the previous SOTA (Kovacs et al., 2023; Li et al., 2023). For SPICE, we only use the large-molecule sub-tasks: solvated amino acids and dipeptides.

Table 5: Force MAE results in meV/Å on the test splits of the large-molecule datasets (SOTA bolded in the original; "-" marks baselines without reported results).
Molecule | sGDML | MACE | Allegro | GN-OC-S | GN-OC-L | JMP-S | JMP-L
Ac-Ala3-NHMe | 34.55 | 3.80 | 4.63 | 5.07 | 6.27 | 2.64 | 1.92
DHA | 32.41 | 2.80 | 3.17 | 2.87 | 3.95 | 2.01 | 1.37
Stachyose | 29.24 | 3.80 | 4.21 | 2.22 | 3.85 | 2.69 | 1.73
AT-AT | 29.97 | 4.30 | 4.13 | 5.38 | 5.96 | 3.02 | 1.98
AT-AT-CG-CG | 30.48 | 5.00 | 5.55 | 5.80 | 5.62 | 3.28 | 2.11
Buckyball Catcher | 29.57 | 3.70 | - | 10.35 | 8.20 | 3.08 | 2.26
Double-Walled Nanotubes | 22.68 | 12.00 | - | 11.20 | 9.61 | 8.36 | 6.17
Solvated Amino Acids | - | - | - | 22.14 | 28.64 | 5.71 | 4.75
Dipeptides | - | - | - | 8.78 | 10.68 | 4.71 | 3.64

Similar to rMD17, we compute forces by taking the negative gradient of the energy with respect to the atomic positions. However, for MD22's Buckyball Catcher and Double-Walled Nanotubes, we were unable to fit these large structures in memory when using gradient-based force predictions; we therefore used direct force-prediction heads instead. Our results can be found in Table 5. Once again, our model demonstrates SOTA results across all molecules of MD22 and all tasks of SPICE.

5.1 ABLATION STUDIES

Our ablations demonstrate the impact of various changes to JMP on downstream fine-tuning performance. We performed pre-training experiments covering dataset sampling strategies, loss formulation, and regularization strategies, and observed their impact on fine-tuning. Given the computational cost of training models on the full pre-training dataset, ablation experiments were conducted on a scaled-down version of the full pre-training dataset containing ~2.5M randomly selected examples. All pre-training models are trained for 10 epochs. Similarly, fine-tuning for these experiments was run on only one task from each of the fine-tuning datasets (MD17: Aspirin, MD22: Stachyose, QM9: Δε, MatBench: MP E Form, QMOF: Band Gap, and SPICE: Solvated Amino Acids).
Additional ablations, including fully balanced (T = ∞) sampling, a threshold regression loss for energies and forces, and automatic task-weighting strategies such as PCGrad (Yu et al., 2020), are explored in Appendix B. Table 6 shows the mean improvement, relative to the base, across all the fine-tuning tasks described above. A summarized insight from each ablation study follows.

Table 6: Ablation results demonstrating the mean relative improvement E[RI] of each method relative to the base method (B), averaged over the ablation subsplits.
Ablation | E[RI] (%)
Base (Temperature 1.0) [B] | 0%
B + Temperature 2.0 [T2] | 2.2%
B + Temperature ∞ [T∞] | 2.6%
T2 + SW Loss Averaging [SWL] | 7.7%
SWL + Weight Decay [WD] | 11.4%
SWL + Dropout [DO] | 11.4%
WD + Edge Dropout [ED] | 13.2%
WD + ED + EMA Weights [EMA] | 12.4%
EMA + OC20 Only [OC20] | -9.9%

Base (B): Base refers to the naive implementation of a multi-task pre-training model without temperature sampling, structure-wise loss reduction, or additional regularization. This model serves as the baseline for comparison.
Temperature Sampling: Temperature-based sampling with T = 2 provides a moderate improvement, while higher values (e.g., T = ∞) show diminishing returns. This is consistent with Shaham et al. (2023), which shows that for large-enough models, T = 2 provides ideal performance across both low- and high-resource datasets.
Structure-Wise Loss Reduction (SWL): Applying the structure-wise loss-reduction strategy proved to be a substantial improvement to the model's performance, with T2 + SWL offering a 7.7% improvement over B.
Weight Decay (WD): Raising the weight-decay regularization parameter to 0.1 (from the default 0.01) brings the collective improvement to 11.4% over B.
Dropout (DO): Using dropout with p = 0.1 on the atom update layers yielded an uplift similar to WD.
Edge Dropout (ED): For this ablation, we drop p = 0.1 of the edges at every step and scale the embeddings of the remaining edges by a factor of 1/(1 - p). This yielded a small improvement over WD, increasing the collective improvement to 13.2% over B.
Exponential Moving Average (EMA): Fine-tuning on EMA weights did not improve performance.
OC20 Only (OC20): To understand the impact of multi-task pre-training, we trained a model on the OC20 dataset only. We selected a 120M subset of OC20 to match the number of examples in the full JMP pre-training dataset. Note that this means the dataset used in the OC20 ablation contains 48x more data points than the rest of our ablations. Despite this, OC20 performed substantially worse than B, indicating that diverse multi-task pre-training is important for generalization.
Based on our ablation results, the two most important changes from B were SWL and regularization methods such as WD, DO, and ED. These results are consistent with Kurin et al. (2022), which demonstrates the effectiveness of regularization in multi-task learning. Our final model integrates temperature-based sampling (T2), the structure-wise loss-reduction strategy (SWL), an amplified weight-decay parameter of 0.1 (WD), edge dropout with p = 0.1 (ED), and EMA weights; EMA is retained despite not showing a performance boost, as it is standard for training GemNet (Gasteiger et al., 2022).
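For reference, a small sketch of temperature-based dataset sampling under the common convention that dataset i is drawn with probability proportional to n_i^(1/T) (T = 1 recovers size-proportional sampling and T → ∞ fully balanced sampling); the convention and the dataset sizes below are our assumptions for illustration.

```python
import numpy as np

def sampling_probs(dataset_sizes, T=2.0):
    sizes = np.asarray(dataset_sizes, dtype=float)
    weights = sizes ** (1.0 / T)      # T=1: proportional; T -> inf: uniform
    return weights / weights.sum()

print(sampling_probs([120e6, 5e6, 2e6], T=1.0))  # heavily skewed to large sets
print(sampling_probs([120e6, 5e6, 2e6], T=2.0))  # softened skew, as used above
```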
5.2 COMPUTATIONAL COST ANALYSIS

[Figure 4: The number of GPU hours, averaged for each dataset (QM9, MD17, MD22, SPICE, MatBench, QMOF), required to train GN-OC-L to convergence and to fine-tune JMP-L to match GN-OC-L's performance. Overall, fine-tuning JMP-L was able to match GN-OC-L's performance in 1/12 of the time.]

Pre-training JMP-L required significant computational resources, which is typical for foundation-model approaches. We pre-trained JMP-L on 128 V100 32GB GPUs for 2 epochs, which took around 34,400 GPU hours in total (see Appendix G for exact training times and CO2 impact). While this is a substantial upfront investment, it enables efficient fine-tuning across a diverse set of downstream tasks. We evaluated JMP-L fine-tuning performance versus training models from scratch (i.e., GN-OC-L). Training GN-OC-L on the downstream tasks until convergence under our stopping criteria took around 3,300 GPU hours in total across all tasks. In contrast, fine-tuning JMP-L on the same tasks took only around 275 GPU hours in total to match the performance of the models trained from scratch. This 12x reduction in compute demonstrates the significant benefits of pre-training. Figure 4 shows this difference in compute requirements, averaged for each fine-tuning dataset." + } + ], + "Abhishek Das": [ + { + "url": "http://arxiv.org/abs/2108.05185v3", + "title": "Evolution of Curvature in Riemannian Geometry", + "abstract": "In this paper we shall endeavour to substantiate that the evolution of the\nRiemann-Christoffel tensor or curvature tensor can be expressed entirely by an\narbitrary timelike vector field and that the curvature tensor returns to its\ninitial value with respect to change in a particular index. This implies that\nPoincare's recurrence theorem is valid in this cosmological scenario. Also, it\nhas been shown that geodesics can diverge just as they can converge. As is\nostensible, this result indicates the existence of a point of exclusivity -\nthe opposite of a singularity.", + "authors": "Abhishek Das", + "published": "2021-08-09", + "updated": "2022-05-28", + "primary_cat": "physics.gen-ph", + "cats": [ + "physics.gen-ph" + ], + "main_content": "Introduction

General theory of relativity, the geometric theory of gravitation, is built on the plinth of Riemannian geometry, which studies differentiable manifolds. There is a plethora of literature on these two intricately related topics. As is known, a multitude of implications result from Riemannian geometry when applied to general relativity: the Einstein field equations [1], solutions of those equations such as the Schwarzschild metric [2] and the Friedmann-Lemaitre-Robertson-Walker metric [3, 4, 5, 6, 7, 8, 9, 10], and others [11, 12], as well as singularities [13] and black holes [14, 15]. However, in essence, the current paper is not innately related to the general theory of relativity; it is essentially devoted to the geometric aspect of space-time, somewhat in contradistinction to a recent paper that was based on a Hamiltonian formulation [16]. The inception begins with the feasibility of the existence of a point of exclusivity (the opposite of a singularity), which is corroborated by considering the Raychaudhuri equation [13]. We then consider parallel-transported vectors, and by manipulating their connection with the geometry of space-time we are led to an evolution equation for the Riemann-Christoffel tensor.
As a consequence, several interesting implications are drawn out. Most importantly, it is argued that, with the necessary conditions fulfilled, one can indeed discern the existence of a point of exclusivity.

2. The positive and negative values of the expansion scalar

In his seminal paper of 1955 [13], Raychaudhuri obtained an equation that led to the focusing theorem, which in turn eventually substantiated the existence of singularities and black holes. We shall find in the present section that the result derived by Raychaudhuri plays a significant role in some novel aspects. Let us commence with the expansion scalar ($\theta$) of the Raychaudhuri equation. We know that

$$\theta = \frac{\partial}{\partial t}(\ln G), \qquad G = \sqrt{-g}.$$

Also, we know that

$$A^i{}_{;i} = \frac{1}{\sqrt{-g}}\left[A^i\frac{\partial}{\partial x^i}\sqrt{-g} + \sqrt{-g}\,\frac{\partial}{\partial x^i}A^i\right].$$

So, considering the coordinate $t$ ($i = 0$) we may write

$$A^0{}_{;0} = A^0\theta + A^0{}_{,0}.$$

Again, since the Kronecker delta function is independent of both the ordinary and covariant derivatives, the last equation can be equivalently expressed as

$$A_{k;0} = A_k\theta + A_{k,0} \quad (1)$$

Again, we also know that

$$A_{k;i} = A_{k,i} - \Gamma^l_{ki}A_l.$$

Therefore, using equation (1) we obtain

$$A_k\theta + \Gamma^l_{k0}A_l = 0 \quad (2)$$

or

$$A_k\theta + \Gamma^l_{k0}\delta^k_l A_k = 0.$$

Since the vector fields $A_k$ are arbitrary, we have the following equation for the expansion scalar:

$$\theta + \Gamma^l_{k0}\delta^k_l = 0 \quad (3)$$

One can make two immediate observations from equation (3). If $k \neq l$, we have

$$\theta = 0 \quad (4)$$

And, if $k = l$, we have

$$\theta + \Gamma^l_{l0} = 0 \quad (5)$$

Again, on account of the relation

$$\Gamma^l_{l0} = \frac{1}{\sqrt{-g}}\frac{\partial}{\partial x^0}\sqrt{-g} = \frac{\partial}{\partial x^0}(\ln\sqrt{-g}),$$

we derive from equation (5)

$$\theta + \theta = 0 \quad (6)$$

which is, in essence, similar to equation (37). Either the expansion scalar is zero, or

$$|\theta| = \pm\theta \quad (7)$$

which seems erroneous from a mathematical perspective. But suppose $\theta$ can actually take the same value with opposite signs; then the last two equations carry physical meaning. This shall be further corroborated by the following methodology. Equation (2) can also be written as (with $A_{k0} = \Gamma^l_{k0}A_l$)

$$A_k\theta + A_{k0} = 0.$$

Differentiating with respect to $x^0$ and using the above equation again, we have

$$A_k\dot{\theta} - \frac{A_{k,0}A^k A_{k0}}{A^2} + A_{k0,0} = 0 \quad (8)$$

where $A^2 = A_kA^k$. Again, Raychaudhuri's equation is as follows:

$$\dot{\theta} = \frac{\theta^2}{3} - 2\sigma^2 + 2\omega^2 - R_s \quad (9)$$

where the symbols have their usual meanings. Thus, from equations (8) and (9) we have

$$\frac{A^2\theta^2}{3} + A^2(2\omega^2 - \sigma^2 + R_s) + A_{k,0}A^k\theta + A^kA_{k0,0} = 0.$$

Writing $\epsilon = 2\omega^2 - 2\sigma^2 - R_s$ and solving the quadratic equation, we finally derive

$$\theta = \frac{3}{2}\left[-\frac{A_{k,0}A^k}{A^2} \pm \sqrt{\frac{(A_{k,0}A^k)(A_{k,0}A^k)}{A^4} - \frac{4}{3}\left(\frac{A_{k,0}A^k}{A^2} + \epsilon\right)}\;\right] \quad (10)$$

Here we can derive some interesting conclusions. When the discriminant of the above solution is zero, we have another quadratic equation of the form

$$p^2 - \frac{4}{3}p - \frac{4}{3}\epsilon = 0, \qquad p = \frac{A_{k,0}A^k}{A^2},$$

whose solution is of the form

$$p = \frac{2}{3} \pm \frac{2}{3}\sqrt{1 - 3\epsilon},$$

which implies that $\epsilon \geq \frac{1}{3}$.
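As a quick symbolic sanity check of the root formula (10): assuming the quadratic above reduces, after dividing by $A^2$ and writing $p = A_{k,0}A^k/A^2$, to $\theta^2/3 + p\theta + (p + \epsilon) = 0$ (the reduced form is our assumption), both claimed roots satisfy it identically:

```python
import sympy as sp

theta, p, eps = sp.symbols('theta p epsilon')
quadratic = theta**2 / 3 + p * theta + (p + eps)   # assumed reduced form

for sign in (1, -1):
    root = sp.Rational(3, 2) * (-p + sign * sp.sqrt(p**2 - sp.Rational(4, 3) * (p + eps)))
    assert sp.simplify(quadratic.subs(theta, root)) == 0  # matches Eq. (10)
print("both signs of Eq. (10) solve the reduced quadratic")
```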
On the other hand, if we have

$$\frac{(A_{k,0}A^k)(A_{k,0}A^k)}{A^4} = \frac{4}{3}\left(\frac{A_{k,0}A^k}{A^2} + \epsilon\right),$$

then in such a scenario

$$\theta = -\frac{3}{2}\frac{A_{k,0}A^k}{A^2}.$$

This may be considered the initial value of $\theta$ at the birth of the universe, when all geodesics were focused at a particular point; after the big bang, as the universe began to evolve, the value of $\theta$ evolved, accumulating the discriminant term. Again, if the Raychaudhuri scalar is such that

$$R_s > \frac{A_{k,0}A^k}{A^2} + 2\omega^2 - 2\sigma^2,$$

then

$$\sqrt{\frac{(A_{k,0}A^k)(A_{k,0}A^k)}{A^4} - \frac{4}{3}\left(\frac{A_{k,0}A^k}{A^2} + \epsilon\right)} > \frac{A_{k,0}A^k}{A^2},$$

so $\theta$ can take positive values as well. Therefore, writing

$$\varepsilon = -\frac{A_{k,0}A^k}{A^2} \pm \sqrt{\frac{(A_{k,0}A^k)(A_{k,0}A^k)}{A^4} - \frac{4}{3}\left(\frac{A_{k,0}A^k}{A^2} + \epsilon\right)},$$

we would have

$$\theta = \pm\frac{3}{2}\varepsilon \quad (11)$$

This is what equation (7) insinuates too. Hence, essentially, $\dot{\theta}$ can diverge to both positive and negative infinity. The physical significance of this conclusion is novel and significant: geodesics can diverge just as they can converge. As a consequence, there will exist a point of exclusivity, akin to a point of singularity. This point of exclusivity can be looked upon as the origin of all matter and energy and of the formation of spacetime as we know it. We understand immediately how important the Raychaudhuri scalar and the expansion scalar are. However, it should be borne in mind that the notion and methodology innovated here stand in stark contrast with the theory of the Big Rip [17], particularly because no speculative, mysterious energy is invoked here; it is essentially the geometry trying to explain the evolution of the universe.

3. The evolution equation

Now we shall substantiate the novel result of the preceding section by taking a different route. First, let us consider a covariant vector $B_i$ whose transport between two different points of a Riemannian manifold is independent of the path and which is not covariantly constant; then it is known that the derivative of this vector field is given as [18]

$$\frac{\partial B_i}{\partial x^k} = \Gamma^l_{ik}B_l \quad (12)$$

We assume that $B_i$ is thrice differentiable. Thus we have the following from relation (12):

$$\frac{\partial^2 B_i}{\partial x^n\partial x^k} = \frac{\partial\Gamma^l_{ik}}{\partial x^n}B_l + \Gamma^l_{ik}\frac{\partial B_l}{\partial x^n} \quad (13)$$

and

$$\frac{\partial^3 B_i}{\partial x^m\partial x^n\partial x^k} = B_{i,mnk} = \frac{\partial^2\Gamma^l_{ik}}{\partial x^m\partial x^n}B_l + \Gamma^l_{ik}\frac{\partial^2 B_l}{\partial x^m\partial x^n} + \frac{\partial\Gamma^l_{ik}}{\partial x^n}\frac{\partial B_l}{\partial x^m} + \frac{\partial\Gamma^l_{ik}}{\partial x^m}\frac{\partial B_l}{\partial x^n} \quad (14)$$

Now, computing $B_{i,mkn}$, subtracting the resultant equation from (14), and then changing the indices as $l \to j$, we have the equation

$$B_{i,mnk} - B_{i,mkn} = \Gamma^j_{ik,mn}B_j - \Gamma^j_{in,mk}B_j + \Gamma^j_{ik}B_{j,mn} - \Gamma^j_{in}B_{j,mk} + \vartheta \quad (15)$$

where

$$\vartheta = \Gamma^j_{ik,n}B_{j,m} + \Gamma^j_{ik,m}B_{j,n} - \Gamma^j_{in,k}B_{j,m} - \Gamma^j_{in,m}B_{j,k}$$

(we write $\vartheta$ for this auxiliary quantity to avoid confusion with the expansion scalar $\theta$ of Section 2), and the comma denotes the ordinary partial derivative. Now, we know the following relations involving the Riemann-Christoffel (RC) tensor:

$$B_{i;kn} - B_{i,nk} = -R^m{}_{kni}B_m \qquad\text{and}\qquad B_{i;nk} - B_{i,kn} = -R^m{}_{nki}B_m,$$

where the semicolon denotes the covariant derivative. Therefore, from equation (15) we can write

$$\partial_m[B_{i,nk} - B_{i,kn}] = \partial_m[2R^j{}_{ikn}B_j + \eta B_i] = \Gamma^j_{ik,mn}B_j - \Gamma^j_{in,mk}B_j + \Gamma^j_{ik}B_{j,mn} - \Gamma^j_{in}B_{j,mk} + \vartheta,$$

where $\eta B_i = B_{i;kn} - B_{i;nk}$ ($\eta$ being a covariant-derivative operator).
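Before proceeding, the transport rule (12) can be made tangible numerically: the sketch below parallel-transports a covector around a latitude circle of the unit sphere using the standard Christoffel symbols, and after a full loop the components return rotated by the holonomy angle $2\pi\cos\theta_0$, illustrating how the path dependence of transport encodes curvature. (This is an independent illustration under textbook conventions, not a computation from the paper.)

```python
import numpy as np

theta0 = np.pi / 3                     # colatitude of the transport circle
cot = 1.0 / np.tan(theta0)             # Gamma^phi_{theta phi} = cot(theta)
sc = np.sin(theta0) * np.cos(theta0)   # Gamma^theta_{phi phi} = -sin*cos

B = np.array([1.0, 0.0])               # covariant components (B_theta, B_phi)
steps = 200_000
dphi = 2 * np.pi / steps
for _ in range(steps):                 # dB_i/dphi = Gamma^l_{i phi} B_l, Eq. (12)
    B = B + dphi * np.array([cot * B[1], -sc * B[0]])

print("final components:", B)
print("expected B_theta = cos(2*pi*cos(theta0)) =", np.cos(2 * np.pi * np.cos(theta0)))
```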
The preceding relation yields the premature evolution equation for the RC (curvature) tensor:

$$2\partial_m[R^j{}_{ikn}]B_j + 2R^j{}_{ikn}B_{j,m} + [\Gamma^j_{in,mk} - \Gamma^j_{ik,mn}]B_j + \Gamma^j_{in}B_{j,mk} - \Gamma^j_{ik}B_{j,mn} + \partial_m(\eta B_i) - \vartheta = 0 \quad (16)$$

It is worth noting that the vector fields $B_i$ can be related to all the necessary features of the manifold and, in this regard, do not depend explicitly on the Christoffel symbols. Now, with the differential equation

$$\frac{\partial B_i}{\partial x^n} = \Gamma^m_{in}B_m,$$

equation (13) becomes

$$B_{i,kn} = \Gamma^m_{in,k}B_m + \Gamma^m_{in}\Gamma^p_{mk}B_p.$$

Differentiating this again and rearranging, we get

$$\Gamma^m_{in,jk}B_m = B_{i,jkn} - \Gamma^m_{in,k}\Gamma^r_{mj}B_r - \Gamma^m_{in}\Gamma^p_{mk,j}B_p - \Gamma^m_{in,j}\Gamma^p_{mk}B_p - \Gamma^m_{in}\Gamma^p_{mk}B_{p,j}.$$

Now, interchanging the indices $m$ and $j$ ($m \leftrightarrow j$), we have

$$\Gamma^j_{in,mk}B_j = B_{i,mkn} - \Gamma^j_{in,k}\Gamma^r_{jm}B_r - \Gamma^j_{in}\Gamma^p_{jk,m}B_p - \Gamma^j_{in,m}\Gamma^p_{jk}B_p - \Gamma^j_{in}\Gamma^p_{jk}B_{p,m} \quad (17)$$

Similarly, we would have

$$\Gamma^j_{ik,mn}B_j = B_{i,mnk} - \Gamma^j_{ik,n}\Gamma^r_{jm}B_r - \Gamma^j_{in}\Gamma^p_{jn,m}B_p - \Gamma^j_{in,m}\Gamma^p_{jn}B_p - \Gamma^j_{ik}\Gamma^p_{jn}B_{p,m} \quad (18)$$

Again, since $\Gamma^j_{in,k}B_j = B_{i,kn} - \Gamma^j_{in}\Gamma^p_{jk}B_p$, equations (17) and (18) can be rewritten respectively as

$$\Gamma^j_{in,mk}B_j = B_{i,mkn} - \Gamma^j_{in}\Gamma^p_{jk}\Gamma^q_{pm}B_q - \delta^j_r\Gamma^r_{jm}\{B_{i,kn} - \Gamma^j_{in}\Gamma^p_{jk}B_p\} - \Gamma^j_{in}\{B_{j,mk} - \Gamma^p_{jk}\Gamma^q_{pm}B_q\} - \delta^j_p\Gamma^p_{jk}\{B_{i,mn} - \Gamma^j_{in}\Gamma^p_{jm}B_p\} \quad (19)$$

and

$$\Gamma^j_{ik,mn}B_j = B_{i,mnk} - \Gamma^j_{ik}\Gamma^p_{jn}\Gamma^q_{pm}B_q - \delta^j_r\Gamma^r_{jm}\{B_{i,nk} - \Gamma^j_{ik}\Gamma^p_{jn}B_p\} - \Gamma^j_{ik}\{B_{j,mn} - \Gamma^p_{jn}\Gamma^q_{pm}B_q\} - \delta^j_p\Gamma^p_{jn}\{B_{i,mk} - \Gamma^j_{ik}\Gamma^p_{jm}B_p\} \quad (20)$$

Now, subtracting equation (20) from equation (19) and rearranging, we have

$$B_j[\Gamma^j_{in,mk} - \Gamma^j_{ik,mn}] = (B_{i,mkn} - B_{i,mnk}) + \Gamma^j_{jm}(B_{i,nk} - B_{i,kn}) + \Gamma^j_{ik}B_{j,mn} - \Gamma^j_{in}B_{j,mk} + \delta^j_p(\Gamma^p_{jn}B_{i,mk} - \Gamma^p_{jk}B_{i,mn}) + 2\delta^j_pB_{j,m}(\Gamma^j_{in}\Gamma^p_{jk} - \Gamma^j_{ik}\Gamma^p_{jn}) \quad (21)$$

Let us consider the second term in parentheses on the right-hand side of the last equation. Multiplying by $B^2B^j = B^jB^jB_j = B^jB_j^2$, we shall have

$$B^2B^j\Gamma^j_{jm}(B_{i,nk} - B_{i,kn}) = B^jB^jB_j\Gamma^j_{jm}(B_{i,nk} - B_{i,kn}) = B^2B_{j,m}(B_{i,nk} - B_{i,kn}).$$

Clearly, there is a breakdown of index notation here, pertinent to the index $j$. We shall elaborate this scenario now, and we shall find the above equation to be useful subsequently. We make an ansatz that, in this special scenario, the first term in parentheses in equation (21), namely $(B_{i,mkn} - B_{i,mnk})$, becomes explicitly independent of the index $j$ present on the left-hand side. The rationale can be attributed to the breakdown of some structure-preserving endomorphism that preserves the geometry of the manifold, whereby the RC tensor accrues a new upper index and the index notation breaks down. The feasibility of this rationale will become evident later, while elucidating equation (30). So, essentially, the RC tensor arising from the breakdown will be independent of the index $j$; this causes the breakdown of index notation mentioned earlier. Therefore, under this ansatz we may write

$$B_{i,mkn} - B_{i,mnk} = -\partial_m[2R^p{}_{ikn}B_p + \eta B_i] = -2\partial_m[R^p{}_{ikn}]B_p - 2R^p{}_{ikn}B_{p,m} - \partial_m(\eta B_i),$$

where we have introduced a new index $p$ such that $j \neq p$.
Thus, using (21) and multiplying both sides of equation (16) by $B^2 = B^jB_j = \delta^{jj}B_jB_j$, we derive

$$2B^jB_j^2[\partial_m\{R^j{}_{ikn}\}B_j - \partial_m\{R^p{}_{ikn}\}B_p] + 2B^2B^j[\{R^j{}_{ikn}\}B_{j,m} - \{R^p{}_{ikn}\}B_{p,m}] + B^2B_{j,m}(B_{i,nk} - B_{i,kn}) + \delta^j_pB^jB_j^2(\Gamma^p_{jn}B_{i,mk} - \Gamma^p_{jk}B_{i,mn}) + 2\delta^j_pB^jB_j^2B_{j,m}(\Gamma^j_{in}\Gamma^p_{jk} - \Gamma^j_{ik}\Gamma^p_{jn}) - B^jB_j^2\vartheta = 0 \quad (22)$$

Now, using the relation $\Gamma^j_{in,k}B_j = B_{i,kn} - \Gamma^j_{in}\Gamma^p_{jk}B_p$, we compute $\vartheta$ as follows:

$$B_j^2\vartheta = B^jB_{j,m}(B_{i,nk} - \Gamma^j_{ik}\Gamma^p_{jn}B_p) + B^jB_{j,n}(B_{i,km} - \Gamma^j_{im}\Gamma^p_{jk}B_p) - B^jB_{j,m}(B_{i,kn} - \Gamma^j_{in}\Gamma^p_{jk}B_p) - B^jB_{j,k}(B_{i,nm} - \Gamma^j_{im}\Gamma^p_{jn}B_p) \quad (23)$$

from which we obtain

$$B_j^2\vartheta = B_{j,m}(B^jB_{i,nk} - B_{i,k}B_{j,n}) + B_{j,n}(B^jB_{i,km} - B_{i,m}B_{j,k}) - B_{j,m}(B^jB_{i,kn} - B_{i,n}B_{j,k}) - B_{j,k}(B^jB_{i,nm} - B_{i,m}B_{j,n}) \quad (24)$$

Rearranging the terms, we have

$$B_j^2\vartheta = B^jB_{j,m}(B_{i,nk} - B_{i,kn}) + B^j(B_{j,n}B_{i,km} - B_{j,k}B_{i,nm}) + B_{j,m}(B_{i,n}B_{j,k} - B_{i,k}B_{j,n}).$$

Also, we have

$$B^jB_j\delta^j_pB^j(\Gamma^p_{jn}B_{i,mk} - \Gamma^p_{jk}B_{i,mn}) = B^2(B_{j,n}B_{i,mk} - B_{j,k}B_{i,mn})$$

and

$$B^j\delta^j_pB_j^2B_{j,m}(\Gamma^j_{in}\Gamma^p_{jk} - \Gamma^j_{ik}\Gamma^p_{jn}) = B^jB_{j,m}(B_{i,n}B_{j,k} - B_{i,k}B_{j,n}).$$

Therefore, the revised form of equation (22) is

$$2B^2B^j[\partial_m\{R^j{}_{ikn}\}B_j - \partial_m\{R^p{}_{ikn}\}B_p] + 2B^2B^j[\{R^j{}_{ikn}\}B_{j,m} - \{R^p{}_{ikn}\}B_{p,m}] + B^2B^j(B_{j,n}B_{i,mk} - B_{j,k}B_{i,mn}) + 2B^jB_{j,m}(B_{i,n}B_{j,k} - B_{i,k}B_{j,n}) - B^2B^j(B_{j,n}B_{i,km} - B_{j,k}B_{i,nm}) - B^jB_{j,m}(B_{i,n}B_{j,k} - B_{i,k}B_{j,n}) = 0 \quad (25)$$

Again, as we have seen before, $B_{i,nk} - B_{i,kn} = 2R^j{}_{ikn}B_j + \eta B_i$, where $\eta B_i = B_{i;kn} - B_{i;nk}$. And we know the relation for a contravariant vector,

$$B^n{}_{;ik} - B^n{}_{;ki} = R^n{}_{ikl}B^l.$$

Now, since the metric tensor is invariant with respect to the covariant derivative, lowering the index of the vector we obtain

$$B_{l;ik} - B_{l;ki} = R^n{}_{ikl}B_n.$$

Thus

$$B_{i,nk} - B_{i,kn} = 2R^j{}_{ikn}B_j + R^q{}_{kij}B_q.$$

Using this relation, we finally derive the curvature evolution equation as follows:

$$2B^2B^j[\partial_m\{R^j{}_{ikn}\}B_j - \partial_m\{R^p{}_{ikn}\}B_p] + 2B^2B^j[\{R^j{}_{ikn}\}B_{j,m} - \{R^p{}_{ikn}\}B_{p,m}] + B^2B^jB_{j,n}(2R^j{}_{ikm}B_j + R^q{}_{kij}B_q) + B^2B^jB_{j,k}(2R^j{}_{imn}B_j + R^q{}_{mij}B_q) + B^jB_{j,m}(B_{i,n}B_{j,k} - B_{i,k}B_{j,n}) = 0 \quad (26)$$

The mathematical significance is immediately apparent; the physical significance will become manifest in the subsequent parts of the paper. Let us consider the special case where $k = n$. In such a scenario, the preceding equation reduces to

$$2B^2B^j[\partial_m\{R^j{}_i\}B_j - \partial_m\{R^p{}_i\}B_p] + 2B^2B^j[\{R^j{}_i\}B_{j,m} - \{R^p{}_i\}B_{p,m}] + B^2B^jB_{j,n}B_q(R^q{}_{nij} + R^q{}_{mij}) = 0 \quad (27)$$

Again, $R^p{}_i = \delta^p_jR^j{}_i$ and $B_p = \delta^j_pB_j$. Also, taking into consideration another special case, $n = q = m$, we have

$$2B^2B_j^2\,\partial_mR^j{}_i[1 - \delta^p_j\delta^j_p] + 2B^2B^jB_{j,m}R^j{}_i[1 - \delta^p_j\delta^j_p] + B^2B^jB^mB_{j,m}R_{ij} = 0 \quad (28)$$

which is another form of the evolution equation (26), with $m = n = q = k$. Now, if the ansatz breaks down and $j = p$, then from (28) we have

$$\delta^{jm}\delta^{ij}B_{j,m}B^2R_{ij}B^iB^j = 0,$$

where $R_s = R_{ij}B^iB^j$ is the Raychaudhuri scalar. So

$$B_{j,m}B^2R_s = \partial_m(B^2R_s) - B^2\partial_mR_s = 0 \;\Rightarrow\; B^2 = \text{const.}$$
Assuming that this constant does not change the structural form and properties of $R_s$, we can write, without loss of generality,

$$B^2R_s \sim R_s \quad (29)$$

which can be looked upon as an endomorphism of the set of Raychaudhuri scalars in this particular cosmology (with respect to the index $j$), given as

$$B^2 : R_s \longmapsto R_s \quad (30)$$

which implies $B^2 \circ R_s = R_s$. So, when the ansatz breaks down, it corresponds to the consideration of an endomorphism. On the other hand, we assumed earlier that when an endomorphism breaks down, the ansatz holds. This substantiates the plausibility of associating the breakdown of an endomorphism with the ansatz we introduced; essentially, the two can be considered correlated and complementary. Another interpretation of the endomorphism is that $B_j$ is a timelike unit vector field with respect to the index $j$, which insinuates that the Raychaudhuri scalar precludes all vector fields whose self scalar products are not constant.

Now let us return to equation (26). Using the Kronecker delta, it can be rewritten as

$$2B^2B_j^2\,\partial_m\{R^j{}_{ikn}\}[1 - \delta^p_j\delta^j_p] + 2B^2B^jB_{j,m}\{R^j{}_{ikn}\}[1 - \delta^p_j\delta^j_p] + B^2B^jB_{j,n}(2R^j{}_{ikm}B_j + R^q{}_{kij}B_q) + B^2B^jB_{j,k}(2R^j{}_{imn}B_j + R^q{}_{mij}B_q) + B^jB_{j,m}(B_{i,n}B_{j,k} - B_{i,k}B_{j,n}) = 0 \quad (31)$$

Again, Brouwer's fixed-point theorem [19] states that for any continuous function $f$ mapping a compact convex set to itself there is a fixed point. Therefore, considering a continuous mapping $f$ in our Riemannian manifold and a compact, geodesically convex vector field $B$ comprised of timelike vectors $B_i$, we shall have $f : B \longmapsto B$ and a fixed point under this automorphism. Incidentally, choosing the index $j$, we can infer that there is a fixed point with respect to this index in the vector field (or the geodesic field) through which the family of vectors $B_j$ is parallel transported. Consequently, the vectors $B_j$ will be constant irrespective of the coordinates and the geometric structure of the manifold. Thus, equation (31) becomes

$$2B^2B_j^2\,\partial_m\{R^j{}_{ikn}\}[1 - \delta^p_j\delta^j_p] = 0 \;\Rightarrow\; R^j{}_{ikn} = \text{const.} \quad (32)$$

Hence, the curvature tensor returns to its initial state with respect to the index $j$. This corresponds to the statement of Poincare's recurrence theorem [20], if one considers the whole manifold to be a system. Now let us consider equation (28). Writing $1 - \delta^p_j\delta^j_p = \delta$, we have

$$2B^2B_j^2\,\delta\,\delta^{jk}\partial_mR_{ik} + 2B^2B^jB_{j,m}\,\delta\,\delta^{jk}R_{ik} + B^2B^jB^mB_{j,m}\,\delta^k_jR_{ik} = 0$$

or

$$2B^2B_j^2\,\delta^{jk}\partial_mR_{ik} + 2B^jB_{j,m}\,\delta^{ji}R_{ik}B^iB^k + B^2B_{j,m}\,\delta^k_j\delta^{ji}\delta^{mk}R_{ik}B^iB^k = 0.$$

Since we have considered the case where $j \neq p$, we have $\delta = 1$. So, using the expression for the Raychaudhuri scalar $R_s$ and rearranging, we finally have

$$B_{j,m}R_s = -\rho B^2B_j^2R_{ik,m} \quad (33)$$

where $\rho = 2\delta^{jk}(2\delta^{ji}B_j^{-1} + \delta^k_j\delta^{ji}\delta^{mk})^{-1} = 2\delta^{jk}(2\delta^{ji}B_j^{-1} + \delta^{mi})^{-1}$. Thus, we have obtained the result that, under certain conditions, the Raychaudhuri scalar depends on the derivative of the Ricci tensor $R_{ik}$, the associated timelike vector field $B_j$, and its derivatives.
Since it is known that $R_s$ is the trace of the tidal tensor, epitomizing the relative acceleration due to gravity of two objects separated by an infinitesimal distance, and that $R_{ik}$ measures the change in geometry as an object moves along geodesics in the space, we can conclude: for some particular timelike vector field in a Riemannian or pseudo-Riemannian manifold, the relative acceleration due to gravity decreases with increasing curvature, and vice versa.

4. Sectional curvature

In this section we analyze and discuss the results of the preceding section. First, let us consider the parameter $\rho$ of equation (33). For $j = k \neq i$ and $m = i$ we have

$$B_{j,i}R_s = -2B^2B_j^2R_{ij,i} \quad (34)$$

where $R_s = R_{ij}B^iB^j$. On the other hand, for $i = j = k$ and $i \neq m$, we have $R_s = R_{jj}B^jB^j$, and hence we obtain

$$B_{j,m}R_s = -B^2B_j^3R_{jj,m} \quad (35)$$

Now, equation (34) can also be written as

$$B_{j,i}R_s = -B^2B_j^2[R_{ij,i} + R_{ij,i}] \quad (36)$$

If we now take into consideration the automorphism introduced in the previous section, then $B_{j,i} = 0$, and consequently

$$R_{ij,i} + R_{ij,i} = 0 \quad (37)$$

The general implication of this equation is that the Ricci curvature $R_{ij}$ is constant with respect to the index $i$. However, there is another possible implication: that the first term takes a positive value and the second a negative value, a notion that might seem erroneous but can serve as an alternative explanation. To be precise,

$$|R_{ij,i}| = \pm R_{ij,i} \quad (38)$$

which essentially insinuates that

$$|R_{ij}| = \pm R_{ij} \quad (39)$$

i.e., the Ricci curvature can take both positive and negative values. Again, since $R_{ij}$ is obtained by contracting the RC tensor, which in turn is related to the sectional curvature of a Riemannian manifold (with respect to the given manifold and two linearly independent tangent vectors at the same point), we can conclude that the sectional curvature will also take both positive and negative values. This bespeaks both geodesic convergence and divergence, on account of Rauch's comparison theorem [21, 22], which states that for positive sectional curvature geodesics tend to converge, while for negative sectional curvature geodesics tend to diverge. Essentially, under special circumstances, geodesics can diverge. Geodesic convergence leads to a singularity; similarly, geodesic divergence would lead to a point of exclusivity, as shown previously by resorting to Raychaudhuri's equation. It is interesting to point out that during the inflationary era curvature played a significant role in the dynamics, as researchers have found [23].

5. Discussions

In the present article we have established an equation epitomizing the evolution of curvature, resorting to an ansatz originating from the breakdown of an endomorphism correlated with the Raychaudhuri scalar. It is also shown that the ansatz and the endomorphism are interrelated, in the sense that one precludes the other. From the aforementioned considerations we have also found that the Riemann-Christoffel curvature tensor tends to follow Poincare's recurrence theorem, and thereby the curvature returns to its initial value after a certain period of time. Another interesting consequence is the existence and feasibility of negative curvature, which entails geodesic divergence. This is also validated by using the expansion scalar, which is shown to take both positive and negative values.
This negative value, and that of the Riemann-Christoffel tensor, indicates that there might exist a point of exclusivity (the opposite of a point of singularity), a result that has been derived from the Raychaudhuri equation too. Ostensibly, the notion of such a point is in a sense a speculative extrapolation and demands an ample amount of study and research. But the prospect is something worth investigating, at the very least." + }, + { + "url": "http://arxiv.org/abs/2101.01670v1", + "title": "Rain Sensing Automatic Car Wiper Using AT89C51 Microcontroller", + "abstract": "The turn of the century has seen a tremendous rise in technological advances\nin the field of automobiles. With 5G technology on its way and the development\nin the IoT sector, cars will start interacting with each other using V2V\ncommunications and become much more autonomous. In this project, an effort is\nmade to move in the same direction by proposing a model for an automatic car\nwiper system that operates on sensing rain and snow on the windshield of a car.\nWe develop a prototype for our idea by integrating a servo motor and raindrop\nsensor with an AT89C51 Microcontroller.", + "authors": "Abhishek Das, Vivek Dhuri, Ranjushree Pal", + "published": "2021-01-02", + "updated": "2021-01-02", + "primary_cat": "eess.SP", + "cats": [ + "eess.SP" + ], + "main_content": "Introduction

Today's car wipers are manual systems that work on the principle of manual switching, so here we propose an automatic wiper system that switches on upon detecting rain and stops when the rain stops. Our project automates the wiper system so that no manual intervention is needed. For this purpose, we use a rain sensor along with a microcontroller to drive the wiper motor. Our system uses a rain sensor to detect rain; this signal is then processed by the microcontroller to take the desired action. The rain sensor works on the principle of using water to complete its circuit: when rain falls on it, the circuit is completed and sends a signal to the microcontroller, which processes this data and controls the motor. This system is equally useful for aircraft, and a smaller version of it can be used by motorcyclists in their helmets so that they can ride easily in the rain. Figure 1 shows the block diagram of our proposed idea. We use assembly-language coding in the Arm Keil µVision 5 interface. For PCB design we use the EAGLE software (Monk & Amos, 2017), and for circuit simulation we have used the Proteus Design Suite (Su & Wang, 2010). In what follows, we discuss the related prior work for such a problem in the next section (3), followed by defining the problem statement (4) and discussing our novel approaches in section (5). We then present our experimental setup in section (6), followed by its results and discussion in section (7). Finally, we end the discussion with conclusions and future directions in the last section (8). The scripts and circuit designs are publicly available here.

[Figure 1: Block Diagram of Proposed Model]

3. Related Work

In the current scenario, only high-end vehicles employ intelligent rain-sensing automatic wiper systems. Our system is modeled to demonstrate how useful an automatic wiper system that adjusts its speed based on rainfall intensity can be. Such a system improves the safety of a ride.
There are many instances of accidents occurring during heavy rainfall due to lack of proper vision. In many cases, these accidents were due to manual errors by the driver (for example, not increasing the speed of the wiper). An automatic, intelligent system like ours removes such manual errors. Our system adjusts the wiper speed according to the intensity of rainfall and hence improves safety. Nowadays some models from Ford and Hyundai also implement an automatic wiper system in their vehicles [1]. Now we discuss some prior work in this area. Since the rain sensor used in automatic wiper systems is expensive, Kulkarni & Holalad designed a semi-automatic rain wiper that could be installed in economic vehicles. Their semi-automatic rain wiper had a cup sensor based on the rate of water flow and the volume of water: a conical cup with probes at different levels of height, where the probe levels were used to increase the wiper speed. Therefore, depending on the rain intensity, the wiper system could change its speed. Their design was economical and had three different stages of rain intensity. Similarly, Ashik & Basavaraju (2014) designed automatic wipers with mist control that worked with three different rain intensities: drizzling, medium rain, and heavy rain. The automatic wiper and internal wiper use a combination of a sensor, a microcontroller, and the wiper motor. The external and internal sensors are based on the principle of conductance. The microcontroller actuates the speed of the wiper motor by measuring the rain intensity as detected by the external sensor. Similarly, internal mist controllers placed on the windshield detect mist and signal the controller to actuate the internal wiper motor.

4. Experimental Setup

4.1. AT89C51 Microcontroller

The AT89C51 (Mazidi et al., 2005) is a low-power, high-performance CMOS 8-bit microcomputer with 4K bytes of Flash programmable and erasable read-only memory. The on-chip Flash allows the program memory to be reprogrammed in-system or by a conventional non-volatile memory programmer. By combining a versatile 8-bit CPU with Flash on a monolithic chip, the Atmel AT89C51 is a powerful microcomputer that provides a highly flexible and cost-effective solution for many embedded control applications. Figure 2 shows the pin configuration of the 8051 microcontroller.

[Figure 2: 8051 Microcontroller pin configuration]
[Figure 3: Rain Sensor Module]
[Figure 4: Servo Motor]

4.2. Rain Sensor Module

A rain sensor module is an easy tool for rain detection (Gupta et al.). It can be used as a switch when a raindrop falls on the sensing board, and for measuring rainfall intensity. Figure 3 shows a depiction of a typical rain sensor module. Due to its compact design and light weight, it can be easily integrated into any system. The module features a rain board with a separate control board for convenience, a power indicator LED, and sensitivity adjustable through a potentiometer. The raindrop sensor is a board coated with nickel in the form of lines, and it works on the principle of Ohm's law: when there is no raindrop on the board, the resistance is high, so we sense a high voltage according to V = IR.
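This divider behaviour can be sketched as follows; the fixed resistor value and the classification thresholds below are illustrative assumptions, not calibrated figures from the board.

```python
VCC, R_FIXED = 5.0, 10_000.0            # supply (V) and divider resistor (ohm)

def sensed_voltage(r_sensor: float) -> float:
    """Voltage across the rain board in a simple divider (V = IR reasoning)."""
    return VCC * r_sensor / (R_FIXED + r_sensor)

def classify(v: float) -> str:
    if v > 4.0:
        return "dry"                    # near-open circuit, high resistance
    if v > 2.0:
        return "drizzle"
    return "heavy rain"

for r in (1_000_000.0, 20_000.0, 2_000.0):  # resistance drops as water bridges tracks
    v = sensed_voltage(r)
    print(f"R = {r:>9.0f} ohm -> V = {v:.2f} V -> {classify(v)}")
```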
Conversely, when a raindrop is present it reduces the resistance, because water conducts electricity and bridges the nickel lines in parallel, lowering the resistance and hence the voltage drop across the sensor.

4.3. Servo Motor

Servo motors (Sachin & Gaonkar, 2013) are self-contained mechanical devices used to control machines with great precision. Usually a servo motor is used to control angular motion from 0° to 180° or 0° to 90°. The servo motor can be moved to a desired angular position by sending pulse-width-modulated (Holtz, 1992) signals on the control wire; the servo understands the language of pulse-position modulation. A pulse of width varying from 1 millisecond to 2 milliseconds in a repeated time frame is sent to the servo around 50 times a second, and the width of the pulse determines the angular position. For example, a pulse of 1 millisecond moves the servo towards 0°, while a 2-millisecond pulse takes it to 180°. Pulse widths for intermediate angular positions can be interpolated accordingly; thus a pulse of width 1.5 milliseconds shifts the servo to 90°. It must be noted that these values are only approximations, and the actual behavior of a servo differs by manufacturer. A sequence of such pulses (50 per second) must be passed to the servo to sustain a particular angular position. When the servo receives a pulse, it retains the corresponding angular position for the next 20 milliseconds, so a pulse must be fed to the servo in every 20-millisecond time frame. Figure 4 shows an example of the servo motor we have used in our implementation, while Figure 5 shows the operation of the servo motor based on pulse-width-modulated signals.

[Figure 5: Operation of Servo based on Pulse Width Modulation]
[Figure 6: Flowchart]

4.4. Circuit Simulation and PCB Designing

The Proteus Design Suite by Labcenter Electronics provides a simple interface to design and simulate various circuits. It has a variety of electronic components with configurable settings for each, and it is an efficient way to sanity-check initial circuits before implementation. It offers various switches and lets one connect and visualize the flow in real time, providing error logs and failure cases. Figure 7 demonstrates the simulation of our circuit design. After selecting all the components and verifying the simulation in the software, we start the actual PCB design process, which includes steps like printing a layout from the EAGLE PCB design software, etching the PCB, drilling, integrating and soldering all the components, and finally testing the prototype. Figure 8 shows the layout of our printed circuit board in the EAGLE software, and Figure 9 shows the board after the etching process.

5. Results and" + }, + { + "url": "http://arxiv.org/abs/2101.00496v1", + "title": "Smart Car Features using Embedded Systems and IoT", + "abstract": "There has been a tremendous rise in technological advances in the field of\nautomobiles and autonomous vehicles. With the increase in the number of driven\nvehicles, the safety concerns with the same have also risen. The cases of\naccidents and life-threatening injuries have skyrocketed. It has become a\nnecessity to provide adequate safety measures in automobiles.
This project aims\nto develop a prototype for a smart vehicle system that provides the real-time\nlocation of the vehicle on detection of a crash and alerts the police station\nand relatives of the user; it has a panic button feature for a passenger's\nsafety. We also demonstrate a mechanism for cabin monitoring and an interactive\ninterface between a user and a car, where the user can inquire about the\ntemperature, humidity, and other variables inside the car remotely by sending a\ntext message to the GSM module which is present in the car. The GSM module\nconnects to the Arduino, which fetches the readings from sensors attached to it\nand sends them back to the user through a text message. We show the integration\nof an MQ3 alcohol sensor with Arduino for drunk-driving prevention.", + "authors": "Abhishek Das, Vivek Dhuri, Aditya Desai, Suyash Ail, Ameya Kadam", + "published": "2021-01-02", + "updated": "2021-01-02", + "primary_cat": "cs.HC", + "cats": [ + "cs.HC", + "cs.SY", + "eess.SY" + ], + "main_content": "Introduction

Driver safety has been an important feature in automobiles and has been made compulsory in various countries. An increasing number of amateur rash drivers, careless driving, and delayed access to first aid for victims have been major causes of death. Cases of harassment and robbery in cabs are rising as more people use modern-day cab services. Driver fatigue monitoring, accident prevention measures, GPS-based location and nearest-hospital alerts, smart braking systems, smart airbags, etc. are some of the features currently implemented in a few high-end luxury vehicles, but no cost-efficient model has been developed for low-end budget cars. It is important to provide accessible safety measures in the vehicle to minimize the risk of loss of life. This project aims to develop a cost-efficient smart vehicle system that can help aid the cause. Figure 1 shows the block diagram for our prototype. The primary objective of this project is to show how various sensors can be integrated with the Arduino or any microcontroller system, how to communicate with such a system remotely using technologies like GSM and GPS, and how to send commands to inquire about sensor readings and perform desired actions using the actuators connected to the system. We have developed a low-cost prototype to demonstrate our ideas and create a baseline implementation for research purposes in this relatively new domain of the Internet of Things. With the recent developments in the capacity to process enormous amounts of data from sensors, as well as communication technologies such as 5G, we believe our ideas can be scaled and deployed in real time. The scripts have been made publicly available to the research community for further development here. In what follows, we discuss the related prior work for such a problem in the next section (3), followed by defining the experimental setup (4) and discussing our novel approaches in section (5), followed by its results and discussion in section (6). Finally, we end the discussion with conclusions and future directions in the last section (7).

[Figure 1: Block Diagram of Proposed Model]
[Figure 2: GSM SIM 900A module]
3. Related Work

According to a statistical report (ind, 2016) published by the Department of Roads and Highways Transport on vehicle mishaps in the country in 2016, the country recorded 4,60,852 accidents in the year, resulting in 1,45,685 deaths; approximately 423 people died in 1,227 vehicle accidents every day. The data also states that at least 16 deaths out of 55 accidents every hour in a particular period occurred primarily because victims were unable to receive suitable treatment in time. Thus, if an alert system is made and an alarm is raised, it might become possible to save many lives. There has been prior work in the area of using GSM and GPS systems along with microcontrollers: Shinde et al. (2015) developed a similar tracking system using an embedded Linux board, namely a Raspberry Pi, and a GSM SIM900A module. The objective of their tracker was to raise an alert whenever the vehicle deviated from a predefined route set in the Raspberry Pi by the user; it also sent notifications when the vehicle exceeded a set speed limit. Saaid et al. (2014) implemented a vehicle location finder using a similar combination of GSM and GPS systems, particularly for the task of vehicle theft. The use of panic buttons in vehicles is an idea that hasn't been deployed in real-life applications yet. According to a newspaper article (hin, 2016), the Parliament of India will make it compulsory from the 1st of April 2018 for all public transport vehicles, including buses and cabs, to have a location tracker device and one or more panic buttons to alert the authorities in case of an emergency. The government has not made the installation of cameras in these vehicles mandatory, however, primarily citing privacy concerns and the fact that they would generate tremendous amounts of data every second; the technology to process such huge data sets is currently unavailable. With the development of the Internet of Things, this might become possible in the future using vehicle-to-vehicle communications. Nowadays a large portion of the population chooses to travel by cab and hence, keeping in mind the safety of commuters, developing such products is the need of the hour. Another study (deu) mentions that, according to a National Crime Records Bureau (NCRB) report, drunk driving was a major factor in road accidents: 99 per cent of the fatal accidents that occur on the highways are due to drunk driving, and there is no check on this. The majority of these accidents involved trucks, since truck drivers drive irresponsibly when fully drunk. Unless the nation starts a new system of checking for drunk driving on the highways, these fatalities cannot be reduced, as mentioned by a Joint Commissioner of Police. The current system of drunk-driver checking requires traffic police to make people blow into breath-analyzers; however, this is not sufficient to check every instance of drunk driving, due to the enormous number of vehicles on the roads, especially outside cities and on highways. Thus, an automatic monitoring system is needed to tackle this problem.

[Figure 3: Basic AT commands used with GSM module]

4. Experimental Setup

4.1. GSM SMS Alert System

The SIM900A modem is built around the dual-band GSM/GPRS SIM900A module from SIMCOM. It works on the 900/1800 MHz frequencies, which the SIM900A can search automatically. The baud rate is configurable from 1200 to 115200 through AT commands.
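For illustration, the basic AT-command handshake and a text-mode SMS send can be sketched host-side with pyserial as below; AT, AT+CMGF=1, and AT+CMGS are standard SIM900A commands, while the serial port, phone number, and delays are placeholders that may need tuning in practice.

```python
import time
import serial  # pyserial

def send_sms(port: str, number: str, text: str) -> None:
    gsm = serial.Serial(port, baudrate=9600, timeout=2)
    for cmd in ("AT", "AT+CMGF=1", f'AT+CMGS="{number}"'):
        gsm.write((cmd + "\r").encode())    # handshake, text mode, start message
        time.sleep(1)
    gsm.write(text.encode() + bytes([26]))  # Ctrl+Z (0x1A) submits the SMS
    gsm.close()

# send_sms("/dev/ttyUSB0", "+911234567890", "Test alert")  # placeholder values
```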
The SIM900A is a complete GSM module in an SMT package, designed with a powerful single-chip processor integrating an ARM926EJ-S core, allowing small dimensions and cost-effective solutions. Figure 2 shows a GSM SIM900A module.

[Figure 4: Neo 6m GPS module]
[Figure 5: Triggering of Airbag circuitry and Accident Alert]

4.2. GPS Tracking

The NEO-6m module shown in Figure 4 is a stand-alone GPS receiver featuring the high-performance u-blox 6 positioning engine. It is a flexible and cost-effective receiver that offers numerous connectivity options in a miniature 16.0 x 12.2 x 2.4 mm package. Its compact architecture and power and memory options make NEO-6m modules optimal for space-constrained, low-cost devices. Its acquisition engine, with 2 million effective correlators, can perform massively parallel frequency searches, so it can find a satellite within a short time. The 50-channel u-blox 6 positioning engine gives a time-to-first-fix of around 1-2 seconds. It has anti-jamming technology and an EEPROM for storing settings, which gives these receivers excellent navigation performance even in extremely difficult environments.

4.3. Arduino Uno Development Board

The Arduino Uno is a development board based on a dual-inline-package ATmega328 AVR microcontroller (Mazidi et al., 2005). It has 20 digital input/output pins, 6 of which can be used as pulse-width-modulated (Holtz, 1992) outputs and 6 as analog inputs. It has a 16 MHz crystal, a USB port, and an ICSP header. Programs can be loaded onto it from the Arduino software, an open-source IDE. The Arduino has a vast support community, which makes it very easy to get started with.

[Figure 6: SW420 Impact Sensor]
[Figure 7: MQ3 Ethanol Sensor]

5. Proposed Methodology

5.1. Accident Detection

When a car hits something with strong force, it starts to decelerate very rapidly. An impact sensor detects the change of velocity/amount of vibration. If the impact is great enough, the impact sensor triggers the airbag circuit and at the same time signals the Arduino to send an alert. Thus, when the impact is severe, the Arduino extracts the location by signalling the GPS module, which connects with the GPS satellites and retrieves the location of the car. These location coordinates, along with a Google Maps link, are sent to the designated mobile number as an SMS through the GSM module. The SW420 sensor module outputs '1's or '0's depending on the vibration, tilt, and external force applied to it: in the absence of vibration the module outputs logic '0', and in the presence of vibration it outputs logic '1'. It has a sensitivity control on the board. Figure 6 shows the SW420 impact sensor used in our prototype.

[Figure 8: Rain Sensor Module]

5.2. Passenger Safety

A panic button is placed such that whenever a passenger feels terror or discomfort for some reason, an alert message is raised by sending an SMS upon pressing the button. Multiple panic buttons can be placed at different spots in the vehicle and connected to the Arduino.

5.3. Drunk Driver Prevention

In this proposed system, an MQ-3 ethanol sensor, as in Figure 7, is placed on the steering wheel of the car or on the driver's seat belt, such that it can monitor the percentage of alcohol in the breath of the driver.
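A minimal sketch of this monitoring loop is shown below; the ADC helper (simulated here), the threshold value, and the alert hook are assumptions standing in for the actual Arduino firmware.

```python
import random
import time

ALCOHOL_LIMIT = 400              # assumed raw ADC threshold, not calibrated

def read_mq3() -> int:
    """Stand-in for reading the MQ-3 analog channel (0-1023); simulated here."""
    return random.randint(0, 1023)

def monitor(alert, samples: int = 10) -> None:
    for _ in range(samples):
        level = read_mq3()
        if level > ALCOHOL_LIMIT:
            alert(f"Alcohol level {level} above limit")  # e.g. SMS via GSM
        time.sleep(0.1)          # sampling period (assumed)

monitor(print)
```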
If the reading is found to be higher than the set limit, the Arduino signals the GSM module to send an alert to the driver's predefined safety number (such as a home number). Measures can also include not starting the car engine until the alcohol percentage reduces. When the user exhales, any ethanol present in their breath is oxidized to acetic acid; at the cathode, oxygen from the atmosphere is reduced. The overall reaction is the oxidation of ethyl alcohol. The charge flow produced by this reaction is measured and the resistance is calculated, which yields the different levels of intoxication that the Arduino determines.

5.4. Rain Sensing Automatic Wiper

Car wipers in existing models are controlled manually by the driver. Some high-end cars have an automatic feature, but due to cost factors it has not yet made its way into ordinary vehicles. A cost-effective version is proposed in this project, comprising a raindrop sensor (shown in Figure 8) connected to the microcontroller, in this case an Arduino. The rain sensor detects rain and sends the corresponding signal to the Arduino, which processes it to take the desired action. The rain sensor consists of nickel tracks; when water droplets connect two tracks, the circuit is completed and rain is detected. The raindrop sensor module is low-cost and precise for raindrop detection, and its sensitivity can be changed by rotating the screw on the board. It has a digital output pin to indicate whether water is present and an analog output pin to give a measure of the intensity of water. The module has a power indicator LED and a separate control board. A servo motor primarily contains a suitable motor, a gear-reduction unit, a position-measurement sensor, and control circuitry. Servo motors are highly precise in terms of rotation angle; they are lightweight, low-cost, compact motors that can be easily integrated into any circuit. The DC motor is connected to the gear unit, which gives feedback to the position sensor. The potentiometer adjusts displacement according to the present position of the motor shaft; as the resistance changes, a differential voltage is generated. A PWM wave is applied to the control wire, which is transformed into a voltage and compared with the signal generated by the position-sensor module. The control pin is connected to one of the Arduino's PWM-enabled pins.

6. Results and Discussion

[Figure 9: Demonstration of Prototype]

Figure 9 shows the prototype we have developed for demonstration. Figure 10 shows a screenshot of an alert sent by our system on detection of an accident; the text message contains the location coordinates of the car and can be sent to a police station or a designated relative by presetting the number in the system. Similarly, Figure 11 shows a screenshot of the message delivered when the panic button in the vehicle is pressed, containing the location coordinates and a link to open them on Google Maps. In addition to these alerts, various other information can be sent in case of an emergency by modifying the code in the system.

7." + }, + { + "url": "http://arxiv.org/abs/2101.00350v1", + "title": "Multi-Image Steganography Using Deep Neural Networks", + "abstract": "Steganography is the science of hiding a secret message within an ordinary\npublic message.
Over the years, steganography has been used to encode a lower\nresolution image into a higher resolution image by simple methods like LSB\nmanipulation. We aim to utilize deep neural networks for the encoding and\ndecoding of multiple secret images inside a single cover image of the same\nresolution.", + "authors": "Abhishek Das, Japsimar Singh Wahi, Mansi Anand, Yugant Rana", + "published": "2021-01-02", + "updated": "2021-01-02", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Steganography refers to the technique of hiding secret messages within a non-secret message in order to avoid detection during message transmission; the secret data is then extracted from the encoded non-secret message at its destination. The use of steganography can be combined with encryption as an extra step for hiding or protecting data. Traditionally, steganography is performed to embed low-resolution images onto a high-resolution image using naive methods like LSB manipulation. Motivation for the project comes from recent works such as (Baluja, 2017), (Hayes & Danezis, 2017), and (Zhu et al., 2018), which suggest the use of deep neural networks to model the data-hiding pipeline. These methods have significantly improved the efficiency in terms of maintaining the secrecy and quality of the encoded messages. Recently, similar work on audio-signal steganography, like (Kreuk et al., 2019), has shown that deep neural networks can be used to encode multiple audio messages onto a single cover message. We aim to make an effort in a similar direction by utilizing the ideas from the aforementioned papers to encode multiple images into a single cover image. Unlike traditional methods, we use cover and secret images of the same resolution, and we aim to keep the changes to the encoded cover image unnoticeable to human perception and statistical analysis, while at the same time keeping the decoded images highly intelligible. The scripts have been made publicly available to the research community for further development here. In what follows, we discuss the related prior work for such a problem in the next section (3), followed by Baseline Implementations in section (4), Datasets in section (5), and our Proposed Methodology in section (6), followed by its Results and Discussion in section (7). Finally, we end the discussion with Future Directions, Conclusion, and Acknowledgement in sections (8), (9), and (10) respectively. 3. Related Work Out of the several implementations, the two below are most aligned with and important to our goal. 3.1. Hiding Images in Plain Sight: Deep Steganography (Baluja, 2017) attempts to place a full-sized color image within another image of the same size. Deep neural networks are simultaneously trained to create the hiding and revealing processes and are designed to specifically work as a pair. The system is trained on images drawn randomly from the ImageNet database and works well on natural images from a wide variety of sources. Unlike many popular steganographic methods that encode the secret message within the least significant bits of the cover image, their approach compresses and distributes the secret image's representation across all of the available bits. The three components involved in the system are: 1. Preparation Network, which prepares the secret image to be hidden.
If the secret image (size M×M) is smaller than the cover image (N×N), the preparation network progressively increases the size of the secret image to the size of the cover, thereby distributing the secret image's bits across the entire N×N pixels. 2. Hiding Network, which takes as input the output of the preparation network and the cover image and creates the container image. The input to this network is an N×N pixel field, with the depth-concatenated RGB channels of the cover image and the transformed channels of the secret image. 3. Reveal Network, used by the receiver of the image; it is the decoder. The decoder network removes the cover image to reveal the secret image. Figure 1. The three components of the full system. Left: secret-image preparation. Center: hiding the image in the cover image. Right: uncovering the hidden image with the reveal network; this is trained simultaneously, but is used by the receiver. The paper by (Baluja, 2017) discusses how a trained system must learn to compress the information from the secret image into the least noticeable portions of the cover image; however, no explicit attempt has been made to actively hide the existence of that information from machine detection. They trained steganalysis networks as binary classifiers, using the unperturbed ImageNet images as negative samples and their containers as positive examples. The paper serves as a baseline for single-secret-image encoding; however, it does not address multi-image steganography. Figure 2. Model overview: the encoder E gets as input the carrier c and the message m; it encodes c using the carrier encoder Ec and concatenates Ec(c) with c and m to generate h. Then, the carrier decoder Dc generates the new encoded carrier, from which the message decoder Dm decodes the message m̂. During training, the reconstruction losses are applied between c and ĉ and between m and m̂, respectively. 3.2. Hide and Speak: Deep Neural Networks for Speech Steganography (Kreuk et al., 2019) implements steganography for speech data using deep neural networks, utilizing ideas from (Zhu et al., 2018) to extend the encoder network to audio signals. The architecture of the model comprises 3 sub-networks: 1. an Encoder Network (Ec), 2. a Carrier Decoder Network (Dc), and 3. a Message Decoder Network (Dm). In the carrier/cover encoder network, the encoded carrier (Ec(c)) is appended to the carrier (c) and the secret message (m), forming [Ec(c); c; m]. This output is fed to the carrier decoder (Dc), which outputs the carrier embedded with a hidden message. Finally, this is fed to the message decoder (Dm), which reconstructs the hidden message. The first part learns to extract a map of potential redundancies from the carrier signal; the second part utilizes the map to best "stuff" a secret message into the carrier such that the carrier is minimally affected; the third part learns to extract the hidden message from the steganographically modified carrier. All the components in these networks are gated convolutions: Ec is composed of 3 blocks of gated convolutions, Dc of 4, and Dm of 6, and each block contains 64 kernels of size 3×3 (a minimal sketch of such a gated-convolution block is given below).
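To illustrate, here is a minimal PyTorch sketch of one such gated-convolution block, assuming the common gating formulation (a feature convolution modulated element-wise by a sigmoid gate); the exact hyperparameters of (Kreuk et al., 2019) may differ.

```python
import torch
import torch.nn as nn

class GatedConvBlock(nn.Module):
    """One gated-convolution block: feature conv modulated by a sigmoid gate."""
    def __init__(self, in_ch, out_ch=64, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2  # keep spatial dimensions unchanged
        self.feature = nn.Conv2d(in_ch, out_ch, kernel_size, padding=pad)
        self.gate = nn.Conv2d(in_ch, out_ch, kernel_size, padding=pad)

    def forward(self, x):
        # The gate (values in [0, 1]) decides how much of each feature passes.
        return self.feature(x) * torch.sigmoid(self.gate(x))

# Example: a 3-block carrier encoder Ec over a 1-channel spectrogram-like input.
Ec = nn.Sequential(GatedConvBlock(1), GatedConvBlock(64), GatedConvBlock(64))
print(Ec(torch.randn(2, 1, 64, 64)).shape)  # torch.Size([2, 64, 64, 64])
```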
This paper demonstrates the capability to hide multiple secret messages in a single carrier, which aligns with our goals: in the paper, five independent speech messages are hidden in a single speech recording. This is achieved by 2 different approaches. One approach utilizes multiple decoders, with each decoder trained to decode a different message; the other utilizes a conditional decoder that also takes as input a code indicating the index of the message to be decoded. We borrowed the concept of multiple decoders from this paper and used it to retrieve multiple secret images from the encoded cover image, which looks like the original cover but has the secret images hidden inside it after they are passed through separate prep networks and concatenated together. For this, we extend the multiple-decoder loss defined in that paper to our use case. We take the reveal loss for each decoder as RevealLoss = λ_s Σ ||S − S'||^2 and, for the entire system, the sum of the reveal losses over all decoders plus the loss calculated on the cover image: L(c, M) = λ_c ||c − Dc(E(c, M))||^2 + λ_m Σ_i ||m_i − Dm(Dc(E(c, M)))||^2, where M = {m_i}_{i=1}^k. We used this approach to introduce multi-image steganography, taking the above idea from that paper and extending it to images to embed multiple images in one cover image and then retrieve them. 4. Baseline Implementations We aimed to implement a baseline single-image steganography model over which we could perform our extensions. Since (Baluja, 2017) has several implementations, we implemented two of the most popular ones and analyzed their suitability for the extensions of our model. The details of these implementations are as follows. 4.1. Ingham's Implementation (Ingham) is a PyTorch-based implementation which follows the architecture shown in Figure 1. The architecture includes a Prep Network, a Hidden Network, and a Reveal Network, and embeds a single secret image onto a single cover image. The model architecture is defined as follows: 1. Prep Network: two sets of three sequential layers consisting of (Conv2D + ReLU) combinations, concatenated and fed into the next set. 2. Hidden Network: similar to the prep network above, but including an extra sequential Conv2D layer for adding Gaussian noise to the cover; this allows the hidden information to be encoded in bits other than the LSBs of the cover image. 3. Reveal Network: similar to the above networks, with an extra Conv2D at the end. 4.1.1. IMPLEMENTATION DETAILS For full code please refer here. The model is explained below: 1. transformations: scaling, random crop, and normalization; 2. optimizer: Adam, with learning rate 0.001; 3. customized loss, as suggested by (Baluja, 2017): cover_loss + β × hidden_loss. 4.1.2. RESULTS The author of the implementation had shown desirable results for higher-resolution images. Since we used low-resolution images with an equal resolution for both secret and cover, we did not see similar results in our implementation. See Figure 3. Figure 3. The cover and hidden images before and after running the model, for the first 2 images, after 20 epochs.
The implementation generated lossy secret images (the left-side images in the results) while retaining the cover image almost completely (the right-side images in the results). Since other baseline models performed significantly better, we decided not to proceed with this implementation. 4.2. Alexandre's Implementation Alexandre's is another implementation (Muzio) of Baluja's paper, based on Keras. This model introduces features that others have not implemented, like noise addition and mean stabilization. We tried various models to see which configuration provides the best results and is fastest. The model architecture has three parts: 1. Preparation Network: transforms the secret image to be concatenated with the cover. 2. Hiding Network: converts the concatenated image into the encoded cover. 3. Reveal Network: extracts the secret image from the encoded cover. The hiding and reveal networks use aggregated Conv2D layers: 5 layers of 65 filters (50 3×3 filters, 10 4×4 filters, and 5 5×5 filters). The prep network uses 2 layers of similar structure. All Conv2D layers are followed by a ReLU activation. Figure 4. Baluja's implementation by Alexandre. 4.2.1. IMPLEMENTATION DETAILS For full code please refer here. The model is explained below: 1. the Adam optimizer, with learning rate 0.001 and a custom scheduler; 2. the model has been trained for 800 epochs with a batch size of 256 and an additional 100 epochs with a batch size of 32; 3. to make sure weights are updated only once, the reveal network's weights are frozen before adding it to the full model; 4. Gaussian noise with 0.01 standard deviation is added to the encoder's output before passing it through the decoder; 5. the mean sum of squared errors is used for calculating the decoder's loss. 4.2.2. RESULTS As can be seen in Figure 5, the model generates decent results on the Tiny ImageNet dataset, and the generated images showed minimal loss for both the cover and the secret images. Figure 5. The results showing the cover and hidden images before and after running the model for 900 epochs. Left to right, the columns are: cover image, secret image, encoded cover image, decoded secret image, diff cover image, diff secret image. We can notice that the differences between the original cover and the encoded cover are almost null, and the same holds for the original secret and the decoded secret image. Owing to the performance of this model on single-image steganography, we decided to use this implementation for our work going ahead. 5. Datasets Since our model does not have specific requirements pertaining to the classes of the images, we used the Tiny ImageNet (tin) dataset to obtain the secret and cover images. The dataset is a collection of 64×64×3 images, used by the Stanford CS231 class. Further extensions of the final model can also be applied to larger images from datasets like ImageNet (Deng et al., 2009). We used Tiny ImageNet for faster training. Our training set is made of a random subset of images from all 200 classes; 2000 images are randomly sampled, and the image vectors are normalized across RGB values. We split the entire training data into four equal parts, one for the cover images and the other three for the three secret images. 6. Proposed Methodology We aim to perform multi-image steganography, hiding three or more images in a single cover image. The embedded secret images must be retrievable with minimal loss, and the encoded cover image must look like the original cover image.
To perform this, we combine the ideas of (Baluja, 2017) and (Kreuk et al., 2019). From (Baluja, 2017) we take the network design of a prep and hiding network as the encoder and a reveal network as the decoder. To extend this to multiple images, we pass each secret image through its own prep network, concatenate the resulting outputs with the carrier image, and finally send this through the hiding network. We then take the idea of multiple decoders, one per secret image, from (Kreuk et al., 2019) to retrieve all the secret images from the container image. To improve the security of our image-retrieval model, we extend the idea presented by (Baluja, 2017) of distributing the secret images with noise throughout the original cover image instead of placing them in the LSBs of the original cover image. Use of multiple prep and reveal networks: In their implementation of multiple-audio-signal steganography, (Kreuk et al., 2019) suggested the use of multiple decoders to derive the decoded secrets from a single encoded cover; this technique is an extension of the same idea to the image domain. It does not require scaling down the size of the image or sacrificing the color channels of the secret images. There was an alternative technique of using conditional decoders instead of multiple decoders, but in this implementation we have only worked with multiple prep/reveal networks, so we decided to build our extension based on this technique. A brief description of the encoder/decoder framework under this technique is as follows: • ENCODER: consists of multiple prep networks, each corresponding to a separate secret image input. The prep network outputs are concatenated together with the cover image and then fed through the hiding network. • DECODER: the decoder network comprises multiple reveal networks, each of which is trained separately to decode its corresponding message. Figure 6. The DeepSteg architecture with multiple CNN-based sub-networks. Inside the encoder, the prep networks convert the input secret images into images that can be concatenated to the cover; the concatenation is then passed through the hiding network to generate the encoded cover. In the decoder, separate reveal networks are deployed to generate the decoded secrets from the encoded cover. 6.1. Model Architecture The model architecture is as described in Figure 6. The underlying architecture of each sub-network is as follows: 1. Prep Networks: each prep network is an aggregation of 2 layers, with each layer made up of 3 separate Conv2D layers. The numbers of channels for these 3 Conv2D layers are 50, 10, and 5, with kernel sizes of 3, 4, and 5 respectively. The stride remains 1 along both axes, and appropriate padding is added to each Conv2D layer to keep the output image dimensions unchanged. Each Conv2D layer is followed by a ReLU activation. 2. Hiding Network: an aggregation of 5 such layers, each made up of 3 separate Conv2D layers whose underlying structure is the same as in the prep network. 3. Reveal Network: each reveal network shares a similar underlying architecture with the hiding network, using 5 layers of similarly formed Conv2D layers. 6.2. Implementation Details For full code please refer here; a condensed sketch of this architecture is given below.
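As a complement to the full code, the following is a condensed tf.keras sketch (ours, not the released implementation) of the multi-prep/multi-reveal architecture just described; the final 3-channel projection layers are an assumption, since the exact output heads are not specified in the text.

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, n_layers):
    """Aggregated layer: parallel 3x3/4x4/5x5 Conv2D branches, concatenated."""
    for _ in range(n_layers):
        branches = [layers.Conv2D(f, k, padding="same", activation="relu")(x)
                    for f, k in [(50, 3), (10, 4), (5, 5)]]
        x = layers.Concatenate()(branches)  # 50 + 10 + 5 = 65 channels
    return x

cover = layers.Input((64, 64, 3))
secrets = [layers.Input((64, 64, 3)) for _ in range(3)]

# Encoder: one prep network per secret, concatenated with the cover,
# then passed through the hiding network.
prepped = [conv_block(s, n_layers=2) for s in secrets]
hidden = conv_block(layers.Concatenate()([cover] + prepped), n_layers=5)
encoded_cover = layers.Conv2D(3, 3, padding="same")(hidden)  # assumed head

# Decoder: one reveal network per secret image.
decoded = [layers.Conv2D(3, 3, padding="same")(conv_block(encoded_cover, 5))
           for _ in range(3)]

model = tf.keras.Model([cover] + secrets, [encoded_cover] + decoded)
model.summary()
```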
The training details are explained below: 1. the Adam optimizer is used with a custom LR scheduler; 2. the learning rate remains constant at 0.001 for the first 200 epochs, decreases to 0.0003 from epoch 200 to epoch 400, and decreases further to 0.00003 for the remaining iterations; 3. the model has been trained for 750 epochs with a batch size of 256 and an additional 400 epochs with a batch size of 32; 4. the Tiny ImageNet dataset has been used, with 64×64 images; the dataset is created by taking 10 images per class, 2000 images in total for train and test; 5. the train set is divided into 2 sections: the first 1000 images are used as secret images and the remaining 1000 as cover images; 6. the preparation and hiding networks share the same stacked Keras model and loss, while the reveal network has its own stacked model and its own loss function; 7. currently, the learning rate is 0.001; 8. to make sure weights are updated only once, the reveal network's weights are frozen before adding it to the full model; 9. Gaussian noise with 0.01 standard deviation is added to the encoder's output before passing it through the decoder; 10. the mean sum of squared errors is used for calculating the decoder's loss; 11. the loss used for the full model is Loss = λ_c ||C − C'||^2 + λ_s ||S_1 − S_1'||^2 + λ_s ||S_2 − S_2'||^2 + λ_s ||S_3 − S_3'||^2; 12. while training the reveal network, we only consider the secret-image component of the loss; 13. during the full-model training, the losses for both the cover and the secret images are taken into consideration; 14. currently we take both λ_s and λ_c as 1.0. Figure 7. Result of hiding two secret images. Left to right, the columns are: cover image, secret image 1, secret image 2, encoded cover image, decoded secret image 1, decoded secret image 2. 7. Results and Discussion Figure 7 depicts the results of hiding two secret images in a single cover image. The input images are depicted on the left side, while the encoder/decoder outputs are presented on the right-hand side. The encoded cover image looks similar to the original cover to a great extent and does not reveal information about the secret images. The results of hiding three secret images are shown in Figure 8; the encoded cover is more lossy than when only two secret images are used. The secret images are retrieved successfully in both cases. The losses obtained for these results after 750 epochs were: 1. loss of the entire setup: 182053.70; 2. loss for secret 1: 51495.24; 3. loss for secret 2: 39911.16; 4. loss for secret 3: 39337.07; 5. cover loss: 51310.23. The above results were taken for two and three secret images added to the cover image and then retrieved. As we increase the number of images, all of these losses are expected to increase, since more image features are being hidden in one single image; so, we need to find a threshold for how many images can be added to the cover image while still getting decent results. We also have not explored the λ_s value for the secret messages or the λ_c value for the cover image (a minimal sketch of the weighted loss they control is given below).
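For concreteness, here is a minimal Keras-style TensorFlow sketch of that weighted full-model loss (ours, not the released code); lambda_c and lambda_s correspond to λ_c and λ_s above, both set to 1.0 in our experiments.

```python
import tensorflow as tf

lambda_c, lambda_s = 1.0, 1.0  # weights for cover and secret reconstruction

def full_model_loss(cover_true, cover_pred, secrets_true, secrets_pred):
    """Sum-of-squared-errors loss over the cover and every secret image."""
    loss = lambda_c * tf.reduce_sum(tf.square(cover_true - cover_pred))
    for s_true, s_pred in zip(secrets_true, secrets_pred):
        loss += lambda_s * tf.reduce_sum(tf.square(s_true - s_pred))
    return loss

# Example with random 64x64x3 batches and three secrets.
c = tf.random.uniform((8, 64, 64, 3))
secrets = [tf.random.uniform((8, 64, 64, 3)) for _ in range(3)]
print(full_model_loss(c, c + 0.01, secrets, secrets).numpy())
```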
These parameters may help in correctly defining the loss equation and in obtaining clearer results for the secret and encoded images. Currently, for both experiments, we have taken λ_s and λ_c as 1. 8. Future Directions From the implementation perspective, we aim to: 1. increase the number of secret images while keeping the loss low; 2. explore λ_s and λ_c to see how they affect our results; 3. use conditional decoders instead of multiple decoders; 4. improve on visual inspection, our primary evaluation metric, by passing the encoded cover image through security software for pixel-level verification. This project can enable exploration in steganography and, more generally, in placing supplementary information in images. Several previous methods have attempted to use neural networks to either augment or replace a small portion of an image-hiding system. We demonstrate a method to create a fully trainable system that provides visually excellent results in unobtrusively placing multiple full-size color images into a carrier image. Extensions can be made towards a complete steganographic system that hides the existence of the message from statistical analyzers, which will likely necessitate a new training objective, and towards encoding smaller images within large cover images. 9." + }, + { + "url": "http://arxiv.org/abs/2012.14891v1", + "title": "Detecting Hate Speech in Multi-modal Memes", + "abstract": "In the past few years, there has been a surge of interest in multi-modal\nproblems, from image captioning to visual question answering and beyond. In\nthis paper, we focus on hate speech detection in multi-modal memes wherein\nmemes pose an interesting multi-modal fusion problem. We aim to solve the\nFacebook Meme Challenge \\cite{kiela2020hateful} which aims to solve a binary\nclassification problem of predicting whether a meme is hateful or not. A\ncrucial characteristic of the challenge is that it includes \"benign\nconfounders\" to counter the possibility of models exploiting unimodal priors.\nThe challenge states that the state-of-the-art models perform poorly compared\nto humans. During the analysis of the dataset, we realized that majority of the\ndata points which are originally hateful are turned into benign just by\ndescribing the image of the meme. Also, majority of the multi-modal baselines\ngive more preference to the hate speech (language modality). To tackle these\nproblems, we explore the visual modality using object detection and image\ncaptioning models to fetch the \"actual caption\" and then combine it with the\nmulti-modal representation to perform binary classification. This approach\ntackles the benign text confounders present in the dataset to improve the\nperformance. Another approach we experiment with is to improve the prediction\nwith sentiment analysis. Instead of only using multi-modal representations\nobtained from pre-trained neural networks, we also include the unimodal\nsentiment to enrich the features. We perform a detailed analysis of the above\ntwo approaches, providing compelling reasons in favor of the methodologies\nused.", + "authors": "Abhishek Das, Japsimar Singh Wahi, Siyao Li", + "published": "2020-12-29", + "updated": "2020-12-29", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "main_content": "Introduction In today's world, social media platforms play a major role in influencing people's everyday life.
Though having numerous benefits, they also have the capability of shaping public opinion and religious beliefs across the world, and can be used to attack people directly or indirectly based on race, caste, immigration status, religion, ethnicity, nationality, sex, gender identity, sexual orientation, and disability or disease. Hate speech on online social media can trigger social polarization and hate crimes. On large platforms such as Facebook and Twitter, it becomes practically impossible for a human to monitor the source and spread of such malicious activities; thus it is the responsibility of the machine learning and artificial intelligence research community to address and solve the problem of detecting hate speech efficiently. Figure 1. Multi-modal "mean" meme and benign confounders: mean meme (left), benign text confounder (middle), and benign image confounder (right). In tasks such as VQA and multi-modal machine translation, it has been observed that baseline models using the language domain perform well without even exploiting multi-modal understanding and reasoning (Devlin et al., 2015). However, the Facebook Hateful Memes Challenge dataset is designed in such a manner that unimodal models exploiting just the language or vision modalities separately will fail, and only models that can learn true multi-modal reasoning and understanding will be able to perform well. They achieve this by introducing "benign confounders" in the dataset: for every hateful meme, they find an alternative image or caption which, when substituted, is enough to make the meme harmless or non-hateful, thus flipping the label. Consider a sentence like "dishwasher for sale, missing parts". Unimodally, this sentence is harmless, but when combined with an equally harmless image of a girl without a hand, it suddenly becomes mean. See Figure 1 for an illustration. This challenge set is thus an excellent stage that aims to facilitate the development of robust multi-modal models, and at the same time addresses an important real-world problem of detecting hateful speech on online social media platforms. The majority of the prior baselines aim at solving this problem by finding an alignment between the two modalities, but they face the hardship of not knowing the context behind the combination of image and text. In this paper, we introduce two major ideas wherein we explore the two modalities using pre-trained image captioning models and sentiment analysis to understand the context and relationship between the two modalities. Many of the baselines tend to focus more on the text modality for hate speech. Also, during the data analysis, we realized that the majority of the hateful memes are converted into benign ones just by describing the image, i.e., benign text confounders. In our first approach, we try to balance the representations of the two modalities and tackle the benign text confounders by fetching a deeper understanding of the image via object detection and captioning. We then use this representation and fuse it with the multi-modal representation from the state-of-the-art models to improve performance. Through the error analysis, we also found that fine-tuning a model with pretrained multi-modal representations does not always provide desirable results.
This may be because those embeddings are pretrained to predict the semantic correlation between image and text, but such semantic information is difficult to capture and may be insufficient for solving this challenge. Thus, we try to include some high-level features, like text and image sentiments, to aid the prediction, because sentiment analysis is a related and relatively simple task. On the Facebook Hateful Memes Challenge dataset, we show that both our approaches benefit the prediction. In what follows, we discuss the related prior work in the next section (3), followed by defining the problem statement (4) and discussing our novel approaches in section (5). We then present our experimental setup in section (6), followed by its results and discussion in section (7). Finally, we end the discussion with conclusion and future directions in the last section (8). 3. Related Work Hate speech detection has gained more and more attention in recent years. Several text-only hate speech datasets have been released, mostly based on Twitter [(Waseem, 2016), (Waseem & Hovy, 2016), (Davidson et al., 2017)], and various architectures have been proposed for classifiers [(Kumar et al., 2018), (Malmasi & Zampieri, 2017)]. Also, in the past few years, there has been a surge in multi-modal tasks and problems, ranging from visual question answering [(Goyal et al., 2017)] to image captioning [(Sidorov et al., 2020), (Gurari et al., 2020)] and beyond. However, there has been surprisingly little work related to multi-modal hate speech, with only a few papers including both the image and text modalities. Some of the works on multi-modal hate detection based on image and text are as follows. Figure 2. Mean memes and their benign text confounders. Yang et al. [(Yang et al., 2019)] reported that augmenting text with image embedding information immediately boosts performance in hate speech detection. In this paper, the image embeddings are formed by using the second-to-last layer of a ResNet pre-trained on ImageNet and then hashing these values for efficient photo indexing, searching, and clustering. The most straightforward way of integrating text with photo features is to concatenate both image and text vectors; the concatenated vector is followed by dropout, MLP, and softmax operations for the final hate speech classification. They also explore other fusion techniques like gated summation and bilinear transformation. Gomez et al. (Gomez et al., 2020) highlighted the issue that most of the previous work on hate speech uses textual data only and that hate-speech detection on multi-modal publications had not been addressed yet. So, they created MMHS150k, a manually annotated multi-modal hate speech dataset formed of 150,000 tweets, each of them containing text and an image. The data points are labeled into one of six categories: no attacks on any community, racist, sexist, homophobic, religion-based attacks, or attacks on other communities. They trained an LSTM model which considered just the tweet text as a baseline for the task of detecting hate speech in multi-modal publications. Their further objective was to exploit the information in the visual domain to outperform this baseline. They did this by proposing two models.
The first one was the Feature Concatenation Model (FCM), an MLP that concatenates the image representation extracted by a CNN and the textual features of both the tweet text and the image text extracted by an LSTM. Their second model, named the Textual Kernels Model (TKM), was inspired by VQA tasks and was based on the intuition of looking for patterns in the image corresponding to the associated texts. This was done by learning kernels from textual representations and convolving them with CNN feature maps. Our first approach extends this idea of a deeper understanding of the visual domain. To our knowledge, this paper is the first to use pre-trained image captioning models to generate the "actual caption" from the image, along with the image embeddings, and to add these through fusion techniques like concatenation and bilinear transformation to the multi-modal embedding of the state-of-the-art baselines. Figure 3. Approach 1: model architecture for image captioning. Now, we describe some relevant work in image captioning. (Xu et al., 2015) introduced an encoder-decoder architecture which uses an attention mechanism to generate captions and is trainable by standard back-propagation methods. Most conventional approaches use a top-down mechanism for captioning tasks. A recent method (Anderson et al., 2018) combines bottom-up and top-down attention, utilizing a Faster R-CNN based object detector to extract k image features, V = {v1, ..., vk}, v_i ∈ R^D, which enables attention to be calculated at the level of objects. Each image feature here encodes a salient image region. The captioning model uses a soft top-down approach given the features and partial output sequences as context. It consists of a 2-layer LSTM: the first layer is the top-down attention LSTM, whose output is used to find the attention weights; the attended image features are then used by the second LSTM layer, which is characterized as a language model. They further use cross-entropy loss minimization. The quality of the captions generated is vastly improved using this combined technique. Their method is highly modular and allows using various architectures in the captioning stage for the features generated by object detection; one can also use different object detection mechanisms in place of Faster R-CNN, or even replace it with the spatial output of a CNN. Multi-modal sentiment analysis is a relatively new topic. However, extensive research (Soleymani et al., 2017) (Shenoy & Sardana, 2020) (Kumar & Vepa, 2020) (Ghosal et al., 2018) (Zadeh et al., 2018) (Majumder et al., 2018) has already been done in this field and has yielded fruitful results. Some works (Kumar & Vepa, 2020) (Ghosal et al., 2018) improve prediction accuracy by developing more sophisticated attention mechanisms to better capture the interaction between the two modalities, while others (Zadeh et al., 2018) (Majumder et al., 2018) introduce innovative fusion methods that utilize graph or hierarchical architectures. In addition, (Shenoy & Sardana, 2020) leverages sentiments to improve a multi-modal dialogue task. However, very little has been done to improve hateful-media detection with multi-modal sentiment information. We introduce a sentiment analysis approach as our second experiment, wherein we carry out unimodal sentiment analysis on both the text and visual domains to find the orientation of both modalities. 4. Proposed Approaches 4.1.
Problem Statement The objective of this challenge is to classify memes as hateful or benign while considering information from both the text and visual modalities. Denote the visual components of all memes by X1 = {I1, ..., Ii}, where i is the index of the meme; in our case, the visual component I is the meme itself. Let X2 = {T1, ..., Ti} denote the text extracted from the memes. If phrases are located in multiple regions of a single meme, the corresponding T includes all the text information by concatenation. Let Y = {y1, ..., yi} be the corresponding labels of all memes, where each y ∈ {0, 1}, with 0 meaning benign and 1 indicating a hateful meme. Figure 4. Approach 2: model architecture using sentiment analysis. Thus, our task can be formulated as a binary classification problem with X1 and X2 as input. The goal of our paper is to model P(Y | X1, X2), denoted by p_θ, so as to minimize the following cost function: J(θ) = Σ_i −(y_i log(p_θ) + (1 − y_i) log(1 − p_θ)) (1). 4.2. Image Captioning As discussed above, this paper tackles the benign text confounders present in the dataset, which convert an originally hateful meme into a benign one just by describing what is happening in the image. Figure 2 shows some of these adversarial samples. As shown in Figure 5, they account for 20% of the dataset, and thus our hypothesis is that if we can provide our model with this extra knowledge, it will combat these adversarial examples and provide a boost in accuracy. Using object detection and image captioning helps in learning this aspect of the dataset and understanding the behavior of the benign text confounders, and thus gives better performance than the baseline models. Comparing the "actual caption" with the "pre-extracted caption" of the meme helps in understanding whether the two are aligned. Also, most of the multi-modal baselines tend to focus more on the text modality for hate speech; the intuition behind this approach is to find a deeper relationship between the text and image modalities. As shown in Figure 3, we first pass the hateful memes dataset (both modalities, i.e., the pre-extracted captions and the meme images) into the VisualBERT model pre-trained on the COCO dataset. This fetches us the multi-modal representation of the two modalities, i.e., a 768-dimensional tensor (m1, m2, m3, ...). In parallel, we also pass the image into an image captioning model (Show, Attend, and Tell; Bottom-Up Top-Down), which produces a caption for the image present in the meme (X3 = {C1, ..., Ci} denotes the captions extracted from the images). We then pass this text caption through a pre-trained BERT model to get a textual representation as another 768-dimensional tensor (h1, h2, h3, ...). Then, we fuse the two tensors using fusion techniques like concatenation and bilinear transformation. The bilinear transformation is a filter to integrate the information of two vectors into one vector. Mathematically, we have bilinear(m', h', dim) = m'^T · M · h' + b, where dim is a hyper-parameter indicating the expected dimension of the output vector (768), M is a weight matrix of dimension (dim, |m'|, |h'|), and b is a bias vector of dimension dim. Again, we concatenate m, h, and bilinear(m', h', dim) for hate speech classification (a minimal sketch of this fusion is given below).
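As an illustration, here is a minimal PyTorch sketch of this fusion step (not the authors' code): nn.Bilinear implements the m'^T · M · h' + b transformation described above, the 768 dimensions match the VisualBERT and BERT representations, and the 2-layer classifier head is an assumption.

```python
import torch
import torch.nn as nn

m = torch.randn(8, 768)  # multi-modal representation from VisualBERT
h = torch.randn(8, 768)  # BERT encoding of the generated caption

# Bilinear filter: out = m^T . M . h + b. Note that M has shape
# (768, 768, 768), a very large weight, which is one reason this fusion
# is slow in practice.
bilinear = nn.Bilinear(768, 768, 768)

# Concatenate m, h and bilinear(m, h), then classify with a small MLP.
fused = torch.cat([m, h, bilinear(m, h)], dim=-1)      # shape (8, 2304)
classifier = nn.Sequential(nn.Linear(2304, 768), nn.ReLU(), nn.Linear(768, 2))
logits = classifier(fused)                             # hateful vs. benign
print(logits.shape)  # torch.Size([8, 2])
```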
Finally, we pass the output through a multi-layer perceptron to get a binary classification into hateful and non-hateful memes (0/1). We fine-tune the VisualBERT model and the BERT model on the Facebook hateful memes dataset and on the captions generated for its images. This new approach of combining image captioning with the multi-modal baselines helps in tackling the previously mentioned challenges and increases performance significantly. 4.3. Sentiment Analysis Another approach is to utilize the sentiment information of both modalities to generate richer representations for further prediction. We first obtain the multi-modal contextual representations em from the inputs T and I by using a pre-trained model; in our experiment, we use VisualBERT (Li et al., 2019). However, similar to some other pre-trained models, VisualBERT focuses more on the correlation between the input modalities, but the text and image in hateful memes are usually connected indirectly. Figure 5. Types of memes in the Facebook Hateful Memes Challenge dataset. Thus, unimodal sentiments, which are closely related to hate detection, can benefit the prediction. A RoBERTa (Liu et al., 2019) model is then used to obtain the text sentiment embeddings et from T, while a VGG (Simonyan & Zisserman, 2014) is applied for the visual sentiments ev from I. However, due to the limited annotated data, we are unable to fine-tune those two models on our dataset. Instead, the RoBERTa is trained on the Stanford Sentiment Treebank (Socher et al., 2013) and the visual sentiment model parameters are learned from the T4SA dataset (Vadicamo et al., 2017). Then, em, et, and ev are fused through concatenation and passed to multi-layer perceptrons to make the final prediction ŷ. The framework of the entire model is shown in Figure 4. 5. Experimental Setup 5.1. Dataset We have used the Facebook Memes Challenge dataset (Kiela et al., 2020), which comprises 10k memes. These memes are carefully designed for this task by annotators who are specially trained to employ hate speech as defined by Facebook. The features in this dataset are the meme images themselves and string representations of the text in the image. The dataset comprises five different types of memes, as shown in Figure 5: multi-modal hate, where benign confounders were found for both modalities; unimodal hate, where one or both modalities were already hateful on their own; benign image and benign text confounders; and finally random non-hateful examples. The training, validation, and test split is 85%, 5%, and 10% respectively, and the individual sets are fully balanced. Each meme in the training and validation sets is annotated as either 1 or 0, corresponding to the hateful and benign classes respectively. 5.2. Multi-modal Baselines For analysis, we select VisualBERT (Li et al., 2019), a baseline model pretrained on the COCO dataset with a multimodal objective. We fine-tune the model on our dataset following the same training guidelines as in the original challenge paper (Kiela et al., 2020) and then evaluate it on the validation set comprising 500 memes. Figure 6 shows the confusion matrix for this baseline, which gives an approximation of the errors it makes. 5.2.1. VISUALBERT In order to utilize VisualBERT, multiple region features f1, f2, ..., fn are first extracted from the input image I using Faster R-CNN (Ren et al., 2015).
Each region feature f is then converted to a visual embedding ev by the following equation: ev = f + es (2), where es stands for the segment embedding, which indicates whether the input is text or image. For the text input, the textual embedding et is obtained in a similar way: et = ft + es + ep (3), where ft is the token embedding for each token in the sentence and ep is the positional embedding indicating the relative position of each token. After concatenating ev and et, the embedding is sent into the pre-trained VisualBERT model for further processing. VisualBERT (Li et al., 2019) is a pre-trained model for learning joint contextualized representations of vision and language. It contains multiple transformer blocks on top of the visual and text embeddings. It is pre-trained on Microsoft COCO captions (Chen et al., 2015) with two objectives: masked language modelling and a sentence-image prediction task. The masked language modelling is very similar to the approach in BERT (Devlin et al., 2018), where some input text tokens are masked randomly and the model needs to predict the original tokens. The sentence-image prediction requires the model to decide whether the input text matches the image. The VisualBERT output of the first token is used as the multi-modal representation em, and an MLP is then used to make the final prediction. The model is fine-tuned for the current task using the following loss function: l(θ) = CrossEntropyLoss(W · em, y) (4), where em is a vector of size h, h is the hidden size of VisualBERT, W, which has a shape of 2 by h, is the learnable matrix of the MLP, and θ denotes the parameters of the entire model, including W. Figure 6. Confusion matrix for the baseline VisualBERT COCO model. 5.3. Methodology For both approaches, we use mmf (Singh et al., 2020), a modular framework from Facebook AI Research, to build the main neural architectures. We use mmf's version of VisualBERT to generate multi-modal representations; the model is pre-trained on the MS COCO dataset with a hidden dimension of 768. For the image captioning models in our first approach, we use two implementations: the first is an implementation of Show, Attend, and Tell by Xu et al. (Xu et al., 2015), and the second uses the Bottom-Up Top-Down approach of Anderson et al. (Anderson et al., 2018). We take the top 10000 words from the vocabulary and process the images via an Inception V3 model. The pre-trained BERT model used to encode the generated caption has a dimension of 768. These two results are then fused together and passed through an MLP classifier. In the second approach, we directly use the final logits of the sentiment analysis models and their sum as the sentiment embedding. The MLP classifier consists of 2 layers with 768 hidden units. 5.4. Evaluation Metrics We have evaluated the performance of our classifier using the following two metrics, as suggested in the challenge. 5.4.1. AREA UNDER THE RECEIVER OPERATING CHARACTERISTICS (AUCROC) The Receiver Operating Characteristic curve is a graph of the True Positive Rate (TPR) versus the False Positive Rate (FPR). It measures how well the binary classifier discriminates between the classes as its decision threshold is varied (Bradley, 1997). Figure 7. AUCROC/accuracy for different experiments. A perfect classifier will have an area under the curve of 1, where the top-left corner of the plot is the ideal point, with a TPR of 1 and an FPR of 0 (a minimal sketch of computing these metrics is given below).
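For concreteness, here is a minimal scikit-learn sketch (not the authors' evaluation code) showing how both metrics are computed from predicted hateful-class probabilities on a toy batch.

```python
from sklearn.metrics import roc_auc_score, accuracy_score

# Toy ground-truth labels (1 = hateful) and predicted hateful probabilities.
y_true = [0, 0, 1, 1, 0, 1]
p_hateful = [0.2, 0.4, 0.8, 0.6, 0.3, 0.9]

auc = roc_auc_score(y_true, p_hateful)              # threshold-free metric
y_pred = [int(p >= 0.5) for p in p_hateful]         # 0.5 decision threshold
acc = accuracy_score(y_true, y_pred)
print(f"AUCROC = {auc:.3f}, accuracy = {acc:.3f}")  # both 1.000 on this toy set
```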
Thus, a larger area under the curve is desirable for any classifier, to maximize the TPR and minimize the FPR. 5.4.2. CLASSIFICATION ACCURACY We also report the accuracy of the predictions, given by the ratio of correct predictions to the total number of predictions made, since it is easier to interpret. Thus, for each test sample, we output the label ∈ {0, 1} and the probability with which the classifier predicts the sample to be hateful; this probability is used to plot the AUCROC curve. 6. Results and Discussion 6.1. Image Captioning We use two frameworks for our experiments: the first is the MMF framework designed by the Facebook research lab that conducted this challenge, and the second consists of building all the models locally using simple baselines like Concat BERT. Initially, we tested the image captioning locally by fusing it with the Concat BERT baseline model; the baseline accuracy for this model turned out to be 57%. We then built an image captioning model based on Xu et al. (Xu et al., 2015) and passed the caption through a BERT model to get the textual representation. When we fused this textual representation with the Concat BERT results, the accuracy increased by 2%, verifying the importance of captioning in tackling the benign text confounders. We then shifted to the MMF framework to test this on stronger baseline models like VisualBERT. As can be seen in Figure 7, the image captioning approach gives a significant improvement in the AUCROC and accuracy on the test set: an increase of 3.6% in the AUCROC score and of 6.7% in the accuracy of the model. This shows that the image captioning model tackles these benign confounders, gives a better representation to the image modality, and thus improves the results. Figure 8. Mean meme (left), benign text confounder and the testing meme (middle), and object detection visualization before captioning (right). Figure 8 comprises three images: the first is the original hateful meme; the second is the one being tested, created by adding a benign text confounder that simply describes the image, thus making it a non-hateful meme with a label of '0'; and the third shows the visualization of the object detection bounding boxes on the test image. For the test image as input, the baseline VisualBERT predicts the label '1', misclassifying it as a hateful meme because it is not able to understand the benign text confounder. However, using our approach, it is correctly labeled as benign: our model captions the image similarly to the benign text confounder, so the model learns about their similarity and the benign behavior, helping the classifier classify this as a benign result. There are many such examples in the dataset which are correctly classified by our model, thus improving the accuracy and the AUCROC value. We also ran the bilinear transformation as the fusion technique, but it brought the performance down and ran very slowly on the dataset, so we decided to proceed with concatenation for the results. 6.2. Sentiment Analysis For the sentiment analysis approach, although the model doesn't improve the AUCROC value by a large margin, we still see a significant gain of 4% in accuracy (a minimal sketch of the sentiment fusion is given below).
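Here is a minimal PyTorch sketch (ours, not the mmf-based code) of the concatenation fusion used in the sentiment approach; the 3-way sentiment logits are an assumption for illustration, since the exact sentiment-head dimensions are not specified above.

```python
import torch
import torch.nn as nn

em = torch.randn(8, 768)  # multi-modal representation from VisualBERT
et = torch.randn(8, 3)    # text sentiment logits (e.g., neg/neutral/pos)
ev = torch.randn(8, 3)    # visual sentiment logits

fused = torch.cat([em, et, ev], dim=-1)  # concatenation fusion, shape (8, 774)
mlp = nn.Sequential(nn.Linear(774, 768), nn.ReLU(), nn.Linear(768, 2))
print(mlp(fused).shape)  # torch.Size([8, 2]) -> hateful vs. benign logits
```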
We directly compare our model's results against the VisualBERT baseline and observe two common cases where sentiment analysis benefits the prediction. Figure 9. Sample images from the dev set; the sentiment values under the images range from 0 to 1, with 1 as positive, and the green label denotes the ground truth. The first case is when the text and image have opposite sentiments, as shown in the first image of Figure 9: the baseline considers this meme benign, but our model can clearly indicate its irony and then guide the prediction. The other is when both modalities have a positive sentiment, as in the meme shown in the second image of Figure 9: sentiment information can help to confirm benign memes. However, since we do not have the annotated data to fine-tune the sentiment analysis models or perform multi-task learning, the accuracy of the sentiment prediction is limited. As shown in the third meme in Figure 9, the text does not seem very negative and the image seems neutral, but our model predicts both as negative. Also, in some complicated cases the sentiments are not very helpful: for example, when the sentiments of both modalities are negative, as in the last two visuals in the figure, our model does not work well, because the meme has a similar chance of being benign or hateful. 6.3. Combining Image Captioning and Sentiment Analysis We also performed an experiment wherein we concatenated both the image captioning results and the sentiment analysis features along with the VisualBERT multimodal representation and fine-tuned it on the dataset. Again, we saw a significant increase in the AUCROC and accuracy of the model in comparison to the baseline. We expected this to perform even better than the captioning results, as it has more diverse features to learn from, but the accuracy decreased in comparison to the image captioning results. Possible reasons for this behavior include conflicts between the concatenated representations, which could lower accuracy and AUCROC, or the presence of redundant features in the different representations, reducing performance. We also analyzed some data points related to this test. Figure 10. Mean meme (left), benign text confounder and the testing meme with positive text sentiment and positive visual sentiment (middle), and object detection visualization before captioning (right). As can be seen in Figure 10, the middle image, a benign confounder, is wrongly classified by the baseline model as hateful, but the combined approach learns the alignment of the generated caption and the pre-extracted caption along with the sentiment of both modalities (positive in this case) and gives a correct prediction of the non-hateful label. 7." + }, + { + "url": "http://arxiv.org/abs/2006.01016v1", + "title": "Probing Emergent Semantics in Predictive Agents via Question Answering", + "abstract": "Recent work has shown how predictive modeling can endow agents with rich\nknowledge of their surroundings, improving their ability to act in complex\nenvironments. We propose question-answering as a general paradigm to decode and\nunderstand the representations that such agents develop, applying our method to\ntwo recent approaches to predictive modeling -action-conditional CPC (Guo et\nal., 2018) and SimCore (Gregor et al., 2019).
After training agents with these\npredictive objectives in a visually-rich, 3D environment with an assortment of\nobjects, colors, shapes, and spatial configurations, we probe their internal\nstate representations with synthetic (English) questions, without\nbackpropagating gradients from the question-answering decoder into the agent.\nThe performance of different agents when probed this way reveals that they\nlearn to encode factual, and seemingly compositional, information about\nobjects, properties and spatial relations from their physical environment. Our\napproach is intuitive, i.e. humans can easily interpret responses of the model\nas opposed to inspecting continuous vectors, and model-agnostic, i.e.\napplicable to any modeling approach. By revealing the implicit knowledge of\nobjects, quantities, properties and relations acquired by agents as they learn,\nquestion-conditional agent probing can stimulate the design and development of\nstronger predictive learning objectives.", + "authors": "Abhishek Das, Federico Carnevale, Hamza Merzic, Laura Rimell, Rosalia Schneider, Josh Abramson, Alden Hung, Arun Ahuja, Stephen Clark, Gregory Wayne, Felix Hill", + "published": "2020-06-01", + "updated": "2020-06-01", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL", + "cs.LG" + ], + "main_content": "Introduction Since the time of Plato, philosophers have considered the apparent distinction between "knowing how" (procedural knowledge or skills) and "knowing what" (propositional knowledge or facts). It is uncontroversial that deep reinforcement learning (RL) agents can effectively acquire procedural knowledge as they learn to play games or solve tasks. Such knowledge might manifest in an ability to find all of the green apples in a room, or to climb all of the ladders while avoiding snakes. However, the capacity of such agents to acquire factual knowledge about their surroundings – of the sort that can be readily hard-coded in symbolic form in classical AI – is far from established. Thus, even if an agent successfully climbs ladders and avoids snakes, we have no certainty that it 'knows' that ladders are brown, that there are five snakes nearby, or that the agent is currently in the middle of a three-level tower with one ladder left to climb. The acquisition of knowledge about objects, properties, relations and quantities by learning-based agents is desirable for several reasons. First, such knowledge should ultimately complement procedural knowledge when forming plans that enable execution of complex, multi-stage cognitive tasks. Second, there seems (to philosophers at least) to be something fundamentally human about having knowledge of facts or propositions (Stich, 1979). If one of the goals of AI is to build machines that can engage with, and exhibit convincing intelligence to, human users (e.g. justifying their behaviour so humans understand/trust them), then a need for uncovering and measuring such knowledge in learning-based agents will inevitably arise. Here, we propose the question-conditional probing of agent internal states as a means to study and quantify the knowledge about objects, properties, relations and quantities encoded in the internal representations of neural-network-based agents.
Couching an analysis of such knowledge in terms of question-answering has several pragmatic advantages. First, question-answering provides a general-purpose method for agent analysis and an intuitive investigative tool for humans – one can simply ask an agent what it knows about its environment and get an answer back, without having to inspect internal activations. Second, the space of questions is essentially open-ended – we can pose arbitrarily complex questions to an agent, enabling a comprehensive analysis of the current state of its propositional knowledge. Question-answering has previously been studied in textual (Rajpurkar et al., 2016; 2018), visual (Malinowski & Fritz, 2014; Antol et al., 2015; Das et al., 2017) and embodied (Gordon et al., 2018; Das et al., 2018a) settings. Figure 1. We train predictive agents to explore a visually-rich 3D environment with an assortment of objects of different shapes, colors and sizes. As the agent navigates (trajectory shown in white on the top-down map), an auxiliary network learns to simulate representations of future observations (labeled 'Simulation Network') k steps into the future, self-supervised by a loss against the ground-truth egocentric observation at t+k. Simultaneously, another decoder network is trained to extract answers to a variety of questions about the environment, conditioned on the agent's internal state but without affecting it (notice 'stop gradient' – gradients from the QA decoder are not backpropagated into the agent). We use this question-answering paradigm to decode and understand the internal representations that such agents develop. Note that the top-down map is only shown for illustration and is not available to the agent. Crucially, however, these prior QA systems are trained end-to-end for the goal of answering questions. Here, we utilize question-answering simply to probe an agent's internal representation, without backpropagating gradients from the question-answering decoder into the agent. That is, we view question-answering as a general-purpose (conditional) decoder of environmental information designed to assist the development of agents by revealing the extent (and limits) of their knowledge. Many techniques have been proposed for endowing agents with general (i.e. task-agnostic) knowledge, based on both hard-coding and learning. Here, we specifically focus on the effect of self-supervised predictive modeling – a learning-based approach – on the acquisition of propositional knowledge. Inspired by learning in humans (Elman, 1990; Rao & Ballard, 1999; Clark, 2016; Hohwy, 2013), predictive modeling, i.e. predicting future sensory observations, has emerged as a powerful method to learn general-purpose neural network representations (Elias, 1955; Atal & Schroeder, 1970; Schmidhuber, 1991; Schaul & Ring, 2013; Schaul et al., 2015; Silver et al., 2017; Wayne et al., 2018; Guo et al., 2018; Gregor et al., 2019; Recanatesi et al., 2019). These representations can be learned while exploring in and interacting with an environment in a task-agnostic manner, and later exploited for goal-directed behavior. We evaluate predictive vs. non-predictive agents (both trained for exploration) on our question-answering testbed to investigate how much knowledge of object shapes, quantities, and spatial relations they acquire solely by egocentric prediction (a minimal sketch of this stop-gradient probing setup is given below).
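To make the probing setup concrete, here is a minimal PyTorch sketch (ours, not the authors' code) of its key detail: the QA decoder reads the agent's internal state through a stop-gradient (detach), so training the probe never shapes the agent's representation. The toy state, question encoder, and answer vocabulary are stand-ins.

```python
import torch
import torch.nn as nn

vocab, answers = 100, 20          # toy question-token and answer vocabularies
agent_state = torch.randn(8, 256, requires_grad=True)  # stand-in agent state

question = torch.randint(0, vocab, (8, 12))   # batch of 12-token questions
q_encoder = nn.Sequential(
    nn.Embedding(vocab, 64), nn.Flatten(), nn.Linear(12 * 64, 256))
qa_head = nn.Linear(512, answers)

# Stop gradient: detach() blocks gradients from flowing into the agent.
logits = qa_head(torch.cat([agent_state.detach(), q_encoder(question)], dim=-1))
loss = nn.functional.cross_entropy(logits, torch.randint(0, answers, (8,)))
loss.backward()
print(agent_state.grad)  # None: training the probe did not touch the agent
```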
The set includes a mix of questions that can plausibly be answered from a single observation or a few consecutive observations, and those that require the agent to integrate global knowledge of its entire surroundings. Concretely, we make the following contributions: \u2022 In a visually-rich 3D room environment developed in the Unity engine, we develop a set of questions designed to probe a diverse body of factual knowledge about the environment \u2013 from identifying shapes and colors (\u2018What shape is the red object?\u2019) to counting (\u2018How many blue objects are there?\u2019) to spatial relations (\u2018What is the color of the chair near the table?\u2019), exhaustive search (\u2018Is there a cushion?\u2019), and comparisons (\u2018Are there the same number of tables as chairs?\u2019). \u2022 We train RL agents augmented with predictive loss functions \u2013 1) action-conditional CPC (Guo et al., 2018) and 2) SimCore (Gregor et al., 2019) \u2013 for an exploration task and analyze the internal representations they develop by decoding answers to our suite of questions. Crucially, the QA decoder is trained independent of the predictive agent and we \ufb01nd that QA performance is indicative of the agent\u2019s ability to capture global environment structure and semantics solely through egocentric prediction. We compare these predictive agents to strong non-predictive LSTM baselines as well as to an agent that is explicitly optimized for the question-answering task. \u2022 We establish generality of the encoded knowledge by testing zero-shot generalization of a trained QA decoder to compositionally novel questions (unseen combinations of seen attributes), suggesting a degree of \fProbing Emergent Semantics in Predictive Agents via Question Answering compositionality in the internal representations captured by predictive agents. 2. Background and related work Our work builds on studies of predictive modeling and auxiliary objectives in reinforcement learning as well as grounded language learning and embodied question answering. Propositional knowledge is knowledge that a statement, expressed in natural or formal language, is true (Truncellito, 2007). Since at least Plato, epistemologist philosophers have contrasted propositional knowledge with procedural knowledge (knowledge of how to do something), and some (but not all) distinguish this from perceptual knowledge (knowledge obtained by the senses that cannot be translated into a proposition) (Dretske, 1995). An ability to exhibit this sort of knowledge in a convincing way is likely to be crucial for the long-term goal of having agents achieve satisfying interactions with humans, since an agent that cannot express its knowledge and beliefs in human-interpretable form may struggle to earn the trust of users. Predictive modeling and auxiliary loss functions in RL. The power of predictive modeling for representation learning has been known since at least the seminal work of (Elman, 1990) on emergent language structures. More recent examples include Word2Vec (Mikolov et al., 2013), Skip-Thought vectors (Kiros et al., 2015), and BERT (Devlin et al., 2019) in language, while in vision similar principles have been applied to context prediction (Doersch et al., 2015; Noroozi & Favaro, 2016), unsupervised tracking (Wang & Gupta, 2015), inpainting (Pathak et al., 2016) and colorization (Zhang et al., 2016). 
More related to us is the use of such techniques in designing auxiliary loss functions for training model-free RL agents, such as successor representations (Dayan, 1993; Zhu et al., 2017a), value and reward prediction (Jaderberg et al., 2016; Hermann et al., 2017; Wayne et al., 2018), contrastive predictive coding (CPC) (Oord et al., 2018; Guo et al., 2018), and SimCore (Gregor et al., 2019). Grounded language learning. Inspired by the work of (Winograd, 1972) on SHRDLU, several recent works have explored linguistic representation learning by grounding language into actions and pixels in physical environments \u2013 in 2D gridworlds (Andreas et al., 2017; Yu et al., 2018; Misra et al., 2017), 3D (Chaplot et al., 2018; Das et al., 2018a; Gordon et al., 2018; Cangea et al., 2019; Puig et al., 2018; Zhu et al., 2017a; Anderson et al., 2018; Gupta et al., 2017; Zhu et al., 2017b; Oh et al., 2017; Shu et al., 2018; Vogel & Jurafsky, 2010; Hill et al., 2020) and textual (Matuszek et al., 2013; Narasimhan et al., 2015) environments. Closest to our work is the task of Embodied Question Answering (Gordon et al., 2018; Das et al., 2018a;b; Yu et al., 2019; Wijmans et al., 2019) \u2013 where an embodied agent in an environment (e.g. a house) is asked to answer a question (e.g. \u201cWhat color is the piano?\u201d). Typical approaches to EmbodiedQA involve training agents to move for the goal of answering questions. In contrast, our focus is on learning a predictive model in a goal-agnostic exploration phase and using question-answering as a post-hoc testbed for evaluating the semantic knowledge that emerges in the agent\u2019s representations from predicting the future. Neural population decoding. Probing an agent with a QA decoder can be viewed as a variant of neural population decoding, used as an analysis tool in neuroscience (Georgopoulos et al., 1986; Bialek et al., 1991; Salinas & Abbott, 1994) and more recently in deep learning (Guo et al., 2018; Gregor et al., 2019; Azar et al., 2019; Alain & Bengio, 2016; Conneau et al., 2018; Tenney et al., 2019). The idea is to test whether speci\ufb01c information is encoded in a learned representation, by feeding the representation as input to a probe network, generally a classi\ufb01er trained to extract the desired information. In RL, this is done by training a probe to predict parts of the ground-truth state of the environment, such as an agent\u2019s position or orientation, without backpropagating through the agent\u2019s internal state. Prior work has required a separate network to be trained for each probe, even for closely related properties such as position vs. orientation (Guo et al., 2018) or grammatical features of different words in the same sentence (Conneau et al., 2018). Moreover, each probe is designed with property-speci\ufb01c inductive biases, such as convnets for topdown views vs. MLPs for position (Gregor et al., 2019). In contrast, we train a single, general-purpose probe network that covers a variety of question types, with an inductive bias for language processing. This generality is possible because of the external conditioning, in the form of the question, supplied to the probe. External conditioning moreover enables agent analysis using novel perturbations of the probe\u2019s training questions. Neuroscience. Predictive modeling is thought to be a fundamental component of human cognition (Elman, 1990; Hohwy, 2013; Seth, 2015). 
In particular, it has been proposed that perception, learning and decision-making rely on the minimization of prediction error (Rao & Ballard, 1999; Clark, 2016). A well-established strand of work has focused on decoding predictive representations in brain states (Nortmann et al., 2013; Huth et al., 2016). The question of how prediction of sensory experience relates to higher-order conceptual knowledge is complex and subject to debate (Williams, 2018; Roskies & Wood, 2017), though some have proposed that conceptual knowledge, planning, reasoning, and other higher-order functions emerge in deeper layers of a predictive network. We focus on the emergence of propositional knowledge in a predictive agent\u2019s internal representations.

Table 1. QA task templates. In every episode, objects and their configurations are randomly generated, and these templates get translated to QA pairs for all unambiguous combinations. There are 50 shapes and 10 colors in total. See A.4 for details.
Question type | Template | Level codename | # QA pairs
Attribute | What is the color of the <shape>? | color | 500
Attribute | What shape is the <color> object? | shape | 500
Count | How many <shape>s are there? | count shape | 200
Count | How many <color> objects are there? | count color | 40
Exist | Is there a <shape>? | existence shape | 100
Compare + Count | Are there the same number of <color1> objects as <color2> objects? | compare n color | 180
Compare + Count | Are there the same number of <shape1>s as <shape2>s? | compare n shape | 4900
Relation + Attribute | What is the color of the <shape1> near the <shape2>? | near color | 24500
Relation + Attribute | What is the <color> object near the <shape>? | near shape | 25000

3. Environment & Tasks Environment. We use a Unity-based visually-rich 3D environment (see Figure 1). It is a single L-shaped room that can be programmatically populated with an assortment of objects of different colors at different spatial locations and orientations. In total, we use a library of 50 different objects, referred to as \u2018shapes\u2019 henceforth (e.g. chair, teddy, glass, etc.), in 10 different colors (e.g. red, blue, green, etc.). For a complete list of environment details, see Sec. A.4. At every step, the agent gets a 96 \u00d7 72 first-person RGB image as its observation, and the action space consists of movements (move-{forward,back,left,right}), turns (turn-{up,down,left,right}), and object pickup and manipulation (4 DoF: yaw, pitch, roll, and movement along the axis between the agent and object). See Table 5 in the Appendix for the full set of actions. Question-Answering Tasks. We develop a range of question-answering tasks of varying complexity that test the agent\u2019s local and global scene understanding, visual reasoning, and memory skills. Inspired by (Johnson et al., 2017; Das et al., 2018a; Gordon et al., 2018), we programmatically generate a dataset of questions (see Table 1). These questions ask about the presence or absence of objects (existence shape), their attributes (color, shape), counts (count color, count shape), quantitative comparisons (compare count color, compare count shape), and elementary spatial relations (near color, near shape). Unlike the fully-observable setting in CLEVR (Johnson et al., 2017), the agent does not get a global view of the environment, and must answer these questions from a sequence of partial egocentric observations. Moreover, unlike prior work on EmbodiedQA (Gordon et al., 2018; Das et al., 2018a), the agent is not being trained end-to-end to move to answer questions.
It is being trained to explore, and answers are being decoded (without backpropagating gradients) from its internal representation. Thus, in order to answer these questions, the agent must learn to encode relevant aspects of the environment in a representation amenable to easy decoding into symbols (e.g. what does the word \u201cchair\u201d mean? or what representations does computing \u201chow many\u201d require?). 4. Approach Learning an exploration policy. Predictive modeling has proven to be effective for an agent to develop general knowledge of its environment as it explores and behaves towards its goal, typically maximising environment returns (Gregor et al., 2019; Guo et al., 2018). Since we wish to evaluate the effectiveness of predictive modeling independent of the agent\u2019s specific goal, we define a simple task that stimulates the agent to visit all of the \u2018important\u2019 places in the environment (i.e. to acquire an exploratory but otherwise task-neutral policy). This is achieved by giving the agent a reward of +1.0 every time it visits an object in the room for the first time. After visiting all objects, rewards are refreshed and available to be consumed by the agent again (i.e. re-visiting an object the agent has already been to will now again lead to a +1.0 reward), and this process continues for the duration of each episode (30 seconds or 900 steps). During training on this exploration task, the agent receives a first-person RGB observation xt at every timestep t, and processes it using a convolutional neural network to produce zt. This is input to an LSTM policy whose hidden state is ht and whose output is a discrete action at. The agent optimizes the discounted sum of future rewards using an importance-weighted actor-critic algorithm (Espeholt et al., 2018). Training the QA-decoder. The question-answering decoder is operationalized as an LSTM that is initialized with the agent\u2019s internal representation ht and receives the question as input at every timestep (see Fig. 2). The question is a string that we tokenise into words and then map to learned embeddings. The question decoder LSTM is then unrolled for a fixed number of computation steps after which it predicts a softmax distribution over the vocabulary of one-word answers to questions in Table 1, and is trained via a cross-entropy loss. Crucially, this QA decoder is trained independent of the agent policy; i.e. gradients from this decoder are not allowed to flow back into the agent. [Figure 2. Approach: at every timestep t, the agent receives an RGB observation xt as input, processes it using a convolutional neural network to produce zt, which is then processed by an LSTM to select action at. The agent learns to explore \u2013 it receives a reward of 1.0 for navigating to each new object. As it explores the environment, it builds up an internal representation ht, which receives pressure from an auxiliary predictive module to capture environment semantics so as to accurately predict consequences of its actions multiple steps into the future. We experiment with a vanilla LSTM agent and two recent predictive approaches \u2013 CPC|A (Guo et al., 2018) and SimCore (Gregor et al., 2019). The internal representations are then probed via a question-answering decoder whose gradients are not backpropagated into the agent. The QA decoder is an LSTM initialized with ht and receiving the question at every timestep.]
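To make the probing setup concrete, the following is a minimal PyTorch sketch of a question-conditional QA decoder probe. This is an illustrative reconstruction, not the authors' released code; the module names, dimensions, and vocabulary sizes are our own assumptions. The detach() call implements the stop-gradient: answers are decoded from the agent's internal state, but no gradient flows back into the agent.

    import torch
    import torch.nn as nn

    class QADecoder(nn.Module):
        # Probes a frozen agent state: the decoder is initialized with h_t and
        # receives the question at every step, but never updates the agent.
        def __init__(self, vocab_size, num_answers, embed_dim=32, state_dim=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)  # learned word embeddings
            self.lstm = nn.LSTM(embed_dim, state_dim, batch_first=True)
            self.answer_head = nn.Linear(state_dim, num_answers)

        def forward(self, agent_state, question_tokens):
            # agent_state: (batch, state_dim) internal representation h_t
            # question_tokens: (batch, seq_len) integer word ids
            h0 = agent_state.detach().unsqueeze(0)  # stop-gradient into the agent
            c0 = torch.zeros_like(h0)
            out, _ = self.lstm(self.embed(question_tokens), (h0, c0))
            return self.answer_head(out[:, -1])  # logits over one-word answers

    # Training step: cross-entropy updates the decoder only; the agent is untouched.
    decoder = QADecoder(vocab_size=100, num_answers=64)
    h_t = torch.randn(8, 128)  # stand-in for agent internal states
    question = torch.randint(0, 100, (8, 6))
    answer = torch.randint(0, 64, (8,))
    loss = nn.CrossEntropyLoss()(decoder(h_t, question), answer)
    loss.backward()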
We evaluate question-answering performance by measuring top1 accuracy at the end of the episode \u2013 we consider the agent\u2019s top predicted answer at the last time step of the episode and compare that with the ground-truth answer. The QA decoder can be seen as a general purpose decoder trained to extract object-speci\ufb01c knowledge from the agent\u2019s internal state without affecting the agent itself. If this knowledge is not retained in the agent\u2019s internal state, then this decoder will not be able to extract it. This is an important difference with respect to prior work (Gordon et al., 2018; Das et al., 2018a) \u2013 wherein agents were trained to move to answer questions, i.e. all parameters had access to linguistic information. Recall that the agent\u2019s navigation policy has been trained for exploration, and so the visual information required to answer a question need not be present in the observation at the end of the episode. Thus, through question-answering, we are evaluating the degree to which agents encode relevant aspects of the environment (object colors, shapes, counts, spatial relations) in their internal representations and maintain this information in memory beyond the point at which it was initially received. See A.1.3 for more details about the QA decoder. 4.1. Auxiliary Predictive Losses We augment the baseline architecture described above with an auxiliary predictive head consisting of a simulation network (operationalized as an LSTM) that is initialized with the agent\u2019s internal state ht and deterministically simulates future latent states s1 t, . . . , sk t , . . . in an open-loop manner, receiving the agent\u2019s action sequence as input. We evaluate two predictive losses \u2013 action-conditional CPC (Guo et al., 2018) and SimCore (Gregor et al., 2019). See Fig. 2 for overview, A.1.2 for details. Action-conditional CPC (CPC|A, (Guo et al., 2018)) makes use of a noise contrastive estimation model to discriminate between true observations processed by the convolutional neural network z+ t+k (k steps into the future) and negatives randomly sampled from the dataset z\u2212 t+k, in our case from other episodes in the minibatch. Speci\ufb01cally, at each timestep t + k (up to a maximum), the output of the simulation core sk t and z+ t+k are fed to an MLP to predict 1, and sk t and z\u2212 t+k are used to predict 0. SimCore (Gregor et al., 2019) uses the simulated state sk t to condition a generative model based on ConvDRAW (Gregor et al., 2016) and GECO (Rezende & Viola, 2018) that predicts the distribution of true observations p(xt+k|ht, at,...,(t+k)) in pixel space. Baselines. We evaluate and compare the above approaches with 1) a vanilla RL agent without any auxiliary predictive losses (referred to as \u2018LSTM\u2019), and 2) a question-only agent that receives zero-masked observations as input and is useful to measure biases in our question-answering testbed. Such a baseline is critical, particularly when working with simulated environments, as it can uncover biases in the environment\u2019s generation of tasks that can result in strong but uninteresting performance from agents capable of powerful function approximation (Thomason et al., 2019). \fProbing Emergent Semantics in Predictive Agents via Question Answering Figure 3. L \u2013 Reward in an episode. R \u2013 Top-1 QA accuracy. Averaged over 3 seeds. Shaded region is 1 SD. No stop gradient. 
We also compare against an agent without blocking the QA decoder gradients (labeled \u2018No SG\u2019). This model differs from the above in that it is trained endto-end \u2013 with supervision \u2013 to answer the set of questions in addition to the exploration task. Hence, it represents an agent receiving privileged information about how to answer and its performance provides an upper bound for how challenging these question-answering tasks are in this context. 5. Experiments & Results 5.1. Question-Answering Performance We begin by analyzing performance on a single question \u2013 shape \u2013 which are of the form \u201cwhat shape is the object?\u201d. Figure 3 shows the average reward accumulated by the agent in one episode (left) and the QA accuracy at the last timestep of the episode (right) for all approaches over the course of training. We make the following observations: \u2022 All agents learn to explore. With the exception \u2018question-only\u2019, all agents achieve high reward on the exploration task. This means that they visited all objects in the room more than once each and therefore, in principle, have been exposed to suf\ufb01cient information to answer all questions. \u2022 Predictive models aid navigation. Agents equipped with auxiliary predictive losses \u2013 CPC|A and SimCore \u2013 collect the most rewards, suggesting that predictive modeling helps navigate the environment ef\ufb01ciently. This is consistent with \ufb01ndings in (Gregor et al., 2019). \u2022 QA decoding from LSTM and CPC|A representations is no better than chance. \u2022 SimCore\u2019s representations lead to best QA accuracy. SimCore gets to a QA accuracy of \u223c72% indicating that its representations best capture propositional knowledge and are best suited for decoding answers to questions. Figure 4 (Left) shows example predictions. \u2022 Wide gap between SimCore and No SG. There is a \u223c24% gap between SimCore and the No SG oracle, suggesting scope for better auxiliary predictive losses. It is worth emphasizing that answering this shape question from observations is not a challenging task in and of itself. The No SG agent, which is trained end-to-end to optimize both for exploration and QA, achieves almost-perfect accuracy (\u223c96%). The challenge arises from the fact that we are not training the agent end-to-end \u2013 from pixels to navigation to QA \u2013 but decoding the answer from the agent\u2019s internal state, which is learned agnostic to the question. The answer can only be decoded if the agent\u2019s internal state contains relevant information represented in an easily-decodable way. Decoder complexity. To explore the possibility that answer-relevant information is present in the agent\u2019s internal state but requires a more powerful decoder, we experiment with QA decoders of a range of depths. As detailed in Figure 7 in the appendix, we \ufb01nd that using a deeper QA decoder with SimCore does lead to higher QA accuracy (from 1 \u219212 layers), although greater decoder depths become detrimental after 12 layers. Crucially, however, in the nonpredictive LSTM agent, the correct answer cannot be decoded irrespective of QA decoder capacity. This highlights an important aspect of our question-answering evaluation paradigm \u2013 that while the absolute accuracy at answering questions may also depend on decoder capacity, relative differences provide an informative comparison between internal representations developed by different agents. 
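For concreteness, here is a minimal sketch of the action-conditional CPC (CPC|A) objective described in Section 4.1 \u2013 a binary noise-contrastive loss that discriminates true future frame embeddings from negatives. Drawing negatives by shuffling the batch, and all tensor shapes, are our own simplifying assumptions.

    import torch
    import torch.nn as nn

    def cpc_a_loss(sim_states, true_z, discriminator):
        # sim_states: (batch, k, d) simulated states s_t^k from the action-conditional LSTM
        # true_z:     (batch, k, d) CNN embeddings z_{t+k} of the actual future observations
        neg_z = true_z[torch.randperm(true_z.size(0))]  # negatives: other episodes in the minibatch
        pos = discriminator(torch.cat([sim_states, true_z], dim=-1))  # should predict 1
        neg = discriminator(torch.cat([sim_states, neg_z], dim=-1))   # should predict 0
        bce = nn.functional.binary_cross_entropy_with_logits
        return bce(pos, torch.ones_like(pos)) + bce(neg, torch.zeros_like(neg))

    # Example with a toy MLP discriminator over (simulated state, embedding) pairs.
    discriminator = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 1))
    loss = cpc_a_loss(torch.randn(16, 8, 128), torch.randn(16, 8, 128), discriminator)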
Table 2 shows QA accuracy for all QA tasks (see Figure 8 in appendix for training curves). The results reveal large variability in difficulty across question types. Questions about attributes (color and shape), which can be answered from a single well-chosen frame of visual experience, are the easiest, followed by spatial relationship questions (near color and near shape), and the hardest are counting questions (count color and count shape).

Table 2. Top-1 accuracy on question-answering tasks.
Model | Overall | shape | color | exist | count shape | count color | compare n color | compare n shape | near shape | near color
Baseline: Question-only | 29 \u00b1 3 | 04 \u00b1 2 | 10 \u00b1 2 | 63 \u00b1 4 | 24 \u00b1 3 | 24 \u00b1 3 | 49 \u00b1 3 | 70 \u00b1 3 | 04 \u00b1 2 | 09 \u00b1 3
LSTM | 31 \u00b1 4 | 04 \u00b1 1 | 10 \u00b1 2 | 54 \u00b1 6 | 34 \u00b1 3 | 38 \u00b1 3 | 53 \u00b1 3 | 70 \u00b1 3 | 04 \u00b1 2 | 09 \u00b1 3
CPC|A | 32 \u00b1 3 | 06 \u00b1 2 | 08 \u00b1 2 | 64 \u00b1 3 | 39 \u00b1 3 | 39 \u00b1 3 | 50 \u00b1 4 | 70 \u00b1 3 | 06 \u00b1 2 | 10 \u00b1 3
SimCore | 60 \u00b1 3 | 72 \u00b1 3 | 81 \u00b1 3 | 72 \u00b1 3 | 39 \u00b1 3 | 57 \u00b1 3 | 56 \u00b1 3 | 73 \u00b1 3 | 30 \u00b1 3 | 59 \u00b1 3
Oracle: No SG | 63 \u00b1 3 | 96 \u00b1 2 | 81 \u00b1 2 | 60 \u00b1 3 | 45 \u00b1 3 | 57 \u00b1 3 | 51 \u00b1 3 | 76 \u00b1 3 | 41 \u00b1 3 | 72 \u00b1 3

We further note that:
\u2022 All agents perform better than the question-only baseline, which captures any biases in the environment or question distributions (enabling strategies such as constant prediction of the most-common answer).
\u2022 CPC|A representations are not better than LSTM on most question types.
\u2022 SimCore representations achieve higher QA accuracy than other approaches, substantially above the question-only baseline on count color (57% vs. 24%), near shape (30% vs. 4%) and near color (59% vs. 9%), demonstrating a strong tendency for encoding and retaining information about object identities, properties, and both spatial and temporal relations.
Finally, as before, the No SG agent trained to answer questions without stopped gradients achieves highest accuracy for most questions, although not all \u2013 perhaps due to tradeoffs between simultaneously optimizing performance for different QA losses and the exploration task. [Figure 4. (Left): Sample trajectory (1 \u21924) and QA decoding predictions (for top 5 most probable answers) for the \u2018What shape is the green object?\u2019 question from SimCore. Note that top-down map is not available to the agent. (Right): QA accuracy on disjoint train and test splits.] 5.2. Compositional Generalization While there is a high degree of procedural randomization in our environment and QA tasks, overparameterized neural-network-based models in limited environments are always prone to overfitting or rote memorization. We therefore constructed a test of the generality of the information encoded in the internal state of an agent. The test involves a variant of the shape question type (i.e. questions like \u201cwhat shape is the <color> object?\u201d), but in which the possible question-answer pairs are partitioned into mutually exclusive training and test splits. Specifically, the test questions are constrained such that they are compositionally novel \u2013 the combination involved in the question-answer pair is never observed during training, but both attributes are observed in other contexts.
For instance, a test question-answer pair \u201cQ: what shape is the blue object?, A: table\u201d is excluded from the training set of the QA decoder, but \u201cQ: what shape is the blue object?, A: car\u201d and \u201cQ: What shape is the green object?, A: table\u201d are part of the training set (but not the test set). We evaluate the SimCore agent on this test of generalization (since other agents perform poorly on the original task). Figure 4 (right) shows that the QA decoder applied to SimCore\u2019s internal states performs at substantially above-chance (and all baselines) on the held-out test questions (although somewhat lower than training performance). This indicates that the QA decoder extracts and applies information in a comparatively factorized (or compositional) manner, and suggests (circumstantially) that the knowledge acquired by the SimCore agent may also be represented in this way. 5.3. Robustness of the results To check if our results are robust to the choice of environment, we developed a similar setup using the DeepMind Lab environment (Beattie et al., 2016) and ran the same experiments without any change in hyperparameters. The environment consists of a rectangular room that is populated with a random selection of objects of different shapes and colors in each episode. There are 6 distinct objects in each room, selected from a pool of 20 objects and 9 different colors. We use a similar exploration reward structure as in our earlier environment to train the agents to navigate and observe all objects. Finally, in each episode, we introduce a question of the form \u2018What is the color of the ?\u2019 where is replaced by the name of an object present in the room. \fProbing Emergent Semantics in Predictive Agents via Question Answering Figure 5. (Left) DeepMind Lab environment (Beattie et al., 2016): Rectangular-shaped room with 6 randomly selected objects out of a pool of 20 different objects of different colors. (Right) QA accuracy for color questions (What is the color of the ?) in DeepMind Lab. Consistent with results in the main paper, internal representations of the SimCore agent lead to the highest accuracy while CPC|A and LSTM perform worse and similar to each other. Figure 5 shows question-answering accuracies in the DeepMind Lab environment. Consistent with the results presented above, internal representations of the SimCore agent lead to the highest answering accuracy while CPC|A and the vanilla LSTM agent perform worse and similar to each other. Crucially, for running experiments in DeepMind Lab, we did not change any hyperparameters from the experimental setup described before. This demonstrates that our approach is not speci\ufb01c to a single environment and that it can be readily applied in a variety of settings. 6. Discussion Developing agents with world models of their environments is an important problem in AI. To do so, we need tools to evaluate and diagnose the internal representations forming these world models in addition to studying task performance. Here, we marry together population or glass-box decoding techniques with a question-answering paradigm to discover how much propositional (or declarative) knowledge agents acquire as they explore their environment. We started by developing a range of question-answering tasks in a visually-rich 3D environment, serving as a diagnostic test of an agent\u2019s scene understanding, visual reasoning, and memory skills. 
Next, we trained agents to optimize an exploration objective with and without auxiliary self-supervised predictive losses, and evaluated the representations they form as they explore an environment, via this question-answering testbed. We compared model-free RL agents alongside agents that make egocentric visual predictions and found that the latter (in particular SimCore (Gregor et al., 2019)) are able to reliably capture detailed propositional knowledge in their internal states, which can be decoded as answers to questions, while non-predictive agents do not, even if they optimize the exploration objective well. Interestingly, not all predictive agents are equally good at acquiring knowledge of objects, relations and quantities. We compared a model learning the probability distribution of future frames in pixel space via a generative model (SimCore (Gregor et al., 2019)) with a model based on discriminating frames through contrastive estimation (CPC|A (Guo et al., 2018)). We found that while both learned to navigate well, only the former developed representations that could be used for answering questions about the environment. (Gregor et al., 2019) previously showed that the choice of predictive model has a significant impact on the ability to decode an agent\u2019s position and top-down map reconstructions of the environment from its internal representations. Our experiments extend this result to decoding factual knowledge, and demonstrate that the question-answering approach has utility for comparing agents. Finally, the fact that we can even decode answers to questions from an agent\u2019s internal representations learned solely from egocentric future predictions, without exposing the agent itself directly to knowledge in propositional form, is encouraging. It indicates that the agent is learning to form and maintain invariant object identities and properties (modulo limitations in decoder capacity) in its internal state without explicit supervision. It is \u223c30 years since (Elman, 1990) showed how syntactic structures and semantic organization can emerge in the units of a neural network as a consequence of the simple objective of predicting the next word in a sequence. This work corroborates Elman\u2019s findings, showing that language-relevant general knowledge can emerge in a situated neural-network agent that predicts future low-level visual observations via a sufficiently powerful generative mechanism. The result also aligns with perspectives that emphasize the importance of interplay between sensory modalities in supporting the development of conceptual or linguistic knowledge (McClelland et al., 2019). Our study is a small example of how language can be used as a channel to probe and understand what exactly agents can learn from their environments. We hope it motivates future research in evaluating predictive agents using natural linguistic interactions." }, { "url": "http://arxiv.org/abs/1901.05531v1", "title": "Response to \"Visual Dialogue without Vision or Dialogue\" (Massiceti et al., 2018)", "abstract": "In a recent workshop paper, Massiceti et al. presented a baseline model and\nsubsequent critique of Visual Dialog (Das et al., CVPR 2017) that raises what\nwe believe to be unfounded concerns about the dataset and evaluation.
This\narticle intends to rebut the critique and clarify potential confusions for\npractitioners and future participants in the Visual Dialog challenge.", + "authors": "Abhishek Das, Devi Parikh, Dhruv Batra", + "published": "2019-01-16", + "updated": "2019-01-16", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.CL", + "cs.LG" + ], + "main_content": "Introduction . Figure 1: Visual Dialog task: given an image, dialog history, and follow-up question, predict the answer. Task. The goal of Visual Dialog is to develop conversation agents that can talk about images. Towards this end, in previous work [3], we proposed a task \u2013 given an image, dialog history, and follow-up question, predict a free-form natural language answer to the question (Fig. 1) \u2013 and a large-scale dataset1, evaluation metrics and server2, and baseline models3 for this task. Key challenge. A fundamental challenge in dialog systems is automatic evaluation of long free-form answers since existing metrics such as BLEU, METEOR, and ROUGE are known to correlate poorly with human judgement [6]. Thus, as proposed in our initial paper [3], to evaluate Visual Dialog, models are provided a list of 100 candidate answers for each question \u2013 consisting of the ground-truth answer from the dataset mixed with nearest neighbors, popular, and random answers \u2013 and evaluated on how well they rank the ground-truth answer on retrieval metrics such as mean reciprocal rank (MRR), recall (R@1, 5, 10), and mean rank. 1visualdialog.org/data 2evalai.cloudcv.org/web/challenges/challenge-page/103/overview 3github.com/batra-mlp-lab/visdial arXiv:1901.05531v1 [cs.CV] 16 Jan 2019 \fAs we describe in our paper [3], these candidate answers for each question are programmatically curated from other answers in the dataset and not human-generated, and so, some candidate answers may be semantically identical (e.g. \u2018yeah\u2019 and \u2018yes\u2019). Thus, more recently, we conducted new human studies \u2013 asking four human subjects to annotate whether each of the 100 candidate answers is correct or not for all questions in the VisDial test split. For evaluation, we report the normalized discounted cumulative gain (NDCG) over the top K ranked options, where K is the number of answers marked as correct by at least one annotator. For this computation, we consider the relevance of an answer to be the fraction of annotators that marked it as correct. This was the primary evaluation criterion for the 1st Visual Dialog Challenge4. As described in [3], there are two broad families of dialog models (unfortunately with names that are overloaded in machine learning) \u2013 \u2018generative\u2019 models (that produce a response wordby-word given some context and are evaluated on the ranking of the likelihood scores they assign to candidate answers), and \u2018discriminative\u2019 models (that simply learn to rank a list of candidate answers and cannot produce a new response). This retrieval-based evaluation holds for both families. Compatibility of the evaluation metric with generative models is crucial, since they are more useful for real-world applications where answer options are not available. 2 Concern 1: Suitability of NDCG evaluation . Massiceti et al. [7] note that \u2018the VisDial dataset was recently updated to version 1.0, where the curators try to ameliorate some of the issues with the single-\u201cground-truth\" answer approach. 
They incorporate a human-agreement scores for candidate answers, and introduce a modi\ufb01ed evaluation which weighs the predicted rankings by these scores. However, in making this change, the primary evaluation for this data has now become an explicit classi\ufb01cation task on the candidate answers \u2013 requiring access, at train time, to all 100 candidates for every question-image pair. For the stated goals of Visual Dialog, this change can be construed as unsuitable as it falls into the category of rede\ufb01ning the problem to match a potentially unsuitable evaluation measure \u2013 how can one get better ranks in the candidate-answer-ranking task.\u2019 The claim that \u201cthe primary evaluation for this data has now become an explicit classi\ufb01cation task on the candidate answers\u201d is incorrect and thus the conclusion drawn from it is inaccurate and confusing. First, the task has not changed, only the evaluation metric (from MRR to NDCG). The task did not and does not \u201crequire access, at train time, to all 100 candidates\u201d. Discriminative models use 100 candidate answers at train time; generative models do not. This was discussed in our initial paper [3] and continues to be true. Perhaps what the authors [7] are trying to say and express concern for is \u2013 this metric (NDCG) will favor one kind of model family over another. This is possible and something we have given a lot of thought to. Empirical \ufb01ndings from the 1st Visual Dialog Challenge5 indicate that these generative models perform comparably (or even better sometimes) than discriminative models on the NDCG metric \u2013 for example, 53.67 vs. 49.58 on VisDial v1.0 test-std for Memory Network + Attention with generative vs. discriminative decoding respectively. Code and models available here: https://github.com/batra-mlp-lab/visdial#pretrained-models-1. While this is still a potentially weak surrogate for human-in-the-loop evaluation of Visual Dialog models, it is encouraging that there now seems to be an automatic evaluation criterion on which generative models, which do not have access to candidate answers during training, outperform discriminative models. As we describe on visualdialog.org, the reason why we chose a single track for the challenge was that in practice, the distinction between the two model families can get blurry (e.g., non-parametric models that internally maintain a large list of answer options), and the separation would be dif\ufb01cult to enforce. Note that our choice of ranking for evaluation isn\u2019t an endorsement of either approach (generative or discriminative). 4visualdialog.org/challenge/2018#evaluation 5visualdialog.org/challenge/2018#winners 2 \f3 Concern 2: Comparison to proposed CCA baseline [7] . Massiceti et al. [7] proposed a simple CCA baseline with two variants \u2013 1) question-only (ignoring image and dialog history), 2) question + image (ignoring dialog history), which they show outperforms state-of-the-art models on the mean rank metric. They further note that \u2018an important takeaway from our analyses is that it is highly effective to begin exploration with the simplest possible tools one has at one\u2019s disposal. 
This is particularly apposite in the era of deep neural networks, where the prevailing attitude appears to be that it is preferable to start exploration with complicated methods that aren\u2019t well understood, as opposed to older, perhaps even less fashionable methods that have the benefit of being rigorously understood.\u2019 We agree that simple and strong baselines are important, and are pleasantly surprised to see that a CCA baseline performs so well on mean rank. However, there are a few problems with this analysis. First, the baseline proposed by Massiceti et al. [7] is not close to state-of-the-art \u2013 the authors cherry-pick the mean rank metric and ignore trends on all other metrics (see Tab. 1). Second, it ignores that a similar finding has already been presented in the original Visual Dialog paper [3], that question-only and question + image models perform close to but slightly worse than full Q+I+H models. We recreate Tab. 1 from [3]. Third, the authors [7] ignore that the CCA baselines perform worse than not just state-of-the-art models, but also these Q and Q+I ablations [3], and comparable to answer prior and nearest neighbor (NN) baselines [3] on MRR and R@k. Finally, the results presented in [7] are not directly comparable. The proposed CCA baselines use Resnet-34 [5] features and FastText [1] embeddings, while the baselines in [3] use VGG-16 [8] and learn word embeddings from scratch respectively.

Table 1: Performance of methods on VisDial v0.9 and v1.0, measured by normalized discounted cumulative gain (NDCG), mean reciprocal rank (MRR), recall@k and mean rank. Higher is better for NDCG, MRR, and recall@k, while lower is better for mean rank. Here \u2013 denotes an entry that is not reported.
v0.9 val:
Model | NDCG | MRR | R@1 | R@5 | R@10 | Mean Rank
Answer prior | \u2013 | 0.3735 | 23.55 | 48.52 | 53.23 | 26.50
NN-Q | \u2013 | 0.4570 | 35.93 | 54.07 | 60.26 | 18.93
NN-QI | \u2013 | 0.4274 | 33.13 | 50.83 | 58.69 | 19.62
LF-Q-G | \u2013 | 0.5048 | 39.78 | 60.58 | 66.33 | 17.89
LF-QI-G | \u2013 | 0.5204 | 42.04 | 61.65 | 67.66 | 16.84
LF-QIH-G | \u2013 | 0.5199 | 41.83 | 61.78 | 67.59 | 17.07
HRE-QIH-G | \u2013 | 0.5237 | 42.29 | 62.18 | 67.92 | 17.07
HREA-QIH-G | \u2013 | 0.5242 | 42.28 | 62.33 | 68.17 | 16.79
MN-QIH-G | \u2013 | 0.5259 | 42.29 | 62.85 | 68.88 | 17.06
A-Q (Massiceti et al. [7]) | \u2013 | 0.3031 | 16.77 | 44.86 | 58.06 | 16.21
A-QI (Massiceti et al. [7]) | \u2013 | 0.2427 | 12.17 | 35.38 | 50.57 | 18.29
v1.0 test-std:
Model | NDCG | MRR | R@1 | R@5 | R@10 | Mean Rank
LF-QIH-G | 0.5121 | 0.4568 | 35.08 | 55.92 | 64.02 | 18.81
HRE-QIH-G | 0.5245 | 0.4561 | 34.78 | 56.18 | 63.72 | 18.78
MN-QIH-G | 0.5280 | 0.4580 | 35.05 | 56.35 | 63.92 | 19.31
A-Q (Massiceti et al. [7]) | \u2013 | 0.2832 | 15.95 | 40.10 | 55.10 | 17.08
A-QI (Massiceti et al. [7]) | \u2013 | 0.2393 | 12.73 | 33.05 | 48.68 | 19.24" }, { "url": "http://arxiv.org/abs/1810.11187v2", "title": "TarMAC: Targeted Multi-Agent Communication", "abstract": "We propose a targeted communication architecture for multi-agent\nreinforcement learning, where agents learn both what messages to send and whom\nto address them to while performing cooperative tasks in partially-observable\nenvironments. This targeting behavior is learnt solely from downstream\ntask-specific reward without any communication supervision.
We additionally\naugment this with a multi-round communication approach where agents coordinate\nvia multiple rounds of communication before taking actions in the environment.\nWe evaluate our approach on a diverse set of cooperative multi-agent tasks, of\nvarying difficulties, with varying number of agents, in a variety of\nenvironments ranging from 2D grid layouts of shapes and simulated traffic\njunctions to 3D indoor environments, and demonstrate the benefits of targeted\nand multi-round communication. Moreover, we show that the targeted\ncommunication strategies learned by agents are interpretable and intuitive.\nFinally, we show that our architecture can be easily extended to mixed and\ncompetitive environments, leading to improved performance and sample complexity\nover recent state-of-the-art approaches.", + "authors": "Abhishek Das, Th\u00e9ophile Gervet, Joshua Romoff, Dhruv Batra, Devi Parikh, Michael Rabbat, Joelle Pineau", + "published": "2018-10-26", + "updated": "2020-02-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.MA", + "stat.ML" + ], + "main_content": "Introduction Effective communication is a key ability for collaboration. Indeed, intelligent agents (humans or arti\ufb01cial) in realworld scenarios can signi\ufb01cantly bene\ufb01t from exchanging information that enables them to coordinate, strategize, and utilize their combined sensory experiences to act in the physical world. The ability to communicate has wide-ranging applications for arti\ufb01cial agents \u2013 from multi-player gameplay in simulated (e.g. DoTA, StarCraft) or physical worlds (e.g. robot soccer), to self-driving car networks communicating with each other to achieve safe 1Georgia Tech 2McGill University 3Facebook AI Research. \u2039Work done during an internship at Facebook AI Research. Correspondence to: Abhishek Das . Proceedings of the 36 th International Conference on Machine Learning, Long Beach, California, PMLR 97, 2019. Copyright 2019 by the author(s). and swift transport, to teams of robots on search-and-rescue missions deployed in hostile, fast-evolving environments. A salient property of human communication is the ability to hold targeted interactions. Rather than the \u2018one-size\ufb01ts-all\u2019 approach of broadcasting messages to all participating agents, as has been previously explored (Sukhbaatar et al., 2016; Foerster et al., 2016; Singh et al., 2019), it can be useful to direct certain messages to speci\ufb01c recipients. This enables a more \ufb02exible collaboration strategy in complex environments. For example, within a team of searchand-rescue robots with a diverse set of roles and goals, a message for a \ufb01re-\ufb01ghter (e.g. \u201csmoke is coming from the kitchen\u201d) is largely meaningless for a bomb-defuser. We develop TarMAC, a Targeted Multi-Agent Communication architecture for collaborative multi-agent deep reinforcement learning. Our key insight in TarMAC is to allow each individual agent to actively select which other agents to address messages to. This targeted communication behavior is operationalized via a simple signature-based soft attention mechanism: along with the message, the sender broadcasts a key which encodes properties of agents the message is intended for, and is used by receivers to gauge the relevance of the message. This communication mechanism is learned implicitly, without any attention supervision, as a result of end-to-end training using task reward. 
The inductive bias provided by soft attention in the communication architecture is sufficient to enable agents to 1) communicate agent-goal-specific messages (e.g. guide firefighter towards fire, bomb-defuser towards bomb, etc.), 2) be adaptive to variable team sizes (e.g. the size of the local neighborhood a self-driving car can communicate with changes as it moves), and 3) be interpretable through predicted attention probabilities that allow for inspection of which agent is communicating what message and to whom. Our results however show that just using targeted communication is not enough. Complex real-world tasks might require large populations of agents to go through multiple rounds of collaborative communication and reasoning, involving large amounts of information to be persistent in memory and exchanged via high-bandwidth communication channels. To this end, our actor-critic framework combines centralized training with decentralized execution (Lowe et al., 2017), thus enabling scaling to large team sizes. In this context, our inter-agent communication architecture also supports multiple rounds of targeted interactions at every time-step, wherein the agents\u2019 recurrent policies persist relevant information in internal states.

Table 1: Comparison with previous work on collaborative multi-agent communication with continuous vectors.
Method | Decentralized Execution | Targeted Communication | Multi-Round Decisions | Reinforcement Learning
DIAL (Foerster et al., 2016) | Yes | No | No | Yes (Q-Learning)
CommNet (Sukhbaatar et al., 2016) | Yes | No | Yes | Yes (REINFORCE)
VAIN (Hoshen, 2017) | No | Yes | Yes | No (Supervised)
ATOC (Jiang & Lu, 2018) | Yes | No | No | Yes (Actor-Critic)
IC3Net (Singh et al., 2019) | Yes | No | Yes | Yes (REINFORCE)
TarMAC (this paper) | Yes | Yes | Yes | Yes (Actor-Critic)

While natural language, i.e. a finite set of discrete tokens with pre-specified human-conventionalized meanings, may seem like an intuitive protocol for inter-agent communication \u2013 one that enables human-interpretability of interactions \u2013 forcing machines to communicate among themselves in discrete tokens presents additional training challenges. Since our work focuses on machine-only multi-agent teams, we allow agents to communicate via continuous vectors (rather than discrete symbols), as has been explored in (Sukhbaatar et al., 2016; Singh et al., 2019), and agents have the flexibility to discover and optimize their communication protocol as per task requirements. We provide extensive empirical evaluation of our approach across a range of tasks, environments, and team sizes.
\u2022 We begin by benchmarking TarMAC and its ablation without attention on a cooperative navigation task derived from the SHAPES environment (Andreas et al., 2016) in Section 5.1. We show that agents learn intuitive attention behavior across task difficulties.
\u2022 Next, we evaluate TarMAC on the traffic junction environment (Sukhbaatar et al., 2016) in Section 5.2, and show that agents are able to adaptively focus on \u2018active\u2019 agents in the case of varying team sizes.
\u2022 We then demonstrate its efficacy in 3D environments with a cooperative first-person point-goal navigation task in House3D (Wu et al., 2018) (Section 5.3).
\u2022 Finally, in Section 5.4, we show that TarMAC can be easily combined with IC3Net (Singh et al., 2019), thus extending its applicability to mixed and competitive environments, and leading to signi\ufb01cant improvements in performance and sample complexity. 2. Related Work Multi-agent systems fall at the intersection of game theory, distributed systems, and Arti\ufb01cial Intelligence in general (Shoham & Leyton-Brown, 2008), and thus have a rich and diverse literature. Our work builds on and is related to prior work in deep multi-agent reinforcement learning, the centralized training and decentralized execution paradigm, and emergent communication protocols. Multi-Agent Reinforcement Learning (MARL). Within MARL (see Busoniu et al. (2008) for a survey), our work is related to efforts on using recurrent neural networks to approximate agent policies (Hausknecht & Stone, 2015), stabilizing algorithms for multi-agent training (Lowe et al., 2017; Foerster et al., 2018), and tasks in novel domains e.g. coordination and navigation in 3D environments (Peng et al., 2017; OpenAI, 2018; Jaderberg et al., 2018). Centralized Training & Decentralized Execution. Both Sukhbaatar et al. (2016) and Hoshen (2017) adopt a centralized framework at both training and test time \u2013 a central controller processes local observations from all agents and outputs a probability distribution over joint actions. In this setting, the controller (e.g. a fully-connected network) can be viewed as implicitly encoding communication. Sukhbaatar et al. (2016) propose an ef\ufb01cient controller architecture that is invariant to agent permutations by virtue of weight-sharing and averaging (as in Zaheer et al. (2017)), and can, in principle, also be used in a decentralized manner at test time since each agent just needs its local state vector and the average of incoming messages to take an action. Meanwhile, Hoshen (2017) proposes to replace averaging by an attentional mechanism to allow targeted interactions between agents. While closely related to our communication architecture, this work only considers fully-supervised one-next-step prediction tasks, while we study the full reinforcement learning problem with tasks requiring planning over long time horizons. Moreover, a centralized controller quickly becomes intractable in real-world tasks with many agents and high-dimensional observation spaces e.g. navigation in House3D (Wu et al., 2018). To address these weaknesses, we adopt the framework of centralized learning but decentralized execution (following Foerster et al. (2016); Lowe et al. (2017)) and further relax it by allowing agents to communicate. While agents can use extra information during training, at test time, they pick actions solely based on local observations and communication messages. \fTarMAC: Targeted Multi-Agent Communication Emergent Communication Protocols. Our work is also related to recent work on learning communication protocols in a completely end-to-end manner with reinforcement learning \u2013 from perceptual input (e.g. pixels) to communication symbols (discrete or continuous) to actions (e.g. navigating in an environment). 
While (Foerster et al., 2016; Jorge et al., 2016; Das et al., 2017; Kottur et al., 2017; Mordatch & Abbeel, 2017; Lazaridou et al., 2017) constrain agents to communicate with discrete symbols with the explicit goal to study emergence of language, our work operates in the paradigm of learning a continuous communication protocol in order to solve a downstream task (Sukhbaatar et al., 2016; Hoshen, 2017; Jiang & Lu, 2018; Singh et al., 2019). Jiang & Lu (2018); Singh et al. (2019) also operate in a decentralized execution setting and use an attentional communication mechanism, but in contrast to our work, they use attention to decide when to communicate, not who to communicate with. In Section 5.4, we discuss how to potentially combine the two approaches. Table 1 summarizes the main axes of comparison between our work and previous efforts in this exciting space. 3. Technical Background Decentralized Partially Observable Markov Decision Processes (Dec-POMDPs). A Dec-POMDP is a multi-agent extension of a partially observable Markov decision process (Oliehoek, 2012). For N agents, it is defined by a set of states $S$ describing possible configurations of all agents, a global reward function $R$, a transition probability function $T$, and for each agent $i \in \{1, ..., N\}$ a set of allowed actions $A_i$, a set of possible observations $\Omega_i$ and an observation function $O_i$. At each time step every agent picks an action $a_i$ based on its local observation $\omega_i$ following its own stochastic policy $\pi_{\theta_i}(a_i | \omega_i)$. The system randomly transitions to the next state $s'$ given the current state and joint action, $T(s' | s, a_1, ..., a_N)$. The agent team receives a global reward $r = R(s, a_1, ..., a_N)$ while each agent receives a local observation of the new state $O_i(\omega_i | s')$. Agents aim to maximize the total expected return $J = \sum_{t=0}^{T} \gamma^t r_t$ where $\gamma$ is a discount factor and $T$ is the episode time horizon. Actor-Critic Algorithms. Policy gradient methods directly adjust the parameters $\theta$ of the policy in order to maximize the objective $J(\theta) = \mathbb{E}_{s \sim p^{\pi}, a \sim \pi_\theta(s)}[R(s, a)]$ by taking steps in the direction of $\nabla J(\theta)$. We can write the gradient with respect to the policy parameters as the following: $$\nabla_\theta J(\theta) = \mathbb{E}_{s \sim p^{\pi}, a \sim \pi_\theta(s)}\left[\nabla_\theta \log \pi_\theta(a | s)\, Q^{\pi}(s, a)\right],$$ where $Q^{\pi}(s, a)$ is the action-value. It is the expected remaining discounted reward if we take action $a$ in state $s$ and follow policy $\pi$ thereafter. Actor-Critic algorithms learn an approximation $\hat{Q}(s, a)$ of the unknown true action-value function by e.g. temporal-difference learning (Sutton & Barto, 1998). This $\hat{Q}(s, a)$ is the Critic and $\pi_\theta$ is the Actor. Multi-Agent Actor-Critic. Lowe et al. (2017) propose a multi-agent Actor-Critic algorithm adapted to centralized learning and decentralized execution wherein each agent learns its own policy $\pi_{\theta_i}(a_i | \omega_i)$ conditioned on local observation $\omega_i$ using a central Critic that estimates the joint action-value $\hat{Q}(s, a_1, ..., a_N)$ conditioned on all actions. 4. TarMAC: Targeted Multi-Agent Communication We now describe our multi-agent communication architecture in detail. Recall that we have N agents with policies $\{\pi_1, ..., \pi_N\}$, respectively parameterized by $\{\theta_1, ..., \theta_N\}$, jointly performing a cooperative task.
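As an illustrative rendering of the centralized-critic update above (a schematic sketch under assumed network sizes and a naive action encoding, not the paper's exact implementation):

    import torch
    import torch.nn as nn

    n_agents, state_dim, batch = 4, 128, 16

    # Centralized critic: during training it sees hidden states and actions of all agents.
    critic = nn.Sequential(
        nn.Linear(n_agents * (state_dim + 1), 128), nn.ReLU(), nn.Linear(128, 1))

    h = torch.randn(batch, n_agents, state_dim)       # per-agent internal states h_i^t
    actions = torch.randint(0, 5, (batch, n_agents))  # sampled discrete actions a_i^t
    log_probs = torch.randn(batch, n_agents)          # stand-in for log pi_theta_i(a_i^t | h_i^t)

    q = critic(torch.cat([h.flatten(1), actions.float()], dim=1))  # joint action-value Q_hat_t

    # Actor update: score-function gradient with the critic treated as a constant.
    actor_loss = -(log_probs.sum(dim=1, keepdim=True) * q.detach()).mean()
    # Critic update (not shown): regress q toward a TD target r_t + gamma * Q_hat_{t+1}.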
At every timestep $t$, the $i$-th agent for all $i \in \{1, ..., N\}$ sees a local observation $\omega_i^t$, and must select a discrete environment action $a_i^t \sim \pi_{\theta_i}$ and send a continuous communication message $m_i^t$, received by other agents at the next timestep, in order to maximize global reward $r_t \sim R$. Since no agent has access to the underlying complete state of the environment $s_t$, there is incentive in communicating with each other and being mutually helpful to do better as a team. Policies and Decentralized Execution. Each agent is essentially modeled as a Dec-POMDP augmented with communication. Each agent\u2019s policy $\pi_{\theta_i}$ is implemented as a 1-layer Gated Recurrent Unit (Cho et al., 2014). At every timestep, the local observation $\omega_i^t$ and a vector $c_i^t$ aggregating messages sent by all agents at the previous timestep (described in more detail below) are used to update the hidden state $h_i^t$ of the GRU, which encodes the entire message-action-observation history up to time $t$. From this internal state representation, the agent\u2019s policy $\pi_{\theta_i}(a_i^t | h_i^t)$ predicts a categorical distribution over the space of actions, and another output head produces an outgoing message vector $m_i^t$. Note that for our experiments, agents are symmetric and policy parameters are shared across agents, i.e. $\theta_1 = ... = \theta_N$. This considerably speeds up learning. Centralized Critic. Following prior work (Lowe et al., 2017; Foerster et al., 2018), we operate under the centralized learning and decentralized execution paradigm wherein during training, a centralized Critic guides the optimization of individual agent policies. The Critic takes as input predicted actions $\{a_1^t, ..., a_N^t\}$ and internal state representations $\{h_1^t, ..., h_N^t\}$ from all agents to estimate the joint action-value $\hat{Q}_t$ at every timestep. The centralized Critic is learned by temporal difference (Sutton & Barto, 1998) and the gradient of the expected return $J(\theta_i) = \mathbb{E}[R]$ with respect to policy parameters is approximated by: $$\nabla_{\theta_i} J(\theta_i) = \mathbb{E}\left[\nabla_{\theta_i} \log \pi_{\theta_i}(a_i^t | h_i^t)\, \hat{Q}_t(h_1^t, ..., h_N^t, a_1^t, ..., a_N^t)\right].$$ Note that compared to an individual Critic $\hat{Q}_i(h_i^t, a_i^t)$ per agent, having a centralized Critic leads to considerably lower variance in policy gradient estimates since it takes into account actions from all agents. At test time, the Critic is not needed and policy execution is fully decentralized. [Figure 1: Overview of our multi-agent architecture with targeted communication. Left: At every timestep, each agent policy gets a local observation $\omega_i^t$ and aggregated message $c_i^t$ as input, and predicts an environment action $a_i^t$ and a targeted communication message $m_i^t$. Right: Targeted communication between agents is implemented as a signature-based soft attention mechanism. Each agent broadcasts a message $m_i^t$ consisting of a signature $k_i^t$, which can be used to encode agent-specific information, and a value $v_i^t$, which contains the actual message. At the next timestep, each receiving agent gets as input a convex combination of message values, where the attention weights are obtained by a dot product between the sender\u2019s signature $k_i^t$ and a query vector $q_j^{t+1}$ predicted from the receiver\u2019s hidden state.] Targeted, Multi-Round Communication. Establishing complex collaboration strategies requires targeted communication i.e.
Centralized Critic. Following prior work (Lowe et al., 2017; Foerster et al., 2018), we operate under the centralized learning and decentralized execution paradigm, wherein during training a centralized Critic guides the optimization of individual agent policies. The Critic takes as input predicted actions $\{a_1^t, \dots, a_N^t\}$ and internal state representations $\{h_1^t, \dots, h_N^t\}$ from all agents to estimate the joint action-value $\hat{Q}_t$ at every timestep. The centralized Critic is learned by temporal-difference learning (Sutton & Barto, 1998), and the gradient of the expected return $J(\theta_i) = \mathbb{E}[R]$ with respect to policy parameters is approximated by
$$\nabla_{\theta_i} J(\theta_i) = \mathbb{E}\left[\nabla_{\theta_i} \log \pi_{\theta_i}(a_i^t \mid h_i^t)\, \hat{Q}_t(h_1^t, \dots, h_N^t, a_1^t, \dots, a_N^t)\right].$$
Note that compared to an individual Critic $\hat{Q}_i(h_i^t, a_i^t)$ per agent, having a centralized Critic leads to considerably lower variance in policy gradient estimates, since it takes into account actions from all agents. At test time, the Critic is not needed and policy execution is fully decentralized.

[Figure 1: Overview of our multi-agent architecture with targeted communication. Left: At every timestep, each agent policy gets a local observation $\omega_i^t$ and aggregated message $c_i^t$ as input, and predicts an environment action $a_i^t$ and a targeted communication message $m_i^t$. Right: Targeted communication between agents is implemented as a signature-based soft attention mechanism. Each agent broadcasts a message $m_i^t$ consisting of a signature $k_i^t$, which can be used to encode agent-specific information, and a value $v_i^t$, which contains the actual message. At the next timestep, each receiving agent gets as input a convex combination of message values, where the attention weights are obtained by a dot product between the sender's signature $k_i^t$ and a query vector $q_j^{t+1}$ predicted from the receiver's hidden state.]

Targeted, Multi-Round Communication. Establishing complex collaboration strategies requires targeted communication, i.e. the ability to address specific messages to specific agents, as well as multi-round communication, i.e. multiple rounds of back-and-forth interactions between agents. We use a signature-based soft-attention mechanism in our communication structure to enable targeting. Each message $m_i^t$ consists of two parts: a signature $k_i^t \in \mathbb{R}^{d_k}$ to encode properties of intended recipients, and a value $v_i^t \in \mathbb{R}^{d_v}$:
$$m_i^t = [\,\underbrace{k_i^t}_{\text{signature}}\;\;\underbrace{v_i^t}_{\text{value}}\,]. \quad (1)$$
At the receiving end, each agent (indexed by $j$) predicts a query vector $q_j^{t+1} \in \mathbb{R}^{d_k}$ from its hidden state $h_j^{t+1}$, which is used to compute a dot product with the signatures of all $N$ messages. This is scaled by $1/\sqrt{d_k}$, followed by a softmax to obtain attention weights $\alpha_{ji}$ for each incoming message:
$$\alpha_j = \mathrm{softmax}\left[\frac{(q_j^{t+1})^\top k_1^t}{\sqrt{d_k}}, \;\dots,\; \frac{(q_j^{t+1})^\top k_i^t}{\sqrt{d_k}}, \;\dots,\; \frac{(q_j^{t+1})^\top k_N^t}{\sqrt{d_k}}\right], \quad (2)$$
which are used to compute $c_j^{t+1}$, the input message for agent $j$ at $t+1$:
$$c_j^{t+1} = \sum_{i=1}^{N} \alpha_{ji}\, v_i^t. \quad (3)$$
Intuitively, attention weights are high when both sender and receiver predict similar signature and query vectors, respectively. Note that Equation 2 also includes $\alpha_{ii}$, corresponding to the ability to self-attend (Vaswani et al., 2017), which we empirically found to improve performance, especially in situations when an agent has found the goal in a coordinated navigation task and all it is required to do is stay at the goal, so others benefit from attending to this agent's message but return communication is not necessary.

Note that the targeting mechanism in our formulation is implicit, i.e. agents implicitly encode properties without addressing specific recipients. For example, in a self-driving car network, a particular message may be for "cars travelling on the west to east road" (implicitly encoding properties) as opposed to specifically for "car 2" (explicit addressing).

For multi-round communication, the aggregated message vector $c_j^{t+1}$ and internal state $h_j^t$ are first used to predict the next internal state $h_j'^{\,t}$, taking into account the first round:
$$h_j'^{\,t} = \tanh\left(W_{h \to h'}\,[\,c_j^{t+1} \,\|\, h_j^t\,]\right). \quad (4)$$
Next, the updated hidden state $h_j'^{\,t}$ is used to predict the signature, query, and value, followed by repeating Equations 1-4 for multiple rounds, until we get a final aggregated message vector $c_j^{t+1}$ to be used as input at the next timestep. The number of rounds of communication is treated as a hyperparameter. Our entire communication architecture is differentiable, and message vectors are learnt through backpropagation.
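A minimal sketch of Equations (1)-(3) for a single timestep (our illustration; tensors are stacked over the $N$ agents):

```python
import math
import torch

def aggregate_messages(queries, signatures, values):
    """queries: (N, d_k), signatures: (N, d_k), values: (N, d_v).
    Returns the aggregated input messages c^{t+1}: (N, d_v)."""
    d_k = signatures.shape[-1]
    # (N, N) matrix of scaled dot products q_j . k_i / sqrt(d_k); row j holds
    # receiver j's scores over all senders i, including itself (self-attention).
    scores = queries @ signatures.t() / math.sqrt(d_k)
    alpha = torch.softmax(scores, dim=-1)  # attention weights alpha_ji, Eq. (2)
    return alpha @ values                  # c_j = sum_i alpha_ji * v_i, Eq. (3)
```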
[Figure 2: Visualizations of learned targeted communication in SHAPES. Figure best viewed in color. 4 agents have to find [red, red, green, blue] respectively. $t=1$: initial spawn locations; $t=2$: 4 was on red at $t=1$, so 1 and 2 attend to messages from 4 since they have to find red; 3 has found its goal (green) and is self-attending; $t=6$: 4 attends to messages from 2 as 2 is on 4's target, blue; $t=8$: 1 finds red, so 1 and 2 shift attention to 1; $t=21$: all agents are at their respective goal locations and primarily self-attending.]

5. Experiments

We evaluate TarMAC on a variety of tasks and environments. All our models were trained with a batched synchronous version of the multi-agent Actor-Critic described above, using RMSProp with a learning rate of $7 \times 10^{-4}$ and $\alpha = 0.99$, batch size 16, discount factor $\gamma = 0.99$, and an entropy regularization coefficient of 0.01 for agent policies. All agent policies are instantiated from the same set of shared parameters, i.e. $\theta_1 = \dots = \theta_N$. Each agent's GRU hidden state is 128-d, the message signature/query is 16-d, and the message value is 32-d (unless specified otherwise). All results are averaged over 5 independent seeds (unless noted otherwise), and error bars show standard error of means.

5.1. SHAPES

The SHAPES dataset was introduced by Andreas et al. (2016)¹, and was originally created for testing compositional visual reasoning for the task of visual question answering. It consists of synthetic images of 2D colored shapes arranged in a grid (3×3 cells in the original dataset) along with corresponding question-answer pairs. There are 3 shapes (circle, square, triangle), 3 colors (red, green, blue), and 2 sizes (small, big) in total (see Figure 2).

¹github.com/jacobandreas/nmn2/tree/shapes

We convert each image from SHAPES into an active environment where agents can now be spawned at different regions of the image, observe a 5×5 local patch around them and their coordinates, and take actions to move around: {up, down, left, right, stay}. Each agent is tasked with navigating to a specified goal state in the environment ('red', 'blue square', 'small green circle', etc.) within a maximum number of steps, and the reward for each agent at every timestep is based on team performance, i.e. $r_t = \frac{\#\text{ agents on goal}}{\#\text{ agents}}$. Having a symmetric, team-based reward incentivizes agents to cooperate in finding each agent's goal.
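For concreteness, this shared reward amounts to the following trivial computation (a sketch of our own, not the authors' code):

```python
def shapes_team_reward(agent_positions, goal_cells):
    """Fraction of agents currently on their goal cell, shared identically
    by every agent at every timestep."""
    on_goal = sum(pos == goal for pos, goal in zip(agent_positions, goal_cells))
    return on_goal / len(agent_positions)
```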
How does targeting work? Recall that each agent predicts a signature and value vector as part of the message it sends, and a query vector to attend to incoming messages. The communication is targeted because the attention probabilities are a function of both the sender's signature and the receiver's query vectors, so it is not just the receiver deciding how much of each message to listen to; the sender also sends out signatures that affect how much of each message is sent to each receiver. The sender's signature could encode the parts of its observation most relevant to other agents' goals (e.g. it would be futile to convey coordinates in the signature), and the message value could contain the agent's own location. For example, in Figure 2, at $t=6$, we see that when agent 2 passes by blue, agent 4 starts attending to agent 2. Here, agent 2's signature likely encodes the color it observes (which is blue), and agent 4's query encodes its goal (which is also blue), leading to a high attention probability. Agent 2's message value encodes the coordinates agent 4 has to navigate to, which it ends up reaching by $t=21$.

SHAPES serves as a flexible testbed for carefully controlling and analyzing the effect of changing the size of the environment, the number of agents, goal configurations, etc. Figure 2 visualizes learned protocols, and Table 2 reports a quantitative evaluation for three different configurations: 1) 4 agents, all tasked with finding red in 30×30 images, 2) 4 agents, all tasked with finding red in 50×50 images, and 3) 4 agents, tasked with finding [red, red, green, blue] respectively in 50×50 images. We compare TarMAC against two baselines: 1) without communication, and 2) with communication but where broadcasted messages are averaged instead of attention-weighted, so all agents receive the same message vector, similar to Sukhbaatar et al. (2016). The benefits of communication and attention increase with task complexity (30×30 → 50×50 and find[red] → find[red, red, green, blue]).

Table 2: Success rates on 3 different settings of cooperative navigation in the SHAPES environment.

|                  | 30×30, 4 agents, find[red] | 50×50, 4 agents, find[red] | 50×50, 4 agents, find[red, red, green, blue] |
|------------------|----------------------------|----------------------------|----------------------------------------------|
| No communication | 95.3±2.8%                  | 83.6±3.3%                  | 69.1±4.6%                                    |
| No attention     | 99.7±0.8%                  | 89.5±1.4%                  | 82.4±2.1%                                    |
| TarMAC           | 99.8±0.9%                  | 89.5±1.7%                  | 85.8±2.5%                                    |

5.2. Traffic Junction

Environment and Task. The simulated traffic junction environments from Sukhbaatar et al. (2016) consist of cars moving along pre-assigned, potentially intersecting routes on one or more road junctions. The total number of cars is fixed at $N_{max}$, and at every timestep new cars get added to the environment with probability $p_{arrive}$. Once a car completes its route, it becomes available to be sampled and added back to the environment with a different route assignment. Each car has limited visibility of a 3×3 region around it, but is free to communicate with all other cars. The action space for each car at every timestep is {gas, brake}, and the reward consists of a linear time penalty $-0.01\tau$, where $\tau$ is the number of timesteps since the car has been active, and a collision penalty $r_{collision} = -10$.
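A minimal sketch of this per-car reward (our own illustration):

```python
def traffic_junction_reward(timesteps_active, in_collision):
    """Linear time penalty -0.01 * tau, plus a -10 penalty on collision."""
    return -0.01 * timesteps_active + (-10.0 if in_collision else 0.0)
```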
Table 3: Success rates on traffic junction. Our targeted 2-round communication architecture gets a success rate of 97.1±1.6% on the 'hard' variant, significantly outperforming Sukhbaatar et al. (2016). Note that 1-round and 2-round refer to the number of rounds of communication between actions (Equation 4).

|                                   | Easy      | Hard      |
|-----------------------------------|-----------|-----------|
| No communication                  | 84.9±4.3% | 74.1±3.9% |
| CommNet (Sukhbaatar et al., 2016) | 99.7±0.1% | 78.9±3.4% |
| TarMAC 1-round                    | 99.9±0.1% | 84.6±3.2% |
| TarMAC 2-round                    | 99.9±0.1% | 97.1±1.6% |

Quantitative Results. We compare our approach with CommNet (Sukhbaatar et al., 2016) on the easy and hard difficulties of the traffic junction environment. The easy task has one junction of two one-way roads on a 7×7 grid with $N_{max}=5$ and $p_{arrive}=0.30$, while the hard task has four connected junctions of two-way roads on an 18×18 grid with $N_{max}=20$ and $p_{arrive}=0.05$. See Figures 4a and 4b for an example of the four two-way junctions in the hard task. As shown in Table 3, a no-communication baseline has success rates of 84.9±4.3% and 74.1±3.9% on easy and hard, respectively. On easy, both CommNet and TarMAC get close to 100%. On hard, TarMAC with 1 round significantly outperforms CommNet with a success rate of 84.6±3.2%, while 2 rounds further improves on this to 97.1±1.6%, an ~18% absolute improvement over CommNet. We did not see gains going beyond 2 rounds in this environment.

Message size vs. multi-round communication. We study the performance of TarMAC with varying message value size and number of rounds of communication on the hard variant of the traffic junction task. As can be seen in Figure 3, multiple rounds of communication lead to significantly higher performance than simply increasing message size, demonstrating the advantage of multi-round communication. In fact, decreasing the message size to a single scalar performs almost as well as 64-d, perhaps because even a single real number can be sufficiently partitioned to cover the space of meanings/messages that need to be conveyed.

[Figure 3: Success rates for 1-round vs. 2-round communication vs. message size on hard. Performance does not decrease significantly even when the message vector is a single scalar, and 2-round communication before taking an action leads to significant improvements over 1-round.]

Model Interpretation. Interpreting the learned policies of TarMAC, Figure 4a shows braking probabilities at different locations: cars tend to brake close to or right before entering traffic junctions, which is reasonable since junctions have the highest chances of collisions. Turning to the attention probabilities (Figure 4b), we can see that cars are most attended to when in the 'internal grid', right after crossing the 1st junction and before hitting the 2nd junction. These attention probabilities are intuitive: cars learn to attend to specific sensitive locations with the most relevant local observations to avoid collisions. Finally, Figure 4c compares the total number of cars in the environment vs. the number of cars being attended to with probability > 0.1 at any time. Interestingly, these are (loosely) positively correlated, with a Spearman's rank correlation of 0.49, which shows that TarMAC is able to adapt to a variable number of agents. Crucially, agents learn this dynamic targeting behavior purely from task rewards with no hand-coding! Note that the right shift between the two curves is expected, as it takes a few timesteps of communication for team size changes to propagate. At a relative time shift of 3, the Spearman's rank correlation between the two curves goes up to 0.53.

[Figure 4: Interpretation of model predictions from TarMAC in the traffic junction environment. (a) Brake probabilities at different locations on the hard traffic junction environment: cars tend to brake close to or right before entering junctions. (b) Attention probabilities at different locations: cars are most attended to in the 'internal grid', right after the 1st junction and before the 2nd. (c) The number of cars being attended to 1) is positively correlated with the total number of cars, indicating that TarMAC is adaptive to dynamic team sizes, and 2) is slightly right-shifted, since it takes a few steps of communication to adapt.]
5.3. House3D

Next, we benchmark TarMAC on a cooperative point-goal navigation task in House3D (Wu et al., 2018). House3D provides a rich and diverse set of publicly-available² 3D indoor environments, wherein agents do not have access to the top-down map and must navigate purely from first-person vision. Similar to SHAPES, the agents are tasked with finding a specified goal (such as 'fireplace') within a maximum number of steps, spawned at random locations in the environment and allowed to communicate and move around. Each agent gets a shaped reward based on progress towards the specified target. An episode is successful if all agents end within 0.5 m of the target object in 500 navigation steps.

²github.com/facebookresearch/house3d

Table 4 shows success rates on a find[fireplace] task in House3D. A no-communication navigation policy trained with the same reward structure gets a success rate of 62.1±5.3%. Mean-pooled communication (no attention) performs slightly better with a success rate of 64.3±2.3%, and TarMAC achieves the best success rate at 68.9±1.1%. TarMAC agents take 82.5 steps to reach the target on average, vs. 101.3 for no attention and 186.5 for no communication. Figure 5 visualizes a predicted navigation trajectory of 4 agents. Note that the communication vectors are significantly more compact (32-d) than the high-dimensional observation space (224×224 images), making our approach particularly attractive for scaling to large agent teams.

Table 4: 4-agent find[fireplace] navigation task in House3D.

|                  | Success rate | Avg. # steps |
|------------------|--------------|--------------|
| No communication | 62.1±5.3%    | 186.5        |
| No attention     | 64.3±2.3%    | 101.3        |
| TarMAC           | 68.9±1.1%    | 82.5         |

[Figure 5: Agents navigating to the fireplace in House3D (marked in yellow). Note in particular that agent 4 is spawned facing away from it. It communicates with others, turns to face the fireplace, and moves towards it.]

Note that House3D is a challenging testbed for multi-agent reinforcement learning. To get to ~100% accuracy, agents have to deal with high-dimensional visual observations, be able to navigate long action sequences (up to ~500 steps), and avoid getting stuck against objects, doors, and walls.
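The episode success criterion above reduces to a simple check; the sketch below is our own, with hypothetical inputs (each agent's final geodesic distance to the target, in meters, and the episode length):

```python
def find_x_success(final_distances_m, steps_taken, threshold_m=0.5, max_steps=500):
    """Success iff all agents end within 0.5 m of the target in <= 500 steps."""
    return steps_taken <= max_steps and all(d <= threshold_m for d in final_distances_m)
```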
5.4. Mixed and Competitive Environments

Finally, we look at how to extend TarMAC to mixed and competitive scenarios. Communication via sender-receiver soft attention in TarMAC is poorly suited for competitive scenarios, since there is always "leakage" of the agent's state as a message to other agents via a low but non-zero attention probability, thus compromising its strategy and chances of success. Instead, an agent should first be able to independently decide if it wants to communicate at all, and then direct its message to specific recipients if it does.

The recently proposed IC3Net architecture by Singh et al. (2019) addresses the former: learning when to communicate. At every timestep, each agent in IC3Net predicts a hard gating action to decide if it wants to communicate. At the receiving end, messages from agents who decide to communicate are averaged to form the next input message. Replacing this message averaging with our sender-receiver soft attention, while keeping the rest of the architecture and training details the same as IC3Net, should provide an inductive bias for more flexible communication strategies, since this model (IC3Net + TarMAC) can learn both when to communicate and whom to address messages to.

We evaluate IC3Net + TarMAC on the Predator-Prey environment from Singh et al. (2019), consisting of $n$ predators with limited vision, moving around (with a penalty of $r_{explore} = -0.05$ per timestep) in search of a stationary prey. Once a predator reaches the prey, it keeps receiving a positive reward $r_{prey} = 0.05$ until the end of the episode, i.e. until the other agents reach the prey or the maximum number of steps is reached. The prey gets 0.05 per timestep only until the first predator reaches it, so it has an incentive not to communicate its location.

We compare the average number of steps for agents to reach the prey during training (Figure 6) and at convergence (Table 5). Figure 6 shows that using TarMAC with IC3Net leads to significantly faster convergence than IC3Net alone, and Table 5 shows that TarMAC agents reach the prey faster. Results are averaged over 3 independent runs with different seeds.

[Figure 6: Average no. of steps to complete an episode (lower is better) during training in the Predator-Prey mixed environment, for (a) 3 agents, 5×5 grid, vision=0, max steps=20, and (b) 10 agents, 20×20 grid, vision=1, max steps=80. IC3Net + TarMAC converges much faster than IC3Net, demonstrating that attentional communication helps. Shaded region shows 95% CI.]

Table 5: Average number of steps taken to complete an episode (lower is better) at convergence in the Predator-Prey mixed environment.

|                                   | 3 agents, 5×5, vision=0, max steps=20 | 5 agents, 10×10, vision=1, max steps=40 | 10 agents, 20×20, vision=1, max steps=80 |
|-----------------------------------|----------------------------------------|------------------------------------------|-------------------------------------------|
| CommNet (Sukhbaatar et al., 2016) | 9.1±0.1                                | 13.1±0.01                                | 76.5±1.3                                  |
| IC3Net (Singh et al., 2019)       | 8.9±0.02                               | 13.0±0.02                                | 52.4±3.4                                  |
| IC3Net + TarMAC                   | 8.31±0.06                              | 12.74±0.08                               | 41.67±5.82                                |
| IC3Net + TarMAC (2-round)         | 7.24±0.08                              | –                                        | 35.57±3.96                                |

6." + }, + { + "url": "http://arxiv.org/abs/1711.11543v2", + "title": "Embodied Question Answering", + "abstract": "We present a new AI task -- Embodied Question Answering (EmbodiedQA) -- where\nan agent is spawned at a random location in a 3D environment and asked a\nquestion (\"What color is the car?\"). In order to answer, the agent must first\nintelligently navigate to explore the environment, gather information through\nfirst-person (egocentric) vision, and then answer the question (\"orange\").\n This challenging task requires a range of AI skills -- active perception,\nlanguage understanding, goal-driven navigation, commonsense reasoning, and\ngrounding of language into actions. In this work, we develop the environments,\nend-to-end-trained reinforcement learning agents, and evaluation protocols for\nEmbodiedQA.", + "authors": "Abhishek Das, Samyak Datta, Georgia Gkioxari, Stefan Lee, Devi Parikh, Dhruv Batra", + "published": "2017-11-30", + "updated": "2017-12-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.CL", + "cs.LG" + ], + "main_content": "Introduction

'The embodiment hypothesis is the idea that intelligence emerges in the interaction of an agent with an environment and as a result of sensorimotor activity.' – Smith and Gasser [1]

Our long-term goal is to build intelligent agents that can perceive their environment (through vision, audition, or other sensors), communicate (i.e., hold a natural language dialog grounded in the environment), and act (e.g. aid humans by executing API calls or commands in a virtual or embodied environment). In addition to being a fundamental scientific goal in artificial intelligence (AI), even a small advance towards such intelligent systems can fundamentally change our lives: from assistive dialog agents for the visually impaired, to natural-language interaction with self-driving cars, in-home robots, and personal assistants.
As a step towards goal-driven agents that can perceive, communicate, and execute actions, we present a new AI task, Embodied Question Answering (EmbodiedQA), along with virtual environments, evaluation metrics, and a novel deep reinforcement learning (RL) model for this task.*

*Work partially done during an internship at Facebook AI Research.

[Figure 1: Embodied Question Answering (EmbodiedQA) tasks agents with navigating rich 3D environments in order to answer questions. These embodied agents must jointly learn language understanding, visual reasoning, and navigation to succeed.]

Concretely, the EmbodiedQA task is illustrated in Fig. 1: an agent is spawned at a random location in an environment (a house or building) and asked a question (e.g. 'What color is the car?'). The agent perceives its environment through first-person vision (a single RGB camera) and can perform a few atomic actions: move-{forward, backward, right, left} and turn-{right, left}. The goal of the agent is to intelligently navigate the environment and gather the visual information necessary to answer the question.

EmbodiedQA is a challenging task that subsumes several fundamental AI problems as sub-tasks. Clearly, the agent must understand language (what is the question asking?) and vision (what does a car look like?), but a successful agent must also learn to perform:

Active Perception: The agent may be spawned anywhere in the environment and may not immediately 'see' the pixels containing the answer to the visual question (i.e. the car may not be visible). Thus, the agent must move to succeed, controlling the pixels that it will perceive. The agent must learn to map its visual input to the correct action based on its perception of the world, the underlying physical constraints, and its understanding of the question.

Common Sense Reasoning: The agent is not provided a floor-plan or map of the environment, and must navigate from egocentric views alone. Thus, it must learn common sense (where am I? where are cars typically found in a housing compound? and where is the garage with respect to me?) similar to how humans may navigate in a house they have never visited (the car is probably in the garage outside, so I should find a door that leads out).

Language Grounding: One commonly noted shortcoming of modern vision-and-language models is their lack of grounding: these models often fail to associate entities in text with corresponding image pixels, relying instead on dataset biases to respond seemingly intelligently even when attending to irrelevant regions [2, 3]. In EmbodiedQA, we take a goal-driven view of grounding: our agent grounds a visual question not into pixels but into a sequence of actions ('garage' means to navigate towards the house exterior where the 'car' is usually parked).

Credit Assignment: From a reinforcement learning perspective, EmbodiedQA presents a particularly challenging learning problem. Consider the question 'How many rooms contain chairs?'. How does an agent discover that this question involves exploring the environment to visit 'rooms', detecting 'chairs', incrementing a count every time a 'chair' is in the view (except while the agent is in the same 'room'), and stopping when no more 'rooms' can be found?
All without knowing what a 'room' is or how to find it, what a 'chair' looks like, or what counting is. To succeed, the agent must execute a somewhat precise sequence of hundreds of inter-dependent actions (forward, forward, turn-right, forward, forward, ..., turn-left, '5'), all to be learned from a reward signal that says '4' is the right answer and anything else is incorrect. The task is complex enough that most random action sequences result in negative reward, and when things do go wrong, it's difficult for the agent to know why: was the question misunderstood? Can the agent not detect chairs? Did the agent navigate incorrectly? Was the counting incorrect?

As the first step in this challenging space, we judiciously scope out a problem space (environments, question types, learning paradigm) that allows us to augment the sparse RL rewards with imitation learning (showing the agent example trajectories) and reward shaping [4] (giving intermediate 'getting closer or farther' navigation rewards). Specifically, our approach follows the recent paradigm from robotics and deep RL [5, 6] that training environments are assumed to be sufficiently instrumented, i.e. they provide access to the agent location and to depth and semantic annotations of the environment, and allow for computing obstacle-avoiding shortest paths from the agent to any target location.

Crucially, at test time, our agents operate entirely from egocentric RGB vision alone: no structured representation of the environments, no access to a map, no explicit localization of the agent or mapping of the environment, no A* or any other heuristic planning, and no pre-processing or hand-coded knowledge about the environment or the task of any kind. The agent in its entirety (vision, language, navigation, and answering modules) is trained completely end-to-end, from raw sensory input (pixels and words) to goal-driven multi-room indoor navigation to visual question answering!

Contributions. We make the following contributions:
• We propose a new AI task: EmbodiedQA, where an agent spawned in an environment must intelligently navigate from an egocentric view to gather the necessary information to answer visual questions about its environment.
• We introduce a novel Adaptive Computation Time [7] navigator that decomposes navigation into a 'planner' that selects actions, and a 'controller' that executes these primitive actions a variable number of times before returning control to the planner. When the agent decides it has seen the required visual information to answer the question, it stops navigating and outputs an answer.
• We initialize our agents via imitation learning and show that agents can answer questions more accurately after fine-tuning with reinforcement learning, that is, when allowed to control their own navigation for the express purpose of answering questions accurately. Unlike some prior work, we explicitly test and demonstrate generalization of our agents to unseen environments.
• We evaluate our agents in House3D [8], a rich, interactive environment based on human-designed 3D indoor scenes from the SUNCG dataset [9].
These diverse virtual environments enable us to test generalization of our agent across floor-plans, objects, and room configurations, without the concerns of safety, privacy, and expense inherent to real robotic platforms.
• We introduce the EQA dataset of visual questions and answers grounded in House3D. The different question types test a range of agent abilities: scene recognition (location), spatial reasoning (preposition), and color recognition (color). While the EmbodiedQA task definition supports free-form natural language questions, we represent each question in EQA as a functional program that can be programmatically generated and executed on the environment to determine the answer. This gives us the ability to control the distribution of question types and answers in the dataset, deter algorithms from exploiting dataset bias [3, 10], and provide a fine-grained breakdown of performance by skill.
• We integrated the House3D renderer with Amazon Mechanical Turk (AMT), allowing subjects to remotely operate the agent, and collected expert demonstrations of question-based navigation that serve as a benchmark to compare our proposed and future algorithms.
All our code and data will be made publicly available.

2. Related Work

We place our work in context by arranging prior work along the axes of vision (from a single frame to video), language (from single-shot question answering to dialog), and action (from passive observers to active agents). When viewed from this perspective, EmbodiedQA presents a novel problem configuration: single-shot QA about videos captured by goal-driven active agents. [An inline figure here arranges prior tasks (VQA, VideoQA, Visual Dialog) along the vision, language, and action axes, with EmbodiedQA combining video, single-shot QA, and active agents.] Next, we contrast this against various 2D slices in this space.

VQA: Vision + Language. Like EmbodiedQA, image and video question answering tasks [11-15] require reasoning about natural language questions posed about visual content. The crucial difference is the lack of control: these tasks present answering agents with a fixed view of the environment (i.e. one or more images from some fixed trajectory through the world) from which the agent must answer the question, never allowing the agents to actively perceive. In contrast, EmbodiedQA agents control their trajectory and fate, for good or ill. The task is significantly harder than VQA (i.e. most random paths are useless), but the agent has the flexibility to avoid confusing viewpoints and seek visual input that will maximize answer confidence.

Visual Navigation: Vision + Action. The problem of navigating in an environment based on visual perception has long been studied in vision and robotics (see [16] for an extensive survey). Classical techniques divide navigation into two distinct phases: mapping (where visual observations are used to construct a 3D model of the environment) and planning (which selects paths based on this map). Recent developments in deep RL have proposed fused architectures that go directly from egocentric visual observations to navigational actions [17-23]. We model our agents as similar pixel-to-action navigators. The key distinction in EmbodiedQA is how the goals are specified. Visual navigation typically specifies agent goals either implicitly via the reward function [19, 20] (thus training a separate policy for each goal/reward), or explicitly by conditioning on goal state representations [24], including images of target objects [18].
In contrast, EmbodiedQA specifies agent goals via language, which is inherently compositional and renders training a separate policy for every task (question) infeasible.

Situated Language Learning: Language + Action. Inspired by the classical work of Winograd [25], a number of recent works have revisited grounded language learning by situating agents in simple globally-perceived environments and tasking them with goals specified in natural language. The form and structure of these goal specifications range from declarative programs [26], to simple templated commands [27, 28], to free-form natural language instructions [29, 30]. One key distinction in EmbodiedQA, of course, is visual sensing: the environment is only partially observable, i.e. the agent does not have access to the floor plan, object labels, attributes, etc., and must extract this information purely from first-person visual sensing.

Embodiment: Vision + Language + Action. Closest to EmbodiedQA are recent works that extend the situated language learning paradigm to settings where agents' perceptions are local, purely visual, and change based on their actions, a setting we refer to as embodied language learning. In concurrent and unpublished work, Hermann et al. [21] and Chaplot et al. [17] both develop embodied agents in simple game-like environments consisting of 1-2 rooms and a handful of objects with variable color and shape. In both settings, agents were able to learn to understand simple 'go to X'/'pick up X' style commands where X would specify an object (and possibly some of its attributes). Similarly, Oh et al. [23] present embodied agents in a simple maze-world and task them to complete a series of instructions. In contrast to these approaches, our EmbodiedQA environments consist of multi-room homes (~8 per home) that are densely populated by a variety of objects (~54 unique objects per home). Furthermore, the instructions and commands in these works are low-level and more closely relate to actions than the questions presented in EmbodiedQA.

Interactive Environments. There are a number of interactive environments commonly used in the community, ranging from simple 2D grid-worlds (e.g. XWORLD [27]), to 3D game-like environments with limited realism (e.g. DeepMind Lab [31] or Doom [17]), to more complex, realistic environments (e.g. AI2-THOR [19] or Stanford 2D-3D-S [32]). While realistic environments provide rich representations of the world, most consist of only a handful of environments due to the high difficulty of their creation. On the other hand, large sets of synthetic environments can be programmatically generated; however, they typically lack realism (either in appearance or arrangement). In this work, we use the House3D [8] environment, as it strikes a useful middle ground between simple synthetic and realistic environments. See Sec. 3.1 for more details.

Hierarchical Agents. We model our EmbodiedQA agents as deep hierarchical agents that decompose the overall control problem such that a higher-level planner invokes lower-level controls to issue primitive actions. Such hierarchical modeling has recently shown promise in the deep reinforcement learning setting [23, 28, 33]. Our model also draws inspiration from the work on Adaptive Computation Time models of Graves [7].
[Figure 2: The EQA dataset is built on a subset of the environments and objects from the SUNCG [9] dataset. (a) Sample environments. (b) Queryable rooms: garage, kitchen, elevator, office, balcony, patio, lobby, gym, bathroom, living room, bedroom, dining room. (c) Queryable objects: ironing board, food processor, sink, rug, cup, desk, pan, bed, sofa, toilet, piano, xbox, vase, table, chessboard, towel rack, television, whiteboard, range oven, dishwasher, fireplace, fish tank, stereo set, shoe rack, fruit bowl, knife rack, wardrobe cabinet, cutting board, vacuum cleaner, utensil holder, water dispenser, coffee machine, loudspeaker, playstation, dressing table, refrigerator, bookshelf, microwave, bathtub, ottoman, dresser, computer, washer, tv stand, mirror, heater, dryer, kettle, plates, shower.]

3. EQA Dataset: Questions In Environments

Having placed EmbodiedQA in context, we now dive deeper by outlining the environments in which our agents are embodied and the questions they must answer. We will publicly release the environments, our curated EQA dataset, and our code to aid research in this nascent area.

3.1. House3D: Interactive 3D Environments

We instantiate EmbodiedQA in House3D [8], a recently introduced rich, interactive environment based on 3D indoor scenes from the SUNCG dataset [9]. Concretely, SUNCG consists of synthetic 3D scenes with realistic room and furniture layouts, manually designed using an online interior design interface (Planner5D [34]). Scenes were also further 'verified' as realistic by majority vote of three human annotators. In total, SUNCG contains over 45k environments with 49k valid floors and 404k rooms containing 5 million object instances of 2644 unique objects from 80 different categories. House3D converts SUNCG from a static 3D dataset to a set of virtual environments, where an agent (approximated as a cylinder 1 meter high) may navigate under simple physical constraints (not being able to pass through walls or objects). Fig. 2a shows top-down views of sample environments. Full details may be found in [8].

We build the EQA dataset on a pruned subset of environments from House3D. First, we only consider environments for which all three SUNCG annotators consider the scene layout realistic. Next, we filter out atypical environments such as those lacking ground or those that are too small or large (only keeping houses with an internal area of 300-800 m² covering at least 1/3 of the total ground area). Finally, we exclude non-home environments by requiring at least one kitchen, living room, dining room, and bedroom.
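Summarized as a filter predicate, these pruning criteria look as follows (a sketch of our own; the `env` accessors are hypothetical names for SUNCG's annotations):

```python
def keep_environment(env):
    """EQA environment pruning: realistic, typically sized, home-like houses."""
    realistic = env.num_realistic_votes == 3          # all three annotators agree
    good_size = 300 <= env.internal_area_m2 <= 800    # not too small or too large
    covers_ground = env.internal_area_m2 >= env.ground_area_m2 / 3
    required_rooms = {"kitchen", "living room", "dining room", "bedroom"}
    is_home = required_rooms <= set(env.room_types)   # exclude non-home environments
    return realistic and good_size and covers_ground and is_home
```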
3.2. Question-Answer Generation

We would like to pose questions to agents that test their abilities to ground language, use common sense, reason visually, and navigate the environments. For example, answering the question 'What color is the car?' ostensibly requires grounding the symbol 'car', reasoning that cars are typically outside, navigating outside and exploring until the car is found, and visually inspecting its color. We draw inspiration from the CLEVR [35] dataset, and programmatically generate a dataset (EQA) of grounded questions and answers. This gives us the ability to control the distribution of question types and answers in the dataset, and deter algorithms from exploiting dataset bias.

Queryable Rooms and Objects. Figs. 2b and 2c show the queryable rooms (12) and objects (50) in EQA. We exclude objects and rooms from SUNCG that are obscure (e.g. loggia rooms) or difficult to resolve visually (e.g. very small objects like light switches). We merge some semantically similar object categories (e.g. teapot, coffee kettle) and singular vs. plural forms of the same object type (e.g. books, book) to reduce ambiguity.

Questions as Functional Programs. Each question in EQA is represented as a functional program that can be executed on the environment, yielding an answer¹. These functional programs are composed of a small set of elementary operations (select(·), unique(·), query(·), etc.) that operate on sets of room or object annotations. The number and the order of evaluation of these elementary operations defines a question type or template. For instance, one question type in EQA is the location template:

location: 'What room is the <OBJ> located in?'

where <OBJ> refers to one of the queryable objects. The sequence of elementary operations for this question type is: select(objects) → unique(objects) → query(location). The first function, select(objects), gets all the object names from the environment. The second, unique(objects), retains only the objects that have a single instance in the entire house. The third, query(location), generates a question (by filling in the appropriate template) for each such object. The second operation, unique(objects), is particularly important for generating unambiguous questions. For instance, if there are two air conditioners in the house, the question 'What room is the air conditioner located in?' is ambiguous, with potentially two different answers depending on which instance is being referred to.

¹or a response that the question is inapplicable (e.g. referring to objects not in the environment) or ambiguous (having multiple valid answers).

Question Types. Associated with each question type is a template for generating a question about the rooms and objects, their attributes, and relationships. We define nine question types and associated templates in EQA v1:

location: 'What room is the <OBJ> located in?'
color: 'What color is the <OBJ>?'
color_room: 'What color is the <OBJ> in the <ROOM>?'
preposition: 'What is <PREP> the <OBJ> in the <ROOM>?'
existence: 'Is there a <OBJ> in the <ROOM>?'
logical: 'Is there a(n) <OBJ1> and a(n) <OBJ2> in the <ROOM>?'
count: 'How many <OBJS> in the <ROOM>?'
room_count: 'How many <ROOMS> in the house?'
distance: 'Is the <OBJ1> closer to the <OBJ2> than to the <OBJ3> in the <ROOM>?'

The <ROOM> and <OBJ> tags above can be filled by any valid room or object listed in Fig. 2b and Fig. 2c, respectively. Given these question templates, the possible answers are room names (location), object names (preposition), yes/no (existence, logical, and distance), color names (color), or numbers (count). These questions test a range of agent abilities, including object detection (existence), scene recognition (location), counting (count), spatial reasoning (preposition), color recognition (color), and logical operators (logical). Moreover, many of these questions require multiple capabilities: e.g., answering a distance question requires recognizing the room and objects as well as reasoning about their spatial relation. Furthermore, the agent must do this by navigating the environment to find the room, looking around the room to find the objects, and possibly remembering their positions through time (if all three objects are not simultaneously visible).
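A minimal sketch of the location pipeline above (our own; `env_objects` is a hypothetical list of (object_name, room_name) annotations for one environment):

```python
from collections import Counter

def location_questions(env_objects):
    """select(objects) -> unique(objects) -> query(location)."""
    counts = Counter(name for name, _ in env_objects)        # select(objects)
    unique_names = {n for n, c in counts.items() if c == 1}  # unique(objects)
    return [                                                 # query(location)
        (f"What room is the {name} located in?", room)
        for name, room in env_objects if name in unique_names
    ]
```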
Different question types also require different degrees of navigation and memory. For instance, 'How many bedrooms in the house?' requires significant navigation (potentially exploring the entire environment) and long-term memory (keeping track of the count), while a question like 'What color is the chair in the living room?' requires finding a single room, the living room, and looking for a chair.

EQA is easily extensible to include new elementary operations, question types, and templates as needed to increase the difficulty of the task to match the development of new models. As a first step in this challenging space, our experiments focus on EQA v1, which consists of 4 question types: location, color, color_room, and preposition. One virtue of these questions is that there is a single target queried object (<OBJ>), which enables the use of shortest paths from the agent's spawn location to the target as expert demonstrations for imitation learning (details in Section 4.1). We stress that EQA is not a static dataset, but rather a curriculum of capabilities that we would like to achieve in embodied communicating agents.

Question-Answer Generation and Dataset Bias. In principle, we now have the ability to automatically generate all valid questions and their associated answers for each environment by executing the functional programs on the environment's annotations provided by SUNCG. However, careful consideration is needed to make sure the resulting dataset is balanced over question types and answers. For each filled question template (e.g. 'What room is the refrigerator located in?'), we execute its functional form on all associated environments in the dataset (i.e. those containing refrigerators) to compute the answer distribution for this question. We exclude questions for which the normalized entropy of the answer distribution is below 0.5; e.g., an agent can simply memorize that refrigerators are almost always in kitchens, so this question would be discarded. We also exclude questions occurring in fewer than four environments, as the normalized entropy estimates are unreliable.

Finally, in order to benchmark the performance of agents against human performance on EQA, it is important for the questions to not be tedious or frustrating for humans to answer. We do not ask count questions for objects with high counts (>= 5) or distance questions between object triplets without clear differences in distance. We set these thresholds and room/object blacklists manually, based on our experience performing these tasks. A complete discussion of the question templates, functional programs, elementary operations, and various checks-and-balances can be found in the supplement.

[Figure 3: Overview of the EQA v1 dataset, including dataset split statistics (left) and question type breakdown over color, color_room, preposition, and location (right; shares of 25.9%, 32.5%, 38.7%, and 2.9%).]

|       | Environments | Unique Questions | Total Questions |
|-------|--------------|------------------|-----------------|
| train | 643          | 147              | 4246            |
| val   | 67           | 104              | 506             |
| test  | 57           | 105              | 529             |

EQA v1 Statistics. The EQA v1 dataset consists of over 5000 questions across over 750 environments, referring to a total of 45 unique objects in 7 unique room types. The dataset is split into train, val, and test such that there is no overlap in environments across splits. Fig. 3 shows the dataset splits and question type distribution. Approximately 6 questions are asked per environment on average, 22 at most, and 1 at fewest.
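A sketch of this balancing check (ours; we assume the entropy is normalized by the log of the number of observed answers, which the text above does not spell out):

```python
import math

def keep_question(answer_counts, min_norm_entropy=0.5, min_envs=4):
    """answer_counts: answer -> number of environments producing that answer."""
    total = sum(answer_counts.values())
    if total < min_envs or len(answer_counts) < 2:
        return False  # too rare, or a single answer (zero entropy)
    probs = [c / total for c in answer_counts.values()]
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    return entropy / math.log(len(answer_counts)) >= min_norm_entropy
```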
There are relatively few preposition questions, as many frequently occurring spatial relations are too easy to resolve without exploration and fail the entropy thresholding. We will make EQA v1 and the entire generation engine publicly available.

4. A Hierarchical Model for EmbodiedQA

We now introduce our proposed neural architecture for an EmbodiedQA agent. Recall that the agent is spawned at a random location in the environment, receives a question, and perceives only through a single egocentric RGB camera. Importantly, unlike some prior work [26-30], in EmbodiedQA the agent does not receive any global or structured representation of the environment (map, location, objects, rooms), or of the task (the functional program that generated the question).

[Figure 4: Our Adaptive Computation Time (ACT) navigator splits the navigation task between a planner and a controller module. The planner selects actions, and the controller decides to continue performing that action for a variable number of time steps, resulting in a decoupling of direction ('turn left') and velocity ('5 times') and strengthening the long-term gradient flows of the planner module.]

Overview of the Agent. The agent has 4 natural modules (vision, language, navigation, and answering) and is trained from raw sensory input (pixels and words) to goal-driven multi-room indoor navigation to visual question answering. The modules themselves are built up largely from conventional neural building blocks: Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). One key technical novelty in our model is the use of Adaptive Computation Time (ACT) RNNs by Graves [7], an elegant approach for allowing RNNs to learn how many computational steps to take between receiving an input and emitting an output, by back-propagating through a 'halting' layer. We make use of this idea in our navigation module to cleanly separate the decision between direction (where to move, decided by a 'planner') and velocity (how far to move, decided by a 'controller'). Fig. 4 illustrates the different modules in our agent, which we describe next.

Vision. Our agent takes egocentric 224×224 RGB images from the House3D renderer as input, which we process with a CNN consisting of 4 {5×5 Conv, ReLU, BatchNorm, 2×2 MaxPool} blocks, producing a fixed-size representation. A strong visual system for EmbodiedQA should encode information about object attributes (i.e. colors and textures), semantics (i.e. object categories), and environmental geometry (i.e. depth).
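In PyTorch, the stated block structure corresponds to roughly the following (our sketch; the text fixes the block structure but not the channel widths, which we pick arbitrarily here):

```python
import torch.nn as nn

def make_eqa_encoder():
    """Four {5x5 Conv, ReLU, BatchNorm, 2x2 MaxPool} blocks over 224x224 RGB."""
    def block(c_in, c_out):
        return nn.Sequential(
            nn.Conv2d(c_in, c_out, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.BatchNorm2d(c_out),
            nn.MaxPool2d(2),
        )
    # 3 -> 8 -> 16 -> 32 -> 32 channels; spatial size 224 -> 14 after 4 pools
    return nn.Sequential(block(3, 8), block(8, 16), block(16, 32), block(32, 32))
```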
To encourage all three, we pretrain the CNN under a multi-task pixel-to-pixel prediction framework: treating the above CNN as an encoder network, we train multiple network heads to decode 1) the original RGB values, 2) the semantic class, and 3) the depth of each pixel (the latter two can be obtained from the House3D renderer). See the supplementary material for full model and training details.

Language. Our agent also receives questions, which we encode with 2-layer LSTMs with 128-d hidden states. Note that we learn separate question encoders for the navigation and answering modules, as each may need to focus on different parts of the question. For instance, in the question 'What color is the chair in the kitchen?', 'color' is irrelevant for navigation and 'kitchen' matters little for question answering (once in the kitchen).

Navigation. We introduce a novel Adaptive Computation Time (ACT) navigator that decomposes navigation into a 'planner', which selects actions (forward, left, right), and a 'controller', which executes these primitive actions a variable number of times (1, 2, ...) before returning control back to the planner. Intuitively, this structure separates the intention of the agent (i.e. get to the other end of the room) from the series of primitive actions required to achieve this directive (i.e. 'forward, forward, forward, ...'), and is reminiscent of hierarchical RL approaches [23, 28, 33]. This division also allows the planner to have variable time steps between decisions, strengthening long-term gradient flows.

Formally, let $t = 1, 2, \dots, T$ denote planner timesteps, and $n = 0, 1, 2, \dots, N(t)$ denote the variable number of controller steps. Let $I_t^n$ denote the encoding of the image observed at the $t$-th planner time and $n$-th controller step. We instantiate the planner as an LSTM. Thus, the planner maintains a hidden state $h_t$ (updated only at planner timesteps), and samples an action $a_t \in \{$forward, turn-left, turn-right, stop-navigation$\}$:
$$a_t, h_t \leftarrow \mathrm{PLNR}\left(h_{t-1}, I_t^0, Q, a_{t-1}\right), \quad (1)$$
where $Q$ is the question encoding. After taking this action, the planner passes control to the controller, which considers the planner's state and the current frame to decide whether to continue performing $a_t$ or to return control to the planner, i.e.
$$\{0, 1\} \ni c_t^n \leftarrow \mathrm{CTRL}\left(h_t, a_t, I_t^n\right). \quad (2)$$
If $c_t^n = 1$, then the action $a_t$ repeats and CTRL is applied to the next frame. Else, if $c_t^n = 0$ or a maximum of 5 controller steps has been reached, control is returned to the planner. We instantiate the controller as a feed-forward multi-layer perceptron with 1 hidden layer. Intuitively, the planner encodes 'intent' into the state encoding $h_t$ and the chosen action $a_t$, and the controller keeps going until the visual input $I_t^n$ aligns with the intent of the planner.

Question Answering. After the agent decides to stop (or a maximum number of actions have been taken), the question answering module is executed to provide an answer based on the sequence of frames $I_1^1, \dots, I_T^n$ the agent has observed throughout its trajectory. The answering module attends to each of the last five frames, computes an attention-pooled visual encoding based on image-question similarity, combines these with an LSTM encoding of the question, and outputs a softmax over the space of 172 possible answers.
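Putting Equations (1)-(2) together, the planner-controller control flow can be sketched as follows (our illustration; the `planner`, `controller`, and `env` interfaces are hypothetical stand-ins for the learned modules and the renderer):

```python
def act_navigate(planner, controller, env, question_enc, max_ctrl_steps=5):
    """The planner picks an action (Eq. 1); the controller repeats it until
    it emits 0 or 5 controller steps elapse (Eq. 2)."""
    h, prev_action = None, None
    frame = env.observe()                                          # I_t^0
    while True:
        action, h = planner(h, frame, question_enc, prev_action)  # Eq. (1)
        if action == "stop-navigation":
            return
        for _ in range(max_ctrl_steps):
            frame = env.step(action)                               # I_t^n while repeating a_t
            if controller(h, action, frame) == 0:                  # Eq. (2): return control
                break
        prev_action = action
```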
4.1. Imitation Learning and Reward Shaping

We employ a two-stage training process. First, the navigation and answering modules are independently trained using imitation/supervised learning on automatically generated expert demonstrations of navigation. Second, the entire architecture is jointly fine-tuned using policy gradients.

Independent Pretraining via Imitation Learning. Most questions that could be asked in EmbodiedQA do not have a natural 'correct' navigation required to answer them. As mentioned in Section 3.2, one virtue of EQA v1 questions is that they contain a single target queried object (<OBJ>). This allows us to use the shortest path from the agent's spawn location to the target as an expert demonstration. The navigation module is trained to mimic the shortest-path actions in a teacher-forcing setting, i.e., given the history encoding, question encoding, and the current frame, the model is trained to predict the action that would keep it on the shortest path. We use a cross-entropy loss and train the model for 15 epochs.

We find that even in this imitation learning case, it is essential to train the navigator under a distance-based curriculum. In the first epoch, we backtrack 10 steps from the target along the shortest path and initialize the agent at this point with the full history of the trajectory from the spawned location. We step back an additional 10 steps at each successive training epoch. We train for 15 epochs total with a batch size ranging from 5 to 20 questions (depending on path length, due to memory limitations). The question answering module is trained to predict the correct answer based on the question and the frames seen on the shortest path. We apply standard cross-entropy training over 50 epochs with a batch size of 20.

Target-aware Navigational Fine-tuning. While the navigation and answering modules that result from imitation learning perform well on their independent tasks, they are poorly suited to dealing with each other. Specifically, both modules are accustomed to following the provided shortest path, but when in control, the navigator may generalize poorly and provide the question answerer with unhelpful views of the target (if it finds it at all). Rather than try to force the answering agent to provide correct answers from noisy or absent views, we freeze it and fine-tune the navigator.

We provide two types of reward signals to the navigator: the question answering accuracy achieved at the end of the navigation, and a reward shaping [4] term that gives intermediate rewards for getting closer to the target. Specifically, the answering reward is 5 if the agent chooses to stop and answers correctly, and 0 otherwise. The navigational reward for forward actions is 0.005 times the change in distance to the target object (there is no reward or penalty for turning). We train the agent with REINFORCE [36] policy gradients, with a running-average baseline for the answer reward. As in the imitation learning setting, we follow a curriculum of increasing distance between spawn and target locations.
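The combined reward signal amounts to the following (our own sketch):

```python
def finetuning_reward(action, dist_before, dist_after, stopped, answered_correctly):
    """Reward shaping: 0.005 x (decrease in shortest-path distance to the
    target) for forward moves, nothing for turns, plus +5 for stopping
    and answering correctly."""
    r = 0.005 * (dist_before - dist_after) if action == "forward" else 0.0
    if stopped and answered_correctly:
        r += 5.0
    return r
```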
Training details. All LSTMs are 2-layered with a 128-d hidden state. We use Adam [37] with a learning rate of $10^{-3}$, and clamp gradients to $[-5, 5]$. We incrementally load environments in memory and use a batch size of 10 during both the imitation learning and REINFORCE fine-tuning stages. One forward step corresponds to at most 0.25 meters, and it takes 40 turns to turn 360°, i.e. one right or left turn action leads to a 9° change in viewing angle. Backward and strafe motions are not allowed. We snap the continuous renderer space to a 1000×1000 grid to check for obstacles. Our entire codebase will be publicly available.

5. Experiments and Results

The ultimate goal of an EmbodiedQA agent is to answer questions accurately. However, it is important to disentangle success/failure at the intermediate task of navigation from the ultimate downstream task of question answering.

Question Answering Accuracy. Our agent (and all baselines) produce a probability distribution over 172 possible answers (colors, rooms, objects). We report the mean rank (MR) of the ground-truth answer in the answer list sorted by the agent's beliefs, where the mean is computed over all test questions and environments.

Navigation Accuracy. We evaluate navigation performance on EQA v1 by reporting the distance to the target object at navigation termination ($d_T$), the change in distance to the target from initial to final position ($d_\Delta$), and the smallest distance to the target at any point in the episode ($d_{min}$). All distances are measured in meters along the shortest path to the target. We also record the percentage of questions for which an agent either terminates in ($\%r_T$) or ever enters ($\%r_e$) the room containing the target object(s). Finally, we also report the percentage of episodes in which agents choose to terminate navigation and answer before reaching the maximum episode length ($\%stop$). To sweep the difficulty of the task at test time, we spawn the agent 10, 30, or 50 actions away from the target and report each metric for the T−10, T−30, and T−50 settings.

Table 1: Quantitative evaluation of EmbodiedQA agents on navigation and answering metrics for the EQA v1 test set. Each cell reports the metric at spawn distances T−10 / T−30 / T−50. Ill-defined cells are marked '-' because 1) reactive models don't have a stopping action, 2) humans pick a single answer from a drop-down list, so mean rank is not defined, and 3) most distance metrics are trivially defined for shortest paths since they always end at the target object by design.

|                            | $d_T$              | $d_\Delta$            | $d_{min}$          | $\%r_T$      | $\%r_e$      | $\%stop$        | MR                 |
|----------------------------|--------------------|-----------------------|--------------------|--------------|--------------|-----------------|--------------------|
| Reactive                   | 2.09 / 2.72 / 3.14 | -1.44 / -1.09 / -0.31 | 0.29 / 1.01 / 1.82 | 50 / 49 / 47 | 52 / 53 / 48 | -               | 3.18 / 3.56 / 3.31 |
| LSTM                       | 1.75 / 2.37 / 2.90 | -1.10 / -0.74 / -0.07 | 0.34 / 1.06 / 2.05 | 55 / 53 / 44 | 59 / 57 / 50 | 80 / 75 / 80    | 3.35 / 3.07 / 3.55 |
| Reactive+Q                 | 1.58 / 2.27 / 2.89 | -0.94 / -0.63 / -0.06 | 0.31 / 1.09 / 1.96 | 52 / 51 / 45 | 55 / 57 / 54 | -               | 3.17 / 3.54 / 3.37 |
| LSTM+Q                     | 1.13 / 2.23 / 2.89 | -0.48 / -0.59 / -0.06 | 0.28 / 0.97 / 1.91 | 63 / 53 / 45 | 64 / 59 / 54 | 80 / 71 / 68    | 3.11 / 3.39 / 3.31 |
| ACT+Q (ours)               | 0.46 / 1.50 / 2.74 | 0.16 / 0.15 / 0.12    | 0.42 / 1.42 / 2.63 | 58 / 54 / 45 | 60 / 56 / 46 | 100 / 100 / 100 | 3.09 / 3.13 / 3.25 |
| ACT+Q-RL (ours)            | 1.67 / 2.19 / 2.86 | -1.05 / -0.52 / 0.01  | 0.24 / 0.93 / 1.94 | 57 / 56 / 45 | 65 / 62 / 52 | 32 / 32 / 24    | 3.13 / 2.99 / 3.22 |
| HumanNav* (oracle)         | 0.81 / 0.81 / 0.81 | 0.44 / 1.62 / 2.85    | 0.33 / 0.33 / 0.33 | 86 / 86 / 86 | 87 / 89 / 89 | -               | -                  |
| ShortestPath+VQA (oracle)  | -                  | 0.85 / 2.78 / 4.86    | -                  | -            | -            | -               | 3.21 / 3.21 / 3.21 |

[Figure 5: Sample trajectories from the ACT+Q-RL agent ('what color is the fish tank/bowl in the living room?', 'what room is the vase located in?') projected on a floor plan (white areas are unoccupiable), with on-path egocentric views. The agent moves closer to already visible objects, potentially improving its perception of the objects. Note that the floor plan is shown only for illustration and is not available to the agents.]
Navigation Baselines. We compare our ACT navigator with a number of sophisticated baselines and ablations.

Reactive CNN. This is a feedforward network that uses the last n frames to predict the next action. We tried n ∈ {1, 3, 5, 10} and report n = 5, which worked best. Note that this is a target-agnostic baseline (i.e., it is not aware of the question). The purpose of this baseline is to check whether simply memorizing frames from training environments generalizes to test (it does not).

Reactive CNN+Question. This combines the frame representation (as above) with an LSTM encoding of the question to predict the next action. This is similar to the approach of [18], with the difference that the goal is specified via a question encoding instead of a target image. Note that the action space for both the reactive baselines is {forward, turn-left, turn-right}; there is no stop token. At test time, the model is run for the maximum number of actions (= 100).

LSTM+Question. The above two are memoryless navigators. This LSTM navigator takes as input the encodings of the question, current frame, and previous action, and predicts the next action. Note that these are identical inputs/outputs as our ACT navigator. The purpose of comparing to this ablation of our approach is to establish the benefit of our proposed planner-controller architecture. We also compare against an ablated version of this baseline without the question encoding as input (LSTM).

Navigation Oracles. We compare against two oracles: HumanNav* denotes goal-driven navigations by AMT workers remotely operating the agent (* denotes that human studies were conducted on a subset of the test set). ShortestPath+VQA denotes the question answering performance achieved by our answering module when fed the shortest path at test time.

Table 1 shows the results of all baselines compared with our approach trained with just imitation learning (ACT+Q) and our approach fine-tuned with RL (ACT+Q-RL).[2] We make a few key observations:

[2] youtube.com/watch?v=gVj-TeIJfrk shows example navigation and answer predictions by our agent.

• All baselines are poor navigators. All baseline methods have negative d_Δ, i.e., they end up farther from the target than where they start. This confirms our intuition that EmbodiedQA is indeed a difficult problem.

• Memory helps. All models start equally far away from the target. Baselines augmented with memory (LSTM vs. Reactive and LSTM+Q vs. Reactive+Q) end closer to the target, i.e., achieve smaller d_T, than those without.

• ACT navigator performs best. Our proposed navigator (ACT+Q) achieves the smallest distance to target at the end of navigation (d_T), and the RL-finetuned navigator (ACT+Q-RL) achieves the highest answering accuracy.

• RL agent overshoots. Interestingly, we observe that while our RL-finetuned agent (ACT+Q-RL) gets closest to the target in its trajectory (i.e., achieves the least d_min) and enters the target room most often (i.e., achieves the highest %r_ê), it does not end closest to the target (i.e., does not achieve the highest d_Δ). These statistics and our qualitative analysis suggest that this is because RL-finetuned agents learn to explore, with a lower stopping rate (%stop), and often overshoot the target. This is consistent with observations in the literature [6]. In EmbodiedQA, this overshooting behavior does not hurt the question answering accuracy because the answering module can attend to frames along the trajectory.
This behavior can be corrected by including a small negative reward for each action.

• Shortest paths are not optimal for VQA. A number of methods outperform ShortestPath+VQA in terms of answering accuracy. This is because while the shortest path clearly takes an agent to the target object, it may not provide the best vantage point to answer the question. In future work, these may be improved by ray tracing methods to appropriately frame the target object at termination." + } ], "Aditya Grover": [ + { + "url": "http://arxiv.org/abs/1906.09531v2", + "title": "Bias Correction of Learned Generative Models using Likelihood-Free Importance Weighting", + "abstract": "A learned generative model often produces biased statistics relative to the\nunderlying data distribution. A standard technique to correct this bias is\nimportance sampling, where samples from the model are weighted by the\nlikelihood ratio under model and true distributions. When the likelihood ratio\nis unknown, it can be estimated by training a probabilistic classifier to\ndistinguish samples from the two distributions. We employ this likelihood-free\nimportance weighting method to correct for the bias in generative models. We\nfind that this technique consistently improves standard goodness-of-fit metrics\nfor evaluating the sample quality of state-of-the-art deep generative models,\nsuggesting reduced bias. Finally, we demonstrate its utility on representative\napplications in a) data augmentation for classification using generative\nadversarial networks, and b) model-based policy evaluation using off-policy\ndata.", + "authors": "Aditya Grover, Jiaming Song, Alekh Agarwal, Kenneth Tran, Ashish Kapoor, Eric Horvitz, Stefano Ermon", + "published": "2019-06-23", + "updated": "2019-11-03", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG", + "cs.NE" + ], + "main_content": "Introduction

Learning generative models of complex environments from high-dimensional observations is a longstanding challenge in machine learning. Once learned, these models are used to draw inferences and to plan future actions. For example, in data augmentation, samples from a learned model are used to enrich a dataset for supervised learning [1]. In model-based off-policy policy evaluation (henceforth MBOPE), a learned dynamics model is used to simulate and evaluate a target policy without real-world deployment [2], which is especially valuable for risk-sensitive applications [3]. In spite of the recent successes of deep generative models, existing theoretical results show that learning distributions in an unbiased manner is either impossible or has prohibitive sample complexity [4, 5]. Consequently, the models used in practice are inherently biased,¹ and can lead to misleading downstream inferences. In order to address this issue, we start from the observation that many typical uses of generative models involve computing expectations under the model. For instance, in MBOPE, we seek to find the expected return of a policy under a trajectory distribution defined by this policy and a learned dynamics model. A classical recipe for correcting the bias in expectations, when samples from a different distribution than the ground truth are available, is to importance weight the samples according to the likelihood ratio [6]. If the importance weights were exact, the resulting estimates are unbiased.
But in practice, the likelihood ratio is unknown and needs to be estimated, since the true data distribution is unknown and even the model likelihood is intractable or ill-defined for many deep generative models, e.g., variational autoencoders [7] and generative adversarial networks [8]. Our proposed solution to estimate the importance weights is to train a calibrated, probabilistic classifier to distinguish samples from the data distribution and the generative model. As shown in prior work, the output of such classifiers can be used to extract density ratios [9]. Appealingly, this estimation procedure is likelihood-free since it only requires samples from the two distributions.

¹We call a generative model biased if it produces biased statistics relative to the true data distribution.

Together, the generative model and the importance weighting function (specified via a binary classifier) induce a new unnormalized distribution. While exact density estimation and sampling from this induced distribution is intractable, we can derive a particle-based approximation which permits efficient sampling via resampling-based methods. We derive conditions on the quality of the weighting function such that the induced distribution provably improves the fit to the data distribution. Empirically, we evaluate our bias reduction framework on three main sets of experiments. First, we consider goodness-of-fit metrics for evaluating the sample quality of a likelihood-based and a likelihood-free state-of-the-art (SOTA) model on the CIFAR-10 dataset. All these metrics are defined as Monte Carlo estimates from the generated samples. By importance weighting samples, we observe a bias reduction of 23.35% and 13.48% averaged across commonly used sample quality metrics on PixelCNN++ [10] and SNGAN [11] models respectively. Next, we demonstrate the utility of our approach on the task of data augmentation for multi-class classification on the Omniglot dataset [12]. We show that, while naively extending the dataset with samples from a data augmentation generative adversarial network [1] is not very effective for multi-class classification, we can improve classification accuracy from 66.03% to 68.18% by importance weighting the contributions of each augmented data point. Finally, we demonstrate bias reduction for MBOPE [13]. A typical MBOPE approach is to first estimate a generative model of the dynamics using off-policy data and then evaluate the policy via Monte Carlo [2, 14]. Again, we observe that correcting the bias of the estimated dynamics model via importance weighting reduces RMSE for MBOPE by 50.25% on 3 MuJoCo environments [15].

2 Preliminaries

Notation. Unless explicitly stated otherwise, we assume that probability distributions admit absolutely continuous densities on a suitable reference measure. We use uppercase notation X, Y, Z to denote random variables and lowercase notation x, y, z to denote specific values in the corresponding sample spaces X, Y, Z. We use boldface for multivariate random variables and their vector values.

Background. Consider a finite dataset D_train of instances x drawn i.i.d. from a fixed (unknown) distribution p_data. Given D_train, the goal of generative modeling is to learn a distribution p_θ to approximate p_data. Here, θ denotes the model parameters, e.g.,
weights in a neural network for deep generative models. The parameters can be learned via maximum likelihood estimation (MLE), as in the case of autoregressive models [16], normalizing flows [17], and variational autoencoders [7, 18], or via adversarial training, e.g., using generative adversarial networks [8, 19] and variants.

Monte Carlo Evaluation. We are interested in use cases where the goal is to evaluate or optimize expectations of functions under some distribution p (either equal or close to the data distribution p_data). Assuming access to samples from p as well as some generative model p_θ, one extreme is to evaluate the sample average using the samples from p alone. However, this ignores the availability of p_θ, through which we have virtually unlimited access to generated samples (ignoring computational constraints) and hence could improve the accuracy of our estimates when p_θ is close to p. We begin by presenting a direct motivating use case of data augmentation using generative models for training classifiers which generalize better.

Example Use Case: Sufficient labeled training data for learning classification and regression systems is often expensive to obtain or susceptible to noise. Data augmentation seeks to overcome this shortcoming by artificially injecting new datapoints into the training set. These new datapoints are derived from an existing labeled dataset, either by manual transformations (e.g., rotations, flips for images), or alternatively, learned via a generative model [1, 20]. Consider a supervised learning task over a labeled dataset D_cl. The dataset consists of feature and label pairs (x, y), each of which is assumed to be sampled independently from a data distribution p_data(x, y) defined over X × Y. Further, let Y ⊆ R^k. In order to learn a classifier f_ψ : X → R^k with parameters ψ, we minimize the expectation of a loss ℓ : Y × R^k → R over the dataset D_cl:

$\mathbb{E}_{p_{\mathrm{data}}(x,y)}[\ell(y, f_\psi(x))] \approx \frac{1}{|D_{\mathrm{cl}}|} \sum_{(x,y) \sim D_{\mathrm{cl}}} \ell(y, f_\psi(x)). \quad (1)$

E.g., ℓ could be the cross-entropy loss. A generative model for the task of data augmentation learns a joint distribution p_θ(x, y). Several algorithmic variants exist for learning the model's joint distribution and we defer the specifics to the experiments section. Once the generative model is learned, it can be used to optimize the expected classification loss in Eq. (1) under a mixture distribution of the empirical data distribution and the generative model distribution, given as:

$p_{\mathrm{mix}}(x, y) = m\, p_{\mathrm{data}}(x, y) + (1 - m)\, p_\theta(x, y) \quad (2)$

for a suitable choice of the mixture weights m ∈ [0, 1]. Notice that, while the eventual task here is optimization, reliably evaluating the expected loss of a candidate parameter ψ is an important ingredient. We focus on this basic question first, in advance of leveraging the solution for data augmentation. Further, even if evaluating the expectation once is easy, optimization requires us to do repeated evaluation (for different values of ψ), which is significantly more challenging. Also observe that the distribution p under which we seek expectations is the same as p_data here, and we rely on the generalization of p_θ to generate transformations of an instance in the dataset which are not explicitly present, but plausibly observed in other, similar instances [21].
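A minimal sketch of a training step on the mixture distribution of Eq. (2) follows. Here `classifier` and `generator.sample` are hypothetical stand-ins for f_ψ and a learned joint model p_θ(x, y); the sampling API is an assumption for illustration.

```python
import torch
import torch.nn.functional as F

def mixture_step(classifier, optimizer, real_batch, generator, m=0.8):
    """One optimization step of Eq. (1) under the mixture p_mix of Eq. (2):
    a fraction m of the batch comes from the real dataset, the remainder
    from the generative model (uniformly weighted)."""
    x_real, y_real = real_batch
    n_gen = int((1 - m) * len(y_real))
    x_gen, y_gen = generator.sample(n_gen)        # assumed sampling API
    x = torch.cat([x_real, x_gen])
    y = torch.cat([y_real, y_gen])
    loss = F.cross_entropy(classifier(x), y)      # Eq. (1) on the mixed batch
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```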
3 Likelihood-Free Importance Weighting

Whenever the distribution p, under which we seek expectations, differs from p_θ, model-based estimates exhibit bias. In this section, we start out by formalizing bias for Monte Carlo expectations and subsequently propose a bias reduction strategy based on likelihood-free importance weighting (LFIW). We are interested in evaluating expectations of a class of functions of interest f ∈ F w.r.t. the distribution p. For any given f : X → R, we have $\mathbb{E}_{x \sim p}[f(x)] = \int p(x) f(x)\, \mathrm{d}x$. Given access to samples from a generative model p_θ, if we knew the densities for both p and p_θ, then a classical scheme to evaluate expectations under p using samples from p_θ is to use importance sampling [6]. We reweight each sample from p_θ according to its likelihood ratio under p and p_θ and compute a weighted average of the function f over these samples:

$\mathbb{E}_{x \sim p}[f(x)] = \mathbb{E}_{x \sim p_\theta}\left[\frac{p(x)}{p_\theta(x)} f(x)\right] \approx \frac{1}{T} \sum_{i=1}^{T} w(x_i) f(x_i) \quad (3)$

where $w(x_i) := p(x_i)/p_\theta(x_i)$ is the importance weight for $x_i \sim p_\theta$. The validity of this procedure is subject to the use of a proposal p_θ such that for all x ∈ X where p_θ(x) = 0, we also have f(x)p(x) = 0.²

²A stronger sufficient, but not necessary, condition that is independent of f states that the proposal p_θ is valid if it has a support larger than p, i.e., for all x ∈ X, p_θ(x) = 0 implies p(x) = 0.

To apply this technique to reduce the bias of a generative sampler p_θ w.r.t. p, we require knowledge of the importance weights w(x) for any x ∼ p_θ. However, we typically only have sampling access to p via finite datasets. For instance, in the data augmentation example above, p = p_data, the unknown distribution used to learn p_θ. Hence we need a scheme to learn the weights w(x), using samples from p and p_θ, which is the problem we tackle next. In order to do this, we consider a binary classification problem over X × Y where Y = {0, 1} and the joint distribution is denoted as q(x, y). Let γ = q(y = 0)/q(y = 1) > 0 denote any fixed odds ratio. To specify the joint q(x, y), we additionally need the conditional q(x|y), which we define as follows:

$q(x|y) = \begin{cases} p_\theta(x) & \text{if } y = 0 \\ p(x) & \text{otherwise.} \end{cases} \quad (4)$

Since we only assume sample access to p and p_θ(x), our strategy would be to estimate the conditional above via learning a probabilistic binary classifier. To train the classifier, we only require datasets of samples from p_θ(x) and p(x), and estimate γ to be the ratio of the sizes of the two datasets. Let c_φ : X → [0, 1] denote the probability assigned by the classifier with parameters φ to a sample x belonging to the positive class y = 1. As shown in prior work [9, 22], if c_φ is Bayes optimal, then the importance weights can be obtained via this classifier as:

$w_\phi(x) = \frac{p(x)}{p_\theta(x)} = \gamma\, \frac{c_\phi(x)}{1 - c_\phi(x)}. \quad (5)$

Figure 1 (caption; panels (a) Setup, (b) n = 50, (c) n = 100, (d) n = 1000): Importance weight estimation using probabilistic classifiers. (a) A univariate Gaussian (blue) is fit to samples from a mixture of two Gaussians (red). (b-d) Estimated class probabilities (with 95% confidence intervals based on 1000 bootstraps) for varying numbers of points n, where n is the number of points used for training the generative model and multilayer perceptron.
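The following is a minimal, runnable sketch of this estimation procedure on a synthetic setup mirroring Figure 1: a unimodal Gaussian model is fit to a mixture of two Gaussians, a classifier is trained to tell the two apart, and its probabilities are converted into weights via Eq. (5). The network size and data are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 1000
real = np.concatenate([rng.normal(-2, 0.5, n // 2), rng.normal(2, 0.5, n // 2)])  # p
fake = rng.normal(real.mean(), real.std(), n)   # p_theta: single-Gaussian model fit

X = np.concatenate([fake, real]).reshape(-1, 1)
y = np.concatenate([np.zeros(n), np.ones(n)])   # y=0: generated, y=1: real
gamma = float(n) / n                            # odds ratio q(y=0)/q(y=1)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000).fit(X, y)

def lfiw_weights(samples):
    """Estimated importance weights w(x) = gamma * c(x) / (1 - c(x)), Eq. (5)."""
    c = clf.predict_proba(samples.reshape(-1, 1))[:, 1].clip(1e-6, 1 - 1e-6)
    return gamma * c / (1 - c)

# Debiased Monte Carlo estimate of E_p[f(x)] using model samples:
f = lambda x: x ** 2
xs = rng.normal(real.mean(), real.std(), 10000)
print("plain model estimate:", np.mean(f(xs)))
print("LFIW estimate:       ", np.mean(lfiw_weights(xs) * f(xs)))
print("ground truth (p):    ", np.mean(f(real)))
```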
In practice, we do not have access to a Bayes optimal classifier and hence the estimated importance weights will not be exact. Consequently, we can hope to reduce the bias as opposed to eliminating it entirely. Hence, our default LFIW estimator is given as:

$\mathbb{E}_{x \sim p}[f(x)] \approx \frac{1}{T} \sum_{i=1}^{T} \hat{w}_\phi(x_i) f(x_i) \quad (6)$

where $\hat{w}_\phi(x_i) = \gamma\, \frac{c_\phi(x_i)}{1 - c_\phi(x_i)}$ is the importance weight for $x_i \sim p_\theta$ estimated via $c_\phi(x)$.

Practical Considerations. Besides imperfections in the classifier, the quality of a generative model also dictates the efficacy of importance weighting. For example, images generated by deep generative models often possess distinct artifacts which can be exploited by the classifier to give highly-confident predictions [23, 24]. This could lead to very small importance weights for some generated images, and consequently greater relative variance in the importance weights across the Monte Carlo batch. Below, we present some practical variants of the LFIW estimator to offset this challenge.

1. Self-normalization: The self-normalized LFIW estimator for Monte Carlo evaluation normalizes the importance weights across a sampled batch:

$\mathbb{E}_{x \sim p}[f(x)] \approx \sum_{i=1}^{T} \frac{\hat{w}_\phi(x_i)}{\sum_{j=1}^{T} \hat{w}_\phi(x_j)} f(x_i) \quad \text{where } x_i \sim p_\theta. \quad (7)$

2. Flattening: The flattened LFIW estimator interpolates between the uniform importance weights and the default LFIW weights via a power-scaling parameter α ≥ 0:

$\mathbb{E}_{x \sim p}[f(x)] \approx \frac{1}{T} \sum_{i=1}^{T} \hat{w}_\phi(x_i)^\alpha f(x_i) \quad \text{where } x_i \sim p_\theta. \quad (8)$

For α = 0, there is no bias correction, and α = 1 returns the default estimator in Eq. (6). For intermediate values of α, we can trade off bias reduction with any undesirable variance introduced.

3. Clipping: The clipped LFIW estimator specifies a lower bound β ≥ 0 on the importance weights:

$\mathbb{E}_{x \sim p}[f(x)] \approx \frac{1}{T} \sum_{i=1}^{T} \max(\hat{w}_\phi(x_i), \beta) f(x_i) \quad \text{where } x_i \sim p_\theta. \quad (9)$

When β = 0, we recover the default LFIW estimator in Eq. (6). Finally, we note that these estimators are not exclusive and can be combined, e.g., flattened or clipped weights can be normalized.

Confidence intervals. Since we have real and generated data coming from a finite dataset and parametric model respectively, we propose a combination of empirical and parametric bootstraps to derive confidence intervals around the estimated importance weights. See Appendix A for details.

Synthetic experiment. We visually illustrate our importance weighting approach in a toy experiment (Figure 1a). We are given a finite set of samples drawn from a mixture of two Gaussians (red). The model family is a unimodal Gaussian, illustrating mismatch due to a parametric model. The mean and variance of the model are estimated by the empirical means and variances of the observed data. Using the estimated model parameters, we then draw samples from the model (blue).
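A short sketch of the three practical estimator variants above (Eqs. 7-9), applied to precomputed weights `w` and function values `fx` on model samples (e.g., the `lfiw_weights` and `f` from the previous sketch):

```python
import numpy as np

def self_normalized(w, fx):
    return np.sum(w / np.sum(w) * fx)          # Eq. (7)

def flattened(w, fx, alpha=0.5):
    return np.mean(w ** alpha * fx)            # Eq. (8); alpha=0 -> no correction

def clipped(w, fx, beta=0.1):
    return np.mean(np.maximum(w, beta) * fx)   # Eq. (9); beta=0 -> default LFIW

# Example usage with the earlier sketch:
# w, fx = lfiw_weights(xs), f(xs)
# print(self_normalized(w, fx), flattened(w, fx), clipped(w, fx))
```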
In Figure 1b, we show the probability assigned by a binary classifier to a point being from the true data distribution. Here, the classifier is a single-hidden-layer multilayer perceptron. The classifier is not Bayes optimal, which can be seen from the gaps between the optimal probability curve (black) and the estimated class probability curve (green). However, as we increase the number of real and generated examples n in Figures 1c-d, the classifier approaches optimality. Furthermore, even its uncertainty shrinks with increasing data, as expected. In summary, this experiment demonstrates how a binary classifier can mitigate the bias due to a mismatched generative model.

4 Importance Resampled Generative Modeling

In the previous section, we described a procedure to augment any base generative model p_θ with an importance weighting estimator ŵ_φ for debiased Monte Carlo evaluation. Here, we will use this augmentation to induce an importance resampled generative model with density p_{θ,φ} given as:

$p_{\theta,\phi}(x) \propto p_\theta(x)\, \hat{w}_\phi(x) \quad (10)$

where the partition function is expressed as $Z_{\theta,\phi} = \int p_\theta(x)\, \hat{w}_\phi(x)\, \mathrm{d}x = \mathbb{E}_{p_\theta}[\hat{w}_\phi(x)]$.

Density Estimation. Exact density estimation requires a handle on the density of the base model p_θ (typically intractable for models such as VAEs and GANs) and estimates of the partition function. Exactly computing the partition function is intractable. If p_θ permits fast sampling and importance weights are estimated via LFIW (requiring only a forward pass through the classifier network), we can obtain unbiased estimates via a Monte Carlo average, i.e., $Z_{\theta,\phi} \approx \frac{1}{T} \sum_{i=1}^{T} \hat{w}_\phi(x_i)$ where $x_i \sim p_\theta$. To reduce the variance, a potentially large number of samples is required. Since samples are obtained independently, the terms in the Monte Carlo average can be evaluated in parallel.

Sampling-Importance-Resampling. While exact sampling from p_{θ,φ} is intractable, we can instead sample from a particle-based approximation to p_{θ,φ} via sampling-importance-resampling (SIR) [25, 26]. We define the SIR approximation to p_{θ,φ} via the following density:

$p^{\mathrm{SIR}}_{\theta,\phi}(x; T) := \mathbb{E}_{x_2, x_3, \ldots, x_T \sim p_\theta}\left[ \frac{\hat{w}_\phi(x)}{\hat{w}_\phi(x) + \sum_{i=2}^{T} \hat{w}_\phi(x_i)}\; p_\theta(x) \right] \quad (11)$

where T > 0 denotes the number of independent samples (or "particles"). For any finite T, sampling from $p^{\mathrm{SIR}}_{\theta,\phi}$ is tractable, as summarized in Algorithm 1.

Algorithm 1: SIR for the importance resampled generative model p_{θ,φ}.
Input: generative model p_θ, importance weight estimator ŵ_φ, budget T.
1: Sample x_1, x_2, ..., x_T independently from p_θ
2: Estimate importance weights ŵ(x_1), ŵ(x_2), ..., ŵ(x_T)
3: Compute Ẑ ← Σ_{t=1}^{T} ŵ(x_t)
4: Sample j ∼ Categorical(ŵ(x_1)/Ẑ, ŵ(x_2)/Ẑ, ..., ŵ(x_T)/Ẑ)
5: return x_j

Moreover, any expectation w.r.t. the SIR approximation to the induced distribution can be evaluated in closed form using the self-normalized LFIW estimator (Eq. 7). In the limit of T → ∞, we recover the induced distribution p_{θ,φ}:

$\lim_{T \to \infty} p^{\mathrm{SIR}}_{\theta,\phi}(x; T) = p_{\theta,\phi}(x) \quad \forall x. \quad (12)$
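A minimal sketch of Algorithm 1 follows; `sample_model` and `weight_fn` are stand-ins for the base model p_θ and the LFIW estimator ŵ_φ, and the example weight function is hypothetical.

```python
import numpy as np

def sir_sample(sample_model, weight_fn, T=100, rng=np.random.default_rng(0)):
    """Sampling-importance-resampling as in Algorithm 1."""
    xs = sample_model(T)          # step 1: T independent model samples
    w = weight_fn(xs)             # step 2: estimated importance weights
    Z = w.sum()                   # step 3: normalizer over the batch
    j = rng.choice(T, p=w / Z)    # step 4: resample index ~ Categorical
    return xs[j]                  # step 5: return the selected particle

# Example with a unit-Gaussian base model and weights favoring positive samples:
draw = sir_sample(lambda T: np.random.default_rng(1).normal(size=T),
                  lambda x: np.exp(x))   # hypothetical weight function
print(draw)
```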
Next, we analyze conditions under which the resampled density p_{θ,φ} provably improves the model fit to p_data. In order to do so, we further assume that p_data is absolutely continuous w.r.t. p_θ and p_{θ,φ}. We define the change in KL via the importance resampled density as:

$\Delta(p_{\mathrm{data}}, p_\theta, p_{\theta,\phi}) := D_{\mathrm{KL}}(p_{\mathrm{data}}, p_{\theta,\phi}) - D_{\mathrm{KL}}(p_{\mathrm{data}}, p_\theta). \quad (13)$

Substituting Eq. 10 into Eq. 13, we can simplify the above quantity as:

$\Delta(p_{\mathrm{data}}, p_\theta, p_{\theta,\phi}) = \mathbb{E}_{x \sim p_{\mathrm{data}}}[-\log(p_\theta(x)\, \hat{w}_\phi(x)) + \log Z_{\theta,\phi} + \log p_\theta(x)] \quad (14)$
$= \mathbb{E}_{x \sim p_{\mathrm{data}}}[\log \hat{w}_\phi(x)] - \log \mathbb{E}_{x \sim p_\theta}[\hat{w}_\phi(x)]. \quad (15)$

The above expression provides a necessary and sufficient condition for any positive real-valued function (such as the LFIW classifier in Section 3) to improve the KL divergence fit to the underlying data distribution. In practice, an unbiased estimate of the LHS can be obtained via Monte Carlo averaging of log-importance weights based on D_train. The empirical estimate for the RHS is however biased.³ To remedy this shortcoming, we consider the following necessary but insufficient condition.

³If Ẑ is an unbiased estimator for Z, then log Ẑ is a biased estimator for log Z via Jensen's inequality.

Proposition 1. If Δ(p_data, p_θ, p_{θ,φ}) ≥ 0, then the following conditions hold:

$\mathbb{E}_{x \sim p_{\mathrm{data}}}[\hat{w}_\phi(x)] \geq \mathbb{E}_{x \sim p_\theta}[\hat{w}_\phi(x)], \quad (16)$
$\mathbb{E}_{x \sim p_{\mathrm{data}}}[\log \hat{w}_\phi(x)] \geq \mathbb{E}_{x \sim p_\theta}[\log \hat{w}_\phi(x)]. \quad (17)$

The conditions in Eq. 16 and Eq. 17 follow directly via Jensen's inequality applied to the LHS and RHS of Eq. 15 respectively. Here, we note that estimates for the expectations in Eqs. 16-17 based on Monte Carlo averaging of (log-) importance weights are unbiased.

5 Application Use Cases

In all our experiments, the binary classifier for estimating the importance weights was a calibrated deep neural network trained to minimize the cross-entropy loss. The self-normalized LFIW in Eq. (7) worked best. Additional analysis on the estimators and experiment details are in Appendices B and C.

5.1 Goodness-of-fit testing

In the first set of experiments, we highlight the benefits of importance weighting for a debiased evaluation of three popularly used sample quality metrics, viz. Inception Score (IS) [27], Frechet Inception Distance (FID) [28], and Kernel Inception Distance (KID) [29]. All these scores can be formally expressed as empirical expectations with respect to the model. For all these metrics, we can simulate the population-level unbiased case as a "reference score" wherein we artificially set both the real and generated sets of samples used for evaluation as finite, disjoint sets derived from p_data. We evaluate the three metrics for two state-of-the-art models trained on the CIFAR-10 dataset, viz. an autoregressive model PixelCNN++ [10] learned via maximum likelihood estimation and a latent variable model SNGAN [11] learned via adversarial training. For evaluating each metric, we draw 10,000 samples from the model. In Table 1, we report the metrics with and without the LFIW bias correction.

Table 1: Goodness-of-fit evaluation on the CIFAR-10 dataset for PixelCNN++ and SNGAN. Standard errors computed over 10 runs. Higher IS is better; lower FID and KID scores are better.

| Model | Evaluation | IS (↑) | FID (↓) | KID (↓) |
|---|---|---|---|---|
| Reference | - | 11.09 ± 0.1263 | 5.20 ± 0.0533 | 0.008 ± 0.0004 |
| PixelCNN++ | Default (no debiasing) | 5.16 ± 0.0117 | 58.70 ± 0.0506 | 0.196 ± 0.0001 |
| PixelCNN++ | LFIW | 6.68 ± 0.0773 | 55.83 ± 0.9695 | 0.126 ± 0.0009 |
| SNGAN | Default (no debiasing) | 8.33 ± 0.0280 | 20.40 ± 0.0747 | 0.094 ± 0.0002 |
| SNGAN | LFIW | 8.57 ± 0.0325 | 17.29 ± 0.0698 | 0.073 ± 0.0004 |

The consistent debiased evaluation of these metrics via self-normalized LFIW suggests that the SIR approximation to the importance resampled distribution (Eq. 11) is a better fit to p_data.
5.2 Data Augmentation for Multi-Class Classification

We consider data augmentation via Data Augmentation Generative Adversarial Networks (DAGAN) [1]. While DAGAN was motivated by and evaluated for the task of meta-learning, it can also be applied for multi-class classification scenarios, which is the setting we consider here. We trained a DAGAN on the Omniglot dataset of handwritten characters [12]. The DAGAN training procedure is described in the Appendix. The dataset is particularly relevant because it contains 1600+ classes but only 20 examples from each class and hence could potentially benefit from augmented data.

Once the model has been trained, it can be used for data augmentation in many ways. In particular, we consider ablation baselines that use various combinations of the real training data D_cl and generated data D_g for training a downstream classifier. When the generated data D_g is used, we can either use the data directly with uniform weighting for all training points, or choose to importance weight (LFIW) the contributions of the individual training points to the overall loss. The results are shown in Table 2.

Table 2: Classification accuracy on the Omniglot dataset. Standard errors computed over 5 runs.

| Dataset | D_cl | D_g | D_g w/ LFIW | D_cl + D_g | D_cl + D_g w/ LFIW |
|---|---|---|---|---|---|
| Accuracy | 0.6603 ± 0.0012 | 0.4431 ± 0.0054 | 0.4481 ± 0.0056 | 0.6600 ± 0.0040 | 0.6818 ± 0.0022 |

While generated data (D_g) alone cannot be used to obtain competitive performance relative to the real data (D_cl) on this task, as expected, the bias it introduces for evaluation and subsequent optimization overshadows even the naive data augmentation (D_cl + D_g). In contrast, we can obtain significant improvements by importance weighting the generated points (D_cl + D_g w/ LFIW).

Qualitatively, we can observe the effect of importance weighting in Figure 2. Here, we show true and generated samples for 6 randomly chosen classes (a-f) in the Omniglot dataset. The generated samples are ranked in decreasing order of the importance weights.

Figure 2 (caption; panels (a)-(f)): Qualitative evaluation of importance weighting for data augmentation. (a-f) Top row shows held-out data samples from a specific class in Omniglot. Bottom row shows generated samples from the same class, ranked in decreasing order of importance weights.

There is no way to formally test the validity of such rankings, and this criterion can also prefer points which have high density under p_data but are unlikely under p_θ since we are looking at ratios. Visual inspection suggests that the classifier is able to appropriately downweight poorer samples, as shown in Figure 2 (a, b, c, d bottom right). There are also failure modes, such as the lowest-ranked generated images in Figure 2 (e, f bottom right), where the classifier weights reasonable generated samples poorly relative to others. This could be due to particular artifacts, such as a tiny disconnected blurry speck in Figure 2 (e bottom right), which could be more revealing to a classifier distinguishing real and generated data.
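A minimal sketch of the importance-weighted augmentation loss used here: real examples contribute with unit weight, while generated examples are reweighted by their LFIW estimates `w_gen` (assumed to come from a classifier as in Section 3). The equal real/generated mixing and the function names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def augmented_loss(model, x_real, y_real, x_gen, y_gen, w_gen):
    """Cross-entropy on real data plus LFIW-weighted cross-entropy on
    generated data, with self-normalized weights as in Eq. (7)."""
    loss_real = F.cross_entropy(model(x_real), y_real)
    per_ex = F.cross_entropy(model(x_gen), y_gen, reduction="none")
    loss_gen = (w_gen * per_ex).sum() / w_gen.sum()
    return 0.5 * (loss_real + loss_gen)   # assumed equal mixture, cf. Eq. (2)
```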
5.3 Model-based Off-policy Policy Evaluation

So far, we have seen use cases where the generative model was trained on data from the same distribution we wish to use for Monte Carlo evaluation. We can extend our debiasing framework to more involved settings where the generative model is a building block for specifying the full data generation process, e.g., trajectory data generated via a dynamics model along with an agent policy. In particular, we consider the setting of off-policy policy evaluation (OPE), where the goal is to evaluate policies using experiences collected from a different policy. Formally, let (S, A, r, P, η, T) denote an (undiscounted) Markov decision process with state space S, action space A, reward function r, transition P, initial state distribution η and horizon T. Assume π_e : S × A → [0, 1] is a known policy that we wish to evaluate. The probability of generating a certain trajectory τ = {s_0, a_0, s_1, a_1, ..., s_T, a_T} of length T with policy π_e and transition P is given as:

$p^\star(\tau) = \eta(s_0) \prod_{t=0}^{T-1} \pi_e(a_t | s_t)\, P(s_{t+1} | s_t, a_t). \quad (18)$

The return on a trajectory R(τ) is the sum of the rewards across the state-action pairs in τ: $R(\tau) = \sum_{t=1}^{T} r(s_t, a_t)$, where we assume a known reward function r. We are interested in the value of a policy, defined as $v(\pi_e) = \mathbb{E}_{\tau \sim p^\star(\tau)}[R(\tau)]$. Evaluating π_e requires the (unknown) transition dynamics P. The dynamics model is a conditional generative model of the next state s_{t+1} conditioned on the previous state-action pair (s_t, a_t). If we have access to historical logged data D_τ of trajectories τ = {s_0, a_0, s_1, a_1, ...} from some behavioral policy π_b : S × A → [0, 1], then we can use this off-policy data to train a dynamics model P_θ(s_{t+1}|s_t, a_t). The policy π_e can then be evaluated under this learned dynamics model as $\tilde{v}(\pi_e) = \mathbb{E}_{\tau \sim \tilde{p}(\tau)}[R(\tau)]$, where p̃ uses P_θ instead of the true dynamics in Eq. (18). However, the trajectories sampled with P_θ could significantly deviate from samples from P due to compounding errors [30]. In order to correct for this bias, we can use likelihood-free importance weighting on entire trajectories of data. The binary classifier c(s_t, a_t, s_{t+1}) for estimating the importance weights in this case distinguishes between triples of true and generated transitions. For any true triple (s_t, a_t, s_{t+1}) extracted from the off-policy data, the corresponding generated triple (s_t, a_t, ŝ_{t+1}) only differs in the final transition state, i.e., ŝ_{t+1} ∼ P_θ(ŝ_{t+1}|s_t, a_t). Such a classifier allows us to obtain the importance weights ŵ(s_t, a_t, ŝ_{t+1}) for every predicted state transition (s_t, a_t, ŝ_{t+1}).
The importance weights for the trajectory τ can be derived from the importance weights of these individual transitions as:

$\frac{p^\star(\tau)}{\tilde{p}(\tau)} = \frac{\prod_{t=0}^{T-1} P(s_{t+1}|s_t, a_t)}{\prod_{t=0}^{T-1} P_\theta(s_{t+1}|s_t, a_t)} = \prod_{t=0}^{T-1} \frac{P(s_{t+1}|s_t, a_t)}{P_\theta(s_{t+1}|s_t, a_t)} \approx \prod_{t=0}^{T-1} \hat{w}(s_t, a_t, \hat{s}_{t+1}). \quad (19)$

Our final LFIW estimator is given as:

$\hat{v}(\pi_e) = \mathbb{E}_{\tau \sim \tilde{p}(\tau)}\left[ \prod_{t=0}^{T-1} \hat{w}(s_t, a_t, \hat{s}_{t+1}) \cdot R(\tau) \right]. \quad (20)$

We consider three continuous control tasks in the MuJoCo simulator [15] from OpenAI gym [31] (in increasing number of state dimensions): Swimmer, HalfCheetah and HumanoidStandup. High-dimensional state spaces make it challenging to learn a reliable dynamics model in these environments. We train behavioral and evaluation policies using Proximal Policy Optimization [32] with different hyperparameters for the two policies. The dataset of trajectories collected from the behavior policy is used to train an ensemble neural network dynamics model. We then use the trained dynamics model to evaluate ṽ(π_e) and its IW version v̂(π_e), and compare them with the ground-truth returns v(π_e). Each estimation is averaged over a set of 100 trajectories with horizon T = 100. Specifically, for v̂(π_e), we also average the estimation over 10 classifier instances trained with different random seeds on different trajectories. We further consider performing IW over only the first H steps, and use uniform weights for the remainder, which we denote as v̂_H(π_e). This allows us to interpolate between ṽ(π_e) ≡ v̂_0(π_e) and v̂(π_e) ≡ v̂_T(π_e). Finally, as in the other experiments, we used the self-normalized variant (Eq. (7)) of the importance weighted estimator in Eq. (20). We compare the policy evaluations under different environments in Table 3.

Table 3: Off-policy policy evaluation on MuJoCo tasks. Standard error is over 10 Monte Carlo estimates, where each estimate contains 100 randomly sampled trajectories.

| Environment | v(π_e) (ground truth) | ṽ(π_e) | v̂(π_e) (w/ LFIW) | v̂_80(π_e) (w/ LFIW) |
|---|---|---|---|---|
| Swimmer | 36.7 ± 0.1 | 100.4 ± 3.2 | 25.7 ± 3.1 | 47.6 ± 4.8 |
| HalfCheetah | 241.7 ± 3.56 | 204.0 ± 0.8 | 217.8 ± 4.0 | 219.1 ± 1.6 |
| HumanoidStandup | 14170 ± 53 | 8417 ± 28 | 9372 ± 375 | 9221 ± 381 |

These results show that the rewards estimated with the trained dynamics model differ from the ground truth by a large margin. By importance weighting the trajectories, we obtain much more accurate policy evaluations. As expected, we also see that while LFIW leads to higher returns on average, the imbalance in trajectory importance weights, due to the multiplicative weights of the state-action pairs, can lead to higher variance in the importance weighted returns. In Figure 3, we demonstrate that policy evaluation becomes more accurate as more timesteps are used for LFIW evaluations, until around 80-100 timesteps, which empirically validates the benefits of importance weighting using a classifier.

Figure 3 (caption): Estimation error δ(v) = v(π_e) − v̂_H(π_e) for different values of H (minimum 0, maximum 100) on Swimmer, HalfCheetah, and HumanoidStandup. Shaded area denotes standard error over different random seeds.

Given that our estimates have a large variance, it would be worthwhile to compose our approach with other variance reduction techniques such as (weighted) doubly robust estimation in future work [33], as well as incorporate these estimates within a framework such as MAGIC to further blend with model-free OPE [14]. In Appendix C.5.1, we also consider a stepwise LFIW estimator for MBOPE which applies importance weighting at the level of every decision as opposed to entire trajectories.
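A minimal sketch of the trajectory-level LFIW estimator of Eqs. (19)-(20), with self-normalization across trajectories as used in the experiments; the rollout format and per-transition weight function are hypothetical stand-ins.

```python
import numpy as np

def lfiw_policy_value(rollouts, transition_weight):
    """rollouts: list of (states, actions, next_states, rewards) arrays from
    the learned dynamics model; transition_weight(s, a, s_next) estimates
    P(s'|s,a) / P_theta(s'|s,a) via a classifier on transition triples."""
    traj_w, traj_R = [], []
    for s, a, s_next, r in rollouts:
        # Eq. (19): product of per-transition importance weights.
        w = np.prod([transition_weight(s[t], a[t], s_next[t])
                     for t in range(len(r))])
        traj_w.append(w)
        traj_R.append(np.sum(r))          # return R(tau)
    traj_w = np.asarray(traj_w)
    # Eq. (20), self-normalized across the batch of trajectories (cf. Eq. (7)).
    return np.sum(traj_w / traj_w.sum() * np.asarray(traj_R))
```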
Overall. Across all our experiments, we observe that importance weighting the generated samples leads to uniformly better results, whether in terms of evaluating the quality of samples or their utility in downstream tasks. Since the technique is a black-box wrapper around any generative model, we expect this to benefit a diverse set of tasks in follow-up works. However, there is also some caution to be exercised with these techniques, as evident from the results of Table 1. Note that in this table, the confidence intervals (computed using the reported standard errors) around the model scores after importance weighting still do not contain the reference scores obtained from the true model. This would not have been the case if our debiased estimator were completely unbiased, and this observation reiterates our earlier claim that LFIW is reducing bias, as opposed to completely eliminating it. Indeed, when such a mismatch is observed, it is a good diagnostic to either learn more powerful classifiers to better approximate the Bayes optimum, or find additional data from p_data in case the generative model fails the full support assumption.

6 Related Work & Discussion

Density ratios enjoy widespread use across machine learning, e.g., for handling covariate shifts, class imbalance etc. [9, 34]. In generative modeling, estimating these ratios via binary classifiers is frequently used for defining learning objectives and two-sample tests [19, 35–41]. In particular, such classifiers have been used to define learning frameworks such as generative adversarial networks [8, 42], likelihood-free Approximate Bayesian Computation (ABC) [43] and earlier work in unsupervised-as-supervised learning [44] and noise contrastive estimation [43] among others. Recently, [45] used importance weighting to reweight datapoints based on differences in training and test data distributions, i.e., dataset bias. The key difference is that these works are explicitly interested in learning the parameters of a generative model. In contrast, we use the binary classifier for estimating importance weights to correct for the model bias of any fixed generative model. Recent concurrent works [46–48] use MCMC and rejection sampling to explicitly transform or reject the generated samples. These methods require extra computation beyond training a classifier, in rejecting the samples or running Markov chains to convergence, unlike the proposed importance weighting strategy. For many model-based Monte Carlo evaluation use cases (e.g., data augmentation, MBOPE), this extra computation is unnecessary. If samples or density estimates are explicitly needed from the induced resampled distribution, we presented a particle-based approximation to the induced density where the number of particles is a tunable knob allowing for trading statistical accuracy with computational efficiency. Finally, we note that resampling-based techniques have been extensively studied in the context of improving variational approximations for latent variable generative models [49–52]." + }, + { + "url": "http://arxiv.org/abs/1905.12892v2", + "title": "AlignFlow: Cycle Consistent Learning from Multiple Domains via Normalizing Flows", + "abstract": "Given datasets from multiple domains, a key challenge is to efficiently\nexploit these data sources for modeling a target domain. Variants of this\nproblem have been studied in many contexts, such as cross-domain translation\nand domain adaptation. We propose AlignFlow, a generative modeling framework\nthat models each domain via a normalizing flow.
The use of normalizing flows\nallows for a) flexibility in specifying learning objectives via adversarial\ntraining, maximum likelihood estimation, or a hybrid of the two methods; and b)\nlearning and exact inference of a shared representation in the latent space of\nthe generative model. We derive a uniform set of conditions under which\nAlignFlow is marginally-consistent for the different learning objectives.\nFurthermore, we show that AlignFlow guarantees exact cycle consistency in\nmapping datapoints from a source domain to target and back to the source\ndomain. Empirically, AlignFlow outperforms relevant baselines on image-to-image\ntranslation and unsupervised domain adaptation and can be used to\nsimultaneously interpolate across the various domains using the learned\nrepresentation.", + "authors": "Aditya Grover, Christopher Chute, Rui Shu, Zhangjie Cao, Stefano Ermon", + "published": "2019-05-30", + "updated": "2019-12-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.NE", + "stat.ML" + ], + "main_content": "Introduction

In recent years, there has been an increase in the availability of both labeled and unlabeled datasets from multiple sources. For example, many variants of face datasets scraped from sources such as Wikipedia and IMDB are publicly available. Given data from two or more domains, we expect sample-efficient learning algorithms to be able to learn and align the shared structure across these domains for accurate downstream tasks. This perspective has a broad range of applications across machine learning, including relational learning (Kim et al. 2017), domain adaptation (Taigman, Polyak, and Wolf 2016; Hoffman et al. 2017; Bousmalis et al. 2017), image and video translation for computer vision (Isola et al. 2017; Wang et al. 2018), and machine translation for low-resource languages (Gu et al. 2018). Many variants of the domain alignment problem have been studied in prior work. For instance, unpaired cross-domain translation refers to the task of learning a mapping from one domain to another given datasets from the two domains (Zhu et al. 2017b). This task can be used as a subproblem in the domain adaptation setting, where the goal is to learn a classifier for the unlabeled domain given labeled data from a related source domain (Saenko et al. 2010). Many of these problems are underconstrained due to the limitations on the labelled supervision available for the target domain. An amalgam of inductive biases needs to be explicitly enforced to learn meaningful solutions, e.g., cycle-consistency (Zhu et al. 2017b), entropic regularization (Courty et al. 2017) etc. These inductive biases can be specified via additional loss terms or by specifying constraints on the model family. We present AlignFlow, a latent variable generative framework that seeks to discover the shared structure across multiple data domains using normalizing flows (Rezende and Mohamed 2015; Dinh, Krueger, and Bengio 2014; Dinh, Sohl-Dickstein, and Bengio 2017). Latent variable generative models are highly effective for inferring hidden structure within observed data from a single domain. In AlignFlow, we model the data from each domain via an invertible generative model with a single latent space shared across all the domains.
If we let the two domains be A and B with a shared latent space, say Z, then the latent variable generative model for A may additionally share some or all parameters with the model of domain B. Akin to a single invertible model, the collection of invertible models in AlignFlow provides great flexibility in specifying learning objectives and can be trained via maximum likelihood estimation, adversarial training, or a hybrid variant accounting for both objectives. By virtue of an invertible design, AlignFlow naturally extends as a cross-domain translation model. To translate data across two domains, say A to B, we can first map a data point from A to Z via inversion, followed by a second mapping from Z to B. Appealingly, we show that this composition of invertible mappings is exactly cycle-consistent, i.e., translating a datapoint from A to B using the forward mapping and backwards using the reverse mapping gives back the original datapoint, and vice versa from B to A. Cycle-consistency was first introduced in CycleGAN (Zhu et al. 2017a) and has been shown to be an excellent inductive bias for underconstrained problems, such as unpaired domain alignment. While models such as CycleGAN only provide approximate cycle-consistency by incorporating additional loss terms, AlignFlow can omit these terms and guarantee exact cycle-consistency by design.

We analyze the AlignFlow framework extensively. Theoretically, we derive conditions under which the AlignFlow objective is consistent in the sense of recovering the true marginal distributions. For objectives that use adversarial loss terms, we derive optimal critics in this setting. Empirically, we consider two sets of tasks: image-to-image translation and unsupervised domain adaptation. On both these tasks, we observe consistent improvements over other approximately cycle-consistent generative frameworks on three benchmark pairs of high-dimensional image datasets.

2 Preliminaries

In this section, we discuss the necessary background and notation on generative adversarial networks and normalizing flows. We overload uppercase notation X, Y, Z to denote random variables and their sample spaces, and use lowercase notation x, y, z to denote values assumed by these variables.

2.1 Generative Adversarial Networks

Generative adversarial networks (GAN) are a class of latent variable generative models that specify the generator as a deterministic mapping h : Z → X between a set of latent variables Z and a set of observed variables X (Goodfellow et al. 2014). In order to sample from a GAN, we need a prior density over Z that permits efficient sampling. The generator of a GAN can also be conditional, where the conditioning is on another set of observed variables and optionally the latent variables Z as before (Mirza and Osindero 2014). A GAN is trained via adversarial training, wherein the generator h plays a minimax game with an auxiliary critic C. The goal of the critic C : X → R is to distinguish real samples in the observed dataset from samples generated via h. The generator, on the other hand, tries to generate samples that can maximally confuse the critic. Many learning objectives have been proposed for adversarial training, such as those based on f-divergences (Nowozin, Cseke, and Tomioka 2016) and Wasserstein Distance (Arjovsky, Chintala, and Bottou 2017).
For the standard cross-entropy GAN, the critic outputs a probability of a datapoint being real and optimizes the following objective w.r.t. a data distribution $p^*_X : X \to \mathbb{R}_{\geq 0}$:

$\mathcal{L}_{\mathrm{GAN}}(C, h) = \mathbb{E}_{x \sim p^*_X}[\log C(x)] + \mathbb{E}_{z \sim p_Z}[\log(1 - C(h(z)))] \quad (1)$

for a suitable choice of prior density p_Z. The generator and the critic are both parameterized by deep neural networks and learned via alternating gradient updates. Because adversarial training only requires samples from the generative model, it can be used to train generative models with intractable or ill-defined likelihoods (Mohamed and Lakshminarayanan 2016). Hence, adversarial training is likelihood-free, and in practice it gives excellent performance for tasks that require data generation. However, these models are hard to train due to the alternating minimax optimization and suffer from issues such as mode collapse (Goodfellow 2016).

2.2 Normalizing Flows

Normalizing flows are a class of latent variable generative models that specify the generator as an invertible mapping h : Z → X between a set of latent variables Z and a set of observed variables X. Let p_X and p_Z denote the marginal densities defined by the model over X and Z respectively. Using the change-of-variables formula, these marginal densities can be related as:

$p_X(x) = p_Z(z) \left| \det \frac{\partial h^{-1}}{\partial X} \right|_{X=x} \quad (2)$

where z = h^{-1}(x) due to the invertibility constraints. Here, the second term on the RHS corresponds to the absolute value of the determinant of the Jacobian of the inverse transformation and signifies the change in volume when translating across the two sample spaces. For evaluating likelihoods via the change-of-variables formula, we require efficient and tractable evaluation of the prior density, the inverse transformation h^{-1}, and the determinant of the Jacobian of h^{-1}. To draw a sample from this model, we perform ancestral sampling, i.e., we first sample a latent vector z ∼ p_Z(z) and obtain the sampled vector as given by x = h(z). This requires the ability to efficiently: (1) sample from the prior density and (2) evaluate the forward transformation h. Many transformations parameterized by deep neural networks that satisfy one or more of these criteria have been proposed in the recent literature on normalizing flows, e.g., NICE (Dinh, Krueger, and Bengio 2014) and Autoregressive Flows (Kingma et al. 2016; Papamakarios, Murray, and Pavlakou 2017). By suitable design of transformations, both likelihood evaluation and sampling can be performed efficiently, as in Real-NVP (Dinh, Sohl-Dickstein, and Bengio 2017). Consequently, a flow model can be trained efficiently to maximize the likelihood of the observed dataset (a.k.a. maximum likelihood estimation or MLE), as well as via likelihood-free adversarial training (Grover, Dhar, and Ermon 2018).
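A minimal sketch of MLE training with the change-of-variables likelihood of Eq. (2), using a single affine transformation z = h^{-1}(x) = (x - b) exp(-s) as a stand-in; a full model would stack many such invertible layers (e.g., Real-NVP couplings). The toy data and parameterization are assumptions for illustration.

```python
import torch

dim = 2
s = torch.zeros(dim, requires_grad=True)   # log-scale parameters
b = torch.zeros(dim, requires_grad=True)   # shift parameters
prior = torch.distributions.Normal(0.0, 1.0)
opt = torch.optim.Adam([s, b], lr=1e-2)

def log_px(x):
    z = (x - b) * torch.exp(-s)                    # inverse map h^{-1}
    log_det = -s.sum()                             # log |det d h^{-1} / dx|
    return prior.log_prob(z).sum(dim=1) + log_det  # Eq. (2) in log space

x_data = torch.randn(512, dim) * 2.0 + 1.0         # toy dataset
for step in range(500):                            # MLE training
    loss = -log_px(x_data).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```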
3 The AlignFlow Framework

In this section, we present the AlignFlow framework for learning generative models in the presence of unpaired data from multiple domains. For ease of presentation, we consider the case of two domains. Unless mentioned otherwise, our results naturally extend to more than two domains as well.

3.1 Problem Setup

The learning setting we consider is as follows. We are given unpaired datasets D_A and D_B from two domains A and B respectively. We assume that the datapoints are sampled i.i.d. from some true but unknown marginal densities, denoted as p*_A and p*_B respectively. We are interested in learning models for the following distributions: (a) the marginal likelihoods p_A and p_B that approximate p*_A and p*_B, and (b) conditional distributions p_{A|B} and p_{B|A}. The unconditional models can be used for density estimation and sampling from A and B, whereas the conditional models can be used for translating (i.e., conditional sampling) from B → A and A → B.

Before presenting the AlignFlow framework, we note two observations. For task (a), we need datasets from the domains A and B respectively for learning. For task (b), we note that the problem is underconstrained since we are only given data from the marginal distributions, and hence it is unclear how to learn the conditional distribution that relates the datapoints from the two domains. Hence, we need additional inductive biases on our learning algorithm that can learn useful conditional distributions. In practice, many such forms of inductive biases have been designed and shown to be useful across relevant tasks such as cross-domain translation and domain adaptation (Zhu et al. 2017b; Liu, Breuel, and Kautz 2017).

3.2 Representation

We will use a graphical model to represent the relationships between the domains. Consider a Bayesian network A ← Z → B with two sets of observed random variables (domains) A ⊆ R^n and B ⊆ R^n and a parent set of latent random variables Z ⊆ Z. The latent variables Z indicate a shared feature space between the observed variables A and B, which will be exploited later for efficient learning and inference. While Z is unobserved, we assume a prior density p_Z over these variables, such as an isotropic Gaussian. Finally, to compactly specify the joint distribution over all sets of variables, we constrain the relationship between A and Z, and B and Z, to be invertible. That is, we specify mappings G_{Z→A} and G_{Z→B} such that the respective inverses G_{A→Z} = G^{-1}_{Z→A} and G_{B→Z} = G^{-1}_{Z→B} exist. Notice that such a representation naturally provides a mechanism to translate from one domain to another as the composition of two invertible mappings:

$G_{A\to B} = G_{Z\to B} \circ G_{A\to Z} \quad (3)$
$G_{B\to A} = G_{Z\to A} \circ G_{B\to Z}. \quad (4)$

Since the composition of invertible mappings is invertible, both G_{A→B} and G_{B→A} are invertible. In fact, it is straightforward to observe that G_{A→B} and G_{B→A} are inverses of each other:

$G^{-1}_{A\to B} = (G_{Z\to B} \circ G_{A\to Z})^{-1} = G^{-1}_{A\to Z} \circ G^{-1}_{Z\to B} = G_{Z\to A} \circ G_{B\to Z} = G_{B\to A}. \quad (5)$

3.3 Learning Algorithms & Objectives

As discussed in the preliminaries, each of the individual flows G_{Z→A} and G_{Z→B} expresses a model with density p_A and p_B respectively, and can be trained independently via maximum likelihood estimation, adversarial learning, or a hybrid objective. However, our goal is to perform sample-efficient learning by exploiting data from other domains, as well as learn a conditional mapping across the two domains. For both these goals, we require learning algorithms which use data from both domains for parameter estimation. Unless mentioned otherwise, all our results that hold for a particular domain A will have a natural counterpart for the domain B.
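A minimal sketch of the composed cross-domain mappings of Eqs. (3)-(5), using one toy affine bijection per domain as a stand-in for a full normalizing flow; since G_{A→B} and G_{B→A} are exact inverses, the round trip recovers the input up to numerical precision (this is also the exact cycle consistency established later in Proposition 1).

```python
import torch
import torch.nn as nn

class AffineFlow(nn.Module):
    """Toy invertible map: x = z * exp(s) + b, with z = (x - b) * exp(-s)."""
    def __init__(self, dim):
        super().__init__()
        self.s = nn.Parameter(torch.randn(dim))
        self.b = nn.Parameter(torch.randn(dim))
    def forward(self, z):
        return z * torch.exp(self.s) + self.b
    def inverse(self, x):
        return (x - self.b) * torch.exp(-self.s)

G_za, G_zb = AffineFlow(4), AffineFlow(4)      # G_{Z->A}, G_{Z->B}
G_ab = lambda a: G_zb(G_za.inverse(a))         # Eq. (3): G_{Z->B} o G_{A->Z}
G_ba = lambda b: G_za(G_zb.inverse(b))         # Eq. (4): G_{Z->A} o G_{B->Z}

a = torch.randn(8, 4)
# Eq. (5): G_{A->B} and G_{B->A} are exact inverses, so the cycle is exact.
print(torch.allclose(G_ba(G_ab(a)), a, atol=1e-5))
```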
Adversarial Training. Instead of adversarial training of G_{Z→A} and G_{Z→B} independently, we can directly perform adversarial training of the mapping G_{B→A}. That is, we first generate data from G_{B→A} using the prior density given as p*_B. We also introduce a critic C_A which distinguishes real samples a ∼ p*_A from the generated samples G_{B→A}(b) for b ∼ p*_B. For example, the cross-entropy GAN loss in this case is given as:

$\mathcal{L}_{\mathrm{GAN}}(C_A, G_{B\to A}) = \mathbb{E}_{a \sim p^*_A}[\log C_A(a)] + \mathbb{E}_{b \sim p^*_B}[\log(1 - C_A(G_{B\to A}(b)))]. \quad (6)$

The expectations above are approximated empirically via the datasets D_A and D_B respectively.

Maximum Likelihood Estimation. Unlike adversarial training, flow models trained with maximum likelihood estimation (MLE) explicitly require a prior p_Z with a tractable density (e.g., isotropic Gaussian) to evaluate model likelihoods using the change-of-variables formula in Eq. 2. Due to this tractability requirement, we cannot substitute p_Z with the unknown p*_B for MLE. Instead, we can share parameters between the two mappings. The extent of parameter sharing depends on the similarity across the two domains; for highly similar domains, entire architectures could potentially be shared, in which case G_{Z→A} = G_{Z→B}.

Hybrid Training. Both MLE and adversarial training objectives can be combined into a single training objective. The most general AlignFlow objective is given as:

$\mathcal{L}_{\mathrm{AlignFlow}}(G_{B\to A}, C_A, C_B; \lambda_A, \lambda_B) = \mathcal{L}_{\mathrm{GAN}}(C_A, G_{B\to A}) + \mathcal{L}_{\mathrm{GAN}}(C_B, G_{A\to B}) - \lambda_A \mathcal{L}_{\mathrm{MLE}}(G_{Z\to A}) - \lambda_B \mathcal{L}_{\mathrm{MLE}}(G_{Z\to B}) \quad (7)$

where λ_A ≥ 0 and λ_B ≥ 0 are hyperparameters that control the strength of the MLE terms for domains A and B respectively. The AlignFlow objective is minimized w.r.t. the parameters of the generator G_{A→B} and maximized w.r.t. the parameters of the critics C_A and C_B. Notice that L_AlignFlow is expressed as a function of the critics C_A, C_B and only G_{B→A}, since the latter also encompasses the other parametric functions appearing in the objective (G_{A→B}, G_{Z→A}, G_{Z→B}) via the invertibility constraints in Eqs. 3-5. When λ_A = λ_B = 0, we perform pure adversarial training and the prior over Z plays no role in learning. On the other hand, when λ_A = λ_B → ∞, we can perform pure MLE training to learn the invertible generator. Here, the critics C_A, C_B play no role since the adversarial training terms are ignored.

3.4 Inference

AlignFlow can be used for both conditional and unconditional sampling at test time. For conditional sampling, as in the case of domain translation, we are given a datapoint b ∈ B and we can draw the corresponding cross-domain translation in domain A via the mapping G_{B→A}. For unconditional sampling, we require λ_A ≠ 0 since doing so will activate the use of the prior p_Z via the MLE terms in the learning objective. Thereafter, we can obtain samples by first drawing z ∼ p_Z and then applying the mapping G_{Z→A} to z. Furthermore, the same z can be mapped to domain B via G_{Z→B}. Hence, we can sample paired data (G_{Z→A}(z), G_{Z→B}(z)) given z ∼ p_Z.
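A schematic sketch of the hybrid objective in Eq. (7); all components (the invertible pair `G_ab`/`G_ba`, critics `C_a`/`C_b`, and the change-of-variables log-likelihoods `log_pa`/`log_pb`) are assumed stubs, e.g., built from the flow and critic sketches earlier in the section.

```python
import torch

def alignflow_loss(a, b, G_ab, G_ba, C_a, C_b, log_pa, log_pb,
                   lam_a=1.0, lam_b=1.0):
    """Hybrid AlignFlow objective, Eq. (7): two cross-entropy GAN terms plus
    lambda-weighted MLE terms (log-likelihoods via Eq. (2))."""
    eps = 1e-8
    gan_a = (torch.log(C_a(a) + eps).mean()
             + torch.log(1 - C_a(G_ba(b)) + eps).mean())   # L_GAN(C_A, G_{B->A})
    gan_b = (torch.log(C_b(b) + eps).mean()
             + torch.log(1 - C_b(G_ab(a)) + eps).mean())   # L_GAN(C_B, G_{A->B})
    mle = lam_a * log_pa(a).mean() + lam_b * log_pb(b).mean()
    # Minimized w.r.t. the generator parameters, maximized w.r.t. C_a and C_b.
    return gan_a + gan_b - mle
```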
4 Theoretical Analysis

The AlignFlow objective consists of three parametric models: one generator $G_{B\to A} \in \mathcal{G}$, and two critics $C_A \in \mathcal{C}_A$, $C_B \in \mathcal{C}_B$. Here, $\mathcal{G}$, $\mathcal{C}_A$, $\mathcal{C}_B$ denote model families specified, e.g., via deep neural network based architectures. In this section, we analyze the optimal solutions to these parameterized models within well-specified model families.

4.1 Optimal Generators

Our first result characterizes the conditions under which the optimal generators exhibit marginal-consistency for the data distributions defined over the domains A and B.

Definition 1. (Marginal-consistency) Let $p_{X,Y}$ denote the joint distribution between two domains X and Y. An invertible mapping $G_{Y\to X}: Y \to X$ is marginally-consistent w.r.t. two arbitrary distributions $(p_X, p_Y)$ iff for all $x \in X$, $y \in Y$:

$$p_X(x) = \begin{cases} p_Y(y) \left|\det \dfrac{\partial G^{-1}_{Y\to X}}{\partial X}\right|_{X=x} & \text{if } x = G_{Y\to X}(y) \\ 0 & \text{otherwise.} \end{cases} \quad (8)$$

Next, we show that AlignFlow is marginally-consistent for well-specified model families.

Lemma 1. Let $\mathcal{G}_A$ and $\mathcal{G}_B$ denote the classes of invertible mappings represented by the AlignFlow architecture for mapping $Z \to A$ and $Z \to B$. For a given choice of prior distribution $p_Z$, if there exist mappings $G^*_{Z\to A} \in \mathcal{G}_A$, $G^*_{Z\to B} \in \mathcal{G}_B$ that are marginally-consistent w.r.t. $(p^*_A, p_Z)$ and $(p^*_B, p_Z)$ respectively, then the mapping $G^*_{B\to A} = G^*_{Z\to A} \circ G^{*-1}_{Z\to B}$ is marginally-consistent w.r.t. $(p^*_A, p^*_B)$.

The result follows directly from Definition 1 and change-of-variables applied to the mapping $G^*_{B\to A}$.

Theorem 1. Assume that the model families for the critics $C_A: A \to [0, 1]$ and $C_B: B \to [0, 1]$ are the set of all measurable functions for the cross-entropy GAN objective. Then, $G^*_{B\to A}$ (as defined in Lemma 1) globally minimizes the AlignFlow objective in Eq. 7 for any $\lambda_A \geq 0$, $\lambda_B \geq 0$. Proof. See Appendix A.1.

Theorem 1 suggests that optimizing the AlignFlow objective will recover the marginal data distributions $p^*_A$ and $p^*_B$ under suitable conditions. For the other goal of learning cross-domain mappings, we note that marginally-consistent mappings w.r.t. a target data distribution (such as $p^*_A$) and a target prior density (such as $p^*_B$) need not be unique. While a cycle-consistent, invertible model family mitigates the underconstrained nature of the cross-domain translation problem, it does not provably eliminate it. We provide some non-identifiable constructions in Appendix A.3 and leave open the exploration of additional constraints that guarantee identifiability for future exploration.
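To build intuition for Definition 1, the following self-contained check (an illustration of mine, not from the paper) verifies the change-of-variables condition of Eq. (8) for a 1-D affine map between two Gaussians.

import numpy as np

# Gaussian density helper.
normal_pdf = lambda v, mu, sd: np.exp(-(v - mu) ** 2 / (2 * sd ** 2)) / (sd * np.sqrt(2 * np.pi))

# Let p_Y = N(0, 1) and G_{Y->X}(y) = 2y + 3, so that p_X = N(3, 2^2).
g_inv = lambda x: (x - 3.0) / 2.0

x = 4.7
# Eq. (8): p_X(x) = p_Y(G^{-1}(x)) * |det dG^{-1}/dx|, with |dG^{-1}/dx| = 1/2 here.
assert np.isclose(normal_pdf(x, 3.0, 2.0), normal_pdf(g_inv(x), 0.0, 1.0) * 0.5)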
4.2 Optimal Critics

Unlike standard adversarial training of an unconditional normalizing flow model (Grover, Dhar, and Ermon 2018; Danihelka et al. 2017), the AlignFlow model involves two critics. Here, we are interested in characterizing the dependence of the optimal critics for a given invertible mapping $G_{A\to B}$. Consider the AlignFlow framework where the GAN loss terms in Eq. 7 are specified via the cross-entropy objective in Eq. 1. For this model, we can relate the optimal critics using the following result.

[Figure 1: (a) CycleGAN vs. (b) AlignFlow for unpaired cross-domain translation. Unlike CycleGAN, AlignFlow specifies a single invertible mapping $G^{-1}_{B\to Z} \circ G_{A\to Z}$ that is exactly cycle-consistent, represents a shared latent space Z between the two domains, and can be trained via both adversarial training and exact maximum likelihood estimation. Double-headed arrows denote invertible mappings. $Y_A$ and $Y_B$ are random variables denoting the output of the critics used for adversarial training.]

Theorem 2. Let $p^*_A$ and $p^*_B$ denote the true data densities for domains A and B respectively. Let $C^*_A$ and $C^*_B$ denote the optimal critics for the AlignFlow objective with the cross-entropy GAN loss for any fixed choice of the invertible mapping $G_{A\to B}$. Letting $b = G_{A\to B}(a)$ for any $a \in A$, we have:

$$C^*_A(a) = \frac{C^*_B(b)\, p^*_A(a)}{p^*_A(a) + p^*_B(b)\,(1 - C^*_B(b)) \left|\det \dfrac{\partial G^{-1}_{B\to A}}{\partial A}\right|_{A=a}}. \quad (9)$$

Proof. See Appendix A.2.

In essence, the above result shows that the optimal critic for one domain, w.l.o.g. say A, can be directly obtained via the optimal critic of the other domain B for any choice of the invertible mapping $G_{A\to B}$, assuming access to the data marginals $p^*_A$ and $p^*_B$.

4.3 Exact Cycle Consistency

So far, we have only discussed objectives that are marginally-consistent with respect to the data distributions $p^*_A$ and $p^*_B$. However, many domain alignment tasks such as cross-domain translation can be cast as learning a joint distribution $p^*_{A,B}$. As discussed previously, this problem is underconstrained given unpaired datasets $D_A$ and $D_B$, and the learned marginal densities alone do not guarantee learning a mapping that is useful for downstream tasks. Cycle consistency, as proposed in CycleGAN (Zhu et al. 2017a), is a highly effective learning objective that encourages learning of meaningful cross-domain mappings such that the data translated from domain A to B via $G_{A\to B}$ is mapped back to the original datapoints in A via $G_{B\to A}$. That is, $G_{B\to A}(G_{A\to B}(a)) \approx a$ for all $a \in A$. Formally, the cycle-consistency loss for translation from A to B and back is defined as:

$$\mathcal{L}_{\text{Cycle}}(G_{B\to A}, G_{A\to B}) = \mathbb{E}_{a \sim p^*_A}\left[\|G_{B\to A}(G_{A\to B}(a)) - a\|_1\right]. \quad (10)$$

Symmetrically, we have a cycle-consistency term $\mathcal{L}_{\text{Cycle}}(G_{A\to B}, G_{B\to A})$ in the reverse direction that encourages $G_{A\to B}(G_{B\to A}(b)) \approx b$ for all $b \in B$. Next, we show that AlignFlow is exactly cycle consistent.

Proposition 1. Let $\mathcal{G}$ denote the class of invertible mappings represented by an arbitrary AlignFlow architecture. For any $G_{B\to A} \in \mathcal{G}$, we have:

$$\mathcal{L}_{\text{Cycle}}(G_{B\to A}, G_{A\to B}) = 0 \quad (11)$$
$$\mathcal{L}_{\text{Cycle}}(G_{A\to B}, G_{B\to A}) = 0 \quad (12)$$

where $G_{A\to B} = G^{-1}_{B\to A}$ by design. The proposition follows directly from the invertible design of the AlignFlow framework (Eq. 5) and Eq. 10.
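Proposition 1 is immediate in code: for any invertible generator, the round trip reproduces the input exactly (up to floating point). A toy check of mine, with a tanh/arctanh pair as the hypothetical invertible map:

import numpy as np

# Any invertible pair (G_AB, G_BA = G_AB^{-1}) is exactly cycle-consistent.
G_AB = lambda a: np.tanh(a)        # toy invertible map into (-1, 1)
G_BA = lambda b: np.arctanh(b)     # its exact inverse

rng = np.random.default_rng(1)
A_batch = rng.uniform(-2, 2, size=(5, 3))

# Monte Carlo estimate of L_Cycle from Eq. (10): zero by construction.
recon = G_BA(G_AB(A_batch))
l_cycle = np.abs(recon - A_batch).sum(axis=1).mean()
print(l_cycle)   # ~0 up to floating-point error, matching Proposition 1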
Comparison with CycleGAN. We illustrate and compare AlignFlow and CycleGAN in Figure 1. CycleGAN parameterizes two independent cross-domain mappings $G_{A\to B}$ and $G_{B\to A}$, whereas AlignFlow only specifies a single, invertible mapping. Learning in a CycleGAN is restricted to an adversarial training objective along with additional cycle-consistent loss terms. In contrast, AlignFlow is exactly consistent and can be trained via adversarial learning, MLE, or a hybrid (Eq. 7) without the need for additional loss terms to enforce cycle consistency. Finally, inference in CycleGAN is restricted to conditional sampling since it does not involve any latent variables Z with easy-to-sample prior densities. As described previously, AlignFlow permits both conditional and unconditional sampling.

Comparison with UNIT and CoGAN. Models such as CoGAN (Liu and Tuzel 2016) and its extension UNIT (Liu, Breuel, and Kautz 2017) also consider adding a shared-space constraint between two different domain decoders. These models again can only enforce approximate cycle consistency and introduce additional encoder networks. Moreover, they only approximate lower bounds to the log-likelihood, unlike AlignFlow, which permits exact MLE training.

5 Experimental Evaluation

To achieve our two goals of data-efficient modeling of individual domains and effective cross-domain mappings, we evaluate AlignFlow on two tasks: (a) unsupervised image-to-image translation, and (b) unsupervised domain adaptation. For additional experimental details, results, and analysis beyond those stated below, we refer the reader to Appendix B.

5.1 Image-To-Image Translation

We evaluate AlignFlow on three image-to-image translation datasets used by Zhu et al. (2017a): Facades, Maps, and CityScapes (Cordts et al. 2016). These datasets are chosen because they provide one-to-one aligned image pairs, so one can quantitatively evaluate unpaired image-to-image translation models via a distance metric such as mean squared error (MSE) between generated examples and the corresponding ground truth. While MSE can be substituted for perceptual losses in other scenarios, it is a suitable metric for evaluating datasets with one-to-one ground pairings. Note that the task at hand is unpaired translation and hence, the pairing information is omitted during training and only used for evaluation.

We report the MSE for translations on the test sets after cross-validation of hyperparameters in Table 1. For hybrid models, we set $\lambda_A = \lambda_B$ and report results for the best values of these hyperparameters. We observe that while learning AlignFlow via adversarial training or MLE alone is not as competitive as CycleGAN, hybrid training of AlignFlow significantly outperforms CycleGAN in almost all cases. Specifically, we observe that MLE alone typically performs worse than adversarial training, but together both these objectives seem to have a regularizing effect on each other. Qualitative interpolations on the Facades dataset are shown in Figure 2.

Table 1: Mean Squared Error (MSE) comparing CycleGAN and variants of AlignFlow (AF) on paired test sets. MSE is computed pixelwise after normalizing images to (-1, 1).

  Dataset      Model           MSE (A -> B)   MSE (B -> A)
  Facades      CycleGAN        0.7129         0.3286
               AF (ADV only)   0.6727         0.2679
               AF (Hybrid)     0.5801         0.2512
               AF (MLE only)   0.9014         0.5960
  Maps         CycleGAN        0.0245         0.0953
               AF (ADV only)   0.0385         0.1123
               AF (Hybrid)     0.0209         0.0897
               AF (MLE only)   0.0452         0.1746
  CityScapes   CycleGAN        0.1252         0.1200
               AF (ADV only)   0.2569         0.2196
               AF (Hybrid)     0.1130         0.1462
               AF (MLE only)   0.2526         0.2272

[Figure 2: Latent space interpolation on Facades. Top: left- and right-most images are sampled from $D_A$ (red boxes); interpolation is then performed in latent space and decoded using $G_{Z\to A}$. We see semantically meaningful changes across the row, e.g., in the shadow and the style of entrance to the building. Bottom: for each image in the top row, its latent representation is decoded into the target domain using $G_{Z\to B}$. Inspection of the orange regions indicates a change from 3 floors (left) to 4 floors (right).]
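The evaluation metric of Table 1 is simple to reproduce; a small sketch of mine, assuming images stored as uint8 arrays:

import numpy as np

def pixelwise_mse(pred_imgs, true_imgs):
    """MSE after normalizing uint8 images to the range (-1, 1), as in Table 1."""
    pred = pred_imgs.astype(np.float64) / 127.5 - 1.0
    true = true_imgs.astype(np.float64) / 127.5 - 1.0
    return float(np.mean((pred - true) ** 2))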
5.2 Unsupervised Domain Adaptation

In unsupervised domain adaptation (Saenko et al. 2010), we are given data from two related domains: a source and a target domain. For the source, we have access to both the input datapoints and their labels. For the target, we are only provided with input datapoints without any labels. Using the available data, the goal is to learn a classifier for the target domain. We extend Hoffman et al. (2017) to use an AlignFlow architecture and objective (adversarially trained Real-NVPs (Dinh, Sohl-Dickstein, and Bengio 2017) here) in place of CycleGAN for this task.

A variety of algorithms have been proposed for the above task which seek to match pixel-level or feature-level distributions across the two domains. See Appendix B for more details. For fair and relevant comparison, we compare against the baselines CyCADA (Hoffman et al. 2017) and UNIT (Liu, Breuel, and Kautz 2017), which involve pixel-level translations and are closest to the current work. We evaluate across all pairs of source and target datasets as in Hoffman et al. (2017) and Liu, Breuel, and Kautz (2017): MNIST, USPS, and SVHN, which are all image datasets of handwritten digits with 10 classes.

Table 2: Test classification accuracies for domain adaptation from source → target. The source only and target only models directly use classifiers trained on the source and target datasets respectively. Baseline numbers taken from the cited works.

  Model                                MNIST → USPS   USPS → MNIST   SVHN → MNIST
  source only                          82.2 ± 0.8     69.6 ± 3.8     67.1 ± 0.6
  ADDA (Tzeng et al. 2017)             89.4 ± 0.2     90.1 ± 0.8     76.0 ± 1.8
  CyCADA (Hoffman et al. 2017)         95.6 ± 0.2     96.5 ± 0.1     90.4 ± 0.4
  UNIT (Liu, Breuel, and Kautz 2017)   95.97          93.58          90.53
  AlignFlow                            96.2 ± 0.2     96.7 ± 0.1     91.0 ± 0.3
  target only                          96.3 ± 0.1     99.2 ± 0.1     99.2 ± 0.1

In Table 2, we see that AlignFlow outperforms both CyCADA (based on CycleGAN) and UNIT in all cases. Combining AlignFlow with other state-of-the-art adaptation approaches, e.g., Shu et al. (2018), Long et al. (2018), Kumar et al. (2018), Liu et al. (2018a), Sankaranarayanan et al. (2018), Liu et al. (2018b), is an interesting direction for future work.

[Figure 3: Examples of failure modes for CycleGAN reconstructions in SVHN ↔ MNIST cross-domain translation. In each group of three images (a-d), a real example from the source domain (SVHN) is shown on the left, the translated image in the target domain (MNIST) at center, and the reconstructed image in the source domain based on the translation on the right.]

In Figure 3, we show some failure modes of using approximately cycle-consistent objectives for the CyCADA model. Notice that the image label and style change or become unrecognizable in translating and reconstructing the input. In contrast, AlignFlow is exactly cycle consistent and hence, the source reconstructions based on the translated images will exactly match the source image by design.

5.3 Multi-Domain Concurrent Interpolations

The use of a shared latent space in AlignFlow allows us to perform paired interpolations in two domains simultaneously. While pure MLE without any parameter sharing does not give good alignment, pure adversarial training cannot be used for unconditional sampling since the prior $p_Z$ is inactive. Hence, we use AlignFlow models trained via a hybrid objective for latent space interpolations. In particular, we sample two datapoints $a', a'' \in D_A$ and obtain their latent representations $z', z'' \in Z$ via $G_{Z\to A}$. Following Dinh, Sohl-Dickstein, and Bengio (2017), we compute interpolations in the polar space as $\tilde{z} = z' \sin\phi + z'' \cos\phi$ for several values of $\phi \in (0, 2\pi)$. Finally, we map $\tilde{z}$ either back to domain A via $G_{Z\to A}$ or to B via $G_{Z\to B}$. We show this empirically on the MNIST/USPS datasets in Figure 4. We see that many aspects of style and content are preserved in the interpolated samples.

[Figure 4: Multi-domain latent space interpolations for (a) MNIST→USPS and (b) USPS→MNIST. Top: left-most and right-most images are sampled from $D_A$ (in red boxes); interpolation is performed in latent space and decoded using $G_{Z\to A}$. Bottom: for each corresponding image in the top row, its latent representation is decoded into the target domain using $G_{Z\to B}$. Note how both class identity and style are preserved in the interpolated pairs of digits in the two domains. Also, notice that the USPS images (even the true ones in red boxes) are slightly blurred due to the upscaling applied as standard preprocessing.]
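The polar interpolation step is small enough to sketch directly; an illustration of mine, where the sweep range is my choice and the decoded latents would be passed through $G_{Z\to A}$ and $G_{Z\to B}$:

import numpy as np

def polar_interpolate(z1, z2, num_steps=8):
    """Interpolate between two latents as z~ = z1*sin(phi) + z2*cos(phi)."""
    phis = np.linspace(0.0, np.pi / 2, num_steps)   # quarter sweep from z2 to z1
    return [z1 * np.sin(p) + z2 * np.cos(p) for p in phis]

# z_a, z_b would come from encoding two datapoints a', a'' in D_A.
z_a, z_b = np.ones(4), -np.ones(4)
for z_tilde in polar_interpolate(z_a, z_b):
    pass  # decode z_tilde into both domains to get a paired interpolation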
6 Related Work

A key assumption in unsupervised domain alignment is the existence of a deterministic or stochastic mapping $G_{A\to B}$ such that the distribution of B matches that of $G_{A\to B}(A)$, and vice versa. This assumption can be incorporated as a marginal distribution-matching constraint into the objective using an adversarially-trained GAN critic (Goodfellow et al. 2014). However, this objective is under-constrained. To partially mitigate this issue, CycleGAN (Zhu et al. 2017a), DiscoGAN (Kim et al. 2017), and DualGAN (Yi et al. 2017) added an approximate cycle-consistency constraint that encourages $G_{B\to A} \circ G_{A\to B}$ and $G_{A\to B} \circ G_{B\to A}$ to behave like identity functions on domains A and B respectively. While cycle-consistency is empirically very effective, alternatives based on variational autoencoders that do not require either cycles or adversarial training have also been proposed recently (Hoshen 2018; Hoshen and Wolf 2018). Models such as CoGAN (Liu and Tuzel 2016), UNIT (Liu, Breuel, and Kautz 2017), and CycleGAN (Zhu et al. 2017a) have since been extended to enable one-to-many mappings (Huang et al. 2018b; Zhu et al. 2017b) as well as multi-domain alignment (Choi et al. 2018). Our work focuses on the one-to-one unsupervised domain alignment setting.

In contrast to previous models, AlignFlow leverages both a shared latent space and exact cycle-consistency. To our knowledge, AlignFlow provides the first demonstration that invertible models can be used successfully in lieu of the cycle-consistency objective. Furthermore, AlignFlow allows the incorporation of exact maximum likelihood training, which we demonstrated to induce a meaningful shared latent space that is amenable to interpolation.

7" + }, + { + "url": "http://arxiv.org/abs/1903.08850v2", + "title": "Stochastic Optimization of Sorting Networks via Continuous Relaxations", + "abstract": "Sorting input objects is an important step in many machine learning\npipelines. However, the sorting operator is non-differentiable with respect to\nits inputs, which prohibits end-to-end gradient-based optimization. In this\nwork, we propose NeuralSort, a general-purpose continuous relaxation of the\noutput of the sorting operator from permutation matrices to the set of unimodal\nrow-stochastic matrices, where every row sums to one and has a distinct arg\nmax. This relaxation permits straight-through optimization of any computational\ngraph involving a sorting operation.
Further, we use this relaxation to enable\ngradient-based stochastic optimization over the combinatorially large space of\npermutations by deriving a reparameterized gradient estimator for the\nPlackett-Luce family of distributions over permutations. We demonstrate the\nusefulness of our framework on three tasks that require learning semantic\norderings of high-dimensional objects, including a fully differentiable,\nparameterized extension of the k-nearest neighbors algorithm.", + "authors": "Aditya Grover, Eric Wang, Aaron Zweig, Stefano Ermon", + "published": "2019-03-21", + "updated": "2019-04-29", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG", + "cs.NE" + ], + "main_content": "INTRODUCTION

Learning to automatically sort objects is useful in many machine learning applications, such as top-k multi-class classification (Berrada et al., 2018), ranking documents for information retrieval (Liu et al., 2009), and multi-object target tracking in computer vision (Bar-Shalom & Li, 1995). Such algorithms typically require learning informative representations of complex, high-dimensional data, such as images, before sorting and subsequent downstream processing. For instance, the k-nearest neighbors image classification algorithm, which orders the neighbors based on distances in the canonical pixel basis, can be highly suboptimal for classification (Weinberger et al., 2006). Deep neural networks can instead be used to learn representations, but these representations cannot be optimized end-to-end for a downstream sorting-based objective, since the sorting operator is not differentiable with respect to its input. In this work, we seek to remedy this shortcoming by proposing NeuralSort, a continuous relaxation to the sorting operator that is differentiable almost everywhere with respect to the inputs.

The output of any sorting algorithm can be viewed as a permutation matrix, which is a square matrix with entries in {0, 1} such that every row and every column sums to 1. Instead of a permutation matrix, NeuralSort returns a unimodal row-stochastic matrix. A unimodal row-stochastic matrix is defined as a square matrix with positive real entries, where each row sums to 1 and has a distinct arg max. All permutation matrices are unimodal row-stochastic matrices. NeuralSort has a temperature knob that controls the degree of approximation, such that in the limit of zero temperature, we recover a permutation matrix that sorts the inputs. Even for a non-zero temperature, we can efficiently project any unimodal matrix to the desired permutation matrix via a simple row-wise arg max operation. Hence, NeuralSort is also suitable for efficient straight-through gradient optimization (Bengio et al., 2013), which requires "exact" permutation matrices to evaluate learning objectives.

As the second primary contribution, we consider the use of NeuralSort for stochastic optimization over permutations. In many cases, such as latent variable models, the permutations may be latent but directly influence observed behavior, e.g., utility and choice models are often expressed as distributions over permutations which govern the observed decisions of agents (Regenwetter et al., 2006; Chierichetti et al., 2018). By learning distributions over unobserved permutations, we can account for the uncertainty in these permutations in a principled manner.
However, the challenge with stochastic optimization over discrete distributions lies in gradient estimation with respect to the distribution parameters. Vanilla REINFORCE estimators are impractical for most cases, or necessitate custom control variates for low-variance gradient estimation (Glasserman, 2013). In this regard, we consider the Plackett-Luce (PL) family of distributions over permutations (Plackett, 1975; Luce, 1959). A common modeling choice for ranking models, the PL distribution is parameterized by n scores, with its support defined over the symmetric group consisting of n! permutations. We derive a reparameterizable sampler for stochastic optimization with respect to this distribution, based on Gumbel perturbations to the n (log-)scores. However, the reparameterized sampler requires sorting these perturbed scores, and hence the gradients of a downstream learning objective with respect to the scores are not defined. By using NeuralSort instead, we can approximate the objective and obtain well-defined reparameterized gradient estimates for stochastic optimization.

Finally, we apply NeuralSort to tasks that require us to learn semantic orderings of complex, high-dimensional input data. First, we consider sorting images of handwritten digits, where the goal is to learn to sort images by their unobserved labels. Our second task extends the first one to quantile regression, where we want to estimate the median (50-th percentile) of a set of handwritten numbers. In addition to identifying the index of the median image in the sequence, we need to learn to map the inferred median digit to its scalar representation. In the third task, we propose an algorithm that learns a basis representation for the k-nearest neighbors (kNN) classifier in an end-to-end procedure. Because the choice of the k nearest neighbors requires a non-differentiable sorting, we use NeuralSort to obtain an approximate, differentiable surrogate. On all tasks, we observe significant empirical improvements due to NeuralSort over the relevant baselines and competing relaxations to permutation matrices.

2 PRELIMINARIES

An n-dimensional permutation $z = [z_1, z_2, \ldots, z_n]^T$ is a list of unique indices $\{1, 2, \ldots, n\}$. Every permutation z is associated with a permutation matrix $P_z \in \{0, 1\}^{n \times n}$ with entries given as:

$$P_z[i, j] = \begin{cases} 1 & \text{if } j = z_i \\ 0 & \text{otherwise.} \end{cases}$$

Let $\mathcal{Z}_n$ denote the set of all n! possible permutations in the symmetric group. We define the $\text{sort}: \mathbb{R}^n \to \mathcal{Z}_n$ operator as a mapping of n real-valued inputs to a permutation corresponding to a descending ordering of these inputs. E.g., if the input vector is $s = [9, 1, 5, 2]^T$, then $\text{sort}(s) = [1, 3, 4, 2]^T$ since the largest element is at the first index, the second largest element is at the third index, and so on. In case of ties, elements are assigned indices in the order they appear. We can obtain the sorted vector simply via $P_{\text{sort}(s)}\, s$.

2.1 PLACKETT-LUCE DISTRIBUTIONS

The family of Plackett-Luce distributions over permutations is best described via a generative process: Consider a sequence of n items, each associated with a canonical index $i = 1, 2, \ldots, n$. A common assumption in ranking models is that the underlying generating process for any observed permutation of n items satisfies Luce's choice axiom (Luce, 1959). Mathematically, this axiom defines the "choice" probability of an item with index i as $q(i) \propto s_i$, where $s_i > 0$ is interpreted as the score of the item with index i. The normalization constant is given by $Z = \sum_{i \in \{1, 2, \ldots, n\}} s_i$.

If we choose the n items one at a time (without replacement) based on these choice probabilities, we obtain a discrete distribution over all possible permutations. This distribution is referred to as the Plackett-Luce (PL) distribution, and its probability mass function for any $z \in \mathcal{Z}_n$ is given by:

$$q(z|s) = \frac{s_{z_1}}{Z} \cdot \frac{s_{z_2}}{Z - s_{z_1}} \cdots \frac{s_{z_n}}{Z - \sum_{i=1}^{n-1} s_{z_i}} \quad (1)$$

where $s = \{s_1, s_2, \ldots, s_n\}$ is the vector of scores parameterizing this distribution (Plackett, 1975).
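Eq. (1) transcribes directly into code; an illustrative sketch of mine (0-indexed permutations):

import numpy as np

def pl_log_prob(z, s):
    """Log-probability of permutation z (0-indexed) under PL with scores s."""
    s = np.asarray(s, dtype=np.float64)
    ordered = s[np.asarray(z)]                 # s_{z_1}, ..., s_{z_n}
    # Denominators: Z, Z - s_{z_1}, Z - s_{z_1} - s_{z_2}, ...
    denoms = s.sum() - np.concatenate(([0.0], np.cumsum(ordered[:-1])))
    return float(np.sum(np.log(ordered) - np.log(denoms)))

scores = [9.0, 1.0, 5.0, 2.0]
# The sorted order [1, 3, 4, 2] (0-indexed: [0, 2, 3, 1]) is the likeliest draw.
print(np.exp(pl_log_prob([0, 2, 3, 1], scores)))   # (9/17)(5/8)(2/3)(1/1)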
[Figure 1: Stochastic computation graph with a deterministic node z corresponding to the output of a sort operator applied to the scores s.]

2.2 STOCHASTIC COMPUTATION GRAPHS

The abstraction of stochastic computation graphs (SCG) compactly specifies the forward value and the backward gradient computation for computational circuits. An SCG is a directed acyclic graph that consists of three kinds of nodes: input nodes which specify external inputs (including parameters), deterministic nodes which are deterministic functions of their parents, and stochastic nodes which are distributed conditionally on their parents. See Schulman et al. (2015) for a review.

To define gradients of an objective function with respect to any node in the graph, the chain rule necessitates that the gradients with respect to the intermediate nodes are well-defined. This is not the case for the sort operator. In Section 3, we propose to extend stochastic computation graphs with nodes corresponding to a relaxation of the deterministic sort operator. In Section 4, we further use this relaxation to extend computation graphs to include stochastic nodes corresponding to distributions over permutations. The proofs of all theoretical results in this work are deferred to Appendix B.

3 NEURALSORT: THE RELAXED SORTING OPERATOR

Our goal is to optimize training objectives involving a sort operator with gradient-based methods. Consider the optimization of objectives written in the following form:

$$L(\theta, s) = f(P_z; \theta) \quad (2)$$

where $z = \text{sort}(s)$. Here, $s \in \mathbb{R}^n$ denotes a vector of n real-valued scores, z is the permutation that (deterministically) sorts the scores s, and $f(\cdot)$ is an arbitrary function of interest assumed to be differentiable w.r.t. a set of parameters $\theta$ and z. For example, in a ranking application, these scores could correspond to the inferred relevances of n webpages and $f(\cdot)$ could be a ranking loss.

Figure 1 shows the stochastic computation graph corresponding to the objective in Eq. 2. We note that this could represent part of a more complex computation graph, which we skip for ease of presentation while maintaining the generality of the scope of this work. While the gradient of the above objective w.r.t. $\theta$ is well-defined and can be computed via standard backpropagation, the gradient w.r.t. the scores s is not defined since the sort operator is not differentiable w.r.t. s. Our solution is to derive a relaxation to the sort operator that leads to a surrogate objective with well-defined gradients. In particular, we seek to use such a relaxation to replace the permutation matrix $P_z$ in Eq. 2 with an approximation $\hat{P}_z$ such that the surrogate objective $f(\hat{P}_z; \theta)$ is differentiable w.r.t. the scores s.
The general recipe to relax non-differentiable operators with discrete codomains N is to consider differentiable alternatives that map the input to a larger continuous codomain M with desirable properties. For gradient-based optimization, we are interested in two key properties:

1. The relaxation is continuous everywhere and differentiable (almost-)everywhere with respect to elements in the input domain.
2. There exists a computationally efficient projection from M back to N.

Relaxations satisfying the first requirement are amenable to automatic differentiation for optimizing stochastic computational graphs. The second requirement is useful for evaluating metrics and losses that necessarily require a discrete output akin to the one obtained from the original, non-relaxed operator. E.g., in straight-through gradient estimation (Bengio et al., 2013; Jang et al., 2017), the non-relaxed operator is used for evaluating the learning objective in the forward pass and the relaxed operator is used in the backward pass for gradient estimation. The canonical example is the 0/1 loss used for binary classification. While the 0/1 loss is discontinuous w.r.t. its inputs (real-valued predictions from a model), surrogates such as the logistic and hinge losses are continuous everywhere and differentiable almost-everywhere (property 1), and can give hard binary predictions via thresholding (property 2).

Note: For brevity, we assume that the arg max operator is applied over a set of elements with a unique maximizer and hence, the operator has well-defined semantics. With some additional bookkeeping for resolving ties, the results in this section hold even if the elements to be sorted are not unique. See Appendix C.

Unimodal Row Stochastic Matrices. The sort operator maps the input vector to a permutation, or equivalently a permutation matrix. Our relaxation to sort is motivated by the geometric structure of permutation matrices. The set of permutation matrices is a subset of the doubly-stochastic matrices, i.e., non-negative matrices such that every row and column sums to one. If we remove the requirement that every column should sum to one, we obtain the larger set of row stochastic matrices. In this work, we propose a relaxation to sort that maps inputs to an alternate subset of row stochastic matrices, which we refer to as the unimodal row stochastic matrices.

Definition 1 (Unimodal Row Stochastic Matrices). An $n \times n$ matrix is Unimodal Row Stochastic if it satisfies the following conditions:

1. Non-negativity: $U[i, j] \geq 0 \quad \forall i, j \in \{1, 2, \ldots, n\}$.
2. Row Affinity: $\sum_{j=1}^{n} U[i, j] = 1 \quad \forall i \in \{1, 2, \ldots, n\}$.
3. Argmax Permutation: Let u denote an n-dimensional vector with entries such that $u_i = \arg\max_j U[i, j] \; \forall i \in \{1, 2, \ldots, n\}$. Then, $u \in \mathcal{Z}_n$, i.e., it is a valid permutation.

We denote $\mathcal{U}_n$ as the set of $n \times n$ unimodal row stochastic matrices. All row stochastic matrices satisfy the first two conditions. The third condition is useful for gradient-based optimization involving sorting-based losses: it provides a straightforward mechanism for extracting a permutation from a unimodal row stochastic matrix via a row-wise arg max operation. Figure 2 shows the relationships between the different subsets of square matrices.

[Figure 2: Center: Venn diagram of the relationships between permutation matrices (P), doubly-stochastic matrices (D), unimodal row stochastic matrices (U), and row stochastic matrices (R). Left: a doubly-stochastic matrix that is not unimodal, [[0, 1/2, 1/2], [7/16, 3/16, 3/8], [9/16, 5/16, 1/8]]. Right: a unimodal matrix that is not doubly-stochastic, [[3/8, 1/8, 1/2], [3/4, 1/4, 0], [1/4, 1/2, 1/4]].]
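The three conditions of Definition 1 translate directly into a checker; a small sketch of mine, applied to the left example matrix from Figure 2:

import numpy as np

def is_unimodal_row_stochastic(U, tol=1e-8):
    """Check the three conditions of Definition 1 for a square matrix U."""
    U = np.asarray(U, dtype=np.float64)
    nonneg = np.all(U >= -tol)                       # 1. non-negativity
    rows_sum_one = np.allclose(U.sum(axis=1), 1.0)   # 2. row affinity
    u = U.argmax(axis=1)
    argmax_perm = len(set(u)) == U.shape[0]          # 3. argmax is a permutation
    return bool(nonneg and rows_sum_one and argmax_perm)

# Doubly-stochastic but NOT unimodal: two rows share the same arg max column.
D = [[0, 1/2, 1/2], [7/16, 3/16, 3/8], [9/16, 5/16, 1/8]]
print(is_unimodal_row_stochastic(D))   # False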
NeuralSort. Our relaxation to the sort operator is based on a standard identity for evaluating the sum of the k largest elements in any input vector.

Lemma 2. [Lemma 1 in Ogryczak & Tamir (2003)] For an input vector $s = [s_1, s_2, \ldots, s_n]^T$ that is sorted as $s_{[1]} \geq s_{[2]} \geq \ldots \geq s_{[n]}$, we have the sum of the k largest elements given as:

$$\sum_{i=1}^{k} s_{[i]} = \min_{\lambda \in \{s_1, s_2, \ldots, s_n\}} \left\{ \lambda k + \sum_{i=1}^{n} \max(s_i - \lambda, 0) \right\}. \quad (3)$$

The identity in Lemma 2 outputs the sum of the top-k elements. The k-th largest element itself can be recovered by taking the difference of the sum of the top-k elements and the top-(k-1) elements.

Corollary 3. Let $s = [s_1, s_2, \ldots, s_n]^T$ be a real-valued vector of length n. Let $A_s$ denote the matrix of absolute pairwise differences of the elements of s such that $A_s[i, j] = |s_i - s_j|$. The permutation matrix $P_{\text{sort}(s)}$ corresponding to $\text{sort}(s)$ is given by:

$$P_{\text{sort}(s)}[i, j] = \begin{cases} 1 & \text{if } j = \arg\max\left[ (n + 1 - 2i)\, s - A_s \mathbf{1} \right] \\ 0 & \text{otherwise} \end{cases} \quad (4)$$

where $\mathbf{1}$ denotes the column vector of all ones. E.g., if we set $i = \lfloor (n+1)/2 \rfloor$, then the non-zero entry in the i-th row $P_{\text{sort}(s)}[i, :]$ corresponds to the element with the minimum sum of (absolute) distances to the other elements. As desired, this corresponds to the median element.

The relaxation requires $O(n^2)$ operations to compute $A_s$, as opposed to the $O(n \log n)$ overall complexity for the best known sorting algorithms. In practice however, it is highly parallelizable and can be implemented efficiently on GPU hardware.

The arg max operator is non-differentiable, which prohibits the direct use of Corollary 3 for gradient computation. Instead, we propose to replace the arg max operator with soft max to obtain a continuous relaxation $\hat{P}_{\text{sort}(s)}(\tau)$. In particular, the i-th row of $\hat{P}_{\text{sort}(s)}(\tau)$ is given by:

$$\hat{P}_{\text{sort}(s)}[i, :](\tau) = \text{softmax}\left[ \left( (n + 1 - 2i)\, s - A_s \mathbf{1} \right) / \tau \right] \quad (5)$$

where $\tau > 0$ is a temperature parameter. Our relaxation is continuous everywhere and differentiable almost everywhere with respect to the elements of s. Furthermore, we have the following result.

Theorem 4. Let $\hat{P}_{\text{sort}(s)}$ denote the continuous relaxation to the permutation matrix $P_{\text{sort}(s)}$ for an arbitrary input vector s and temperature $\tau$ defined in Eq. 5. Then, we have:

1. Unimodality: $\forall \tau > 0$, $\hat{P}_{\text{sort}(s)}$ is a unimodal row stochastic matrix. Further, let u denote the permutation obtained by applying arg max row-wise to $\hat{P}_{\text{sort}(s)}$. Then, $u = \text{sort}(s)$.
2. Limiting behavior: If we assume that the entries of s are drawn independently from a distribution that is absolutely continuous w.r.t. the Lebesgue measure in $\mathbb{R}$, then the following convergence holds almost surely:

$$\lim_{\tau \to 0^+} \hat{P}_{\text{sort}(s)}[i, :](\tau) = P_{\text{sort}(s)}[i, :] \quad \forall i \in \{1, 2, \ldots, n\}. \quad (6)$$

Unimodality allows for efficient projection of the relaxed permutation matrix $\hat{P}_{\text{sort}(s)}$ to the hard matrix $P_{\text{sort}(s)}$ via a row-wise arg max, e.g., for straight-through gradients. For analyzing the limiting behavior, independent draws ensure that the elements of s are distinct almost surely. The temperature $\tau$ controls the degree of smoothness of our approximation. At one extreme, the approximation becomes tighter as the temperature is reduced. In practice however, the trade-off is in the variance of these estimates, which is typically lower for larger temperatures.
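Eq. (5) is only a few lines of code; the following is a minimal NumPy sketch of mine (not the authors' released implementation):

import numpy as np

def neural_sort(s, tau=1.0):
    """Relaxed sorting operator of Eq. (5): returns an (n, n) unimodal
    row-stochastic approximation to the permutation matrix P_sort(s)."""
    s = np.asarray(s, dtype=np.float64).reshape(-1)
    n = s.shape[0]
    A_s = np.abs(s[:, None] - s[None, :])           # pairwise |s_i - s_j|
    scaling = n + 1 - 2 * np.arange(1, n + 1)       # (n + 1 - 2i) per row i
    logits = (scaling[:, None] * s[None, :] - A_s.sum(axis=1)[None, :]) / tau
    logits -= logits.max(axis=1, keepdims=True)     # numerically stable softmax
    P_hat = np.exp(logits)
    return P_hat / P_hat.sum(axis=1, keepdims=True)

s = np.array([9.0, 1.0, 5.0, 2.0])
P_hat = neural_sort(s, tau=0.1)
# Row-wise arg max projects back to the hard permutation (Theorem 4).
print(P_hat.argmax(axis=1) + 1)   # [1 3 4 2], matching sort(s) in Section 2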
4 STOCHASTIC OPTIMIZATION OVER PERMUTATIONS

In many scenarios, we would like the ability to express our uncertainty in inferring a permutation, e.g., latent variable models with latent nodes corresponding to permutations. Random variables that assume values corresponding to permutations can be represented via stochastic nodes in the stochastic computation graph. For optimizing the parameters of such a graph, consider the following class of objectives:

$$L(\theta, s) = \mathbb{E}_{q(z|s)}\left[ f(P_z; \theta) \right] \quad (7)$$

where $\theta$ and s denote sets of parameters, $P_z$ is the permutation matrix corresponding to the permutation z, $q(\cdot)$ is a parameterized distribution over the elements of the symmetric group $\mathcal{Z}_n$, and $f(\cdot)$ is an arbitrary function of interest assumed to be differentiable in $\theta$ and z. The SCG is shown in Figure 3a. In contrast to the SCG considered in the previous section (Figure 1), here we are dealing with a distribution over permutations as opposed to a single (deterministically computed) one.

[Figure 3: Stochastic computation graphs with stochastic nodes corresponding to permutations: (a) stochastic and (b) reparameterized stochastic. Squares denote deterministic nodes and circles denote stochastic nodes.]

While such objectives are typically intractable to evaluate exactly since they require summing over a combinatorially large set, we can obtain unbiased estimates efficiently via Monte Carlo. Monte Carlo estimates of gradients w.r.t. $\theta$ can be derived simply via linearity of expectation. However, the gradient estimates w.r.t. s cannot be obtained directly since the sampling distribution depends on s. The REINFORCE gradient estimator (Glynn, 1990; Williams, 1992; Fu, 2006) uses the fact that $\nabla_s q(z|s) = q(z|s) \nabla_s \log q(z|s)$ to derive the following Monte Carlo gradient estimates:

$$\nabla_s L(\theta, s) = \mathbb{E}_{q(z|s)}\left[ f(P_z; \theta) \nabla_s \log q(z|s) \right] + \mathbb{E}_{q(z|s)}\left[ \nabla_s f(P_z; \theta) \right]. \quad (8)$$
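For concreteness, a PyTorch sketch of mine of the score-function term in Eq. (8) for the PL distribution (the toy objective f is hypothetical; sampling without replacement proportional to the scores is exactly the PL generative process):

import torch

def pl_log_prob(z, s):
    """Differentiable log q(z|s) under the PL distribution of Eq. (1)."""
    ordered = s[z]
    denoms = s.sum() - torch.cat([torch.zeros(1), torch.cumsum(ordered, 0)[:-1]])
    return (torch.log(ordered) - torch.log(denoms)).sum()

s = torch.tensor([2.0, 1.0, 3.0], requires_grad=True)
f = lambda z: float(z[0] == 2)   # toy reward: is item 2 ranked first?

grad_est = torch.zeros(3)
for _ in range(1000):
    z = torch.multinomial(s.detach(), num_samples=3, replacement=False)
    logp = pl_log_prob(z, s)
    grad_est += f(z) * torch.autograd.grad(logp, s)[0]
grad_est /= 1000   # Monte Carlo REINFORCE estimate of the first term in Eq. (8)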
4.1 REPARAMETERIZED GRADIENT ESTIMATORS FOR PL DISTRIBUTIONS

REINFORCE gradient estimators typically suffer from high variance (Schulman et al., 2015; Glasserman, 2013). Reparameterized samplers provide an alternate gradient estimator by expressing samples from a distribution as a deterministic function of its parameters and a fixed source of randomness (Kingma & Welling, 2014; Rezende et al., 2014; Titsias & Lazaro-Gredilla, 2014). Since the randomness is from a fixed distribution, Monte Carlo gradient estimates can be derived by pushing the gradient operator inside the expectation (via linearity). In this section, we will derive a reparameterized sampler and gradient estimator for the Plackett-Luce (PL) family of distributions.

Let the score $s_i$ for an item $i \in \{1, 2, \ldots, n\}$ be an unobserved random variable drawn from some underlying score distribution (Thurstone, 1927). Now for each item, we draw a score from its corresponding score distribution. Next, we generate a permutation by applying the deterministic sort operator to these n randomly sampled scores. Interestingly, prior work has shown that the resulting distribution over permutations corresponds to a PL distribution if and only if the scores are sampled independently from Gumbel distributions with identical scales.

Proposition 5. [adapted from Yellott Jr (1977)] Let s be a vector of scores for the n items. For each item i, sample $g_i \sim \text{Gumbel}(0, \beta)$ independently with zero mean and a fixed scale $\beta$. Let $\tilde{s}$ denote the vector of Gumbel-perturbed log-scores with entries such that $\tilde{s}_i = \beta \log s_i + g_i$. Then:

$$q(\tilde{s}_{z_1} \geq \cdots \geq \tilde{s}_{z_n}) = \frac{s_{z_1}}{Z} \cdot \frac{s_{z_2}}{Z - s_{z_1}} \cdots \frac{s_{z_n}}{Z - \sum_{i=1}^{n-1} s_{z_i}}. \quad (9)$$

For ease of presentation, we assume $\beta = 1$ in the rest of this work. Proposition 5 provides a method for sampling from PL distributions with parameters s by adding Gumbel perturbations to the log-scores and applying the sort operator to the perturbed log-scores. This procedure can be seen as a reparameterization trick that expresses a sample from the PL distribution as a deterministic function of the scores and a fixed source of randomness (Figure 3b). Letting g denote the vector of i.i.d. Gumbel perturbations, we can express the objective in Eq. 7 as:

$$L(\theta, s) = \mathbb{E}_{g}\left[ f(P_{\text{sort}(\log s + g)}; \theta) \right]. \quad (10)$$

While the reparameterized sampler removes the dependence of the expectation on the parameters s, it introduces a sort operator in the computation graph such that the overall objective is non-differentiable in s. In order to obtain a differentiable surrogate, we approximate the objective based on the NeuralSort relaxation to the sort operator:

$$\mathbb{E}_{g}\left[ f(P_{\text{sort}(\log s + g)}; \theta) \right] \approx \mathbb{E}_{g}\left[ f(\hat{P}_{\text{sort}(\log s + g)}; \theta) \right] := \hat{L}(\theta, s). \quad (11)$$

Accordingly, we get the following reparameterized gradient estimates for the approximation:

$$\nabla_s \hat{L}(\theta, s) = \mathbb{E}_{g}\left[ \nabla_s f(\hat{P}_{\text{sort}(\log s + g)}; \theta) \right] \quad (12)$$

which can be estimated efficiently via Monte Carlo because the expectation is with respect to a distribution that does not depend on s.
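Putting Proposition 5 and Eqs. (10)-(11) together, a minimal PyTorch sketch of mine of the reparameterized, relaxed objective (the toy loss at the end is hypothetical):

import torch

def neural_sort(s, tau=1.0):
    """Relaxed sort of Eq. (5); s has shape (n,), output has shape (n, n)."""
    n = s.shape[0]
    A_s = (s[:, None] - s[None, :]).abs()
    scaling = n + 1 - 2 * torch.arange(1, n + 1, dtype=s.dtype)
    logits = scaling[:, None] * s[None, :] - A_s.sum(dim=1)[None, :]
    return torch.softmax(logits / tau, dim=1)

log_s = torch.zeros(5, requires_grad=True)   # learnable log-scores

# Reparameterized sample: Gumbel-perturb the log-scores (Proposition 5) ...
g = -torch.log(-torch.log(torch.rand(5)))    # standard Gumbel noise
P_hat = neural_sort(log_s + g, tau=0.5)      # ... then relax the sort (Eq. 11)

# Any downstream differentiable f(P_hat) now yields gradients w.r.t. log_s,
# e.g., a toy objective rewarding item 0 in the first rank:
loss = -P_hat[0, 0]
loss.backward()
print(log_s.grad)   # reparameterized gradient estimate, Eq. (12)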
5 DISCUSSION AND RELATED WORK

The problem of learning to rank documents based on relevance has been studied extensively in the context of information retrieval. In particular, listwise approaches learn functions that map objects to scores. Much of this work concerns the PL distribution: the RankNet algorithm (Burges et al., 2005) can be interpreted as maximizing the PL likelihood of pairwise comparisons between items, while the ListMLE ranking algorithm in Xia et al. (2008) extends this with a loss that maximizes the PL likelihood of ground-truth permutations directly. The differentiable pairwise approaches to ranking, such as Rigutini et al. (2011), learn to approximate the comparator between pairs of objects. Our work considers a generalized setting where sorting-based operators can be inserted anywhere in computation graphs to extend traditional pipelines, e.g., kNN.

Prior works have proposed relaxations of permutation matrices to the Birkhoff polytope, which is defined as the convex hull of the set of permutation matrices, a.k.a. the set of doubly-stochastic matrices. A doubly-stochastic matrix is a permutation matrix iff it is orthogonal, and continuous relaxations based on these matrices have been used previously for solving NP-complete problems such as seriation and graph matching (Fogel et al., 2013; Fiori et al., 2013; Lim & Wright, 2014). Adams & Zemel (2011) proposed the use of the Sinkhorn operator to map any square matrix to the Birkhoff polytope. They interpret the resulting doubly-stochastic matrix as the marginals of a distribution over permutations. Mena et al. (2018) propose an alternate method where the square matrix defines a latent distribution over the doubly-stochastic matrices themselves. These distributions can be sampled from by adding elementwise Gumbel perturbations. Linderman et al. (2018) propose a rounding procedure that uses the Sinkhorn operator to directly sample matrices near the Birkhoff polytope. Unlike Mena et al. (2018), the resulting distribution over matrices has a tractable density. In practice, however, the approach of Mena et al. (2018) performs better and will be the main baseline we will be comparing against in our experiments in Section 6.

As discussed in Section 3, NeuralSort maps permutation matrices to the set of unimodal row-stochastic matrices. For the stochastic setting, the PL distribution permits efficient sampling and exact, tractable density estimation, making it an attractive choice for several applications, e.g., variational inference over latent permutations. Our reparameterizable sampler, while also making use of the Gumbel distribution, is based on a result unique to the PL distribution (Proposition 5). The use of the Gumbel distribution for defining continuous relaxations to discrete distributions was first proposed concurrently by Jang et al. (2017) and Maddison et al. (2017) for categorical variables, referred to as Gumbel-Softmax. The number of possible permutations grows factorially with the dimension, and thus any distribution over n-dimensional permutations can be equivalently seen as a distribution over n! categories. Gumbel-Softmax does not scale to a combinatorially large number of categories (Kim et al., 2016; Mussmann et al., 2017), necessitating the use of alternate relaxations, such as the one considered in this work.

6 EXPERIMENTS

We refer to the two approaches proposed in Sections 3 and 4 as Deterministic NeuralSort and Stochastic NeuralSort, respectively. For additional hyperparameter details and analysis, see Appendix D.

6.1 SORTING HANDWRITTEN NUMBERS

Dataset. We first create the large-MNIST dataset, which extends the MNIST dataset of handwritten digits. The dataset consists of multi-digit images, each a concatenation of 4 randomly selected individual images from MNIST. Each image is associated with a real-valued label, which corresponds to its concatenated MNIST labels; e.g., an image concatenating the digits 1, 8, 1, and 0 has the label 1810. Using the large-MNIST dataset, we finally create a dataset of sequences, where every sequence consists of n randomly sampled large-MNIST images.

Setup. Given a dataset of sequences of large-MNIST images, our goal is to learn to predict the permutation that sorts the labels of the sequence of images, given a training set of ground-truth permutations. Figure 4 (Task 1) illustrates this task on an example sequence of n = 5 large-MNIST images. This task is a challenging extension of the one considered by Mena et al. (2018) in sorting scalars, since it involves learning the semantics of high-dimensional objects prior to sorting.
A good model needs to learn to dissect the individual digits in an image, rank these digits, and finally, compose such rankings based on the digit positions within an image. The available supervision, in the form of the ground-truth permutation, is very weak compared to a classification setting that gives direct access to the image labels.

[Figure 4: Sorting and quantile regression. Scores $s_i = \text{CNN}(x_i)$ feed $z = \text{NeuralSort}(s)$; Task 1 applies a sorting loss against the ground-truth permutation, while Task 2 selects $\hat{x} = x_{z[3]}$ and regresses $\hat{y} = \text{CNN}(\hat{x})$. The model is trained to sort sequences of n = 5 large-MNIST images $x_1, x_2, \ldots, x_5$ (Task 1) and regress the median value (Task 2). In the above example, the ground-truth permutation that sorts the input sequence from largest to smallest is $[3, 5, 1, 4, 2]^T$, 9803 being the largest and 1270 the smallest. Blue illustrates the true median image $x_1$ with ground-truth sorted index 3 and value 2960.]

Baselines. All baselines use a CNN that is shared across all images in a sequence to map each large-MNIST image to a feature space. The vanilla row-stochastic (RS) baseline concatenates the CNN representations for the n images into a single vector that is fed into a multilayer perceptron that outputs n multiclass predictions of the image probabilities for each rank. The Sinkhorn and Gumbel-Sinkhorn baselines, as discussed in Section 5, use the Sinkhorn operator to map the stacked CNN representations for the n objects into a doubly-stochastic matrix. For all methods, we minimized the cross-entropy loss between the predicted matrix and the ground-truth permutation matrix.

Results. Following Mena et al. (2018), our evaluation metric is the proportion of correctly predicted permutations on a test set of sequences. Additionally, we evaluate the proportion of individual elements ranked correctly. Table 1 demonstrates that the approaches based on the proposed sorting relaxation significantly outperform the baseline approaches for all n considered. The performance of the deterministic and stochastic variants is comparable. The vanilla RS baseline performs well in ranking individual elements, but is not good at recovering the overall square matrix. We believe the poor performance of the Sinkhorn baselines is partly because these methods were designed and evaluated for matchings. Like the output of sort, matchings can also be represented as permutation matrices. However, distributions over matchings need not satisfy Luce's choice axiom or imply a total ordering, which could explain the poor performance on the tasks considered.

Table 1: Average sorting accuracy on the test set. The first value is the proportion of permutations correctly identified; the value in parentheses is the proportion of individual element ranks correctly identified.

  Algorithm                  n = 3           n = 5           n = 7           n = 9           n = 15
  Vanilla RS                 0.467 (0.801)   0.093 (0.603)   0.009 (0.492)   0. (0.113)      0. (0.067)
  Sinkhorn                   0.462 (0.561)   0.038 (0.293)   0.001 (0.197)   0. (0.143)      0. (0.078)
  Gumbel-Sinkhorn            0.484 (0.575)   0.033 (0.295)   0.001 (0.189)   0. (0.146)      0. (0.078)
  Deterministic NeuralSort   0.930 (0.951)   0.837 (0.927)   0.738 (0.909)   0.649 (0.896)   0.386 (0.857)
  Stochastic NeuralSort      0.927 (0.950)   0.835 (0.926)   0.741 (0.909)   0.646 (0.895)   0.418 (0.862)
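The training loss described above (cross-entropy between the predicted matrix and the ground-truth permutation matrix) reduces to a few lines; a sketch of my formulation, not the authors' code:

import torch

def permutation_cross_entropy(P_hat, z_true):
    """Row-wise cross-entropy between a predicted row-stochastic matrix
    P_hat (n, n) and a ground-truth permutation z_true (0-indexed)."""
    # Row i of the true permutation matrix is one-hot at column z_true[i],
    # so the loss reduces to the negative log of those entries.
    rows = torch.arange(len(z_true))
    return -torch.log(P_hat[rows, z_true] + 1e-12).mean()

# e.g., for the ground-truth permutation [3, 5, 1, 4, 2] of Figure 4
# (1-indexed), pass z_true = torch.tensor([2, 4, 0, 3, 1]).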
6.2 QUANTILE REGRESSION

Setup. In this experiment, we extend the sorting task to regression. Again, each sequence contains n large-MNIST images, and the regression target for each sequence is the 50-th quantile (i.e., the median) of the n labels of the images in the sequence. Figure 4 (Task 2) illustrates this task on an example sequence of n = 5 large-MNIST images, where the goal is to output the third largest label. The design of this task highlights two key challenges, since it explicitly requires learning both a suitable representation for sorting high-dimensional inputs and a secondary function that approximates the label itself (regression). Again, the supervision available, in the form of the label of only a single image at an arbitrary and unknown location in the sequence, is weak.

Baselines. In addition to Sinkhorn and Gumbel-Sinkhorn, we design two more baselines. The Constant baseline always returns the median of the full range of possible outputs, ignoring the input sequence. This corresponds to 4999.5 since we are sampling large-MNIST images uniformly in the range of four-digit numbers. The vanilla neural net (NN) baseline directly maps the input sequence of images to a real-valued prediction for the median.

Results. Our evaluation metrics are the mean squared error (MSE) and R2 on a test set of sequences. Results for n = {5, 9, 15} images are shown in Table 2. The Vanilla NN baseline, while incurring a large MSE, is competitive on the R2 metric. The other baselines give comparable performance on the MSE metric. The proposed NeuralSort approaches outperform the competing methods on both the metrics considered. The stochastic NeuralSort approach is the consistent best performer on MSE, while the deterministic NeuralSort is slightly better on the R2 metric.

Table 2: Test mean squared error (x 10^-4) and R2 values (in parentheses) for quantile regression.

  Algorithm                  n = 5            n = 9           n = 15
  Constant (Simulated)       356.79 (0.00)    227.31 (0.00)   146.94 (0.00)
  Vanilla NN                 1004.70 (0.85)   699.15 (0.82)   562.97 (0.79)
  Sinkhorn                   343.60 (0.25)    231.87 (0.19)   156.27 (0.04)
  Gumbel-Sinkhorn            344.28 (0.25)    232.56 (0.23)   157.34 (0.06)
  Deterministic NeuralSort   45.50 (0.95)     34.98 (0.94)    34.78 (0.92)
  Stochastic NeuralSort      33.80 (0.94)     31.43 (0.93)    29.34 (0.90)

6.3 END-TO-END, DIFFERENTIABLE k-NEAREST NEIGHBORS

Setup. In this experiment, we design a fully differentiable, end-to-end k-nearest neighbors (kNN) classifier. Unlike a standard kNN classifier, which computes distances between points in a predefined space, we learn a representation of the data points before evaluating the k-nearest neighbors. We are given access to a dataset D of (x, y) pairs of standard input data and their class labels respectively. The differentiable kNN algorithm has three hyperparameters: the number of training neighbors n, the number of top candidates k, and the sorting temperature $\tau$. Every sequence of items here consists of a query point x and a randomly sampled subset of n candidate nearest neighbors from the training set, say $\{x_1, x_2, \ldots, x_n\}$. In principle, we could use the entire training set (excluding the query point) as candidate points, but this can hurt the learning both computationally and statistically. The query points are randomly sampled from the train/validation/test sets as appropriate, but the nearest neighbors are always sampled from the training set.

The loss function optimizes for a representation space $h_\phi(\cdot)$ (e.g., a CNN) such that the top-k candidate points with the minimum Euclidean distance to the query point in the representation space have the same label as the query point. Note that at test time, once the representation space $h_\phi$ is learned, we can use the entire training set as the set of candidate points, akin to a standard kNN classifier. Figure 5 illustrates the proposed algorithm.

[Figure 5: Differentiable kNN. Embeddings $e_i = h_\phi(x_i)$ yield scores $s_i = -\|e_i - e_0\|^2$, which feed $z = \text{NeuralSort}(s)$ and the loss $\ell_{\text{kNN}}(P_z, y_0, y_1, \ldots, y_n)$. The model is trained such that the representations $e_i$ for the training points $\{x_1, \ldots, x_n\}$ that have the same label $y_0$ as $x_0$ are closer to $e_0$ (included in the top-k) than others.]

Formally, for any datapoint x, let z denote a permutation of the n candidate points. The uniformly-weighted kNN loss, denoted as $\ell_{\text{kNN}}(\cdot)$, can be written as follows:

$$\ell_{\text{kNN}}(\hat{P}_z, y, y_1, y_2, \ldots, y_n) = -\frac{1}{k} \sum_{j=1}^{k} \sum_{i=1}^{n} \mathbb{1}(y_i = y)\, \hat{P}_z[i, j] \quad (13)$$

where $\{y_1, y_2, \ldots, y_n\}$ are the labels for the candidate points. Note that when $\hat{P}_z$ is an exact permutation matrix (i.e., temperature $\tau \to 0$), this expression is exactly the negative of the fraction of the k nearest neighbors that have the same label as x.
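A sketch of mine of Eq. (13) (not the authors' code); it assumes P_hat follows the row-equals-rank convention of the earlier neural_sort sketch, i.e., the transpose of the indexing in Eq. (13):

import torch

def knn_loss(P_hat, y_query, y_candidates, k):
    """Uniformly-weighted kNN loss of Eq. (13).

    P_hat:        (n, n) relaxed permutation matrix; row j softly selects the
                  j-th nearest candidate, since the scores s_i = -||e_i - e_0||^2
                  of Figure 5 rank nearest candidates highest.
    y_query:      scalar label of the query point.
    y_candidates: (n,) labels of the candidate points.
    """
    same_label = (y_candidates == y_query).to(P_hat.dtype)   # 1(y_i = y)
    # Soft top-k assignment mass landing on same-label candidates.
    return -(P_hat[:k, :] @ same_label).sum() / k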
Using Eq. 13, the training objectives for Deterministic and Stochastic NeuralSort are given as:

$$\text{Deterministic:} \quad \min_\phi \frac{1}{|D|} \sum_{(x,y) \in D} \ell_{\text{kNN}}(\hat{P}_{\text{sort}(s)}, y, y_1, \ldots, y_n) \quad (14)$$

$$\text{Stochastic:} \quad \min_\phi \frac{1}{|D|} \sum_{(x,y) \in D} \mathbb{E}_{z \sim q(z|s)}\left[ \ell_{\text{kNN}}(\hat{P}_z, y, y_1, y_2, \ldots, y_n) \right] \quad (15)$$

where each entry of s is given as $s_j = -\|h_\phi(x) - h_\phi(x_j)\|_2^2$.

Datasets. We consider three benchmark datasets: the MNIST dataset of handwritten digits, the Fashion-MNIST dataset of fashion apparel, and the CIFAR-10 dataset of natural images (no data augmentation), with the canonical splits for training and testing.

Baselines. We consider kNN baselines that operate in three standard representation spaces: the canonical pixel basis, the basis specified by the top 50 principal components (PCA), and an autoencoder (AE). Additionally, we experimented with k = 1, 3, 5, 9 nearest neighbors and across two distance metrics: uniform weighting of all k-nearest neighbors and weighting nearest neighbors by the inverse of their distance. For completeness, we trained a CNN with the same architecture as the one used for NeuralSort (except the final layer) using the cross-entropy loss.

Results. We report the classification accuracies on the standard test sets in Table 3. On all three datasets, the differentiable kNN classifier outperforms all the baseline kNN variants, including the convolutional autoencoder approach. The performance is much closer to the accuracy of a standard CNN.

Table 3: Average test kNN classification accuracies from n neighbors for the best value of k.

  Algorithm                        MNIST    Fashion-MNIST   CIFAR-10
  kNN                              97.2%    85.8%           35.4%
  kNN + PCA                        97.6%    85.9%           40.9%
  kNN + AE                         97.6%    87.5%           44.2%
  kNN + Deterministic NeuralSort   99.5%    93.5%           90.7%
  kNN + Stochastic NeuralSort      99.4%    93.4%           89.5%
  CNN (w/o kNN)                    99.4%    93.4%           95.1%
7" + }, + { + "url": "http://arxiv.org/abs/1812.10539v3", + "title": "Uncertainty Autoencoders: Learning Compressed Representations via Variational Information Maximization", + "abstract": "Compressed sensing techniques enable efficient acquisition and recovery of\nsparse, high-dimensional data signals via low-dimensional projections. In this\nwork, we propose Uncertainty Autoencoders, a learning framework for\nunsupervised representation learning inspired by compressed sensing. We treat\nthe low-dimensional projections as noisy latent representations of an\nautoencoder and directly learn both the acquisition (i.e., encoding) and\namortized recovery (i.e., decoding) procedures. Our learning objective\noptimizes for a tractable variational lower bound to the mutual information\nbetween the datapoints and the latent representations. We show how our\nframework provides a unified treatment to several lines of research in\ndimensionality reduction, compressed sensing, and generative modeling.\nEmpirically, we demonstrate a 32% improvement on average over competing\napproaches for the task of statistical compressed sensing of high-dimensional\ndatasets.", + "authors": "Aditya Grover, Stefano Ermon", + "published": "2018-12-26", + "updated": "2019-04-11", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG", + "cs.NE" + ], + "main_content": "INTRODUCTION The goal of unsupervised representation learning is to learn transformations of the input data which succinctly capture the statistics of an underlying data distribution [1]. In this work, we propose a learning framework for unsupervised representation learning inspired by compressed sensing. Compressed sensing is a class of techniques used to e\ufb03ciently acquire and A preliminary version titled \u201dVariational Compressive Sensing using Uncertainty Autoencoders\u201d appeared at the Uncertainty in Deep Learning Workshop at UAI, 2018. Proceedings of the 22nd International Conference on Arti\ufb01cial Intelligence and Statistics (AISTATS) 2019, Naha, Okinawa, Japan. PMLR: Volume 89. Copyright 2019 by the author(s). recover high-dimensional data using compressed measurements much fewer than the data dimensionality. The celebrated results in compressed sensing posit that sparse, high-dimensional datapoints can be acquired using much fewer measurements (roughly logarithmic) than the data dimensionality [2, 3, 4]. The acquisition is done using certain classes of random matrices and the recovery procedure is based on LASSO [5, 6]. The assumptions of sparsity are fairly general and can be applied \u201cout-of-the-box\u201d for many data modalities, e.g., images and audio are typically sparse in the wavelet and Fourier basis respectively. However, such assumptions ignore the statistical nature of many realworld problems. For representation learning in particular, we have access to a training dataset from an underlying domain. In this work, we use this data to learn the acquisition and recovery procedures, thereby sidestepping generic sparsity assumptions. In particular, we view the compressed measurements as the latent representations of an uncertainty autoencoder. An uncertainty autoencoder (UAE) parameterizes both the acquisition and recovery procedures for compressed sensing. The learning objective for a UAE is based on the InfoMax principle, which seeks to learn encodings that maximize the mutual information between the observed datapoints and noisy representations [7]. 
Since the mutual information is typically intractable in high dimensions, we instead maximize tractable variational lower bounds [8, 9]. In doing so, we introduce a parametric decoder that is trained to recover the original datapoint via its noisy representation. Unlike LASSO-based recovery, a parametric decoder amortizes the recovery process, which requires only a forward pass through the decoder at test time and thus enables scalability to large datasets [10, 11]. Notably, the framework of uncertainty autoencoders unifies and extends several lines of prior research in unsupervised representation learning. First, we show theoretically, under suitable assumptions, that an uncertainty autoencoder is an implicit generative model of the underlying data distribution [12]; i.e., a UAE permits sampling from the learned data distribution even though it does not specify an explicit likelihood function. Hence, it directly contrasts with variational autoencoders (VAE), which specify a likelihood function (which is intractable and approximated by a tractable evidence lower bound) [13]. Unlike a VAE, a UAE does not require specifying a prior over the latent representations and hence offsets pathologically observed scenarios that cause the latent representations to be uninformative when used with expressive decoders [14]. Next, we show that an uncertainty autoencoder, under suitable assumptions, is a generalization of principal component analysis (PCA). While earlier results connecting standard autoencoders with PCA assume linear encodings and decodings [15, 16, 17], our result surprisingly holds even for non-linear decodings. In practice, linear encodings learned jointly with non-linear decodings based on the UAE objective vastly outperform the linear encodings obtained via PCA. For dimensionality reduction on the MNIST dataset, we observed an average improvement of 5.33% over PCA when the low-dimensional representations are used for classification under a wide range of settings. We evaluate UAEs for statistical compressed sensing of high-dimensional datasets. On the MNIST, Omniglot, and CelebA datasets, we observe average improvements of 38%, 31%, and 28% in recovery over the closest benchmark across all measurements considered. Finally, we show that uncertainty autoencoders exhibit good generalization across domains, in experiments where the encoder/decoder trained on a source dataset are transferred over for compressed sensing of another target dataset. 2 PRELIMINARIES We use upper case to denote probability distributions and assume they admit absolutely continuous densities on a suitable reference measure, denoted by lower case notation. We also use upper and lower case for random variables and their realizations respectively. Compressed sensing (CS). Let the datapoint and measurements be denoted with multivariate random variables $X \in \mathbb{R}^n$ and $Y \in \mathbb{R}^m$ respectively. The goal is to recover X given the measurements Y. For the purpose of compressed sensing, we assume m < n and relate these variables through a measurement matrix $W \in \mathbb{R}^{m \times l}$ and a parameterized acquisition function $f_\psi : \mathbb{R}^n \to \mathbb{R}^l$ (for any integer l > 0) such that:

$$y = W f_\psi(x) + \epsilon \qquad (1)$$

where $\epsilon$ is the measurement noise.
If we let $f_\psi(\cdot)$ be the identity function (i.e., $f_\psi(x) = x$ for all x), then we recover a standard system of underdetermined linear equations where measurements are linear combinations of the datapoint corrupted by noise. In all other cases, the acquisition function transforms x such that $f_\psi(x)$ is potentially more amenable for compressed sensing. For instance, $f_\psi(\cdot)$ could specify a change of basis that encourages sparsity, e.g., a Fourier basis for audio. Note that we allow the codomain of the mapping $f_\psi(\cdot)$ to be defined on a higher or lower dimensional space (i.e., $l \neq n$ in general). Sparse CS. To obtain nontrivial solutions to an underdetermined system, X is assumed to be sparse in some basis B. We are not given any additional information about X. The measurement matrix W is a random Gaussian matrix and the recovery is done via LASSO [2, 3, 4]. LASSO solves a convex $\ell_1$-minimization problem such that the reconstruction $\hat{x}$ for any datapoint x is given as $\hat{x} = \arg\min_x \|Bx\|_1 + \lambda \|y - Wx\|_2^2$, where $\lambda > 0$ is a tunable hyperparameter. Statistical CS. In statistical compressed sensing [18], we are additionally given access to a set of signals D, such that each $x \in D$ is assumed to be sampled i.i.d. from a data distribution $Q_{\mathrm{data}}$. Using this dataset, we learn the measurement matrix W and the acquisition function $f_\psi(\cdot)$ in Eq. (1). At test time, we directly observe the measurements $y_{\mathrm{test}}$ that are assumed to satisfy Eq. (1) for a target datapoint $x_{\mathrm{test}} \sim Q_{\mathrm{data}}(X)$, and the task is to provide an accurate reconstruction $\hat{x}_{\mathrm{test}}$. Evaluation is based on the reconstruction error between $x_{\mathrm{test}}$ and $\hat{x}_{\mathrm{test}}$. Particularly relevant to this work, we can optionally learn a recovery function $g_\theta : \mathbb{R}^m \to \mathbb{R}^n$ to reconstruct X given the measurements Y. This amortized approach [11] is in contrast to standard LASSO-based decoding, which solves an optimization problem for every new datapoint at test time. If we learned the recovery function $g_\theta(\cdot)$ during training, then $\hat{x}_{\mathrm{test}} = g_\theta(y_{\mathrm{test}})$ and the $\ell_2$ error is given by $\|x_{\mathrm{test}} - g_\theta(y_{\mathrm{test}})\|_2$. Such a recovery process requires only a function evaluation at test time and permits scaling to large datasets [10, 11]. Autoencoders. An autoencoder is a pair of parameterized functions (e, d) designed to encode and decode datapoints. For a standard autoencoder, let $e : \mathbb{R}^n \to \mathbb{R}^m$ and $d : \mathbb{R}^m \to \mathbb{R}^n$ denote the encoding and decoding functions respectively for an n-dimensional datapoint and an m-dimensional latent space. The learning objective minimizes the $\ell_2$ reconstruction error over a dataset D:

$$\min_{e, d} \sum_{x \in D} \|x - d(e(x))\|_2^2 \qquad (2)$$

where the encoding and decoding functions are typically parameterized using neural networks.
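As a concrete instance of the recovery problem above, the sketch below generates noisy random-Gaussian measurements of a sparse signal as in Eq. (1) (with $f_\psi$ the identity) and recovers it with iterative soft thresholding, a standard proximal method for the LASSO objective; the dimensions, noise level, and step counts are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, s = 100, 40, 5                                     # ambient dim, measurements, sparsity

x = np.zeros(n)
x[rng.choice(n, size=s, replace=False)] = rng.normal(size=s)  # sparse signal
W = rng.normal(size=(m, n)) / np.sqrt(m)                 # random Gaussian measurement matrix
y = W @ x + 0.01 * rng.normal(size=m)                    # Eq. (1) with identity f_psi

def ista(W, y, lam=1e-3, iters=500):
    """Iterative soft thresholding for min_x 0.5*||y - Wx||_2^2 + lam*||x||_1."""
    L = np.linalg.norm(W, 2) ** 2                        # Lipschitz constant of the gradient
    x_hat = np.zeros(W.shape[1])
    for _ in range(iters):
        z = x_hat + W.T @ (y - W @ x_hat) / L            # gradient step
        x_hat = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x_hat

print("l2 recovery error:", np.linalg.norm(ista(W, y) - x))
```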
3 UNCERTAINTY AUTOENCODER Consider a joint distribution between the signals X and the measurements Y, which factorizes as $Q_\phi(X, Y) = Q_{\mathrm{data}}(X)\, Q_\phi(Y|X)$. Here, $Q_{\mathrm{data}}(X)$ is a fixed data distribution and $Q_\phi(Y|X)$ is a parameterized observation model that depends on the measurement noise $\epsilon$, as given by Eq. (1). In particular, $\phi$ corresponds collectively to the set of measurement matrix parameters W and the acquisition function parameters $\psi$. For instance, for isotropic Gaussian noise $\epsilon$ with a fixed variance $\sigma^2$, we have $Q_\phi(Y|X) = \mathcal{N}(W f_\psi(X), \sigma^2 I_m)$. In an uncertainty autoencoder, we wish to learn the parameters $\phi$ that permit efficient and accurate recovery of a signal X using the measurements Y. In order to do so, we propose to maximize the mutual information between X and Y:

$$\max_\phi I_\phi(X, Y) = \int q_\phi(x, y) \log \frac{q_\phi(x, y)}{q_{\mathrm{data}}(x)\, q_\phi(y)}\, dx\, dy = H(X) - H_\phi(X|Y) \qquad (3)$$

where H denotes differential entropy. The intuition is simple: if the measurements preserve maximum information about the signal, we can hope that recovery will have low reconstruction error. We formalize this intuition by noting that this objective is equivalent to maximizing the average log-posterior probability of X given Y. In fact, in Eq. (3), we can omit the term corresponding to the data entropy (since it is independent of $\phi$) to get the following equivalent objective:

$$\max_\phi -H_\phi(X|Y) = \mathbb{E}_{Q_\phi(X,Y)}[\log q_\phi(x|y)]. \qquad (4)$$

Even though the mutual information is maximized and equals the data entropy when Y = X, the dimensionality constraint $m \ll n$, the parametric assumptions on $f_\psi(\cdot)$, and the noise model prohibit learning an identity mapping. Note that the properties of the noise $\epsilon$, such as the distributional family and sufficient statistics, are externally specified. For example, these could be specified based on properties of the measurement device for compressed sensing. More generally, for unsupervised representation learning, we treat these properties as hyperparameters tuned based on the reconstruction loss on a held-out set, or any other form of available supervision. It is not suggested to optimize for these statistics during learning, as the UAE would tend to shrink this noise to zero to maximize mutual information, thus ignoring measurement uncertainty in the context of compressed sensing and preventing generalization to out-of-distribution examples for representation learning. The theoretical results in Section 4 analyze the effect of noise more formally. Estimating mutual information between arbitrary high dimensional random variables can be challenging. However, we can lower bound the mutual information by introducing a variational approximation to the model posterior $Q_\phi(X|Y)$ [8]. Denoting this approximation as $P_\theta(X|Y)$, we get the following lower bound:

$$I_\phi(X, Y) \geq H(X) + \mathbb{E}_{Q_\phi(X,Y)}[\log p_\theta(x|y)]. \qquad (5)$$

Comparing Eqs. (3, 4, 5), we can see that the second term in Eq. (5) approximates the intractable negative conditional entropy $-H_\phi(X|Y)$ with a variational lower bound. Optimizing this bound leads to a decoding distribution $P_\theta(X|Y)$ with variational parameters $\theta$. The bound is tight when there is no distortion during recovery, or equivalently, when the decoding distribution $P_\theta(X|Y)$ matches the true posterior $Q_\phi(X|Y)$ (i.e., the Bayes optimal decoder). Stochastic optimization. Formally, the uncertainty autoencoder (UAE) objective is given by:

$$\max_{\theta, \phi} \mathbb{E}_{Q_\phi(X,Y)}[\log p_\theta(x|y)]. \qquad (6)$$

In practice, the data distribution $Q_{\mathrm{data}}(X)$ is unknown and accessible only via a finite dataset D. Hence, expectations with respect to $Q_{\mathrm{data}}(X)$ and its gradients can be estimated using Monte Carlo methods. This allows us to express the UAE objective as:

$$\max_{\theta, \phi} \sum_{x \in D} \mathbb{E}_{Q_\phi(Y|x)}[\log p_\theta(x|y)] := \mathcal{L}(\phi, \theta; D). \qquad (7)$$

Tractable evaluation of the above objective is closely tied to the distributional assumptions on the noise model.
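For the isotropic Gaussian noise model discussed next, objective (7) admits a simple reparameterized Monte Carlo estimator. A minimal PyTorch sketch follows; the linear encoder and the 500-unit decoder mirror the experimental setup, but the architecture and hyperparameters here are illustrative assumptions.

```python
import torch
import torch.nn as nn

n, m, sigma = 784, 25, 0.1                 # data dim, measurements, noise scale

W = nn.Linear(n, m, bias=False)            # linear encoder (measurement matrix)
decoder = nn.Sequential(                   # variational decoder p_theta(x|y)
    nn.Linear(m, 500), nn.ReLU(), nn.Linear(500, n))
opt = torch.optim.Adam(
    list(W.parameters()) + list(decoder.parameters()), lr=1e-3)

def uae_step(x):
    """One stochastic gradient step on Eq. (7). For a Gaussian decoder,
    -log p_theta(x|y) is squared error up to additive constants."""
    y = W(x) + sigma * torch.randn(x.shape[0], m)   # reparameterized y ~ Q_phi(Y|x)
    loss = ((x - decoder(y)) ** 2).sum(dim=1).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

x_batch = torch.rand(64, n)                # stand-in for a minibatch from Q_data
print(uae_step(x_batch))
```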
The noise model could be specified externally based on, e.g., properties of the sensing device in compressed sensing. For the typical case of an isotropic Gaussian noise model, we know that $Q_\phi(Y|X) = \mathcal{N}(W f_\psi(X), \sigma^2 I_m)$, which is easy to sample from. While Monte Carlo gradient estimates with respect to $\theta$ can be efficiently obtained via linearity of expectation, gradient estimation with respect to $\phi$ is challenging since these parameters specify the sampling distribution $Q_\phi(Y|X)$. One solution is to evaluate score function gradient estimates along with control variates [19, 20, 21]. Alternatively, many continuous distributions (e.g., the isotropic Gaussian and Laplace distributions) can be reparameterized such that it is possible to obtain samples by applying a deterministic transformation to samples from a fixed distribution; this typically leads to low-variance gradient estimates [13, 22, 23, 24]. 4 THEORETICAL ANALYSIS In this section, we derive connections of uncertainty autoencoders with generative modeling and Principal Component Analysis (PCA). The proofs of all theoretical results in this section are in Appendix A. 4.1 Implicit generative modeling Starting from an arbitrary point $x^{(0)} \in \mathbb{R}^n$, define a Markov chain over X, Y with the following transitions:

$$y^{(t)} \sim Q_\phi(Y|x^{(t)}) \qquad (8)$$
$$x^{(t+1)} \sim P_\theta(X|y^{(t)}) \qquad (9)$$

Theorem 1. Let $\theta^*, \phi^*$ denote an optimal solution to the UAE objective in Eq. (6). If there exists a $\phi$ such that $q_\phi(x|y) = p_{\theta^*}(x|y)$ and the Markov chain defined in Eqs. (8, 9) is ergodic, then the stationary distribution of the chain for the parameters $\phi^*$ and $\theta^*$ is given by $Q_{\phi^*}(X, Y)$. The above theorem suggests an interesting insight into the behavior of UAEs. Under idealized conditions, the learned model specifies an implicit generative model for $Q_{\phi^*}(X, Y)$. Further, ergodicity can be shown to hold for the isotropic Gaussian noise model. Corollary 1. Let $\theta^*, \phi^*$ denote an optimal solution to the UAE objective in Eq. (6). If there exists a $\phi$ such that $q_\phi(x|y) = p_{\theta^*}(x|y)$ and the noise model is Gaussian, then the stationary distribution of the chain for the parameters $\phi^*$ and $\theta^*$ is given by $Q_{\phi^*}(X, Y)$. The marginal of the joint distribution $Q_\phi(X, Y)$ with respect to X corresponds to the data distribution. A UAE hence seeks to learn an implicit generative model of the data distribution [25, 12]; i.e., even though we do not have a tractable estimate for the likelihood of the model, we can generate samples using the Markov chain transitions defined in Eqs. (8, 9).
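Under the conditions of Theorem 1, the chain of Eqs. (8, 9) can be simulated directly. A sketch, reusing the hypothetical W, decoder, and sigma from the previous snippet, and additionally assuming a Gaussian decoder whose scale matches the encoder noise (an illustrative choice, not something the theorem specifies):

```python
import torch

@torch.no_grad()
def uae_markov_chain(x0, steps=1000):
    """Alternate y ~ Q_phi(Y|x) (Eq. 8) and x ~ P_theta(X|y) (Eq. 9)."""
    x = x0
    for _ in range(steps):
        y = W(x) + sigma * torch.randn(x.shape[0], W.out_features)
        x = decoder(y) + sigma * torch.randn_like(x)   # Gaussian decoder sample
    return x   # approximate sample from the learned data distribution

sample = uae_markov_chain(torch.rand(1, 784))
```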
4.2 Optimal encodings A UAE can also be viewed as a dimensionality reduction technique for the dataset D. While in general the encoding performing this reduction can be nonlinear, the case of a linear encoding is one where the projection vectors are given as the rows of the measurement matrix W. The result below characterizes the optimal encoding of the dataset D with respect to the UAE objective for an isotropic Gaussian noise model. Theorem 2. Assume a uniform data distribution over a finite dataset D. Further, we assume that expectations in the UAE objective exist, and the signals and measurement matrices are bounded in $\ell_2$/Frobenius norms, i.e., $\|x\|_2 \leq k_1$ for all $x \in D$ and $\|W\|_F \leq k_2$, for some positive constants $k_1, k_2 \in \mathbb{R}_+$. For a linear encoder and isotropic Gaussian noise $\epsilon \sim \mathcal{N}(0, \sigma^2 I)$, the optimal measurement matrix $W^*$ that maximizes the mutual information for an optimal decoder in the limit $\sigma \to \infty$ is given as:

$$W^* = \mathrm{eig}_m\Big(\sum_{x_i, x_j \in D} (x_i - x_j)(x_i - x_j)^T\Big)$$

where $\mathrm{eig}_m(M)$ denotes the top-m eigenvectors of the matrix M with the largest eigenvalues (specified up to a positive scaling constant). Under the stated assumptions, the above result suggests an interesting connection between UAE and PCA. PCA seeks to find the directions that explain the most variance in the data. Theorem 2 suggests that when the noise in the projected signal is very high, the optimal projection directions (i.e., the rows of $W^*$) correspond to the principal components of the data signals. We note that this observation comes with a caveat: when the noise variance is high, it will dominate the contribution to the measurements Y in Eq. (1), as one would expect. Hence, the measurements and the signal will have low mutual information even under the optimal measurement matrix $W^*$. Our assumptions are notably different from prior results in autoencoding drawing connections with PCA. Prior results show that linear encoding and decoding in a standard autoencoder recovers the principal components of the data (Eq. (3) in [15], Eq. (1) in [16]). In contrast, Theorem 2 is derived from variational principles and does not assume linear decoding. In general, the behaviors of UAE and PCA can be vastly different. As noted in prior work [8, 26], the principal components may not be the most informative low-dimensional projections for recovering the original high-dimensional data back from its projections. A UAE, on the other hand, is explicitly designed to preserve as much information as possible (see Eq. (4)).
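Theorem 2 is straightforward to check numerically. The sketch below forms the pairwise-difference scatter matrix and extracts its top-m eigenvectors; note that this matrix is a scaled covariance of the data, which is exactly the stated connection to PCA.

```python
import numpy as np

def optimal_encoder_high_noise(X, m):
    """Top-m eigenvectors of sum_{i,j} (x_i - x_j)(x_i - x_j)^T (Theorem 2).
    X: (N, n) dataset; returns an (m, n) measurement matrix, up to scale."""
    diffs = X[:, None, :] - X[None, :, :]          # (N, N, n) pairwise differences
    S = np.einsum('ijk,ijl->kl', diffs, diffs)     # (n, n) scatter matrix
    eigvals, eigvecs = np.linalg.eigh(S)           # eigenvalues in ascending order
    return eigvecs[:, -m:].T

X = np.random.default_rng(0).normal(size=(100, 10))
W_star = optimal_encoder_high_noise(X, m=3)
```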
We illustrate the differences in a synthetic experiment in Figure 1. The true data distribution is an equiweighted mixture of two Gaussians stretched along orthogonal directions. We sample 100 points (black) from this mixture and consider two dimensionality reductions. In the first case, we project the data on the first principal component (blue points on magenta line). This axis captures a large fraction of the variance in the data but collapses data sampled from the bottom right Gaussian in a narrow region. The projections of the data on the UAE axis (red points on green line) are more spread out. This suggests that recovery is easier, even if doing so increases the total variance in the projected space compared to PCA.

[Figure 1: Dimensionality reduction using PCA vs. UAE. Projections of the data (black points) on the UAE direction (green line) maximize the likelihood of decoding, unlike the PCA projection axis (magenta line), which collapses many points in a narrow region.]

5 EXPERIMENTS 5.1 Statistical compressed sensing We perform compressed sensing on three datasets: MNIST [27], Omniglot [28], and the CelebA dataset [29], with an extremely low number of measurements $m \in \{2, 5, 10, 25, 50, 100\}$. We discuss the MNIST and Omniglot datasets here since they have a similar setup. To save space, results on the CelebA dataset are deferred to Appendix B.3. Every image in MNIST and Omniglot has a dimensionality of 28 x 28. In all our experiments, we assume a Gaussian noise model with $\sigma = 0.1$. We evaluated UAE against: (1) LASSO decoding with random Gaussian matrices. The MNIST and Omniglot datasets are reasonably sparse in the canonical pixel basis, and hence we did not observe any gains after applying the Discrete Cosine Transform or the Daubechies-1 Wavelet Transform. (2) CS-VAE. This approach to compressed sensing was proposed by [30] and learns a latent variable generative model over the observed variables X and the latent variables Z. Such a model defines a mapping $G : \mathbb{R}^k \to \mathbb{R}^n$ from Z to X, which is given by either the mean function of the observation model for a VAE or the forward deterministic mapping to generate samples for a GAN. We use VAEs in our experiments. Thereafter, using a classic acquisition matrix W satisfying a generalized Restricted Eigenvalue Condition (e.g., random Gaussian matrices), the reconstruction $\hat{x}$ for any datapoint is given as $\hat{x} = G(\arg\min_z \|y - WG(z)\|_2)$. Intuitively, this procedure seeks the latent vector z such that the corresponding point on the range of G can best approximate the measurements y under the mapping W. We used the default parameter settings and architectures proposed in [30]. (3) RP-UAE. To independently evaluate the effect of variational decoding, this ablation baseline encodes the data using Gaussian random projections (RP) and trains the decoder based on the UAE objective. Since LASSO and CS-VAE both use an RP encoding, the differences in performance would arise only due to the decoding procedures. The UAE decoder and the CS-VAE encoder/decoder are multi-layer perceptrons consisting of two hidden layers with 500 units each. For a fair comparison with random Gaussian matrices, the UAE encoder is linear. Further, we perform $\ell_2$ regularization on the norm of W. This helps in generalization to test signals outside the train set and is equivalent to solving the Lagrangian of a constrained UAE objective:

$$\max_{\theta, \phi} \mathbb{E}_{Q_\phi(X,Y)}[\log p_\theta(x|y)] \quad \text{subject to} \quad \|W\|_F \leq k.$$

The Lagrangian parameter is chosen by line search on the above objective. The constraint ensures that the UAE does not learn encodings W that trivially scale the measurement matrix to overcome noise. For each m, we choose k to be the expected norm of a random Gaussian matrix of dimensions n x m, for fair comparison with the other baselines. In practice, the norm of the learned W for a UAE is much smaller than those of random Gaussian matrices, suggesting that the observed performance improvements are non-trivial.
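One simple way to respect the Frobenius constraint during training is to project W back onto the norm ball after each optimizer step; the paper itself solves a Lagrangian with line search, so the projection below is only an illustrative alternative (reusing the hypothetical W from the earlier sketch).

```python
import torch

def project_frobenius(linear, k):
    """Rescale the weight in place so that ||W||_F <= k."""
    with torch.no_grad():
        norm = linear.weight.norm(p='fro')
        if norm > k:
            linear.weight.mul_(k / norm)

# The expected Frobenius norm of an m x n standard Gaussian matrix is
# roughly sqrt(m * n), matching the choice of k described above.
project_frobenius(W, k=(25 * 784) ** 0.5)   # call after each opt.step()
```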
Results. The $\ell_2$ reconstruction errors on the standard test sets are shown in Figure 2.

[Figure 2: Test $\ell_2$ reconstruction error (per image) for compressed sensing; panels (a) MNIST and (b) Omniglot.]

[Figure 3: Reconstructions for m = 25. Top: Original. Second: LASSO. Third: CS-VAE. Last: UAE. 25 projections of the data are sufficient for UAE to reconstruct the original image with high accuracy.]

For both datasets, we observe that UAE drastically outperforms both LASSO and CS-VAE for all values of m considered. LASSO (blue curves) is unable to reconstruct with such few measurements. The CS-VAE (red) error decays much more slowly compared to UAE as m grows. Even the RP-UAE baseline (yellow), which trains the decoder keeping the encoding fixed to a random projection, outperforms CS-VAE. Jointly training the encoder and the decoder using the UAE objective (green) exhibits the best performance. These results are also reflected qualitatively for the reconstructed test signals shown in Figure 3 for m = 25 measurements. 5.2 Transfer compressed sensing To test the generalization of the learned models to similar, unseen datasets, we consider the transfer compressed sensing task introduced in [31]. Experimental setup. We train the models on a source domain that is related to a target domain. Since the dimensions of MNIST and Omniglot images match, transferring from one domain to another requires no additional processing. For UAE, we consider two variants. In UAE-SE, we use the encodings from the source domain and retrain the decoder on the target domain. For UAE-SD, we use the source decoder and retrain the encoder on the target domain. Results. The $\ell_2$ reconstruction errors are shown in Figure 4.

[Figure 4: Test $\ell_2$ reconstruction error (per image) for transfer compressed sensing; panels (a) Source: MNIST, Target: Omniglot and (b) Source: Omniglot, Target: MNIST.]

[Figure 5: Reconstructions for m = 25. Top: Target. Second: CS-VAE. Third: UAE-SD. Last: UAE-SE.]

LASSO (blue curves) does not involve any learning, and hence its performance is the same as in Figure 2. The CS-VAE (red) performance degrades significantly in comparison, even performing worse than LASSO in some cases. The UAE-based methods outperform these approaches, and UAE-SE (green) fares better than UAE-SD (yellow). Qualitative differences are highlighted in Figure 5 for m = 25 measurements.
5.3 Dimensionality reduction Dimensionality reduction is a common preprocessing technique for specifying features for classification. We compare PCA and UAE on this task. While Theorem 2 posits that the two techniques are equivalent in the regime of high noise given optimal UAE decodings, we set the noise as a hyperparameter based on a validation set to enable out-of-sample generalization. Setup. We learn the principal components and UAE projections on the MNIST training set for a varying number of dimensions. We then learn classifiers based on these projections. Again, we use a linear encoder for the UAE for a fair evaluation. Since the inductive biases vary across different classifiers, we considered 8 commonly used classifiers: k-Nearest Neighbors (kNN), Decision Trees (DT), Random Forests (RF), Multilayer Perceptron (MLP), AdaBoost (AdaB), Gaussian Naive Bayes (NB), Quadratic Discriminant Analysis (QDA), and Support Vector Machines (SVM) with a linear kernel. Results. The performance of the PCA and UAE feature representations for different numbers of dimensions is shown in Table 1.

Table 1: PCA vs. UAE. Average test classification accuracy for the MNIST dataset.

| Dimensions | Method | kNN | DT | RF | MLP | AdaB | NB | QDA | SVM |
| 2 | PCA | 0.4078 | 0.4283 | 0.4484 | 0.4695 | 0.4002 | 0.4455 | 0.4576 | 0.4503 |
| 2 | UAE | 0.4644 | 0.5085 | 0.5341 | 0.5437 | 0.4248 | 0.5226 | 0.5316 | 0.5256 |
| 5 | PCA | 0.7291 | 0.5640 | 0.6257 | 0.7475 | 0.5570 | 0.6587 | 0.7321 | 0.7102 |
| 5 | UAE | 0.8115 | 0.6331 | 0.7094 | 0.8262 | 0.6164 | 0.7286 | 0.7961 | 0.7873 |
| 10 | PCA | 0.9257 | 0.6354 | 0.6956 | 0.9006 | 0.7025 | 0.7789 | 0.8918 | 0.8440 |
| 10 | UAE | 0.9323 | 0.5583 | 0.7362 | 0.9258 | 0.7165 | 0.7895 | 0.9098 | 0.8753 |
| 25 | PCA | 0.9734 | 0.6382 | 0.6889 | 0.9521 | 0.7234 | 0.8635 | 0.9572 | 0.9194 |
| 25 | UAE | 0.9730 | 0.5407 | 0.7022 | 0.9614 | 0.7398 | 0.8306 | 0.9580 | 0.9218 |
| 50 | PCA | 0.9751 | 0.6381 | 0.6059 | 0.9580 | 0.7390 | 0.8786 | 0.9632 | 0.9376 |
| 50 | UAE | 0.9754 | 0.5424 | 0.6765 | 0.9597 | 0.7330 | 0.8579 | 0.9638 | 0.9384 |
| 100 | PCA | 0.9734 | 0.6380 | 0.4040 | 0.9584 | 0.7136 | 0.8763 | 0.9570 | 0.9428 |
| 100 | UAE | 0.9731 | 0.6446 | 0.6241 | 0.9597 | 0.7170 | 0.8809 | 0.9595 | 0.9431 |

We find that UAE outperforms PCA in a majority of the cases. Further, this trend is largely consistent across classifiers. The improvements are especially high when the number of dimensions is low, suggesting the benefits of UAE as a dimensionality reduction technique for classification. 6 RELATED WORK In this section, we contrast uncertainty autoencoders with related works in autoencoding, compressed sensing, and mutual information maximization. Autoencoders. To contrast uncertainty autoencoders with other commonly used autoencoding schemes, consider a Gaussian observation model with fixed isotropic covariance for the decoder in all of the autoencoding objectives we discuss subsequently. The UAE objective can then be simplified as:

$$\min_{\theta, \phi} \mathbb{E}_{x, y \sim Q_\phi(X,Y)}\left[\|x - g_\theta(y)\|_2^2\right]$$

Standard Autoencoder. If we assume no measurement noise (i.e., $\epsilon = 0$) and assume the observation model $P_\theta(X|Y)$ to be a Gaussian with mean $g_\theta(Y)$ and a fixed isotropic covariance $\Sigma$, then the UAE objective reduces to minimizing the mean squared error between the true and recovered datapoint:

$$\min_{\theta, W, \psi} \mathbb{E}_{x \sim Q_{\mathrm{data}}(X)}\left[\|x - g_\theta(W f_\psi(x))\|_2^2\right]$$

This special case of a UAE corresponds to a standard autoencoder [32], where the measurements Y signify a hidden representation for X. However, this case lacks the interpretation of an implicit generative model since the assumptions of Theorem 1 do not hold. Denoising Autoencoders. A DAE [33] adds noise at the level of the input datapoint X to learn robust representations. For a UAE, the noise model is defined at the level of the compressed measurements. Again, with the assumption of a Gaussian decoder, the DAE objective can be expressed as:

$$\min_{\theta, W, \psi} \mathbb{E}_{x \sim Q_{\mathrm{data}}(X),\, \tilde{x} \sim C(\tilde{X}|x)}\left[\|x - g_\theta(W f_\psi(\tilde{x}))\|_2^2\right]$$

where $C(\cdot|X)$ is some predefined noise corruption model. Similar to Theorem 1, a DAE also learns an implicit model of the data distribution [34, 35].
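The objectives above differ mainly in where stochasticity enters. The schematic sketch below makes this explicit; enc, dec, and corrupt are hypothetical callables standing in for $W f_\psi(\cdot)$, $g_\theta(\cdot)$, and $C(\cdot|X)$.

```python
import torch

def ae_loss(x, enc, dec):
    """Standard autoencoder: no noise anywhere."""
    return ((x - dec(enc(x))) ** 2).sum(dim=1).mean()

def dae_loss(x, enc, dec, corrupt):
    """Denoising autoencoder: corrupt the *input* x before encoding."""
    return ((x - dec(enc(corrupt(x)))) ** 2).sum(dim=1).mean()

def uae_loss(x, enc, dec, sigma):
    """Uncertainty autoencoder: perturb the *measurements* enc(x)."""
    y = enc(x)
    return ((x - dec(y + sigma * torch.randn_like(y))) ** 2).sum(dim=1).mean()
```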
Variational Autoencoders. A VAE [13, 22] explicitly learns a latent variable model $P_\theta(X, Y)$ for the dataset. The learning objective is a variational lower bound to the marginal log-likelihood assigned by the model to the data X, which notationally corresponds to $\mathbb{E}_{Q_{\mathrm{data}}(X)}[\log p_\theta(x)]$. The variational objective that maximizes this quantity can be simplified as:

$$\min_{\theta, \phi} \mathbb{E}_{x, y \sim Q_\phi(X,Y)}\left[\|x - g_\theta(y)\|_2^2\right] + \mathbb{E}_{x \sim Q_{\mathrm{data}}}\left[\mathrm{KL}(Q_\phi(Y|x), P(Y))\right]$$

The learning objective includes a reconstruction error term, akin to the UAE objective. Crucially, it also includes a regularization term that minimizes the KL divergence of the variational posterior over Y from a prior distribution over Y. A key difference is that a UAE does not explicitly need to model the prior distribution over Y. On the downside, a VAE can perform efficient ancestral sampling, while a UAE requires running relatively expensive Markov chains to obtain samples. Recent works have attempted to unify the variants of variational autoencoders through the lens of mutual information [36, 37, 14]. These works also highlight scenarios where the VAE can learn to ignore the latent code in the presence of a strong decoder, thereby affecting the reconstructions to attain a lower KL loss. One particular variant, the beta-VAE, weighs the additional KL regularization term with a positive factor $\beta$ and can effectively learn disentangled representations [38, 39]. Although [38] does not consider this case, the UAE can be seen as a beta-VAE with $\beta = 0$. To summarize, our uncertainty autoencoding formulation provides a combination of unique desirable properties for representation learning that are absent in prior autoencoders. As discussed, a UAE defines an implicit generative model without specifying a prior (Theorem 1) even under realistic conditions (Corollary 1; unlike DAEs) and has rich connections with PCA even for non-linear decoders (Theorem 2; unlike any kind of existing autoencoder). Generative modeling and compressed sensing. The closely related works of [30, 31] also use generative models for compressed sensing. As highlighted in Section 5, their approach is radically different from UAE. Similar to [30], a UAE learns a data distribution. However, in doing so, it additionally learns an acquisition/encoding function and a recovery/decoding function, unlike [30, 31], which rely on generic random matrices and $\ell_2$ decoding. The cost of implicit learning in a UAE is that some of its inference capabilities, such as likelihood evaluation and sampling, are intractable or require running Markov chains. However, these inference queries are orthogonal to compressed sensing. Finally, our decoding is amortized and scales to large datasets, unlike [30, 31], which solve an independent optimization problem for each test datapoint. Mutual information maximization. The principle of mutual information maximization, often referred to as InfoMax in prior work, was first proposed for learning encodings for communication over a noisy channel [7]. The InfoMax objective has also been applied to statistical compressed sensing for learning both linear and non-linear encodings [26, 40, 41]. Our work differs from these existing frameworks in two fundamental ways. First, we optimize a tractable variational lower bound to the mutual information, which allows our method to scale to high-dimensional data. Second, we learn an amortized [10, 11] decoder in addition to the encoder, which sidesteps expensive, per-example optimization for test datapoints. Further, we improve upon the IM algorithm proposed originally for variational information maximization [8].
While the IM algorithm optimizes the lower bound on the mutual information in alternating "wake-sleep" phases, optimizing the encoder ("wake") and decoder ("sleep") analogously to the expectation-maximization procedure used in [26], we optimize the encoder and decoder jointly using a single consistent objective, leveraging recent advances in gradient-based variational stochastic optimization." + }, + { + "url": "http://arxiv.org/abs/1811.09813v1", + "title": "Streamlining Variational Inference for Constraint Satisfaction Problems", + "abstract": "Several algorithms for solving constraint satisfaction problems are based on\nsurvey propagation, a variational inference scheme used to obtain approximate\nmarginal probability estimates for variable assignments. These marginals\ncorrespond to how frequently each variable is set to true among satisfying\nassignments, and are used to inform branching decisions during search; however,\nmarginal estimates obtained via survey propagation are approximate and can be\nself-contradictory. We introduce a more general branching strategy based on\nstreamlining constraints, which sidestep hard assignments to variables. We show\nthat streamlined solvers consistently outperform decimation-based solvers on\nrandom k-SAT instances for several problem sizes, shrinking the gap between\nempirical performance and theoretical limits of satisfiability by 16.3% on\naverage for k=3,4,5,6.", + "authors": "Aditya Grover, Tudor Achim, Stefano Ermon", + "published": "2018-11-24", + "updated": "2018-11-24", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG", + "cs.LO", + "stat.ML" + ], + "main_content": "Introduction Constraint satisfaction problems (CSP), such as boolean satisfiability (SAT), are useful modeling abstractions for many artificial intelligence and machine learning problems, including planning [13], scheduling [27], and logic-based probabilistic modeling frameworks such as Markov Logic Networks [30]. More broadly, the ability to combine constraints capturing domain knowledge with statistical reasoning has been successful across diverse areas such as ontology matching, information extraction, entity resolution, and computer vision [15, 4, 32, 29, 33]. Solving a CSP involves finding an assignment to the variables that renders all of the problem's constraints satisfied, if one exists. Solvers that explore the search space exhaustively do not scale, since the state space is exponential in the number of variables; thus, the selection of branching criteria for variable assignments is the central design decision for improving the performance of these solvers [5]. Any CSP can be represented as a factor graph, with variables as nodes and the constraints between these variables (known as clauses in the SAT case) as factors. With such a representation, we can design branching strategies by inferring the marginal probabilities of each variable assignment. Intuitively, the variables with more extreme marginal probability for a particular value are more likely to assume that value across the satisfying assignments to the CSP. In fact, if we had access to an oracle that could perform exact inference, one could trivially branch on variable assignments with non-zero marginal probability and efficiently find solutions (if one exists) to hard CSPs such as SAT in time linear in the number of variables.
In practice however, exact inference is intractable for even moderately sized CSPs, and approximate inference techniques are essential for obtaining estimates of marginal probabilities. Variational inference is at the heart of many such approximate inference techniques. The key idea is to cast inference over an intractable joint distribution as an optimization problem over a family of tractable approximations to the true distribution [6, 34, 38]. Several such approximations exist, e.g., mean field, belief propagation, etc. In this work, we focus on survey propagation.

[Figure 1: Factor graph for a 3-SAT instance with 5 variables (circles) and 3 clauses (squares). A solid (dashed) edge between a clause and a variable indicates that the clause contains the variable as a positive (negative) literal. This instance corresponds to $(\neg x_i \vee x_k \vee \neg x_l) \wedge (x_i \vee x_j \vee \neg x_k) \wedge (x_k \vee x_l \vee \neg x_m)$, with the clauses a, b, c listed in order.]

Inspired from statistical physics, survey propagation is a message-passing algorithm that corresponds to belief propagation in a "lifted" version of the original CSP and underlies many state-of-the-art solvers for random CSPs [24, 22, 21]. Existing branching rules for survey propagation iteratively pick variables with the most confident marginals and fix their values (by adding unary constraints on these variables) in a process known as decimation. This heuristic works well in practice, but struggles with a high variance in the success of branching, as the unary constraints leave the survey inspired decimation algorithm unable to recover in the event that a contradictory assignment (i.e., one that cannot be completed to form a satisfying assignment) is made. Longer branching predicates, defined over multiple variables, have lower variance and are more effective both in theory and practice [14, 1, 2, 36, 19, 18]. In this work, we introduce improved branching heuristics for survey propagation by extending this idea to CSPs; namely, we show that branching on more complex predicates than single-variable constraints greatly improves survey propagation's ability to find solutions to CSPs. Appealingly, these more complex, multi-variable predicates, which we refer to as streamlining constraints, can be easily implemented as additional factors (not necessarily unary anymore) in message-passing algorithms such as survey propagation. For this reason, branching on more complex predicates is a natural extension to survey propagation. Using these new branching heuristics, we develop an algorithm and empirically benchmark it on families of random CSPs. Random CSPs exhibit sharp phase transitions between satisfiable and unsatisfiable instances and are an important model for analyzing the average hardness of CSPs, both in theory and practice [25, 26]. In particular, we consider two such CSPs: k-SAT, where constraints are restricted to disjunctions involving exactly k (possibly negated) variables [3], and XORSAT, which substitutes the disjunctions in k-SAT for XOR constraints of fixed length. On both these problems, our proposed algorithm outperforms the competing survey inspired decimation algorithm that branches on just single variables, increasing solver success rates.
2 Preliminaries Every CSP can be encoded as a boolean SAT problem expressed in Conjunctive Normal Form (CNF), and we will use this representation for the remainder of this work. Let V and C denote index sets for n Boolean variables and m clauses respectively. A literal is a variable or its negation; a clause is a disjunction of literals. A CNF formula F is a conjunction of clauses, and is written as $(l_{11} \vee \ldots \vee l_{1k_1}) \wedge \ldots \wedge (l_{m1} \vee \ldots \vee l_{mk_m})$. Each $(l_{j1} \vee \ldots \vee l_{jk_j})$ is a clause with $k_j$ literals. For notational convenience, the variables will be indexed with letters $i, j, k, \ldots$ and the clauses will be indexed with letters $a, b, c, \ldots$. Each variable i is Boolean, taking values $x_i \in \{0, 1\}$. A formula is satisfiable if there exists an assignment to the variables such that all the clauses are satisfied, where a clause is satisfied if at least one literal evaluates to true. Any SAT instance can be represented as an undirected graphical model where each clause corresponds to a factor and is connected to the variables in its scope. Given an assignment to the variables in its scope, a factor evaluates to 1 if the corresponding clause evaluates to True, and 0 otherwise. The corresponding joint probability distribution is uniform over the set of satisfying assignments. An example factor graph illustrating the use of our notation is given in Figure 1. k-SAT formulas are ones where all clauses $(l_{j1} \vee \ldots \vee l_{jk_j})$ have exactly k literals, i.e., $k_j = k$ for $j = 1, \ldots, m$. Random k-SAT instances are generated by choosing each literal's variable and negation independently and uniformly at random in each of the m clauses. It has been shown that these instances have a very distinctive behavior, where the probability of an instance having a solution undergoes a phase transition as a function of the constraint density, $\alpha = m/n$, for a problem with m clauses and n variables, for large enough k. These instances exhibit a sharp crossover at a threshold density $\alpha_s(k)$: they are almost always satisfiable below this threshold, and they become unsatisfiable for larger constraint densities [12, 10]. Empirically, random instances with constraint density close to the satisfiability threshold are difficult to solve [23].
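Concretely, CNF formulas are commonly manipulated as lists of signed integer literals (the DIMACS convention). The instance from Figure 1, with variables i, j, k, l, m numbered 1 through 5, together with a small satisfiability check, looks as follows:

```python
# Figure 1 instance: (-i | k | -l) & (i | j | -k) & (k | l | -m),
# with negative integers denoting negated variables.
clauses = [[-1, 3, -4], [1, 2, -3], [3, 4, -5]]

def evaluate(clauses, assignment):
    """assignment maps variable index -> bool; True iff every clause holds."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

print(evaluate(clauses, {1: True, 2: False, 3: True, 4: False, 5: True}))  # True
```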
2.1 Survey propagation The base algorithm used in many state-of-the-art solvers for constraint satisfaction problems such as random k-SAT is survey inspired decimation [7, 24, 16, 23]. The algorithm employs survey propagation, a message passing procedure that computes approximate single-variable marginal probabilities for use in a decimation procedure. Our approach uses the same message passing procedure, and we review it here for completeness. Survey propagation is an iterative procedure for estimating variable marginals in a factor graph. In the context of a factor graph corresponding to a Boolean formula, these marginals represent approximately the probability of a variable taking on a particular assignment when sampling uniformly from the set of satisfying assignments of the formula. Survey propagation considers three kinds of assignments for a variable: 0, 1, or unconstrained (denoted by *). A high value for the marginals corresponding to either of the first two assignments indicates that the variable assuming that particular value makes it likely for the overall formula to be satisfiable, whereas a high value for the unconstrained marginal indicates that satisfiability is likely regardless of the variable assignment. In order to estimate these marginals from a factor graph, we follow a message passing protocol where we first compute survey messages for each edge in the graph. There are two kinds of survey messages: messages $\{\eta_{i \to a}\}_{i \in V, a \in C(i)}$ from variable nodes i to clauses a, and messages $\{\eta_{a \to i}\}_{a \in C, i \in V(a)}$ from clauses to variables. These messages can be interpreted as warnings of unsatisfiability. 1. If we let V(a) be the set of variables appearing in clause a, then the message sent from a clause a to variable i, $\eta_{a \to i}$, is intuitively the probability that all variables in $V(a) \setminus \{i\}$ are in the state that violates clause a. Hence, clause a is issuing a warning to variable i. 2. The reverse message from variable i to clause a for some value $x_i$, $\eta_{i \to a}$, is interpreted as the probability of variable i assuming the value $x_i$ that violates clause a.

Algorithm 1 SurveyInspiredDecimation(V, C)
1: Initialize V <- V and C <- C
2: Initialize messages {eta_{a->i}}_{a in C, i in V(a)} at random
3: while (sum_i |mu_i(0) - mu_i(1)| > eps) do
4:   (Message passing inference)
5:   repeat
6:     {eta_{a->i}} <- SP-Update(V, C, {eta_{a->i}})
7:   until convergence to {eta*_{a->i}}
8:   for i = 1, ..., |V| do
9:     mu_i(0), mu_i(1), mu_i(*) <- Marginalize(V, C, {eta_{a->i}})
10:  end for
11:  (Branching / Decimation)
12:  Choose i* <- argmax_{i in V} |mu_i(0) - mu_i(1)|
13:  Set y* <- argmax_{y in {0,1}} mu_{i*}(y)
14:  (Simplification)
15:  Update V, C <- UnitPropagate(V, C union {x_{i*} = y*})
16: end while
17: return LocalSearch(V, C)

As shown in Algorithm 1, the messages from factors (clauses) to variables $\eta_{a \to i}$ are initialized randomly [Line 2] and updated until a predefined convergence criterion is met [Lines 5-7]. Once the messages converge to $\eta^*_{a \to i}$, we can estimate the approximate marginals $\mu_i(0), \mu_i(1), \mu_i(*)$ for each variable i. In case survey propagation does not converge even after repeated runs, or a contradiction is found, the algorithm output is UNSAT. The message passing updates SP-Update [Line 6] and the marginalization procedure Marginalize [Line 9] are deferred to Appendix A for ease of presentation. We refer the reader to [24] and [7] for a detailed analysis of the algorithm and connections to statistical physics. 2.2 Decimation and Simplification The magnetization of a variable i, defined as $M(i) := |\mu_i(0) - \mu_i(1)|$, is used as a heuristic bias to determine how constrained the variable is to take a particular value. (Other heuristic biases are also possible; for instance, [23] use the bias $1 - \min(\mu_i(1), \mu_i(0))$.) The magnetization is at most one, which occurs when either of the marginals is one, and at least zero, which occurs when the estimated marginals are equal. The decimation procedure involves setting the variable(s) with the highest magnetization(s) to their most likely values based on the relative magnitude of $\mu_i(0)$ vs. $\mu_i(1)$ [Lines 12-13]. The algorithm then branches on these variable assignments and simplifies the formula by unit propagation [Line 15]. In unit propagation, we recursively iterate over all the clauses that the decimated variable appears in. If the polarity of the variable in a literal matches its assignment, the clause is satisfied and hence the corresponding clause node and all its incident variable edges are removed from the factor graph. If the polarity in the literal does not match the assignment, only the edge originating from this particular variable node incident to the clause node is removed from the graph. For example, setting variable k to 0 in Figure 1 leads to the removal of edges incident to k from a and c, as well as all outgoing edges from b (because b is satisfied). 2.3 Survey Inspired Decimation The full iterative process of survey propagation (on the simplified graph from the previous iteration) followed by decimation is continued until a satisfying assignment is found, or a stopping condition is reached beyond which the instance is assumed to be sufficiently easy for local search using a standard algorithm such as WalkSAT [31]. Note that when the factor graph is a tree and survey propagation converges to the exact warning message probabilities, Algorithm 1 is guaranteed to select good variables to branch on and to find a solution (assuming one exists). However, the factor graphs for CSPs are far from tree-like in practice, and thus the main factor affecting the success of survey inspired decimation is the quality of the estimated marginals. If these estimates are inaccurate, it is possible that the decimation procedure chooses to fix variables in contradictory configurations. To address this issue, we propose to use streamlining constraints.
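The control flow of Algorithm 1 is easy to prototype. In the runnable sketch below, exact marginals computed by enumeration stand in for survey propagation, so it only works on tiny instances such as the toy formula above and does not handle contradictions; the real SP-Update and Marginalize routines are deferred to Appendix A of the paper.

```python
from itertools import product

def exact_marginals(n_vars, clauses):
    """Brute-force stand-in for SP: exact marginals over satisfying assignments."""
    counts = {v: [0, 0] for v in range(1, n_vars + 1)}
    total = 0
    for bits in product([False, True], repeat=n_vars):
        a = dict(zip(range(1, n_vars + 1), bits))
        if evaluate(clauses, a):             # reuses evaluate() from the sketch above
            total += 1
            for v in a:
                counts[v][a[v]] += 1
    return {v: (c[0] / total, c[1] / total) for v, c in counts.items()}

def decimate(n_vars, clauses):
    """Fix the most 'magnetized' free variable to its likelier value by
    appending a unit clause, until all variables are fixed."""
    clauses, free = list(clauses), set(range(1, n_vars + 1))
    while free:
        mu = exact_marginals(n_vars, clauses)
        v = max(free, key=lambda u: abs(mu[u][0] - mu[u][1]))
        clauses.append([v if mu[v][1] >= mu[v][0] else -v])   # unit clause
        free.remove(v)
    return clauses

print(decimate(5, clauses))   # the appended unit clauses spell out a solution
```

With exact marginals, the chosen value always has probability at least 1/2, so decimation can never reach a contradiction; it is precisely this guarantee that approximate survey propagation marginals lack.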
3 Streamlining survey propagation Combinatorial optimization algorithms critically depend on good heuristics for deciding where to branch during search [5]. Survey propagation provides a strong source of information for the decimation heuristic. As discussed above, the approximate nature of message-passing implies that the "signal" might be misleading. We now describe a more effective way to use the information from survey propagation. Whenever we have a combinatorial optimization problem over $X = \{0, 1\}^n$ and wish to find a solution $s \in S \subseteq X$, we may augment the original feasibility problem with constraints that partition the state space X into disjoint state spaces and recursively search the resulting subproblems. Such partitioning constraints can significantly simplify search by exploiting the structure of the solution set S and are known as streamlining constraints [17]. Good streamlining constraints will provide a balance between yielding significant shrinkage of the search space and safely avoiding reductions in the solution density of the resulting subproblems. Partitioning the space based on the value of a single variable (as in decimation) performs well on the former at the cost of the latter. We therefore introduce a different constraining strategy that strives to achieve a more balanced trade-off. 3.1 Streamlining constraints for constraint satisfaction problems The success of survey inspired decimation relies on the fact that the marginals carry some signal about the likely assignments of variables. However, the factor graph becomes more dense as the constraint density approaches the phase transition threshold, making it harder for survey propagation to converge in practice. This suggests that the marginals might provide a weaker signal to the decimation procedure in early iterations. Instead of selecting a variable to freeze in some configuration as in decimation, e.g., $x_i = 1$, we propose a strictly more general streamlining approach where we use disjunction constraints between subsets of highly magnetized variables, e.g., $(x_i \vee x_j) = 1$. The streamlined constraints can cut out smaller regions of the search space while still making use of the magnetization signal. For instance, introducing a disjunction constraint between any pair of variables reduces the state space by a factor of 4/3 (since three out of four possible variable assignments satisfy the clause), in contrast to the decimation procedure in Algorithm 1, which reduces the state space by a factor of 2. Intuitively, when branching with a length-2 clause such as $(x_i \vee x_j)$, we make an (irreversible) mistake only if we guess the values of both variables wrong; a brute-force check of these reduction factors is sketched at the end of this subsection. Decimation can also be seen as a special case of streamlining for the same choice of literal. To see why, we note that in the above example the acceptable variable assignments for decimation, $(x_i, x_j) \in \{(1, 0), (1, 1)\}$, are a subset of the valid assignments for streamlining, $(x_i, x_j) \in \{(1, 0), (1, 1), (0, 1)\}$. The success of the streamlining constraints is strongly governed by the literals selected for participating in these added disjunctions. Disjunctions could in principle involve any number of literals, and longer disjunctions result in more conservative branching rules. But there are diminishing returns with increasing length, and so we restrict ourselves to disjunctions of length at most two in this paper. Longer clauses can in principle be handled by the inference procedure used by message-passing algorithms, and we leave an exploration of this extension to future work.
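The 4/3 versus 2 reduction factors quoted above are easy to verify by brute force, reusing evaluate() from the earlier sketch:

```python
from itertools import product

def count_assignments(n_vars, clauses):
    """Number of assignments to all n_vars variables satisfying the clauses."""
    return sum(
        evaluate(clauses, dict(zip(range(1, n_vars + 1), bits)))
        for bits in product([False, True], repeat=n_vars)
    )

print(count_assignments(2, [[1]]) / 4)      # decimation x1 = 1 keeps 1/2
print(count_assignments(2, [[1, 2]]) / 4)   # streamlining (x1 or x2) keeps 3/4
```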
3.2 Survey Inspired Streamlining The pseudocode for survey inspired streamlining is given in Algorithm 2. The algorithm replaces the decimation step of survey inspired decimation with a streamlining procedure that adds disjunction constraints to the original formula [Line 16], thereby making the problem increasingly constrained until the search space can be efficiently explored by local search. For designing disjunctions, we consider candidate variables with the highest magnetizations, similar to decimation. If a variable i is selected, the polarity of the literal containing the variable is positive if $\mu_i(1) > \mu_i(0)$ and negative otherwise [Lines 12-15]. Disjunctions use the signal from the survey propagation messages without overcommitting to a particular variable assignment too early (as in decimation). Specifically, without loss of generality, if we are given marginals $\mu_i(1) > \mu_i(0)$ and $\mu_j(1) > \mu_j(0)$ for variables i and j, the new update adds the streamlining constraint $x_i \vee x_j$ to the problem instead of overcommitting by constraining i or j to its most likely state. This approach leverages the signal from survey propagation, namely that it is unlikely for $\neg x_i \wedge \neg x_j$ to be true, while also allowing for the possibility that one of the two marginals may have been estimated incorrectly. As long as streamlined constraints and decimation use the same bias signal (such as magnetization) for ranking candidate variables, adding streamlined constraints through the above procedure is guaranteed not to degrade performance compared with the decimation strategy, in the following sense. Proposition 1. Let F be a formula under consideration for satisfiability, $F_d$ be the formula obtained after one round of survey inspired decimation, and $F_s$ be the formula obtained after one round of survey inspired streamlining. If $F_d$ is satisfiable, then so is $F_s$. Proof. Because unit propagation is sound, the formula obtained after one round of survey inspired decimation is satisfiable if and only if $(F \wedge \ell_{i^*})$ is satisfiable, where the literal $\ell_{i^*}$ denotes either $x_{i^*}$ or $\neg x_{i^*}$. By construction, the formula obtained after one round of streamlining is $F \wedge (\ell_{i^*} \vee \ell_{j^*})$. It is clear that if $(F \wedge \ell_{i^*})$ is satisfiable, so is $F \wedge (\ell_{i^*} \vee \ell_{j^*})$. Clearly, the converse need not be true. 3.3 Algorithmic design choices A practical implementation of survey inspired streamlining requires setting some design hyperparameters. These hyperparameters have natural interpretations, as discussed below.

Algorithm 2 SurveyInspiredStreamlining(V, C, T)
1: Initialize V <- V and C <- C
2: Initialize messages {eta_{a->i}}_{a in C, i in V(a)} at random
3: while sum_i |mu_i(0) - mu_i(1)| >= eps do
4:   repeat
5:     {eta_{a->i}} <- SP-Update(V, C, {eta_{a->i}})
6:   until convergence to {eta*_{a->i}}
7:   for i = 1, ..., |V| do
8:     mu_i(0), mu_i(1), mu_i(*) <- Marginalize(V, C, {eta_{a->i}})
9:   end for
10:  if t < T then
11:    (Add Streamlining Constraints)
12:    Choose i* <- argmax_{i in V} |mu_i(0) - mu_i(1)|
13:    Choose j* <- argmax_{i in V, i != i*} |mu_i(0) - mu_i(1)|
14:    Set y* <- argmax_{y in {0,1}} mu_{i*}(y)
15:    Set w* <- argmax_{y in {0,1}} mu_{j*}(y)
16:    C <- C union {x_{i*} = y* or x_{j*} = w*}
17:  else
18:    Choose i* <- argmax_{i in V} |mu_i(0) - mu_i(1)|
19:    Set y* <- argmax_{y in {0,1}} mu_{i*}(y)
20:    V, C <- UnitPropagate(V, C union {x_{i*} = y*})
21:  end if
22: end while
23: return LocalSearch(V, C)

Disjunction pairing. Survey inspired decimation scales to large instances by taking the top R variables as decimation candidates at every iteration instead of a single candidate. The parameter R is usually set as a certain fraction of the total number of variables n in the formula, e.g., 1%. For the streamlining constraints, we take the top 2R variables and pair the variables with the highest and lowest magnetizations as a disjunction constraint. We remove these variables from the candidate list, repeating until we have added R disjunctions to the original set of constraints. For instance, if $v_1, \ldots, v_{2R}$ are our top decimation candidates (with signs) in a particular round, we add the constraints $(v_1 \vee v_{2R}) \wedge (v_2 \vee v_{2R-1}) \wedge \ldots \wedge (v_R \vee v_{R+1})$. Our procedure for scaling to the top R decimation candidates ensures that Proposition 1 holds, because survey inspired decimation would have added $(v_1) \wedge (v_2) \wedge \ldots \wedge (v_R)$ instead. Other pairing mechanisms are possible, such as $(v_1 \vee v_{R+1}) \wedge (v_2 \vee v_{R+2}) \wedge \ldots \wedge (v_R \vee v_{2R})$. Our choice is motivated by the observation that $v_{2R}$ is the variable we are least confident about; we therefore choose to pair it with the one we are most confident about ($v_1$). We have found our pairing scheme to perform slightly better in practice.
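The pairing rule amounts to a few lines; candidates below is a hypothetical list of signed literals already sorted by decreasing magnetization.

```python
def pair_streamlining_constraints(candidates):
    """Pair the top-2R magnetization-ranked literals head-to-tail:
    (v1 | v2R), (v2 | v2R-1), ..., (vR | vR+1)."""
    R = len(candidates) // 2
    return [[candidates[i], candidates[-(i + 1)]] for i in range(R)]

# E.g., literals ranked v1 > v2 > ... > v6 by |mu(0) - mu(1)|:
print(pair_streamlining_constraints([5, -3, 2, -1, 4, -6]))
# -> [[5, -6], [-3, 4], [2, -1]]
```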
Constraint threshold. We maintain a streamlining constraint counter for every variable, which is incremented each time the variable participates in a streamlining constraint. When the counter reaches the constraint threshold, we no longer consider the variable as a candidate in any of the subsequent rounds. This is done to ensure that no single variable dominates the constrained search space. Iteration threshold. The iteration threshold T determines how many rounds of streamlining constraints are performed. While streamlining constraints smoothly guide search to a solution cluster, the trade-off being made is in the complexity of the graph. With every round of added streamlining constraints, the number of edges in the graph increases, which leads to a higher chance of survey propagation failing to converge. To sidestep this failure mode, we perform T rounds of streamlining before switching to decimation. 4 Empirical evaluation We test streamlining constraints on random k-SAT instances for $k \in \{3, 4, 5, 6\}$ with $n = \{5 \times 10^4, 4 \times 10^4, 3 \times 10^4, 10^4\}$ variables respectively, and constraint densities close to the theoretical predictions of the phase transitions for satisfiability.

[Figure 2: Random k-SAT solver success rates (with 95% confidence intervals) for $k \in \{3, 4, 5, 6\}$, for varying constraint densities $\alpha$. The red line denotes the theoretical prediction for the phase transition of satisfiability. Survey inspired streamlining (SIS) drastically outperforms survey inspired decimation (SID) for all values of k.]

4.1 Solver success rates In the first set of experiments, we compare survey inspired streamlining (SIS) with survey inspired decimation (SID). In line with [7], we fix R = 0.01n, and each success rate is the fraction of 100 instances solved for every combination of $\alpha$ and k considered. The constraint threshold is fixed to 2. The iteration threshold T is a hyperparameter set as follows. We generate a set of 20 random k-SAT instances for every $\alpha$ and k. For these 20 "training" instances, we compute the empirical solver success rates varying T over {10, 20, ..., 100}. The best performing value of T on these train instances is chosen for testing on 100 fresh instances. All results are reported on the test instances.
First, the solver could fail to converge during message passing. Second, the local search procedure invoked after simplification of the original formula could time out, which is likely caused by a pathological simplification that prunes away most (or even all) of the solutions. In our experiments, we find that the percentages of failures due to local search timeouts in SID and SIS are 36% and 24% respectively (the remaining failures are due to non-convergence of message passing). These observations can be explained by observing the effect of decimation and streamlining on the corresponding factor graph representation of the random k-SAT instances. Decimation simplifies the factor graph as it leads to the deletion of variable and factor nodes, as well as the edges induced by the deleted nodes. This typically reduces the likelihood of non-convergence of survey propagation since the graph becomes less "loopy", but it could lead to overconfident (incorrect) branching decisions, especially in the early iterations of survey propagation. On the other hand, streamlining takes smaller steps in reducing the search space (as opposed to decimation) and hence is less likely to make inconsistent variable assignments. However, a potential pitfall is that these constraints add factor nodes that make the graph more dense, which could affect the convergence of survey propagation.

Figure 3: Marginal prediction calibration (blue) and sampled solution distances (green) during solver run on 3-SAT with 5000 variables, α = 4.15, T = 90.

Figure 4: Top: Correlation between magnetization and estimated marginal probabilities for the same problem instance as we add streamlining constraints. Bottom: Histogram of variable magnetizations. As streamlining constraints are added, the average confidence of assignments increases.

4.2 Solution cluster analysis

Figures 3 and 4 reveal the salient features of survey inspired streamlining as it runs on an instance of 3-SAT with a constraint density of α = 4.15, which is below the best achievable density but is known to be above the clustering threshold $\alpha_d(3) \approx 3.86$. The iteration threshold T was fixed to 90. At each iteration of the algorithm we use SampleSAT [35] to sample 100 solutions of the streamlined formula. Using these samples we estimate the marginal probabilities of all variables, i.e., the fraction of solutions where a given variable is set to true.
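For concreteness, a small numpy sketch of these two sample-based estimates (per-variable marginals and average pairwise solution distance); `solutions` is assumed to be the 0/1 assignment matrix returned by a sampler such as SampleSAT.

```python
import numpy as np

def estimate_marginals(solutions):
    """Estimate per-variable marginals from sampled satisfying assignments.

    solutions: array of shape (num_samples, num_vars) with 0/1 entries.
    Returns, for each variable, the fraction of sampled solutions in which
    that variable is set to true.
    """
    return np.asarray(solutions, dtype=float).mean(axis=0)

def mean_pairwise_hamming(solutions):
    """Average Hamming distance between all pairs of sampled solutions,
    the quantity plotted in green in Figure 3."""
    s = np.asarray(solutions)
    n = len(s)
    dists = [np.sum(s[a] != s[b]) for a in range(n) for b in range(a + 1, n)]
    return float(np.mean(dists))
```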
We use these marginal probabilities to estimate the marginal prediction calibration, i.e., the frequency with which a variable that survey propagation predicts to have magnetization at least 0.9 has an estimated marginal at least as high as the prediction. The increase in marginal prediction calibration during the course of the algorithm (Figure 3, blue curve) suggests that the streamlining constraints are selecting branches that preserve most of the solutions. This might be explained by the decrease in the average Hamming distance between pairs of sampled solutions over the course of the run (green curve). This decrease indicates that the streamlining constraints are guiding survey propagation to a subset of the full set of solution clusters. Over time, the algorithm is also finding more extreme magnetizations, as shown in the bottom three histograms of Figure 4 at iterations 0, 50, and 95. Because magnetization is used as a proxy for how reliably one can branch on a given variable, this indicates that the algorithm is getting more and more confident about which variables it is "safe" to branch on. The top plots of Figure 4 show the empirical marginal of each variable versus the survey propagation magnetization. These demonstrate that overall the survey propagation estimates are becoming more and more risk-averse: by picking variables with high magnetization to branch on, it will only select variables with (estimated) marginals close to one.

Figure 5: (a, b) Random k-SAT solver rates (with 95% confidence intervals) for k ∈ {5, 6} testing integration with Dimetheus. (c) XORSAT solver rates (with 95% confidence intervals).

4.3 Integration with downstream solvers

The survey inspired streamlining algorithm provides an easy "black-box" integration mechanism with other solvers. By adding streamlining constraints in the first few iterations as a preprocessing routine, the algorithm carefully prunes the search space and modifies the original formula, which can be subsequently fed to any external downstream solver. We tested this procedure with Dimetheus [16], a competitive ensemble solver that won two recent iterations of the SAT competitions in the random k-SAT category. We fixed the hyperparameters to the ones used previously. We did not find any statistically significant change in performance for k = 3, 4; however, we observe significant improvements in solver rates for higher k (Figures 5a, 5b).

4.4 Extension to other constraint satisfaction problems

The survey inspired streamlining algorithm can be applied to any CSP in principle. Another class of CSPs commonly studied is XORSAT. An XORSAT formula is expressed as a conjunction of XOR constraints of a fixed length. Here, we consider constraints of length 2. An XOR operation ⊕ between any two variables can be converted to a conjunction of disjunctions by noting that $x_i \oplus x_j = (\neg x_i \vee \neg x_j) \wedge (x_i \vee x_j)$, and hence any XORSAT formula can be expressed in CNF form. Figure 5c shows the improvements in performance due to streamlining.
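A one-function sketch of the length-2 XOR-to-CNF conversion used above, with DIMACS-style signed integer literals (the helper name is ours):

```python
def xor_pair_to_cnf(i, j):
    """Convert the constraint x_i XOR x_j into two CNF clauses.

    Uses the identity x_i xor x_j = (not x_i or not x_j) and (x_i or x_j):
    the first clause rules out both-true, the second rules out both-false.
    Literals follow the DIMACS convention (+i for x_i, -i for not x_i).
    """
    return [(-i, -j), (i, j)]
```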
While we note that the phase transition is not as sharp as the ones observed for random k-SAT (in both theory and practice [11, 28]), including streamlining constraints can improve the solver performance." + }, + { + "url": "http://arxiv.org/abs/1806.06464v2", + "title": "Learning Policy Representations in Multiagent Systems", + "abstract": "Modeling agent behavior is central to understanding the emergence of complex\nphenomena in multiagent systems. Prior work in agent modeling has largely been\ntask-specific and driven by hand-engineering domain-specific prior knowledge.\nWe propose a general learning framework for modeling agent behavior in any\nmultiagent system using only a handful of interaction data. Our framework casts\nagent modeling as a representation learning problem. Consequently, we construct\na novel objective inspired by imitation learning and agent identification and\ndesign an algorithm for unsupervised learning of representations of agent\npolicies. We demonstrate empirically the utility of the proposed framework in\n(i) a challenging high-dimensional competitive environment for continuous\ncontrol and (ii) a cooperative environment for communication, on supervised\npredictive tasks, unsupervised clustering, and policy optimization using deep\nreinforcement learning.", + "authors": "Aditya Grover, Maruan Al-Shedivat, Jayesh K. Gupta, Yura Burda, Harrison Edwards", + "published": "2018-06-17", + "updated": "2018-07-31", + "primary_cat": "cs.MA", + "cats": [ + "cs.MA", + "cs.AI", + "cs.LG", + "cs.NE", + "stat.ML" + ], + "main_content": "Introduction

Intelligent agents rarely act in isolation in the real world and often seek to achieve their goals through interaction with other agents. Such interactions give rise to rich, complex behaviors formalized as per-agent policies in a multiagent system (Ferber, 1999; Wooldridge, 2009). Depending on the underlying motivations of the agents, interactions could be directed towards achieving a shared goal in a collaborative setting, opposing another agent in a competitive setting, or be a mixture of these in a setting where agents collaborate in teams to compete against other teams. Learning useful representations of the policies of agents based on their interactions is an important step towards characterization of the agent behavior and more generally inference and reasoning in multiagent systems.

In this work, we propose an unsupervised encoder-decoder framework for learning continuous representations of agent policies given access to only a few episodes of interaction. For any given agent, the representation function is an encoder that learns a mapping from an interaction (i.e., one or more episodes of observation and action pairs involving the agent) to a continuous embedding vector. Using such embeddings, we condition a policy network (decoder) and train it simultaneously with the encoder to imitate other interactions involving the same (or a coupled) agent. Additionally, we can explicitly discriminate between the embeddings corresponding to different agents using triplet losses. For the embeddings to be useful, the representation function should generalize to both unseen interactions and unseen agents for novel downstream tasks.
Generalization is well-understood in the context of supervised learning, where a good model is expected to attain similar train and test performance. For multiagent systems, we consider a notion of generalization based on agent-interaction graphs. An agent-interaction graph provides an abstraction for distinguishing the agents (nodes) and interactions (edges) observed during training, validation, and testing. Our framework is agnostic to the nature of interactions in multiagent systems, and hence broadly applicable to competitive and cooperative environments. In particular, we consider two multiagent environments: (i) a competitive continuous control environment, RoboSumo (Al-Shedivat et al., 2018), and (ii) a ParticleWorld environment of cooperative communication where agents collaborate to achieve a common goal (Mordatch & Abbeel, 2018). For evaluation, we show how representations learned by our framework are effective for downstream tasks that include clustering of agent policies (unsupervised), classification such as win or loss outcomes in competitive systems (supervised), and policy optimization (reinforcement). In the case of policy optimization, we show how these representations can serve as privileged information for better training of agent policies. In RoboSumo, we train agent policies that can condition on the opponent's representation and achieve superior win rates much more quickly as compared to an equally expressive baseline policy with the same number of parameters. In ParticleWorld, we train speakers that can communicate more effectively with a much wider range of listeners given knowledge of their representations.

2. Preliminaries

In this section, we present the necessary background and notation relevant to the problem setting of this work.

Markov games. We use the classical framework of Markov games (Littman, 1994) to represent multiagent systems. A Markov game extends the general formulation of partially observable Markov decision processes (POMDP) to the multiagent setting. In a Markov game, we are given a set of n agents on a state-space S with action spaces $A_1, A_2, \cdots, A_n$ and observation spaces $O_1, O_2, \cdots, O_n$ respectively. At every time step t, an agent i receives an observation $o_i^{(t)} \in O_i$ and executes an action $a_i^{(t)} \in A_i$ based on a stochastic policy $\pi^{(i)}: O_i \times A_i \to [0, 1]$. Based on the executed action, the agent receives a reward $r_i^{(t)}: S \times A_i \to \mathbb{R}$ and the next observation $o_i^{(t+1)}$. The state dynamics are determined by a transition function $T: S \times A_1 \times \cdots \times A_n \to S$. The agent policies are trained to maximize their own expected reward $\bar{r}_i = \sum_{t=1}^{H} r_i^{(t)}$ over a time horizon H.

Extended Markov games. In this work, we are interested in interactions that involve not all but only a subset of agents. For this purpose, we generalize Markov games as follows. First, we augment the action space of each agent with a NO-OP (i.e., no action). Then, we introduce a problem parameter, $2 \leq k \leq n$, with the following semantics. During every rollout of the Markov game, all but k agents deterministically execute the NO-OP operator while the k agents execute actions as per the policies defined on the original observation and action spaces. Accordingly, we assume that each agent receives rewards only in the interaction episode it participates in.
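The following Python sketch makes the participation semantics concrete; the environment interface (env.reset, env.step) and the NO_OP token are hypothetical placeholders, not part of the paper's framework.

```python
def rollout_episode(env, policies, participants, horizon):
    """Roll out one interaction episode of an extended Markov game.

    policies: per-agent callables mapping an observation to an action.
    participants: set of k agent indices acting in this episode; every
    other agent deterministically emits NO_OP at each step.
    Returns the (observation, action) pairs of the participating agents
    only, since only they receive rewards in this episode.
    """
    NO_OP = None  # placeholder "no action" token (assumed)
    observations = env.reset()
    episode = {i: [] for i in participants}
    for _ in range(horizon):
        actions = []
        for i, obs in enumerate(observations):
            if i in participants:
                action = policies[i](obs)
                episode[i].append((obs, action))
            else:
                action = NO_OP
            actions.append(action)
        observations = env.step(actions)
    return episode
```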
Informally, the extension allows for multiagent systems where all agents do not necessarily have to participate simultaneously in an interaction. For instance, this allows us to consider one-vs-one multiagent tournaments where only two players participate in any given match.

To further introduce the notation, consider a multiagent system as a generalized Markov game. We denote the set of agent policies with $P = \{\pi^{(i)}\}_{i=1}^{n}$ and interaction episodes with $E = \{E_{M_j}\}_{j=1}^{m}$, where $M_j \subseteq \{1, 2, \cdots, n\}$, $|M_j| = k$ is the set of k agents participating in episode $E_{M_j}$. To simplify presentation for the rest of the paper, we assume k = 2 and, consequently, denote the set of interaction episodes between agents i and j as $E_{ij}$. A single episode, $e_{ij} \in E_{ij}$, consists of a sequence of observations and actions for the specified time horizon, H.

Imitation learning. Our approach to learning policy representations relies on behavioral cloning (Pomerleau, 1991), a type of imitation learning where we train a mapping from observations to actions in a supervised manner. Although there exist other imitation learning algorithms (e.g., inverse reinforcement learning, Abbeel & Ng, 2004), our framework is largely agnostic to the choice of the algorithm, and we restrict our presentation to behavioral cloning, leaving other imitation learning paradigms to future work.

3. Learning framework

The dominant paradigm for unsupervised representation learning is to optimize the parameters of a representation function that can best explain or generate the observed data. For instance, the skip-gram objective used for language and graph data learns representations of words and nodes predictive of representations of surrounding context (Mikolov et al., 2013; Grover & Leskovec, 2016). Similarly, autoencoding objectives, often used for image data, learn representations that can reconstruct the input (Bengio et al., 2009).

In this work, we wish to learn a representation function that maps episode(s) from an agent policy, $\pi^{(i)} \in \Pi$, to a real-valued vector embedding, where $\Pi$ is a class of representable policies. That is, we optimize for the parameters $\theta$ of a function $f_\theta: E \to \mathbb{R}^d$, where E denotes the space of episodes corresponding to a policy and d is the dimension of the embedding. Here, we have assumed the agent policies are black-boxes, i.e., we can only access them based on interaction episodes with other agents in a Markov game. Hence, for every agent i, we wish to learn policies using $E_i = \cup_j E_{ij}^{(i)}$. Here, $E_{ij}^{(i)}$ refers to the episode data for interactions between agents i and j, but consisting of only the observation and action pairs of agent i. For a multiagent system, we propose the following auxiliary tasks for learning a good representation of an agent's policy:

1. Generative representations. The representation should be useful for simulating the agent's policy.
2. Discriminative representations. The representation should be able to distinguish the agent's policy from the policies of other agents.

Accordingly, we now propose generative and discriminative objectives for representation learning in multiagent systems.

3.1. Generative representations via imitation learning

Imitation learning does not require direct access to the reward signal, making it an attractive task for unsupervised representation learning.
Formally, we are interested in learning a policy $\pi_\phi^{(i)}: S \times A \to [0, 1]$ for an agent i given access to observation and action pairs from interaction episode(s) involving the agent. For behavioral cloning, we maximize the following (negative) cross-entropy objective:

$\mathbb{E}_{e \sim E_i}\left[\sum_{\langle o, a \rangle \sim e} \log \pi_\phi^{(i)}(a|o)\right]$

where the expectation is over interaction episodes of agent i and the optimization is over the parameters $\phi$.

Algorithm 1 Learn Policy Embedding Function ($f_\theta$)
input: $\{E_i\}_{i=1}^{n}$ (interaction episodes), $\lambda$ (hyperparameter)
1: Initialize $\theta$ and $\phi$
2: for $i = 1, 2, \ldots, n$ do
3:   Sample a positive episode $e^+ \sim E_i$
4:   Sample a reference episode $e^* \sim E_i \setminus e^+$
5:   Compute Im_loss $\leftarrow -\sum_{\langle o, a \rangle \sim e^+} \log \pi_{\phi,\theta}(a|o, e^*)$
6:   for $j = 1, 2, \ldots, n$ do
7:     if $j \neq i$ then
8:       Sample a negative episode $e^- \sim E_j$
9:       Compute Id_loss $\leftarrow d_\theta(e^+, e^-, e^*)$
10:      Set Loss $\leftarrow$ Im_loss $+ \lambda \cdot$ Id_loss
11:      Update $\theta$ and $\phi$ to minimize Loss
12:    end if
13:  end for
14: end for
output: $\theta$

Learning individual policies for every agent can be computationally and statistically prohibitive for large-scale multiagent systems, especially when the number of interaction episodes per agent is small. Moreover, it precludes generalization across the behaviors of such agents. On the other hand, learning a single policy for all agents increases sample efficiency but comes at the cost of reduced modeling flexibility in simulating diverse agent behaviors. We offset this dichotomy by learning a single conditional policy network. To do so, we first specify a representation function, $f_\theta: E \to \mathbb{R}^d$, with parameters $\theta$, where E represents the space of episodes. We use this embedding to condition the policy network. Formally, the policy network is denoted by $\pi_{\phi,\theta}: S \times A \times E \to [0, 1]$, and $\phi$ are parameters for the function mapping the agent observation and embedding to a distribution over the agent's actions. The parameters $\theta$ and $\phi$ for the conditional policy network are learned jointly by maximizing the following objective:

$\frac{1}{n} \sum_{i=1}^{n} \mathbb{E}_{e_1 \sim E_i,\, e_2 \sim E_i \setminus e_1}\left[\sum_{\langle o, a \rangle \sim e_1} \log \pi_{\phi,\theta}(a|o, e_2)\right] \quad (1)$

For every agent, the objective function samples two distinct episodes $e_1$ and $e_2$. The observation and action pairs from $e_2$ are used to learn an embedding $f_\theta(e_2)$ that conditions the policy network trained on observation and action pairs from $e_1$. The conditional policy network shares statistical strength through a common set of parameters for the policy network and the representation function across all agents.

3.2. Discriminative representations via identification

An intuitive requirement for any representation function learned for a multiagent system is that the embeddings should reflect characteristics of an agent's behavior that distinguish it from other agents. To do so in an unsupervised manner, we propose an objective for agent identification based on the triplet loss directly in the space of embeddings.
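As a concrete reading of Algorithm 1's inner update, here is a minimal PyTorch-style sketch (our own illustration, assuming discrete actions and the averaging episode encoder described in Section 5); the triplet distance $d_\theta$ it uses is the squared-softmax loss defined in Eq. (2) below.

```python
import torch

def episode_embedding(encoder, episode):
    # Average per-step encodings to obtain one episode embedding,
    # matching the MLP parameterization described in Section 5.
    return torch.stack([encoder(o, a) for o, a in episode]).mean(dim=0)

def algorithm1_loss(policy, encoder, e_pos, e_ref, e_neg, lam):
    """One inner-loop objective of Algorithm 1 (a sketch, not the reference
    implementation). policy(obs, emb) is assumed to return a vector of
    log-probabilities over a discrete action set."""
    z_ref = episode_embedding(encoder, e_ref)
    # Imitation term: negative log-likelihood of e_pos under the policy
    # conditioned on the reference episode's embedding (negative of Eq. 1).
    im_loss = -sum(policy(o, z_ref)[a] for o, a in e_pos)
    # Agent-identification term: squared-softmax triplet loss of Eq. (2).
    z_pos = episode_embedding(encoder, e_pos)
    z_neg = episode_embedding(encoder, e_neg)
    gap = torch.norm(z_ref - z_neg) - torch.norm(z_ref - z_pos)
    id_loss = (1.0 + torch.exp(gap)) ** (-2)
    return im_loss + lam * id_loss
```

In practice, the update on line 11 of Algorithm 1 would be a stochastic gradient step on this combined loss.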
To learn a representation for agent i based on interaction episodes, we use the representation function $f_\theta$ to compute three sets of embeddings: (i) a positive embedding for an episode $e^+ \sim E_i$ involving agent i, (ii) a negative embedding for an episode $e^- \sim E_j$ involving a random agent $j \neq i$, and (iii) a reference embedding for an episode $e^* \sim E_i$, again involving agent i but different from $e^+$. Given these embeddings, we define the triplet loss:

$d_\theta(e^+, e^-, e^*) = \left(1 + \exp\left\{\|r_e - n_e\|_2 - \|r_e - p_e\|_2\right\}\right)^{-2} \quad (2)$

where $p_e = f_\theta(e^+)$, $n_e = f_\theta(e^-)$, and $r_e = f_\theta(e^*)$. Intuitively, the loss encourages the positive embedding to be closer to the reference embedding than the negative embedding, which makes the embeddings of the same agent tend to cluster together and be further away from embeddings of other agents. We note that various other notions of distance can also be used. The one presented above corresponds to a squared softmax objective (Hoffer & Ailon, 2015).

3.3. Hybrid generative-discriminative representations

Conditional imitation learning encourages $f_\theta$ to learn representations that can learn and simulate the entire policy of the agents, and agent identification incentivizes representations that can distinguish between agent policies. Both objectives are complementary, and we combine Eq. (1) and Eq. (2) to get the final objective used for representation learning:

$\frac{1}{n} \sum_{i=1}^{n} \mathbb{E}_{e^+ \sim E_i,\, e^* \sim E_i \setminus e^+}\Bigg[\underbrace{\sum_{\langle o, a \rangle \sim e^+} \log \pi_{\phi,\theta}(a|o, e^*)}_{\text{imitation}} - \lambda \underbrace{\sum_{j \neq i} \mathbb{E}_{e^- \sim E_j}\left[d_\theta(e^+, e^-, e^*)\right]}_{\text{agent identification}}\Bigg] \quad (3)$

where $\lambda > 0$ is a tunable hyperparameter that controls the relative weights of the discriminative and generative terms. The pseudocode for the proposed algorithm is given in Algorithm 1. In experiments, we parameterize the conditional policy $\pi_{\theta,\phi}$ using neural networks and use stochastic gradient-based methods for optimization.

4. Generalization in MAS

Generalization is well-understood for supervised learning: models that show similar train and test performance exhibit good generalization. To measure the quality of the learned representations for a multiagent system (MAS), we introduce a graphical formalism for reasoning about agents and their interactions.

Figure 1: An example of a graph used for evaluating generalization in a multiagent system (a: an agent-interaction graph). Illustrations for the environments used in our experiments: competitive (b: the RoboSumo environment) and cooperative (c: the ParticleWorld environment, where speakers communicate binary messages to guide a listener to a target landmark).

4.1. Generalization across agents & interactions

In many scenarios, we are interested in generalization of the policy representation function $f_\theta$ across novel agents and interactions in a multiagent system. For instance, we would like $f_\theta$ to output useful embeddings for a downstream task, even when evaluated with respect to unseen agents and interactions. This notion of generalization is best understood using agent-interaction graphs (Grover et al., 2018).
The agent-interaction graph describes interactions between a set of agent policies P and a set of interaction episodes I through a graph G = (P, I) [1]. An example graph is shown in Figure 1a. The graph represents a multiagent system consisting of interactions between pairs of agents, and we will especially focus on the interactions involving Alice, Bob, Charlie, and Davis. The interactions could be competitive (e.g., a match between two agents) or cooperative (e.g., two agents communicating for a navigation task). We learn the representation function $f_\theta$ on a subset of the interactions, denoted by the solid black edges in Figure 1a. At test time, $f_\theta$ is evaluated on some downstream task of interest. The agents and interactions observed at test time can be different from those used for training. In particular, we consider the following cases:

Weak generalization [2]. Here, we are interested in the generalization performance of the representation function on an unseen interaction between existing agents, all of which are observed during training. This corresponds to the red edge representing the interaction between Alice and Bob in Figure 1a. From the context of an agent-interaction graph, the test graph adds only edges to the train graph.

Strong generalization. Generalization can also be evaluated with respect to unseen agents (and their interactions). This corresponds to the addition of agents Charlie and Davis in Figure 1a. Akin to a few-shot learning setting, we observe a few of their interactions with existing agents Alice and Bob (green edges), and generalization is evaluated on unseen interactions involving Charlie and Davis (blue edges). The test graph adds both nodes and edges to the train graph.

[1] If we have more than two participating agents per interaction episode, we could represent the interactions using a hypergraph.
[2] Also referred to as intermediate generalization by Grover et al. (2018).

For brevity, we skip discussion of weaker forms of generalization that involve evaluation of the test performance on unseen episodes of an existing training edge (black edge).

4.2. Generalization across tasks

Since the representation function is learned using an unsupervised auxiliary objective, we test its generalization performance by evaluating the usefulness of these embeddings for various kinds of downstream tasks described below.

Unsupervised. These embeddings can be used for clustering, visualization, and interpretability of agent policies in a low-dimensional space. Such semantic associations between the learned embeddings can be defined for a single agent, wherein we expect representations for the same agent based on distinct episodes to be embedded close to each other, or across agents, wherein agents with similar policies will have similar embeddings on average.

Supervised. Deep neural network representations are especially effective for predictive modeling. In a multiagent setting, the embeddings serve as useful features for learning agent properties and interactions, including assignment of role categories to agents with different skills in a collaborative setting, or prediction of win or loss outcomes of interaction matches between agents in a competitive setting.

Reinforcement. Finally, we can use the learned representation functions to improve generalization of the policies learned from a reinforcement signal in competitive and cooperative settings. We design policy networks that, in addition to observations, take embedding vectors of the opposing agents as inputs.
The embeddings are computed from the past interactions of the opposing agent, either with the agent being trained or with other agents, using the representation function (Figure 2). Such embeddings play the role of privileged information and allow us to train a policy network that uses this information to learn faster and generalize better to opponents or cooperators unseen at training time.

Figure 2: Illustration of the proposed model for optimizing a policy $\pi_\psi$ that conditions on an embedding of the opponent policy $\pi_A$. At time t, the pre-trained representation function $f_\theta$ computes the opponent embedding based on a past interaction $e_{t-1}$. We optimize $\pi_\psi$ to maximize the expected rewards in its current interactions $e_t$ with the opponent.

5. Evaluation methodology & results

We evaluate the proposed framework for both competitive and collaborative environments on various downstream machine learning tasks. In particular, we use the RoboSumo and ParticleWorld environments for the competitive and collaborative scenarios, respectively. We consider the embedding objectives in Eq. (1), Eq. (2), and Eq. (3) independently and refer to them as Emb-Im, Emb-Id, and Emb-Hyb respectively. The hyperparameter $\lambda$ for Emb-Hyb is chosen by grid search over $\lambda \in \{0.01, 0.05, 0.1, 0.5\}$ on a held-out set of interactions. In all our experiments, the representation function $f_\theta$ is specified through a multi-layer perceptron (MLP) that takes as input an episode and outputs an embedding of that episode. In particular, the MLP takes as input a single (observation, action) pair to output an intermediate embedding. We average the intermediate embeddings for all (observation, action) pairs in an episode to output an episode embedding. To condition a policy network on the embedding, we simply concatenate the observation fed as input to the network with the embedding. Experimental setup and other details beyond what we state below are deferred to the Appendix.

5.1. The RoboSumo environment

For the competitive environment, we use RoboSumo (Al-Shedivat et al., 2018), a 3D environment with simulated physics (based on MuJoCo (Todorov et al., 2012)) that allows agents to control multi-legged 3D robots and compete against each other in continuous-time wrestling games (Figure 1b). For our analysis, we train a diverse collection of 25 agents, some of which are trained via self-play and others are trained in pairs concurrently using the Proximal Policy Optimization (PPO) algorithm (Schulman et al., 2017). We start with a fully connected agent-interaction graph (clique) of 25 agents. Every edge in this graph corresponds to 10 rollout episodes involving the corresponding agents.

Table 1: Intra-inter clustering ratios (IICR) and accuracies for outcome prediction (Acc) for weak (W) and strong (S) generalization on RoboSumo.
          IICR (W)  IICR (S)  Acc (W)  Acc (S)
Emb-Im    0.24      0.23      0.71     0.60
Emb-Id    0.25      0.27      0.67     0.56
Emb-Hyb   0.22      0.21      0.73     0.56

The maximum length (or horizon) of any episode is 500 time steps, after which the episode is declared a draw. To evaluate weak generalization, we sample a connected subgraph for training with approximately 60% of the edges preserved for training, and the remaining split equally for validation and testing.
For strong generalization, we preserve 15 agents and their interactions with each other for training, and similarly, 5 agents and their within-group interactions each for validation and testing.

5.1.1. EMBEDDING ANALYSIS

To evaluate the robustness of the embeddings, we compute multiple embeddings for each policy based on different episodes of interaction at test time. Our evaluation metric is based on the intra- and inter-cluster Euclidean distances between embeddings. The intra-cluster distance for an agent is the average pairwise distance between its embeddings computed on the set of test interaction episodes involving the agent. Similarly, the inter-cluster distance is the average pairwise distance between the embeddings of an agent and those of other agents. Let $T_i = \{t_c^{(i)}\}_{c=1}^{n_i}$ denote the set of test interactions involving agent i. We define the intra-inter cluster ratio (IICR) as:

$\text{IICR} = \frac{\frac{1}{n}\sum_{i=1}^{n} \frac{1}{n_i^2} \sum_{a=1}^{n_i} \sum_{b=1}^{n_i} \|t_a^{(i)} - t_b^{(i)}\|_2}{\frac{1}{n(n-1)}\sum_{i=1}^{n} \sum_{j \neq i} \frac{1}{n_i n_j} \sum_{a=1}^{n_i} \sum_{b=1}^{n_j} \|t_a^{(i)} - t_b^{(j)}\|_2}.$

The intra-inter clustering ratios are reported in Table 1. A ratio less than 1 suggests that there is signal that identifies the agent, and the signal is stronger for lower ratios. Even though this task might seem especially suited for the agent identification objective, we interestingly find that Emb-Im attains lower clustering ratios than Emb-Id for both weak and strong generalization. Emb-Hyb outperforms both these methods. We qualitatively visualize the embeddings learned using Emb-Hyb by projecting them on the leading principal components, as shown in Figures 3a and 3b for 10 test interaction episodes of 5 randomly selected agents in the weak and strong generalization settings respectively.

Figure 3: Embeddings learned using Emb-Hyb for 10 test interaction episodes of 5 agents projected on the first three principal components for RoboSumo and ParticleWorld (panels: a: RoboSumo weak, b: RoboSumo strong, c: ParticleWorld weak, d: ParticleWorld strong). Color denotes agent policy.

Figure 4: Average win rates of the newly trained agents against 5 training agents and 5 testing agents. The left two charts compare the baseline with policies that make use of Emb-Im, Emb-Id, and Emb-Hyb (all computed online). The right two charts compare different embeddings used at evaluation time (all embedding-conditioned policies use Emb-Hyb). At each iteration, win rates were computed based on 50 1-on-1 games. Each agent was trained 3 times, each time from a different random initialization. Shaded regions correspond to 95% CI.

5.1.2. OUTCOME PREDICTION

We can use these embeddings directly for training a classifier to predict the outcome of an episode (win/loss/draw). For classification, we use an MLP with 3 hidden layers of 100 units each, and the learning objective minimizes the cross entropy error.
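Before reporting the classification results, for concreteness, a small numpy sketch of the IICR metric defined above (our own illustration; `embeddings` is assumed to hold one matrix of test-episode embeddings per agent):

```python
import numpy as np

def iicr(embeddings):
    """Intra-inter clustering ratio over per-agent embedding sets.

    embeddings: list of arrays, one per agent, each of shape (n_i, d).
    Returns mean intra-cluster distance / mean inter-cluster distance,
    following the definition above (self-pairs included, as in the formula).
    """
    n = len(embeddings)
    intra = np.mean([
        np.mean([np.linalg.norm(a - b) for a in E for b in E])
        for E in embeddings
    ])
    inter = np.mean([
        np.mean([np.linalg.norm(a - b)
                 for a in embeddings[i] for b in embeddings[j]])
        for i in range(n) for j in range(n) if j != i
    ])
    return intra / inter
```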
The inputs to the classifier are the embeddings of the two agents involved in the episode. The results are reported in Table 1. Again, imitation-based methods seem more suited for this task, with Emb-Hyb and Emb-Im outperforming other methods for weak and strong generalization respectively.

5.1.3. POLICY OPTIMIZATION

Here we ask whether embeddings can be used to improve learned policies in a reinforcement learning setting, both in terms of end performance and generalization. To this end, we select 5 training, 5 validation, and 5 testing opponents from the pool of 25 pre-trained agents. Next, we train a new agent with reinforcement learning to compete against the selected 5 training opponents; the agent is trained concurrently against all 5 opponents using a distributed version of the PPO algorithm, as described in Al-Shedivat et al. (2018). Throughout training, we evaluate new agents on the 5 testing opponents and record the average win and draw rates. Using this setup, we compare a baseline agent with an MLP-based policy to an agent whose policy takes 100-dimensional embeddings of the opponents as additional inputs at each time step and uses that information to condition its behavior on the opponent's representation. The embeddings for each opponent are either computed online, i.e., based on an interaction episode rolled out during training at a previous time step (Figure 2), or offline, i.e., pre-computed before training the new agent using only interactions between the pre-trained opponents.

Figure 4 shows the average win rates against the set of training and testing opponents for the baseline and our agents that use different types of embeddings. While every new agent is able to achieve almost 100% win rate against the training opponents, we see that the agents that condition their policies on the opponent's embeddings perform better on the held-out set of opponents, i.e., generalize better, with the best performance achieved with Emb-Hyb. We also note that embeddings computed offline turn out to lead to better performance than if computed online [3]. As an ablation test, we also evaluate our agents when they are provided an incorrect embedding (either all zeros, Emb-zero, or an embedding selected for a different random opponent, Emb-rand) and observe that such embeddings lead to a degradation in performance [4].

[3] Perhaps this is due to differences in the interactions of the opponents between themselves and with the new agent that the embedding network was not able to capture entirely.
[4] The performance decrease is most significant for Emb-zero, which is an out-of-distribution all-zeros vector.

Figure 5: Win, loss, and draw rates plotted for the first agent in each pair (Emb-Hyb vs. PPO, Emb-Hyb vs. Emb-Im, Emb-Hyb vs. Emb-Id, Emb-Im vs. PPO, Emb-Im vs. Emb-Id, and Emb-Id vs. PPO). Each pair of agents was evaluated after each training iteration on 50 1-on-1 games; curves are based on 5 evaluation runs. Shaded regions correspond to 95% CI.

Figure 6: Win rates for the agents specified in each row against the agents specified in each column (1: PPO, 2: PPO + Emb-Im, 3: PPO + Emb-Id, 4: PPO + Emb-Hyb), computed at iteration 1000.
                     1     2     3     4
PPO + Emb-Hyb (4)  0.67  0.61  0.57  0.48
PPO + Emb-Id (3)   0.62  0.44  0.50  0.42
PPO + Emb-Im (2)   0.55  0.49  0.55  0.36
PPO (1)            0.51  0.44  0.36  0.32

Finally, to evaluate strong generalization in the RL setting, we pit the newly trained baseline and agents with embedding-conditional policies against each other.
Since the embedding network has never seen the new agents, it must exhibit strong generalization to be useful in such a setting. The results are given in Figures 5 and 6. Even though the margin is not very large, the agents that use Emb-Hyb perform the best on average.

5.2. The ParticleWorld environment

For the collaborative setting, we evaluate the framework on the ParticleWorld environment for cooperative communication (Mordatch & Abbeel, 2018; Lowe et al., 2017). The environment consists of a continuous 2D grid with 3 landmarks and two kinds of agents collaborating to navigate to a common landmark goal (Figure 1c). At the beginning of every episode, the speaker agent is shown the RGB color of a single target landmark on the grid. The speaker then communicates a fixed length binary message to the listener agent. Based on the received messages, the listener agent then moves in a particular direction. The final reward, shared across the speaker and listener agents, is the distance of the listener to the target landmark after a fixed time horizon.

The agent-interaction graph for this environment is bipartite, with only cross edges between speaker and listener agents. Every interaction edge in this graph corresponds to 1000 rollout episodes, where the maximum length of any episode is 25 steps. We pretrain 28 MLP-parameterized speaker and listener agent policies. Every speaker learns through communication with only two different listeners and vice-versa, giving an extremely sparse agent-interaction graph. We explicitly encoded diversity in these speaker and listener agents by masking bits in the communication channel. In particular, we masked 1 or 2 randomly selected bits for every speaker agent in the graph to give a total of $\binom{7}{1} + \binom{7}{2} = 28$ distinct speaker agents. Depending on the neighboring speaker agents in the agent-interaction graph, the listener agents also show diversity in the learned policies. The policies are learned using multiagent deep deterministic policy gradients (MADDPG, Lowe et al., 2017).

Table 2: Intra-inter clustering ratios (IICR) for weak (W) and strong (S) generalization on ParticleWorld. Lower is better.
          IICR (W)  IICR (S)
Emb-Im    0.58      0.86
Emb-Id    0.50      0.82
Emb-Hyb   0.54      0.85

Table 3: Average train and test rewards for speaker policies on ParticleWorld.
                    Train    Test
MADDPG             -11.66   -18.99
MADDPG + Emb-Im    -11.68   -17.75
MADDPG + Emb-Id    -11.68   -17.68
MADDPG + Emb-Hyb   -11.77   -17.20

In this environment, the speakers and listeners are tightly coupled. Hence we vary the setup used previously in the competitive scenario. We wish to learn embeddings of listeners based on their interactions with speakers. Since the agent-interaction graph is bipartite, we use the embeddings of listener agents to condition a shared policy network for the respective speaker agents.

5.2.1. EMBEDDING ANALYSIS

For the weak generalization setting, we remove an outgoing edge from every listener agent in the original graph to obtain the training graph. In the case of strong generalization, we set aside 7 listener agents (and their outgoing edges) each for validation and testing, while the representation function is learned on the remaining 14 listener agents and their interactions.
The intra-inter clustering ratios are shown in Table 2, and the projections of the embeddings learned using Emb-Hyb are visualized in Figure 3c and Figure 3d for weak and strong generalization respectively. In spite of the high degree of sparsity in the training graph, the intra-inter clustering ratio for the test interaction embeddings is less than unity, suggesting an agent-specific signal. Emb-Id works particularly well in this environment, achieving the best results for both weak and strong generalization.

5.2.2. POLICY OPTIMIZATION

Here, we are interested in learning speaker agents that can communicate more effectively with a much wider range of listeners given knowledge of their embeddings. Referring back to Figure 2, we learn a policy $\pi_\psi$ for a speaker agent that conditions on the representation function $f_\theta$ for the listener agents. For cooperative communication, we consider interactions with 14 pre-trained listener agents, split as 6 training, 4 validation, and 4 test agents [5]. Similar to the competitive setting, we compare performance against a baseline speaker agent that does not have access to any privileged information about the listeners. We summarize the results for the best validated models during training and 100 interaction episodes per test listener agent across 5 initializations in Table 3. From the results, we observe that online embedding based methods can generalize better than the baseline methods. The baseline MADDPG achieves the lowest training error, but fails to generalize well enough and incurs a low average reward for the test listener agents.

[5] None of the methods considered were able to learn a nontrivial speaker agent when trained simultaneously with all 28 listener agents. Hence, we simplified the problem by considering the 14 listener agents that attained the best rewards during pretraining.

6. Discussion & Related Work

Agent modeling is a well-studied topic within multiagent systems. See Albrecht & Stone (2017) for an excellent recent survey on this subject. The vast majority of the literature concerns learning models for a specific predictive task. Predictive tasks are typically defined over actions, goals, and beliefs of other agents (Stone & Veloso, 2000). In competitive domains such as Poker and Go, such tasks are often integrated with domain-specific heuristics to model opponents and learn superior policies (Rubin & Watson, 2011; Mnih et al., 2015). Similarly, intelligent tutoring systems take into account pedagogical features of students and teachers to accelerate learning of desired behaviors in a collaborative environment (McCalla et al., 2000).

In this work, we proposed an approach for modeling agent behavior in multiagent systems through unsupervised representation learning of agent policies. Since we sidestep any domain-specific assumptions and learn in an unsupervised manner, our framework learns representations that are useful for several downstream tasks. This extends the use of deep neural networks in multiagent systems to applications beyond traditional reinforcement learning and predictive modeling (Mnih et al., 2015; Hoshen, 2017). Both the generative and discriminative components of our framework have been explored independently in prior work. Imitation learning has been extensively studied in the single-agent setting, and recent work by Le et al. (2017) proposes an algorithm for imitation in a coordinated multiagent system. Wang et al.
(2017) proposed an imitation learning algorithm for learning robust controllers with few expert demonstrations in a single-agent setting that conditions the policy network on an inference network, similar to the encoder in our framework. In another recent work, Li et al. (2017) propose an algorithm for learning interpretable representations using generative adversarial imitation learning. Agent identification, which represents the discriminative term in the learning objective, is inspired by triplet losses and Siamese networks that are used for learning representations of data using distance comparisons (Hoffer & Ailon, 2015). A key contribution of this work is a principled methodology for evaluating generalization of representations in multiagent systems based on the graphs of the agent interactions. Graphs are a fundamental abstraction for modeling relational data, such as the interactions arising in multiagent systems (Zhou et al., 2016a;b; Chen et al., 2017; Battaglia et al., 2016; Hoshen, 2017), and concurrent work proposes to learn such graphs directly from data (Kipf et al., 2018)." + }, + { + "url": "http://arxiv.org/abs/1804.01712v1", + "title": "Variational Rejection Sampling", + "abstract": "Learning latent variable models with stochastic variational inference is\nchallenging when the approximate posterior is far from the true posterior, due\nto high variance in the gradient estimates. We propose a novel rejection\nsampling step that discards samples from the variational posterior which are\nassigned low likelihoods by the model. Our approach provides an arbitrarily\naccurate approximation of the true posterior at the expense of extra\ncomputation. Using a new gradient estimator for the resulting unnormalized\nproposal distribution, we achieve average improvements of 3.71 nats and 0.21\nnats over state-of-the-art single-sample and multi-sample alternatives\nrespectively for estimating marginal log-likelihoods using sigmoid belief\nnetworks on the MNIST dataset.", + "authors": "Aditya Grover, Ramki Gummadi, Miguel Lazaro-Gredilla, Dale Schuurmans, Stefano Ermon", + "published": "2018-04-05", + "updated": "2018-04-05", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.AI", + "cs.LG", + "cs.NE" + ], + "main_content": "INTRODUCTION

Latent variable models trained using stochastic variational inference can learn complex, high dimensional distributions [Hoffman et al., 2013, Ranganath et al., 2014]. Learning typically involves maximization of a lower bound to the intractable log-likelihood of the observed data, marginalizing over the latent, unobserved variables. To scale to large datasets, inference is amortized by introducing a recognition model approximating the true posterior over the latent variables, conditioned on the observed data [Dayan et al., 1995, Gershman and Goodman, 2014]. The generative and recognition models are jointly trained and commonly parameterized using deep neural networks. While this provides flexibility, it also leads to expectations without any closed form expressions in the learning objective and corresponding gradients. The general approach to stochastic optimization of such objectives involves Monte Carlo estimates of the gradients using the variational posterior (a.k.a.
the recognition model) as a proposal distribution [Mnih and Rezende, 2016]. A simple feed forward network, however, may not capture the full complexity of the posterior, a difficulty which shows up in practice as high variance in the gradients estimated with respect to the parameters of the proposal distribution. There is a vast body of prior work in variance reduction for stochastic optimization, including recent work focusing on variational methods for generative modeling. The standard approach is to use score function estimators with appropriate baselines [Glynn, 1990, Williams, 1992, Fu, 2006]. Many continuous distributions are also amenable to reparameterization, which transforms the original problem of taking gradients with respect to the parameters of the proposal to the simpler problem of taking gradients with respect to a deterministic function [Kingma and Welling, 2014, Rezende et al., 2014, Titsias and Lázaro-Gredilla, 2014]. Finally, a complementary technique for variance reduction is the use of multi-sample objectives which compute importance weighted gradient estimates based on multiple samples from the proposal [Burda et al., 2016, Mnih and Rezende, 2016]. We discuss these approaches in Section 2.

In this work, we propose a new class of estimators for variational learning based on rejection sampling. The variational rejection sampling approach modifies the sampling procedure into a two-step process: first, a proposal distribution (in our case, the variational posterior of a generative model) proposes a sample, and then we explicitly accept or reject this sample based on a novel differentiable accept-reject test. The test is designed to reject samples from the variational posterior that are assigned low likelihoods by the generative model, wherein the threshold for rejection can be controlled based on the available computation. We show how this procedure leads to a modification of the original variational posterior to a richer family of approximating resampled proposal distributions. The modification is defined implicitly [Mohamed and Lakshminarayanan, 2016], since the only requirement from the original variational posterior is that it should permit efficient sampling. Hence, our soft accept-reject test provides a knob to smoothly interpolate between plain importance sampling with a fixed variational posterior (no rejections) and obtaining samples from the exact posterior in the limit (with potentially high rejection rate), thereby trading off statistical accuracy for computational cost. Further, even though the resampled proposal is unnormalized due to the introduction of an accept-reject test, we can surprisingly derive unbiased gradient estimates with respect to the model parameters that only require the unnormalized density estimates of the resampled proposal, leading to an efficient learning algorithm. Empirically, we demonstrate that variational rejection sampling outperforms competing single-sample and multi-sample approaches by 3.71 nats and 0.21 nats respectively on average for estimating marginal log-likelihoods using sigmoid belief networks on the MNIST dataset.

2 BACKGROUND

In this section, we present the setup for stochastic optimization of expectations of arbitrary functions with respect to parameterized distributions. We also discuss prior work applicable in the context of variational learning.
We use upper-case symbols to denote probability distributions and assume they admit densities on a suitable reference measure, denoted by the corresponding lower-case notation. Consider the following objective:

$\mathcal{L}(\theta, \phi) = \mathbb{E}_{z \sim Q_\phi}[f_{\theta,\phi}(z)] \quad (1)$

where $\theta$ and $\phi$ denote sets of parameters and $Q_\phi$ is a parameterized sampling distribution over z, which can be discrete or continuous. We will assume that sampling z from $Q_\phi$ is efficient, and suppress subscript notation in expectations from $z \sim Q_\phi$ to simply Q wherever the context is clear. We are interested in optimizing the expectation of a function $f_{\theta,\phi}$ with respect to the sampling distribution $Q_\phi$ using gradient methods. In general, $f_{\theta,\phi}$ and the density $q_\phi$ need not be differentiable with respect to $\theta$ and $\phi$. Such objectives are intractable to even evaluate in general, but unbiased estimates can be obtained efficiently using Monte Carlo techniques. The gradients of the objective with respect to $\theta$ are given by:

$\nabla_\theta \mathcal{L}(\theta, \phi) = \mathbb{E}_Q[\nabla_\theta f_{\theta,\phi}(z)].$

As long as $f_{\theta,\phi}$ is differentiable with respect to $\theta$, we can compute unbiased estimates of the gradients using Monte Carlo. There are two primary classes of estimators for computing gradients with respect to $\phi$, which we discuss next.

Score function estimators. Using the fact that $\nabla_\phi q_\phi = q_\phi \nabla_\phi \log q_\phi$, the gradients with respect to $\phi$ can be expressed as:

$\nabla_\phi \mathcal{L}(\theta, \phi) = \mathbb{E}_Q[\nabla_\phi f_{\theta,\phi}(z)] + \mathbb{E}_Q[f_{\theta,\phi}(z) \nabla_\phi \log q_\phi(z)].$

The first term can be efficiently estimated using Monte Carlo if $f_{\theta,\phi}$ is differentiable with respect to $\phi$. The second term, referred to as the score function estimator or the likelihood-ratio estimator or REINFORCE by different authors [Fu, 2006, Glynn, 1990, Williams, 1992], requires gradients with respect to the log density of the sampling distribution and can suffer from large variance [Glasserman, 2013, Schulman et al., 2015]. Hence, these estimators are used in conjunction with control variates (also referred to as baselines). A control variate, c, is any constant or random variable (it could even be a function of z if we can correct for its bias) positively correlated with $f_{\theta,\phi}$ that reduces the variance of the estimator without introducing any bias:

$\mathbb{E}_Q[f_{\theta,\phi}(z) \nabla_\phi \log q_\phi(z)] = \mathbb{E}_Q[(f_{\theta,\phi}(z) - c) \nabla_\phi \log q_\phi(z)].$

Reparameterization estimators. Many continuous distributions can be reparameterized such that it is possible to obtain samples from the original distribution by applying a deterministic transformation to a sample from a fixed distribution [Kingma and Welling, 2014, Rezende et al., 2014, Titsias and Lázaro-Gredilla, 2014]. For instance, if the sampling distribution is an isotropic Gaussian, $Q_\phi = \mathcal{N}(\mu, \sigma^2 I)$, then a sample $z \sim Q_\phi$ can be equivalently obtained by sampling $\epsilon \sim \mathcal{N}(0, I)$ and passing it through a deterministic function, $z = g_{\mu,\sigma}(\epsilon) = \mu + \sigma\epsilon$. This allows exchanging the gradient and expectation, giving a gradient with respect to $\phi$ after reparameterization as:

$\nabla_\phi \mathcal{L}(\theta, \phi) = \mathbb{E}_{\epsilon \sim S}[\nabla_z f_{\theta,\phi}(z) \nabla_\phi g_\phi(\epsilon)]$

where S is a fixed sampling distribution and $z = g_\phi(\epsilon)$ is a deterministic transformation.
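A minimal PyTorch sketch contrasting the two estimators for a Gaussian $Q_\phi$ (our own illustration; for simplicity f is assumed to have no direct dependence on $\phi$):

```python
import torch

def score_function_grad(mu, log_sigma, f, c=0.0):
    """Single-sample score-function (REINFORCE) estimate of the gradient
    of E_Q[f(z)] w.r.t. (mu, log_sigma), with a scalar baseline c.
    Works even if f itself is not differentiable in z."""
    q = torch.distributions.Normal(mu, log_sigma.exp())
    z = q.sample()  # .sample() blocks gradients through z
    surrogate = (f(z) - c) * q.log_prob(z).sum()
    return torch.autograd.grad(surrogate, (mu, log_sigma))

def reparameterized_grad(mu, log_sigma, f):
    """Single-sample reparameterized estimate via z = mu + sigma * eps,
    eps ~ N(0, I); requires f to be differentiable in z."""
    eps = torch.randn_like(mu)
    z = mu + log_sigma.exp() * eps
    return torch.autograd.grad(f(z), (mu, log_sigma))
```

Here mu and log_sigma would be leaf tensors created with requires_grad=True, e.g., mu = torch.zeros(2, requires_grad=True), and f a scalar-valued function such as lambda z: (z ** 2).sum(). Both estimators are unbiased; their applicability and variance differ, as discussed next.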
Reparameterized gradient estimators typically have lower variance but are not widely applicable, since they require $g_\phi$ and $f_{\theta,\phi}$ to be differentiable with respect to $\phi$ and z respectively, unlike score function estimators [Glasserman, 2013, Schulman et al., 2015]. Recent work has tried to bridge this gap by reparameterizing continuous relaxations of discrete distributions (called Concrete distributions) that give low-variance but biased gradient estimates [Maddison et al., 2017, Jang et al., 2017], and by deriving gradient estimators that interpolate between score function estimators and reparameterization estimators for distributions that can be simulated using acceptance-rejection algorithms, such as the Gamma and Dirichlet distributions [Ruiz et al., 2016, Naesseth et al., 2017]. Further reductions in the variance of reparameterization estimators are possible, as explored in recent work, potentially introducing bias [Roeder et al., 2017, Miller et al., 2017, Levy and Ermon, 2018].

2.1 Variational learning

We can cast variational learning as an objective of the form given in Eq. (1). Consider a generative model that specifies a joint distribution $p_\theta(x, z)$ over the observed variables x and latent variables z respectively, parameterized by $\theta$. We assume the true posterior $p_\theta(z|x)$ over the latent variables is intractable, and we introduce a variational approximation to the posterior $q_\phi(z|x)$ represented by a recognition network and parameterized by $\phi$. The parameters of the generative model and the recognition network are learned jointly [Kingma and Welling, 2014, Rezende et al., 2014] by optimizing an evidence lower bound (ELBO) on the marginal log-likelihood of a datapoint x:

$\log p_\theta(x) \geq \mathbb{E}_Q\left[\log \frac{p_\theta(x, z)}{q_\phi(z|x)}\right] \triangleq \text{ELBO}(\theta, \phi). \quad (2)$

Besides reparameterization estimators that were introduced in the context of variational learning, there has been considerable research in the design of control variates (CV) for variational learning using the more broadly applicable score function estimators [Paisley et al., 2012]. In particular, Wingate and Weber [2013] and Ranganath et al. [2014] use simple scalar CV, NVIL proposed input-dependent CV [Mnih and Gregor, 2014], and MuProp combines input-dependent CV with deterministic first-order Taylor approximations to the mean-field approximation of the model [Gu et al., 2016]. Recently, REBAR used CV based on the Concrete distribution to give low-variance, unbiased gradient updates [Tucker et al., 2017], which has been subsequently generalized to a more flexible parametric version in RELAX [Grathwohl et al., 2018].

In a parallel line of work, there is an increasing effort to learn models with more expressive posteriors. Major research in this direction focuses on continuous latent variable models; see, e.g., Gregor et al. [2014, 2015], Salimans et al. [2015], Rezende and Mohamed [2015], Chen et al. [2017], Song et al. [2017], Grover and Ermon [2018] and the references therein. Closely related to the current work is Gummadi [2014], which originally proposed a resampling scheme to improve the richness of the posterior approximation and derived unbiased estimates of gradients for the KL divergence from arbitrary unnormalized posterior approximations. Related work for discrete latent variable models is scarce.
Related work for discrete latent variable models is scarce. Hierarchical models impose a prior over the discrete latent variables to induce dependencies between the variables [Ranganath et al., 2016], which can also be specified as an undirected model [Kuleshov and Ermon, 2017]. On the theoretical side, random projections of discrete posteriors have been shown to provide tight bounds on the quality of the variational approximation [Zhu and Ermon, 2015, Grover and Ermon, 2016, Hsu et al., 2016].

Multi-sample estimators. Multi-sample objectives improve the family of distributions represented by variational posteriors by trading off computational efficiency for statistical accuracy. Learning algorithms based on these objectives do not introduce additional parameters but instead draw multiple samples from the variational posterior to reduce the variance in gradient estimates as well as to tighten the ELBO. A multi-sample ELBO is given as:
$$\log p_\theta(x) \geq \mathbb{E}_{z_1,\ldots,z_k \sim Q_\phi}\left[\log \frac{1}{k} \sum_{i=1}^{k} \frac{p_\theta(x, z_i)}{q_\phi(z_i|x)}\right]. \quad (3)$$
Biased gradient estimators using similar objectives were first used by Raiko et al. [2015] for structured prediction. Burda et al. [2016] showed that a multi-sample ELBO is a tighter lower bound on the log-likelihood than the ELBO. Further, they derived unbiased gradient estimates for optimizing variational autoencoders trained using the objective in Eq. (3). VIMCO generalized this to discrete latent variable models with arbitrary Monte Carlo objectives using a score function estimator with per-sample control variates [Mnih and Rezende, 2016], which serves as a point of comparison in our experiments. Recently, Naesseth et al. [2018] proposed an importance weighted multi-sample objective for probabilistic models of dynamical systems based on sequential Monte Carlo.

3 THE VRS FRAMEWORK

[Figure 1: The resampled posterior approximation (b-e) gets closer (in terms of KL divergence) to a target 2D discrete distribution (a) as we decrease the parameter T, which controls the acceptance probability a. The triples shown are T, a, and the KL divergence to the target: (b) ∞, 1, 18; (c) 10, 0.5, 3.1; (d) 0, 0.2, 0.3; (e) −5, 0.01, 1e-3.]

To motivate variational rejection sampling (VRS), consider the ELBO objective in Eq. (2) for any fixed $\theta$ and $x$. This is maximized when the variational posterior matches the true posterior $p_\theta(z|x)$. However, in practice, the approximate posterior could be arbitrarily far from the true posterior, which we seek to mitigate by rejection sampling.

3.1 The Resampled ELBO (R-ELBO)

Consider an alternate sampling distribution for the variational posterior with the density defined below:
$$r_{\theta,\phi}(z|x, T) \propto q_\phi(z|x)\, a_{\theta,\phi}(z|x, T) \quad (4)$$
where $a_{\theta,\phi}(z|x, T) \in (0, 1]$ is an acceptance probability function that could depend on $\theta$, $\phi$, and additional parameter(s) $T$. Unlike $p$, $q$, and $r$, note that $a_{\theta,\phi}(z|x, T)$ does not represent a density over the latent variables $z$, but simply a function that maps each possible $z$ to a number between 0 and 1; hence, it denotes the probability of acceptance for each $z$. In order to sample from $R_{\theta,\phi}$, we follow the two-step sampling procedure defined in Algorithm 1. Hence, computing Monte Carlo expectations with respect to the modified proposal involves resampling from the original proposal due to an additional accept-reject step. We refer to such sampling distributions as resampled proposal distributions.
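Although the normalization constant of the resampled proposal is intractable in closed form, its Monte Carlo estimation is straightforward for any acceptance function; a sketch (function names are ours):

```python
import numpy as np

def estimate_log_zr(x, T, sample_q, log_accept, n=1000):
    """Monte Carlo estimate of log Z_R(x, T) = log E_{z~Q}[a(z|x, T)].

    sample_q(x) -> one proposal sample z; log_accept(z, x, T) -> log a(z|x, T).
    """
    log_a = np.array([log_accept(sample_q(x), x, T) for _ in range(n)])
    return float(np.logaddexp.reduce(log_a) - np.log(n))
```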
Algorithm 1 Sampler for $R_{\theta,\phi}(z|x, T)$
input: $a_{\theta,\phi}(z|x, T)$, $Q_\phi(z|x)$
output: $z \sim R_{\theta,\phi}(z|x, T)$
1: while True do
2:   $z \leftarrow$ sample from proposal $Q_\phi(z|x)$.
3:   Compute acceptance probability $a_{\theta,\phi}(z|x, T)$.
4:   Sample uniform: $u \sim U[0, 1]$.
5:   if $u < a_{\theta,\phi}(z|x, T)$ then
6:     Output sample $z$.
7:   end if
8: end while

The resampled proposal defines a new evidence lower bound on the marginal log-likelihood of $x$, which we refer to as the "resampled ELBO", or R-ELBO:
$$\log p_\theta(x) \geq \mathbb{E}_R\left[\log \frac{p_\theta(x, z)\, Z_R(x, T)}{q_\phi(z|x)\, a_{\theta,\phi}(z|x, T)}\right] \triangleq \text{R-ELBO}(\theta, \phi) \quad (5)$$
where $Z_R(x, T) = \mathbb{E}_Q[a_{\theta,\phi}(z|x, T)]$ is the (generally intractable) normalization constant of the resampled proposal distribution. To make the resampling framework described above work, we need to define a suitable acceptance function and derive low-variance Monte Carlo gradient estimators with respect to $\theta$ and $\phi$ for the R-ELBO, which we discuss next.

3.2 Acceptance probability functions

The general intuition behind designing an acceptance probability function is that it should allow the resampled posterior to come "close" to the target posterior $p_\theta(z|x)$ (possibly at the cost of extra computation). While there could be many possible ways of designing such acceptance probability functions, we draw inspiration from rejection sampling [Halton, 1970]. In order to draw samples from a target distribution $T(z)$, a rejection sampler first draws samples from an easy-to-sample distribution $z \sim S(z)$ with larger-or-equal support, i.e., $s(z) > 0$ wherever $t(z) > 0$. Then, provided we have a fixed, finite upper bound $M \in [1, \infty)$ on the likelihood ratio $t(z)/s(z)$, we can obtain samples from the target by accepting samples from $s(z)$ with probability $\frac{t(z)}{M s(z)}$. The choice of $M$ guarantees that the acceptance probability is less than or equal to 1, and the overall probability of any accepted sample $z$ is proportional to $\frac{t(z)}{M s(z)} s(z)$, which gives us $z \sim T(z)$ as desired. The constant $M$ has to be large enough that the acceptance probability does not exceed 1, but a very high value of $M$ leads to an increase in computation due to a higher rejection rate. If the target is only known up to a normalization constant, rejection sampling can still be used, provided $M$ is large enough to ensure that the acceptance probability never exceeds 1. However, we do not know in general how large $M$ should be, and even if we did, it would be computationally infeasible to actually use it in a practical algorithm. A natural approximation that departs from the typical rejection sampler is to accept proposed samples with probability $\min\left[1, \frac{t(z)}{M s(z)}\right]$ for some $M$ that is no longer guaranteed to dominate the likelihood ratios across the entire state space. In the setting of variational learning, the target corresponds to the true but intractable posterior, which can be specified up to a normalization constant as $p_\theta(z|x) \propto p_\theta(x, z)$ for any fixed $\theta$ and $x$. If $Q_\phi(z|x)$ denotes the proposal distribution and $M(T)$ is any function of the threshold parameter $T$, the acceptance probability for the approximate rejection sampler is given by:
$$a_{\theta,\phi}(z|x, T) = \min\left[1, \frac{p_\theta(x, z)}{M(T)\, q_\phi(z|x)}\right].$$
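With an acceptance probability in hand, Algorithm 1 is a few lines of Python; a sketch, where the callables and their signatures are our assumptions:

```python
import numpy as np

def sample_resampled_proposal(x, T, sample_q, accept_prob, rng=None):
    """Algorithm 1: draw one sample from R(z|x, T) by resampling Q(z|x).

    sample_q(x) -> one proposal sample z; accept_prob(z, x, T) -> value in (0, 1].
    """
    rng = rng if rng is not None else np.random.default_rng()
    while True:
        z = sample_q(x)                        # step 2: propose from Q
        if rng.uniform() < accept_prob(z, x, T):
            return z                           # steps 4-6: accept; otherwise resample
```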
To get a fully differentiable approximation to the min operator, we consider:
$$a_{\theta,\phi}(z|x, T) = 1 \Big/ \max\left[1, \frac{M(T)\, q_\phi(z|x)}{p_\theta(x, z)}\right] \approx 1 \Big/ \left[1^t + \left(\frac{M(T)\, q_\phi(z|x)}{p_\theta(x, z)}\right)^t\right]^{1/t}$$
where the approximation in the last step holds for large $t$, or when either of the two terms in the max expression dominates the other. For $t = 1$, we get the exponentiated negative softplus function, which we use as the acceptance probability function in the remainder of this paper. We leave other approximations to future work. Letting $T = -\log M$, the log probability of acceptance is parameterized as:
$$\log a_{\theta,\phi}(z|x, T) = -\log[1 + \exp(l_{\theta,\phi}(z|x, T))] = -[l_{\theta,\phi}(z|x, T)]_+ \quad (6)$$
where $l_{\theta,\phi}(z|x, T) = -\log p_\theta(x, z) + \log q_\phi(z|x) - T$ and $[*]_+$ denotes the softplus function, i.e., $\log(1 + e^{*})$.

Informally, the resampling scheme of Algorithm 1 with the acceptance probability function in Eq. (6) enforces the following behavior: samples from the approximate posterior that disagree (as measured by the log-likelihoods) with the target posterior beyond a level implied by the threshold $T$ have an exponentially decaying probability of getting accepted, while the remaining samples see negligible interference from resampling. When the proposed sample $z$ from $Q_\phi$ is assigned a small likelihood by $p_\theta$, the random variable $l_{\theta,\phi}(z|x, T)$ is correspondingly large with high probability (and linear in the negative log-likelihood assigned by $p_\theta$), resulting in a low acceptance probability. Conversely, when $p_\theta$ assigns a high likelihood to $z$, we get a higher acceptance probability. Furthermore, a large value of the scalar bias $T$ results in an acceptance probability of 1, recovering the regular variational inference setting as a special case. At the other extreme, for a small value of $T$, we get the behavior of a rejection sampler with high computational cost that is also close to the target distribution in KL divergence. More formally, Theorem 1 shows that the KL divergence can be improved monotonically by decreasing $T$. However, a smaller value of $T$ requires more aggressive rejection and, thereby, more computation.

Theorem 1. For fixed $\theta$, $\phi$, the KL divergence between the approximate and true posteriors, $\mathrm{KL}(R_{\theta,\phi}(z|x, T)\,\|\,P_\theta(z|x))$, is monotonically increasing in $T$, where $R_{\theta,\phi}(z|x, T)$ is the resampled proposal distribution with the acceptance probability function in Eq. (6). Furthermore, the behavior of the sampler in Algorithm 1 interpolates between the following two extremes:
• As $T \to +\infty$, $R_{\theta,\phi}(z|x, T)$ is equivalent to $Q_\phi(z|x)$, with perfect sampling efficiency for the accept-reject step, i.e., $a_{\theta,\phi}(z|x, T) \to 1$.
• As $T \to -\infty$, $R_{\theta,\phi}(z|x, T)$ is equivalent to $P_\theta(z|x)$, with the sampling efficiency of a plain rejection sampler, i.e., $a_{\theta,\phi}(z|x, T) \to 0 \;\; \forall z$.

This phenomenon is illustrated in Figure 1, where we approximate an example 2D discrete target distribution on a 5 × 5 grid with a uniform proposal distribution plus resampling. With no resampling (T = ∞), the approximation is far from the target.
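This trade-off is easy to reproduce numerically on a small discrete example in the spirit of Figure 1; the sketch below uses a random target on a 5 × 5 grid with a uniform proposal (all specific choices here are our own, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
log_p = np.log(rng.dirichlet(np.ones(25)))     # random target on a 5x5 grid, flattened
log_q = np.full(25, -np.log(25.0))             # uniform proposal

for T in [np.inf, 10.0, 0.0, -5.0]:
    l = -log_p + log_q - T
    log_a = -np.logaddexp(0.0, l)              # log acceptance probability, Eq. (6)
    log_r = log_q + log_a                      # unnormalized resampled proposal, Eq. (4)
    r = np.exp(log_r - np.logaddexp.reduce(log_r))
    kl = float(np.sum(r * (np.log(r) - log_p)))               # KL(R || P)
    z_r = float(np.exp(np.logaddexp.reduce(log_q + log_a)))   # acceptance rate Z_R
    print(f"T={T:>5}: acceptance={z_r:.3f}, KL={kl:.4f}")
```

Decreasing T drives the KL toward zero while the acceptance rate (and hence sampling efficiency) drops, mirroring the triples reported in Figure 1.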
As $T$ is reduced, Figure 1 demonstrates progressive improvement in the posterior quality, both visually and via an estimate of the KL divergence from the approximation to the target, along with an increasing computational cost reflected in the lower acceptance probabilities. In summary, we can express the R-ELBO as:
$$\text{R-ELBO}(\theta, \phi) = \log p_\theta(x) - \mathrm{KL}(R_{\theta,\phi}(z|x, T)\,\|\,P_\theta(z|x)). \quad (7)$$
Theorem 1 and Eq. (7) give the following corollary.

Corollary 1. The R-ELBO gets tighter by decreasing $T$, but becomes more expensive to compute.

With an appropriate acceptance probability function, we can therefore traverse the computational-statistical trade-off for maximum likelihood estimation by adaptively tuning the threshold $T$ based on the available computation.

3.3 Gradient estimation

The resampled proposal distribution in Eq. (4) is unnormalized, with an intractable normalization constant $Z_R(x, T) = \mathbb{E}_Q[a_{\theta,\phi}(z|x, T)]$. The presence of an intractable normalization constant seems challenging for both evaluation and stochastic optimization of the R-ELBO. Even though the constant cannot be computed in closed form,¹ we can nevertheless compute Monte Carlo estimates of its gradients, as we show in Lemma 1 in the appendix. The resulting R-ELBO gradients are summarized below in Theorem 2.

¹Note that Monte Carlo estimates of the partition function can be obtained efficiently for evaluation.

Theorem 2. Let $\mathrm{COV}_R(A(z), B(z))$ denote the covariance of the two random variables $A(z)$ and $B(z)$, where $z \sim R_{\theta,\phi}$. Then:
• The R-ELBO gradients with respect to $\phi$:
$$\nabla_\phi \text{R-ELBO}(\theta, \phi) = \mathrm{COV}_R\left(A_{\theta,\phi}(z|x, T),\, B_{\theta,\phi}(z|x, T)\right)$$
where the covariance is between the following random variables:
$$A_{\theta,\phi}(z|x, T) \triangleq \log p_\theta(x, z) - \log q_\phi(z|x) - [l_{\theta,\phi}(z|x, T)]_+$$
$$B_{\theta,\phi}(z|x, T) \triangleq \left(1 - \sigma(l_{\theta,\phi}(z|x, T))\right) \nabla_\phi \log q_\phi(z|x).$$
• The R-ELBO gradients with respect to $\theta$:
$$\nabla_\theta \text{R-ELBO}(\theta, \phi) = \mathbb{E}_R[\nabla_\theta \log p_\theta(x, z)] - \mathrm{COV}_R\left(A_{\theta,\phi}(z|x, T),\, \sigma(l_{\theta,\phi}(z|x, T))\, \nabla_\theta \log p_\theta(x, z)\right)$$
where $\sigma(*)$ denotes the sigmoid function, i.e., $\sigma(*) = 1/(1 + \exp(-*))$.

In the above expressions, the gradients are expressed as the covariance of two random variables that are functions of the latent variables sampled from the approximate posterior $R_{\theta,\phi}(z|x, T)$. Hence, we only need samples from $R_{\theta,\phi}$ for learning, which can be obtained using Algorithm 1, followed by Monte Carlo estimation analogous to the estimation of the usual ELBO gradients.
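In code, the $\phi$-gradient in Theorem 2 is just a covariance of per-sample quantities; a minimal sketch, assuming per-sample arrays and an unbiased covariance estimator (for which the leave-one-out estimator of Eq. (9) below can be used; all function names here are ours):

```python
import numpy as np
from scipy.special import expit  # sigmoid

def relbo_grad_phi(log_p_xz, log_q, score_phi, T, cov_estimator):
    """phi-gradient of the R-ELBO (Theorem 2) from S samples z_i ~ R(z|x, T).

    log_p_xz, log_q: shape (S,) arrays of log p(x, z_i) and log q(z_i|x);
    score_phi: shape (S, d) per-sample gradients of log q(z_i|x) w.r.t. phi;
    cov_estimator(A, B): unbiased covariance estimate, e.g. Eq. (9).
    """
    l = -log_p_xz + log_q - T
    A = log_p_xz - log_q - np.logaddexp(0.0, l)     # A = log p - log q - softplus(l)
    B = (1.0 - expit(l))[:, None] * score_phi        # B = (1 - sigmoid(l)) * score
    return cov_estimator(A, B)
```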
4 LEARNING ALGORITHM

A practical implementation of variational rejection sampling, as shown in Algorithm 2, requires several algorithmic design choices, which we discuss in this section.

4.1 Threshold selection heuristic

The monotonicity of the KL divergence in Theorem 1 suggests that it is desirable to choose a value of $T$ as low as computationally feasible for obtaining the highest accuracy. However, the quality of the approximate posterior $Q_\phi(z|x)$ for a fixed parameter $\phi$ could vary significantly across different examples $x$. This would require making $T$ dependent on $x$. Although learning $T$ in a parametric way is one possibility, in this work we restrict attention to a simple estimation-based approach that reduces the design choice to a single hyperparameter, which can be adjusted to trade extra computation for accuracy. For each fixed $x$, let $L_{\theta,\phi}(x)$ denote the probability distribution of the scalar random variable $-\log p_\theta(x, z) + \log q_\phi(z|x)$, where $z \sim Q_\phi(z|x)$. Let $Q_L$ denote the quantile function² for any given 1-D distribution $L$. For each quantile parameter $\gamma \in (0, 1]$, we consider a heuristic family of threshold parameters given $x$, $\phi$, $\theta$, defined as:
$$T_\gamma(x, \theta, \phi) \triangleq Q_{L_{\theta,\phi}(x)}(\gamma). \quad (8)$$
For example, for $\gamma = 0.5$, this is the median of $L_{\theta,\phi}(x)$. Eq. (8) implies that the acceptance probability stays roughly in the range of $\gamma$ for most samples. This is due to the fact that the negative log of the acceptance probability, defined in Eq. (6) as $[l_{\theta,\phi}(z|x, T)]_+$, is positive approximately with probability $1 - \gamma$, an event which is likely to result in a rejection. In Algorithm 2, we compute a Monte Carlo estimate of the threshold and denote the resulting value computed from $N$ samples as $\hat{T}^N_\gamma(x, \theta, \phi)$ (Lines 3-8). This estimation is done independently from the SGD updates, once every $F$ epochs, to save computational cost, and it also implies that $T$ is not continuously updated as a function of $\theta$, $\phi$. Technically speaking, this introduces a slight bias in the gradients through their dependence on $T$, but we ignore this correction since it only happens once every few epochs.

²Recall that for a given CDF $F(x)$, the quantile function is its 'inverse', namely $Q(p) = \inf\{x \in \mathbb{R} : p \leq F(x)\}$.

Algorithm 2 Variational Rejection Sampling
input: Network architectures for $p_\theta(x, z)$, $q_\phi(z|x)$; quantile hyperparameter $\gamma \in (0, 1)$; initial parameters $\theta_0$, $\phi_0$; threshold update frequency $F$; quantile estimation sample count $N$; covariance estimation sample count $S \geq 2$; SGD-based optimizer OPT; dataset $\{x_k\}_{k=1}^{K}$; number of epochs $E$.
output: Final estimates $\theta$, $\phi$.
1: Initialize $\theta \leftarrow \theta_0$; $\phi \leftarrow \phi_0$; $T(x) = +\infty \;\; \forall x$.
2: for $e \in \{1, \ldots, E\}$ do
3:   if $e \bmod F = 0$ then
4:     for each $x$ in dataset do
5:       Sample $Z_N \leftarrow \{z_1, \ldots, z_N\} \sim q_\phi(z|x)$.
6:       $T(x) \leftarrow \hat{T}^N_\gamma(x, \theta, \phi)$, the Monte Carlo estimate of Eq. (8) based on samples $Z_N$.
7:     end for
8:   end if
9:   for each $x$ in dataset do
10:    Draw $S$ independent samples $\{z_1, \ldots, z_S\} \sim R_{\theta,\phi}(z|x, T(x))$ using Algorithm 1.
11:    Use Theorem 2 and Eq. (9) with $\{z_1, \ldots, z_S\}$ to estimate gradients $\hat{g}_\theta, \hat{g}_\phi$.
12:    Update $\theta, \phi \leftarrow \mathrm{OPT}(\theta, \phi, \hat{g}_\theta, \hat{g}_\phi)$.
13:  end for
14: end for
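The Monte Carlo threshold $\hat{T}^N_\gamma$ in Lines 3-8 is an empirical quantile of the negative log importance weights; a minimal sketch (names ours):

```python
import numpy as np

def estimate_threshold(x, gamma, sample_q, log_joint, log_q, n_samples=100):
    """Monte Carlo estimate of T_gamma(x) in Eq. (8) (Lines 3-8 of Algorithm 2)."""
    zs = [sample_q(x) for _ in range(n_samples)]
    ell = np.array([-log_joint(x, z) + log_q(x, z) for z in zs])
    return float(np.quantile(ell, gamma))  # empirical gamma-quantile of -log p + log q
```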
4.2 Computing covariance estimates

To compute an unbiased Monte Carlo estimate of the covariance terms in the gradients, we need to subtract the mean of at least one random variable while forming the product term. In order to do this in Algorithm 2 (Lines 10-11), we process a fixed batch of (accepted) samples per gradient update and, for each sample, use all-but-one to compute the mean estimate to be subtracted, similar to the local learning signals proposed in Mnih and Rezende [2016]. This requires generating $S \geq 2$ samples from $R_{\theta,\phi}$ simultaneously at each step to be able to compute each gradient. More precisely, the leave-one-out unbiased Monte Carlo estimator for the covariance of two random variables $A$, $B$ is defined as follows. Let $(a_1, b_1), \ldots, (a_S, b_S) \sim (A, B)$ be $S$ independent samples from the joint pair $(A, B)$, and let $\hat{m}_A$ denote the sample mean for $A$: $\hat{m}_A \triangleq \frac{1}{S}\sum_{i=1}^{S} a_i$. Then the covariance estimate is given by:
$$\widehat{\mathrm{COV}}_R(A, B) \triangleq \frac{1}{S-1} \sum_{i=1}^{S} (a_i - \hat{m}_A)\, b_i. \quad (9)$$

4.3 Hyperparameters and overall algorithm

In summary, Algorithm 2 involves the following hyperparameters: $S$, the number of samples for estimating the covariance; $\gamma$, the quantile used for setting thresholds; and $F$, the number of epochs between updates of $T(x)$.
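Eq. (9) in code; note that centering only one of the two variables suffices for unbiasedness (a sketch):

```python
import numpy as np

def loo_covariance(a, b):
    """Leave-one-out unbiased covariance estimate of Eq. (9).

    a: shape (S,) samples of A; b: shape (S,) or (S, d) samples of B
    (e.g., per-sample gradient vectors). Returns a scalar or a length-d vector.
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    centered = a - a.mean()         # subtracting one sample mean preserves unbiasedness
    return np.tensordot(centered, b, axes=1) / (a.shape[0] - 1)
```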
5 EXPERIMENTAL EVALUATION

We evaluated variational rejection sampling (VRS) against competing methods on a diagnostic synthetic experiment and a benchmark density estimation task.

5.1 Diagnostic experiments

In this experiment, we consider a synthetic setup that involves fitting an approximate posterior candidate from a constrained family to a fixed target distribution that clearly falls outside the approximating family. We restrict attention to training a 1-D parameter $\phi$ exclusively (i.e., we do not consider optimization over $\theta$), and to the non-amortized case (i.e., conditioning on $x$ is not applicable). The target distribution is 1-D, with support on the non-negative integers $z \in \{0, 1, \ldots\}$, and is denoted $P(z)$. This distribution, visualized in Figure 2, is obtained by removing the mass on the first $c$ integers of a Poisson distribution with rate $\lambda^* > 0$. More details are given in the Appendix. The approximate proposal is parameterized as $Q_\phi \triangleq \mathrm{Poi}(e^\phi)$, where $\phi$ is an unconstrained scalar, and denotes an (unmodified) Poisson distribution with the (non-negative) rate parameter $e^\phi$.

[Figure 2: Target distribution, P. Figure 3: Acceptance probability vs. SGD iteration. Figure 4: VRS learning dynamics; the x-axis shows the number of total samples (both accepted and rejected) at each SGD iteration: (a) error $\phi - \phi^*$, (b) gradients for $\phi$. Figure 5: VIMCO learning dynamics; the x-axis shows the number of total samples, equal to k times the number of iterations: (a) error $\phi - \phi^*$, (b) gradients for $\phi$.]

Note that for $\mathrm{Poi}(e^\phi)$ to explicitly represent a small mass on $z < c$ would require $\phi \to \infty$, but this would be a bad fit for points just above $c$. As a result, $\{Q_\phi\}$ does not contain candidates close to the target distribution in the sense of KL divergence, yet even a simple resampling modification can transform the raw proposal $Q_\phi$ into a better candidate approximation $R$. In Figures 3 and 4, we illustrate the dynamics of SGD using VRS gradients for approximating $P$. To keep the analysis simple, the threshold $T$ was kept fixed at a constant value during learning. Figure 3 shows the efficiency of the sampler improving as learning progresses, due to a better fit to the target distribution. Figure 4a shows the difference between the current parameter $\phi$ and $\phi^* = \log \lambda^*$ from the target distribution, quickly converging to 0 as learning proceeds. As a benchmark, we evaluated the dynamics based on VIMCO gradients [Mnih and Rezende, 2016]. Figure 5 suggests that the signal in the gradients is too low (i.e., there is high variance in the gradient estimates). This behavior persisted even with much smaller learning rates and larger sample sizes compared to VRS gradients. One explanation is that the VIMCO gradient update for $\phi$ has a term that assigns the same average weight to the entire batch of samples, both good and bad ones (see Eq. (8) in Mnih and Rezende [2016]). In contrast, Algorithm 1 explicitly discards rejected samples from contributing to the gradients. Yet another qualitative aspect that distinguishes VRS gradients from importance weighted multi-sample objective gradients is that Algorithm 1 can dynamically adapt the amount of additional computation spent in resampling based on sample quality, as opposed to having it fixed in advance.

5.2 Generative modeling

We trained sigmoid belief networks (SBN) on the binarized MNIST dataset [LeCun et al., 2010]. Following prior work, the SBN benchmark architectures for this task consist of several linear layers of 200 hidden units each, and the recognition model has the same architecture in the reverse direction. Training such models is sensitive to choices of hyperparameters, and hence we directly compare VRS with published baselines in Table 1. The hyperparameter details for SBNs trained with VRS are given in the Appendix.

Table 1: Test NLL (in nats) for MNIST comparing VRS with published results. Lower is better.

(a) Baseline results from Tucker et al. [2017]
Model / Architecture    200     200-200
NVIL (k = 1)            112.5   99.6
MuProp                  111.7   99.07
REBAR (λ = 0.1)         111.7   99
REBAR                   111.6   99.8
Concrete (λ = 0.1)      111.3   102.8
VRS (IS, γ = 0.95)      106.97  96.38
VRS (RS, γ = 0.95)      106.89  96.30
VRS (IS, γ = 0.9)       106.71  96.26
VRS (RS, γ = 0.9)       106.63  96.36

(b) Baseline results from Mnih and Rezende [2016]
Model / Architecture    200-200-200
NVIL (k = 1)            95.2
NVIL (k = 2)            93.6
NVIL (k = 5)            93.7
NVIL (k = 10)           93.4
NVIL (k = 50)           96.2
RWS (k = 2)             94.6
RWS (k = 5)             93.4
RWS (k = 10)            93.0
RWS (k = 50)            92.5
VIMCO (k = 2)           93.5
VIMCO (k = 5)           92.8
VIMCO (k = 10)          92.6
VIMCO (k = 50)          91.9
VRS (IS, γ = 0.95)      92.01
VRS (RS, γ = 0.95)      91.93
VRS (IS, γ = 0.9)       92.09
VRS (RS, γ = 0.9)       91.69

With regard to key baseline hyperparameters in Table 1, Concrete and REBAR specify a temperature controlling the degree of relaxation (denoted by λ), whereas multi-sample estimators based on importance weighting specify the number of samples k to trade off computation for statistical accuracy. The relevant parameter γ in our case is not directly comparable, but we report results for values of γ where, empirically, the average number of rejections per training example was somewhere between drawing k = 5 and k = 20 samples for an equivalent importance weighted objective, for both γ = 0.95 and γ = 0.9 (with the latter requiring more computation). Additionally, we provide two estimators for evaluating the test lower bound for VRS. For the importance sampled (IS) version, we simply evaluate the ELBO using importance sampling with the original posterior $Q_\phi$. The resampled (RS) version, on the other hand, uses the resampled proposal $R_{\theta,\phi}$, with the partition function $Z_R$ estimated as a Monte Carlo expectation. From the results, we observe that VRS outperforms the other methods, including multi-sample estimators with k as high as 50 that require much greater computation than the VRS models considered.
Generally speaking, the RS estimates are better than the corresponding IS estimates, and decreasing γ improves performance (at the cost of increased computation). 6" }, { "url": "http://arxiv.org/abs/1803.10937v1", "title": "Best arm identification in multi-armed bandits with delayed feedback", "abstract": "We propose a generalization of the best arm identification problem in\nstochastic multi-armed bandits (MAB) to the setting where every pull of an arm\nis associated with delayed feedback. The delay in feedback increases the\neffective sample complexity of standard algorithms, but can be offset if we\nhave access to partial feedback received before a pull is completed. We propose\na general framework to model the relationship between partial and delayed\nfeedback, and as a special case we introduce efficient algorithms for settings\nwhere the partial feedback are biased or unbiased estimators of the delayed\nfeedback. Additionally, we propose a novel extension of the algorithms to the\nparallel MAB setting where an agent can control a batch of arms. Our\nexperiments in real-world settings, involving policy search and hyperparameter\noptimization in computational sustainability domains for fast charging of\nbatteries and wildlife corridor construction, demonstrate that exploiting the\nstructure of partial feedback can lead to significant improvements over\nbaselines in both sequential and parallel MAB.", "authors": "Aditya Grover, Todor Markov, Peter Attia, Norman Jin, Nicholas Perkins, Bryan Cheong, Michael Chen, Zi Yang, Stephen Harris, William Chueh, Stefano Ermon", "published": "2018-03-29", "updated": "2018-03-29", "primary_cat": "cs.LG", "cats": [ "cs.LG", "cs.AI", "stat.ML" ], "main_content": "1 INTRODUCTION

Intelligent agents often need to interact with the environment and make rational decisions that optimize for a suitable objective. One such setting that commonly arises is the best arm identification problem in stochastic multi-armed bandits (MAB) [Bubeck et al., 2009, Audibert and Bubeck, 2010]. In a multi-armed bandit (MAB) problem, an agent is given a set of n finite actions (or arms), each associated with a reward drawn from an arm-specific probability distribution. In a pure exploration setting, the goal is to reliably identify the top-k arms while minimizing the exploration cost. This problem has numerous applications, including optimal experimental design.

We consider a new variant of this problem where the feedback rewards are received after a delay. Delayed feedback is common in the real world. For instance, hypothesis testing in science and engineering often suffers from delayed feedback, since it involves expensive, time-consuming experiments. In one of the motivating applications of this work, we want to search over fast-charging policies for electrochemical batteries to maximize lifetime, overcoming the difficulties posed by lengthy experiments. Even within the field of machine learning, finding the best hyperparameter settings for a given learning algorithm and dataset can be modeled as a best arm identification problem involving a non-trivial delay [Jamieson and Talwalkar, 2016].
However, many scenarios of interest are not complete black boxes during the intermediate time steps before a delayed feedback reward is received. Depending on the application, we often have access to side information in the form of partial feedback that can aid decision making. This could be extra measurements such as temperature and remaining capacity while charging batteries in the aforementioned scenario, or learning curves for hyperparameter optimization. In this work, we propose a general-purpose framework for modeling delayed feedback in MAB and take a deeper dive into several practically relevant instantiations. In particular, we design and analyze algorithms for best arm identification in the fixed confidence setting where the partial feedback are biased or unbiased estimators of the delayed feedback. Our proposed algorithms adaptively tune the mean and confidence estimates wherever the partial feedback reduces the overall uncertainty. We also extend these algorithms to the parallel MAB setting, where we are allowed to pull a batch of arms at every time step [Jun et al., 2016].

Finally, we empirically validate the proposed algorithms on simulated data and real-world datasets drawn from two domains. The first corresponds to experimental design for finding the optimal charging policy for a battery that maximizes overall lifetime [Moura et al., 2017]. In the second domain, we perform hyperparameter optimization for finding the best cut strategy for a standard mixed integer programming solver, with performance tested on a benchmark set of problem instances drawn from computational sustainability [Gomes et al., 2008]. Our experiments demonstrate that accounting for partial feedback can reduce the delayed sample complexity on average by 15.6% and 80.8% for sequential MAB over baselines for the two application scenarios respectively. The corresponding average savings over baselines for parallel MAB are 20.7% and 87.6% respectively.

2 BACKGROUND & MODELING FRAMEWORK

The chief workhorse of our analysis will be the law of iterated logarithms (LIL), which analyzes the limiting behavior of random walks (sequences of pulls for a given arm in our case) defined over sub-Gaussian random variables [Darling and Robbins, 1967]. Several finite LIL bounds have been proposed in the literature; we consider the one proposed by Zhao et al. [2016], which has been shown to outperform others empirically while retaining the same asymptotic behavior. Alternate bounds, such as the one by Jamieson et al. [2014], could also be used, with no effect on the theoretical analysis of this work.

Lemma 1. Let $X^{(1)}, X^{(2)}, \ldots$ be i.i.d. sub-Gaussian random variables with scale parameter $\sigma$ and mean $\mu$. Let $\tau$ be any random variable with domain $\mathbb{N}$. For any $c > 1$, $2a > c$, $b > 0$, the following holds with probability at least $1 - 2\zeta(2a/c)e^{-2b/c}$:
$$\left|\frac{1}{\tau}\sum_{l=1}^{\tau} X^{(l)} - \mu\right| \leq \sigma\sqrt{\frac{a \log(\log_c \tau + 1) + b}{\tau}}$$
where $\zeta$ denotes the Riemann zeta function. The constants in Lemma 1 are chosen such that the lemma holds for a target confidence.
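For concreteness, the failure probability in Lemma 1 can be inverted for $b$ to evaluate the implied confidence radius (the quantity denoted $C(\sigma, \tau, \delta')$ below). The sketch uses $c = 1.1$ and $a = 0.6$, which are our own (valid) choices, not values prescribed by the paper:

```python
import numpy as np
from scipy.special import zeta

def lil_radius(sigma, tau, delta, c=1.1, a=0.6):
    """Confidence radius implied by Lemma 1 (requires c > 1 and 2a > c).

    Solves delta = 2 * zeta(2a/c) * exp(-2b/c) for b, then returns
    sigma * sqrt((a * log(log_c(tau) + 1) + b) / tau).
    """
    assert c > 1 and 2 * a > c
    b = -(c / 2.0) * np.log(delta / (2.0 * zeta(2.0 * a / c)))
    return sigma * np.sqrt((a * np.log(np.log(tau) / np.log(c) + 1.0) + b) / tau)
```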
To simplify the notation, we denote the error probability by $\delta'$ and the right-hand side of Lemma 1 by $C(\sigma, \tau, \delta')$, so that the following holds with probability $1 - \delta'$ for any $\tau \in \mathbb{N}$:
$$\left|\frac{1}{\tau}\sum_{l=1}^{\tau} X^{(l)} - \mu\right| \leq C(\sigma, \tau, \delta'). \quad (1)$$
We consider a stochastic multi-armed bandit (MAB) problem characterized by a set of n arms, indexed by i = 1, . . . , n. Each arm is associated with a fixed, unknown probability distribution with means $\{\mu_i\}_{i=1}^n$. We assume that the means are unique. Without loss of generality, assume that the arm indices are sorted by the means, such that $\mu_1 > \mu_2 > \ldots > \mu_n$. We are interested in the pure exploration setting, also known as the best arm identification problem, where the goal of an agent is to identify the top-k arms (those with the highest means) with a target confidence $1 - \delta$ while minimizing the total time spent on exploration. Exploration in our setting, however, is not the same across the pulls of a given arm. In particular, we assume that each pull of an arm is associated with an unknown (stochastic) delay that contributes to the total exploration time. The presentation in this section assumes a sequential MAB setting, where the agent can pull/run only one arm at a given time step; the alternate parallel MAB setting, where an agent can control a "batch" of arms at once, is discussed in Section 4 [Perchet et al., 2015, Wu et al., 2015, Jun et al., 2016].

Formally, the stochastic data generating process with delayed feedback can be described as follows. At any given start time $t_s$:
1. The agent chooses an arm i.
2. Nature samples a delay $D_s \geq 1$ from an (unknown) arm-specific delay distribution.
3. Nature jointly samples a sequence of partial feedback $(Y_{i,t_s+1}, \ldots, Y_{i,t_s+D_s}) \mid D_s$. The joint distribution of the partial feedback depends on $\mu_i$.
In general, the delay and partial feedback sequence are unknown to the agent at time $t_s$. At time $t_s + \Delta$, where $\Delta \in [1, D_s]$:
4. Nature reveals $Y_{i,t_s+\Delta}$ to the agent. If $\Delta = D_s$, the agent goes to step 1. Otherwise, the agent decides whether to continue the current pull (step 4) or start another pull (step 1), in which case any remaining partial feedback for the current pull will not be observed.
The agent and nature continue to play the above game until the agent has selected a set of candidate top-k arms. The delay $D_s$ can contribute significantly to the total time spent on exploration. Under appropriate assumptions, however, we can exploit the structure in the partial feedback to significantly reduce the overall exploration cost of delayed feedback. The data generating process described above is very general, and one can make many natural assumptions on the distribution of the partial feedback $(Y_{i,t_s+1}, \cdots, Y_{i,t_s+D_s}) \mid D_s$. For instance, we can model the following scenarios:
• Full delayed feedback: The partial feedback at the last delay, $Y_{i,t_s+D_s}$, is sub-Gaussian with mean $\mu_i$ and scale parameter $\sigma_i$. For the intermediate time steps $\Delta \in [1, D_s - 1]$, we have $Y_{i,t_s+\Delta} = 0$, and hence we receive no information about $\mu_i$ at these time steps.
• Incremental partial feedback: The set of partial feedback $Y_{i,t_s+\Delta}$ for every time step $\Delta \in [1, D_s]$ consists of mutually independent, sub-Gaussian random variables with mean $\mu_i/D_s$ and scale parameter $\sigma_i/\sqrt{D_s}$. Hence, the cumulative partial feedback $\sum_{\Delta=1}^{D_s} Y_{i,t_s+\Delta}$ is also sub-Gaussian with mean $\mu_i$ and scale parameter $\sigma_i$.
• Unbiased noisy partial feedback: The partial feedback at the last delay, $Y_{i,t_s+D_s}$, is sub-Gaussian with mean $\mu_i$ and scale parameter $\sigma_i$. For the intermediate time steps $\Delta \in [1, D_s - 1]$, the set of centered partial feedback $Y_{i,t_s+\Delta} - Y_{i,t_s+D_s}$, conditioned on $Y_{i,t_s+D_s}$, consists of mutually independent, sub-Gaussian random variables with zero mean and scale parameter $\sigma_i^{(p)}$.
• Biased noisy partial feedback: The partial feedback at the last delay, $Y_{i,t_s+D_s}$, is sub-Gaussian with mean $\mu_i$ and scale parameter $\sigma_i$. For the intermediate time steps $\Delta \in [1, D_s - 1]$, the set of centered partial feedback $Y_{i,t_s+\Delta} - Y_{i,t_s+D_s}$, conditioned on $Y_{i,t_s+D_s}$, consists of mutually independent, sub-Gaussian random variables with mean $b_i$ and scale parameter $\sigma_i^{(p)}$. Here, $b_i$ is a fixed but unknown bias associated with the partial feedback for the arm.

Note that the standard MAB setting, where we observe the feedback at the immediate next time step, is a special case of the full delayed feedback with a constant delay $D_s = 1$ for every pull. In fact, the algorithms for best arm identification in the full delayed and incremental partial feedback settings can be derived naturally from the standard MAB algorithms with no delays. Specifically, for the full delayed feedback setting, the agent can simply choose to ignore the time instants at which delayed feedback is unavailable. The sample complexity of any such algorithm is hence the number of arm pulls required in the standard MAB setting, weighted by the delay of every pull. The partial feedback settings, however, present an interesting scenario where the agent can extract information from noisy feedback. For such settings, we propose modified algorithms based on racing-style procedures typically used for the standard MAB setting [Maron and Moore, 1994]. Typically, racing algorithms maintain three disjoint arm sets: accepted arms A, rejected arms R, and surviving arms S. Initially, all arms are assigned to the surviving set S. Racing procedures uniformly sample arms while removing them from the surviving set based on confidence bounds.

Algorithm 1 RacingSubroutines
function UpdateArmSets(arm sets A, R, S, top k, confidence bounds $\{\mathrm{LCB}_i, \mathrm{UCB}_i\}_{i\in S}$)
  Initialize $k_t \leftarrow k - |A|$.
  Update $A \leftarrow A \cup \{i \in S \mid \mathrm{LCB}_i > \max^{(k_t+1)}_{j\in S} \mathrm{UCB}_j\}$.
  Update $R \leftarrow R \cup \{i \in S \mid \mathrm{UCB}_i < \max^{(k_t)}_{j\in S} \mathrm{LCB}_j\}$.
  Update $S \leftarrow S \setminus \{R \cup A\}$.
  return A, R, S.
end function
function GetBatchArms(surviving arms S, counts $\{N_i, a_i\}_{i\in S}$, effective batch size e, limit r)
  Initialize new arm pulls $m \leftarrow 0 \in \mathbb{R}^n$.
  for slot $s \in \{1, \cdots, \min(e, |S|r)\}$ do
    Least pulled arm $j \leftarrow \arg\min_{i\in S: a_i \leq r} N_i$
    Update $a_j \leftarrow a_j + 1$. Update $m_j \leftarrow m_j + 1$. Update $N_j \leftarrow N_j + 1$.
  end for
  return m, $\{N_i\}_{i\in S}$, $\{a_i\}_{i\in S}$
end function
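A Python transcription of the UpdateArmSets subroutine above; this is a sketch, with the set/dict containers and the handling of degenerate cases being our choices:

```python
def update_arm_sets(A, R, S, k, lcb, ucb):
    """Racing accept/reject step (sketch of the UpdateArmSets subroutine).

    A, R, S: sets of accepted/rejected/surviving arm indices; lcb, ucb: dicts of
    confidence bounds; kt = k - |A| top arms remain to be identified.
    """
    kt = k - len(A)
    if kt == 0:
        return A, R | S, set()          # all top arms found: reject survivors
    if len(S) <= kt:
        return A | S, R, set()          # all survivors must be top arms
    ucbs = sorted((ucb[i] for i in S), reverse=True)
    lcbs = sorted((lcb[i] for i in S), reverse=True)
    accept = {i for i in S if lcb[i] > ucbs[kt]}      # LCB beats the (kt+1)-th largest UCB
    reject = {i for i in S if ucb[i] < lcbs[kt - 1]}  # UCB below the kt-th largest LCB
    return A | accept, R | reject, S - accept - reject
```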
For convenience, define the lower confidence bounds (LCB) and upper confidence bounds (UCB) for every arm i as:
$$\mathrm{LCB}_i := \hat{\mu}_i - C_i \quad (2)$$
$$\mathrm{UCB}_i := \hat{\mu}_i + C_i \quad (3)$$
where $\hat{\mu}_i$ is the empirical mean of the feedback for arm i, and the confidence bound $C_i$ depends on the particular racing algorithm under consideration. Let $k_t := k - |A|$ be the effective number of top arms remaining to be identified at time step t. Each time we receive a feedback reward (full or partial), the racing procedures update these sets based on the rule that any arm in S whose LCB is greater than the UCB of $|S| - k_t$ arms is accepted. Similarly, any arm in S whose UCB is less than the LCB of $k_t$ arms is rejected. The racing procedure is repeated until S is empty. The pseudocode for the subroutine that updates the arm sets is given in Algorithm 1.

3 SEQUENTIAL MAB

In sequential MAB, we assume that the agent can receive (partial) feedback from only a single arm pull at any given time step; e.g., we can only perform one experiment at a time. We skip a separate discussion of the trivial full feedback (and the related incremental feedback) setting and discuss it only in the context of the noisy feedback settings. For convenience, we denote the partial feedback at the last delay as $X_{i,t_s} = Y_{i,t_s+D_s}$. Here, $X_{i,t_s}$ is a sub-Gaussian random variable with mean $\mu_i$ and scale parameter $\sigma_i$. The proofs of all results in this section are given in the Appendix.

3.1 Unbiased noisy partial feedback

In this setting, an agent has access to unbiased partial feedback at the intermediate time steps before receiving the full delayed feedback. In the following result, we derive a variation of the finite LIL bound for the unbiased partial feedback setting.

Proposition 1. Let $\{Y_{i,t_1+1}, Y_{i,t_1+2}, \ldots, Y_{i,t_1+D_1}, Y_{i,t_2+1}, \ldots, Y_{i,t_2+D_2}, \ldots\}$ denote the partial feedback sequences for the pulls of an arm i started at time steps $t_1, t_2, \ldots$ with delays $D_1, D_2, \ldots$. Then, under the distributional assumptions on the unbiased partial feedback (see Section 2), for any $F \in \mathbb{N}$, $P \in [1, D_F]$, $\delta_f > 0$, $\delta_p > 0$, we have with probability $1 - \delta_f - \delta_p$, for all $i \in [1, n]$:
$$\left|\frac{1}{F}\left[\sum_{f=1}^{F-1} X_{i,t_f} + \frac{1}{P}\sum_{l=1}^{P} Y_{i,t_F+l}\right] - \mu_i\right| \leq C(\sigma_i, F, \delta_f/n) + \frac{1}{F}\, C\!\left(\sigma_i^{(p)}, P, \delta_p/n\right) \quad (4)$$
where $X_{i,t_f} = Y_{i,t_f+D_f}$ by definition.

At any intermediate time step between the start and end of the F-th arm pull, Proposition 1 adaptively "splits" the confidence bounds pertaining to the full delayed feedback for F steps (first term on the RHS) and the partial delayed feedback for the F-th arm pull (second term on the RHS). Contrast this with the full delayed feedback setting, where the following confidence bound holds with probability $1 - \delta$ for all $i \in [1, n]$:
$$\left|\frac{1}{F-1}\sum_{f=1}^{F-1} X_{i,t_f} - \mu_i\right| \leq C(\sigma_i, F-1, \delta/n). \quad (5)$$
To obtain the same target confidence in the two cases above, we constrain $\delta = \delta_f + \delta_p$. Solving for the optimal $\delta_f^*, \delta_p^*$ that minimize the RHS of Eq. (4) under the constraint due to $\delta$ is a convex optimization problem that can be solved in closed form.
Comparing the mean estimators in Eq. (4) and Eq. (5), we note that in the latter case the agent can only use the full delayed feedback up to the (F−1)-th arm pull while waiting for the outcome of the F-th arm pull, whereas the former dynamically incorporates the partial feedback observed for the F-th arm pull.

Algorithm 2 RacingUnbiasedPF (arm parameters $\{i, \sigma_i, \sigma_i^{(p)}\}_{i=1}^n$, top k, confidence δ)
1: Initialize global time step t = 0, surviving $S = \{i\}_{i=1}^n$, accepted A = {}, rejected R = {}.
2: Initialize per-arm full delayed feedback counter $F_i = 0$, empirical means $\hat{\mu}_i = 0$, confidence bounds $\mathrm{LCB}_i = -\infty$, $\mathrm{UCB}_i = \infty$ for all $i \in S$.
3: while S is not empty do
4:   while True do
5:     Increment $t \leftarrow t + 1$.
6:     Collect partial feedback $Y_{a,t}$.
7:     Update $\hat{\mu}^{(p)} \leftarrow (P\hat{\mu}^{(p)} + Y_{a,t})/(P+1)$.
8:     Increment $P \leftarrow P + 1$.
9:     Set $C^{(\mathrm{partial})} \leftarrow C(\sigma_a, F_a+1, \delta_f^*/n) + \frac{C(\sigma_a^{(p)}, P, \delta_p^*/n)}{F_a+1}$.
10:    Choose FOrP $\leftarrow \arg\min\left(C(\sigma_a, F_a, \delta/n),\, C^{(\mathrm{partial})}\right)$.
11:    Update $C_a \leftarrow C(\sigma_a, F_a, \delta/n)$ if FOrP = F else $C^{(\mathrm{partial})}$.
12:    Update $\hat{\mu}_a \leftarrow \hat{\mu}^{(f)}$ if FOrP = F else $\frac{F_a \hat{\mu}^{(f)} + \hat{\mu}^{(p)}}{F_a + 1}$.
13:    Update $\mathrm{LCB}_a$, $\mathrm{UCB}_a$.
14:    A, R, S $\leftarrow$ UpdateArmSets(A, R, S, k, $\{\mathrm{LCB}_i, \mathrm{UCB}_i\}_{i\in S}$).
15:    if $P = D_{a,t_a}$ or $a \notin S$ then
16:      Break  ▷ Pull on termination/elimination
17:    end if
18:  end while
19:  Pull arm a, where $a \leftarrow \arg\min_{a \in S} F_a$.
20:  Initialize start $t_a \leftarrow t$, partial feedback counter P = 0, partial mean $\hat{\mu}^{(p)} = 0$, full mean $\hat{\mu}^{(f)} \leftarrow \hat{\mu}_a$.
21: end while
22: return A

Based on the above analysis, we propose a racing algorithm for the unbiased partial feedback setting, with pseudocode given in Algorithm 2. At any intermediate time step, the agent chooses a mean estimator and a confidence bound for the current arm (Lines 10-13). The choice corresponds to the tighter confidence bound: either the one obtained by optimizing Eq. (4) over $\delta_p, \delta_f$, or the one obtained from Eq. (5), where only the full delayed feedback is considered. Thereafter, the agent invokes the racing subroutine that checks whether a surviving arm can be rejected or accepted (Line 14). If the pull has finished running, or the current arm is itself eliminated (Line 15), the agent pulls a new arm in the next time step, choosing the one with the fewest full delayed feedback observations (Line 19).

We can make some observations about Algorithm 2. First, we see that an agent adopting the proposed algorithm can never do worse than the alternate racing strategy that considers estimates based only on the full delayed feedback. This is because even at the intermediate time steps, the agent considers the mean estimator corresponding to the smaller of the two confidence bounds, which can only reduce the delayed sample complexity of the algorithm.
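Lines 9-11 of Algorithm 2 reduce to comparing two confidence radii. The sketch below scans a grid over the confidence split instead of the closed-form optimum mentioned above, and it reuses a lil_radius helper like the one sketched earlier; both the grid and the helper signature are our assumptions:

```python
import numpy as np

def choose_bound(sigma, sigma_p, F, P, n, delta, lil_radius,
                 grid=np.linspace(0.05, 0.95, 19)):
    """Pick the tighter of the full-feedback radius (Eq. 5 form, Line 10 of
    Algorithm 2) and the split partial-feedback radius (Eq. 4 form, Line 9).

    F: completed pulls of the arm; P: partial feedback count of the running
    pull. The grid scan over delta_f stands in for the closed-form split.
    """
    full = lil_radius(sigma, F, delta / n) if F >= 1 else np.inf
    partial = min(
        lil_radius(sigma, F + 1, f * delta / n)
        + lil_radius(sigma_p, P, (1 - f) * delta / n) / (F + 1)
        for f in grid
    )
    return ("full", full) if full <= partial else ("partial", partial)
```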
Whenever an arm pull has finished, the agent also updates the mean and confidence interval by arithmetically averaging only the full delayed feedback. Using partial feedback is impractical at such time steps, since the partial feedback only introduces noise and does not provide any additional information about the true mean. If the maximum possible delay associated with any arm pull is given by $D_{\max}$, then we can trivially extend bounds for the sample complexity of racing-style procedures [Jamieson and Nowak, 2014] to derive similar bounds on the delayed sample complexity, with an extra multiplicative factor of $D_{\max}$.¹ This is similar to what one would expect from the full delayed feedback setting and is not surprising for Algorithm 2, since in the absence of any additional assumptions the partial feedback could be completely uninformative and the algorithm would choose to ignore it. We believe domain-specific assumptions about the delay distribution and about the noise associated with the partial feedback as a function of time could lead to a tighter analysis; this is an interesting direction for future work. The correctness of Algorithm 2 can be summarized as follows.

Theorem 1. Assuming the delay associated with any arm pull is bounded, Algorithm 2 outputs the top-k arms with probability at least $1 - \delta$.

¹The delayed sample complexity of an algorithm refers to the total number of time steps (including delays) before termination.

To get further intuition about the working of Algorithm 2, consider the situation where all arms have been pulled once except one. When the last remaining arm is pulled for the first time, an algorithm in the full delayed feedback setting will necessarily have to wait for the pull to finish running before eliminating arms, whereas Algorithm 2 can potentially start eliminating arms right after the first partial delayed feedback is received.

3.2 Biased noisy partial feedback

The partial feedback at the intermediate time steps before a full delayed feedback can also correspond to biased estimates of the full delayed feedback. Although the bias for each arm is unknown, it can be estimated empirically from differences between the full delayed feedback and the partial feedback at the corresponding intermediate time steps. Formally, we assume the bias for a particular arm is an unknown constant $b_i$ and derive the following LIL bounds.

Proposition 2. Let $\{Y_{i,t_1+1}, Y_{i,t_1+2}, \ldots, Y_{i,t_1+D_1}, Y_{i,t_2+1}, \ldots, Y_{i,t_2+D_2}, \ldots\}$ denote the partial feedback sequences for the pulls of an arm i started at time steps $t_1, t_2, \ldots$ with delays $D_1, D_2, \ldots$ and bias $b_i$.
Then, under the distributional assumptions on the partial feedback (see Section 2), for any $F \in \mathbb{N}\setminus\{1\}$, $P \in [1, D_F]$, $\delta_f > 0$, $\delta_p > 0$, $\delta_b > 0$, we have with probability $1 - \delta_f - \delta_p - \delta_b$, for all $i \in [1, n]$:
$$\left|\frac{1}{F}\left[\sum_{f=1}^{F-1} X_{i,t_f} + \frac{1}{P}\sum_{p=1}^{P}\left(Y_{i,t_F+p} - Z_{i,F}\right)\right] - \mu_i\right| \leq C(\sigma_i, F, \delta_f/n) + \frac{1}{F}\left[C\!\left(\sigma_i^{(p)}, P, \delta_p/n\right) + C\!\left(\sigma_i^{(p)}, F-1, \delta_b/n\right)\right] \quad (6)$$
where
$$Z_{i,F} = \frac{1}{F-1}\sum_{f=1}^{F-1}\left(\frac{\sum_{p=1}^{D_f-1} Y_{i,t_f+p}}{D_f - 1} - X_{i,t_f}\right).$$
Comparing Eq. (6) with Eq. (5) under the constraint $\delta = \delta_f + \delta_p + \delta_b$, we see that the mean estimator takes into account the partial feedback as before, but it also has a bias correction term. The bias correction term is an empirical average of the biases observed from the past full delayed feedback. This correction has the effect of introducing additional uncertainty (third term on the RHS), and we need at least one full feedback to estimate the bias before we can use the above bound. The corresponding racing algorithm runs similarly to Algorithm 2, with the key difference being that the mean estimator corresponds to the minimum of the confidence bounds in Eq. (5) and Eq. (6), where the RHS of Eq. (6) is specified for the optimal $\delta_f^*, \delta_p^*, \delta_b^*$ minimizing the expression under the constraint due to $\delta$. We defer the pseudocode for this setting to the Appendix (see Algorithm 4).

[Figure 1: Synthetic experiments evaluating performance as a function of (a) the number of arms with bounded means, (b) the number of arms with free means, and (c) the delay, reporting the ratio $t_{\text{partial}}/t_{\text{full}}$. Top: sequential. Bottom: parallel. Lower is better.]

4 PARALLEL MAB

In parallel MAB, an agent has the additional ability to "accumulate" bulk information by controlling a batch of arm pulls. We extend the (b, r) setting proposed by Jun et al. [2016], where the agent is allowed to run at most b arm pulls in parallel at any given time step, with an upper limit r on the number of pulls of each arm. Even the full delayed feedback setting becomes interesting, as the agent can exploit information from arm pulls which have finished running in parallel to accept/reject delayed arm pulls that are still running, thereby avoiding the pitfalls of long delays.

Algorithm 3 BatchRacingFullDF (arm parameters $\{i, \sigma_i\}_{i=1}^n$, top k, confidence δ, batch b, limit r)
1: Initialize global time step t = 0, pull status count running = 0, surviving arms $S = \{i\}_{i=1}^n$, accepted arms A = {}, rejected arms R = {}.
2: Initialize per-arm global pull counts $N_i = 0$, running pull counts $a_i = 0$, full delayed feedback $F_i = 0$, empirical means $\hat{\mu}_i = 0$, confidence bounds $\mathrm{LCB}_i = -\infty$, $\mathrm{UCB}_i = \infty$ for all $i \in S$.
3: while S is not empty do
4:   if running > 0 then
5:     Increment $t \leftarrow t + 1$.
6:     Collect batch full delayed feedback Y.
7:     for all $Y_{h,t} \in Y$ do
8:       Update $\hat{\mu}_h \leftarrow (F_h \hat{\mu}^{(f)} + Y_{h,t})/(F_h + 1)$.
9:       Increment $F_h \leftarrow F_h + 1$.
10:      Update $\mathrm{LCB}_h$, $\mathrm{UCB}_h$.
11:      Decrement $a_h \leftarrow a_h - 1$.
12:    end for
13:    if Y is not empty then
14:      A, R, S $\leftarrow$ UpdateArmSets(A, R, S, k, $\{\mathrm{LCB}_i, \mathrm{UCB}_i\}_{i\in S}$).
15:      Decrement running $\leftarrow$ running $- |Y|$.
16:    end if
17:  end if
18:  Update arms m, counts $\{N_i, a_i\}_{i\in S} \leftarrow$ GetBatchArms(S, $\{N_i, a_i\}_{i\in S}$, b $-$ running, r).
19:  Pull every arm $j \in m$, $m_j$ times.
20:  Update running $\leftarrow$ running $+ \sum_{j\in m} m_j$.
21: end while
22: return A

The pseudocode for the proposed batch racing algorithm with full delayed feedback is given in Algorithm 3. At every time step, the agent pulls a batch of arms with the least pull counts $N_i$ that obeys the (b, r) constraints (Lines 18-19). Whenever we obtain at least one full delayed feedback, we update the arm sets as per the racing criteria (Lines 13-15). The algorithms for the noisy partial feedback settings discussed in Section 3 can be extended to parallel MAB in a similar manner and are skipped here to keep the presentation clean.
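A sketch of the GetBatchArms subroutine used in Line 18, which greedily assigns free slots to the least-pulled surviving arms; we interpret the limit r as a cap on concurrently running pulls per arm (our reading of the pseudocode):

```python
def get_batch_arms(S, pulls, running, e, r):
    """Assign up to e free slots to the least-pulled surviving arms.

    S: surviving arm indices; pulls[i]: total pulls of arm i; running[i]:
    currently running pulls of arm i (capped at r). Returns arm -> new pulls.
    """
    new = {}
    for _ in range(min(e, len(S) * r)):
        eligible = [i for i in S if running[i] < r]
        if not eligible:
            break
        j = min(eligible, key=lambda i: pulls[i])  # least-pulled eligible arm
        running[j] += 1
        pulls[j] += 1
        new[j] = new.get(j, 0) + 1
    return new
```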
The theoretical analysis of the batch MAB setting in Jun et al. [2016] builds on the analysis of standard MAB in ways that are independent of the choice of LIL bounds; hence, a merged analysis for delayed batch MAB using the LIL bounds for delayed feedback (as in Propositions 1 and 2) suggests a reduction factor of b in the corresponding upper bounds.

5 EXPERIMENTS

We empirically validated the proposed algorithms on a simulated setting and two real-world datasets. All experiments use an error probability of δ = 0.05, and we observed that in each case the algorithm obtains the desired confidence level empirically. For the parallel MAB setting, we set b = r = 10.

5.1 Simulated data

We performed an ablation study of the proposed algorithms for sequential and parallel MAB under different settings of delayed feedback. All experiments were repeated for 100 random runs, such that the standard errors are vanishingly small, and the number of top arms to be identified, k, is set to 0.2n. We quantify improvement as the ratio ($= t_{\text{partial}}/t_{\text{full}}$) of the time taken by Algorithm 2 or its parallel MAB extension (i.e., $t_{\text{partial}}$) to the time taken by a full delayed feedback racing procedure (i.e., $t_{\text{full}}$). We evaluate performance as a function of the following problem parameters.

Number of arms. To analyze the difference in performance as a function of the number of arms (n), we further consider two distributions of means. In the bounded means case, we set the means of the arms as $\mu_i = c - (i/n)\tilde{c}$ for any choice of constants c and $\tilde{c} > 0$. Hence, the range of the means does not vary with n. In Figure 1a, we observe that accounting for unbiased partial feedback can give gains of up to 25% and 40% for sequential and parallel MAB respectively when the number of arms is low. The gains are reduced when the number of arms is large, which suggests that partial feedback is less advantageous in scenarios where a large number of full pulls is required to disambiguate very closely spaced means. In the free means case, we set the means of the arms as $\mu_i = c - \tilde{c}i$ for any choice of constants c and $\tilde{c} > 0$. Here, the range of the means increases with n. From the results in Figure 1b, we observe that the gains due to partial feedback improve as the number of arms increases. This suggests that when the relative separation between the means of the arms is fixed, Algorithm 2 and its parallel MAB extension quickly eliminate arms with extreme means (very high or very low), unlike the racing algorithms that wait for full delayed feedback.

Delay. Here, we fix n = 100 and vary the delay of the arms. For all settings of the delay in Figure 1c, Algorithm 2 and its parallel MAB extension require a significantly lower fraction of the time, with the lowest ratios observed being 0.59 and 0.57 for sequential and parallel MAB respectively. While we did not see much variation in improvements for sequential MAB, the improvements are better for longer delays in the case of parallel MAB.

5.2 Policy search for fast battery charging

For any given battery chemistry, the charging (and discharging) policy has a significant impact on the lifetime of the cells.
However, a single run of a particular policy takes months to complete, since every cell needs to be repeatedly charged and discharged until the end of its lifetime. Hence, delayed feedback can significantly slow down the search procedure. The true, unknown reward for any arm (charging policy) is stochastic and corresponds to the lifetime of the battery [Harris et al., 2017, Baumhöfer et al., 2014, Schuster et al., 2015].² We model the search for the best charging policy for the Li-ion battery chemistry as a best arm identification problem in a stochastic MAB with n = 40 arms and k = 1. The true mean cycle lives, cell-to-cell variances, and delays are obtained from a battery charging simulator [Moura et al., 2017, Perez et al., 2016]. While a battery cell undergoes charging and discharging, we can additionally monitor key indicators such as voltage, temperature, and internal resistance. Predictive modeling of lifetime based on these factors is an active area of research and can serve the purpose of a partial feedback estimator [Burns et al., 2013, Dubarry et al., 2017]. We assume the existence of such an estimator and test the robustness of our algorithm by evaluating the relative improvements obtained from Algorithm 2 while varying the noise $\sigma_i^{(p)}$ associated with the partial feedback. The results are shown in Figure 2.

[Figure 2: Experiments on battery charging; $t_{\text{partial}}/t_{\text{full}}$ as a function of the scale parameter $\sigma^{(p)}$: (a) sequential, (b) parallel.]

When the estimator is "trustworthy" (low $\sigma_i^{(p)}$), we can achieve improvements of up to 35% in the number of experiments required. As expected, the gains diminish for poorer models of partial feedback, in which case the algorithm can choose to ignore the noisy feedback.

5.3 Hyperparameter optimization for mixed integer programming

The CPLEX solver³ for mixed integer programming has a host of hyperparameters, including options to switch on or off different cut strategies employed by the solver during the search process. We model the task of finding the best cut strategy as a stochastic MAB problem with n = 32 arms (i.e., cut strategies) and k = 1. The performance is measured on CORLAT, a benchmark set of 2,000 (maximization) mixed integer linear programming instances derived from real-world data, used for the construction of a wildlife corridor for grizzly bears in the Northern Rockies region [Gomes et al., 2008, Hutter et al., 2010]. The true mean for each arm is the average of the lower bounds attained by the cut strategy on the feasible instances in the dataset under specified time and resource constraints per instance (10 seconds on 1 core). Every pull of an arm corresponds to running a cut strategy on a sampled problem instance. Instead of waiting for the solver to completely solve (or time out on) a sampled problem instance, we can save computation by using partial feedback about the search process. In particular, the solver outputs the best integral lower bound (LB) and real-valued upper bound (UB) found after executing each cut during search. The final output of the solver is the best lower bound.

²Formally, the lifetime of the cell is defined to be the number of cycles until a battery reaches 80% of its original capacity, at which point the battery is considered dead.
³https://www.ibm.com/software/commerce/optimization/cplex-optimizer/index.html
5.3 Hyperparameter optimization for mixed integer programming

The CPLEX solver³ for mixed integer programming has a host of hyperparameters, including options to switch on or off different cut strategies employed by the solver during the search process. We model the task of finding the best cut strategy as a stochastic MAB problem with $n = 32$ arms (i.e., cut strategies) and $k = 1$.

³https://www.ibm.com/software/commerce/optimization/cplex-optimizer/index.html

Performance is measured on CORLAT, a benchmark set of 2,000 (maximization) mixed integer linear programming instances derived from real-world data used for the construction of a wildlife corridor for grizzly bears in the Northern Rockies region [Gomes et al., 2008, Hutter et al., 2010]. The true mean for each arm is the average of the lower bounds attained by the cut strategy on the feasible instances in the dataset under specified time and resource constraints per instance (10 seconds on 1 core). Every pull of an arm corresponds to running a cut strategy on a sampled problem instance. Instead of waiting for the solver to completely solve (or time out on) a sampled problem instance, we can save computation by using partial feedback about the search process. In particular, the solver outputs the best integral lower bound (LB) and real-valued upper bound (UB) found after executing each cut during search. The final output of the solver is the best lower bound.

To obtain an unbiased partial feedback estimator, we use a training subset of 500 instances to learn a linear model that predicts the final lower bound for a given input instance based on the intermediate lower and upper bounds. The best-arm identification algorithms are tested on the remaining instances in the dataset. Conditioned on a problem instance, the uncertainty associated with the partial feedback, $\sigma_i^{(p)}$, is given by $(\mathrm{UB} - \mathrm{LB})/2$ and shrinks as more time steps elapse. Note that the delays are not fixed and depend on both the cut strategy and the problem instance under consideration. We directly report the final results: the percentage reduction in time taken by the unbiased partial feedback scenario over full delayed feedback is 80.8% and 87.6% for sequential and parallel MAB respectively, stressing the importance of partial feedback for this particular application scenario.
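A minimal sketch of the estimator described above is given below; the featurization (flattening the intermediate (LB, UB) pairs after each of the first few cuts into a fixed-length vector) is an assumption made for illustration, as the actual features are not specified in the text.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_train, n_cuts = 500, 10

# Placeholder arrays standing in for recorded solver traces.
X_train = rng.random((n_train, 2 * n_cuts))  # (LB, UB) after each cut
y_train = rng.random(n_train)                # final lower bound per instance

model = LinearRegression().fit(X_train, y_train)

def partial_feedback(trace_features, lb, ub):
    estimate = model.predict(trace_features[None, :])[0]
    sigma_p = (ub - lb) / 2.0  # uncertainty shrinks as the solver closes the gap
    return estimate, sigma_p
```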
6 RELATED WORK

Early work in pure exploration is attributed to Bechhofer [1958] and Paulson [1964], who studied this problem in the context of optimal experimental design. Modern-day literature can be categorized into either the fixed budget or the fixed confidence setting. Algorithms for the fixed budget setting strive to maximize the probability of identifying the top-k arms [Audibert and Bubeck, 2010, Bubeck et al., 2013, Kaufmann et al., 2015]. In the fixed confidence setting, which is the one we consider in this paper, the goal is to minimize the number of pulls needed to attain a target confidence [Maron and Moore, 1994, Bubeck et al., 2009]. See Gabillon et al. [2012] for a unified treatment of the two settings. Algorithms for the fixed confidence setting can be broadly classified into racing-style procedures, which sample arms uniformly and eliminate sub-optimal arms [Maron and Moore, 1994, Even-Dar et al., 2002], and UCB/LUCB-style procedures, which adaptively sample arms without explicit elimination. We direct the reader to the excellent survey by Jamieson and Nowak [2014], which summarizes the major advancements in the analysis of the sample complexity of these algorithms. Algorithmic generalizations of best-arm identification include top-k identification [Heidrich-Meisner and Igel, 2009] and the parallel MAB setting for batch arm pulls [Perchet et al., 2015, Jun et al., 2016, Wu et al., 2015], among others.

While the delayed feedback framework we propose is novel to the pure exploration problem, online learning with delays has been studied previously in the regret minimization setting [Weinberger and Ordentlich, 2002, Joulani et al., 2013, Desautels et al., 2014]. In particular, algorithms designed specifically for hyperparameter optimization have enjoyed great success. Krueger et al. [2015] propose a modified cross-validation procedure performed on increasing subsets of the data, coupled with a sequential testing strategy to eliminate poor parameter configurations early on. Jamieson and Talwalkar [2016] and Li et al. [2017] recently proposed algorithms for hyperparameter optimization based on non-stochastic MAB. Here, the arms correspond to hyperparameter configurations, and a pull is equivalent to observing a fixed sequence of losses. For many real-world problems, we have access to a shared structure across arms that makes the overall problem amenable to Bayesian optimization techniques [Snoek et al., 2012, Eggensperger et al., 2013, Snoek et al., 2015, Feurer et al., 2015, McIntire et al., 2016b,a]. Combining the LIL bounds we proposed for noisy partial feedback with Bayesian multi-armed bandits [Srinivas et al., 2010, Krause and Ong, 2011, Hoffman et al., 2014] is a promising extension that we are pursuing for our ongoing real-world application on the efficient search of fast-charging policies for Li-ion battery cells [Ermon et al., 2012].

7" }, { "url": "http://arxiv.org/abs/1803.10459v4", "title": "Graphite: Iterative Generative Modeling of Graphs", "abstract": "Graphs are a fundamental abstraction for modeling relational data. However,\ngraphs are discrete and combinatorial in nature, and learning representations\nsuitable for machine learning tasks poses statistical and computational\nchallenges. In this work, we propose Graphite, an algorithmic framework for\nunsupervised learning of representations over nodes in large graphs using deep\nlatent variable generative models. Our model parameterizes variational\nautoencoders (VAE) with graph neural networks, and uses a novel iterative graph\nrefinement strategy inspired by low-rank approximations for decoding. On a wide\nvariety of synthetic and benchmark datasets, Graphite outperforms competing\napproaches for the tasks of density estimation, link prediction, and node\nclassification. Finally, we derive a theoretical connection between message\npassing in graph neural networks and mean-field variational inference.", "authors": "Aditya Grover, Aaron Zweig, Stefano Ermon", "published": "2018-03-28", "updated": "2019-05-15", "primary_cat": "stat.ML", "cats": [ "stat.ML", "cs.LG", "cs.NE", "cs.SI" ], "main_content": "1. Introduction

Latent variable generative modeling is an effective approach for unsupervised representation learning of high-dimensional data (Loehlin, 1998). In recent years, representations learned by latent variable models parameterized by deep neural networks have shown impressive performance on many tasks such as semi-supervised learning and structured prediction (Kingma et al., 2014; Sohn et al., 2015). However, these successes have been largely restricted to specific data modalities such as images and speech. In particular, it is challenging to apply current deep generative models to large-scale graph-structured data, which arise in a wide variety of domains across the physical, information, and social sciences.

To effectively model the relational structure of large graphs for deep learning, prior works have proposed to use graph neural networks (Gori et al., 2005; Scarselli et al., 2009; Bruna et al., 2013). A graph neural network learns node-level representations by parameterizing an iterative message passing procedure between nodes and their neighbors. The tasks that have benefited from graph neural networks, including semi-supervised learning (Kipf & Welling, 2017) and few-shot learning (Garcia & Bruna, 2018), involve encoding an input graph to a final output representation (such as the labels associated with the nodes).
The inverse problem of learning to decode a hidden representation into a graph, as in the case of a latent variable generative model, is a pressing challenge that we address in this work.

We propose Graphite, a latent variable generative model for graphs based on variational autoencoding (Kingma & Welling, 2014). Specifically, we learn a directed model expressing a joint distribution over the entries of the adjacency matrix of a graph and latent feature vectors for every node. Our framework uses graph neural networks for inference (encoding) and generation (decoding). While the encoding is straightforward and can use any existing graph neural network, the decoding of these latent features to reconstruct the original graph is done using a multi-layer iterative procedure. The procedure starts with an initial reconstruction based on the inferred latent features and iteratively refines the reconstructed graph via a message passing operation. The iterative refinement can be efficiently implemented using graph neural networks.

In addition to the Graphite model, we also contribute to the theoretical understanding of graph neural networks by deriving equivalences between message passing in graph neural networks and mean-field inference in latent variable models via kernel embeddings (Smola et al., 2007; Dai et al., 2016), formalizing what has thus far been largely speculated empirically, to the best of our knowledge (Yoon et al., 2018).

In contrast to recent works focusing on the generation of small graphs, e.g., molecules (You et al., 2018; Li et al., 2018), the Graphite framework is particularly suited to representation learning on large graphs. Such representations are useful for several downstream tasks. In particular, we demonstrate that representations learned via Graphite empirically outperform competing approaches to graph representation learning for the tasks of density estimation (over entire graphs), link prediction, and semi-supervised node classification on synthetic and benchmark datasets.

2. Preliminaries

Throughout this work, we assume that all probability distributions admit absolutely continuous densities on a suitable reference measure. Consider a weighted undirected graph $G = (V, E)$, where $V$ and $E$ denote the index sets of nodes and edges respectively. Additionally, we denote the (optional) feature matrix associated with the graph as $X \in \mathbb{R}^{n\times m}$, for an $m$-dimensional signal associated with each node; e.g., these could correspond to the user attributes in a social network. We represent the graph structure using a symmetric adjacency matrix $A \in \mathbb{R}^{n\times n}$, where $n = |V|$, and the entries $A_{ij}$ denote the weight of the edge between nodes $i$ and $j$.

2.1. Weisfeiler-Lehman algorithm

The Weisfeiler-Lehman (WL) algorithm (Weisfeiler & Lehman, 1968; Douglas, 2011) is a heuristic test of graph isomorphism between any two graphs $G$ and $G'$. The algorithm proceeds in iterations. Before the first iteration, we label every node in $G$ and $G'$ with a scalar isomorphism-invariant initialization (e.g., node degrees). That is, if $G$ and $G'$ are assumed to be isomorphic, then an isomorphism-invariant initialization is one where the matching nodes establishing the isomorphism in $G$ and $G'$ have the same labels (a.k.a. messages).
Let $H^{(l)} = [h^{(l)}_1, h^{(l)}_2, \cdots, h^{(l)}_n]^T$ denote the vector of node messages at iteration $l \in \mathbb{N} \cup \{0\}$, with $H^{(0)}$ given by the initialization above. At every iteration $l > 0$, we perform a relabeling of the nodes in $G$ and $G'$ based on a message passing update rule:

$$H^{(l)} \leftarrow \mathrm{hash}\left(A H^{(l-1)}\right) \quad (1)$$

where $A$ denotes the adjacency matrix of the corresponding graph and $\mathrm{hash} : \mathbb{R}^n \to \mathbb{R}^n$ is any suitable hash function, e.g., a non-linear activation. Hence, the message for every node is computed as a hashed sum of the messages from the neighboring nodes (since $A_{ij} \neq 0$ only if $i$ and $j$ are neighbors). We repeat the process for a specified number of iterations, or until convergence. If the label sets for the nodes in $G$ and $G'$ are equal (which can be checked using sorting in $O(n \log n)$ time), then the algorithm declares the two graphs $G$ and $G'$ to be isomorphic.

The "k-dim" WL algorithm extends the 1-dim algorithm above by simultaneously passing messages of length $k$ (each initialized with some isomorphism-invariant scheme). A positive test for isomorphism requires equality in all $k$ dimensions for nodes in $G$ and $G'$ after the termination of message passing. This algorithmic test is a heuristic that guarantees no false negatives but can give false positives, wherein two non-isomorphic graphs are falsely declared isomorphic. Empirically, the test has been shown to fail on some regular graphs but gives excellent performance on real-world graphs (Shervashidze et al., 2011).

2.2. Graph neural networks

Intuitively, the WL algorithm encodes the structure of the graph in the form of messages at every node. Graph neural networks (GNNs) build on this observation and parameterize an unfolding of the iterative message passing procedure, which we describe next.

A GNN consists of many layers, indexed by $l \in \mathbb{N}$, with each layer associated with an activation $\eta_l$ and a dimensionality $d_l$. In addition to the input graph $A$, every layer $l \in \mathbb{N}$ of the GNN takes as input the activations from the previous layer $H^{(l-1)} \in \mathbb{R}^{n\times d_{l-1}}$, a family of linear transformations $\mathcal{F}_l : \mathbb{R}^{n\times n} \to \mathbb{R}^{n\times n}$, and a matrix of learnable weight parameters $W_l \in \mathbb{R}^{d_{l-1}\times d_l}$ with optional bias parameters $B_l \in \mathbb{R}^{n\times d_l}$. Recursively, the layer-wise propagation rule in a GNN is given by:

$$H^{(l)} \leftarrow \eta_l\left(B_l + \sum_{f\in\mathcal{F}_l} f(A)\, H^{(l-1)} W_l\right) \quad (2)$$

with the base cases $H^{(0)} = X$ and $d_0 = m$. Here, $m$ is the feature dimensionality. If there are no explicit node features, we set $H^{(0)} = I_n$ (identity) and $d_0 = n$. Several variants of graph neural networks have been proposed in prior work. For instance, graph convolutional networks (GCNs) (Kipf & Welling, 2017) instantiate graph neural networks with a permutation-equivariant propagation rule:

$$H^{(l)} \leftarrow \eta_l\left(B_l + \tilde{A} H^{(l-1)} W_l\right) \quad (3)$$

where $\tilde{A} = D^{-1/2} A D^{-1/2}$ is the symmetric diagonalization of $A$ given the diagonal degree matrix $D$ (i.e., $D_{ii} = \sum_{(i,j)\in E} A_{ij}$), with the same base cases as before. Comparing the above with the WL update rule in Eq. (1), we can see that the activations for every layer in a GCN are computed via parameterized, scaled activations (messages) of the previous layer being propagated over the graph, with the hash function implicitly specified using an activation function $\eta_l$.
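For concreteness, a minimal sketch of the GCN propagation rule in Eq. (3) follows; the clamp guarding against isolated (zero-degree) nodes is a defensive addition for the sketch, not part of the rule itself.

```python
import torch

def gcn_layer(A, H, W, b, activation=torch.relu):
    # One GCN step: H' = eta(b + D^{-1/2} A D^{-1/2} H W), Eq. (3).
    deg = A.sum(dim=1)
    d_inv_sqrt = deg.clamp(min=1e-8).pow(-0.5)
    A_norm = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    return activation(b + A_norm @ H @ W)

n, m, d = 5, 3, 4
A = (torch.rand(n, n) > 0.5).float()
A = torch.triu(A, diagonal=1); A = A + A.T   # random symmetric adjacency
H = torch.rand(n, m)                          # H^(0) = X
out = gcn_layer(A, H, torch.rand(m, d), torch.zeros(n, d))
```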
Our framework is agnostic to instantiations of the message passing rule of a graph neural network in Eq. (2), and we use graph convolutional networks for experimental validation due to the permutation equivariance property. For brevity, we denote the output $H$ of the final layer of a multi-layer graph neural network with input adjacency matrix $A$, node feature matrix $X$, and parameters $\langle W, B\rangle$ as $H = \mathrm{GNN}_{\langle W,B\rangle}(A, X)$, with appropriate activation functions and linear transformations applied at each hidden layer of the network.

3. Generative Modeling via Graphite

[Figure 1: Latent variable model for Graphite (plate notation). Observed evidence variables in gray.]

For generative modeling of graphs, we are interested in learning a parameterized distribution over adjacency matrices $A$. In this work, we restrict ourselves to modeling graph structure only, and any additional information in the form of node features $X$ is incorporated as conditioning evidence.

In Graphite, we adopt a latent variable approach for modeling the generative process. That is, we introduce latent variable vectors $Z_i \in \mathbb{R}^k$ and evidence feature vectors $X_i \in \mathbb{R}^m$ for each node $i \in \{1, 2, \cdots, n\}$, along with an observed variable for each pair of nodes $A_{ij} \in \mathbb{R}$. Unless necessary, we use the succinct representation $Z \in \mathbb{R}^{n\times k}$, $X \in \mathbb{R}^{n\times m}$, and $A \in \mathbb{R}^{n\times n}$ for the variables henceforth. The conditional independencies between the variables can be summarized in the directed graphical model (using plate notation) in Figure 1. We can learn the model parameters $\theta$ by maximizing the marginal likelihood of the observed adjacency matrix conditioned on $X$:

$$\max_\theta \log p_\theta(A|X) = \log \int p_\theta(A, Z|X)\,dZ \quad (4)$$

Here, $p(Z|X)$ is a fixed prior distribution over the latent features of every node, e.g., an isotropic Gaussian. If we have multiple graphs in our dataset, we maximize the expected log-likelihood over all the corresponding adjacency matrices. We can obtain a tractable, stochastic evidence lower bound (ELBO) to the above objective by introducing a variational posterior $q_\phi(Z|A, X)$ with parameters $\phi$:

$$\log p_\theta(A|X) \geq \mathbb{E}_{q_\phi(Z|A,X)}\left[\log \frac{p_\theta(A, Z|X)}{q_\phi(Z|A, X)}\right] \quad (5)$$

The lower bound is tight when the variational posterior $q_\phi(Z|A, X)$ matches the true posterior $p_\theta(Z|A, X)$, and hence maximizing the above objective optimizes for the parameters that define the best approximation to the true posterior within the variational family (Blei et al., 2017). We now discuss parameterizations for specifying $q_\phi(Z|A, X)$ (i.e., the encoder) and $p_\theta(A|Z, X)$ (i.e., the decoder).

Encoding using forward message passing. Typically we use the mean-field approximation for defining the variational family, and hence:

$$q_\phi(Z|A, X) \approx \prod_{i=1}^{n} q_{\phi_i}(Z_i|A, X) \quad (6)$$

Additionally, we would like to make distributional assumptions on each variational marginal density $q_{\phi_i}(Z_i|A, X)$ such that it is reparameterizable and easy to sample, so that the gradients w.r.t. $\phi_i$ have low variance (Kingma & Welling, 2014). In Graphite, we assume isotropic Gaussian variational marginals with diagonal covariance. The parameters of the variational marginals $q_{\phi_i}(Z_i|A, X)$ are specified using a graph neural network:

$$\mu, \sigma = \mathrm{GNN}_\phi(A, X) \quad (7)$$

where $\mu$ and $\sigma$ denote the vectors of means and standard deviations for the variational marginals $\{q_{\phi_i}(Z_i|A, X)\}_{i=1}^{n}$, and $\phi = \{\phi_i\}_{i=1}^{n}$ is the full set of variational parameters.
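To make Eq. (7) concrete, a minimal sketch of the amortized encoder follows; the two-headed GNN output and the reparameterized sampling are standard choices assumed here for illustration, since the text only states that a GNN produces $\mu$ and $\sigma$.

```python
import torch

def encode(gnn_mu, gnn_logsigma, A, X):
    # Row i of mu/sigma parameterizes q_{phi_i}(Z_i | A, X), per Eq. (7).
    mu = gnn_mu(A, X)                  # (n, k) means
    sigma = gnn_logsigma(A, X).exp()   # (n, k) std devs, positive by construction
    # Reparameterization trick: gradients flow through mu and sigma.
    Z = mu + sigma * torch.randn_like(mu)
    return Z, mu, sigma

n, k = 4, 8
A, X = torch.rand(n, n), torch.rand(n, 3)
gnn_mu = lambda A, X: torch.zeros(n, k)        # stand-ins for real GNN stacks
gnn_logsigma = lambda A, X: torch.zeros(n, k)
Z, mu, sigma = encode(gnn_mu, gnn_logsigma, A, X)
```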
Decoding using reverse message passing. For specifying the observation model $p_\theta(A|Z, X)$, we cannot directly use a graph neural network, since we do not have an input graph for message passing. To sidestep this issue, we propose an iterative two-step approach that alternates between defining an intermediate graph and then gradually refining this graph through message passing. Formally, given a latent matrix $Z$ and an input feature matrix $X$, we iterate over the following sequence of operations:

$$\hat{A} = \frac{ZZ^T}{\|Z\|^2} + \mathbf{1}\mathbf{1}^T, \quad (8)$$
$$Z^* = \mathrm{GNN}_\theta(\hat{A}, [Z|X]) \quad (9)$$

where the second argument to the GNN is a concatenation of $Z$ and $X$. The first step constructs an intermediate weighted graph $\hat{A} \in \mathbb{R}^{n\times n}$ by applying an inner product of $Z$ with itself and adding an additional constant of 1 to ensure the entries are non-negative. The second step performs a pass through a parameterized graph neural network. We can repeat the above sequence to gradually refine the feature matrix $Z^*$. The final distribution over graph parameters is obtained using an inner product step on $Z^* \in \mathbb{R}^{n\times k^*}$ akin to Eq. (8), where $k^* \in \mathbb{N}$ is determined by the network architecture. For efficient sampling, we assume the observation model factorizes:

$$p_\theta(A|Z, X) = \prod_{i=1}^{n}\prod_{j=1}^{n} p^{(i,j)}_\theta(A_{ij}|Z^*). \quad (10)$$

The distribution over the individual edges can be expressed as a Bernoulli or Gaussian distribution for unweighted and real-valued edges respectively. E.g., the edge probabilities for an unweighted graph are given as $\mathrm{sigmoid}(Z^* {Z^*}^T)$.

Table 1. Mean reconstruction errors and negative log-likelihood estimates (in nats) for autoencoders and variational autoencoders respectively on test instances from six different generative families. Lower is better.

              Erdos-Renyi      Ego              Regular          Geometric        Power Law        Barabasi-Albert
GAE           221.79 ± 7.58    197.3 ± 1.99     198.5 ± 4.78     514.26 ± 41.58   519.44 ± 36.30   236.29 ± 15.13
Graphite-AE   195.56 ± 1.49    182.79 ± 1.45    191.41 ± 1.99    181.14 ± 4.48    201.22 ± 2.42    192.38 ± 1.61
VGAE          273.82 ± 0.07    273.76 ± 0.06    275.29 ± 0.08    274.09 ± 0.06    278.86 ± 0.12    274.4 ± 0.08
Graphite-VAE  270.22 ± 0.15    270.70 ± 0.32    266.54 ± 0.12    269.71 ± 0.08    263.92 ± 0.14    268.73 ± 0.09

3.1. Scalable learning & inference in Graphite

For representation learning of large graphs, we require the encoding and decoding steps in Graphite to be computationally efficient. On the surface, the decoding step involves inner products of potentially dense matrices $Z$, which is an $O(n^2 k)$ operation. Here, $k$ is the dimension of the per-node latent vectors $Z_i$ used to define $\hat{A}$. For any intermediate decoding step as in Eq. (8), we propose to offset this expensive computation by using the associativity of matrix multiplication for the message passing step in Eq. (9). For notational brevity, consider the simplified graph propagation rule for a GNN: $H^{(l)} \leftarrow \eta_l(\hat{A} H^{(l-1)})$, where $\hat{A}$ is defined in Eq. (8). Instead of directly taking an inner product of $Z$ with itself, we note that the subsequent operation involves another matrix multiplication, and hence we can perform right multiplication instead. If $d_l$ and $d_{l-1}$ denote the sizes of the layers $H^{(l)}$ and $H^{(l-1)}$ respectively, then the time complexity of propagation based on right multiplication is $O(nkd_{l-1} + nd_{l-1}d_l)$.
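The following sketch illustrates this associativity trick: it computes $\hat{A}H$ for $\hat{A} = ZZ^T/\|Z\|^2 + \mathbf{1}\mathbf{1}^T$ without ever materializing the $n\times n$ matrix. Interpreting $\|Z\|^2$ as the squared Frobenius norm is an assumption about the normalization in Eq. (8).

```python
import torch

def decode_propagate(Z, H):
    # (Z Z^T / ||Z||^2 + 1 1^T) @ H in O(nk d) time:
    # associativity gives Z (Z^T H), and the rank-one all-ones term
    # reduces to broadcasting the column sums of H over all rows.
    low_rank = Z @ (Z.T @ H) / (Z.norm() ** 2)
    ones_term = H.sum(dim=0, keepdim=True)
    return low_rank + ones_term

n, k, d = 1000, 32, 64
Z, H = torch.randn(n, k), torch.randn(n, d)
out = decode_propagate(Z, H)  # (n, d); no n x n matrix is ever formed
```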
This trick sidesteps the quadratic $n^2$ complexity for decoding in the intermediate layers without any loss in statistical accuracy. The final layer, however, still involves an inner product with respect to $Z^*$ between potentially dense matrices. However, since the edges are generated independently, we can approximate the loss objective by performing a Monte Carlo evaluation of the reconstructed adjacency matrix parameters in Eq. (10). By adaptively choosing the number of entries for the Monte Carlo approximation, we can trade off statistical accuracy for computational budget.

4. Experimental Evaluation

We evaluate Graphite on tasks involving entire graphs, nodes, and edges. We consider two variants of our proposed framework: Graphite-VAE, which corresponds to the directed latent variable model described in Section 3, and Graphite-AE, which corresponds to an autoencoder trained to minimize the error in reconstructing an input adjacency matrix. For unweighted graphs (i.e., $A \in \{0, 1\}^{n\times n}$), the reconstruction terms in the objectives for both Graphite-VAE and Graphite-AE minimize the negative cross entropy between the input and reconstructed adjacency matrices. For weighted graphs, we use the mean squared error. Additional hyperparameter details are described in Appendix B.

4.1. Reconstruction & density estimation

In the first set of tasks, we evaluate learning in Graphite based on held-out reconstruction losses and log-likelihoods estimated by the learned Graphite-AE and Graphite-VAE models respectively on a collection of graphs of varying sizes. In direct contrast to modalities such as images, graphs cannot be straightforwardly reduced to a fixed number of vertices for input to a graph convolutional network. One simplifying modification, taken by Bojchevski et al. (2018), is to consider only the largest connected component for evaluating and optimizing the objective, which we appeal to as well. Thus, by setting the dimensions of $Z^*$ to a maximum number of vertices, Graphite can be used for inference tasks over entire graphs of potentially smaller sizes by considering only the largest connected component.

We create datasets from six graph families with fixed, known generative processes: Erdos-Renyi, ego-nets, random regular graphs, random geometric graphs, random power-law trees, and Barabasi-Albert. For each family, 300 graph instances were sampled, with each instance having 10-20 nodes, evenly split into train/validation/test instances. As a benchmark comparison, we compare against the Graph Autoencoder/Variational Graph Autoencoder (GAE/VGAE) (Kipf & Welling, 2016). The GAE/VGAE models consist of an encoding procedure similar to Graphite; however, the decoder has no learnable parameters, and reconstruction is done solely through an inner product operation (such as the one in Eq. (8)).

The mean reconstruction errors and the negative log-likelihood results on a test set of instances are shown in Table 1. Both Graphite-AE and Graphite-VAE outperform GAE and VGAE significantly on these tasks, indicating the usefulness of learned decoders in Graphite.

Table 2. Citation network statistics.

          Nodes   Edges   Node Features   Labels
Cora      2708    5429    1433            7
Citeseer  3327    4732    3703            6
Pubmed    19717   44338   500             3

4.2. Link prediction

The task of link prediction is to predict whether an edge exists between a pair of nodes (Loehlin, 1998).
Even though Graphite learns a distribution over graphs, it can be used for predictive tasks within a single graph. To do so, we learn a model for a random, connected training subgraph of the true graph. For validation and testing, we add a balanced set of positive and negative (false) edges to the original graph and evaluate the model performance based on the reconstruction probabilities assigned to the validation and test edges (similar to denoising of the input graph). In our experiments, we held out 5% of the edges for validation and 10% of the edges for testing, and trained all models on the remaining subgraph. Additionally, the validation and testing sets each contain an equal number of non-edges.

Datasets. We compared across standard benchmark citation network datasets: Cora, Citeseer, and Pubmed, with papers as nodes and citations as edges (Sen et al., 2008). The node-level features correspond to the text attributes of the papers. The dataset statistics are summarized in Table 2.

Baselines and evaluation metrics. We evaluate performance based on the Area Under the ROC Curve (AUC) and Average Precision (AP) metrics. We evaluated Graphite-VAE and Graphite-AE against the following baselines: Spectral Clustering (SC) (Tang & Liu, 2011), DeepWalk (Perozzi et al., 2014), node2vec (Grover & Leskovec, 2016), and GAE/VGAE (Kipf & Welling, 2016). SC, DeepWalk, and node2vec do not provide the ability to incorporate node features while learning embeddings, and hence we evaluate them only on the featureless datasets.

Results. The AUC and AP results (along with standard errors) are shown in Tables 3 and 4 respectively, averaged over 50 random train/validation/test splits. On both metrics, Graphite-VAE gives the best overall performance. Graphite-AE also gives good results, generally outperforming its closest competitor, GAE.

Qualitative evaluation. We visualize the embeddings learned by Graphite, given by a 2D t-SNE projection (Maaten & Hinton, 2008) of the latent feature vectors (given as rows of $Z$ with $\lambda = 0.5$), on the Cora dataset in Figure 2.

[Figure 2: t-SNE embeddings of the latent feature vectors for the Cora dataset for (a) Graphite-AE and (b) Graphite-VAE. Colors denote labels.]

Even without any access to label information for the nodes during training, both models are able to cluster the nodes (papers) according to their labels (paper categories).

4.3. Semi-supervised node classification

Given labels for a subset of nodes in an underlying graph, the goal of this task is to predict the labels for the remaining nodes. We consider a transductive setting, where we have access to the test nodes (without their labels) during training. The closest approach to Graphite for this task is a supervised graph convolutional network (GCN) trained end-to-end. We consider an extension of this baseline, wherein we augment the GCN objective with the Graphite objective and a hyperparameter to control the relative importance of the two terms in the combined objective. The parameters $\phi$ for the encoder are shared across these two objectives, with an additional GCN layer for mapping the encoder output to softmax probabilities over the requisite number of classes. All parameters are learned jointly.
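A minimal sketch of the combined objective just described is given below; the name of the weighting hyperparameter and the exact form of the Graphite term (written here as a negative ELBO) are assumptions for illustration, not the paper's stated implementation.

```python
import torch.nn.functional as F

def semi_supervised_loss(logits, labels, labeled_mask, graphite_neg_elbo, lam=0.5):
    # Supervised cross-entropy on labeled nodes only, plus a lam-weighted
    # unsupervised Graphite term; encoder parameters are shared by both.
    ce = F.cross_entropy(logits[labeled_mask], labels[labeled_mask])
    return ce + lam * graphite_neg_elbo
```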
Table 3. Area Under the ROC Curve (AUC) for link prediction (* denotes dataset with features). Higher is better.

              Cora          Citeseer      Pubmed        Cora*         Citeseer*     Pubmed*
SC            89.9 ± 0.20   91.5 ± 0.17   94.9 ± 0.04   -             -             -
DeepWalk      85.0 ± 0.17   88.6 ± 0.15   91.5 ± 0.04   -             -             -
node2vec      85.6 ± 0.15   89.4 ± 0.14   91.9 ± 0.04   -             -             -
GAE           90.2 ± 0.16   92.0 ± 0.14   92.5 ± 0.06   93.9 ± 0.11   94.9 ± 0.13   96.8 ± 0.04
VGAE          90.1 ± 0.15   92.0 ± 0.17   92.3 ± 0.06   94.1 ± 0.11   96.7 ± 0.08   95.5 ± 0.13
Graphite-AE   91.0 ± 0.15   92.6 ± 0.16   94.5 ± 0.05   94.2 ± 0.13   96.2 ± 0.10   97.8 ± 0.03
Graphite-VAE  91.5 ± 0.15   93.5 ± 0.13   94.6 ± 0.04   94.7 ± 0.11   97.3 ± 0.06   97.4 ± 0.04

Table 4. Average Precision (AP) scores for link prediction (* denotes dataset with features). Higher is better.

              Cora          Citeseer      Pubmed        Cora*         Citeseer*     Pubmed*
SC            92.8 ± 0.12   94.4 ± 0.11   96.0 ± 0.03   -             -             -
DeepWalk      86.6 ± 0.17   90.3 ± 0.12   91.9 ± 0.05   -             -             -
node2vec      87.5 ± 0.14   91.3 ± 0.13   92.3 ± 0.05   -             -             -
GAE           92.4 ± 0.12   94.0 ± 0.12   94.3 ± 0.5    94.3 ± 0.12   94.8 ± 0.15   96.8 ± 0.04
VGAE          92.3 ± 0.12   94.2 ± 0.12   94.2 ± 0.04   94.6 ± 0.11   97.0 ± 0.08   95.5 ± 0.12
Graphite-AE   92.8 ± 0.13   94.1 ± 0.14   95.7 ± 0.06   94.5 ± 0.14   96.1 ± 0.12   97.7 ± 0.03
Graphite-VAE  93.2 ± 0.13   95.0 ± 0.10   96.0 ± 0.03   94.9 ± 0.13   97.4 ± 0.06   97.4 ± 0.04

Table 5. Classification accuracies (* denotes dataset with features). Baseline numbers from Kipf & Welling (2017).

           Cora*         Citeseer*     Pubmed*
SemiEmb    59.0          59.6          71.1
DeepWalk   67.2          43.2          65.3
ICA        75.1          69.1          73.9
Planetoid  75.7          64.7          77.2
GCN        81.5          70.3          79.0
Graphite   82.1 ± 0.06   71.0 ± 0.07   79.3 ± 0.03

Results. The classification accuracies of the semi-supervised models are given in Table 5. We find that the Graphite hybrid outperforms the competing models on all datasets, and in particular the GCN approach, which is the closest baseline. Recent work on Graph Attention Networks shows that extending GCNs by incorporating attention can boost performance on this task (Veličković et al., 2018). Using GATs in place of GCNs for parameterizing Graphite could yield a similar performance boost in future work.

5. Theoretical Analysis

In this section, we derive a theoretical connection between message passing in graph neural networks and approximate inference in related undirected graphical models.

5.1. Kernel embeddings

We first provide a brief background on kernel embeddings. A kernel defines a notion of similarity between pairs of objects (Schölkopf & Smola, 2002; Shawe-Taylor & Cristianini, 2004). Let $K : \mathcal{Z}\times\mathcal{Z} \to \mathbb{R}$ be the kernel function defined over a space of objects, say $\mathcal{Z}$. With every kernel function $K$, we have an associated feature map $\psi : \mathcal{Z} \to \mathcal{H}$, where $\mathcal{H}$ is a potentially infinite-dimensional feature space.

Kernel methods can be used to specify embeddings of distributions of arbitrary objects (Smola et al., 2007; Gretton et al., 2007). Formally, we denote these functional mappings as $T_\psi : \mathcal{P} \to \mathcal{H}$, where $\mathcal{P}$ is the space of all distributions on $\mathcal{Z}$. These mappings, referred to as kernel embeddings of distributions, are defined as:

$$T_\psi(p) := \mathbb{E}_{Z\sim p}[\psi(Z)] \quad (11)$$

for any $p \in \mathcal{P}$. We are particularly interested in kernels with feature maps $\psi$ that define injective embeddings, i.e., for any pair of distributions $p_1$ and $p_2$, we have $T_\psi(p_1) \neq T_\psi(p_2)$ if $p_1 \neq p_2$.
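As a rough illustration of Eq. (11), the sketch below estimates kernel mean embeddings with a finite-dimensional random Fourier feature map, which approximates the feature map of an RBF kernel (a characteristic kernel, hence yielding injective embeddings); the specific kernel, dimensionality, and sample sizes are illustrative choices only.

```python
import numpy as np

def rbf_features(z, omegas, phases):
    # Random Fourier features: a finite-dimensional approximation to psi.
    return np.sqrt(2.0 / len(phases)) * np.cos(z @ omegas + phases)

def mean_embedding(samples, omegas, phases):
    # T_psi(p) = E_{Z~p}[psi(Z)], estimated by a sample average.
    return rbf_features(samples, omegas, phases).mean(axis=0)

rng = np.random.default_rng(0)
k, D = 2, 256
omegas = rng.normal(size=(k, D))
phases = rng.uniform(0, 2 * np.pi, D)
p1 = rng.normal(0.0, 1.0, size=(5000, k))   # samples from two distinct densities
p2 = rng.normal(0.5, 1.0, size=(5000, k))
gap = np.linalg.norm(mean_embedding(p1, omegas, phases)
                     - mean_embedding(p2, omegas, phases))  # nonzero gap
```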
For injective embeddings, we can compute functionals of any distribution by directly applying a corresponding function on its embedding. Formally, for every function $O : \mathcal{P} \to \mathbb{R}^d$, $d \in \mathbb{N}$, and injective embedding $T_\psi$, there exists a function $\tilde{O}_\psi : \mathcal{H} \to \mathbb{R}^d$ such that:

$$O(p) = \tilde{O}_\psi(T_\psi(p)) \quad \forall p \in \mathcal{P}. \quad (12)$$

Informally, we can see that the operator $\tilde{O}_\psi$ can be defined as the composition of $O$ with the inverse of $T_\psi$.

[Figure 3: Interpreting message passing in graph neural networks via kernel embeddings and mean-field inference. (a) Input graph with edge set $E = \{(1, 2), (1, 3)\}$. (b) Latent variable model $\mathcal{G}$ satisfying Property 1 with $A_{12} \neq 0$, $A_{23} = 0$, $A_{13} \neq 0$.]

5.2. Connections with mean-field inference

Locality preference for representation learning is a key inductive bias for graphs. We formulate this using an (undirected) graphical model $\mathcal{G}$ over $X$, $A$, and $\{Z_1, \cdots, Z_n\}$. As in a GNN, we assume that $X$ and $A$ are observed and specify conditional independence structure in a conditional distribution over the latent variables, denoted as $r(Z_1, \cdots, Z_n|A, X)$. We are particularly interested in models that satisfy the following property.

Property 1. The edge set $E$ defined by the adjacency matrix $A$ is an undirected I-map for the distribution $r(Z_1, \cdots, Z_n|A, X)$.

In words, the above property implies that according to the conditional distribution over $Z$, any individual $Z_i$ is independent of all other $Z_j$ when conditioned on $A$, $X$, and the neighboring latent variables of node $i$ as determined by the edge set $E$. See Figure 3 for an illustration.

A mean-field (MF) approximation for $\mathcal{G}$ approximates the conditional distribution $r(Z_1, \cdots, Z_n|A, X)$ as:

$$r(Z_1, \cdots, Z_n|A, X) \approx \prod_{i=1}^{n} q_{\phi_i}(Z_i|A, X) \quad (13)$$

where $\phi_i$ denotes the set of parameters for the $i$-th variational marginal. These parameters are optimized by minimizing the KL-divergence between the variational and the true conditional distributions:

$$\min_{\phi_1,\cdots,\phi_n} \mathrm{KL}\left(\prod_{i=1}^{n} q_{\phi_i}(Z_i|A, X)\,\Big\|\, r(Z_1, \cdots, Z_n|A, X)\right) \quad (14)$$

Using standard variational arguments (Wainwright et al., 2008), we know that the optimal variational marginals assume the following functional form:

$$q_{\phi_i}(Z_i|A, X) = O^{\mathrm{MF}}_{\mathcal{G}}\left(Z_i, \{q_{\phi_j}\}_{j\in N(i)}\right) \quad (15)$$

where $N(i)$ denotes the neighbors of $Z_i$ in $\mathcal{G}$, and $O$ is a function determined by the fixed-point equations, which depends on the potentials associated with $\mathcal{G}$. Importantly, the above functional form suggests that the optimal marginals in mean-field inference are locally consistent: they are only a function of the neighboring marginals. An iterative algorithm for mean-field inference is to perform message passing over the underlying graph until convergence. With an appropriate initialization at $l = 0$, the updated marginals at iteration $l \in \mathbb{N}$ are given as:

$$q^{(l)}_{\phi_i}(Z_i|A, X) = O^{\mathrm{MF}}_{\mathcal{G}}\left(Z_i, \{q^{(l-1)}_{\phi_j}\}_{j\in N(i)}\right). \quad (16)$$

We will sidestep deriving $O$, and instead use the kernel embeddings of the variational marginals to directly reason in the embedding space.
That is, we assume we have an injective embedding for each marginal $q_{\phi_i}$, given by $\mu_i = \mathbb{E}_{Z_i\sim q_{\phi_i}}[\psi(Z_i)]$ for some feature map $\psi : \mathbb{R}^k \to \mathbb{R}^{k'}$, and directly use the equivalence established in Eq. (12) iteratively. For mean-field inference via message passing as in Eq. (16), this gives the following recursive expression for iteratively updating the embeddings at iteration $l \in \mathbb{N}$:

$$\mu^{(l)}_i = \tilde{O}^{\mathrm{MF}}_{\psi,\mathcal{G}}\left(\{\mu^{(l-1)}_j\}_{j\in N(i)}\right) \quad (17)$$

with an appropriate base case for $\mu^{(0)}_i$. We then have the following result:

Theorem 2. Let $\mathcal{G}$ be any undirected latent variable model such that the conditional distribution $r(Z_1, \cdots, Z_n|A, X)$ expressed by the model satisfies Property 1. Then there exists a choice of $\eta_l$, $\mathcal{F}_l$, $W_l$, and $B_l$ such that for all $\{\mu^{(l-1)}_i\}_{i=1}^{n}$, the GNN propagation rule in Eq. (2) is computationally equivalent to updating $\{\mu^{(l-1)}_i\}_{i=1}^{n}$ via a first-order approximation of Eq. (17).

Proof. See Appendix A.

While $\eta_l$ and $\mathcal{F}_l$ are typically fixed beforehand, the parameters $W_l$ and $B_l$ are directly learned from data in practice. Hence, we have shown that a GNN is a good model for computation with respect to latent variable models that attempt to capture inductive biases relevant to graphs, i.e., ones where the latent feature vector for every node is conditionally independent of everything else given the feature vectors of its neighbors (and $A$, $X$). Note that such a graphical model would satisfy Property 1 but is in general different from the posterior specified by the one in Figure 1. However, if the true (but unknown) posterior on the latent variables for the model proposed in Figure 1 could be expressed as an equivalent model satisfying the desired property, then Theorem 2 indeed suggests the use of GNNs for parameterizing variational posteriors, as we do in the case of Graphite.

6. Discussion & Related Work

Our framework effectively marries probabilistic modeling and representation learning on graphs. We review some of the dominant prior works in these fields below.

Probabilistic modeling of graphs. The earliest probabilistic models of graphs proposed to generate graphs by creating an edge between any pair of nodes with a constant probability (Erdös & Rényi, 1959). Several alternatives have been proposed since; e.g., the small-world model generates graphs that exhibit local clustering (Watts & Strogatz, 1998), the Barabasi-Albert model captures preferential attachment, wherein high-degree nodes are likely to form edges with newly added nodes (Barabasi & Albert, 1999), the stochastic block model is based on inter- and intra-community linkages (Holland et al., 1983), etc. We direct the interested reader to prominent surveys on this topic (Newman, 2003; Mitzenmacher, 2004; Chakrabarti & Faloutsos, 2006).

Representation learning on graphs. For representation learning on graphs, there are broadly three kinds of approaches: matrix factorization, random-walk based approaches, and graph neural networks. We include a brief discussion of the first two kinds in Appendix C and refer the reader to Hamilton et al. (2017b) for a recent survey. Graph neural networks, a collective term for networks that operate over graphs using message passing, have shown success on several downstream applications, e.g., (Duvenaud et al., 2015; Li et al., 2016; Kearnes et al., 2016; Kipf & Welling, 2017; Hamilton et al., 2017a) and the references therein. Gilmer et al. (2017) provide a comprehensive characterization of these networks in the message passing setup.
We used graph convolutional networks, partly to provide a direct comparison with GAE/VGAE, and leave the exploration of other GNN variants for future work.

Latent variable models for graphs. Hierarchical Bayesian models parameterized by deep neural networks have recently been proposed for graphs (Hu et al., 2017; Wang et al., 2017). Besides being restricted to single graphs, these models are limited since inference requires running expensive Markov chains (Hu et al., 2017) or they are task-specific (Wang et al., 2017). Johnson (2017) and Kipf et al. (2018) generate graphs as latent representations learned directly from data. In contrast, we are interested in modeling observed (and not latent) relational structure. Finally, there has been a fair share of recent work on the generation of special kinds of graphs, such as parse trees of source code (Maddison & Tarlow, 2014) and SMILES representations of molecules (Olivecrona et al., 2017).

Several deep generative models for graphs have recently been proposed. Amongst adversarial generation approaches, Wang et al. (2018) and Bojchevski et al. (2018) model local graph neighborhoods and random walks on graphs respectively. Li et al. (2018) and You et al. (2018) model graphs as sequences and generate graphs via autoregressive procedures. Adversarial and autoregressive approaches are successful at generating graphs but do not directly allow for inferring latent variables via encoders. Latent variable generative models have also been proposed for generating small molecular graphs (Jin et al., 2018; Samanta et al., 2018; Simonovsky & Komodakis, 2018). These methods involve an expensive decoding procedure that limits scaling to large graphs. Finally, closest to our framework is the GAE/VGAE approach (Kipf & Welling, 2016) discussed in Section 4. Pan et al. (2018) extend this approach with an adversarial regularization framework but retain the inner product decoder. Our work proposes a novel multi-step decoding mechanism based on graph refinement.

7." }, { "url": "http://arxiv.org/abs/1705.08868v2", "title": "Flow-GAN: Combining Maximum Likelihood and Adversarial Learning in Generative Models", "abstract": "Adversarial learning of probabilistic models has recently emerged as a\npromising alternative to maximum likelihood. Implicit models such as generative\nadversarial networks (GAN) often generate better samples compared to explicit\nmodels trained by maximum likelihood. Yet, GANs sidestep the characterization\nof an explicit density which makes quantitative evaluations challenging. To\nbridge this gap, we propose Flow-GANs, a generative adversarial network for\nwhich we can perform exact likelihood evaluation, thus supporting both\nadversarial and maximum likelihood training. When trained adversarially,\nFlow-GANs generate high-quality samples but attain extremely poor\nlog-likelihood scores, inferior even to a mixture model memorizing the training\ndata; the opposite is true when trained by maximum likelihood.
Results on MNIST\nand CIFAR-10 demonstrate that hybrid training can attain high held-out\nlikelihoods while retaining visual fidelity in the generated samples.", "authors": "Aditya Grover, Manik Dhar, Stefano Ermon", "published": "2017-05-24", "updated": "2018-01-03", "primary_cat": "cs.LG", "cats": [ "cs.LG", "cs.AI", "cs.NE", "stat.ML" ], "main_content": "1 Introduction

Highly expressive parametric models have enjoyed great success in supervised learning, where learning objectives and evaluation metrics are typically well-specified and easy to compute. On the other hand, the learning objective for unsupervised settings is less clear. At a fundamental level, the idea is to learn a generative model that minimizes some notion of divergence with respect to the data distribution. Minimizing the Kullback-Leibler divergence between the data distribution and the model, for instance, is equivalent to performing maximum likelihood estimation (MLE) on the observed data. Maximum likelihood estimators are asymptotically statistically efficient and serve as natural objectives for learning prescribed generative models (Mohamed and Lakshminarayanan 2016). In contrast, an alternate principle that has recently attracted much attention is based on adversarial learning, where the objective is to generate data indistinguishable from the training data. Adversarially learned models such as generative adversarial networks (GAN; (Goodfellow et al. 2014)) can sidestep specifying an explicit density for any data point and belong to the class of implicit generative models (Diggle and Gratton 1984).

The lack of characterization of an explicit density in GANs is, however, problematic for two reasons. First, several application areas of deep generative models rely on density estimates; for instance, count-based exploration strategies based on density estimation using generative models have recently achieved state-of-the-art performance on challenging reinforcement learning environments (Ostrovski et al. 2017). Second, it makes the quantitative evaluation of the generalization performance of such models challenging. The typical evaluation criteria based on ad-hoc sample quality metrics (Salimans et al. 2016; Che et al. 2017) do not address this issue, since it is possible to generate good samples by memorizing the training data, by missing important modes of the distribution, or both (Theis, Oord, and Bethge 2016). Alternatively, density estimates based on approximate inference techniques such as annealed importance sampling (AIS; (Neal 2001; Wu et al. 2017)) and non-parametric methods such as kernel density estimation (KDE; (Parzen 1962; Goodfellow et al. 2014)) are computationally slow and crucially rely on the assumption of a Gaussian observation model for the likelihood, which can lead to misleading estimates, as we demonstrate in this paper.

To sidestep the above issues, we propose Flow-GANs, a generative adversarial network with a normalizing flow generator. A Flow-GAN generator transforms a prior noise density into a model density through a sequence of invertible transformations.
By using an invertible generator, Flow-GANs allow us to tractably evaluate exact likelihoods using the change-of-variables formula and to perform exact posterior inference over the latent variables, while still permitting efficient ancestral sampling: desirable properties of any probabilistic model that a typical GAN would not provide.

Using a Flow-GAN, we perform a principled quantitative comparison of maximum likelihood and adversarial learning on benchmark datasets, viz. MNIST and CIFAR-10. While adversarial learning outperforms MLE on sample quality metrics, as expected based on strong evidence in prior work, the log-likelihood estimates of adversarial learning are orders of magnitude worse than those of MLE. The difference is so stark that a simple Gaussian mixture model baseline outperforms adversarially learned models on both sample quality and held-out likelihoods. Our quantitative analysis reveals that the poor likelihoods of adversarial learning can be explained by an ill-conditioned Jacobian matrix for the generator function, suggesting a mode collapse rather than overfitting to the training dataset.

To resolve the dichotomy of perceptually good-looking samples at the expense of held-out likelihoods in the case of adversarial learning (and vice versa in the case of MLE), we propose a hybrid objective that bridges implicit and prescribed learning by augmenting the adversarial training objective with an additional term corresponding to the log-likelihood of the observed data. While the hybrid objective achieves the intended effect of smoothly trading off the two goals in the case of CIFAR-10, it has a regularizing effect on MNIST, where it outperforms MLE and adversarial learning on both held-out likelihoods and sample quality metrics.

Overall, this paper makes the following contributions:
1. We propose Flow-GANs, a generative adversarial network with an invertible generator that can perform efficient ancestral sampling and exact likelihood evaluation.
2. We propose a hybrid learning objective for Flow-GANs that attains good log-likelihoods and generates high-quality samples on the MNIST and CIFAR-10 datasets.
3. We demonstrate the limitations of AIS and KDE for log-likelihood evaluation and ranking of implicit models.
4. We analyze the singular value distribution of the Jacobian of the generator function to explain the low log-likelihoods observed due to adversarial learning.

2 Preliminaries

We begin with a review of maximum likelihood estimation and adversarial learning in the context of generative models. For ease of presentation, all distributions are w.r.t. any arbitrary $x \in \mathbb{R}^d$, unless otherwise specified. We use upper case to denote probability distributions and assume they all admit absolutely continuous densities (denoted by the corresponding lower-case notation) on a reference measure $dx$.

Consider the following setting for learning generative models. Given data $X = \{x_i \in \mathbb{R}^d\}_{i=1}^{m}$ sampled i.i.d. from an unknown probability density $p_{\text{data}}$, we are interested in learning a probability density $p_\theta$, where $\theta$ denotes the parameters of a model. Given a parametric family of models $\mathcal{M}$, the typical approach to learn $\theta \in \mathcal{M}$ is to minimize a notion of divergence between $P_{\text{data}}$ and $P_\theta$. The choice of divergence and the optimization procedure dictate learning, leading to the following two objectives.
2.1 Maximum likelihood estimation

In maximum likelihood estimation (MLE), we minimize the Kullback-Leibler (KL) divergence between the data distribution and the model distribution. Formally, the learning objective can be expressed as:

$$\min_{\theta\in\mathcal{M}} \mathrm{KL}(P_{\text{data}}, P_\theta) = \mathbb{E}_{x\sim P_{\text{data}}}\left[\log \frac{p_{\text{data}}(x)}{p_\theta(x)}\right]$$

Since $p_{\text{data}}$ is independent of $\theta$, the above optimization problem can be equivalently expressed as:

$$\max_{\theta\in\mathcal{M}} \mathbb{E}_{x\sim P_{\text{data}}}[\log p_\theta(x)] \quad (1)$$

Hence, evaluating the learning objective for MLE in Eq. (1) requires the ability to evaluate the model density $p_\theta(x)$. Models that provide an explicit characterization of the likelihood function are referred to as prescribed generative models (Mohamed and Lakshminarayanan 2016).

2.2 Adversarial learning

A generative model can be learned to optimize divergence notions beyond the KL divergence. A large family of divergences can be conveniently expressed as:

$$\max_{\phi\in\mathcal{F}} \mathbb{E}_{x\sim P_\theta}[h_\phi(x)] - \mathbb{E}_{x\sim P_{\text{data}}}\left[h'_\phi(x)\right] \quad (2)$$

where $\mathcal{F}$ denotes a set of parameters, and $h_\phi$ and $h'_\phi$ are appropriate real-valued functions parameterized by $\phi$. Different choices of $\mathcal{F}$, $h_\phi$, and $h'_\phi$ lead to a variety of f-divergences, such as the Jensen-Shannon divergence, and integral probability metrics, such as the Wasserstein distance. For instance, the GAN objective proposed by Goodfellow et al. (2014) can also be cast in the form of Eq. (2):

$$\max_{\phi\in\mathcal{F}} \mathbb{E}_{x\sim P_\theta}[\log(1 - D_\phi(x))] + \mathbb{E}_{x\sim P_{\text{data}}}[D_\phi(x)] \quad (3)$$

where $\phi$ denotes the parameters of a neural network function $D_\phi$. We refer the reader to (Nowozin, Cseke, and Tomioka 2016; Mescheder, Nowozin, and Geiger 2017b) for further details on other possible choices of divergences. Importantly, a Monte Carlo estimate of the objective in Eq. (2) requires only samples from the model. Hence, any model that allows tractable sampling can be used to evaluate the following minimax objective:

$$\min_{\theta\in\mathcal{M}} \max_{\phi\in\mathcal{F}} \mathbb{E}_{x\sim P_\theta}[h_\phi(x)] - \mathbb{E}_{x\sim P_{\text{data}}}\left[h'_\phi(x)\right]. \quad (4)$$

As a result, even differentiable implicit models, which do not provide a characterization of the model likelihood¹ but allow tractable sampling, can be learned adversarially by optimizing minimax objectives of the form given in Eq. (4).

¹This could be either due to computational intractability in evaluating likelihoods or because the likelihood is ill-defined.
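For concreteness, a sketch of a discriminator/generator loss pair corresponding to the objective in Eq. (3) is given below; parameterizing $D_\phi$ via logits with a binary cross-entropy criterion and using the non-saturating generator loss are common practical choices assumed here, not details specified by the text.

```python
import torch
import torch.nn.functional as F

def gan_step_losses(D, G, x_real, z):
    # Discriminator: push D(x_real) toward 1 and D(G(z)) toward 0,
    # a sigmoid-parameterized version of the objective in Eq. (3).
    logits_real = D(x_real)
    x_fake = G(z)
    logits_fake = D(x_fake.detach())
    d_loss = (F.binary_cross_entropy_with_logits(logits_real, torch.ones_like(logits_real))
              + F.binary_cross_entropy_with_logits(logits_fake, torch.zeros_like(logits_fake)))
    # Generator: non-saturating variant, maximizing log D(G(z)).
    logits_gen = D(x_fake)
    g_loss = F.binary_cross_entropy_with_logits(logits_gen, torch.ones_like(logits_gen))
    return d_loss, g_loss
```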
2.3 Adversarial learning of latent variable models

From a statistical perspective, maximum likelihood estimators are asymptotically statistically efficient (under some conditions), and hence minimizing the KL divergence is a natural objective for many prescribed models (Huber 1967). However, not all models admit a well-defined, tractable, and easy-to-optimize likelihood. For example, exact likelihood evaluation and sampling are tractable in directed, fully observed models such as Bayesian networks and autoregressive models (Larochelle and Murray 2011; Oord, Kalchbrenner, and Kavukcuoglu 2016); hence, they are usually trained by maximum likelihood. Undirected models, on the other hand, provide only unnormalized likelihoods and are sampled from using expensive Markov chains; hence, they are usually learned by approximating the likelihood using methods such as contrastive divergence (Carreira-Perpinan and Hinton 2005) and pseudolikelihood (Besag 1977). The likelihood is generally intractable to compute in latent variable models (even directed ones), as it requires marginalization. These models are typically learned by optimizing a stochastic lower bound to the log-likelihood using variational Bayes approaches (Kingma and Welling 2014).

Directed latent variable models allow for efficient ancestral sampling, and hence these models can also be trained using other divergences, e.g., adversarially (Mescheder, Nowozin, and Geiger 2017a; Mao et al. 2017; Song, Zhao, and Ermon 2017). A popular class of latent variable models learned adversarially consists of generative adversarial networks (GAN; (Goodfellow et al. 2014)). GANs comprise a pair of generator and discriminator networks. The generator $G_\theta : \mathbb{R}^k \to \mathbb{R}^d$ is a deterministic function differentiable with respect to the parameters $\theta$. The function takes as input a source of randomness $z \in \mathbb{R}^k$ sampled from a tractable prior density $p(z)$ and transforms it to a sample $G_\theta(z)$ through a forward pass. Evaluating likelihoods assigned by a GAN is challenging, because the model density $p_\theta$ is specified only implicitly, using the prior density $p(z)$ and the generator function $G_\theta$. In fact, the likelihood for any data point is ill-defined (with respect to the Lebesgue measure over $\mathbb{R}^n$) if the prior distribution over $z$ is defined over a support smaller than the support of the data distribution.

GANs are typically learned adversarially with the help of a discriminator network. The discriminator $D_\phi : \mathbb{R}^d \to \mathbb{R}$ is another real-valued function that is differentiable with respect to a set of parameters $\phi$. Given the discriminator function, we can express the functions $h$ and $h'$ in Eq. (4) as compositions of $D_\phi$ with divergence-specific functions. For instance, the Wasserstein GAN (WGAN; (Arjovsky, Chintala, and Bottou 2017)) optimizes the following objective:

$$\min_\theta \max_{\phi\in\mathcal{F}} \mathbb{E}_{x\sim P_{\text{data}}}[D_\phi(x)] - \mathbb{E}_{z\sim P_z}[D_\phi(G_\theta(z))] \quad (5)$$

where $\mathcal{F}$ is defined such that $D_\phi$ is 1-Lipschitz. Empirically, GANs generate excellent samples of natural images (Radford, Metz, and Chintala 2015), audio signals (Pascual, Bonafonte, and Serrà 2017), and behaviors in imitation learning (Ho and Ermon 2016; Li, Song, and Ermon 2017).
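A minimal sketch of the WGAN losses in Eq. (5) follows; enforcing the 1-Lipschitz constraint on the critic (e.g., via weight clipping or a gradient penalty, as used in the experiments later) is left out and must be handled separately.

```python
import torch

def wgan_losses(D, G, x_real, z):
    # Critic maximizes E[D(x_real)] - E[D(G(z))]; we return the negation
    # so both losses can be minimized with a standard optimizer.
    d_loss = D(G(z).detach()).mean() - D(x_real).mean()
    # Generator maximizes the critic score on its samples.
    g_loss = -D(G(z)).mean()
    return d_loss, g_loss
```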
3 Flow Generative Adversarial Networks

As discussed above, generative adversarial networks can tractably generate high-quality samples but have intractable or ill-defined likelihoods. Monte Carlo techniques such as AIS and non-parametric density estimation methods such as KDE get around this by assuming a Gaussian observation model $p_\theta(x|z)$ for the generator.² This assumption alone is not sufficient for quantitative evaluation, since the marginal likelihood of the observed data, $p_\theta(x) = \int p_\theta(x, z)\,dz$, would in this case be intractable, as it requires integrating over all the latent factors of variation. This would in turn require approximate inference (e.g., Monte Carlo or variational methods), which is itself a computational challenge for high-dimensional distributions. To circumvent these issues, we propose flow generative adversarial networks (Flow-GANs).

²The true observation model for a GAN is a Dirac delta distribution, i.e., $p_\theta(x|z)$ is infinite when $x = G_\theta(z)$ and zero otherwise.

A Flow-GAN consists of a pair of generator-discriminator networks with the generator specified as a normalizing flow model (Dinh, Krueger, and Bengio 2014). A normalizing flow model specifies a parametric transformation from a prior density $p(z) : \mathbb{R}^d \to \mathbb{R}^+_0$ to another density over the same space, $p_\theta(x) : \mathbb{R}^d \to \mathbb{R}^+_0$, where $\mathbb{R}^+_0$ is the set of non-negative reals. The generator transformation $G_\theta : \mathbb{R}^d \to \mathbb{R}^d$ is invertible, such that there exists an inverse function $f_\theta = G_\theta^{-1}$. Using the change-of-variables formula and letting $z = f_\theta(x)$, we have:

$$p_\theta(x) = p(z)\left|\det \frac{\partial f_\theta(x)}{\partial x}\right| \quad (6)$$

where $\frac{\partial f_\theta(x)}{\partial x}$ denotes the Jacobian of $f_\theta$ at $x$. The above formula can be applied recursively over compositions of many invertible transformations to produce a complex final density. Hence, we can evaluate and optimize the log-likelihood assigned by the model to a data point as long as the prior density is tractable and the determinant of the Jacobian of $f_\theta$ evaluated at $x$ can be computed efficiently.

Evaluating the likelihood assigned by a Flow-GAN model via Eq. (6) requires overcoming two major challenges. First, requiring the generator function $G_\theta$ to be invertible constrains the dimensionality of the latent variable $z$ to match that of the data $x$; thereafter, we require the transformations between the various layers of the generator to be invertible, such that their overall composition results in an invertible $G_\theta$. Second, the Jacobian of a high-dimensional transformation can be computationally expensive to evaluate. If the transformations are designed such that the Jacobian is an upper or lower triangular matrix, then the determinant can be easily evaluated as the product of its diagonal entries. We consider two such families of transformations.

1. Volume preserving transformations. Here, the Jacobians of the transformations have a unit determinant. For example, the NICE model consists of several layers performing a location transformation (Dinh, Krueger, and Bengio 2014); the top layer is a diagonal scaling matrix with a nonzero log-determinant.

2. Non-volume preserving transformations. The determinant of the Jacobian of the transformations is not necessarily unity. For example, in Real-NVP, layers perform both location and scale transformations (Dinh, Sohl-Dickstein, and Bengio 2017).

For brevity, we direct the reader to Dinh, Krueger, and Bengio (2014) and Dinh, Sohl-Dickstein, and Bengio (2017) for the specifications of NICE and Real-NVP respectively. Crucially, both volume preserving and non-volume preserving transformations are invertible, such that the determinant of the Jacobian can be computed tractably.

3.1 Learning objectives

In a Flow-GAN, the likelihood is well-defined and computationally tractable for exact evaluation, even for expressive volume preserving and non-volume preserving transformations. Hence, a Flow-GAN can be trained via maximum likelihood estimation using Eq. (1), in which case the discriminator is redundant.
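A minimal sketch of a NICE-style volume-preserving (additive coupling) layer from the first family above follows, illustrating why the log-determinant term in Eq. (6) is trivial for this family; the hidden width and the split of the dimensions are arbitrary illustrative choices.

```python
import torch
import torch.nn as nn

class AdditiveCoupling(nn.Module):
    """y1 = x1, y2 = x2 + m(x1): the Jacobian is triangular with a unit
    diagonal, so log|det J| = 0 (volume preserving)."""

    def __init__(self, dim, hidden=64):
        super().__init__()
        self.h = dim // 2
        self.m = nn.Sequential(nn.Linear(self.h, hidden), nn.ReLU(),
                               nn.Linear(hidden, dim - self.h))

    def forward(self, x):   # x -> z direction (f_theta)
        x1, x2 = x[:, :self.h], x[:, self.h:]
        return torch.cat([x1, x2 + self.m(x1)], dim=1), x.new_zeros(x.shape[0])

    def inverse(self, y):   # z -> x direction (G_theta), used for sampling
        y1, y2 = y[:, :self.h], y[:, self.h:]
        return torch.cat([y1, y2 - self.m(y1)], dim=1)
```

Stacking such layers and accumulating the per-layer log-determinants (zero here, nonzero for Real-NVP-style scale transformations) gives exact log-likelihoods via Eq. (6): $\log p_\theta(x) = \log p(f_\theta(x)) + \sum_l \log|\det J_l|$.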
The ability to sample ancestrally also makes it possible to learn a Flow-GAN using an adversarial learning objective (for example, the WGAN objective in Eq. (5)). A natural question to ask is why one should use adversarial learning, given that MLE is asymptotically statistically efficient (under some conditions). Besides difficulties that could arise due to optimization (in both MLE and adversarial learning), the optimality of MLE holds only when there is no model misspecification in the generator, i.e., when the true data distribution $P_{\text{data}}$ is a member of the parametric family of distributions under consideration (White 1982). This is generally not the case for high-dimensional distributions, and hence the choice of learning objective becomes largely an empirical question. Unlike other models, a Flow-GAN permits both maximum likelihood and adversarial learning, so we can investigate this question experimentally.

3.2 Evaluation metrics and experimental setup

Our criteria for evaluation are held-out log-likelihoods and sample quality metrics. We focus on natural images, since they allow visual inspection as well as quantification using recently proposed metrics. A "good" generative model should generalize to images outside the training data and assign high log-likelihoods to held-out data.

The Inception and MODE scores are standard quantitative measures of the quality of generated samples of natural images for labelled datasets (Salimans et al. 2016; Che et al. 2017). The Inception score is computed as:

$$\exp\left( \mathbb{E}_{x \sim p_\theta}\left[ \mathrm{KL}\left( p(y \mid x) \,\|\, p(y) \right) \right] \right)$$

where $x$ is a sample generated by the model, $p(y \mid x)$ is the softmax probability over the labels $y$ assigned by a pretrained classifier for $x$, and $p(y)$ is the overall distribution of labels in the generated samples (as predicted by the pretrained classifier). The intuition is that the conditional distribution $p(y \mid x)$ should have low entropy for good-looking images, while the marginal distribution $p(y)$ should have high entropy to ensure sample diversity. Hence, a generative model performs well on this metric if the KL divergence between the two distributions (and consequently the Inception score of the generated samples) is large. The MODE score, given below, modifies the Inception score to take into account the distribution of labels in the training data, $p^*(y)$:

$$\exp\left( \mathbb{E}_{x \sim p_\theta}\left[ \mathrm{KL}\left( p(y \mid x) \,\|\, p^*(y) \right) \right] - \mathrm{KL}\left( p^*(y) \,\|\, p(y) \right) \right).$$

We compare learning of Flow-GANs using MLE and adversarial learning (ADV) on the MNIST dataset of handwritten digits (LeCun, Cortes, and Burges 2010) and the CIFAR-10 dataset of natural images (Krizhevsky and Hinton 2009). The normalizing flow generator architectures are chosen to be NICE (Dinh, Krueger, and Bengio 2014) for MNIST and Real-NVP (Dinh, Sohl-Dickstein, and Bengio 2017) for CIFAR-10. We fix the Wasserstein distance as the divergence optimized by ADV (see Eq. (5)), with the Lipschitz constraint on the critic imposed by penalizing the norm of the gradient with respect to the input (Arjovsky, Chintala, and Bottou 2017; Gulrajani et al. 2017). The discriminator is based on the DCGAN architecture (Radford, Metz, and Chintala 2015). The above choices are among the current state of the art in maximum likelihood estimation and adversarial learning and greatly stabilize GAN training. Further details of the experimental setup are provided in Appendix A. The code for reproducing the results is available at https://github.com/ermongroup/flow-gan.
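Both scores reduce to simple operations on a classifier's softmax outputs. The sketch below, assuming a `(n_samples, n_classes)` array of $p(y \mid x)$ values produced by some pretrained classifier (an assumed input, not a specific library call), computes the two scores exactly as defined above.

```python
# Hedged NumPy sketch of the Inception and MODE scores defined above.
import numpy as np

def inception_score(classifier_probs):
    """classifier_probs: (n_samples, n_classes) array of p(y|x)."""
    p_y = classifier_probs.mean(axis=0)  # marginal label distribution p(y)
    kl = (classifier_probs *
          (np.log(classifier_probs + 1e-12) - np.log(p_y + 1e-12))).sum(axis=1)
    return float(np.exp(kl.mean()))

def mode_score(classifier_probs, train_label_dist):
    """Adjusts the Inception score by the training label distribution p*(y)."""
    p_y = classifier_probs.mean(axis=0)
    p_star = train_label_dist
    kl_cond = (classifier_probs *
               (np.log(classifier_probs + 1e-12) -
                np.log(p_star + 1e-12))).sum(axis=1)
    kl_marg = (p_star * (np.log(p_star + 1e-12) - np.log(p_y + 1e-12))).sum()
    return float(np.exp(kl_cond.mean() - kl_marg))
```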
Figure 2: Learning curves for negative log-likelihood (NLL) evaluation on MNIST (top, in nats) and CIFAR-10 (bottom, in bits/dim) under (a) MLE and (b) ADV; the curves track train NLL, validation NLL, and, for ADV, the WGAN loss against generator iterations. Lower NLLs are better.

3.3 Evaluation results

Log-likelihood. The log-likelihood learning curves for Flow-GAN models learned using MLE and ADV are shown in Figure 2a and Figure 2b respectively. Following convention, we report negative log-likelihoods (NLL) in nats for MNIST and in bits/dimension for CIFAR-10.

MLE. In Figure 2a, we see that normalizing flow models attain low validation NLLs (blue curves) after a few gradient updates, as expected, since MLE explicitly optimizes for the objective in Eq. (1). Continued training, however, could lead to overfitting, as the train NLLs (red curves) begin to diverge from the validation NLLs.

ADV. Surprisingly, ADV models show a consistent increase in validation NLLs as training progresses, as shown in Figure 2b (note that for CIFAR-10 the estimates are reported on a log scale). Based on the learning curves, we can rule out overfitting as an explanation, since the increase in NLLs is observed even on the training data. The training and validation NLLs closely track each other, suggesting that ADV models are not simply memorizing the training data. Comparing the left and right panels of Figure 2, we see that the log-likelihoods attained by ADV are orders of magnitude worse than those attained by MLE after sufficient training. Finally, we note that the WGAN loss (green curves) does not correlate well with the NLL estimates: while the WGAN loss stabilizes after a few iterations of training, the NLLs continue to increase. This observation is in contrast to prior work showing the loss to be strongly correlated with sample quality metrics (Arjovsky, Chintala, and Bottou 2017).

Sample quality. Samples generated from the MLE- and ADV-based models with the best MODE/Inception scores are shown in Figure 1a and Figure 1b respectively. ADV models significantly outperform MLE with respect to the final MODE/Inception scores achieved. Visual inspection of the samples confirms the observations made on the basis of the sample quality metrics. Curves monitoring the sample quality metrics at every training iteration are given in Appendix B.

3.4 Gaussian mixture models

The above experiments suggest that ADV can produce excellent samples but assigns low likelihoods to the observed data. However, a direct comparison of ADV against the log-likelihoods of MLE is unfair, since the latter explicitly optimizes for the desired objective. To highlight that generating good samples at the expense of low likelihoods is not a challenging goal, we propose a simple baseline, sketched below and specified precisely in the next paragraph.
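As a preview of the baseline, here is a hedged NumPy sketch, assuming flattened image arrays, of a mixture of m isotropic Gaussians centered at the m training points, together with the bandwidth line search over (0, 1]. The function names and search grid are illustrative assumptions.

```python
# Minimal sketch of the GMM baseline: equal-weight isotropic Gaussians
# centered at the training points, with a shared bandwidth sigma.
import numpy as np
from scipy.special import logsumexp

def gmm_log_likelihood(centers, x, sigma):
    """Mean log-likelihood of x under equal-weight isotropic Gaussians."""
    m, d = centers.shape
    # Pairwise squared distances via the expansion trick: an (n, m) matrix.
    sq = ((x ** 2).sum(1)[:, None] + (centers ** 2).sum(1)[None, :]
          - 2.0 * x @ centers.T)
    log_comp = (-0.5 * sq / sigma ** 2
                - 0.5 * d * np.log(2 * np.pi * sigma ** 2))
    return float(np.mean(logsumexp(log_comp, axis=1) - np.log(m)))

def best_bandwidth(centers, x_val, grid=np.linspace(0.05, 1.0, 20)):
    """Line search over (0, 1] for the sigma with lowest validation NLL."""
    return max(grid, key=lambda s: gmm_log_likelihood(centers, x_val, s))
```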
We compare the adversarially learned Flow-GAN model that achieves the highest MODE/Inception score against a baseline Gaussian mixture model (GMM) consisting of $m$ isotropic Gaussians with equal weights, centered at each of the $m$ training points. The bandwidth hyperparameter $\sigma$ is shared across the mixture components and optimized for the lowest validation NLL by a line search over $(0, 1]$. We show results for CIFAR-10 in Figure 3. Our observations below hold for MNIST as well; those results are deferred to Appendix C.

We overload the y-axis in Figure 3 to report both NLLs and sample quality metrics. The horizontal maroon and cyan dashed lines denote the best attainable MODE/Inception score and the corresponding validation NLL, respectively, attained by the adversarially learned Flow-GAN model. The GMM can clearly attain better sample quality metrics, since it explicitly overfits to the training data for low values of the bandwidth parameter (any $\sigma$ for which the red curve is above the maroon dashed line). Surprisingly, the simple GMM also outperforms the adversarially learned model with respect to the NLLs attained for several values of the bandwidth parameter (any $\sigma$ for which the blue curve is below the cyan dashed line). Bandwidth parameters for which the GMM outperforms the adversarially learned model on both log-likelihoods and sample quality metrics are highlighted by the green shaded area. We show samples from the GMM in the appendix. Hence, a trivial baseline that memorizes the training data can generate high-quality samples and attain better held-out log-likelihoods, suggesting that the log-likelihoods attained by adversarial training are very poor.

Figure 3: Gaussian mixture models outperform adversarially learned models on both held-out log-likelihoods and sampling metrics on CIFAR-10 (green shaded region). (Axes: bandwidth vs. validation NLL in bits/dim and Inception score; dashed lines mark the ADV model's best Inception score and the NLL of that model.)

4 Hybrid learning of Flow-GANs

In the previous section, we observed that adversarially learned Flow-GAN models attain poor held-out log-likelihoods. This makes it challenging to use such models for applications requiring density estimation. On the other hand, Flow-GANs learned using MLE are "mode covering" but do not generate high-quality samples. With a Flow-GAN, it is possible to trade off the two goals by combining the learning objectives corresponding to both of these inductive principles. Without loss of generality, let $V(G_\theta, D_\phi)$ denote the minimax objective of any GAN model (such as WGAN). The hybrid objective of a Flow-GAN can be expressed as:

$$\min_\theta \max_\phi \; V(G_\theta, D_\phi) - \lambda \, \mathbb{E}_{x \sim P_{\text{data}}}[\log p_\theta(x)] \qquad (7)$$

where $\lambda \geq 0$ is a hyperparameter for the algorithm. By varying $\lambda$, we can interpolate between plain adversarial training ($\lambda = 0$) and MLE (very high $\lambda$). We summarize the results from MLE, ADV, and Hybrid for log-likelihood and sample quality evaluation in Table 1 and Table 2 for MNIST and CIFAR-10 respectively. The tables report the test log-likelihoods corresponding to the best validated MLE and ADV models, along with the highest MODE/Inception scores observed during training. The samples generated by the models with the best MODE/Inception scores for each objective are shown in Figure 1c.
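A minimal sketch of the generator side of Eq. (7) follows, assuming the generator G and a `flow_log_likelihood` routine (as in the coupling-layer sketch above) share the same flow parameters $\theta$; the optimizer setup and default $\lambda$ are illustrative. The critic update is unchanged from the WGAN sketch earlier, and setting `lam = 0` recovers plain adversarial training.

```python
# Hypothetical single generator update for the hybrid objective in Eq. (7):
# the adversarial term minus lambda times the exact Flow-GAN log-likelihood.
import torch

def hybrid_generator_step(G, D, flow_log_likelihood, x_real, z,
                          opt_G, lam=0.1):
    opt_G.zero_grad()
    adv_term = -D(G(z)).mean()                       # generator's WGAN term
    nll_term = -flow_log_likelihood(x_real).mean()   # -E_{x~Pdata}[log p_theta(x)]
    loss = adv_term + lam * nll_term                 # lam = 0 -> plain ADV
    loss.backward()
    opt_G.step()
    return float(loss)
```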
While the results on CIFAR-10 are along expected lines, the hybrid objective interestingly outperforms MLE and ADV on both test log-likelihoods and sample quality metrics in the case of MNIST. One potential explanation is that the ADV objective can regularize MLE to generalize to the test set and, in turn, the MLE objective can stabilize the optimization of the ADV objective. Hence, the hybrid objective in Eq. (7) can smoothly balance the two objectives using the tunable hyperparameter $\lambda$, and in some cases, such as MNIST, the performance on both tasks can improve as a result.

Table 1: Best MODE scores and test negative log-likelihood estimates for Flow-GAN models on MNIST.

Objective        | MODE Score | Test NLL (in nats)
-----------------|------------|-------------------
MLE              | 7.42       | -3334.56
ADV              | 9.24       | -1604.09
Hybrid (λ = 0.1) | 9.37       | -3342.95

Table 2: Best Inception scores and test negative log-likelihood estimates for Flow-GAN models on CIFAR-10.

Objective      | Inception Score | Test NLL (in bits/dim)
---------------|-----------------|-----------------------
MLE            | 2.92            | 3.54
ADV            | 5.76            | 8.53
Hybrid (λ = 1) | 3.90            | 4.21

5 Interpreting the results

Our findings are in contrast with prior work, which reports much better log-likelihoods for adversarially learned models with a standard generator architecture, based on annealed importance sampling (AIS; Wu et al. 2017) and kernel density estimation (KDE; Goodfellow et al. 2014). These methods rely on approximate inference techniques for log-likelihood evaluation and make assumptions about a Gaussian observation model, which does not hold for GANs. Since Flow-GANs allow us to compute exact log-likelihoods, we can evaluate the quality of the approximations made by AIS and KDE for density estimation with invertible generators. For a detailed description of these methods, we refer the reader to prior work (Neal 2001; Parzen 1962).

We consider the MNIST dataset, to which these methods have previously been applied by Wu et al. (2017) and Goodfellow et al. (2014) respectively. Since both AIS and KDE inherently rely on the samples generated, we evaluate these methods on the MLE, ADV, and Hybrid Flow-GAN model checkpoints corresponding to the best MODE scores observed during training. In Table 3, we observe that both AIS and KDE produce estimates of the log-likelihood that are far from the ground truth, which is accessible through the exact Flow-GAN log-likelihoods. Even worse, the rankings of the log-likelihood estimates for AIS (ADV > Hybrid > MLE) and KDE (Hybrid > MLE > ADV) do not obey the relative ranking of the Flow-GAN estimates (MLE > Hybrid > ADV).

Table 3: Comparison of inference techniques for negative log-likelihood estimation of Flow-GAN models on MNIST.

Objective | Flow-GAN NLL | AIS      | KDE
----------|--------------|----------|--------
MLE       | -3287.69     | -2584.40 | -167.10
ADV       | 26350.30     | -2916.10 | -3.03
Hybrid    | -3121.53     | -2703.03 | -205.69
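The KDE column in Table 3 follows the Parzen-window protocol: fit a kernel density to model samples and score held-out data. Structurally this is the GMM baseline of Section 3.4 with mixture centers at generated samples rather than training points, so the earlier sketch can be reused. Everything below is a placeholder standing in for a trained Flow-GAN sampler and real data, not the experimental setup.

```python
# Hypothetical Parzen-window (KDE) evaluation reusing gmm_log_likelihood
# and best_bandwidth from the Section 3.4 sketch; all inputs are stand-ins.
import numpy as np

latent_dim, data_dim = 100, 784
rng = np.random.default_rng(0)
W = rng.normal(size=(latent_dim, data_dim))
generator = lambda z: np.tanh(z @ W)                 # stand-in sampler

model_samples = generator(rng.normal(size=(2_000, latent_dim)))
x_val = rng.random((500, data_dim))                  # stand-in held-out data
x_test = rng.random((500, data_dim))
sigma = best_bandwidth(model_samples, x_val)         # validation line search
kde_nll = -gmm_log_likelihood(model_samples, x_test, sigma)
print(f"Parzen-window NLL estimate: {kde_nll:.2f} nats")
```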
5.1 Explaining log-likelihood trends

In order to explain the variation in log-likelihoods attained by the various Flow-GAN learning objectives, we investigate the distribution of the magnitudes of the singular values of the Jacobian matrices of several generator functions $G_\theta$ for MNIST in Figure 4, evaluated at 64 noise vectors $z$ randomly sampled from the prior density $p(z)$. The x-axis of the figure shows the singular value magnitudes on a log scale, and for each singular value $s$, the y-axis shows the corresponding cumulative distribution function value, which signifies the fraction of singular values less than $s$. The results on CIFAR-10 in Appendix D show a similar trend.

The Jacobian is a good first-order approximation of the generator function locally. In Figure 4, we observe that the singular value distribution for the Jacobian of an invertible generator learned using MLE (orange curves) is concentrated in a narrow range, and hence the Jacobian matrix is well-conditioned and easy to invert. In the case of invertible generators learned using ADV with the Wasserstein distance (green curves), however, the spread of singular values is very wide, and hence the Jacobian matrix is ill-conditioned. The average log-determinants of the Jacobian matrices for the MLE, ADV, and Hybrid models are -4170.34, -15588.34, and -5184.40 respectively, which translates to the trend ADV