| title | url | detail_url | authors | tags | abstract | pdf |
|---|---|---|---|---|---|---|
TDR-CL: Targeted Doubly Robust Collaborative Learning for Debiased Recommendations
|
https://openreview.net/forum?id=EIgLnNx_lC
|
https://openreview.net/forum?id=EIgLnNx_lC
|
Haoxuan Li,Yan Lyu,Chunyuan Zheng,Peng Wu
|
ICLR 2023,Poster
|
Bias is a common problem inherent in recommender systems, which is entangled with users' preferences and poses a great challenge to unbiased learning. For debiasing tasks, the doubly robust (DR) method and its variants show superior performance due to the double robustness property, that is, DR is unbiased when either imputed errors or learned propensities are accurate.
However, our theoretical analysis reveals that DR usually has a large variance. Meanwhile, DR would suffer unexpectedly large bias and poor generalization caused by inaccurate imputed errors and learned propensities, which usually occur in practice. In this paper, we propose a principled approach that can effectively reduce the bias and variance simultaneously for existing DR approaches when the error imputation model is misspecified. In addition, we further propose a novel semi-parametric collaborative learning approach that decomposes imputed errors into parametric and nonparametric parts and updates them collaboratively, resulting in more accurate predictions. Both theoretical analysis and experiments demonstrate the superiority of the proposed methods compared with existing debiasing methods.
|
https://openreview.net/pdf/6a5e83f301fab75655796fae74cf9cf0eae5e4bf.pdf
|
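The doubly robust estimator that this line of work builds on can be written down compactly. Below is a minimal numpy sketch on toy data, assuming hypothetical arrays `e` (true prediction errors), `e_hat` (imputed errors), `p_hat` (learned propensities) and `o` (observation indicators); the TDR-CL bias/variance corrections and the semi-parametric collaborative learning are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items = 50, 40

# Toy quantities (all hypothetical): true prediction errors, imputed errors,
# learned propensities, and a biased observation mask drawn from the propensities.
e = rng.uniform(0.0, 1.0, (n_users, n_items))                 # true errors e_{u,i}
e_hat = e + rng.normal(0.0, 0.2, e.shape)                      # imputed errors (misspecified)
p_hat = np.clip(rng.uniform(0.05, 0.6, e.shape), 0.05, 1.0)    # learned propensities
o = (rng.uniform(size=e.shape) < p_hat).astype(float)          # observation indicators

# Naive estimator: average error over observed entries only (biased under MNAR).
naive = (o * e).sum() / o.sum()

# Error-imputation-based (EIB) estimator: trust the imputation model everywhere.
eib = e_hat.mean()

# Doubly robust estimator: unbiased if either e_hat or p_hat is accurate.
dr = (e_hat + o * (e - e_hat) / p_hat).mean()

print(f"true mean error {e.mean():.4f} | naive {naive:.4f} | EIB {eib:.4f} | DR {dr:.4f}")
```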
Domain Generalisation via Domain Adaptation: An Adversarial Fourier Amplitude Approach
|
https://openreview.net/forum?id=7IG0wsTND7w
|
https://openreview.net/forum?id=7IG0wsTND7w
|
Minyoung Kim,Da Li,Timothy Hospedales
|
ICLR 2023,Poster
|
We tackle the domain generalisation (DG) problem by posing it as a domain adaptation (DA) task where we adversarially synthesise the worst-case `target' domain and adapt a model to that worst-case domain, thereby improving the model’s robustness. To synthesise data that is challenging yet semantics-preserving, we generate Fourier amplitude images and combine them with source domain phase images, exploiting the widely believed conjecture from signal processing that amplitude spectra mainly determine image style, while phase data mainly captures image semantics. To synthesise a worst-case domain for adaptation, we train the classifier and the amplitude generator adversarially. Specifically, we exploit the maximum classifier discrepancy (MCD) principle from DA that relates the target domain performance to the discrepancy of classifiers in the model hypothesis space. By Bayesian hypothesis modeling, we express the model hypothesis space effectively as a posterior distribution over classifiers given the source domains, making adversarial MCD minimisation feasible. On the DomainBed benchmark including the large-scale DomainNet dataset, the proposed approach yields significantly improved domain generalisation performance over the state-of-the-art.
|
https://openreview.net/pdf/606ab5f6ce734035bd7fb28ab57e8aca0216384d.pdf
|
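The amplitude/phase recombination the abstract relies on is easy to illustrate. A minimal numpy sketch, assuming two hypothetical toy images; in the paper the amplitude image comes from an adversarially trained generator, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two toy grayscale "images": a source-domain image and a stand-in amplitude image
# (in the paper the latter would be produced by the amplitude generator).
src = rng.uniform(size=(64, 64))
amp_img = rng.uniform(size=(64, 64))

src_fft = np.fft.fft2(src)
amp_fft = np.fft.fft2(amp_img)

# Keep the source phase (semantics) and swap in the other image's amplitude (style).
amplitude = np.abs(amp_fft)
phase = np.angle(src_fft)
mixed = np.real(np.fft.ifft2(amplitude * np.exp(1j * phase)))

print(mixed.shape, mixed.dtype)
```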
Consolidator: Mergable Adapter with Group Connections for Visual Adaptation
|
https://openreview.net/forum?id=J_Cja7cpgW
|
https://openreview.net/forum?id=J_Cja7cpgW
|
Tianxiang Hao,Hui Chen,Yuchen Guo,Guiguang Ding
|
ICLR 2023,Poster
|
Recently, transformers have shown strong ability as visual feature extractors, surpassing traditional convolution-based models in various scenarios. However, the success of vision transformers largely owes to their capacity to accommodate numerous parameters, which raises new challenges when adapting a well-trained transformer to downstream tasks. On the one hand, classic fine-tuning tunes all parameters of a huge model for every downstream task and thus easily overfits, leading to inferior performance. On the other hand, on resource-limited devices, fine-tuning stores a full copy of all parameters and is therefore often impracticable due to limited storage. Yet few works have focused on how to transfer knowledge in a vision transformer efficiently and effectively. Existing methods do not exploit the properties of visual features, leading to inferior performance, and some of them incur heavy inference cost despite saving storage. To tackle these problems, we propose the consolidator to achieve efficient transfer learning for large vision models. Our consolidator modifies the pre-trained model with a small set of tunable parameters that temporarily store the task-specific knowledge while the backbone model is frozen during adaptation. Motivated by the success of group-wise convolution, we adopt grouped connections across the features extracted by fully connected layers to construct the tunable parts of a consolidator. To further enhance the model's capacity to transfer knowledge under a constrained storage budget and keep inference efficient, we consolidate the parameters in two stages: 1. between adaptation and storage, and 2. between loading and inference. On a series of downstream visual tasks, our consolidator can reach up to 7.56 better accuracy than full fine-tuning with merely 0.35% parameters, and outperforms state-of-the-art parameter-efficient tuning methods by a clear margin. Code is available at github.
|
https://openreview.net/pdf/23beca117af91084d66eba68e6b3577b2d602065.pdf
|
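A rough numpy sketch of the grouped (block-diagonal) connection idea behind the consolidator, assuming hypothetical sizes (`dim`, `groups`) and a stand-in frozen fully connected layer; the two-stage consolidation and the exact placement inside a vision transformer are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
batch, dim, groups = 8, 768, 4
assert dim % groups == 0
gdim = dim // groups

x = rng.normal(size=(batch, dim)).astype(np.float32)

# Frozen pre-trained fully connected layer (stand-in for a transformer FC).
W_frozen = rng.normal(scale=0.02, size=(dim, dim)).astype(np.float32)

# Tunable grouped connections: one small weight block per feature group,
# i.e. a block-diagonal matrix with dim^2 / groups parameters instead of dim^2.
W_groups = rng.normal(scale=0.02, size=(groups, gdim, gdim)).astype(np.float32)

def grouped_linear(x, W_groups):
    """Apply an independent linear map to each contiguous feature group."""
    xg = x.reshape(x.shape[0], groups, gdim)          # (batch, groups, gdim)
    yg = np.einsum("bgi,gio->bgo", xg, W_groups)      # per-group matmul
    return yg.reshape(x.shape[0], -1)

# Adapted output: frozen path plus the lightweight grouped residual path.
y = x @ W_frozen + grouped_linear(x, W_groups)
print(y.shape)  # (8, 768)
```

Because the grouped path is linear, it can in principle be folded into the frozen weight as a block-diagonal addition at inference time, which is one way to read the "mergable" property; this is a sketch of the idea, not the paper's procedure.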
Statistical Theory of Differentially Private Marginal-based Data Synthesis Algorithms
|
https://openreview.net/forum?id=hxUwnEGxW87
|
https://openreview.net/forum?id=hxUwnEGxW87
|
Ximing Li,Chendi Wang,Guang Cheng
|
ICLR 2023,Poster
|
Marginal-based methods achieve promising performance in the synthetic data competition hosted by the National Institute of Standards and Technology (NIST).
To deal with high-dimensional data, the distribution of synthetic data is represented by a probabilistic graphical model (e.g., a Bayesian network), while the raw data distribution is approximated by a collection of low-dimensional marginals.
Differential privacy (DP) is guaranteed by introducing random noise to each low-dimensional marginal distribution.
Despite its promising performance in practice, the statistical properties of marginal-based methods are rarely studied in the literature.
In this paper, we study DP data synthesis algorithms based on Bayesian networks (BN) from a statistical perspective. We establish a rigorous accuracy guarantee for BN-based algorithms, where the errors are measured by the total variation (TV) distance or the $L^2$ distance.
Related to downstream machine learning tasks, an upper bound for the utility error of the DP synthetic data is also derived. To complete the picture, we establish a lower bound for TV accuracy that holds for every $\epsilon$-DP synthetic data generator.
|
https://openreview.net/pdf/3058893e9f851ac461611b0b24f93d716651a067.pdf
|
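The basic mechanism, adding calibrated Laplace noise to each released low-dimensional marginal, can be sketched as follows in numpy; the sensitivity and budget choices are illustrative assumptions, and the Bayesian-network selection step is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy categorical dataset with three binary attributes.
n = 1000
data = rng.integers(0, 2, size=(n, 3))

def noisy_marginal(data, cols, epsilon, rng):
    """Release a k-way marginal histogram with Laplace noise.

    Assumes add/remove-one neighbouring datasets, so the L1 sensitivity of a
    counting histogram is 1; splitting epsilon across marginals is up to the
    overall privacy accounting, which is not modelled here.
    """
    values = data[:, cols]
    k = len(cols)
    counts = np.zeros((2,) * k)
    for row in values:
        counts[tuple(row)] += 1
    noisy = counts + rng.laplace(scale=1.0 / epsilon, size=counts.shape)
    # Post-process into a valid probability table (non-negative, sums to 1).
    noisy = np.clip(noisy, 0, None)
    return noisy / noisy.sum()

marg_01 = noisy_marginal(data, [0, 1], epsilon=0.5, rng=rng)
print(marg_01)
```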
Anti-Symmetric DGN: a stable architecture for Deep Graph Networks
|
https://openreview.net/forum?id=J3Y7cgZOOS
|
https://openreview.net/forum?id=J3Y7cgZOOS
|
Alessio Gravina,Davide Bacciu,Claudio Gallicchio
|
ICLR 2023,Poster
|
Deep Graph Networks (DGNs) currently dominate the research landscape of learning from graphs, due to their efficiency and ability to implement an adaptive message-passing scheme between the nodes. However, DGNs are typically limited in their ability to propagate and preserve long-term dependencies between nodes, i.e., they suffer from the over-squashing phenomenon. As a result, we can expect them to under-perform, since different problems require capturing interactions at different (and possibly large) radii in order to be solved effectively. In this work, we present Anti-Symmetric Deep Graph Networks (A-DGNs), a framework for stable and non-dissipative DGN design, conceived through the lens of ordinary differential equations. We prove theoretically that our method is stable and non-dissipative, leading to two key results: long-range information between nodes is preserved, and no gradient vanishing or explosion occurs in training. We empirically validate the proposed approach on several graph benchmarks, showing that A-DGN yields improved performance and learns effectively even when dozens of layers are used.
|
https://openreview.net/pdf/48b3d62fa0cacd2579c8fe3196c54df1c124b713.pdf
|
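A rough numpy sketch of the core design: a graph ODE whose weight matrix is forced to be antisymmetric (plus a small damping term), integrated with forward Euler steps. The aggregation function and all constants are illustrative simplifications of the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 8                       # nodes, feature dimension
A = (rng.uniform(size=(n, n)) < 0.4).astype(float)
A = np.triu(A, 1); A = A + A.T    # symmetric toy adjacency, no self-loops

X = rng.normal(size=(n, d))
W = rng.normal(scale=0.3, size=(d, d))
V = rng.normal(scale=0.3, size=(d, d))
b = np.zeros(d)

gamma, eps, layers = 0.1, 0.1, 50

# Antisymmetric part: (W - W^T) has purely imaginary eigenvalues, which is what
# gives the stable, non-dissipative dynamics; gamma * I adds a small damping.
W_anti = W - W.T - gamma * np.eye(d)

for _ in range(layers):
    agg = A @ X @ V                                 # simple neighbour aggregation
    X = X + eps * np.tanh(X @ W_anti.T + agg + b)   # forward Euler step

print(np.linalg.norm(X))  # node features stay bounded even with many layers
```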
Contrastive Learning for Unsupervised Domain Adaptation of Time Series
|
https://openreview.net/forum?id=xPkJYRsQGM
|
https://openreview.net/forum?id=xPkJYRsQGM
|
Yilmazcan Ozyurt,Stefan Feuerriegel,Ce Zhang
|
ICLR 2023,Poster
|
Unsupervised domain adaptation (UDA) aims at learning a machine learning model using a labeled source domain that performs well on a similar yet different, unlabeled target domain. UDA is important in many applications such as medicine, where it is used to adapt risk scores across different patient cohorts. In this paper, we develop a novel framework for UDA of time series data, called CLUDA. Specifically, we propose a contrastive learning framework to learn contextual representations in multivariate time series, so that these preserve label information for the prediction task. In our framework, we further capture the variation in the contextual representations between source and target domain via a custom nearest-neighbor contrastive learning. To the best of our knowledge, ours is the first framework to learn domain-invariant, contextual representation for UDA of time series data. We evaluate our framework using a wide range of time series datasets to demonstrate its effectiveness and show that it achieves state-of-the-art performance for time series UDA.
|
https://openreview.net/pdf/8db66ee8b82ca7ce1a69f87119d02bf7f0b4630e.pdf
|
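As a point of reference, a generic InfoNCE-style contrastive loss of the kind such frameworks build on is sketched below in numpy; this is not CLUDA's nearest-neighbour cross-domain contrast, just the basic objective, applied to hypothetical embedding arrays.

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """Generic InfoNCE loss between two batches of L2-normalised embeddings,
    where (z1[i], z2[i]) are positive pairs and all other rows are negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature                 # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))              # positives sit on the diagonal

rng = np.random.default_rng(0)
# Stand-ins for embeddings of two augmented views of the same time-series windows.
z_a = rng.normal(size=(16, 32))
z_b = z_a + 0.05 * rng.normal(size=(16, 32))
print(info_nce(z_a, z_b))
```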
Online Low Rank Matrix Completion
|
https://openreview.net/forum?id=47KG_AvNqeZ
|
https://openreview.net/forum?id=47KG_AvNqeZ
|
Soumyabrata Pal,Prateek Jain
|
ICLR 2023,Poster
|
We study the problem of online low-rank matrix completion with $\mathsf{M}$ users, $\mathsf{N}$ items and $\mathsf{T}$ rounds. In each round, the algorithm recommends one item per user, for which it gets a (noisy) reward sampled from a low-rank user-item preference matrix. The goal is to design a method with sub-linear regret (in $\mathsf{T}$) and nearly optimal dependence on $\mathsf{M}$ and $\mathsf{N}$. The problem can be easily mapped to the standard multi-armed bandit problem where each item is an independent arm, but that leads to poor regret as the correlation between arms and users is not exploited. On the other hand, exploiting the low-rank structure of reward matrix is challenging due to non-convexity of the low-rank manifold. We first demonstrate that the low-rank structure can be exploited using a simple explore-then-commit (ETC) approach that ensures a regret of $O(\mathsf{polylog} (\mathsf{M}+\mathsf{N}) \mathsf{T}^{2/3})$. That is, roughly only $\mathsf{polylog} (\mathsf{M}+\mathsf{N})$ item recommendations are required per user to get a non-trivial solution. We then improve our result for the rank-$1$ setting which in itself is quite challenging and encapsulates some of the key issues. Here, we propose OCTAL (Online Collaborative filTering using iterAtive user cLustering) that guarantees nearly optimal regret of $O(\mathsf{polylog} (\mathsf{M}+\mathsf{N}) \mathsf{T}^{1/2})$. OCTAL is based on a novel technique of clustering users that allows iterative elimination of items and leads to a nearly optimal minimax rate.
|
https://openreview.net/pdf/36a5aac4f386a08dc0b2f40b93629065cf6f29c2.pdf
|
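A toy explore-then-commit loop on a rank-1 reward matrix, illustrating the phase structure the regret analysis is about; the constants and the rank-1 SVD completion are illustrative, and OCTAL's iterative user clustering is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, T = 30, 25, 2000            # users, items, rounds
u = rng.uniform(0.5, 1.0, M)
v = rng.uniform(0.5, 1.0, N)
R = np.outer(u, v)                # rank-1 preference matrix (unknown to the learner)

noise = lambda s: rng.normal(0.0, 0.1, size=s)
T_explore = 400

# Explore: recommend uniformly random items, collect noisy rewards.
sums = np.zeros((M, N)); counts = np.zeros((M, N))
for t in range(T_explore):
    items = rng.integers(0, N, size=M)
    rewards = R[np.arange(M), items] + noise(M)
    sums[np.arange(M), items] += rewards
    counts[np.arange(M), items] += 1

est = np.divide(sums, counts, out=np.zeros_like(sums), where=counts > 0)

# Project the noisy empirical estimate onto rank 1 via a truncated SVD.
U, S, Vt = np.linalg.svd(est, full_matrices=False)
R_hat = S[0] * np.outer(U[:, 0], Vt[0])

# Commit: always recommend each user's estimated best item for the remaining rounds.
best = R_hat.argmax(axis=1)
gap = (R.max(axis=1) - R[np.arange(M), best]).sum()
print(f"summed per-round regret after committing: {gap:.4f}")
```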
Explaining RL Decisions with Trajectories
|
https://openreview.net/forum?id=5Egggz1q575
|
https://openreview.net/forum?id=5Egggz1q575
|
Shripad Vilasrao Deshmukh,Arpan Dasgupta,Balaji Krishnamurthy,Nan Jiang,Chirag Agarwal,Georgios Theocharous,Jayakumar Subramanian
|
ICLR 2023,Poster
|
Explanation is a key component for the adoption of reinforcement learning (RL) in many real-world decision-making problems. In the literature, the explanation is often provided by saliency attribution to the features of the RL agent's state. In this work, we propose a complementary approach to these explanations, particularly for offline RL, where we attribute the policy decisions of a trained RL agent to the trajectories encountered by it during training. To do so, we encode trajectories in offline training data individually as well as collectively (encoding a set of trajectories). We then attribute policy decisions to a set of trajectories in this encoded space by estimating the sensitivity of the decision with respect to that set. Further, we demonstrate the effectiveness of the proposed approach in terms of quality of attributions as well as practical scalability in diverse environments that involve both discrete and continuous state and action spaces such as grid-worlds, video games (Atari) and continuous control (MuJoCo). We also conduct a human study on a simple navigation task to observe how participants' understanding of the task compares with the data attributed for a trained RL policy.
|
https://openreview.net/pdf/8c14263279e4c45dd0a74d7e52a0c6d707338882.pdf
|
FastFill: Efficient Compatible Model Update
|
https://openreview.net/forum?id=rnRiiHw8Vy
|
https://openreview.net/forum?id=rnRiiHw8Vy
|
Florian Jaeckle,Fartash Faghri,Ali Farhadi,Oncel Tuzel,Hadi Pouransari
|
ICLR 2023,Poster
|
In many retrieval systems the original high dimensional data (e.g., images) is mapped to a lower dimensional feature through a learned embedding model. The task of retrieving the most similar data from a gallery set to a given query data is performed through similarity comparison on features. When the embedding model is updated, it might produce features that are not comparable/compatible with features already in the gallery computed with the old model. Subsequently, all features in the gallery need to be re-computed using the new embedding model -- a computationally expensive process called backfilling. Recently, compatible representation learning methods have been proposed to avoid back-filling. Despite their relative success, there is an inherent trade-off between new model performance and its compatibility with the old model. In this work, we introduce FastFill: a compatible model update process using feature alignment and policy based partial backfilling to promptly elevate retrieval performance. We show that previous backfilling strategies suffer from decreased performance and demonstrate the importance of both the training objective and the ordering in online partial backfilling. We propose a new training method for feature alignment between old and new embedding models using uncertainty estimation. Compared to previous works, we obtain significantly improved backfilling results on a variety of datasets: mAP on ImageNet (+4.4%), Places-365 (+2.7%), and VGG-Face2 (+1.3%). Further, we demonstrate that when updating a biased model with FastFill, the minority subgroup accuracy gap promptly vanishes with a small fraction of partial backfilling.
|
https://openreview.net/pdf/8d20dc462785f41d1a42a73d5f251100169ec180.pdf
|
Learnable Graph Convolutional Attention Networks
|
https://openreview.net/forum?id=WsUMeHPo-2
|
https://openreview.net/forum?id=WsUMeHPo-2
|
Adrián Javaloy,Pablo Sanchez Martin,Amit Levi,Isabel Valera
|
ICLR 2023,Poster
|
Existing Graph Neural Networks (GNNs) compute the message exchange between nodes by either aggregating uniformly (convolving) the features of all the neighboring nodes, or by applying a non-uniform score (attending) to the features. Recent works have shown the strengths and weaknesses of the resulting GNN architectures, respectively, GCNs and GATs. In this work, we aim at exploiting the strengths of both approaches to their full extent. To this end, we first introduce the graph convolutional attention layer (CAT), which relies on convolutions to compute the attention scores. Unfortunately, as in the case of GCNs and GATs, we show that there exists no clear winner between the three (neither theoretically nor in practice), as their performance directly depends on the nature of the data (i.e., of the graph and features). This result brings us to the main contribution of our work, the learnable graph convolutional attention network (L-CAT): a GNN architecture that automatically interpolates between GCN, GAT and CAT in each layer, by adding only two scalar parameters. Our results demonstrate that L-CAT is able to efficiently combine different GNN layers along the network, outperforming competing methods on a wide range of datasets, and resulting in a more robust model that reduces the need for cross-validation.
|
https://openreview.net/pdf/937d12fbf7e1e0cb377a513fbe5e821bf5b724c9.pdf
|
Scaffolding a Student to Instill Knowledge
|
https://openreview.net/forum?id=N4K5ck-BTT
|
https://openreview.net/forum?id=N4K5ck-BTT
|
Anil Kag,Durmus Alp Emre Acar,Aditya Gangrade,Venkatesh Saligrama
|
ICLR 2023,Poster
|
We propose a novel knowledge distillation (KD) method to selectively instill teacher knowledge into a student model, motivated by situations where the student's capacity is significantly smaller than the teacher's. In vanilla KD, the teacher primarily sets a predictive target for the student to follow, and we posit that this target is overly optimistic due to the student's lack of capacity. We develop a novel scaffolding scheme where the teacher, in addition to setting a predictive target, also scaffolds the student's prediction by censoring hard-to-learn examples. Scaffolding takes the same information as input, namely the teacher's soft-max predictions, and in this sense our proposal can be viewed as a natural variant of vanilla KD. We show on synthetic examples that censoring hard examples smooths the student's loss landscape so that the student encounters fewer local minima and, as a result, generalizes well. Against vanilla KD, we achieve improved performance and are comparable to more intrusive techniques that leverage feature matching on benchmark datasets.
|
https://openreview.net/pdf/c8a1e11f100899f2bd81fb3442a4721f0872fe17.pdf
|
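One way to read the censoring idea is to drop examples the teacher itself is unsure about from the distillation target. The numpy sketch below does exactly that with a hypothetical confidence threshold; it is an illustrative variant of vanilla KD, not the paper's exact scaffolding scheme.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def censored_kd_loss(student_logits, teacher_logits, labels, tau=2.0, conf_thresh=0.6):
    """Cross-entropy on all examples + KL to the teacher only on examples the
    teacher is confident about (a crude stand-in for censoring hard examples)."""
    n, _ = student_logits.shape
    p_s = softmax(student_logits / tau)
    p_t = softmax(teacher_logits / tau)

    ce = -np.log(softmax(student_logits)[np.arange(n), labels] + 1e-12).mean()

    keep = p_t.max(axis=1) >= conf_thresh            # censor hard-to-learn examples
    if keep.any():
        kl = (p_t[keep] * (np.log(p_t[keep] + 1e-12)
                           - np.log(p_s[keep] + 1e-12))).sum(axis=1).mean()
    else:
        kl = 0.0
    return ce + (tau ** 2) * kl

rng = np.random.default_rng(0)
s_logits = rng.normal(size=(32, 10))
t_logits = 3.0 * rng.normal(size=(32, 10))
labels = rng.integers(0, 10, size=32)
print(censored_kd_loss(s_logits, t_logits, labels))
```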
User-Interactive Offline Reinforcement Learning
|
https://openreview.net/forum?id=a4COps0uokg
|
https://openreview.net/forum?id=a4COps0uokg
|
Phillip Swazinna,Steffen Udluft,Thomas Runkler
|
ICLR 2023,Poster
|
Offline reinforcement learning algorithms still lack trust in practice due to the risk that the learned policy performs worse than the original policy that generated the dataset or behaves in an unexpected way that is unfamiliar to the user. At the same time, offline RL algorithms are not able to tune their most important hyperparameter: the proximity of the learned policy to the original policy. We propose an algorithm that allows the user to tune this hyperparameter at runtime, thereby addressing both of the above-mentioned issues simultaneously. This allows users to start with the original behavior and grant successively greater deviation, as well as to stop at any time when the policy deteriorates or the behavior strays too far from the familiar one.
|
https://openreview.net/pdf/81fc7c68487b51f4cf252f7eaa1571f38b45e4cb.pdf
|
SLTUNET: A Simple Unified Model for Sign Language Translation
|
https://openreview.net/forum?id=EBS4C77p_5S
|
https://openreview.net/forum?id=EBS4C77p_5S
|
Biao Zhang,Mathias Müller,Rico Sennrich
|
ICLR 2023,Poster
|
Despite recent successes with neural models for sign language translation (SLT), translation quality still lags behind spoken languages because of the data scarcity and modality gap between sign video and text. To address both problems, we investigate strategies for cross-modality representation sharing for SLT. We propose SLTUNET, a simple unified neural model designed to support multiple SLT-related tasks jointly, such as sign-to-gloss, gloss-to-text and sign-to-text translation. Jointly modeling different tasks endows SLTUNET with the capability to explore the cross-task relatedness that could help narrow the modality gap. In addition, this allows us to leverage the knowledge from external resources, such as abundant parallel data used for spoken-language machine translation (MT). We show in experiments that SLTUNET achieves competitive and even state-of-the-art performance on PHOENIX-2014T and CSL-Daily when augmented with MT data and equipped with a set of optimization techniques. We further use the DGS Corpus for end-to-end SLT for the first time. It covers broader domains with a significantly larger vocabulary, which is more challenging and which we consider to allow for a more realistic assessment of the current state of SLT than the former two. Still, SLTUNET obtains improved results on the DGS Corpus. Code is available at https://github.com/bzhangGo/sltunet.
|
https://openreview.net/pdf/9507e2116df8f18cfd2b45279e7e83f544567dbe.pdf
|
Understanding the Generalization of Adam in Learning Neural Networks with Proper Regularization
|
https://openreview.net/forum?id=iUYpN14qjTF
|
https://openreview.net/forum?id=iUYpN14qjTF
|
Difan Zou,Yuan Cao,Yuanzhi Li,Quanquan Gu
|
ICLR 2023,Poster
|
Adaptive gradient methods such as Adam have gained increasing popularity in deep learning optimization. However, it has been observed that in many deep learning applications, such as image classification, Adam can converge to a different solution with a worse test error compared to (stochastic) gradient descent, even with fine-tuned regularization. In this paper, we provide a theoretical explanation for this phenomenon: we show that in the nonconvex setting of learning over-parameterized two-layer convolutional neural networks starting from the same random initialization, for a class of data distributions (inspired by image data), Adam and gradient descent (GD) can converge to different global solutions of the training objective with provably different generalization errors, even with weight decay regularization. In contrast, we show that if the training objective is convex and weight decay regularization is employed, any optimization algorithm, including Adam and GD, will converge to the same solution if training is successful. This suggests that the generalization gap between Adam and SGD in the presence of weight decay regularization is closely tied to the nonconvex landscape of deep learning optimization, which cannot be covered by the recent neural tangent kernel (NTK) based analysis.
|
https://openreview.net/pdf/1102aa0c20df94c5e53cd25c553ec1a8d1c0d953.pdf
|
A law of adversarial risk, interpolation, and label noise
|
https://openreview.net/forum?id=0_TxFpAsEI
|
https://openreview.net/forum?id=0_TxFpAsEI
|
Daniel Paleka,Amartya Sanyal
|
ICLR 2023,Poster
|
In supervised learning, it has been shown that label noise in the data can be interpolated without penalties on test accuracy. We show that interpolating label noise induces adversarial vulnerability, and prove the first theorem showing the relationship between label noise and adversarial risk for any data distribution. Our results are almost tight if we do not make any assumptions on the inductive bias of the learning algorithm. We then investigate how different components of this problem, including properties of the distribution, affect this result. We also discuss non-uniform label noise distributions and prove a new theorem showing that uniform label noise induces nearly as large an adversarial risk as the worst poisoning with the same noise rate. We then provide theoretical and empirical evidence that uniform label noise is more harmful than typical real-world label noise. Finally, we show how inductive biases amplify the effect of label noise and argue the need for future work in this direction.
|
https://openreview.net/pdf/01caf3dd0ea1be0ef1782bbb1dbc2a81569dfb0d.pdf
|
Learning ReLU networks to high uniform accuracy is intractable
|
https://openreview.net/forum?id=nchvKfvNeX0
|
https://openreview.net/forum?id=nchvKfvNeX0
|
Julius Berner,Philipp Grohs,Felix Voigtlaender
|
ICLR 2023,Poster
|
Statistical learning theory provides bounds on the necessary number of training samples needed to reach a prescribed accuracy in a learning problem formulated over a given target class. This accuracy is typically measured in terms of a generalization error, that is, an expected value of a given loss function. However, for several applications --- for example in a security-critical context or for problems in the computational sciences --- accuracy in this sense is not sufficient. In such cases, one would like to have guarantees for high accuracy on every input value, that is, with respect to the uniform norm. In this paper we precisely quantify the number of training samples needed for any conceivable training algorithm to guarantee a given uniform accuracy on any learning problem formulated over target classes containing (or consisting of) ReLU neural networks of a prescribed architecture. We prove that, under very general assumptions, the minimal number of training samples for this task scales exponentially both in the depth and the input dimension of the network architecture.
|
https://openreview.net/pdf/329f643f8f5be6019b42395906af35a4bf07256c.pdf
|
Active Learning for Object Detection with Evidential Deep Learning and Hierarchical Uncertainty Aggregation
|
https://openreview.net/forum?id=MnEjsw-vj-X
|
https://openreview.net/forum?id=MnEjsw-vj-X
|
Younghyun Park,Wonjeong Choi,Soyeong Kim,Dong-Jun Han,Jaekyun Moon
|
ICLR 2023,Poster
|
Despite the huge success of object detection, the training process still requires an immense amount of labeled data. Although various active learning solutions for object detection have been proposed, most existing works do not take advantage of epistemic uncertainty, which is an important metric for capturing the usefulness of the sample. Also, previous works pay little attention to the attributes of each bounding box (e.g., nearest object, box size) when computing the informativeness of an image. In this paper, we propose a new active learning strategy for object detection that overcomes the shortcomings of prior works. To make use of epistemic uncertainty, we adopt evidential deep learning (EDL) and propose a new module termed model evidence head (MEH), that makes EDL highly compatible with object detection. Based on the computed epistemic uncertainty of each bounding box, we propose hierarchical uncertainty aggregation (HUA) for obtaining the informativeness of an image. HUA realigns all bounding boxes into multiple levels based on the attributes and aggregates uncertainties in a bottom-up order, to effectively capture the context within the image. Experimental results show that our method outperforms existing state-of-the-art methods by a considerable margin.
|
https://openreview.net/pdf/fedddd142f0627ab8151b2158be801f8e9c917c4.pdf
|
How Sharpness-Aware Minimization Minimizes Sharpness?
|
https://openreview.net/forum?id=5spDgWmpY6x
|
https://openreview.net/forum?id=5spDgWmpY6x
|
Kaiyue Wen,Tengyu Ma,Zhiyuan Li
|
ICLR 2023,Poster
|
Sharpness-Aware Minimization (SAM) is a highly effective regularization technique for improving the generalization of deep neural networks for various settings. However, the underlying working of SAM remains elusive because of various intriguing approximations in the theoretical characterizations. SAM intends to penalize a notion of sharpness of the model but implements a computationally efficient variant; moreover, a third notion of sharpness was used for proving generalization guarantees. The subtle differences in these notions of sharpness can indeed lead to significantly different empirical results. This paper rigorously nails down the exact sharpness notion that SAM regularizes and clarifies the underlying mechanism. We also show that the two steps of approximations in the original motivation of SAM individually lead to inaccurate local conclusions, but their combination accidentally reveals the correct effect, when full-batch gradients are applied. Furthermore, we also prove that the stochastic version of SAM in fact regularizes the third notion of sharpness mentioned above, which is most likely to be the preferred notion for practical performance. The key mechanism behind this intriguing phenomenon is the alignment between the gradient and the top eigenvector of Hessian when SAM is applied.
|
https://openreview.net/pdf/99a6274fc82db36d3d97dac70aa2eefee7af43fc.pdf
|
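For reference, the standard two-step SAM update (perturb to the worst-case point in a rho-ball, then descend with the gradient taken there) on a toy quadratic loss; the paper's distinctions between different sharpness notions are beyond what a sketch can show.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.diag([10.0, 1.0])           # toy quadratic loss L(w) = 0.5 * w^T A w
loss = lambda w: 0.5 * w @ A @ w
grad = lambda w: A @ w

w = rng.normal(size=2)
lr, rho = 0.05, 0.1

for step in range(200):
    g = grad(w)
    # Step 1: ascend to the (approximate) worst-case point inside a rho-ball around w.
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    # Step 2: descend using the gradient evaluated at the perturbed point.
    w = w - lr * grad(w + eps)

print(f"final loss {loss(w):.6f}, final w {w}")
```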
The Implicit Bias of Minima Stability in Multivariate Shallow ReLU Networks
|
https://openreview.net/forum?id=xtbog7cfsr
|
https://openreview.net/forum?id=xtbog7cfsr
|
Mor Shpigel Nacson,Rotem Mulayoff,Greg Ongie,Tomer Michaeli,Daniel Soudry
|
ICLR 2023,Poster
|
We study the type of solutions to which stochastic gradient descent converges when used to train a single hidden-layer multivariate ReLU network with the quadratic loss. Our results are based on a dynamical stability analysis. In the univariate case, it was shown that linearly stable minima correspond to network functions (predictors), whose second derivative has a bounded weighted $L^1$ norm. Notably, the bound gets smaller as the step size increases, implying that training with a large step size leads to `smoother' predictors. Here we generalize this result to the multivariate case, showing that a similar result applies to the Laplacian of the predictor. We demonstrate the tightness of our bound on the MNIST dataset, and show that it accurately captures the behavior of the solutions as a function of the step size. Additionally, we prove a depth separation result on the approximation power of ReLU networks corresponding to stable minima of the loss. Specifically, although shallow ReLU networks are universal approximators, we prove that stable shallow networks are not. Namely, there is a function that cannot be well-approximated by stable single hidden-layer ReLU networks trained with a non-vanishing step size. This is while the same function can be realized as a stable two hidden-layer ReLU network. Finally, we prove that if a function is sufficiently smooth (in a Sobolev sense) then it can be approximated arbitrarily well using shallow ReLU networks that correspond to stable solutions of gradient descent.
|
https://openreview.net/pdf/9199b87f3cc5785da4589585c06cb7653f471739.pdf
|
MAST: Masked Augmentation Subspace Training for Generalizable Self-Supervised Priors
|
https://openreview.net/forum?id=5KUPKjHYD-l
|
https://openreview.net/forum?id=5KUPKjHYD-l
|
Chen Huang,Hanlin Goh,Jiatao Gu,Joshua M. Susskind
|
ICLR 2023,Poster
|
Recent Self-Supervised Learning (SSL) methods are able to learn feature representations that are invariant to different data augmentations, which can then be transferred to downstream tasks of interest. However, different downstream tasks require different invariances for their best performance, so the optimal choice of augmentations for SSL depends on the target task. In this paper, we aim to learn self-supervised features that generalize well across a variety of downstream tasks (e.g., object classification, detection and instance segmentation) without knowing any task information beforehand. We do so by Masked Augmentation Subspace Training (or MAST) to encode in the single feature space the priors from different data augmentations in a factorized way. Specifically, we disentangle the feature space into separate subspaces, each induced by a learnable mask that selects relevant feature dimensions to model invariance to a specific augmentation. We show the success of MAST in jointly capturing generalizable priors from different augmentations, using both unique and shared features across the subspaces. We further show that MAST benefits from uncertainty modeling to reweight ambiguous samples from strong augmentations that may cause similarity mismatch in each subspace. Experiments demonstrate that MAST consistently improves generalization on various downstream tasks, while being task-agnostic and efficient during SSL. We also provide interesting insights about how different augmentations are related and how uncertainty reflects learning difficulty.
|
https://openreview.net/pdf/a6ee04305991a9efa073850bebbec9317ac4de6d.pdf
|
Graph-based Deterministic Policy Gradient for Repetitive Combinatorial Optimization Problems
|
https://openreview.net/forum?id=yHIIM9BgOo
|
https://openreview.net/forum?id=yHIIM9BgOo
|
Zhongyuan Zhao,Ananthram Swami,Santiago Segarra
|
ICLR 2023,Poster
|
We propose an actor-critic framework for graph-based machine learning pipelines with non-differentiable blocks, and apply it to repetitive combinatorial optimization problems (COPs) under hard constraints. Repetitive COP refers to problems to be solved repeatedly on graphs of the same or slowly changing topology but rapidly changing node or edge weights. Compared to one-shot COPs, repetitive COPs often rely on fast heuristics to solve one instance of the problem before the next one arrives, at the cost of a relatively large optimality gap. Through numerical experiments on several discrete optimization problems, we show that our approach can learn reusable node or edge representations to reduce the optimality gap of fast heuristics for independent repetitive COPs, and can optimize the long-term objectives for repetitive COPs embedded in graph-based Markov decision processes. Source code at https://github.com/XzrTGMu/twin-nphard
|
https://openreview.net/pdf/375cd8060d66d3802592086cf8d856f477c86f39.pdf
|
Lower Bounds on the Depth of Integral ReLU Neural Networks via Lattice Polytopes
|
https://openreview.net/forum?id=2mvALOAWaxY
|
https://openreview.net/forum?id=2mvALOAWaxY
|
Christian Alexander Haase,Christoph Hertrich,Georg Loho
|
ICLR 2023,Poster
|
We prove that the set of functions representable by ReLU neural networks with integer weights strictly increases with the network depth while allowing arbitrary width. More precisely, we show that $\lceil\log_2(n)\rceil$ hidden layers are indeed necessary to compute the maximum of $n$ numbers, matching known upper bounds. Our results are based on the known duality between neural networks and Newton polytopes via tropical geometry. The integrality assumption implies that these Newton polytopes are lattice polytopes. Then, our depth lower bounds follow from a parity argument on the normalized volume of faces of such polytopes.
|
https://openreview.net/pdf/cf2ba337d410809186cefd884234463af489a061.pdf
|
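The matching upper bound is constructive and small enough to run: max(a, b) = a + relu(b - a), applied in a balanced tournament, computes the maximum of n numbers with ceil(log2 n) ReLU layers. A numpy illustration:

```python
import numpy as np

relu = lambda x: np.maximum(x, 0.0)

def tournament_max(x):
    """Compute the max of the entries of x via pairwise max(a, b) = a + relu(b - a);
    a balanced tournament needs ceil(log2(n)) ReLU layers for n inputs."""
    x = np.asarray(x, dtype=float)
    layers = 0
    while x.size > 1:
        if x.size % 2:                 # carry an odd element to the next round
            x = np.append(x, x[-1])
        a, b = x[0::2], x[1::2]
        x = a + relu(b - a)            # one ReLU layer per tournament round
        layers += 1
    return x[0], layers

vals = [3.0, -1.5, 7.25, 0.0, 4.5]
m, depth = tournament_max(vals)
print(m, depth, int(np.ceil(np.log2(len(vals)))))   # 7.25, 3, 3
```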
Wasserstein Auto-encoded MDPs: Formal Verification of Efficiently Distilled RL Policies with Many-sided Guarantees
|
https://openreview.net/forum?id=JLLTtEdh1ZY
|
https://openreview.net/forum?id=JLLTtEdh1ZY
|
Florent Delgrange,Ann Nowe,Guillermo Perez
|
ICLR 2023,Poster
|
Although deep reinforcement learning (DRL) has many success stories, the large-scale deployment of policies learned through these advanced techniques in safety-critical scenarios is hindered by their lack of formal guarantees. Variational Markov Decision Processes (VAE-MDPs) are discrete latent space models that provide a reliable framework for distilling formally verifiable controllers from any RL policy. While the related guarantees address relevant practical aspects such as the satisfaction of performance and safety properties, the VAE approach suffers from several learning flaws (posterior collapse, slow learning speed, poor dynamics estimates), primarily due to the absence of abstraction and representation guarantees to support latent optimization. We introduce the Wasserstein auto-encoded MDP (WAE-MDP), a latent space model that fixes those issues by minimizing a penalized form of the optimal transport between the behaviors of the agent executing the original policy and the distilled policy, for which the formal guarantees apply. Our approach yields bisimulation guarantees while learning the distilled policy, allowing concrete optimization of the abstraction and representation model quality. Our experiments show that, besides distilling policies up to 10 times faster, the latent model quality is indeed better in general. Moreover, we present experiments from a simple time-to-failure verification algorithm on the latent space. The fact that our approach enables such simple verification techniques highlights its applicability.
|
https://openreview.net/pdf/0f568427fb05b2660614b52c5f8dab551fc4d702.pdf
|
Global Explainability of GNNs via Logic Combination of Learned Concepts
|
https://openreview.net/forum?id=OTbRTIY4YS
|
https://openreview.net/forum?id=OTbRTIY4YS
|
Steve Azzolin,Antonio Longa,Pietro Barbiero,Pietro Lio,Andrea Passerini
|
ICLR 2023,Poster
|
While instance-level explanation of GNN is a well-studied problem with plenty of approaches being developed, providing a global explanation for the behaviour of a GNN is much less explored, despite its potential in interpretability and debugging. Existing solutions either simply list local explanations for a given class, or generate a synthetic prototypical graph with maximal score for a given class, completely missing any combinatorial aspect that the GNN could have learned.
In this work, we propose GLGExplainer (Global Logic-based GNN Explainer), the first Global Explainer capable of generating explanations as arbitrary Boolean combinations of learned graphical concepts. GLGExplainer is a fully differentiable architecture that takes local explanations as inputs and combines them into a logic formula over graphical concepts, represented as clusters of local explanations.
Contrary to existing solutions, GLGExplainer provides accurate and human-interpretable global explanations that are perfectly aligned with ground-truth explanations (on synthetic data) or match existing domain knowledge (on real-world data). Extracted formulas are faithful to the model predictions, to the point of providing insights into some occasionally incorrect rules learned by the model, making GLGExplainer a promising diagnostic tool for learned GNNs.
|
https://openreview.net/pdf/4bc7378db838b1014f2e7b981b34a3e0aadaaf09.pdf
|
Gradient Gating for Deep Multi-Rate Learning on Graphs
|
https://openreview.net/forum?id=JpRExTbl1-
|
https://openreview.net/forum?id=JpRExTbl1-
|
T. Konstantin Rusch,Benjamin Paul Chamberlain,Michael W. Mahoney,Michael M. Bronstein,Siddhartha Mishra
|
ICLR 2023,Poster
|
We present Gradient Gating (G$^2$), a novel framework for improving the performance of Graph Neural Networks (GNNs). Our framework is based on gating the output of GNN layers with a mechanism for multi-rate flow of message passing information across nodes of the underlying graph. Local gradients are harnessed to further modulate message passing updates. Our framework flexibly allows one to use any basic GNN layer as a wrapper around which the multi-rate gradient gating mechanism is built. We rigorously prove that G$^2$ alleviates the oversmoothing problem and allows the design of deep GNNs. Empirical results are presented to demonstrate that the proposed framework achieves state-of-the-art performance on a variety of graph learning tasks, including on large-scale heterophilic graphs.
|
https://openreview.net/pdf/20a87eb23a2421db12fece55b995dcd0b8d4dfe7.pdf
|
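A rough numpy sketch of the gating idea as described in the abstract: a per-node, per-channel rate built from local feature differences across edges modulates how much of the message-passing update is applied. The exact G$^2$ formulation (choice of the power p, nonlinearities, wrapper layer) may differ; everything below is an illustrative toy.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 8, 4
A = (rng.uniform(size=(n, n)) < 0.35).astype(float)
A = np.triu(A, 1); A = A + A.T                     # undirected toy graph

X = rng.normal(size=(n, d))
W = rng.normal(scale=0.5, size=(d, d))
deg = np.clip(A.sum(1, keepdims=True), 1, None)

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
p = 2.0

def gnn_layer(X):
    """Any basic message-passing layer; here, mean aggregation + linear + tanh."""
    return np.tanh((A @ X) / deg @ W)

for _ in range(20):
    Y = gnn_layer(X)
    # Local "graph gradient": how much each channel differs across incident edges.
    diff = np.abs(Y[None, :, :] - Y[:, None, :]) ** p     # (n, n, d)
    G = (A[:, :, None] * diff).sum(axis=1)                # sum over neighbours
    tau = sigmoid(G)                                      # per-node, per-channel rate
    X = (1.0 - tau) * X + tau * Y                         # gated (multi-rate) update

print(X.shape, np.linalg.norm(X))
```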
MAESTRO: Open-Ended Environment Design for Multi-Agent Reinforcement Learning
|
https://openreview.net/forum?id=sKWlRDzPfd7
|
https://openreview.net/forum?id=sKWlRDzPfd7
|
Mikayel Samvelyan,Akbir Khan,Michael D Dennis,Minqi Jiang,Jack Parker-Holder,Jakob Nicolaus Foerster,Roberta Raileanu,Tim Rocktäschel
|
ICLR 2023,Poster
|
Open-ended learning methods that automatically generate a curriculum of increasingly challenging tasks serve as a promising avenue toward generally capable reinforcement learning agents. Existing methods adapt curricula independently over either environment parameters (in single-agent settings) or co-player policies (in multi-agent settings). However, the strengths and weaknesses of co-players can manifest themselves differently depending on environmental features. It is thus crucial to consider the dependency between the environment and co-player when shaping a curriculum in multi-agent domains. In this work, we use this insight and extend Unsupervised Environment Design (UED) to multi-agent environments. We then introduce Multi-Agent Environment Design Strategist for Open-Ended Learning (MAESTRO), the first multi-agent UED approach for two-player zero-sum settings. MAESTRO efficiently produces adversarial, joint curricula over both environments and co-players and attains minimax-regret guarantees at Nash equilibrium. Our experiments show that MAESTRO outperforms a number of strong baselines on competitive two-player games, spanning discrete and continuous control settings.
|
https://openreview.net/pdf/e65b3ee9b4e5db11d48c773dc4e02702868ef8e6.pdf
|
Almost Linear Constant-Factor Sketching for $\ell_1$ and Logistic Regression
|
https://openreview.net/forum?id=gu-SC0dpkvw
|
https://openreview.net/forum?id=gu-SC0dpkvw
|
Alexander Munteanu,Simon Omlor,David Woodruff
|
ICLR 2023,Poster
|
We improve upon previous oblivious sketching and turnstile streaming results for $\ell_1$ and logistic regression, giving a much smaller sketching dimension achieving $O(1)$-approximation and yielding an efficient optimization problem in the sketch space. Namely, we achieve for any constant $c>0$ a sketching dimension of $\tilde{O}(d^{1+c})$ for $\ell_1$ regression and $\tilde{O}(\mu d^{1+c})$ for logistic regression, where $\mu$ is a standard measure that captures the complexity of compressing the data. For $\ell_1$-regression our sketching dimension is near-linear and improves previous work which either required $\Omega(\log d)$-approximation with this sketching dimension, or required a larger $\operatorname{poly}(d)$ number of rows. Similarly, for logistic regression previous work had worse $\operatorname{poly}(\mu d)$ factors in its sketching dimension. We also give a tradeoff that yields a $1+\varepsilon$ approximation in input sparsity time by increasing the total size to $(d\log(n)/\varepsilon)^{O(1/\varepsilon)}$ for $\ell_1$ and to $(\mu d\log(n)/\varepsilon)^{O(1/\varepsilon)}$ for logistic regression. Finally, we show that our sketch can be extended to approximate a regularized version of logistic regression where the data-dependent regularizer corresponds to the variance of the individual logistic losses.
|
https://openreview.net/pdf/5ec1b2a74ffe778ab1db68db880be68967bcde49.pdf
|
Neural-based classification rule learning for sequential data
|
https://openreview.net/forum?id=7tJyBmu9iCj
|
https://openreview.net/forum?id=7tJyBmu9iCj
|
Marine Collery,Philippe Bonnard,François Fages,Remy Kusters
|
ICLR 2023,Poster
|
Discovering interpretable patterns for classification of sequential data is of key importance for a variety of fields, ranging from genomics to fraud detection or more generally interpretable decision-making.
In this paper, we propose a novel differentiable fully interpretable method to discover both local and global patterns (i.e. catching a relative or absolute temporal dependency) for rule-based binary classification.
It consists of a convolutional binary neural network with an interpretable neural filter and a training strategy based on dynamically-enforced sparsity.
We demonstrate the validity and usefulness of the approach on synthetic datasets and on an open-source peptides dataset.
Key to this end-to-end differentiable method is that the expressive patterns used in the rules are learned alongside the rules themselves.
|
https://openreview.net/pdf/6ccbb01a7fbb6ea6edfc173772ccb005e84f5af1.pdf
|
Leveraging Unlabeled Data to Track Memorization
|
https://openreview.net/forum?id=ORp91sAbzI
|
https://openreview.net/forum?id=ORp91sAbzI
|
Mahsa Forouzesh,Hanie Sedghi,Patrick Thiran
|
ICLR 2023,Poster
|
Deep neural networks may easily memorize noisy labels present in real-world data, which degrades their ability to generalize. It is therefore important to track and evaluate the robustness of models against noisy label memorization. We propose a metric, called $\textit{susceptibility}$, to gauge such memorization for neural networks. Susceptibility is simple and easy to compute during training. Moreover, it does not require access to ground-truth labels and it only uses unlabeled data. We empirically show the effectiveness of our metric in tracking memorization on various architectures and datasets and provide theoretical insights into the design of the susceptibility metric. Finally, we show through extensive experiments on datasets with synthetic and real-world label noise that one can utilize susceptibility and the overall training accuracy to distinguish models that maintain a low memorization on the training set and generalize well to unseen clean data.
|
https://openreview.net/pdf/fd7c9f38e71ea9b881d14aa1c553aee5ee725757.pdf
|
Policy-Based Self-Competition for Planning Problems
|
https://openreview.net/forum?id=SmufNDN90G
|
https://openreview.net/forum?id=SmufNDN90G
|
Jonathan Pirnay,Quirin Göttl,Jakob Burger,Dominik Gerhard Grimm
|
ICLR 2023,Poster
|
AlphaZero-type algorithms may stop improving on single-player tasks in case the value network guiding the tree search is unable to approximate the outcome of an episode sufficiently well. One technique to address this problem is transforming the single-player task through self-competition. The main idea is to compute a scalar baseline from the agent’s historical performances and to reshape an episode’s reward into a binary output, indicating whether the baseline has been exceeded or not. However, this baseline only carries limited information for the agent about strategies how to improve. We leverage the idea of self-competition and directly incorporate a historical policy into the planning process instead of its scalar performance. Based on the recently introduced Gumbel AlphaZero (GAZ), we propose our algorithm GAZ ‘Play-to-Plan’ (GAZ PTP), in which the agent learns to find strong trajectories by planning against possible strategies of its past self. We show the effectiveness of our approach in two well-known combinatorial optimization problems, the Traveling Salesman Problem and the Job-Shop Scheduling Problem. With only half of the simulation budget for search, GAZ PTP consistently outperforms all selected single-player variants of GAZ.
|
https://openreview.net/pdf/e3b41709a4697b503c61af4852343e17df76da28.pdf
|
Out-of-Distribution Detection based on In-Distribution Data Patterns Memorization with Modern Hopfield Energy
|
https://openreview.net/forum?id=KkazG4lgKL
|
https://openreview.net/forum?id=KkazG4lgKL
|
Jinsong Zhang,Qiang Fu,Xu Chen,Lun Du,Zelin Li,Gang Wang,xiaoguang Liu,Shi Han,Dongmei Zhang
|
ICLR 2023,Poster
|
Out-of-Distribution (OOD) detection is essential for safety-critical applications of deep neural networks. OOD detection is challenging since DNN models may produce very high logit values even for OOD samples. Hence, it is difficult to discriminate OOD data by directly adopting the softmax over output logits as the confidence score. Instead, we detect OOD samples with Hopfield energy in a store-then-compare paradigm. In more detail, penultimate-layer outputs on the training set are considered as the representations of in-distribution (ID) data; they can thus be transformed into stored patterns that serve as anchors to measure the discrepancy of unseen data for OOD detection. Starting from the energy function defined in the Modern Hopfield Network for the discrepancy score calculation, we derive a simplified version, SHE, with theoretical analysis. In SHE, we utilize only one stored pattern to represent each class, and these patterns can be obtained by simply averaging the penultimate-layer outputs of the training samples within that class. SHE is hyperparameter-free and computationally efficient. Evaluations on nine widely used OOD datasets show the promising performance of such a simple yet effective approach and its superiority over state-of-the-art models. Code is available at https://github.com/zjs975584714/SHE ood detection.
|
https://openreview.net/pdf/db847582b0289141e0a8fd4a951dd9cc96f9c347.pdf
|
Scaling Pareto-Efficient Decision Making via Offline Multi-Objective RL
|
https://openreview.net/forum?id=Ki4ocDm364
|
https://openreview.net/forum?id=Ki4ocDm364
|
Baiting Zhu,Meihua Dang,Aditya Grover
|
ICLR 2023,Poster
|
The goal of multi-objective reinforcement learning (MORL) is to learn policies that simultaneously optimize multiple competing objectives. In practice, an agent's preferences over the objectives may not be known a priori, and hence, we require policies that can generalize to arbitrary preferences at test time. In this work, we propose a new data-driven setup for offline MORL, where we wish to learn a preference-agnostic policy agent using only a finite dataset of offline demonstrations of other agents and their preferences. The key contributions of this work are two-fold. First, we introduce D4MORL, (D)atasets for MORL that are specifically designed for offline settings. It contains 1.8 million annotated demonstrations obtained by rolling out reference policies that optimize for randomly sampled preferences on 6 MuJoCo environments with 2-3 objectives each. Second, we propose Pareto-Efficient Decision Agents (PEDA), a family of offline MORL algorithms that builds and extends Decision Transformers via a novel preference-and-return-conditioned policy. Empirically, we show that PEDA closely approximates the behavioral policy on the D4MORL benchmark and provides an excellent approximation of the Pareto-front with appropriate conditioning, as measured by the hypervolume and sparsity metrics.
|
https://openreview.net/pdf/3d73c1e257eb3d1dd034f43fe3b51884a6dfade4.pdf
|
NAGphormer: A Tokenized Graph Transformer for Node Classification in Large Graphs
|
https://openreview.net/forum?id=8KYeilT3Ow
|
https://openreview.net/forum?id=8KYeilT3Ow
|
Jinsong Chen,Kaiyuan Gao,Gaichao Li,Kun He
|
ICLR 2023,Poster
|
The graph Transformer emerges as a new architecture and has shown superior performance on various graph mining tasks. In this work, we observe that existing graph Transformers treat nodes as independent tokens and construct a single long sequence composed of all node tokens so as to train the Transformer model, making it hard to scale to large graphs due to the quadratic complexity in the number of nodes for the self-attention computation. To this end, we propose a Neighborhood Aggregation Graph Transformer (NAGphormer) that treats each node as a sequence containing a series of tokens constructed by our proposed Hop2Token module. For each node, Hop2Token aggregates the neighborhood features from different hops into different representations and thereby produces a sequence of token vectors as one input. In this way, NAGphormer could be trained in a mini-batch manner and thus could scale to large graphs. Moreover, we mathematically show that as compared to a category of advanced Graph Neural Networks (GNNs), the decoupled Graph Convolutional Network, NAGphormer could learn more informative node representations from the multi-hop neighborhoods. Extensive experiments on benchmark datasets from small to large are conducted to demonstrate that NAGphormer consistently outperforms existing graph Transformers and mainstream GNNs. Code is available at https://github.com/JHL-HUST/NAGphormer.
|
https://openreview.net/pdf/3a8c7adf426da03a4f4aaba8d76342ec203e9517.pdf
|
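A numpy sketch of the Hop2Token idea: for each node, build a short token sequence from its 0..K-hop aggregated features using powers of a normalized adjacency; the Transformer that consumes these per-node sequences is not included, and the normalization choice is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, K = 10, 6, 3
A = (rng.uniform(size=(n, n)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T
X = rng.normal(size=(n, d))

# Symmetrically normalised adjacency with self-loops, GCN-style propagation.
A_hat = A + np.eye(n)
d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(1))
A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]

# Hop2Token: token k of node v is its k-hop aggregated feature (A_norm^k X)[v].
tokens = [X]
H = X
for _ in range(K):
    H = A_norm @ H
    tokens.append(H)
seq = np.stack(tokens, axis=1)    # (num_nodes, K + 1, d): one short sequence per node

print(seq.shape)  # each node can now be processed as an independent token sequence
```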
Bayesian Oracle for bounding information gain in neural encoding models
|
https://openreview.net/forum?id=iYC5hOMqUg
|
https://openreview.net/forum?id=iYC5hOMqUg
|
Konstantin-Klemens Lurz,Mohammad Bashiri,Edgar Y. Walker,Fabian H. Sinz
|
ICLR 2023,Poster
|
In recent years, deep learning models have set new standards in predicting neural population responses. Most of these models currently focus on predicting the mean response of each neuron for a given input. However, neural variability around this mean is not just noise and plays a central role in several theories on neural computation. To capture this variability, we need models that predict full response distributions for a given stimulus. However, to measure the quality of such models, commonly used correlation-based metrics are not sufficient as they mainly care about the mean of the response distribution. An interpretable alternative evaluation metric for likelihood-based models is \textit{Information Gain} (IG) which evaluates the likelihood of a model relative to a lower and upper bound. However, while a lower bound is usually easy to obtain, constructing an upper bound turns out to be challenging for neural recordings with relatively low numbers of repeated trials, high (shared) variability, and sparse responses. In this work, we generalize the jack-knife oracle estimator for the mean---commonly used for correlation metrics---to a flexible Bayesian oracle estimator for IG based on posterior predictive distributions. We describe and address the challenges that arise when estimating the lower and upper bounds from small datasets. We then show that our upper bound estimate is data-efficient and robust even in the case of sparse responses and low signal-to-noise ratio. We further provide the derivation of the upper bound estimator for a variety of common distributions including the state-of-the-art zero-inflated mixture models, and relate IG to common mean-based metrics. Finally, we use our approach to evaluate such a mixture model resulting in $90\%$ IG performance.
|
https://openreview.net/pdf/ace015edbe847d04dc86cb008bdbe3451ffa804c.pdf
|
$\Lambda$-DARTS: Mitigating Performance Collapse by Harmonizing Operation Selection among Cells
|
https://openreview.net/forum?id=oztkQizr3kk
|
https://openreview.net/forum?id=oztkQizr3kk
|
Sajad Movahedi,Melika Adabinejad,Ayyoob Imani,Arezou Keshavarz,Mostafa Dehghani,Azadeh Shakery,Babak N Araabi
|
ICLR 2023,Poster
|
Differentiable neural architecture search (DARTS) is a popular method for neural architecture search (NAS), which performs cell-search and utilizes continuous relaxation to improve the search efficiency via gradient-based optimization. The main shortcoming of DARTS is performance collapse, where the discovered architecture suffers from a pattern of declining quality during search. Performance collapse has become an important topic of research, with many methods trying to solve the issue through either regularization or fundamental changes to DARTS.
However, the weight-sharing framework used for cell-search in DARTS and the convergence of its architecture parameters have not been analyzed yet. In this paper, we provide a thorough and novel theoretical and empirical analysis of DARTS and its point of convergence.
We show that DARTS suffers from a specific structural flaw due to its weight-sharing framework that limits the convergence of DARTS to saturation points of the softmax function. This point of convergence gives an unfair advantage to layers closer to the output in choosing the optimal architecture, causing performance collapse. We then propose two new regularization terms that aim to prevent performance collapse by harmonizing operation selection via aligning gradients of layers.
Experimental results on six different search spaces and three different datasets show that our method ($\Lambda$-DARTS) does indeed prevent performance collapse, providing justification for our theoretical analysis and the proposed remedy.
|
https://openreview.net/pdf/f18d77e73348d1310d4316feb6fe8272a357042e.pdf
|
Learning Vortex Dynamics for Fluid Inference and Prediction
|
https://openreview.net/forum?id=nYWqxUwFc3x
|
https://openreview.net/forum?id=nYWqxUwFc3x
|
Yitong Deng,Hong-Xing Yu,Jiajun Wu,Bo Zhu
|
ICLR 2023,Poster
|
We propose a novel differentiable vortex particle (DVP) method to infer and predict fluid dynamics from a single video. Lying at its core is a particle-based latent space to encapsulate the hidden, Lagrangian vortical evolution underpinning the observable, Eulerian flow phenomena. Our differentiable vortex particles are coupled with a learnable, vortex-to-velocity dynamics mapping to effectively capture the complex flow features in a physically-constrained, low-dimensional space. This representation facilitates the learning of a fluid simulator tailored to the input video that can deliver robust, long-term future predictions. The value of our method is twofold: first, our learned simulator enables the inference of hidden physics quantities (e.g., velocity field) purely from visual observation; secondly, it also supports future prediction, constructing the input video's sequel along with its future dynamics evolution. We compare our method with a range of existing methods on both synthetic and real-world videos, demonstrating improved reconstruction quality, visual plausibility, and physical integrity.
|
https://openreview.net/pdf/2027a65f6a0320b8b07a33d5d3ff6b1cb6a0580a.pdf
|
Discovering Generalizable Multi-agent Coordination Skills from Multi-task Offline Data
|
https://openreview.net/forum?id=53FyUAdP7d
|
https://openreview.net/forum?id=53FyUAdP7d
|
Fuxiang Zhang,Chengxing Jia,Yi-Chen Li,Lei Yuan,Yang Yu,Zongzhang Zhang
|
ICLR 2023,Poster
|
Cooperative multi-agent reinforcement learning (MARL) faces the challenge of adapting to multiple tasks with varying agents and targets. Previous multi-task MARL approaches require costly interactions to simultaneously learn or fine-tune policies in different tasks. However, the situation in which an agent must generalize to multiple tasks using only offline data from a limited set of tasks is more in line with the needs of real-world applications. Since offline multi-task data contains a variety of behaviors, an effective data-driven approach is to extract informative latent variables that can represent universal skills for realizing coordination across tasks. In this paper, we propose a novel Offline MARL algorithm to Discover coordInation Skills (ODIS) from multi-task data. ODIS first extracts task-invariant coordination skills from offline multi-task data and learns to delineate different agent behaviors with the discovered coordination skills. Then we train a coordination policy to choose optimal coordination skills with the centralized training and decentralized execution paradigm. We further demonstrate that the discovered coordination skills can assign effective coordinative behaviors, thus significantly enhancing generalization to unseen tasks. Empirical results in cooperative MARL benchmarks, including the StarCraft multi-agent challenge, show that ODIS obtains superior performance in a wide range of tasks only with offline data from limited sources.
|
https://openreview.net/pdf/d365ffe4e9b099c3b0b62134ead3eaeba4105768.pdf
|
Quality-Similar Diversity via Population Based Reinforcement Learning
|
https://openreview.net/forum?id=bLmSMXbqXr
|
https://openreview.net/forum?id=bLmSMXbqXr
|
Shuang Wu,Jian Yao,Haobo Fu,Ye Tian,Chao Qian,Yaodong Yang,QIANG FU,Yang Wei
|
ICLR 2023,Poster
|
Diversity is a growing research topic in Reinforcement Learning (RL). Previous research on diversity has mainly focused on promoting diversity to encourage exploration and thereby improve quality (the cumulative reward), maximizing diversity subject to quality constraints, or jointly maximizing quality and diversity, known as the quality-diversity problem. In this work, we present the quality-similar diversity problem that features diversity among policies of similar qualities. In contrast to task-agnostic diversity, we focus on task-specific diversity defined by a set of user-specified Behavior Descriptors (BDs). A BD is a scalar function of a trajectory (e.g., the fire action rate for an Atari game), which delivers the type of diversity the user prefers. To derive the gradient of the user-specified diversity with respect to a policy, which is not trivially available, we introduce a set of BD estimators and connect it with the classical policy gradient theorem. Based on the diversity gradient, we develop a population-based RL algorithm to adaptively and efficiently optimize the population diversity at multiple quality levels throughout training. Extensive results on MuJoCo and Atari demonstrate that our algorithm significantly outperforms previous methods in terms of generating user-specified diverse policies across different quality levels.
|
https://openreview.net/pdf/d8f5cd723fc580efed10ef08df13eda8c3ad877f.pdf
|
Better Teacher Better Student: Dynamic Prior Knowledge for Knowledge Distillation
|
https://openreview.net/forum?id=M0_sUuEyHs
|
https://openreview.net/forum?id=M0_sUuEyHs
|
Martin Zong,Zengyu Qiu,Xinzhu Ma,Kunlin Yang,Chunya Liu,Jun Hou,Shuai Yi,Wanli Ouyang
|
ICLR 2023,Poster
|
Knowledge distillation (KD) has shown very promising capabilities in transferring learning representations from large models (teachers) to small models (students). However, as the capacity gap between students and teachers becomes larger, existing KD methods fail to achieve better results. Our work shows that the 'prior knowledge' is vital to KD, especially when applying large teachers. Particularly, we propose the dynamic prior knowledge (DPK), which integrates part of the teacher's features as the prior knowledge before the feature distillation. This means that our method also takes the teacher's feature as `input', not just `target'. Besides, we dynamically adjust the ratio of the prior knowledge during the training phase according to the feature gap, thus guiding the student at an appropriate difficulty. To evaluate the proposed method, we conduct extensive experiments on two image classification benchmarks (i.e. CIFAR100 and ImageNet) and an object detection benchmark (i.e. MS COCO). The results demonstrate the superiority of our method in performance under varying settings. Besides, our DPK makes the performance of the student model positively correlated with that of the teacher model, which means that we can further boost the accuracy of students by applying larger teachers. More importantly, DPK provides a fast solution in teacher model selection for any given model. Our codes will be publicly available for reproducibility.
|
https://openreview.net/pdf/2fb7e9a90ff72f47a0a922f31bcff587c30cc103.pdf
|
Tensor-Based Sketching Method for the Low-Rank Approximation of Data Streams.
|
https://openreview.net/forum?id=rOFKmzNTbC
|
https://openreview.net/forum?id=rOFKmzNTbC
|
Cuiyu Liu,Xiao Chuanfu,Mingshuo Ding,Chao Yang
|
ICLR 2023,Poster
|
Low-rank approximation in data streams is a fundamental and significant task in computing science, machine learning and statistics. Multiple streaming algorithms have emerged over the years and most of them are inspired by randomized algorithms, more specifically, sketching methods. However, many algorithms are not able to leverage information in data streams and consequently suffer from low accuracy. Existing data-driven methods improve accuracy but the training cost is expensive in practice. In this paper, from a subspace perspective, we propose a tensor-based sketching method for low-rank approximation of data streams. The proposed algorithm fully exploits the structure of data streams and obtains quasi-optimal sketching matrices by performing tensor decomposition on training data. A series of experiments are carried out and show that the proposed tensor-based method can be more accurate and much faster than the previous work.
|
https://openreview.net/pdf/b2a9927f9feb2d015380aa8aa45bebeead510f30.pdf
|
Language Models are Realistic Tabular Data Generators
|
https://openreview.net/forum?id=cEygmQNOeI
|
https://openreview.net/forum?id=cEygmQNOeI
|
Vadim Borisov,Kathrin Sessler,Tobias Leemann,Martin Pawelczyk,Gjergji Kasneci
|
ICLR 2023,Poster
|
Tabular data is among the oldest and most ubiquitous forms of data. However, the generation of synthetic samples with the original data’s characteristics remains a significant challenge for tabular data. While many generative models from the computer vision domain, such as variational autoencoders or generative adversarial networks, have been adapted for tabular data generation, less research has been directed towards recent transformer-based large language models (LLMs), which are also generative in nature. To this end, we propose GReaT (Generation of Realistic Tabular data), which exploits an auto-regressive generative LLM to sample synthetic and yet highly realistic tabular data. Furthermore, GReaT can model tabular data distributions by conditioning on any subset of features; the remaining features are sampled without additional overhead. We demonstrate the effectiveness of the proposed approach in a series of experiments that quantify the validity and quality of the produced data samples from multiple angles. We find that GReaT maintains state-of-the-art performance across numerous real-world and synthetic data sets with heterogeneous feature types coming in various sizes.
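A minimal sketch of the kind of textual row encoding such an approach relies on; the exact template and feature ordering used by GReaT are assumptions here, and shuffling is shown only because conditioning on arbitrary feature subsets suggests order-invariant serialization.

```python
import random

def row_to_text(row: dict, shuffle: bool = True) -> str:
    """Serialize one tabular record into a sentence an autoregressive
    LM can be fine-tuned on, e.g. 'Age is 39, Income is 52000, ...'."""
    items = list(row.items())
    if shuffle:  # random feature order supports conditioning on arbitrary subsets
        random.shuffle(items)
    return ", ".join(f"{name} is {value}" for name, value in items)

print(row_to_text({"Age": 39, "Education": "Bachelors", "Income": 52000}))
```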
|
https://openreview.net/pdf/93e938176cd4da2511c79883813c6bb7781f9804.pdf
|
Data augmentation alone can improve adversarial training
|
https://openreview.net/forum?id=y4uc4NtTWaq
|
https://openreview.net/forum?id=y4uc4NtTWaq
|
Lin Li,Michael W. Spratling
|
ICLR 2023,Poster
|
Adversarial training suffers from the issue of robust overfitting, which seriously impairs its generalization performance. Data augmentation, which is effective at preventing overfitting in standard training, has been observed by many previous works to be ineffective in mitigating overfitting in adversarial training. This work proves that, contrary to previous findings, data augmentation alone can significantly boost accuracy and robustness in adversarial training. We find that the hardness and the diversity of data augmentation are important factors in combating robust overfitting. In general, diversity can improve both accuracy and robustness, while hardness can boost robustness at the cost of accuracy within a certain limit and degrade them both over that limit. To mitigate robust overfitting, we first propose a new crop transformation Cropshift with improved diversity compared to the conventional one (Padcrop). We then propose a new data augmentation scheme, based on Cropshift, with much improved diversity and well-balanced hardness. Empirically, our augmentation method achieves the state-of-the-art accuracy and robustness for data augmentations in adversarial training. Furthermore, it matches, or even exceeds when combined with weight averaging, the performance of the best contemporary regularization methods for alleviating robust overfitting.
|
https://openreview.net/pdf/e4ec59a80e39f443efdbbcad0a5e73fbfc19eee5.pdf
|
CUTS: Neural Causal Discovery from Irregular Time-Series Data
|
https://openreview.net/forum?id=UG8bQcD3Emv
|
https://openreview.net/forum?id=UG8bQcD3Emv
|
Yuxiao Cheng,Runzhao Yang,Tingxiong Xiao,Zongren Li,Jinli Suo,Kunlun He,Qionghai Dai
|
ICLR 2023,Poster
|
Causal discovery from time-series data has been a central task in machine learning. Recently, Granger causality inference is gaining momentum due to its good explainability and high compatibility with emerging deep neural networks. However, most existing methods assume structured input data and degenerate greatly when encountering data with randomly missing entries or non-uniform sampling frequencies, which hampers their applications in real scenarios. To address this issue, here we present CUTS, a neural Granger causal discovery algorithm to jointly impute unobserved data points and build causal graphs, via plugging in two mutually boosting modules in an iterative framework: (i) Latent data prediction stage: designs a Delayed Supervision Graph Neural Network (DSGNN) to hallucinate and register unstructured data that might be high-dimensional and have a complex distribution; (ii) Causal graph fitting stage: builds a causal adjacency matrix with imputed data under a sparse penalty. Experiments show that CUTS effectively infers causal graphs from irregular time-series data, with significantly superior performance to existing methods. Our approach constitutes a promising step towards applying causal discovery to real applications with non-ideal observations.
|
https://openreview.net/pdf/e5927dd1ff7287d972c5b409d85506a0c1b5825f.pdf
|
Quantized Compressed Sensing with Score-Based Generative Models
|
https://openreview.net/forum?id=OOWLRfAI_V_
|
https://openreview.net/forum?id=OOWLRfAI_V_
|
Xiangming Meng,Yoshiyuki Kabashima
|
ICLR 2023,Poster
|
We consider the general problem of recovering a high-dimensional signal from noisy quantized measurements. Quantization, especially coarse quantization such as 1-bit sign measurements, leads to severe information loss and thus a good prior knowledge of the unknown signal is helpful for accurate recovery. Motivated by the power of score-based generative models (SGM, also known as diffusion models) in capturing the rich structure of natural signals beyond simple sparsity, we propose an unsupervised data-driven approach called quantized compressed sensing with SGM (QCS-SGM), where the prior distribution is modeled by a pre-trained SGM. To perform posterior sampling, an annealed pseudo-likelihood score called ${\textit{noise perturbed pseudo-likelihood score}}$ is introduced and combined with the prior score of SGM. The proposed QCS-SGM applies to an arbitrary number of quantization bits. Experiments on a variety of baseline datasets demonstrate that the proposed QCS-SGM significantly outperforms existing state-of-the-art algorithms by a large margin for both in-distribution and out-of-distribution samples. Moreover, as a posterior sampling method, QCS-SGM can be easily used to obtain confidence intervals or uncertainty estimates of the reconstructed results. $\textit{The code is available at}$ https://github.com/mengxiangming/QCS-SGM.
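Schematically (a sketch of the generic score-based posterior-sampling idea, not the paper's exact construction), each annealed sampling step combines the pre-trained SGM prior score with a likelihood score of the quantized observations $y = \mathrm{Q}(Ax + n)$:
$$
\nabla_{x}\log p(x \mid y) \;=\; \nabla_{x}\log p(x) + \nabla_{x}\log p(y \mid x) \;\approx\; s_{\theta}(x, \sigma_t) + \nabla_{x}\log \tilde p_{\sigma_t}(y \mid x),
$$
where $s_{\theta}(x,\sigma_t)$ is the SGM score at noise level $\sigma_t$ and the second term is the role played by the proposed noise-perturbed pseudo-likelihood score in QCS-SGM.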
|
https://openreview.net/pdf/4f6f0e2347a3d6f9a88b39e445f77c1e7503064e.pdf
|
Valid P-Value for Deep Learning-driven Salient Region
|
https://openreview.net/forum?id=qihMOPw4Sf_
|
https://openreview.net/forum?id=qihMOPw4Sf_
|
Miwa Daiki,Vo Nguyen Le Duy,Ichiro Takeuchi
|
ICLR 2023,Poster
|
Various saliency map methods have been proposed to interpret and explain predictions of deep learning models. Saliency maps allow us to interpret which parts of the input signals have a strong influence on the prediction results. However, since a saliency map is obtained by complex computations in deep learning models, it is often difficult to know how reliable the saliency map itself is. In this study, we propose a method to quantify the reliability of a saliency region in the form of p-values. Our idea is to consider a saliency map as a selected hypothesis by the trained deep learning model and employ the selective inference framework. The proposed method provably provides a valid p-value for the detected salient region, i.e., we can provably control the false positive rate of the detected salient region. We demonstrate the validity of the proposed method through numerical examples in synthetic and real datasets. Furthermore, we develop a Keras-based framework for conducting the proposed selective inference for a wide class of CNNs without additional implementation cost.
|
https://openreview.net/pdf/4648f8cd96b71924f57fd9418c3d9e7fe45ce8c8.pdf
|
Complexity-Based Prompting for Multi-step Reasoning
|
https://openreview.net/forum?id=yf1icZHC-l9
|
https://openreview.net/forum?id=yf1icZHC-l9
|
Yao Fu,Hao Peng,Ashish Sabharwal,Peter Clark,Tushar Khot
|
ICLR 2023,Poster
|
We study the task of prompting large-scale language models to perform multi-step reasoning. Existing work shows that when prompted with a chain of thoughts (CoT), sequences of short sentences describing intermediate reasoning steps towards a final answer, large language models can generate new reasoning chains and predict answers for new inputs. A central question is which reasoning examples make the most effective prompts. In this work, we propose complexity-based prompting, a simple and effective example selection scheme for multi-step reasoning. We show that prompts with higher reasoning complexity, i.e., chains with more reasoning steps, achieve substantially better performance on math word reasoning tasks over strong baselines. We further extend our complexity-based criteria from prompting (selecting inputs) to decoding (selecting outputs), where we sample multiple reasoning chains from the model, then choose the majority of generated answers from complex reasoning chains (over simple chains). When used to prompt GPT-3, our approach substantially improves multi-step reasoning accuracy, with an 8.6% absolute improvement on GSM8K, and 6.4% on MathQA. Compared with existing example selection schemes like manual tuning or retrieval-based selection, selection based on reasoning complexity is intuitive, easy to implement, and annotation-efficient. Further results demonstrate the robustness of performance gains from complex prompts under format perturbation and distribution shift.
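A minimal sketch of complexity-based selection at decoding time; `sample_chains` is a hypothetical stand-in for querying the language model, and counting non-empty lines is only an assumed proxy for the number of reasoning steps.

```python
from collections import Counter

def complexity(chain: str) -> int:
    """Assumed proxy: number of non-empty reasoning lines in the chain."""
    return sum(1 for line in chain.splitlines() if line.strip())

def complexity_based_vote(chains_and_answers, top_k: int = 10) -> str:
    """Keep the top_k most complex sampled chains, then majority-vote
    over their final answers."""
    ranked = sorted(chains_and_answers, key=lambda ca: complexity(ca[0]), reverse=True)
    answers = [ans for _, ans in ranked[:top_k]]
    return Counter(answers).most_common(1)[0][0]

# chains_and_answers would come from sampling the LM several times, e.g.
# chains_and_answers = sample_chains(model, prompt, n=40)   # hypothetical helper
samples = [("step 1\nstep 2\nstep 3", "42"), ("step 1", "41"), ("a\nb\nc\nd", "42")]
print(complexity_based_vote(samples, top_k=2))  # '42'
```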
|
https://openreview.net/pdf/45dc479ddf081da97bf30a319e61cd0509d1f701.pdf
|
Unsupervised 3D Object Learning through Neuron Activity aware Plasticity
|
https://openreview.net/forum?id=mXPoBtnpMnuy
|
https://openreview.net/forum?id=mXPoBtnpMnuy
|
Beomseok Kang,Biswadeep Chakraborty,Saibal Mukhopadhyay
|
ICLR 2023,Poster
|
We present an unsupervised deep learning model for 3D object classification. Conventional Hebbian learning, a well-known unsupervised model, suffers from loss of local features leading to reduced performance for tasks with complex geometric objects. We present a deep network with a novel Neuron Activity Aware (NeAW) Hebbian learning rule that dynamically switches the neurons to be governed by Hebbian learning or anti-Hebbian learning, depending on their activity. We analytically show that NeAW Hebbian learning relieves the bias in neuron activity, allowing more neurons to attend to the representation of the 3D objects. Empirical results show that NeAW Hebbian learning outperforms other variants of Hebbian learning and achieves higher accuracy than fully supervised models when training data is limited.
|
https://openreview.net/pdf/4bb9100e1fbde4d43b7401d608de56c74c737a5a.pdf
|
Visually-Augmented Language Modeling
|
https://openreview.net/forum?id=8IN-qLkl215
|
https://openreview.net/forum?id=8IN-qLkl215
|
Weizhi Wang,Li Dong,Hao Cheng,Haoyu Song,Xiaodong Liu,Xifeng Yan,Jianfeng Gao,Furu Wei
|
ICLR 2023,Poster
|
Human language is grounded on multimodal knowledge including visual knowledge like colors, sizes, and shapes. However, current large-scale pre-trained language models rely on the text-only self-supervised training with massive text data, which precludes them from utilizing relevant visual information when necessary. To address this, we propose a novel pre-training framework, named VaLM, to Visually-augment text tokens with retrieved relevant images for Language Modeling. Specifically, VaLM builds on a novel latent text-image alignment method via an image retrieval module to fetch corresponding images given a textual context. With the visually-augmented context, VaLM uses a visual knowledge fusion layer to enable multimodal grounded language modeling by attending on both text context and visual knowledge in images. We evaluate VaLM on various visual knowledge intensive commonsense reasoning tasks, which require visual information to excel. The experimental results illustrate that VaLM outperforms all strong language-only and vision-language baselines with substantial gains on reasoning object commonsense including color, size, and shape.
|
https://openreview.net/pdf/c73c81bf4faecceb125dd37e5452d0ba0431a662.pdf
|
Incremental Learning of Structured Memory via Closed-Loop Transcription
|
https://openreview.net/forum?id=XrgjF5-M3xi
|
https://openreview.net/forum?id=XrgjF5-M3xi
|
Shengbang Tong,Xili Dai,Ziyang Wu,Mingyang Li,Brent Yi,Yi Ma
|
ICLR 2023,Poster
|
This work proposes a minimal computational model for learning structured memories of multiple object classes in an incremental setting. Our approach is based on establishing a {\em closed-loop transcription} between the classes and a corresponding set of subspaces, known as a linear discriminative representation, in a low-dimensional feature space. Our method is simpler than existing approaches for incremental learning, and more efficient in terms of model size, storage, and computation: it requires only a single, fixed-capacity autoencoding network with a feature space that is used for both discriminative and generative purposes. Network parameters are optimized simultaneously without architectural manipulations, by solving a constrained minimax game between the encoding and decoding maps over a single rate reduction-based objective. Experimental results show that our method can effectively alleviate catastrophic forgetting, achieving significantly better performance than prior work of generative replay on MNIST, CIFAR-10, and ImageNet-50, despite requiring fewer resources.
|
https://openreview.net/pdf/b51d5d9398934da9dd8dcfa5636d9d2c50e068e0.pdf
|
When Data Geometry Meets Deep Function: Generalizing Offline Reinforcement Learning
|
https://openreview.net/forum?id=lMO7TC7cuuh
|
https://openreview.net/forum?id=lMO7TC7cuuh
|
Jianxiong Li,Xianyuan Zhan,Haoran Xu,Xiangyu Zhu,Jingjing Liu,Ya-Qin Zhang
|
ICLR 2023,Poster
|
In offline reinforcement learning (RL), one detrimental issue to policy learning is the error accumulation of deep \textit{Q} function in out-of-distribution (OOD) areas. Unfortunately, existing offline RL methods are often over-conservative, inevitably hurting generalization performance outside data distribution. In our study, one interesting observation is that deep \textit{Q} functions approximate well inside the convex hull of training data. Inspired by this, we propose a new method, \textit{DOGE (Distance-sensitive Offline RL with better GEneralization)}. DOGE marries dataset geometry with deep function approximators in offline RL, and enables exploitation in generalizable OOD areas rather than strictly constraining policy within data distribution. Specifically, DOGE trains a state-conditioned distance function that can be readily plugged into standard actor-critic methods as a policy constraint. Simple yet elegant, our algorithm enjoys better generalization compared to state-of-the-art methods on D4RL benchmarks. Theoretical analysis demonstrates the superiority of our approach to existing methods that are solely based on data distribution or support constraints. Code is available at https://github.com/Facebear-ljx/DOGE.
|
https://openreview.net/pdf/94dbfca2646bd9ee54214755138d35cafa230611.pdf
|
Budgeted Training for Vision Transformer
|
https://openreview.net/forum?id=sVzBN-DlJRi
|
https://openreview.net/forum?id=sVzBN-DlJRi
|
zhuofan xia,Xuran Pan,Xuan Jin,Yuan He,Hui Xue',Shiji Song,Gao Huang
|
ICLR 2023,Poster
|
The superior performances of Vision Transformers often come with higher training costs. Compared to their CNN counterparts, Transformer models are hungry for large-scale data and their training schedules are usually prolonged. This sets great restrictions on training Transformers with limited resources, where a proper trade-off between training cost and model performance is desired. In this paper, we address the problem by proposing a framework that enables the training process under \textit{any training budget} from the perspective of model structure, while achieving competitive model performances. Specifically, based on the observation that Transformers exhibit different levels of model redundancy at different training stages, we propose to dynamically control the activation rate of the model structure along the training process and meet the demand on the training budget by adjusting the duration of each level of model complexity. Extensive experiments demonstrate that our framework is applicable to various Vision Transformers, and achieves competitive performances on a wide range of training budgets.
|
https://openreview.net/pdf/e8c2b415f838d55e789b3fdc1bfc23f714aa7a75.pdf
|
Mind's Eye: Grounded Language Model Reasoning through Simulation
|
https://openreview.net/forum?id=4rXMRuoJlai
|
https://openreview.net/forum?id=4rXMRuoJlai
|
Ruibo Liu,Jason Wei,Shixiang Shane Gu,Te-Yen Wu,Soroush Vosoughi,Claire Cui,Denny Zhou,Andrew M. Dai
|
ICLR 2023,Poster
|
Successful and effective communication between humans and AI relies on a shared experience of the world. By training solely on written text, current language models (LMs) miss the grounded experience of humans in the real-world---their failure to relate language to the physical world causes knowledge to be misrepresented and obvious mistakes in their reasoning. We present Mind's Eye, a paradigm to ground language model reasoning in the physical world. Given a physical reasoning question, we use a computational physics engine (DeepMind's MuJoCo) to simulate the possible outcomes, and then use the simulation results as part of the input, which enables language models to perform reasoning. Experiments on 39 tasks in a physics alignment benchmark demonstrate that Mind's Eye can improve reasoning ability by a large margin (27.9% zero-shot, and 46.0% few-shot absolute accuracy improvement on average). Smaller language models armed with Mind's Eye can obtain similar performance to models that are 100x larger. Finally, we confirm the robustness of Mind's Eye through ablation studies.
|
https://openreview.net/pdf/0a6dcae7aef4fb3b11746b7175c70cfa12d4c3a3.pdf
|
What Do Self-Supervised Vision Transformers Learn?
|
https://openreview.net/forum?id=azCKuYyS74
|
https://openreview.net/forum?id=azCKuYyS74
|
Namuk Park,Wonjae Kim,Byeongho Heo,Taekyung Kim,Sangdoo Yun
|
ICLR 2023,Poster
|
We present a comparative study on how and why contrastive learning (CL) and masked image modeling (MIM) differ in their representations and in their performance of downstream tasks. In particular, we demonstrate that self-supervised Vision Transformers (ViTs) have the following properties: (1) CL trains self-attentions to capture longer-range global patterns than MIM, such as the shape of an object, especially in the later layers of the ViT architecture. This CL property helps ViTs linearly separate images in their representation spaces. However, it also makes the self-attentions collapse into homogeneity for all query tokens and heads. Such homogeneity of self-attention reduces the diversity of representations, worsening scalability and dense prediction performance. (2) CL utilizes the low-frequency signals of the representations, but MIM utilizes high-frequencies. Since low- and high-frequency information respectively represent shapes and textures, CL is more shape-oriented and MIM more texture-oriented. (3) CL plays a crucial role in the later layers, while MIM mainly focuses on the early layers. Upon these analyses, we find that CL and MIM can complement each other and observe that even the simplest harmonization can help leverage the advantages of both methods. The code is available at https://github.com/naver-ai/cl-vs-mim.
|
https://openreview.net/pdf/ad19dc82b73d87a8e38ef475dc72f82e75ea328e.pdf
|
Population-size-Aware Policy Optimization for Mean-Field Games
|
https://openreview.net/forum?id=fB4V-2QvCEm
|
https://openreview.net/forum?id=fB4V-2QvCEm
|
Pengdeng Li,Xinrun Wang,Shuxin Li,Hau Chan,Bo An
|
ICLR 2023,Poster
|
In this work, we attempt to bridge the two fields of finite-agent and infinite-agent games, by studying how the optimal policies of agents evolve with the number of agents (population size) in mean-field games, an agent-centric perspective in contrast to the existing works focusing typically on the convergence of the empirical distribution of the population. To this end, the premise is to obtain the optimal policies of a set of finite-agent games with different population sizes. However, deriving the closed-form solution for each game is theoretically intractable, training a distinct policy for each game is computationally intensive, and directly applying the policy trained in one game to other games is sub-optimal. We address these challenges through the \textbf{P}opulation-size-\textbf{A}ware \textbf{P}olicy \textbf{O}ptimization (PAPO). Our contributions are three-fold. First, to efficiently generate policies for games with different population sizes, we propose PAPO, which unifies two natural options (augmentation and hypernetwork) and achieves significantly better performance. PAPO consists of three components: i) the population-size encoding which transforms the original value of population size to an equivalent encoding to avoid training collapse, ii) a hypernetwork to generate a distinct policy for each game conditioned on the population size, and iii) the population size as an additional input to the generated policy. Next, we construct a multi-task-based training procedure to efficiently train the neural networks of PAPO by sampling data from multiple games with different population sizes. Finally, extensive experiments on multiple environments show the significant superiority of PAPO over baselines, and the analysis of the evolution of the generated policies further deepens our understanding of the two fields of finite-agent and infinite-agent games.
|
https://openreview.net/pdf/453e068e0d07905abfd73e6b1e73851d095de2e6.pdf
|
On The Relative Error of Random Fourier Features for Preserving Kernel Distance
|
https://openreview.net/forum?id=qs2YCziX2o-
|
https://openreview.net/forum?id=qs2YCziX2o-
|
Kuan Cheng,Shaofeng H.-C. Jiang,Luojian Wei,Zhide Wei
|
ICLR 2023,Poster
|
The method of random Fourier features (RFF), proposed in a seminal paper by Rahimi and Recht (NIPS'07), is a powerful technique to find approximate low-dimensional representations of points in (high-dimensional) kernel space, for shift-invariant kernels. While RFF has been analyzed under various notions of error guarantee, the ability to preserve the kernel distance with \emph{relative} error is less understood. We show that for a significant range of kernels, including the well-known Laplacian kernels, RFF cannot approximate the kernel distance with small relative error using low dimensions. We complement this by showing as long as the shift-invariant kernel is analytic, RFF with $\mathrm{poly}(\epsilon^{-1} \log n)$ dimensions achieves $\epsilon$-relative error for pairwise kernel distance of $n$ points, and the dimension bound is improved to $\mathrm{poly}(\epsilon^{-1}\log k)$ for the specific application of kernel $k$-means. Finally, going beyond RFF, we make the first step towards data-oblivious dimension-reduction for general shift-invariant kernels, and we obtain a similar $\mathrm{poly}(\epsilon^{-1} \log n)$ dimension bound for Laplacian kernels. We also validate the dimension-error tradeoff of our methods on simulated datasets, and they demonstrate superior performance compared with other popular methods including random-projection and Nystr\"{o}m methods.
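For context, a minimal numpy sketch of the classical Rahimi-Recht construction that the paper analyzes, shown here for a Gaussian kernel; the paper's improved data-oblivious construction for Laplacian kernels is not reproduced, and the bandwidth and dimension below are arbitrary choices.

```python
import numpy as np

def rff_features(X, D, sigma=1.0, rng=None):
    """Classical random Fourier features approximating the Gaussian kernel
    k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
    X: (n, d) data matrix, D: target feature dimension."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    W = rng.normal(scale=1.0 / sigma, size=(d, D))   # samples from the spectral density
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

X = np.random.default_rng(0).normal(size=(5, 3))
Z = rff_features(X, D=2000, sigma=1.0, rng=1)
approx = Z @ Z.T                      # approximates the kernel Gram matrix
exact = np.exp(-((X[:, None] - X[None]) ** 2).sum(-1) / 2.0)
print(np.abs(approx - exact).max())   # small for large D
```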
|
https://openreview.net/pdf/2cfb96f6ef7713b8278f6877fc8a48505389d909.pdf
|
DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing
|
https://openreview.net/forum?id=sE7-XhLxHA
|
https://openreview.net/forum?id=sE7-XhLxHA
|
Pengcheng He,Jianfeng Gao,Weizhu Chen
|
ICLR 2023,Poster
|
This paper presents a new pre-trained language model, NewModel, which improves the original DeBERTa model by replacing masked language modeling (MLM) with replaced token detection (RTD), a more sample-efficient pre-training task. Our analysis shows that vanilla embedding sharing in ELECTRA hurts training efficiency and model performance. This is because the training losses of the discriminator and the generator pull token embeddings in different directions, creating the “tug-of-war” dynamics. We thus propose a new gradient-disentangled embedding sharing method that avoids the tug-of-war dynamics, improving both training efficiency and the quality of the pre-trained model. We have pre-trained NewModel using the same settings as DeBERTa to demonstrate its exceptional performance on a wide range of downstream natural language understanding (NLU) tasks. Taking the GLUE benchmark with eight tasks as an example, the NewModel Large model achieves a 91.37% average score, which is 1.37% over DeBERTa and 1.91% over ELECTRA, setting a new state-of-the-art (SOTA) among the models with a similar structure. Furthermore, we have pre-trained a multi-lingual model mNewModel and observed a larger improvement over strong baselines compared to English models. For example, the mNewModel Base achieves a 79.8% zero-shot cross-lingual accuracy on XNLI and a 3.6% improvement over XLM-R Base, creating a new SOTA on this benchmark. We will make our model and code publicly available.
|
https://openreview.net/pdf/553181e6a53d384858f9fdfabb4dc41b0c245d8e.pdf
|
Squeeze Training for Adversarial Robustness
|
https://openreview.net/forum?id=Z_tmYu060Kr
|
https://openreview.net/forum?id=Z_tmYu060Kr
|
Qizhang Li,Yiwen Guo,Wangmeng Zuo,Hao Chen
|
ICLR 2023,Poster
|
The vulnerability of deep neural networks (DNNs) to adversarial examples has attracted great attention in the machine learning community. The problem is related to non-flatness and non-smoothness of normally obtained loss landscapes. Training augmented with adversarial examples (a.k.a., adversarial training) is considered as an effective remedy. In this paper, we highlight that some collaborative examples, nearly perceptually indistinguishable from both adversarial and benign examples yet show extremely lower prediction loss, can be utilized to enhance adversarial training. A novel method is therefore proposed to achieve new state-of-the-arts in adversarial robustness. Code: https://github.com/qizhangli/ST-AT.
|
https://openreview.net/pdf/fb81a290a4e3da505226d83c5e1f62c94f0a3c53.pdf
|
Pushing the Accuracy-Group Robustness Frontier with Introspective Self-play
|
https://openreview.net/forum?id=MofT9KEF0kw
|
https://openreview.net/forum?id=MofT9KEF0kw
|
Jeremiah Zhe Liu,Krishnamurthy Dj Dvijotham,Jihyeon Lee,Quan Yuan,Balaji Lakshminarayanan,Deepak Ramachandran
|
ICLR 2023,Poster
|
Standard empirical risk minimization (ERM) training can produce deep neural network (DNN) models that are accurate on average but under-perform in under-represented population subgroups, especially when there are imbalanced group distributions in the long-tailed training data. Therefore, approaches that improve the accuracy-group robustness trade-off frontier of a DNN model (i.e. improving worst-group accuracy without sacrificing average accuracy, or vice versa) are of crucial importance. Uncertainty-based active learning (AL) can potentially improve the frontier by preferentially sampling underrepresented subgroups to create a more balanced training dataset. However, the quality of uncertainty estimates from modern DNNs tends to degrade in the presence of spurious correlations and dataset bias, compromising the effectiveness of AL for sampling tail groups. In this work, we propose Introspective Self-play (ISP), a simple approach to improve the uncertainty estimation of a deep neural network under dataset bias, by adding an auxiliary introspection task requiring a model to predict the bias for each data point in addition to the label. We show that ISP provably improves the bias-awareness of the model representation and the resulting uncertainty estimates. On two real-world tabular and language tasks, ISP serves as a simple “plug-in” for AL model training, consistently improving both the tail-group sampling rate and the final accuracy-fairness trade-off frontier of popular AL methods.
|
https://openreview.net/pdf/51a2f70ce702f351ea58c6f9bfad71a8c3c055fc.pdf
|
Max-Margin Works while Large Margin Fails: Generalization without Uniform Convergence
|
https://openreview.net/forum?id=n-hKHMzBgy
|
https://openreview.net/forum?id=n-hKHMzBgy
|
Margalit Glasgow,Colin Wei,Mary Wootters,Tengyu Ma
|
ICLR 2023,Poster
|
A major challenge in modern machine learning is theoretically understanding the generalization properties of overparameterized models. Many existing tools rely on uniform convergence (UC), a property that, when it holds, guarantees that the test loss will be close to the training loss, uniformly over a class of candidate models. Nagarajan and Kolter (2019) show that in certain simple linear and neural-network settings, any uniform convergence bound will be vacuous, leaving open the question of how to prove generalization in settings where UC fails. Our main contribution is proving novel generalization bounds in two such settings, one linear, and one non-linear. We study the linear classification setting of Nagarajan and Kolter (2019), and a quadratic ground truth function learned via a two-layer neural network in the non-linear regime. We prove a new type of margin bound showing that above a certain signal-to-noise threshold, any near-max-margin classifier will achieve almost no test loss in these two settings. Our results show that near-max-margin is important: while any model that achieves at least a $(1 - \epsilon)$-fraction of the max-margin generalizes well, a classifier achieving half of the max-margin may fail terribly. Our analysis provides insight on why memorization can coexist with generalization: we show that in this challenging regime where generalization occurs but UC fails, near-max-margin classifiers simultaneously contain some generalizable components and some overfitting components that memorize the data. The presence of the overfitting components is enough to preclude UC, but the near-extremal margin guarantees that sufficient generalizable components are present.
|
https://openreview.net/pdf/84164345c9689be4f102bbd39af7c6b924f29271.pdf
|
Asymptotic Instance-Optimal Algorithms for Interactive Decision Making
|
https://openreview.net/forum?id=oGVu9spZaJJ
|
https://openreview.net/forum?id=oGVu9spZaJJ
|
Kefan Dong,Tengyu Ma
|
ICLR 2023,Poster
|
Past research on interactive decision making problems (bandits, reinforcement learning, etc.) mostly focuses on the minimax regret that measures the algorithm's performance on the hardest instance. However, an ideal algorithm should adapt to the complexity of a particular problem instance and incur smaller regrets on easy instances than on worst-case instances. In this paper, we design the first asymptotic instance-optimal algorithm for general interactive decision making problems with a finite number of decisions under mild conditions. On every instance $f$, our algorithm outperforms all consistent algorithms (those achieving non-trivial regrets on all instances), and has asymptotic regret $\mathcal{C}(f) \ln n$, where $\mathcal{C}(f)$ is an exact characterization of the complexity of $f$. The key step of the algorithm involves hypothesis testing with active data collection. It computes the most economical decisions with which the algorithm collects observations to test whether an estimated instance is indeed correct; thus, the complexity $\mathcal{C}(f)$ is the minimum cost to test the instance $f$ against other instances. Our results, instantiated on concrete problems, recover the classical gap-dependent bounds for multi-armed bandits and prior works on linear bandits, and improve upon the previous best instance-dependent upper bound for reinforcement learning.
|
https://openreview.net/pdf/c6b6e7c349fa0b0fadc827dc82f9257d1655f8c7.pdf
|
Near-Optimal Deployment Efficiency in Reward-Free Reinforcement Learning with Linear Function Approximation
|
https://openreview.net/forum?id=SNwH0dDGl7_
|
https://openreview.net/forum?id=SNwH0dDGl7_
|
Dan Qiao,Yu-Xiang Wang
|
ICLR 2023,Poster
|
We study the problem of deployment-efficient reinforcement learning (RL) with linear function approximation under the \emph{reward-free} exploration setting. This is a well-motivated problem because deploying new policies is costly in real-life RL applications. Under the linear MDP setting with feature dimension $d$ and planning horizon $H$, we propose a new algorithm that collects at most $\widetilde{O}(\frac{d^2H^5}{\epsilon^2})$ trajectories within $H$ deployments to identify an $\epsilon$-optimal policy for any (possibly data-dependent) choice of reward functions. To the best of our knowledge, our approach is the first to achieve optimal deployment complexity and optimal $d$ dependence in sample complexity at the same time, even if the reward is known ahead of time. Our novel techniques include an exploration-preserving policy discretization and a generalized G-optimal experiment design, which could be of independent interest. Lastly, we analyze the related problem of regret minimization in low-adaptive RL and provide information-theoretic lower bounds for switching cost and batch complexity.
|
https://openreview.net/pdf/126aa9751c7215e2be8182af8e11d1d39167d337.pdf
|
An Equal-Size Hard EM Algorithm for Diverse Dialogue Generation
|
https://openreview.net/forum?id=k5PEHHY4spM
|
https://openreview.net/forum?id=k5PEHHY4spM
|
Yuqiao Wen,Yongchang Hao,Yanshuai Cao,Lili Mou
|
ICLR 2023,Poster
|
Open-domain dialogue systems aim to interact with humans through natural language texts in an open-ended fashion. Despite the recent success of super large dialogue systems such as ChatGPT, using medium-to-small-sized dialogue systems remains the common practice as they are more lightweight and accessible; however, generating diverse dialogue responses is challenging, especially with smaller models. In this work, we propose an Equal-size Hard Expectation--Maximization (EqHard-EM) algorithm to train a multi-decoder model for diverse dialogue generation. Our algorithm assigns a sample to a decoder in a hard manner and additionally imposes an equal-assignment constraint to ensure that all decoders are well-trained. We provide detailed theoretical analysis to justify our approach. Further, experiments on two large-scale open-domain dialogue datasets verify that our EqHard-EM algorithm generates high-quality diverse responses.
|
https://openreview.net/pdf/e35691a0aba5f3cd36e304c2bdd3ecb53711ebb1.pdf
|
The hidden uniform cluster prior in self-supervised learning
|
https://openreview.net/forum?id=04K3PMtMckp
|
https://openreview.net/forum?id=04K3PMtMckp
|
Mido Assran,Randall Balestriero,Quentin Duval,Florian Bordes,Ishan Misra,Piotr Bojanowski,Pascal Vincent,Michael Rabbat,Nicolas Ballas
|
ICLR 2023,Poster
|
A successful paradigm in representation learning is to perform self-supervised pretraining using tasks based on mini-batch statistics (e.g., SimCLR, VICReg, SwAV, MSN). We show that the formulation of all these methods contains an overlooked prior to learn features that enable uniform clustering of the data. While this prior has led to remarkably semantic representations when pretraining on class-balanced data, such as ImageNet, we demonstrate that it can hamper performance when pretraining on class-imbalanced data. By moving away from conventional uniformity priors and instead preferring power-law distributed feature clusters, we show that one can improve the quality of the learned representations on real-world class-imbalanced datasets. To demonstrate this, we develop an extension of the Masked Siamese Networks (MSN) method to support the use of arbitrary feature priors.
|
https://openreview.net/pdf/a96a1f9b1547d4f0c5db1e0faf80244de4de3293.pdf
|
Long-Tailed Partial Label Learning via Dynamic Rebalancing
|
https://openreview.net/forum?id=sXfWoK4KvSW
|
https://openreview.net/forum?id=sXfWoK4KvSW
|
Feng Hong,Jiangchao Yao,Zhihan Zhou,Ya Zhang,Yanfeng Wang
|
ICLR 2023,Poster
|
Real-world data usually couples label ambiguity with heavy imbalance, challenging the algorithmic robustness of partial label learning (PLL) and long-tailed learning (LT). The straightforward combination of LT and PLL, i.e., LT-PLL, suffers from a fundamental dilemma: LT methods build upon a given class distribution that is unavailable in PLL, and the performance of PLL is severely degraded in the long-tailed context. We show that even with the aid of an oracle class prior, the state-of-the-art methods underperform due to the adverse fact that the constant rebalancing in LT is harsh to the label disambiguation in PLL. To overcome this challenge, we thus propose a dynamic rebalancing method, termed RECORDS, without assuming any prior knowledge about the class distribution. Based on a parametric decomposition of the biased output, our method constructs a dynamic adjustment that is benign to the label disambiguation process and theoretically converges to the oracle class prior. Extensive experiments on three benchmark datasets demonstrate the significant gain of RECORDS compared with a range of baselines. The code is publicly available.
|
https://openreview.net/pdf/318115286f07cfda977b9ade502830c961c8a335.pdf
|
Task Ambiguity in Humans and Language Models
|
https://openreview.net/forum?id=QrnDe_9ZFd8
|
https://openreview.net/forum?id=QrnDe_9ZFd8
|
Alex Tamkin,Kunal Handa,Avash Shrestha,Noah Goodman
|
ICLR 2023,Poster
|
Language models have recently achieved strong performance across a wide range of NLP benchmarks. However, real world tasks are often poorly specified, and agents must deduce the intended behavior from a combination of context, instructions, and examples. We investigate how both humans and models behave in the face of such task ambiguity by proposing AmbiBench, a new benchmark of six ambiguously-specified classification tasks. We evaluate humans and models on AmbiBench by seeing how well they identify the intended task using 1) instructions with varying degrees of ambiguity, and 2) different numbers of labeled examples. We find that the combination of model scaling (to 175B parameters) and reinforcement learning from human feedback (RLHF) enables models to approach or exceed the accuracy of human participants across tasks, but that either one of these alone is not sufficient. In addition, we show how to dramatically improve the accuracy of language models trained without RLHF by finetuning on a small number of ambiguous in-context examples, providing a promising direction for teaching models to generalize well in the face of ambiguity.
|
https://openreview.net/pdf/ad665e20278befbeb0b883432e1a85e44109b64d.pdf
|
Winning Both the Accuracy of Floating Point Activation and the Simplicity of Integer Arithmetic
|
https://openreview.net/forum?id=z92lBy1ehjI
|
https://openreview.net/forum?id=z92lBy1ehjI
|
Yulhwa Kim,Jaeyong Jang,Jehun Lee,Jihoon Park,Jeonghoon Kim,Byeongwook Kim,Baeseong park,Se Jung Kwon,Dongsoo Lee,jae-joon kim
|
ICLR 2023,Poster
|
Even though floating point (FP) numbers have been adopted as a de facto standard data format for deep learning computing, the complexity of FP arithmetic impedes a broader deployment of Deep Neural Networks (DNNs). Recent works such as quantization have attempted to replace the FP matrix multiplication (MatMul) of DNNs with simple integer MatMul by transforming the datatypes of both weights and activations into integers. Unfortunately, unlike weight values that are static, it is challenging to represent dynamic activations with integers. In this paper, to simultaneously achieve the accuracy of FP activation and the simplicity of integer arithmetic, we present a method for replacing FP arithmetic with integer one without changing FP activations in the storage format while weights are quantized. The proposed method pre-aligns the significands of FP activations just ahead of the MatMul on-the-fly so that the aligned significands (integers) can be used for the computation. Inspired by an observation that conventional FP arithmetic does not produce precise results due to rounding, we demonstrate that our proposed integer arithmetic-based scheme can produce the same level of errors as that of the FP arithmetic in case DNNs use FP activations and quantized weights. Experimental results show that the hardware based on the proposed scheme shows significant improvement over FP arithmetic-based designs in terms of energy efficiency and throughput-per-area while maintaining a similar level of accuracy.
|
https://openreview.net/pdf/23401b9976bc4354ab4085a198b754762b23330f.pdf
|
Preference Transformer: Modeling Human Preferences using Transformers for RL
|
https://openreview.net/forum?id=Peot1SFDX0
|
https://openreview.net/forum?id=Peot1SFDX0
|
Changyeon Kim,Jongjin Park,Jinwoo Shin,Honglak Lee,Pieter Abbeel,Kimin Lee
|
ICLR 2023,Poster
|
Preference-based reinforcement learning (RL) provides a framework to train agents using human preferences between two behaviors. However, preference-based RL has been challenging to scale since it requires a large amount of human feedback to learn a reward function aligned with human intent. In this paper, we present Preference Transformer, a neural architecture that models human preferences using transformers. Unlike prior approaches assuming human judgment is based on the Markovian rewards which contribute to the decision equally, we introduce a new preference model based on the weighted sum of non-Markovian rewards. We then design the proposed preference model using a transformer architecture that stacks causal and bidirectional self-attention layers. We demonstrate that Preference Transformer can solve a variety of control tasks using real human preferences, while prior approaches fail to work. We also show that Preference Transformer can induce a well-specified reward and attend to critical events in the trajectory by automatically capturing the temporal dependencies in human decision-making. Code is available on the project website: https://sites.google.com/view/preference-transformer.
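For reference, the standard Bradley-Terry preference model used in preference-based RL scores a pair of trajectory segments by summed rewards; Preference Transformer replaces the equal-weight Markovian sum with a learned, non-Markovian weighted sum, so the weighting below is only schematic:
$$
P\big[\sigma^1 \succ \sigma^0\big] \;=\; \frac{\exp\big(\sum_{t} w_t\, \hat r(\sigma^1_{\le t})\big)}{\exp\big(\sum_{t} w_t\, \hat r(\sigma^0_{\le t})\big) + \exp\big(\sum_{t} w_t\, \hat r(\sigma^1_{\le t})\big)},
$$
where $\hat r$ depends on the whole history $\sigma_{\le t}$ rather than a single state-action pair, and the importance weights $w_t$ are produced by the transformer.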
|
https://openreview.net/pdf/8a47190a33890c3b90463d493dc6f9bb78af91ee.pdf
|
More Centralized Training, Still Decentralized Execution: Multi-Agent Conditional Policy Factorization
|
https://openreview.net/forum?id=znLlSgN-4S0
|
https://openreview.net/forum?id=znLlSgN-4S0
|
Jiangxing Wang,Deheng Ye,Zongqing Lu
|
ICLR 2023,Poster
|
In cooperative multi-agent reinforcement learning (MARL), combining value decomposition with actor-critic enables agents to learn stochastic policies, which are more suitable for the partially observable environment. Given the goal of learning local policies that enable decentralized execution, agents are commonly assumed to be independent of each other, even in centralized training. However, such an assumption may prohibit agents from learning the optimal joint policy. To address this problem, we explicitly take the dependency among agents into centralized training. Although this leads to the optimal joint policy, it may not be factorized for decentralized execution. Nevertheless, we theoretically show that from such a joint policy, we can always derive another joint policy that achieves the same optimality but can be factorized for decentralized execution. To this end, we propose multi-agent conditional policy factorization (MACPF), which takes more centralized training but still enables decentralized execution. We empirically verify MACPF in various cooperative MARL tasks and demonstrate that MACPF achieves better performance or faster convergence than baselines. Our code is available at https://github.com/PKU-RL/FOP-DMAC-MACPF.
|
https://openreview.net/pdf/8258fe1c50fe61494176aa41b2c207716e3d556b.pdf
|
Edgeformers: Graph-Empowered Transformers for Representation Learning on Textual-Edge Networks
|
https://openreview.net/forum?id=2YQrqe4RNv
|
https://openreview.net/forum?id=2YQrqe4RNv
|
Bowen Jin,Yu Zhang,Yu Meng,Jiawei Han
|
ICLR 2023,Poster
|
Edges in many real-world social/information networks are associated with rich text information (e.g., user-user communications or user-product reviews). However, mainstream network representation learning models focus on propagating and aggregating node attributes, lacking specific designs to utilize text semantics on edges. While there exist edge-aware graph neural networks, they directly initialize edge attributes as a feature vector, which cannot fully capture the contextualized text semantics of edges. In this paper, we propose Edgeformers, a framework built upon graph-enhanced Transformers, to perform edge and node representation learning by modeling texts on edges in a contextualized way. Specifically, in edge representation learning, we inject network information into each Transformer layer when encoding edge texts; in node representation learning, we aggregate edge representations through an attention mechanism within each node’s ego-graph. On five public datasets from three different domains, Edgeformers consistently outperform state-of-the-art baselines in edge classification and link prediction, demonstrating the efficacy in learning edge and node representations, respectively.
|
https://openreview.net/pdf/d4e8e9e71eb8c7b7f49164daa7d89ed12538df19.pdf
|
Any-scale Balanced Samplers for Discrete Space
|
https://openreview.net/forum?id=lEkl0jdSb7B
|
https://openreview.net/forum?id=lEkl0jdSb7B
|
Haoran Sun,Bo Dai,Charles Sutton,Dale Schuurmans,Hanjun Dai
|
ICLR 2023,Poster
|
The locally balanced informed proposal has proved to be highly effective for sampling from discrete spaces. However, its success relies on the "local" factor, which ensures that whenever the proposal distribution is restricted to be near the current state, the locally balanced weight functions are asymptotically optimal and the gradient approximations are accurate. In seeking a more efficient sampling algorithm, many recent works have considered increasing the scale of the proposal distributions, but this causes the "local" factor to no longer hold. Instead, we propose any-scale balanced samplers to repair the gap in non-local proposals. In particular, we substitute the locally balanced function with an any-scale balanced function that can self-adjust to achieve better efficiency for proposal distributions at any scale. We also use quadratic approximations to capture curvature of the target distribution and reduce the error in the gradient approximation, while employing a Gaussian integral trick with a special estimated diagonal to efficiently sample from the quadratic proposal distribution. On various synthetic and real distributions, the proposed sampler substantially outperforms existing approaches.
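For background, the locally balanced informed proposal that this work generalizes weights each neighbor $x'$ of the current state $x$ by a balancing function of the density ratio; the sketch below is the standard local form, not the paper's any-scale replacement for $g$:
$$
q(x' \mid x)\;\propto\; g\!\left(\frac{\pi(x')}{\pi(x)}\right),\qquad x' \in \mathcal{N}(x),\qquad g(t)=\sqrt{t}\ \ \text{or}\ \ g(t)=\frac{t}{1+t},
$$
where the ratio is typically approximated with a first-order Taylor expansion of $\log\pi$, which is accurate only when $\mathcal{N}(x)$ stays close to $x$.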
|
https://openreview.net/pdf/52af7225d7de609b61840c7cb0d899b963a2013e.pdf
|
Equivariant Shape-Conditioned Generation of 3D Molecules for Ligand-Based Drug Design
|
https://openreview.net/forum?id=4MbGnp4iPQ
|
https://openreview.net/forum?id=4MbGnp4iPQ
|
Keir Adams,Connor W. Coley
|
ICLR 2023,Poster
|
Shape-based virtual screening is widely used in ligand-based drug design to search chemical libraries for molecules with similar 3D shapes yet novel 2D graph structures compared to known ligands. 3D deep generative models can potentially automate this exploration of shape-conditioned 3D chemical space; however, no existing models can reliably generate geometrically realistic drug-like molecules in conformations with a specific shape. We introduce a new multimodal 3D generative model that enables shape-conditioned 3D molecular design by equivariantly encoding molecular shape and variationally encoding chemical identity. We ensure local geometric and chemical validity of generated molecules by using autoregressive fragment-based generation with heuristic bonding geometries, allowing the model to prioritize the scoring of rotatable bonds to best align the growing conformation to the target shape. We evaluate our 3D generative model in tasks relevant to drug design including shape-conditioned generation of chemically diverse molecular structures and shape-constrained molecular property optimization, demonstrating its utility over virtual screening of enumerated libraries.
|
https://openreview.net/pdf/42d19238140ebee340546b8dafb68f8b62d7cb2b.pdf
|
Imbalanced Semi-supervised Learning with Bias Adaptive Classifier
|
https://openreview.net/forum?id=rVM8wD2G7Dy
|
https://openreview.net/forum?id=rVM8wD2G7Dy
|
Renzhen Wang,Xixi Jia,Quanziang Wang,Yichen Wu,Deyu Meng
|
ICLR 2023,Poster
|
Pseudo-labeling has proven to be a promising semi-supervised learning (SSL) paradigm. Existing pseudo-labeling methods commonly assume that the class distributions of training data are balanced. However, such an assumption is far from realistic scenarios and thus severely limits the performance of current pseudo-labeling methods under the context of class-imbalance. To alleviate this problem, we design a bias adaptive classifier that targets the imbalanced SSL setups. The core idea is to automatically assimilate the training bias caused by class imbalance via the bias adaptive classifier, which is composed of a novel bias attractor and the original linear classifier. The bias attractor is designed as a light-weight residual network and learned through a bi-level learning framework, which enables the bias adaptive classifier to fit imbalanced training data, while the linear classifier can provide unbiased label prediction for each class. We conduct extensive experiments under various imbalanced semi-supervised setups, and the results demonstrate that our method can be applied to different pseudo-labeling models and is superior to current state-of-the-art methods.
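A rough sketch of the described architecture; the exact wiring, layer sizes, and the bi-level training loop used in the paper are not reproduced, and the residual-on-logits design below is an assumption made only to illustrate "linear classifier plus light-weight residual bias attractor".

```python
import torch
import torch.nn as nn

class BiasAdaptiveClassifier(nn.Module):
    """Linear classifier whose logits are corrected by a small residual
    'bias attractor' network during training; at test time the plain
    linear head can be used for unbiased predictions."""
    def __init__(self, feat_dim: int, num_classes: int, hidden: int = 64):
        super().__init__()
        self.linear = nn.Linear(feat_dim, num_classes)
        self.bias_attractor = nn.Sequential(
            nn.Linear(num_classes, hidden), nn.ReLU(), nn.Linear(hidden, num_classes)
        )

    def forward(self, feats: torch.Tensor, use_attractor: bool = True):
        logits = self.linear(feats)
        if use_attractor:  # absorb the class-imbalance bias during training
            logits = logits + self.bias_attractor(logits)
        return logits

model = BiasAdaptiveClassifier(feat_dim=128, num_classes=10)
print(model(torch.randn(4, 128)).shape)  # torch.Size([4, 10])
```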
|
https://openreview.net/pdf/ce2d7303e3379522b33ee866e6f80db67acce889.pdf
|
On Compositional Uncertainty Quantification for Seq2seq Graph Parsing
|
https://openreview.net/forum?id=rJcLocAJpA6
|
https://openreview.net/forum?id=rJcLocAJpA6
|
Zi Lin,Du Phan,Panupong Pasupat,Jeremiah Zhe Liu,Jingbo Shang
|
ICLR 2023,Poster
|
Recent years have witnessed the success of applying seq2seq models to graph parsing tasks, where the outputs are compositionally structured (e.g., a graph or a tree). However, these seq2seq approaches pose a challenge in quantifying the model’s compositional uncertainty on graph structures due to the gap between seq2seq output probability and structural probability on the graph. This work is the first to quantify and evaluate compositional uncertainty for seq2seq graph parsing tasks. First, we propose a generic, probabilistically interpretable framework that establishes correspondences between seq2seq output probabilities and structural probabilities on the graph. This framework serves as a powerful medium for quantifying a seq2seq model's compositional uncertainty on graph elements (i.e., nodes or edges). Second, to evaluate uncertainty quality in terms of calibration, we propose a novel metric called Compositional Expected Calibration Error (CECE) which can measure a model’s calibration behavior in predicting graph structures. Through a thorough evaluation of compositional uncertainty on three different tasks across ten domains, we demonstrate that CECE better reflects distributional shift than vanilla sequence ECE. Finally, we validate the effectiveness of compositional uncertainty on the task of collaborative semantic parsing, where the model is allowed to send limited subgraphs for human review. The results show that collaborative performance based on uncertain subgraph selection consistently outperforms random subgraph selection (30% average error reduction rate) and performs comparably to oracle subgraph selection (only 0.33 difference in average prediction error), indicating that compositional uncertainty is an ideal signal for model errors and can benefit various downstream tasks.
|
https://openreview.net/pdf/b36dc0f3f80cd19f7198d10e37c8003772a84b06.pdf
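As a reference point for the calibration metric discussed above, the following is a minimal sketch of vanilla binned expected calibration error (ECE) computed over per-element confidences. CECE itself aggregates such errors compositionally over graph elements, which this sketch does not implement; the example confidences are made up.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Vanilla binned ECE: weighted average of |accuracy - confidence| per bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return ece

# Hypothetical per-edge confidences and correctness flags from a parsed graph.
conf = np.array([0.95, 0.80, 0.65, 0.99, 0.40, 0.70])
hit  = np.array([1,    1,    0,    1,    0,    1])
print(expected_calibration_error(conf, hit, n_bins=5))
```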
|
Free Lunch for Domain Adversarial Training: Environment Label Smoothing
|
https://openreview.net/forum?id=GPTjnA57h_3
|
https://openreview.net/forum?id=GPTjnA57h_3
|
YiFan Zhang,xue wang,Jian Liang,Zhang Zhang,Liang Wang,Rong Jin,Tieniu Tan
|
ICLR 2023,Poster
|
A fundamental challenge for machine learning models is how to generalize learned models to out-of-distribution (OOD) data. Among various approaches, exploiting invariant features via Domain Adversarial Training (DAT) has received widespread attention. Despite its success, we observe training instability in DAT, mostly due to an over-confident domain discriminator and environment label noise. To address this issue, we propose Environment Label Smoothing (ELS), which encourages the discriminator to output soft probabilities, thus reducing the discriminator's confidence and alleviating the impact of noisy environment labels. We demonstrate, both experimentally and theoretically, that ELS can improve training stability, local convergence, and robustness to noisy environment labels. By incorporating ELS with DAT methods, we are able to yield state-of-the-art results on a wide range of domain generalization/adaptation tasks, particularly when the environment labels are highly noisy.
|
https://openreview.net/pdf/a57a488c5037baaa9467aa0442529885253edeee.pdf
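In the two-domain case, environment label smoothing of this kind amounts to replacing hard 0/1 environment labels in the domain discriminator loss with smoothed targets. A minimal PyTorch sketch is given below; the smoothing value gamma and the toy discriminator architecture are assumptions.

```python
import torch
import torch.nn as nn

def smoothed_domain_labels(domain, gamma=0.1):
    """Map hard environment labels {0, 1} to soft targets {gamma, 1 - gamma}."""
    return domain * (1.0 - gamma) + (1.0 - domain) * gamma

# Toy domain discriminator on 16-d encoder features (architecture is an assumption).
discriminator = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
bce = nn.BCEWithLogitsLoss()

features = torch.randn(8, 16)                 # features from the encoder
domain = torch.randint(0, 2, (8, 1)).float()  # hard environment labels

logits = discriminator(features)
loss_hard = bce(logits, domain)                         # standard DAT discriminator loss
loss_els = bce(logits, smoothed_domain_labels(domain))  # ELS-style smoothed loss
```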
|
Scaling Forward Gradient With Local Losses
|
https://openreview.net/forum?id=JxpBP1JM15-
|
https://openreview.net/forum?id=JxpBP1JM15-
|
Mengye Ren,Simon Kornblith,Renjie Liao,Geoffrey Hinton
|
ICLR 2023,Poster
|
Forward gradient learning computes a noisy directional gradient and is a biologically plausible alternative to backprop for learning deep neural networks. The standard forward gradient algorithm suffers from the curse of dimensionality in the number of parameters. In this paper, we propose to scale forward gradient by adding a large number of local greedy loss functions. We consider block-wise, patch-wise, and channel group-wise local losses, and show that activity perturbation reduces variance compared to weight perturbation. Inspired by MLPMixer, we also propose a new architecture, LocalMixer, that is more suitable for local learning. We find local learning can work well with both supervised classification and self-supervised contrastive learning. Empirically, it can match backprop on MNIST and CIFAR-10 and significantly outperform backprop-free algorithms on ImageNet.
|
https://openreview.net/pdf/8b67f407d22905f08e467f5c91f96f9eb5c64a19.pdf
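Forward gradient learning estimates a gradient from a single forward-mode directional derivative. Below is a minimal weight-perturbation sketch using `torch.func.jvp` (requires a recent PyTorch with `torch.func`); the quadratic toy objective, scaling, and step size are assumptions, and the paper's local losses and activity perturbation are not shown.

```python
import torch
from torch.func import jvp

torch.manual_seed(0)
A = torch.randn(8, 4) / 4
b = torch.randn(8)

def loss_fn(w):
    # Toy quadratic objective (assumed for illustration): L(w) = ||A w - b||^2
    return ((A @ w - b) ** 2).sum()

w = torch.zeros(4)
lr = 0.05
for step in range(400):
    v = torch.randn_like(w)                     # random tangent direction
    loss, dir_deriv = jvp(loss_fn, (w,), (v,))  # forward pass gives L(w) and grad(L) . v
    w = w - lr * dir_deriv * v                  # SGD with the forward-gradient estimate
    if step % 100 == 0:
        print(step, round(loss.item(), 4))      # loss should trend downward, noisily
```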
|
Understanding Embodied Reference with Touch-Line Transformer
|
https://openreview.net/forum?id=ugA1HX69sf
|
https://openreview.net/forum?id=ugA1HX69sf
|
Yang Li,Xiaoxue Chen,Hao Zhao,Jiangtao Gong,Guyue Zhou,Federico Rossano,Yixin Zhu
|
ICLR 2023,Poster
|
We study embodied reference understanding, the task of locating referents using embodied gestural signals and language references. Human studies have revealed that, contrary to popular belief, objects referred to or pointed to do not lie on the elbow-wrist line, but rather on the so-called virtual touch line. Nevertheless, contemporary human pose representations lack the virtual touch line. To tackle this problem, we devise the touch-line Transformer: It takes as input tokenized visual and textual features and simultaneously predicts the referent’s bounding box and a touch-line vector. Leveraging this touch-line prior, we further devise a geometric consistency loss that promotes co-linearity between referents and touch lines. Using the touch line as gestural information dramatically improves model performances: Experiments on the YouRefIt dataset demonstrate that our method yields a +25.0% accuracy improvement under the 0.75 IoU criterion, hence closing 63.6% of the performance difference between models and humans. Furthermore, we computationally validate prior human studies by demonstrating that computational models more accurately locate referents when employing the virtual touch line than when using the elbow-wrist line.
|
https://openreview.net/pdf/f05337a1a545b0bea49b8f29bbfa4c42519f1b18.pdf
|
Calibration Matters: Tackling Maximization Bias in Large-scale Advertising Recommendation Systems
|
https://openreview.net/forum?id=wzlWiO_WY4
|
https://openreview.net/forum?id=wzlWiO_WY4
|
Yewen Fan,Nian Si,Kun Zhang
|
ICLR 2023,Poster
|
Calibration is defined as the ratio of the average predicted click rate to the true click rate. The optimization of calibration is essential to many online advertising recommendation systems because it directly affects the downstream bids in ads auctions and the amount of money charged to advertisers. Despite its importance, calibration often suffers from a problem called “maximization bias”. Maximization bias refers to the phenomenon that the maximum of predicted values overestimates the true maximum. The problem is introduced because the calibration is computed on the set selected by the prediction model itself. It persists even if unbiased predictions are achieved on every datapoint and worsens when covariate shifts exist between the training and test sets. To mitigate this problem, we quantify maximization bias and propose a variance-adjusting debiasing (VAD) meta-algorithm in this paper. The algorithm is efficient, robust, and practical as it is able to mitigate the maximization bias problem under covariate shifts, without incurring additional online serving costs or compromising the ranking performance. We demonstrate the effectiveness of the proposed algorithm using a state-of-the-art recommendation neural network model on a large-scale real-world dataset.
|
https://openreview.net/pdf/76998aaf8526e1f00dd7667ee8aac7d903203be0.pdf
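The maximization bias described above can be reproduced in a few lines of simulation: predictions that are unbiased per item still overestimate the click rate on the items the model itself selects. The data-generating numbers below are assumptions chosen only to make the effect visible.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, k = 100_000, 1_000

true_ctr = rng.uniform(0.01, 0.05, size=n_items)
# Unbiased but noisy predictions: E[pred | item] == true_ctr.
pred_ctr = true_ctr + rng.normal(scale=0.01, size=n_items)

# The ads system selects the top-k items by predicted CTR.
top = np.argsort(pred_ctr)[-k:]

print("calibration on all items:     ", pred_ctr.mean() / true_ctr.mean())        # ~1.0
print("calibration on selected items:", pred_ctr[top].mean() / true_ctr[top].mean())  # > 1.0
```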
|
Memorization-Dilation: Modeling Neural Collapse Under Noise
|
https://openreview.net/forum?id=cJWxqmmDL2b
|
https://openreview.net/forum?id=cJWxqmmDL2b
|
Duc Anh Nguyen,Ron Levie,Julian Lienen,Eyke Hüllermeier,Gitta Kutyniok
|
ICLR 2023,Poster
|
The notion of neural collapse refers to several emergent phenomena that have been empirically observed across various canonical classification problems. During the terminal phase of training a deep neural network, the feature embeddings of all examples of the same class tend to collapse to a single representation, and the features of different classes tend to separate as much as possible. Neural collapse is often studied through a simplified model, called the layer-peeled model, in which the network is assumed to have ``infinite expressivity'' and can map each data point to any arbitrary representation. In this work we study a more realistic variant of the layer-peeled model, which takes the positivity of the features into account. Furthermore, we extend this model to also incorporate the limited expressivity of the network. Empirical evidence suggests that the memorization of noisy data points leads to a degradation (dilation) of the neural collapse. Using a model of the memorization-dilation (M-D) phenomenon, we show one mechanism by which different losses lead to different performances of the trained network on noisy data. Our proofs reveal why label smoothing, a modification of cross-entropy empirically observed to produce a regularization effect, leads to improved generalization in classification tasks.
|
https://openreview.net/pdf/9f22b86b155fa265265ff7806589485c313427c8.pdf
|
Spacetime Representation Learning
|
https://openreview.net/forum?id=qV_M_rhYajc
|
https://openreview.net/forum?id=qV_M_rhYajc
|
Marc T. Law,James Lucas
|
ICLR 2023,Poster
|
Much of the data we encounter in the real world can be represented as directed graphs. In this work, we introduce a general family of representations for directed graphs through connected time-oriented Lorentz manifolds, called "spacetimes" in general relativity. Spacetimes intrinsically contain a causal structure that indicates whether or not there exists a causal or even chronological order between points of the manifold, called events. This chronological order allows us to naturally represent directed edges via imposing the correct ordering when the nodes are embedded as events in the spacetime. Previous work in machine learning only considers embeddings lying on the simplest Lorentz manifold or does not exploit the connection between Lorentzian pre-length spaces and directed graphs. We introduce a well-defined approach to map data onto a general family of spacetimes. We empirically evaluate our framework in the tasks of hierarchy extraction of undirected graphs, directed link prediction and representation of directed graphs.
|
https://openreview.net/pdf/36cdcef566ad431e1ed53bed1c6ffee73624b56e.pdf
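On the simplest spacetime, flat Minkowski space, the chronological order used to encode directed edges reduces to checking that the displacement between two embedded events is future-directed and timelike. A minimal sketch follows, assuming the metric signature (-,+,...,+) with time as the first coordinate; it only illustrates the ordering check, not the embedding method.

```python
import numpy as np

def chronologically_precedes(u, v):
    """True if event u chronologically precedes event v in flat Minkowski space.

    Events are vectors (t, x_1, ..., x_d); signature (-, +, ..., +) is assumed.
    """
    d = v - u
    interval = -d[0] ** 2 + np.sum(d[1:] ** 2)  # Minkowski squared interval
    return d[0] > 0 and interval < 0            # future-directed and timelike

a = np.array([0.0, 0.0, 0.0])
b = np.array([2.0, 1.0, 0.5])   # inside the future light cone of a
c = np.array([0.5, 3.0, 0.0])   # spacelike separated from a

print(chronologically_precedes(a, b))  # True  -> could encode a directed edge a -> b
print(chronologically_precedes(a, c))  # False -> no chronological order
```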
|
Learning to Extrapolate: A Transductive Approach
|
https://openreview.net/forum?id=lid14UkLPd4
|
https://openreview.net/forum?id=lid14UkLPd4
|
Aviv Netanyahu,Abhishek Gupta,Max Simchowitz,Kaiqing Zhang,Pulkit Agrawal
|
ICLR 2023,Poster
|
Machine learning systems, especially with overparameterized deep neural networks, can generalize to novel test instances drawn from the same distribution as the training data. However, they fare poorly when evaluated on out-of-support test points. In this work, we tackle the problem of developing machine learning systems that retain the power of overparameterized function approximators while enabling extrapolation to out-of-support test points when possible. This is accomplished by noting that under certain conditions, a "transductive" reparameterization can convert an out-of-support extrapolation problem into a problem of within-support combinatorial generalization. We propose a simple strategy based on bilinear embeddings to enable this type of combinatorial generalization, thereby addressing the out-of-support extrapolation problem under certain conditions. We instantiate a simple, practical algorithm applicable to various supervised learning and imitation learning tasks.
|
https://openreview.net/pdf/4cb097b958a256f64b05297454e400b8097ea80f.pdf
|
Label-free Concept Bottleneck Models
|
https://openreview.net/forum?id=FlCg47MNvBA
|
https://openreview.net/forum?id=FlCg47MNvBA
|
Tuomas Oikarinen,Subhro Das,Lam M. Nguyen,Tsui-Wei Weng
|
ICLR 2023,Poster
|
Concept bottleneck models (CBM) are a popular way of creating more interpretable neural networks by having hidden layer neurons correspond to human-understandable concepts. However, existing CBMs and their variants have two crucial limitations: first, they need to collect labeled data for each of the predefined concepts, which is time consuming and labor intensive; second, the accuracy of a CBM is often significantly lower than that of a standard neural network, especially on more complex datasets. This poor performance creates a barrier for adopting CBMs in practical real world applications. Motivated by these challenges, we propose Label-free CBM which is a novel framework to transform any neural network into an interpretable CBM without labeled concept data, while retaining a high accuracy. Our Label-free CBM has many advantages, it is: scalable - we present the first CBM scaled to ImageNet, efficient - creating a CBM takes only a few hours even for very large datasets, and automated - training it for a new dataset requires minimal human effort. Our code is available at https://github.com/Trustworthy-ML-Lab/Label-free-CBM.
|
https://openreview.net/pdf/f6e0ef2be0578f171a67040f6fc6873a70f1ac9c.pdf
|
Multi-level Protein Structure Pre-training via Prompt Learning
|
https://openreview.net/forum?id=XGagtiJ8XC
|
https://openreview.net/forum?id=XGagtiJ8XC
|
Zeyuan Wang,Qiang Zhang,Shuang-Wei HU,Haoran Yu,Xurui Jin,Zhichen Gong,Huajun Chen
|
ICLR 2023,Poster
|
A protein can focus on different structure levels to implement its functions. Each structure has its own merit and driving forces in describing some specific characteristics, and they cannot replace each other. Most existing function prediction methods take the tertiary structure as input, unintentionally ignoring the other levels of protein structures. Considering protein sequences can determine multi-level structures, in this paper, we aim to realize the comprehensive potential of protein sequences for function prediction. Specifically, we propose a new prompt-guided multi-task pre-training and fine-tuning framework, and the resulting protein model is called PromptProtein. Through the prompt-guided multi-task pre-training, we learn multiple prompt signals to steer the model to focus on different structure levels. We also design a prompt fine-tuning module to provide downstream tasks the on-demand flexibility of utilizing respective levels of structure information. Extensive experiments on function prediction and protein engineering show that PromptProtein outperforms state-of-the-art methods by large margins.
|
https://openreview.net/pdf/dc43e5fd99a5392aedf0a73a5a11819a2d7ec708.pdf
|
GLM-130B: An Open Bilingual Pre-trained Model
|
https://openreview.net/forum?id=-Aw0rrrPUF
|
https://openreview.net/forum?id=-Aw0rrrPUF
|
Aohan Zeng,Xiao Liu,Zhengxiao Du,Zihan Wang,Hanyu Lai,Ming Ding,Zhuoyi Yang,Yifan Xu,Wendi Zheng,Xiao Xia,Weng Lam Tam,Zixuan Ma,Yufei Xue,Jidong Zhai,Wenguang Chen,Zhiyuan Liu,Peng Zhang,Yuxiao Dong,Jie Tang
|
ICLR 2023,Poster
|
We introduce GLM-130B, a bilingual (English and Chinese) pre-trained language model with 130 billion parameters. It is an attempt to open-source a 100B-scale model as good as GPT-3 (davinci) and unveil how models of such a scale can be successfully pre-trained. Over the course of this effort, we face numerous unexpected technical and engineering challenges, particularly on loss spikes and divergence. In this paper, we introduce the pre-training process of GLM-130B including its design choices, training strategies for both efficiency and stability, and engineering efforts. The resultant GLM-130B model significantly outperforms GPT-3 175B on a wide range of popular English benchmarks, an advantage not observed for OPT-175B and BLOOM-176B. It also consistently and significantly outperforms ERNIE TITAN 3.0 260B—the largest Chinese language model—across related benchmarks. Finally, we leverage a unique scaling property of GLM-130B to reach INT4 quantization with almost no performance loss, making it the first among 100B-scale models to do so and, more importantly, allowing its effective inference on 4×RTX 3090 (24G) or 8×RTX 2080 Ti (11G) GPUs, the most affordable GPUs ever required for using 100B-scale models. The GLM-130B model weights are publicly accessible and its code, training logs, related toolkit, and lessons learned are open-sourced at https://github.com/THUDM/GLM-130B/.
|
https://openreview.net/pdf/8b5b1ff47d47935e500a223aa3c138cb0755002f.pdf
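The INT4 weight quantization mentioned above can be illustrated generically with symmetric per-row round-to-nearest quantization: store 4-bit integers plus a per-row scale, and dequantize on the fly. This sketch is not GLM-130B's exact scheme, just the basic idea.

```python
import numpy as np

def quantize_int4(w):
    """Symmetric per-row INT4 quantization: q in [-7, 7], w ≈ q * scale."""
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0
    q = np.clip(np.round(w / scale), -7, 7).astype(np.int8)  # packed into 4 bits in practice
    return q, scale

def dequantize_int4(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 8)).astype(np.float32)
q, scale = quantize_int4(w)
w_hat = dequantize_int4(q, scale)
print("max abs reconstruction error:", np.abs(w - w_hat).max())
```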
|
Causal Estimation for Text Data with (Apparent) Overlap Violations
|
https://openreview.net/forum?id=Ha2MnQM9Ph
|
https://openreview.net/forum?id=Ha2MnQM9Ph
|
Lin Gui,Victor Veitch
|
ICLR 2023,Poster
|
Consider the problem of estimating the causal effect of some attribute of a text document; for example: what effect does writing a polite vs. rude email have on response time? To estimate a causal effect from observational data, we need to adjust for confounding aspects of the text that affect both the treatment and outcome---e.g., the topic or writing level of the text. These confounding aspects are unknown a priori, so it seems natural to adjust for the entirety of the text (e.g., using a transformer). However, causal identification and estimation procedures rely on the assumption of overlap: for all levels of the adjustment variables, there is randomness leftover so that every unit could have (not) received treatment. Since the treatment here is itself an attribute of the text, it is perfectly determined, and overlap is apparently violated. The purpose of this paper is to show how to handle causal identification and obtain robust causal estimation in the presence of apparent overlap violations. In brief, the idea is to use supervised representation learning to produce a data representation that preserves confounding information while eliminating information that is only predictive of the treatment. This representation then suffices for adjustment and satisfies overlap. Adapting results on non-parametric estimation, we show that this procedure shows robustness with respect to conditional outcome misestimation and yields a low-bias estimator that admits valid uncertainty quantification under weak conditions. Empirical results show reductions in bias and strong improvements in uncertainty quantification relative to the natural (transformer-based) baseline.
|
https://openreview.net/pdf/e84bb816534c7c06f4113662602be4227637df15.pdf
|
MoDem: Accelerating Visual Model-Based Reinforcement Learning with Demonstrations
|
https://openreview.net/forum?id=JdTnc9gjVfJ
|
https://openreview.net/forum?id=JdTnc9gjVfJ
|
Nicklas Hansen,Yixin Lin,Hao Su,Xiaolong Wang,Vikash Kumar,Aravind Rajeswaran
|
ICLR 2023,Poster
|
Poor sample efficiency continues to be the primary challenge for deployment of deep Reinforcement Learning (RL) algorithms for real-world applications, and in particular for visuo-motor control. Model-based RL has the potential to be highly sample efficient by concurrently learning a world model and using synthetic rollouts for planning and policy improvement. However, in practice, sample-efficient learning with model-based RL is bottlenecked by the exploration challenge. In this work, we find that leveraging just a handful of demonstrations can dramatically improve the sample-efficiency of model-based RL. Simply appending demonstrations to the interaction dataset, however, does not suffice. We identify key ingredients for leveraging demonstrations in model learning -- policy pretraining, targeted exploration, and oversampling of demonstration data -- which form the three phases of our model-based RL framework. We empirically study three complex visuo-motor control domains and find that our method is 160%-250% more successful in completing sparse reward tasks compared to prior approaches in the low data regime (100k interaction steps, 5 demonstrations). Code and videos are available at https://nicklashansen.github.io/modemrl.
|
https://openreview.net/pdf/6a5db4c4bb6e33558fd94164736145a682ec92a3.pdf
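One of the ingredients named above, oversampling of demonstration data, can be sketched as a mixed replay buffer that draws a fixed fraction of every batch from the demonstration set. The fraction and the transition layout below are assumptions, not MoDem's exact schedule.

```python
import numpy as np

class MixedReplayBuffer:
    """Sample a fixed fraction of each batch from demonstrations (oversampling)."""

    def __init__(self, demo_transitions, demo_fraction=0.25, seed=0):
        self.demos = list(demo_transitions)
        self.online = []
        self.frac = demo_fraction
        self.rng = np.random.default_rng(seed)

    def add(self, transition):
        self.online.append(transition)

    def sample(self, batch_size):
        n_demo = int(round(self.frac * batch_size))
        n_online = batch_size - n_demo
        demo_idx = self.rng.integers(0, len(self.demos), size=n_demo)
        online_idx = self.rng.integers(0, len(self.online), size=n_online)
        return [self.demos[i] for i in demo_idx] + [self.online[i] for i in online_idx]

# Hypothetical transitions represented as (obs, action, reward) tuples.
buf = MixedReplayBuffer(demo_transitions=[("d_obs", "d_act", 1.0)] * 50)
for t in range(1000):
    buf.add(("obs", "act", 0.0))
batch = buf.sample(64)   # ~16 demo transitions per batch, regardless of buffer sizes
```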
|
PD-MORL: Preference-Driven Multi-Objective Reinforcement Learning Algorithm
|
https://openreview.net/forum?id=zS9sRyaPFlJ
|
https://openreview.net/forum?id=zS9sRyaPFlJ
|
Toygun Basaklar,Suat Gumussoy,Umit Ogras
|
ICLR 2023,Poster
|
Multi-objective reinforcement learning (MORL) approaches have emerged to tackle many real-world problems with multiple conflicting objectives by maximizing a joint objective function weighted by a preference vector. These approaches find fixed customized policies corresponding to preference vectors specified during training. However, the design constraints and objectives typically change dynamically in real-life scenarios. Furthermore, storing a policy for each potential preference is not scalable. Hence, obtaining a set of Pareto front solutions for the entire preference space in a given domain with a single training is critical. To this end, we propose a novel MORL algorithm that trains a single universal network to cover the entire preference space scalable to continuous robotic tasks. The proposed approach, Preference-Driven MORL (PD-MORL), utilizes the preferences as guidance to update the network parameters. It also employs a novel parallelization approach to increase sample efficiency. We show that PD-MORL achieves up to 25% larger hypervolume for challenging continuous control tasks and uses an order of magnitude fewer trainable parameters compared to prior approaches.
|
https://openreview.net/pdf/55e6fec64df989d7ba873c05d2c219d23391d668.pdf
|
Understanding the Role of Nonlinearity in Training Dynamics of Contrastive Learning
|
https://openreview.net/forum?id=s130rTE3U_X
|
https://openreview.net/forum?id=s130rTE3U_X
|
Yuandong Tian
|
ICLR 2023,Poster
|
While the empirical success of self-supervised learning (SSL) heavily relies on the usage of deep nonlinear models, existing theoretical works on SSL understanding still focus on linear ones. In this paper, we study the role of nonlinearity in the training dynamics of contrastive learning (CL) on one and two-layer nonlinear networks with homogeneous activation $h(x) = h'(x)x$. We have two major theoretical discoveries. First, the presence of nonlinearity can lead to many local optima even in the 1-layer setting, each corresponding to certain patterns from the data distribution, while with linear activation, only one major pattern can be learned. This suggests that models with lots of parameters can be regarded as a \emph{brute-force} way to find these local optima induced by nonlinearity. Second, in the 2-layer case, linear activation is proven not capable of learning specialized weights into diverse patterns, demonstrating the importance of nonlinearity. In addition, for the 2-layer setting, we also discover \emph{global modulation}: those local patterns discriminative from the perspective of global-level patterns are prioritized to learn, further characterizing the learning process. Simulations verify our theoretical findings.
|
https://openreview.net/pdf/935ac2fd5682275bff54081dde2d671b9cd5cb3f.pdf
|
M-L2O: Towards Generalizable Learning-to-Optimize by Test-Time Fast Self-Adaptation
|
https://openreview.net/forum?id=s7oOe6cNRT8
|
https://openreview.net/forum?id=s7oOe6cNRT8
|
Junjie Yang,Xuxi Chen,Tianlong Chen,Zhangyang Wang,Yingbin Liang
|
ICLR 2023,Poster
|
Learning to Optimize (L2O) has drawn increasing attention as it often remarkably accelerates the optimization procedure of complex tasks by "overfitting" to a specific task type, leading to enhanced performance compared to analytical optimizers. Generally, L2O develops a parameterized optimization method (i.e., "optimizer") by learning from solving sample problems. This data-driven procedure yields L2O that can efficiently solve problems similar to those seen in training, that is, drawn from the same "task distribution". However, such learned optimizers often struggle when new test problems deviate substantially from the training task distribution. This paper investigates a potential solution to this open challenge, by meta-training an L2O optimizer that can perform fast test-time self-adaptation to an out-of-distribution task in only a few steps. We theoretically characterize the generalization of L2O, and further show that our proposed framework (termed M-L2O) provably facilitates rapid task adaptation by locating well-adapted initial points for the optimizer weights. Empirical observations on several classic tasks like LASSO and Quadratic demonstrate that M-L2O converges significantly faster than vanilla L2O with only $5$ steps of adaptation, echoing our theoretical results. Code is available at https://github.com/VITA-Group/M-L2O.
|
https://openreview.net/pdf/28968ae7fa07510ed3a4c526367db53e51ef3115.pdf
|
3D UX-Net: A Large Kernel Volumetric ConvNet Modernizing Hierarchical Transformer for Medical Image Segmentation
|
https://openreview.net/forum?id=wsZsjOSytRA
|
https://openreview.net/forum?id=wsZsjOSytRA
|
Ho Hin Lee,Shunxing Bao,Yuankai Huo,Bennett A. Landman
|
ICLR 2023,Poster
|
The recent 3D medical ViTs (e.g., SwinUNETR) achieve state-of-the-art performance on several 3D volumetric data benchmarks, including 3D medical image segmentation. Hierarchical transformers (e.g., Swin Transformers) reintroduced several ConvNet priors and further enhanced the practical viability of adapting volumetric segmentation in 3D medical datasets. The effectiveness of hybrid approaches is largely credited to the large receptive field for non-local self-attention and the large number of model parameters. We hypothesize that volumetric ConvNets can simulate the large receptive field behavior of these learning approaches with fewer model parameters using depth-wise convolution. In this work, we propose a lightweight volumetric ConvNet, termed 3D UX-Net, which adapts the hierarchical transformer using ConvNet modules for robust volumetric segmentation. Specifically, we revisit volumetric depth-wise convolutions with large kernel (LK) size (e.g. starting from $7\times7\times7$) to enable larger global receptive fields, inspired by Swin Transformer. We further substitute the multi-layer perceptron (MLP) in Swin Transformer blocks with pointwise depth convolutions and enhance model performance with fewer normalization and activation layers, thus reducing the number of model parameters. 3D UX-Net competes favorably with current SOTA transformers (e.g. SwinUNETR) using three challenging public datasets on volumetric brain and abdominal imaging: 1) MICCAI Challenge 2021 FLARE, 2) MICCAI Challenge 2021 FeTA, and 3) MICCAI Challenge 2022 AMOS. 3D UX-Net consistently outperforms SwinUNETR, improving Dice from 0.929 to 0.938 (FLARE2021) and from 0.867 to 0.874 (FeTA2021). We further evaluate the transfer learning capability of 3D UX-Net with AMOS2022 and demonstrate another improvement of $2.27\%$ Dice (from 0.880 to 0.900). The source code and our proposed model are available at https://github.com/MASILab/3DUX-Net.
|
https://openreview.net/pdf/66c7d9008e0864a9dd656e4b98c11672a8799de1.pdf
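The core building block described above (a 7×7×7 depth-wise volumetric convolution followed by pointwise convolutions in place of the Swin MLP) could be sketched in PyTorch roughly as follows. Layer widths, the normalization choice, and the residual wiring are assumptions rather than the released 3D UX-Net code.

```python
import torch
import torch.nn as nn

class LargeKernelBlock3D(nn.Module):
    """Sketch of a large-kernel volumetric block: 7x7x7 depth-wise conv + pointwise convs."""

    def __init__(self, dim, expansion=4):
        super().__init__()
        self.dw = nn.Conv3d(dim, dim, kernel_size=7, padding=3, groups=dim)  # depth-wise LK conv
        self.norm = nn.InstanceNorm3d(dim)
        self.pw1 = nn.Conv3d(dim, dim * expansion, kernel_size=1)            # pointwise expand
        self.act = nn.GELU()
        self.pw2 = nn.Conv3d(dim * expansion, dim, kernel_size=1)            # pointwise project

    def forward(self, x):
        residual = x
        x = self.norm(self.dw(x))
        x = self.pw2(self.act(self.pw1(x)))
        return x + residual

x = torch.randn(1, 32, 16, 16, 16)        # (batch, channels, D, H, W)
print(LargeKernelBlock3D(32)(x).shape)    # torch.Size([1, 32, 16, 16, 16])
```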
|
Interpretability in the Wild: a Circuit for Indirect Object Identification in GPT-2 Small
|
https://openreview.net/forum?id=NpsVSN6o4ul
|
https://openreview.net/forum?id=NpsVSN6o4ul
|
Kevin Ro Wang,Alexandre Variengien,Arthur Conmy,Buck Shlegeris,Jacob Steinhardt
|
ICLR 2023,Poster
|
Research in mechanistic interpretability seeks to explain behaviors of ML models in terms of their internal components. However, most previous work either focuses on simple behaviors in small models, or describes complicated behaviors in larger models with broad strokes. In this work, we bridge this gap by presenting an explanation for how GPT-2 small performs a natural language task that requires logical reasoning: indirect object identification (IOI). Our explanation encompasses 28 attention heads grouped into 7 main classes, which we discovered using a combination of interpretability approaches including causal interventions and projections.
To our knowledge, this investigation is the largest end-to-end attempt at reverse-engineering a natural behavior "in the wild" in a language model. We evaluate the reliability of our explanation using three quantitative criteria - faithfulness, completeness and minimality. Though these criteria support our explanation, they also point to remaining gaps in our understanding.
Our work provides evidence that a mechanistic understanding of large ML models is feasible, opening opportunities to scale our understanding to both larger models and more complex tasks.
|
https://openreview.net/pdf/1e69126a0944a99f44e32245d87e213a30cc2eb2.pdf
|
Equivariant Descriptor Fields: SE(3)-Equivariant Energy-Based Models for End-to-End Visual Robotic Manipulation Learning
|
https://openreview.net/forum?id=dnjZSPGmY5O
|
https://openreview.net/forum?id=dnjZSPGmY5O
|
Hyunwoo Ryu,Hong-in Lee,Jeong-Hoon Lee,Jongeun Choi
|
ICLR 2023,Poster
|
End-to-end learning for visual robotic manipulation is known to suffer from sample inefficiency, requiring large numbers of demonstrations. The spatial roto-translation equivariance, or the SE(3)-equivariance can be exploited to improve the sample efficiency for learning robotic manipulation. In this paper, we present SE(3)-equivariant models for visual robotic manipulation from point clouds that can be trained fully end-to-end. By utilizing the representation theory of the Lie group, we construct novel SE(3)-equivariant energy-based models that allow highly sample efficient end-to-end learning. We show that our models can learn from scratch without prior knowledge and yet are highly sample efficient (5~10 demonstrations are enough). Furthermore, we show that our models can generalize to tasks with (i) previously unseen target object poses, (ii) previously unseen target object instances of the category, and (iii) previously unseen visual distractors. We experiment with 6-DoF robotic manipulation tasks to validate our models' sample efficiency and generalizability. Codes are available at: https://github.com/tomato1mule/edf
|
https://openreview.net/pdf/57f26ad8208da7af3c9709e887023336732e3ce3.pdf
|
Explaining Temporal Graph Models through an Explorer-Navigator Framework
|
https://openreview.net/forum?id=BR_ZhvcYbGJ
|
https://openreview.net/forum?id=BR_ZhvcYbGJ
|
Wenwen Xia,Mincai Lai,Caihua Shan,Yao Zhang,Xinnan Dai,Xiang Li,Dongsheng Li
|
ICLR 2023,Poster
|
While GNN explanation has recently received significant attention, existing works are consistently designed for static graphs. Due to the prevalence of temporal graphs, many temporal graph models have been proposed, but explaining their predictions remains to be explored. To bridge the gap, in this paper, we propose T-GNNExplainer for temporal graph model explanation. Specifically, we regard a temporal graph as constituted by a sequence of temporal events. Given a target event, our task is to find a subset of previously occurring events that lead to the model's prediction for it. To handle this combinatorial optimization problem, T-GNNExplainer includes an explorer to find the event subsets with Monte Carlo Tree Search (MCTS) and a navigator that learns the correlations between events and helps reduce the search space. In particular, the navigator is trained in advance and then integrated with the explorer to speed up searching and achieve better results. To the best of our knowledge, T-GNNExplainer is the first explainer tailored for temporal graph models. We conduct extensive experiments to evaluate the performance of T-GNNExplainer. Experimental results on both real-world and synthetic datasets demonstrate that T-GNNExplainer can achieve superior performance, with up to about 50% improvement in the Area under the Fidelity-Sparsity Curve.
|
https://openreview.net/pdf/f236c13f60a0bf5c74cb31ee8bd5bf77939d656b.pdf
|
Soft Neighbors are Positive Supporters in Contrastive Visual Representation Learning
|
https://openreview.net/forum?id=l9vM_PaUKz
|
https://openreview.net/forum?id=l9vM_PaUKz
|
Chongjian GE,Jiangliu Wang,Zhan Tong,Shoufa Chen,Yibing Song,Ping Luo
|
ICLR 2023,Poster
|
Contrastive learning methods train visual encoders by comparing views (e.g., often created via a group of data augmentations on the same instance) from one instance to others. Typically, the views created from one instance are set as positive, while views from other instances are negative. This binary instance discrimination is studied extensively to improve feature representations in self-supervised learning. In this paper, we rethink the instance discrimination framework and find the binary instance labeling insufficient to measure correlations between different samples. For an intuitive example, given a random image instance, there may exist other images in a mini-batch whose content meanings are the same (i.e., belonging to the same category) or partially related (i.e., belonging to a similar category). How to treat the images that correlate similarly to the current image instance remains an unexplored problem. We thus propose to support the current image by exploring other correlated instances (i.e., soft neighbors). We first carefully cultivate a candidate neighbor set, which will be further utilized to explore the highly-correlated instances. A cross-attention module is then introduced to predict the correlation score (denoted as positiveness) of other correlated instances with respect to the current one. The positiveness score quantitatively measures the positive support from each correlated instance, and is encoded into the objective for pretext training. To this end, our proposed method benefits SSL by discriminating uncorrelated instances while absorbing correlated ones. We evaluate our soft neighbor contrastive learning method (SNCLR) on standard visual recognition benchmarks, including image classification, object detection, and instance segmentation. The state-of-the-art recognition performance shows that SNCLR is effective in improving feature representations from both ViT and CNN encoders.
|
https://openreview.net/pdf/fcf2bcabecc2de6a0abb518daa6c6bb2c960aa82.pdf
|
Offline RL for Natural Language Generation with Implicit Language Q Learning
|
https://openreview.net/forum?id=aBH_DydEvoH
|
https://openreview.net/forum?id=aBH_DydEvoH
|
Charlie Victor Snell,Ilya Kostrikov,Yi Su,Sherry Yang,Sergey Levine
|
ICLR 2023,Poster
|
Large language models distill broad knowledge from text corpora. However, they can be inconsistent when it comes to completing user specified tasks. This issue can be addressed by finetuning such models via supervised learning on curated datasets, or via reinforcement learning. In this work, we propose a novel offline RL method, implicit language Q-learning (ILQL), designed for use on language models, that combines both the flexible utility maximization framework of RL algorithms with the ability of supervised learning to leverage previously collected data, as well as its simplicity and stability. Our method employs a combination of value conservatism alongside an implicit dataset support constraint in learning value functions, which are then used to guide language model generations towards maximizing user-specified utility functions. In addition to empirically validating ILQL, we present a detailed empirical analysis of situations where offline RL can be useful in natural language generation settings, demonstrating how it can be a more effective utility optimizer than prior approaches for end-to-end dialogue, and how it can effectively optimize high variance reward functions based on subjective judgement, such as whether to label a comment as toxic or not.
|
https://openreview.net/pdf/52d92f1f776e26f5cd3935e847b0f2b77d30b7e9.pdf
|
CLIPSep: Learning Text-queried Sound Separation with Noisy Unlabeled Videos
|
https://openreview.net/forum?id=H-T3F0dMbyj
|
https://openreview.net/forum?id=H-T3F0dMbyj
|
Hao-Wen Dong,Naoya Takahashi,Yuki Mitsufuji,Julian McAuley,Taylor Berg-Kirkpatrick
|
ICLR 2023,Poster
|
Recent years have seen progress beyond domain-specific sound separation for speech or music towards universal sound separation for arbitrary sounds. Prior work on universal sound separation has investigated separating a target sound out of an audio mixture given a text query. Such text-queried sound separation systems provide a natural and scalable interface for specifying arbitrary target sounds. However, supervised text-queried sound separation systems require costly labeled audio-text pairs for training. Moreover, the audio provided in existing datasets is often recorded in a controlled environment, causing a considerable generalization gap to noisy audio in the wild. In this work, we aim to approach text-queried universal sound separation by using only unlabeled data. We propose to leverage the visual modality as a bridge to learn the desired audio-textual correspondence. The proposed CLIPSep model first encodes the input query into a query vector using the contrastive language-image pretraining (CLIP) model, and the query vector is then used to condition an audio separation model to separate out the target sound. While the model is trained on image-audio pairs extracted from unlabeled videos, at test time we can instead query the model with text inputs in a zero-shot setting, thanks to the joint language-image embedding learned by the CLIP model. Further, videos in the wild often contain off-screen sounds and background noise that may hinder the model from learning the desired audio-textual correspondence. To address this problem, we further propose an approach called noise invariant training for training a query-based sound separation model on noisy data. Experimental results show that the proposed models successfully learn text-queried universal sound separation using only noisy unlabeled videos, even achieving competitive performance against a supervised model in some settings.
|
https://openreview.net/pdf/dab45d6edbf6928631998496ce070789638a5484.pdf
|
On the Soft-Subnetwork for Few-Shot Class Incremental Learning
|
https://openreview.net/forum?id=z57WK5lGeHd
|
https://openreview.net/forum?id=z57WK5lGeHd
|
Haeyong Kang,Jaehong Yoon,Sultan Rizky Hikmawan Madjid,Sung Ju Hwang,Chang D. Yoo
|
ICLR 2023,Poster
|
Inspired by Regularized Lottery Ticket Hypothesis, which states that competitive smooth (non-binary) subnetworks exist within a dense network, we propose a few-shot class-incremental learning method referred to as Soft-SubNetworks (SoftNet). Our objective is to learn a sequence of sessions incrementally, where each session only includes a few training instances per class while preserving the knowledge of the previously learned ones. SoftNet jointly learns the model weights and adaptive non-binary soft masks at a base training session in which each mask consists of the major and minor subnetwork; the former aims to minimize catastrophic forgetting during training, and the latter aims to avoid overfitting to a few samples in each new training session. We provide comprehensive empirical validations demonstrating that our SoftNet effectively tackles the few-shot incremental learning problem by surpassing the performance of state-of-the-art baselines over benchmark datasets.
|
https://openreview.net/pdf/917067cad3106d4ad6406f9730ecb51a250db36e.pdf
|
An Adaptive Policy to Employ Sharpness-Aware Minimization
|
https://openreview.net/forum?id=6Wl7-M2BC-
|
https://openreview.net/forum?id=6Wl7-M2BC-
|
Weisen Jiang,Hansi Yang,Yu Zhang,James Kwok
|
ICLR 2023,Poster
|
Sharpness-aware minimization (SAM), which searches for flat minima by min-max optimization, has been shown to be useful in improving model generalization. However, since each SAM update requires computing two gradients, its computational cost and training time are both doubled compared to standard empirical risk minimization (ERM). Recent state-of-the-art methods reduce the fraction of SAM updates and thus accelerate SAM by switching between SAM and ERM updates randomly or periodically. In this paper, we design an adaptive policy to employ SAM based on the loss landscape geometry. Two efficient algorithms, AE-SAM and AE-LookSAM, are proposed. We theoretically show that AE-SAM has the same convergence rate as SAM. Experimental results on various datasets and architectures demonstrate the efficiency and effectiveness of the adaptive policy.
|
https://openreview.net/pdf/190e801aceaf60174555a92c2de693d6aafaa8f6.pdf
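For reference, the plain SAM update (the costly two-gradient step that the adaptive policy above tries to apply only when needed) can be sketched as follows. The model, loss, and radius rho are placeholders, and the adaptive switching rule itself is not shown.

```python
import torch
import torch.nn as nn

def sam_step(model, loss_fn, batch, optimizer, rho=0.05):
    """One SAM update: ascend to the worst-case nearby weights, then descend."""
    x, y = batch

    # First gradient: at the current weights.
    loss = loss_fn(model(x), y)
    loss.backward()
    grad_norm = torch.norm(torch.stack(
        [p.grad.norm() for p in model.parameters() if p.grad is not None]))

    # Perturb weights towards the (approximate) worst case within an L2 ball of radius rho.
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                eps.append(None); continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            eps.append(e)
    optimizer.zero_grad()

    # Second gradient: at the perturbed weights; this is what the optimizer applies.
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)            # restore the original weights before stepping
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()

model = nn.Linear(10, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
batch = (torch.randn(16, 10), torch.randint(0, 2, (16,)))
print(sam_step(model, nn.CrossEntropyLoss(), batch, opt))
```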
|
Fairness and Accuracy under Domain Generalization
|
https://openreview.net/forum?id=jBEXnEMdNOL
|
https://openreview.net/forum?id=jBEXnEMdNOL
|
Thai-Hoang Pham,Xueru Zhang,Ping Zhang
|
ICLR 2023,Poster
|
As machine learning (ML) algorithms are increasingly used in high-stakes applications, concerns have arisen that they may be biased against certain social groups. Although many approaches have been proposed to make ML models fair, they typically rely on the assumption that data distributions in training and deployment are identical. Unfortunately, this is commonly violated in practice and a model that is fair during training may lead to an unexpected outcome during its deployment. Although the problem of designing robust ML models under dataset shifts has been widely studied, most existing works focus only on the transfer of accuracy. In this paper, we study the transfer of both fairness and accuracy under domain generalization where the data at test time may be sampled from never-before-seen domains. We first develop theoretical bounds on the unfairness and expected loss at deployment, and then derive sufficient conditions under which fairness and accuracy can be perfectly transferred via invariant representation learning. Guided by this, we design a learning algorithm such that fair ML models learned with training data still have high fairness and accuracy when deployment environments change. Experiments on real-world data validate the proposed algorithm.
|
https://openreview.net/pdf/ca63e74c35378c2ac402e62c116efa7b42d9bbb1.pdf
|
Language Models Can Teach Themselves to Program Better
|
https://openreview.net/forum?id=SaRj2ka1XZ3
|
https://openreview.net/forum?id=SaRj2ka1XZ3
|
Patrick Haluptzok,Matthew Bowers,Adam Tauman Kalai
|
ICLR 2023,Poster
|
Recent Language Models (LMs) achieve breakthrough performance in code generation when trained on human-authored problems, even solving some competitive-programming problems. Self-play has proven useful in games such as Go, and thus it is natural to ask whether LMs can generate their own instructive programming problems to improve their performance. We show that it is possible for an LM to synthesize programming problems and solutions, which are filtered for correctness by a Python interpreter. The LM’s performance is then seen to improve when it is fine-tuned on its own synthetic problems and verified solutions; thus the model “improves itself” using the Python interpreter. Problems are specified formally as programming puzzles [Schuster et al. , 2021], a code-based problem format where solutions can easily be verified for correctness by execution. In experiments on publicly-available LMs, test accuracy more than doubles. This work demonstrates the potential for code LMs, with an interpreter, to generate instructive problems and improve their own performance.
|
https://openreview.net/pdf/70f1e70fce89088da12b7493b7d1a8d444a2acec.pdf
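The correctness filter described above, checking a generated solution by executing the puzzle, can be sketched as follows for the programming-puzzles format, where a puzzle is a function f and a solution is an input that makes f return True. Both code strings below are made-up examples, and exec-ing untrusted model output would need sandboxing and timeouts in practice.

```python
def verify_puzzle(puzzle_src, solution_src):
    """Return True if the generated solution makes the puzzle function return True."""
    env = {}
    exec(puzzle_src, env)     # defines f(...); run inside a sandbox in practice
    exec(solution_src, env)   # defines g()
    try:
        return env["f"](env["g"]()) is True
    except Exception:
        return False

# Hypothetical generated puzzle and solution (illustrative only).
puzzle = "def f(x):\n    return isinstance(x, int) and x * x == 144 and x > 0\n"
solution = "def g():\n    return 12\n"
print(verify_puzzle(puzzle, solution))   # True -> keep this pair for fine-tuning
```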
|
Latent Bottlenecked Attentive Neural Processes
|
https://openreview.net/forum?id=yIxtevizEA
|
https://openreview.net/forum?id=yIxtevizEA
|
Leo Feng,Hossein Hajimirsadeghi,Yoshua Bengio,Mohamed Osama Ahmed
|
ICLR 2023,Poster
|
Neural Processes (NPs) are popular methods in meta-learning that can estimate predictive uncertainty on target datapoints by conditioning on a context dataset. The previous state-of-the-art method, Transformer Neural Processes (TNPs), achieves strong performance but requires quadratic computation with respect to the number of context datapoints, significantly limiting its scalability. Conversely, existing sub-quadratic NP variants perform significantly worse than TNPs. Tackling this issue, we propose Latent Bottlenecked Attentive Neural Processes (LBANPs), a new computationally efficient sub-quadratic NP variant whose querying computational complexity is independent of the number of context datapoints. The model encodes the context dataset into a constant number of latent vectors on which self-attention is performed. When making predictions, the model retrieves higher-order information from the context dataset via multiple cross-attention mechanisms on the latent vectors. We empirically show that LBANPs achieve results competitive with the state-of-the-art on meta-regression, image completion, and contextual multi-armed bandits. We demonstrate that LBANPs can trade off computational cost and performance according to the number of latent vectors. Finally, we show LBANPs can scale beyond existing attention-based NP variants to larger dataset settings.
|
https://openreview.net/pdf/a1d6e9f4add6577dd4a6d3d899e40db8cdda95d1.pdf
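The latent bottleneck described above can be sketched with two cross-attention stages: a fixed set of learnable latent vectors attends to the context once, and target queries then attend only to those latents, so prediction cost does not grow with the context size. The dimensions below and the omission of the iterative refinement layers are assumptions.

```python
import torch
import torch.nn as nn

class LatentBottleneck(nn.Module):
    """Encode a context set into a fixed number of latents, then answer queries from them."""

    def __init__(self, dim=64, n_latents=16, n_heads=4):
        super().__init__()
        self.latents = nn.Parameter(torch.randn(n_latents, dim) * 0.02)
        self.encode_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.self_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.query_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, context, queries):
        b = context.shape[0]
        lat = self.latents.unsqueeze(0).expand(b, -1, -1)
        lat, _ = self.encode_attn(lat, context, context)   # latents attend to the context
        lat, _ = self.self_attn(lat, lat, lat)              # self-attention among latents
        out, _ = self.query_attn(queries, lat, lat)         # queries attend only to latents
        return out

ctx = torch.randn(2, 500, 64)   # large context set
qry = torch.randn(2, 10, 64)    # target points
print(LatentBottleneck()(ctx, qry).shape)   # torch.Size([2, 10, 64])
```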
|