Dataset schema: each record below consists of seven fields, one per line, in the order title, url, detail_url, authors, tags, abstract, pdf. All fields are free-form strings except tags, which takes one of 3 class values.
Revisiting Graph Adversarial Attack and Defense From a Data Distribution Perspective
https://openreview.net/forum?id=dSYoPjM5J_W
https://openreview.net/forum?id=dSYoPjM5J_W
Kuan Li,Yang Liu,Xiang Ao,Qing He
ICLR 2023,Poster
Recent studies have shown that structural perturbations are significantly effective in degrading the accuracy of Graph Neural Networks (GNNs) in the semi-supervised node classification (SSNC) task. However, why gradient-based methods are so destructive is rarely explored. In this work, we discover an interesting phenomenon: the adversarial edges are not uniformly distributed on the graph. Nearly all perturbations are generated around the training nodes in poisoning attacks. Combined with this phenomenon, we provide an explanation for the effectiveness of the gradient-based attack method from a data distribution perspective and revisit both poisoning attacks and evasion attacks in SSNC. From this new perspective, we empirically and theoretically discuss some other attack tendencies. Based on the analysis, we provide nine practical tips on both attack and defense and leverage them to improve existing attack and defense methods. Moreover, we design a fast attack method and a self-training defense method, which outperform the state-of-the-art methods and can effectively scale to large graphs like ogbn-arxiv. We conduct extensive experiments on four benchmark datasets to verify our claims.
https://openreview.net/pdf/99c1a6d0b7e2c429546a50e3a030610f5bb3c5a0.pdf
Provable Sim-to-real Transfer in Continuous Domain with Partial Observations
https://openreview.net/forum?id=S31oTB72m0G
https://openreview.net/forum?id=S31oTB72m0G
Jiachen Hu,Han Zhong,Chi Jin,Liwei Wang
ICLR 2023,Poster
Sim-to-real transfer, which trains RL agents in the simulated environments and then deploys them in the real world, has been widely used to overcome the limitations of gathering samples in the real world. Despite the empirical success of the sim-to-real transfer, its theoretical foundation is much less understood. In this paper, we study the sim-to-real transfer in continuous domain with partial observations, where the simulated environments and real-world environments are modeled by linear quadratic Gaussian (LQG) systems. We show that a popular robust adversarial training algorithm is capable of learning a policy from the simulated environment that is competitive to the optimal policy in the real-world environment. To achieve our results, we design a new algorithm for infinite-horizon average-cost LQGs and establish a regret bound that depends on the intrinsic complexity of the model class. Our algorithm crucially relies on a novel history clipping scheme, which might be of independent interest.
https://openreview.net/pdf/515b0ec07db7d7e603d68eda09eb8978669607f4.pdf
Globally Optimal Training of Neural Networks with Threshold Activation Functions
https://openreview.net/forum?id=_9k5kTgyHT
https://openreview.net/forum?id=_9k5kTgyHT
Tolga Ergen,Halil Ibrahim Gulluk,Jonathan Lacotte,Mert Pilanci
ICLR 2023,Poster
Threshold activation functions are highly preferable in neural networks due to their efficiency in hardware implementations. Moreover, their mode of operation is more interpretable and resembles that of biological neurons. However, traditional gradient-based algorithms such as gradient descent cannot be used to train the parameters of neural networks with threshold activations since the activation function has zero gradient except at a single non-differentiable point. To this end, we study weight-decay-regularized training problems of deep neural networks with threshold activations. We first show that regularized deep threshold network training problems can be equivalently formulated as a standard convex optimization problem, which parallels the LASSO method, provided that the last hidden layer width exceeds a certain threshold. We also derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network. We corroborate our theoretical results with various numerical experiments.
https://openreview.net/pdf/8c4e7faf70efca88da4a7f7305b2bc5c907944f3.pdf
Molecule Generation For Target Protein Binding with Structural Motifs
https://openreview.net/forum?id=Rq13idF0F73
https://openreview.net/forum?id=Rq13idF0F73
ZAIXI ZHANG,Yaosen Min,Shuxin Zheng,Qi Liu
ICLR 2023,Poster
Designing ligand molecules that bind to specific protein binding sites is a fundamental problem in structure-based drug design. Although deep generative models and geometric deep learning have made great progress in drug design, existing works either sample in the 2D graph space or fail to generate valid molecules with realistic substructures. To tackle these problems, we propose a Fragment-based LigAnd Generation framework (FLAG), to generate 3D molecules with valid and realistic substructures fragment-by-fragment. In FLAG, a motif vocabulary is constructed by extracting common molecular fragments (i.e., motifs) in the dataset. At each generation step, a 3D graph neural network is first employed to encode the intermediate context information. Then, our model selects the focal motif, predicts the next motif type, and attaches the new motif. The bond lengths/angles can be quickly and accurately determined by cheminformatics tools. Finally, the molecular geometry is further adjusted according to the predicted rotation angle and the structure refinement. Our model not only achieves competitive performance on conventional metrics such as binding affinity, QED, and SA, but also outperforms baselines by a large margin in generating molecules with realistic substructures.
https://openreview.net/pdf/2f51ce0e41434fa7eac1af155e6ce6dbd0234098.pdf
Towards Robustness Certification Against Universal Perturbations
https://openreview.net/forum?id=7GEvPKxjtt
https://openreview.net/forum?id=7GEvPKxjtt
Yi Zeng,Zhouxing Shi,Ming Jin,Feiyang Kang,Lingjuan Lyu,Cho-Jui Hsieh,Ruoxi Jia
ICLR 2023,Poster
In this paper, we investigate the problem of certifying neural network robustness against universal perturbations (UPs), which have been widely used in universal adversarial attacks and backdoor attacks. Existing robustness certification methods aim to provide robustness guarantees for each sample with respect to the worst-case perturbations given a neural network. However, those sample-wise bounds will be loose when considering the UP threat model as they overlook the important constraint that the perturbation should be shared across all samples. We propose a method based on a combination of linear relaxation-based perturbation analysis and Mixed Integer Linear Programming to establish the first robust certification method for UP. In addition, we develop a theoretical framework for computing error bounds on the entire population using the certification results from a randomly sampled batch. Aside from an extensive evaluation of the proposed certification, we further show how the certification facilitates efficient comparison of robustness among different models or efficacy among different universal adversarial attack defenses and enables accurate detection of backdoor target classes.
https://openreview.net/pdf/ee043d338b4d99fee86cf7332af005a6da63696f.pdf
Deep Generative Modeling on Limited Data with Regularization by Nontransferable Pre-trained Models
https://openreview.net/forum?id=M9u_ctqFUlg
https://openreview.net/forum?id=M9u_ctqFUlg
Yong Zhong,Hongtao Liu,Xiaodong Liu,Fan Bao,Weiran Shen,Chongxuan Li
ICLR 2023,Poster
Deep generative models (DGMs) are data-hungry because learning a complex model on limited data suffers from a large variance and easily overfits. Inspired by the classical perspective of the bias-variance tradeoff, we propose regularized deep generative model (Reg-DGM), which leverages a nontransferable pre-trained model to reduce the variance of generative modeling with limited data. Formally, Reg-DGM optimizes a weighted sum of a certain divergence and the expectation of an energy function, where the divergence is between the data and the model distributions, and the energy function is defined by the pre-trained model w.r.t. the model distribution. We analyze a simple yet representative Gaussian-fitting case to demonstrate how the weighting hyperparameter trades off the bias and the variance. Theoretically, we characterize the existence and the uniqueness of the global minimum of Reg-DGM in a non-parametric setting and prove its convergence with neural networks trained by gradient-based methods. Empirically, with various pre-trained feature extractors and a data-dependent energy function, Reg-DGM consistently improves the generation performance of strong DGMs with limited data and achieves results competitive with state-of-the-art methods. Our implementation is available at https://github.com/ML-GSAI/Reg-ADA-APA.
https://openreview.net/pdf/f33b211732a9bbce5a0a49de0d4a54f0f976f1cc.pdf
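The Reg-DGM entry above describes its objective as a weighted sum of a divergence between the data and model distributions and the expectation of an energy function defined by a frozen pre-trained model. Below is a minimal PyTorch sketch of that objective's shape only; the non-saturating GAN surrogate for the divergence, the frozen feature extractor, and the negative-feature-norm energy are illustrative assumptions, not the paper's exact choices.

```python
import torch
import torch.nn.functional as F

def reg_dgm_generator_loss(generator, discriminator, frozen_extractor, z, lam=0.1):
    """Weighted sum of a divergence surrogate and a pre-trained-model energy term."""
    fake = generator(z)
    # Non-saturating GAN loss stands in for the divergence between data and model.
    divergence_term = F.softplus(-discriminator(fake)).mean()
    # Energy defined through a frozen pre-trained feature extractor (illustrative:
    # negative feature norm); the extractor stays fixed, only `fake` carries gradients.
    feats = frozen_extractor(fake)              # assumed shape: (batch, feature_dim)
    energy_term = (-feats.norm(dim=1)).mean()
    return divergence_term + lam * energy_term
```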
Basic Binary Convolution Unit for Binarized Image Restoration Network
https://openreview.net/forum?id=h8T5dZWTZ-Z
https://openreview.net/forum?id=h8T5dZWTZ-Z
Bin Xia,Yulun Zhang,Yitong Wang,Yapeng Tian,Wenming Yang,Radu Timofte,Luc Van Gool
ICLR 2023,Poster
Lighter and faster image restoration (IR) models are crucial for the deployment on resource-limited devices. Binary neural network (BNN), one of the most promising model compression methods, can dramatically reduce the computations and parameters of full-precision convolutional neural networks (CNN). However, BNNs have different properties from full-precision CNNs, and experience in designing CNNs can hardly be transferred to BNNs. In this study, we reconsider components in binary convolution, such as residual connection, BatchNorm, activation function, and structure, for IR tasks. We conduct systematic analyses to explain each component's role in binary convolution and discuss the pitfalls. Specifically, we find that residual connection can reduce the information loss caused by binarization; BatchNorm can solve the value range gap between residual connection and binary convolution; and the position of the activation function dramatically affects the performance of BNN. Based on our findings and analyses, we design a simple yet efficient basic binary convolution unit (BBCU). Furthermore, we divide IR networks into four parts and specially design variants of BBCU for each part to explore the benefit of binarizing these parts. We conduct experiments on different IR tasks, and our BBCU significantly outperforms other BNNs and lightweight models, which shows that BBCU can serve as a basic unit for binarized IR networks. All codes and models will be released.
https://openreview.net/pdf/3f11e62dcff084eee7280a7d57ccbd806de2c07c.pdf
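For the BBCU entry above, here is a minimal PyTorch sketch of a binary convolution unit with a full-precision residual connection and BatchNorm, in the spirit of the findings described (residual to limit binarization loss, BatchNorm to align value ranges). This is not the official BBCU implementation; the straight-through estimator, the PReLU activation, and the layer ordering are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SignSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # Straight-through estimator: pass gradients only where |x| <= 1.
        return grad_out * (x.abs() <= 1).float()

class BinaryConvUnit(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(channels, channels, 3, 3) * 0.01)
        self.bn = nn.BatchNorm2d(channels)
        self.act = nn.PReLU(channels)

    def forward(self, x):
        b_in = SignSTE.apply(x)             # binarize activations
        b_w = SignSTE.apply(self.weight)    # binarize weights
        out = F.conv2d(b_in, b_w, padding=1)
        out = self.bn(out)                  # align value range before the residual
        out = out + x                       # full-precision residual connection
        return self.act(out)
```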
Multimodal Federated Learning via Contrastive Representation Ensemble
https://openreview.net/forum?id=Hnk1WRMAYqg
https://openreview.net/forum?id=Hnk1WRMAYqg
Qiying Yu,Yang Liu,Yimu Wang,Ke Xu,Jingjing Liu
ICLR 2023,Poster
With the increasing amount of multimedia data on modern mobile systems and IoT infrastructures, harnessing these rich multimodal data without breaching user privacy becomes a critical issue. Federated learning (FL) serves as a privacy-conscious alternative to centralized machine learning. However, existing FL methods extended to multimodal data all rely on model aggregation at the single-modality level, which constrains the server and clients to use an identical model architecture for each modality. This limits the global model in terms of both model complexity and data capacity, not to mention task diversity. In this work, we propose \textit{Contrastive Representation Ensemble and Aggregation for Multimodal FL (CreamFL)}, a multimodal federated learning framework that enables training larger server models from clients with heterogeneous model architectures and data modalities, while only communicating knowledge on a public dataset. To achieve better multimodal representation fusion, we design a global-local cross-modal ensemble strategy to aggregate client representations. To mitigate local model drift caused by two unprecedented heterogeneous factors stemming from multimodal discrepancy (\textit{modality gap} and \textit{task gap}), we further propose two inter-modal and intra-modal contrasts to regularize local training, which complement information of the absent modality for uni-modal clients and regularize local clients to head towards global consensus. Thorough evaluations and ablation studies on image-text retrieval and visual question answering tasks showcase the superiority of CreamFL over state-of-the-art FL methods and its practical value.
https://openreview.net/pdf/a51674bdcfdff01c5f5e3ad87446148a5ce6b4be.pdf
Eva: Practical Second-order Optimization with Kronecker-vectorized Approximation
https://openreview.net/forum?id=_Mic8V96Voy
https://openreview.net/forum?id=_Mic8V96Voy
Lin Zhang,Shaohuai Shi,Bo Li
ICLR 2023,Poster
Second-order optimization algorithms exhibit excellent convergence properties for training deep learning models, but often incur significant computation and memory overheads. This can result in lower training efficiency than the first-order counterparts such as stochastic gradient descent (SGD). In this work, we present a memory- and time-efficient second-order algorithm named Eva with two novel techniques: 1) we construct the second-order information with the Kronecker factorization of small stochastic vectors over a mini-batch of training data to reduce memory consumption, and 2) we derive an efficient update formula without explicitly computing the inverse of matrices using the Sherman-Morrison formula. We further provide a theoretical interpretation of Eva from a trust-region optimization point of view to understand how it works. Extensive experimental results on different models and datasets show that Eva reduces the end-to-end training time up to $2.05\times$ and $2.42\times$ compared to first-order SGD and second-order algorithms (K-FAC and Shampoo), respectively.
https://openreview.net/pdf/0173bbe26be71a72abd5e239d93649b479c408bf.pdf
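The Eva entry above avoids explicitly computing matrix inverses by using the Sherman-Morrison formula. A minimal NumPy sketch of that idea, assuming the curvature is approximated by a damped rank-one matrix A = damping * I + a a^T built from a small stochastic vector `a`; Eva's Kronecker-factored construction is more structured than this toy version.

```python
import numpy as np

def sherman_morrison_precondition(grad, a, damping=1e-2):
    """Return (damping * I + a a^T)^{-1} @ grad without forming the matrix."""
    coeff = a @ grad / (damping + a @ a)
    return (grad - coeff * a) / damping

# Usage sketch on a random gradient and curvature vector.
rng = np.random.default_rng(0)
g = rng.normal(size=1000)
a = rng.normal(size=1000)
update = sherman_morrison_precondition(g, a)
```

The point of the identity is that the preconditioned update costs only two inner products and a few vector operations, instead of an O(d^3) inverse.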
Can CNNs Be More Robust Than Transformers?
https://openreview.net/forum?id=TKIFuQHHECj
https://openreview.net/forum?id=TKIFuQHHECj
Zeyu Wang,Yutong Bai,Yuyin Zhou,Cihang Xie
ICLR 2023,Poster
The recent success of Vision Transformers is challenging the decade-long dominance of Convolutional Neural Networks (CNNs) in image recognition. Specifically, in terms of robustness on out-of-distribution samples, recent research finds that Transformers are inherently more robust than CNNs, regardless of different training setups. Moreover, it is believed that such superiority of Transformers should largely be credited to their \emph{self-attention-like architectures per se}. In this paper, we question that belief by closely examining the design of Transformers. Our findings lead to three highly effective architecture designs for boosting robustness, yet simple enough to be implemented in several lines of code, namely a) patchifying input images, b) enlarging kernel size, and c) reducing activation layers and normalization layers. Bringing these components together, we are able to build pure CNN architectures without any attention-like operations that are as robust as, or even more robust than, Transformers. We hope this work can help the community better understand the design of robust neural architectures. The code is publicly available at https://github.com/UCSC-VLAA/RobustCNN.
https://openreview.net/pdf/a0002e329d4a358d2fa21a2a24095d9ef78f509a.pdf
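A minimal PyTorch sketch of the three design choices named in the entry above: a) a patchify stem, b) a large-kernel depthwise convolution, and c) a single activation and normalization per block. Channel sizes, kernel size, and the exact layer ordering are assumptions here, not the released RobustCNN architecture.

```python
import torch.nn as nn

def patchify_stem(in_ch=3, dim=96, patch=8):
    # (a) non-overlapping patch embedding implemented as a strided convolution
    return nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)

class RobustBlock(nn.Module):
    def __init__(self, dim, kernel_size=11):
        super().__init__()
        # (b) large-kernel depthwise convolution
        self.dw = nn.Conv2d(dim, dim, kernel_size, padding=kernel_size // 2, groups=dim)
        # (c) one normalization and one activation for the whole block
        self.norm = nn.BatchNorm2d(dim)
        self.pw1 = nn.Conv2d(dim, 4 * dim, 1)
        self.act = nn.GELU()
        self.pw2 = nn.Conv2d(4 * dim, dim, 1)

    def forward(self, x):
        return x + self.pw2(self.act(self.pw1(self.norm(self.dw(x)))))
```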
Risk-Aware Reinforcement Learning with Coherent Risk Measures and Non-linear Function Approximation
https://openreview.net/forum?id=-RwZOVybbj
https://openreview.net/forum?id=-RwZOVybbj
Thanh Lam,Arun Verma,Bryan Kian Hsiang Low,Patrick Jaillet
ICLR 2023,Poster
We study the risk-aware reinforcement learning (RL) problem in the episodic finite-horizon Markov decision process with unknown transition and reward functions. In contrast to the risk-neutral RL problem, we consider minimizing the risk of having low rewards, which arise due to the intrinsic randomness of the MDPs and imperfect knowledge of the model. Our work provides a unified framework to analyze the regret of risk-aware RL policy with coherent risk measures in conjunction with non-linear function approximation, which gives the first sub-linear regret bounds in the setting. Finally, we validate our theoretical results via empirical experiments on synthetic and real-world data.
https://openreview.net/pdf/12ed9c605b0fa8dc9237a38935722f9687e15641.pdf
Bi-level Physics-Informed Neural Networks for PDE Constrained Optimization using Broyden's Hypergradients
https://openreview.net/forum?id=kkpL4zUXtiw
https://openreview.net/forum?id=kkpL4zUXtiw
Zhongkai Hao,Chengyang Ying,Hang Su,Jun Zhu,Jian Song,Ze Cheng
ICLR 2023,Poster
Deep learning based approaches like Physics-informed neural networks (PINNs) and DeepONets have shown promise on solving PDE constrained optimization (PDECO) problems. However, existing methods are insufficient to handle those PDE constraints that have a complicated or nonlinear dependency on optimization targets. In this paper, we present a novel bi-level optimization framework to resolve the challenge by decoupling the optimization of the targets and constraints. For the inner loop optimization, we adopt PINNs to solve the PDE constraints only. For the outer loop, we design a novel method by using Broyden's method based on the Implicit Function Theorem (IFT), which is efficient and accurate for approximating hypergradients. We further present theoretical explanations and error analysis of the hypergradients computation. Extensive experiments on multiple large-scale and nonlinear PDE constrained optimization problems demonstrate that our method achieves state-of-the-art results compared with strong baselines.
https://openreview.net/pdf/2b30b8c2d9d64ebef78683aa290df344b7e79706.pdf
On the Saturation Effect of Kernel Ridge Regression
https://openreview.net/forum?id=tFvr-kYWs_Y
https://openreview.net/forum?id=tFvr-kYWs_Y
Yicheng Li,Haobo Zhang,Qian Lin
ICLR 2023,Poster
The saturation effect refers to the phenomenon that kernel ridge regression (KRR) fails to achieve the information-theoretic lower bound when the smoothness of the underlying ground truth function exceeds a certain level. The saturation effect has been widely observed in practice, and a saturation lower bound for KRR has been conjectured for decades. In this paper, we provide a proof of this long-standing conjecture.
https://openreview.net/pdf/d96aacc83b173439f5771f484954fdf07a50af3a.pdf
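For reference on the entry above, a minimal NumPy implementation of the kernel ridge regression estimator it studies, using an RBF kernel; the kernel choice and regularization value are illustrative.

```python
import numpy as np

def rbf_kernel(X, Y, bandwidth=1.0):
    # Squared Euclidean distances between all pairs of rows of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def krr_fit_predict(X_train, y_train, X_test, reg=1e-3, bandwidth=1.0):
    # f(x) = k(x, X) (K + n * reg * I)^{-1} y
    K = rbf_kernel(X_train, X_train, bandwidth)
    alpha = np.linalg.solve(K + reg * len(X_train) * np.eye(len(X_train)), y_train)
    return rbf_kernel(X_test, X_train, bandwidth) @ alpha
```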
Protein Representation Learning by Geometric Structure Pretraining
https://openreview.net/forum?id=to3qCB3tOh9
https://openreview.net/forum?id=to3qCB3tOh9
Zuobai Zhang,Minghao Xu,Arian Rokkum Jamasb,Vijil Chenthamarakshan,Aurelie Lozano,Payel Das,Jian Tang
ICLR 2023,Poster
Learning effective protein representations is critical in a variety of tasks in biology such as predicting protein function or structure. Existing approaches usually pretrain protein language models on a large number of unlabeled amino acid sequences and then finetune the models with some labeled data in downstream tasks. Despite the effectiveness of sequence-based approaches, the power of pretraining on known protein structures, which are available in smaller numbers only, has not been explored for protein property prediction, though protein structures are known to be determinants of protein function. In this paper, we propose to pretrain protein representations according to their 3D structures. We first present a simple yet effective encoder to learn the geometric features of a protein. We pretrain the protein graph encoder by leveraging multiview contrastive learning and different self-prediction tasks. Experimental results on both function prediction and fold classification tasks show that our proposed pretraining methods outperform or are on par with the state-of-the-art sequence-based methods, while using much less pretraining data. Our implementation is available at https://github.com/DeepGraphLearning/GearNet.
https://openreview.net/pdf/fcf1d34c0463c5072ee49ce930224e980675829b.pdf
Trainable Weight Averaging: Efficient Training by Optimizing Historical Solutions
https://openreview.net/forum?id=8wbnpOJY-f
https://openreview.net/forum?id=8wbnpOJY-f
Tao Li,Zhehao Huang,Qinghua Tao,Yingwen Wu,Xiaolin Huang
ICLR 2023,Poster
Stochastic gradient descent (SGD) and its variants are considered the de facto methods to train deep neural networks (DNNs). While recent improvements to SGD mainly focus on the descent algorithm itself, few works pay attention to utilizing the historical solutions---as an iterative method, SGD has gone through substantial explorations before convergence. A recent, interesting attempt is stochastic weight averaging (SWA), which significantly improves the generalization by simply averaging the solutions at the tail stage of training. In this paper, we realize that the averaging coefficients could be determined in a trainable manner and propose Trainable Weight Averaging (TWA), a novel optimization method in the reduced subspace spanned by historical solutions. TWA has much greater flexibility and can be applied to the head stage of training to achieve training efficiency while preserving good generalization capability. Further, we propose a distributed training scheme to resolve the memory burden of large-scale training with efficient parallel computation. In extensive numerical experiments, (i) TWA achieves consistent improvements over SWA with less sensitivity to learning rate; (ii) applying TWA in the head stage of training largely speeds up the convergence, resulting in over $40\%$ time saving on CIFAR and $30\%$ on ImageNet with improved generalization compared with regular training.
https://openreview.net/pdf/d32020780d5a44be516cb5ee9c13e38535e964ed.pdf
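A minimal sketch of the idea behind the TWA entry above: instead of uniformly averaging historical solutions as SWA does, learn the averaging coefficients over a set of checkpoints. The softmax parameterization, the optimizer, and the step counts are assumptions for illustration, not the paper's exact scheme.

```python
import torch

def trainable_weight_average(checkpoints, loss_fn, steps=100, lr=0.1):
    """checkpoints: list of flattened parameter vectors (tensors of equal shape).
    loss_fn: evaluates the training loss of a model built from a flattened vector."""
    W = torch.stack(checkpoints)                 # (k, d) historical solutions
    logits = torch.zeros(len(checkpoints), requires_grad=True)
    opt = torch.optim.Adam([logits], lr=lr)
    for _ in range(steps):
        coeffs = torch.softmax(logits, dim=0)    # trainable averaging coefficients
        averaged = coeffs @ W                    # point in the subspace they span
        loss = loss_fn(averaged)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (torch.softmax(logits, dim=0) @ W).detach()
```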
Deep Declarative Dynamic Time Warping for End-to-End Learning of Alignment Paths
https://openreview.net/forum?id=UClBPxIZqnY
https://openreview.net/forum?id=UClBPxIZqnY
Ming Xu,Sourav Garg,Michael Milford,Stephen Gould
ICLR 2023,Poster
This paper addresses learning end-to-end models for time series data that include a temporal alignment step via dynamic time warping (DTW). Existing approaches to differentiable DTW either differentiate through a fixed warping path or apply a differentiable relaxation to the min operator found in the recursive steps used to solve the DTW problem. We instead propose a DTW layer based around bi-level optimisation and deep declarative networks, which we name DecDTW. By formulating DTW as a continuous, inequality constrained optimisation problem, we can compute gradients for the solution of the optimal alignment (with respect to the underlying time series) using implicit differentiation. An interesting byproduct of this formulation is that DecDTW outputs the optimal warping path between two time series as opposed to a soft approximation, recoverable from Soft-DTW. We show that this property is particularly useful for applications where downstream loss functions are defined on the optimal alignment path itself. This naturally occurs, for instance, when learning to improve the accuracy of predicted alignments against ground truth alignments. We evaluate DecDTW on two such applications, namely the audio-to-score alignment task in music information retrieval and the visual place recognition task in robotics, demonstrating state-of-the-art results in both.
https://openreview.net/pdf/8e0e635f5694c8b92dd11327c5307fd352d33740.pdf
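For context on the DecDTW entry above, here is the classic dynamic-programming recursion that DTW solves and that DecDTW recasts as a continuous, inequality-constrained optimisation problem. This is the standard textbook DP, not the declarative DecDTW layer itself.

```python
import numpy as np

def dtw_cost(x, y):
    """x: (n, d) and y: (m, d) time series; returns the optimal alignment cost."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(x[i - 1] - y[j - 1])
            # Min over the three admissible predecessors of cell (i, j).
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```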
Analog Bits: Generating Discrete Data using Diffusion Models with Self-Conditioning
https://openreview.net/forum?id=3itjR9QxFw
https://openreview.net/forum?id=3itjR9QxFw
Ting Chen,Ruixiang ZHANG,Geoffrey Hinton
ICLR 2023,Poster
We present Bit Diffusion: a simple and generic approach for generating discrete data with continuous state and continuous time diffusion models. The main idea behind our approach is to first represent the discrete data as binary bits, and then train a continuous diffusion model to model these bits as real numbers which we call analog bits. To generate samples, the model first generates the analog bits, which are then thresholded to obtain the bits that represent the discrete variables. We further propose two simple techniques, namely Self-Conditioning and Asymmetric Time Intervals, which lead to a significant improvement in sample quality. Despite its simplicity, the proposed approach can achieve strong performance in both discrete image generation and image captioning tasks. For discrete image generation, we significantly improve previous state-of-the-art on both CIFAR-10 (which has 3K discrete 8-bit tokens) and ImageNet-64x64 (which has 12K discrete 8-bit tokens), outperforming the best autoregressive model in both sample quality (measured by FID) and efficiency. For image captioning on MS-COCO dataset, our approach achieves competitive results compared to autoregressive models.
https://openreview.net/pdf/a44a46ea8deb0fda30f91b5e7a0712d7544a462a.pdf
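A minimal sketch of the "analog bits" encoding and decoding described in the entry above: map integers to binary bits, treat the bits as real numbers in {-1, +1} for a continuous diffusion model, and threshold generated values back to bits. The diffusion model itself is omitted; the bit width is an example.

```python
import numpy as np

def int_to_analog_bits(x, num_bits=8):
    # Extract bits (least-significant first) and map {0, 1} -> {-1, +1}.
    bits = ((x[..., None] >> np.arange(num_bits)) & 1).astype(np.float32)
    return bits * 2.0 - 1.0

def analog_bits_to_int(analog):
    # Threshold the generated real values back to bits, then reassemble integers.
    bits = (analog > 0).astype(np.int64)
    return (bits << np.arange(bits.shape[-1])).sum(-1)

# Round trip on 8-bit pixel values.
pixels = np.array([0, 17, 255])
assert (analog_bits_to_int(int_to_analog_bits(pixels)) == pixels).all()
```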
Understanding Edge-of-Stability Training Dynamics with a Minimalist Example
https://openreview.net/forum?id=p7EagBsMAEO
https://openreview.net/forum?id=p7EagBsMAEO
Xingyu Zhu,Zixuan Wang,Xiang Wang,Mo Zhou,Rong Ge
ICLR 2023,Poster
Recently, researchers observed that gradient descent for deep neural networks operates in an ``edge-of-stability'' (EoS) regime: the sharpness (maximum eigenvalue of the Hessian) is often larger than the stability threshold $2/\eta$ (where $\eta$ is the step size). Despite this, the loss oscillates and converges in the long run, and the sharpness at the end is just slightly below $2/\eta$. While many other well-understood nonconvex objectives such as matrix factorization or two-layer networks can also converge despite large sharpness, there is often a larger gap between the sharpness of the endpoint and $2/\eta$. In this paper, we study the EoS phenomenon by constructing a simple function that has the same behavior. We give a rigorous analysis of its training dynamics in a large local region and explain why the final converging point has sharpness close to $2/\eta$. Globally, we observe that the training dynamics of our example have an interesting bifurcating behavior, which was also observed in the training of neural nets.
https://openreview.net/pdf/4787f700ee012e7abba6247943a20977fd42e705.pdf
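The entry above compares the sharpness (largest Hessian eigenvalue) against the stability threshold $2/\eta$. A minimal PyTorch sketch that estimates sharpness by power iteration on Hessian-vector products; the iteration count is an arbitrary choice and there is no convergence check.

```python
import torch

def sharpness(loss_fn, params, iters=50):
    """params: list of tensors with requires_grad=True; returns an estimate of
    the largest Hessian eigenvalue of loss_fn() with respect to params."""
    loss = loss_fn()
    grads = torch.autograd.grad(loss, params, create_graph=True)
    v = [torch.randn_like(p) for p in params]
    for _ in range(iters):
        norm = torch.sqrt(sum((vi ** 2).sum() for vi in v))
        v = [vi / norm for vi in v]
        # Hessian-vector product via a second backward pass.
        hv = torch.autograd.grad(grads, params, grad_outputs=v, retain_graph=True)
        eig = sum((hvi * vi).sum() for hvi, vi in zip(hv, v))  # Rayleigh quotient
        v = [hvi.detach() for hvi in hv]
    return eig.item()
```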
Learning Proximal Operators to Discover Multiple Optima
https://openreview.net/forum?id=PzBGIu-llo7
https://openreview.net/forum?id=PzBGIu-llo7
Lingxiao Li,Noam Aigerman,Vladimir Kim,Jiajin Li,Kristjan Greenewald,Mikhail Yurochkin,Justin Solomon
ICLR 2023,Poster
Finding multiple solutions of non-convex optimization problems is a ubiquitous yet challenging task. Most past algorithms either apply single-solution optimization methods from multiple random initial guesses or search in the vicinity of found solutions using ad hoc heuristics. We present an end-to-end method to learn the proximal operator of a family of training problems so that multiple local minima can be quickly obtained from initial guesses by iterating the learned operator, emulating the proximal-point algorithm that has fast convergence. The learned proximal operator can be further generalized to recover multiple optima for unseen problems at test time, enabling applications such as object detection. The key ingredient in our formulation is a proximal regularization term, which elevates the convexity of our training loss: by applying recent theoretical results, we show that for weakly-convex objectives with Lipschitz gradients, training of the proximal operator converges globally with a practical degree of over-parameterization. We further present an exhaustive benchmark for multi-solution optimization to demonstrate the effectiveness of our method.
https://openreview.net/pdf/d4dea9af12d57bd58abf72c6c2c9d76013bed435.pdf
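The entry above is built around the proximal operator and the proximal-point algorithm it emulates. For reference, a minimal sketch of the classical proximal-point iteration, with the inner argmin solved approximately by gradient descent; step counts, step sizes, and the inner solver are arbitrary illustrative choices.

```python
import torch

def prox(f, x, lam=1.0, inner_steps=100, lr=1e-2):
    """Approximate prox_{lam f}(x) = argmin_z f(z) + ||z - x||^2 / (2 * lam)."""
    z = x.detach().clone().requires_grad_(True)
    opt = torch.optim.SGD([z], lr=lr)
    for _ in range(inner_steps):
        loss = f(z) + ((z - x.detach()) ** 2).sum() / (2 * lam)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach()

def proximal_point(f, x0, outer_steps=20, lam=1.0):
    x = x0
    for _ in range(outer_steps):
        x = prox(f, x, lam)   # repeatedly apply the proximal operator
    return x
```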
Guiding continuous operator learning through Physics-based boundary constraints
https://openreview.net/forum?id=gfWNItGOES6
https://openreview.net/forum?id=gfWNItGOES6
Nadim Saad,Gaurav Gupta,Shima Alizadeh,Danielle C. Maddix
ICLR 2023,Poster
Boundary conditions (BCs) are important groups of physics-enforced constraints that are necessary for solutions of Partial Differential Equations (PDEs) to satisfy at specific spatial locations. These constraints carry important physical meaning, and guarantee the existence and the uniqueness of the PDE solution. Current neural-network based approaches that aim to solve PDEs rely only on training data to help the model learn BCs implicitly, however, there is no guarantee of BC satisfaction by these models during evaluation. In this work, we propose Boundary enforcing Operator Network (BOON) that enables the BC satisfaction of neural operators by making structural changes to the operator kernel. We provide our refinement procedure, and demonstrate the satisfaction of physics-based BCs such as Dirichlet, Neumann, and periodic by the solutions obtained by BOON. Numerical experiments based on multiple PDEs with a wide variety of applications indicate that the proposed approach ensures satisfaction of BCs, and leads to more accurate solutions over the whole domain. The proposed method exhibits a (2X-20X) improvement in accuracy (0.000084 relative $L^2$ error for Burgers' equation). Code available at: https://github.com/amazon-science/boon.
https://openreview.net/pdf/22e595c8defe365743f6349da79c0f6b1e7f4099.pdf
Neural Radiance Field Codebooks
https://openreview.net/forum?id=mX56bKDybu5
https://openreview.net/forum?id=mX56bKDybu5
Matthew Wallingford,Aditya Kusupati,Alex Fang,Vivek Ramanujan,Aniruddha Kembhavi,Roozbeh Mottaghi,Ali Farhadi
ICLR 2023,Poster
Compositional representations of the world are a promising step towards enabling high-level scene understanding and efficient transfer to downstream tasks. Learning such representations for complex scenes and tasks remains an open challenge. Towards this goal, we introduce Neural Radiance Field Codebooks (NRC), a scalable method for learning object-centric representations through novel view reconstruction. NRC learns to reconstruct scenes from novel views using a dictionary of object codes which are decoded through a volumetric renderer. This enables the discovery of reoccurring visual and geometric patterns across scenes which are transferable to downstream tasks. We show that NRC representations transfer well to object navigation in THOR, outperforming 2D and 3D representation learning methods by 3.1\% success rate. We demonstrate that our approach is able to perform unsupervised segmentation for more complex synthetic (THOR) and real scenes (NYU Depth) better than prior methods (.101 ARI). Finally, we show that NRC improves on the task of depth ordering by 5.5% accuracy in THOR.
https://openreview.net/pdf/b938c8f0acccd59571a25bc4c07a69bf24a26be3.pdf
Generalized Precision Matrix for Scalable Estimation of Nonparametric Markov Networks
https://openreview.net/forum?id=qBvBycTqVJ
https://openreview.net/forum?id=qBvBycTqVJ
Yujia Zheng,Ignavier Ng,Yewen Fan,Kun Zhang
ICLR 2023,Poster
A Markov network characterizes the conditional independence structure, or Markov property, among a set of random variables. Existing work focuses on specific families of distributions (e.g., exponential families) and/or certain structures of graphs, and most of them can only handle variables of a single data type (continuous or discrete). In this work, we characterize the conditional independence structure in general distributions for all data types (i.e., continuous, discrete, and mixed-type) with a Generalized Precision Matrix (GPM). Besides, we also allow general functional relations among variables, thus giving rise to a Markov network structure learning algorithm in one of the most general settings. To deal with the computational challenge of the problem, especially for large graphs, we unify all cases under the same umbrella of a regularized score matching framework. We validate the theoretical results and demonstrate the scalability empirically in various settings.
https://openreview.net/pdf/00599853de2618a03c95b10d71e6063c0cd2ecca.pdf
FiT: Parameter Efficient Few-shot Transfer Learning for Personalized and Federated Image Classification
https://openreview.net/forum?id=9aokcgBVIj1
https://openreview.net/forum?id=9aokcgBVIj1
Aliaksandra Shysheya,John F Bronskill,Massimiliano Patacchiola,Sebastian Nowozin,Richard E Turner
ICLR 2023,Poster
Modern deep learning systems are increasingly deployed in situations such as personalization and federated learning where it is necessary to support i) learning on small amounts of data, and ii) communication efficient distributed training protocols. In this work, we develop FiLM Transfer (FiT) which fulfills these requirements in the image classification setting by combining ideas from transfer learning (fixed pretrained backbones and fine-tuned FiLM adapter layers) and meta-learning (automatically configured Naive Bayes classifiers and episodic training) to yield parameter efficient models with superior classification accuracy at low-shot. The resulting parameter efficiency is key for enabling few-shot learning, inexpensive model updates for personalization, and communication efficient federated learning. We experiment with FiT on a wide range of downstream datasets and show that it achieves better classification accuracy than the leading Big Transfer (BiT) algorithm at low-shot and achieves state-of-the-art accuracy on the challenging VTAB-1k benchmark, with fewer than 1% of the updateable parameters. Finally, we demonstrate the parameter efficiency and superior accuracy of FiT in distributed low-shot applications including model personalization and federated learning where model update size is an important performance metric.
https://openreview.net/pdf/23f3b31eae45a21349ad7d90fe16f9e3add566ea.pdf
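The FiT entry above fine-tunes only FiLM adapter layers on a fixed pretrained backbone. A minimal sketch of a FiLM layer (a learned per-channel scale and shift); where such layers are inserted in the backbone is a design choice not shown here.

```python
import torch
import torch.nn as nn

class FiLM(nn.Module):
    def __init__(self, num_channels):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(num_channels))   # per-channel scale
        self.beta = nn.Parameter(torch.zeros(num_channels))   # per-channel shift

    def forward(self, x):  # x: (batch, channels, height, width)
        return x * self.gamma.view(1, -1, 1, 1) + self.beta.view(1, -1, 1, 1)

# Fine-tuning sketch: freeze the backbone, train only FiLM (and classifier) params, e.g.
# for p in backbone.parameters(): p.requires_grad_(False)
```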
Discrete Contrastive Diffusion for Cross-Modal Music and Image Generation
https://openreview.net/forum?id=1-MBdJssZ-S
https://openreview.net/forum?id=1-MBdJssZ-S
Ye Zhu,Yu Wu,Kyle Olszewski,Jian Ren,Sergey Tulyakov,Yan Yan
ICLR 2023,Poster
Diffusion probabilistic models (DPMs) have become a popular approach to conditional generation, due to their promising results and support for cross-modal synthesis. A key desideratum in conditional synthesis is to achieve high correspondence between the conditioning input and generated output. Most existing methods learn such relationships implicitly, by incorporating the prior into the variational lower bound. In this work, we take a different route---we explicitly enhance input-output connections by maximizing their mutual information. To this end, we introduce a Conditional Discrete Contrastive Diffusion (CDCD) loss and design two contrastive diffusion mechanisms to effectively incorporate it into the denoising process, combining the diffusion training and contrastive learning for the first time by connecting it with the conventional variational objectives. We demonstrate the efficacy of our approach in evaluations with diverse multimodal conditional synthesis tasks: dance-to-music generation, text-to-image synthesis, as well as class-conditioned image synthesis. On each, we enhance the input-output correspondence and achieve higher or competitive general synthesis quality. Furthermore, the proposed approach improves the convergence of diffusion models, reducing the number of required diffusion steps by more than 35% on two benchmarks, significantly increasing the inference speed.
https://openreview.net/pdf/d217aca9d2b4ac1be080c492bb9d4939061d6de7.pdf
Diffusion Probabilistic Modeling of Protein Backbones in 3D for the motif-scaffolding problem
https://openreview.net/forum?id=6TxBxqNME1Y
https://openreview.net/forum?id=6TxBxqNME1Y
Brian L. Trippe,Jason Yim,Doug Tischer,David Baker,Tamara Broderick,Regina Barzilay,Tommi S. Jaakkola
ICLR 2023,Poster
Construction of a scaffold structure that supports a desired motif, conferring protein function, shows promise for the design of vaccines and enzymes. But a general solution to this motif-scaffolding problem remains open. Current machine-learning techniques for scaffold design are either limited to unrealistically small scaffolds (up to length 20) or struggle to produce multiple diverse scaffolds. We propose to learn a distribution over diverse and longer protein backbone structures via an E(3)-equivariant graph neural network. We develop SMCDiff to efficiently sample scaffolds from this distribution conditioned on a given motif; our algorithm is the first to theoretically guarantee conditional samples from a diffusion model in the large-compute limit. We evaluate our designed backbones by how well they align with AlphaFold2-predicted structures. We show that our method can (1) sample scaffolds up to 80 residues and (2) achieve structurally diverse scaffolds for a fixed motif.
https://openreview.net/pdf/9674e0c41114fe41488ab7b7ea23289d7b213496.pdf
NeRF-SOS: Any-View Self-supervised Object Segmentation on Complex Scenes
https://openreview.net/forum?id=kfOtMqYJlUU
https://openreview.net/forum?id=kfOtMqYJlUU
Zhiwen Fan,Peihao Wang,Yifan Jiang,Xinyu Gong,Dejia Xu,Zhangyang Wang
ICLR 2023,Poster
Neural volumetric representations have shown the potential that Multi-layer Perceptrons (MLPs) can be optimized with multi-view calibrated images to represent scene geometry and appearance without explicit 3D supervision. Object segmentation can enrich many downstream applications based on the learned radiance field. However, introducing hand-crafted segmentation to define regions of interest in a complex real-world scene is non-trivial and expensive, as it requires per-view annotation. This paper explores self-supervised learning for object segmentation using NeRF on complex real-world scenes. Our framework, called NeRF with Self-supervised Object Segmentation (NeRF-SOS), couples object segmentation and neural radiance field to segment objects in any view within a scene. By proposing a novel collaborative contrastive loss at both the appearance and geometry levels, NeRF-SOS encourages NeRF models to distill compact geometry-aware segmentation clusters from their density fields and the self-supervised pre-trained 2D visual features. The self-supervised object segmentation framework can be applied to various NeRF models, yielding both photo-realistic rendering results and convincing segmentation maps for indoor and outdoor scenarios. Extensive results on the LLFF, BlendedMVS, CO3Dv2, and Tanks & Temples datasets validate the effectiveness of NeRF-SOS. It consistently surpasses other 2D-based self-supervised baselines and predicts finer object masks than existing supervised counterparts.
https://openreview.net/pdf/8a3caf2fdb6c1297748cb427b2cdaa5592c5bf14.pdf
Rethinking Graph Lottery Tickets: Graph Sparsity Matters
https://openreview.net/forum?id=fjh7UGQgOB
https://openreview.net/forum?id=fjh7UGQgOB
Bo Hui,Da Yan,Xiaolong Ma,Wei-Shinn Ku
ICLR 2023,Poster
Lottery Ticket Hypothesis (LTH) claims the existence of a winning ticket (i.e., a properly pruned sub-network together with original weight initialization) that can achieve competitive performance to the original dense network. A recent work, called UGS, extended LTH to prune graph neural networks (GNNs) for effectively accelerating GNN inference. UGS simultaneously prunes the graph adjacency matrix and the model weights using the same masking mechanism, but since the roles of the graph adjacency matrix and the weight matrices are very different, we find that their sparsifications lead to different performance characteristics. Specifically, we find that the performance of a sparsified GNN degrades significantly when the graph sparsity goes beyond a certain extent. Therefore, we propose two techniques to improve GNN performance when the graph sparsity is high. First, UGS prunes the adjacency matrix using a loss formulation which, however, does not properly involve all elements of the adjacency matrix; in contrast, we add a new auxiliary loss head to better guide the edge pruning by involving the entire adjacency matrix. Second, by regarding unfavorable graph sparsification as adversarial data perturbations, we formulate the pruning process as a min-max optimization problem to gain the robustness of lottery tickets when the graph sparsity is high. We further investigate the question: Can the ``retrainable'' winning ticket of a GNN also be effective for graph transfer learning? We call it the transferable graph lottery ticket (GLT) hypothesis. Extensive experiments demonstrate the superiority of our proposed sparsification method over UGS and empirically verify our transferable GLT hypothesis.
https://openreview.net/pdf/8093135b4a069e7cbc2301dcdc4e40bd165fbaa8.pdf
Private Federated Learning Without a Trusted Server: Optimal Algorithms for Convex Losses
https://openreview.net/forum?id=TVY6GoURrw
https://openreview.net/forum?id=TVY6GoURrw
Andrew Lowy,Meisam Razaviyayn
ICLR 2023,Poster
This paper studies federated learning (FL)—especially cross-silo FL—with data from people who do not trust the server or other silos. In this setting, each silo (e.g. hospital) has data from different people (e.g. patients) and must maintain the privacy of each person’s data (e.g. medical record), even if the server or other silos act as adversarial eavesdroppers. This requirement motivates the study of Inter-Silo Record-Level Differential Privacy (ISRL-DP), which requires silo $i$’s communications to satisfy record/item-level differential privacy (DP). ISRL-DP ensures that the data of each person (e.g. patient) in silo $i$ (e.g. hospital $i$) cannot be leaked. ISRL-DP is different from well-studied privacy notions. Central and user-level DP assume that people trust the server/other silos. On the other end of the spectrum, local DP assumes that people do not trust anyone at all (even their own silo). Sitting between central and local DP, ISRL-DP makes the realistic assumption (in cross-silo FL) that people trust their own silo, but not the server or other silos. In this work, we provide tight (up to logarithms) upper and lower bounds for ISRL-DP FL with convex/strongly convex loss functions and homogeneous (i.i.d.) silo data. Remarkably, we show that similar bounds are attainable for smooth losses with arbitrary heterogeneous silo data distributions, via an accelerated ISRL-DP algorithm. We also provide tight upper and lower bounds for ISRL-DP federated empirical risk minimization, and use acceleration to attain the optimal bounds in fewer rounds of communication than the state-of-the-art. Finally, with a secure “shuffler” to anonymize silo messages (but without a trusted server), our algorithm attains the optimal central DP rates under more practical trust assumptions. Numerical experiments show favorable privacy-accuracy tradeoffs for our algorithm in classification and regression tasks.
https://openreview.net/pdf/d390a5f0054fceb5ef93b755f3b8c613a4214bc4.pdf
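The entry above requires each silo's communications to satisfy record-level differential privacy. A minimal sketch of the generic mechanism that makes this possible: clip per-record gradients and add Gaussian noise before a silo sends its update. The clipping norm and noise multiplier are placeholders, and this is the standard noisy-SGD-style step rather than the paper's accelerated ISRL-DP algorithm.

```python
import torch

def noisy_silo_update(per_record_grads, clip_norm=1.0, noise_multiplier=1.0):
    """per_record_grads: (num_records, dim) gradients for one silo's local batch."""
    norms = per_record_grads.norm(dim=1, keepdim=True)
    # Clip each record's gradient to bound its sensitivity.
    clipped = per_record_grads * torch.clamp(clip_norm / (norms + 1e-12), max=1.0)
    summed = clipped.sum(dim=0)
    # Gaussian noise calibrated to the clipping norm before leaving the silo.
    noise = torch.randn_like(summed) * noise_multiplier * clip_norm
    return (summed + noise) / len(per_record_grads)   # privatized average gradient
```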
Cheap Talk Discovery and Utilization in Multi-Agent Reinforcement Learning
https://openreview.net/forum?id=cddbeL1HWaD
https://openreview.net/forum?id=cddbeL1HWaD
Yat Long Lo,Christian Schroeder de Witt,Samuel Sokota,Jakob Nicolaus Foerster,Shimon Whiteson
ICLR 2023,Poster
By enabling agents to communicate, recent cooperative multi-agent reinforcement learning (MARL) methods have demonstrated better task performance and more coordinated behavior. Most existing approaches facilitate inter-agent communication by allowing agents to send messages to each other through free communication channels, i.e., \emph{cheap talk channels}. Current methods require these channels to be constantly accessible and known to the agents a priori. In this work, we lift these requirements such that the agents must discover the cheap talk channels and learn how to use them. Hence, the problem has two main parts: \emph{cheap talk discovery} (CTD) and \emph{cheap talk utilization} (CTU). We introduce a novel conceptual framework for both parts and develop a new algorithm based on mutual information maximization that outperforms existing algorithms in CTD/CTU settings. We also release a novel benchmark suite to stimulate future research in CTD/CTU.
https://openreview.net/pdf/efb6725925bb04b66d9a794a929e5ed57ea8ef69.pdf
Reversible Column Networks
https://openreview.net/forum?id=Oc2vlWU0jFY
https://openreview.net/forum?id=Oc2vlWU0jFY
Yuxuan Cai,Yizhuang Zhou,Qi Han,Jianjian Sun,Xiangwen Kong,Jun Li,Xiangyu Zhang
ICLR 2023,Poster
We propose a new neural network design paradigm, the Reversible Column Network (RevCol). The main body of RevCol is composed of multiple copies of subnetworks, named columns, between which multi-level reversible connections are employed. This architectural scheme gives RevCol very different behavior from conventional networks: during forward propagation, features in RevCol are learned to be gradually disentangled when passing through each column, while the total information is maintained rather than compressed or discarded as in other networks. Our experiments suggest that CNN-style RevCol models can achieve very competitive performance on multiple computer vision tasks such as image classification, object detection and semantic segmentation, especially with large parameter budget and large dataset. For example, after ImageNet-22K pre-training, RevCol-XL obtains 88.2% ImageNet-1K accuracy. Given more pre-training data, our largest model RevCol-H reaches 90.0% on ImageNet-1K, 63.8% AP$_{box}$ on COCO detection minival set, 61.0% mIoU on ADE20k segmentation. To our knowledge, it is the best COCO detection and ADE20k segmentation result among pure (static) CNN models. Moreover, as a general macro architecture fashion, RevCol can also be introduced into transformers or other neural networks, which is demonstrated to improve the performances in both computer vision and NLP tasks. We release code and models at https://github.com/megvii-research/RevCol
https://openreview.net/pdf/8a23e95804497f89fe34f28d4642c9318b2fd3c4.pdf
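The RevCol entry above relies on reversible connections so that information is maintained rather than discarded. A minimal sketch of a generic additive reversible unit (RevNet-style two-stream coupling); RevCol's multi-level, multi-column connections are more elaborate than this toy version.

```python
import torch.nn as nn

class ReversibleUnit(nn.Module):
    def __init__(self, f: nn.Module, g: nn.Module):
        super().__init__()
        self.f, self.g = f, g

    def forward(self, x1, x2):
        y1 = x1 + self.f(x2)
        y2 = x2 + self.g(y1)
        return y1, y2

    def inverse(self, y1, y2):
        # Inputs are exactly recoverable from the outputs: no information is lost.
        x2 = y2 - self.g(y1)
        x1 = y1 - self.f(x2)
        return x1, x2
```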
Modeling Multimodal Aleatoric Uncertainty in Segmentation with Mixture of Stochastic Experts
https://openreview.net/forum?id=KE_wJD2RK4
https://openreview.net/forum?id=KE_wJD2RK4
Zhitong Gao,Yucong Chen,Chuyu Zhang,Xuming He
ICLR 2023,Poster
Equipping predicted segmentation with calibrated uncertainty is essential for safety-critical applications. In this work, we focus on capturing the data-inherent uncertainty (aka aleatoric uncertainty) in segmentation, typically when ambiguities exist in input images. Due to the high-dimensional output space and potential multiple modes in segmenting ambiguous images, it remains challenging to predict well-calibrated uncertainty for segmentation. To tackle this problem, we propose a novel mixture of stochastic experts (MoSE) model, where each expert network estimates a distinct mode of the aleatoric uncertainty and a gating network predicts the probabilities of an input image being segmented in those modes. This yields an efficient two-level uncertainty representation. To learn the model, we develop a Wasserstein-like loss that directly minimizes the distribution distance between the MoSE and ground truth annotations. The loss can easily integrate traditional segmentation quality measures and be efficiently optimized via constraint relaxation. We validate our method on the LIDC-IDRI dataset and a modified multimodal Cityscapes dataset. Results demonstrate that our method achieves the state-of-the-art or competitive performance on all metrics.
https://openreview.net/pdf/a6592dd089cd504236b41eb25f4594b99c263305.pdf
On the Robustness of Safe Reinforcement Learning under Observational Perturbations
https://openreview.net/forum?id=jbIYfq4Tr-
https://openreview.net/forum?id=jbIYfq4Tr-
Zuxin Liu,Zijian Guo,Zhepeng Cen,Huan Zhang,Jie Tan,Bo Li,Ding Zhao
ICLR 2023,Poster
Safe reinforcement learning (RL) trains a policy to maximize the task reward while satisfying safety constraints. While prior works focus on performance optimality, we find that the optimal solutions of many safe RL problems are not robust and safe against carefully designed observational perturbations. We formally analyze the unique properties of designing effective observational adversarial attackers in the safe RL setting. We show that baseline adversarial attack techniques for standard RL tasks are not always effective for safe RL and propose two new approaches - one maximizes the cost and the other maximizes the reward. One interesting and counter-intuitive finding is that the maximum reward attack is strong, as it can both induce unsafe behaviors and make the attack stealthy by maintaining the reward. We further propose a robust training framework for safe RL and evaluate it via comprehensive experiments. This paper provides pioneering work on investigating the safety and robustness of RL under observational attacks for future safe RL studies. Code is available at: \url{https://github.com/liuzuxin/safe-rl-robustness}
https://openreview.net/pdf/692f539d83940961224b9d5f7a3aff0536fc6fdd.pdf
Behind the Scenes of Gradient Descent: A Trajectory Analysis via Basis Function Decomposition
https://openreview.net/forum?id=TPiwkItUSu
https://openreview.net/forum?id=TPiwkItUSu
Jianhao Ma,Lingjun Guo,Salar Fattahi
ICLR 2023,Poster
This work analyzes the solution trajectory of gradient-based algorithms via a novel basis function decomposition. We show that, although solution trajectories of gradient-based algorithms may vary depending on the learning task, they behave almost monotonically when projected onto an appropriate orthonormal function basis. Such projection gives rise to a basis function decomposition of the solution trajectory. Theoretically, we use our proposed basis function decomposition to establish the convergence of gradient descent (GD) on several representative learning tasks. In particular, we improve the convergence of GD on symmetric matrix factorization and provide a completely new convergence result for the orthogonal symmetric tensor decomposition. Empirically, we illustrate the promise of our proposed framework on realistic deep neural networks (DNNs) across different architectures, gradient-based solvers, and datasets. Our key finding is that gradient-based algorithms monotonically learn the coefficients of a particular orthonormal function basis of DNNs defined as the eigenvectors of the conjugate kernel after training.
https://openreview.net/pdf/ca6112e62d0190aca22c33e237c6215e3ad9962e.pdf
What Is Missing in IRM Training and Evaluation? Challenges and Solutions
https://openreview.net/forum?id=MjsDeTcDEy
https://openreview.net/forum?id=MjsDeTcDEy
Yihua Zhang,Pranay Sharma,Parikshit Ram,Mingyi Hong,Kush R. Varshney,Sijia Liu
ICLR 2023,Poster
Invariant risk minimization (IRM) has received increasing attention as a way to acquire environment-agnostic data representations and predictions, and also a principled solution for preventing spurious correlations from being learned and improving models’ out-of-distribution generalization. Yet, recent works have found that the optimality of the originally-proposed IRM optimization (IRMV1) may be compromised in practice or could be impossible to achieve in some scenarios. Therefore, a series of advanced IRM algorithms have been developed that show practical improvement over IRMV1. In this work, we revisit these recent IRM advancements and identify and resolve three practical limitations in IRM training and evaluation. First, we find that the effect of batch size during training has been chronically overlooked in previous studies, leaving room for further improvement. We propose small-batch training and highlight the improvements over a set of large-batch optimization techniques. Second, we find that improper selection of evaluation environments could give a false sense of invariance for IRM. To alleviate this effect, we leverage diversified test-time environments to precisely characterize the invariance of IRM when applied in practice. Third, we revisit Ahuja et al. (2020)’s proposal to convert IRM into an ensemble game and identify a limitation when a single invariant predictor is desired instead of an ensemble of individual predictors. We propose a new IRM variant to address this limitation based on a novel viewpoint of ensemble IRM games as consensus-constrained bi-level optimization. Lastly, we conduct extensive experiments (covering 7 existing IRM variants and 7 datasets) to justify the practical significance of revisiting IRM training and evaluation in a principled manner.
https://openreview.net/pdf/cbda5f91d0c8d89eff493b43c5e0177cce612279.pdf
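The entry above revisits the training and evaluation of IRM variants. For reference, a minimal sketch of the IRMv1 penalty they build on: the squared gradient of each environment's risk with respect to a fixed dummy classifier scale w = 1. The binary logistic loss and the per-environment batching below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def irmv1_penalty(logits, labels):
    # Gradient of the environment risk w.r.t. a dummy scale fixed at 1.0.
    scale = torch.ones(1, requires_grad=True, device=logits.device)
    loss = F.binary_cross_entropy_with_logits(logits * scale, labels)
    (grad,) = torch.autograd.grad(loss, [scale], create_graph=True)
    return (grad ** 2).sum()

def irm_objective(model, env_batches, lam=100.0):
    """env_batches: iterable of (x, y) batches, one per training environment."""
    risks, penalties = [], []
    for x, y in env_batches:
        logits = model(x).squeeze(-1)
        risks.append(F.binary_cross_entropy_with_logits(logits, y))
        penalties.append(irmv1_penalty(logits, y))
    return torch.stack(risks).mean() + lam * torch.stack(penalties).mean()
```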
Multi-task Self-supervised Graph Neural Networks Enable Stronger Task Generalization
https://openreview.net/forum?id=1tHAZRqftM
https://openreview.net/forum?id=1tHAZRqftM
Mingxuan Ju,Tong Zhao,Qianlong Wen,Wenhao Yu,Neil Shah,Yanfang Ye,Chuxu Zhang
ICLR 2023,Poster
Self-supervised learning (SSL) for graph neural networks (GNNs) has attracted increasing attention from the graph machine learning community in recent years, owing to its capability to learn performant node embeddings without costly label information. One weakness of conventional SSL frameworks for GNNs is that they learn through a single philosophy, such as mutual information maximization or generative reconstruction. When applied to various downstream tasks, these frameworks rarely perform equally well for every task, because one philosophy may not span the extensive knowledge required for all tasks. To enhance the task generalization across tasks, as an important first step forward in exploring fundamental graph models, we introduce PARETOGNN, a multi-task SSL framework for node representation learning over graphs. Specifically, PARETOGNN is self-supervised by manifold pretext tasks observing multiple philosophies. To reconcile different philosophies, we explore a multiple-gradient descent algorithm, such that PARETOGNN actively learns from every pretext task while minimizing potential conflicts. We conduct comprehensive experiments over four downstream tasks (i.e., node classification, node clustering, link prediction, and partition prediction), and our proposal achieves the best overall performance across tasks on 11 widely adopted benchmark datasets. Besides, we observe that learning from multiple philosophies enhances not only the task generalization but also the single task performances, demonstrating that PARETOGNN achieves better task generalization via the disjoint yet complementary knowledge learned from different philosophies. Our code is publicly available at https://github.com/jumxglhf/ParetoGNN.
https://openreview.net/pdf/3b1a90ddbf7f8ee9be5ac7e8205cc1f71753740e.pdf
Analyzing Tree Architectures in Ensembles via Neural Tangent Kernel
https://openreview.net/forum?id=V_06QV-kZX
https://openreview.net/forum?id=V_06QV-kZX
Ryuichi Kanoh,Mahito Sugiyama
ICLR 2023,Poster
A soft tree is an actively studied variant of a decision tree that updates splitting rules using the gradient method. Although soft trees can take various architectures, their impact is not theoretically well known. In this paper, we formulate and analyze the Neural Tangent Kernel (NTK) induced by soft tree ensembles for arbitrary tree architectures. This kernel leads to the remarkable finding that only the number of leaves at each depth is relevant for the tree architecture in ensemble learning with an infinite number of trees. In other words, if the number of leaves at each depth is fixed, the training behavior in function space and the generalization performance are exactly the same across different tree architectures, even if they are not isomorphic. We also show that the NTK of asymmetric trees like decision lists does not degenerate when they get infinitely deep. This is in contrast to the perfect binary trees, whose NTK is known to degenerate and leads to worse generalization performance for deeper trees.
https://openreview.net/pdf/e5442b04b7f82d52b255f153d530ca1d87f27fd1.pdf
Exploring The Role of Mean Teachers in Self-supervised Masked Auto-Encoders
https://openreview.net/forum?id=7sn6Vxp92xV
https://openreview.net/forum?id=7sn6Vxp92xV
Youngwan Lee,Jeffrey Ryan Willette,Jonghee Kim,Juho Lee,Sung Ju Hwang
ICLR 2023,Poster
Masked image modeling (MIM) has become a popular strategy for self-supervised learning (SSL) of visual representations with Vision Transformers. A representative MIM model, the masked auto-encoder (MAE), randomly masks a subset of image patches and reconstructs the masked patches given the unmasked patches. Concurrently, many recent works in self-supervised learning utilize the student/teacher paradigm which provides the student with an additional target based on the output of a teacher composed of an exponential moving average (EMA) of previous students. Although common, relatively little is known about the dynamics of the interaction between the student and teacher. Through analysis on a simple linear model, we find that the teacher conditionally removes previous gradient directions based on feature similarities which effectively acts as a conditional momentum regularizer. From this analysis, we present a simple SSL method, the Reconstruction-Consistent Masked Auto-Encoder (RC-MAE) by adding an EMA teacher to MAE. We find that RC-MAE converges faster and requires less memory usage than state-of-the-art self-distillation methods during pre-training, which may provide a way to enhance the practicality of prohibitively expensive self-supervised learning of Vision Transformer models. Additionally, we show that RC-MAE achieves more robustness and better performance compared to MAE on downstream tasks such as ImageNet-1K classification, object detection, and instance segmentation.
https://openreview.net/pdf/6501e6eba39a116305b247c3726769e527edd8a6.pdf
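To make the student/teacher interaction described above concrete, here is a minimal NumPy sketch of an EMA teacher providing an extra reconstruction-consistency target for a toy linear "auto-encoder". The masking ratio, momentum, and loss weighting are assumptions chosen for illustration; this is not the released RC-MAE implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, momentum, lr = 16, 0.99, 0.1

# toy "auto-encoder": a single linear map that should reconstruct the input
student = rng.normal(scale=0.1, size=(dim, dim))
teacher = student.copy()                     # EMA copy of the student

for step in range(200):
    x = rng.normal(size=dim)
    mask = rng.random(dim) < 0.75            # mask 75% of the "patches"
    x_masked = np.where(mask, 0.0, x)

    recon_s = student @ x_masked             # student reconstruction
    recon_t = teacher @ x_masked             # teacher reconstruction (no gradient)

    # loss = reconstruction of the original + consistency with the teacher
    grad = (2 * np.outer(recon_s - x, x_masked)
            + 2 * np.outer(recon_s - recon_t, x_masked)) / dim
    student -= lr * grad

    # the teacher is an exponential moving average of past students
    teacher = momentum * teacher + (1.0 - momentum) * student

print(np.linalg.norm(student @ x_masked - x))   # final reconstruction error
```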
Sub-Task Decomposition Enables Learning in Sequence to Sequence Tasks
https://openreview.net/forum?id=BrJATVZDWEH
https://openreview.net/forum?id=BrJATVZDWEH
Noam Wies,Yoav Levine,Amnon Shashua
ICLR 2023,Poster
The field of Natural Language Processing (NLP) has experienced a dramatic leap in capabilities with the recent introduction of huge Language Models (LMs). Despite this success, natural language problems that involve several compounded steps are still practically unlearnable, even by the largest LMs. This is consistent with the experimental failures of end-to-end learning on composite problems that have been demonstrated in a variety of domains. An effective mitigation is to introduce intermediate supervision for solving sub-tasks of the compounded problem. Recently, several works have demonstrated high gains by taking a straightforward approach for incorporating intermediate supervision in compounded natural language problems: the sequence-to-sequence LM is fed with an augmented input, in which the decomposed tasks' labels are simply concatenated to the original input. In this paper, we prove a positive learning result that motivates these recent efforts. We show that when concatenating intermediate supervision to the input and training a sequence-to-sequence model on this modified input, unlearnable composite problems can become learnable. We show that this is true for any family of tasks which, on the one hand, are unlearnable and, on the other hand, can be decomposed into a polynomial number of simple sub-tasks, each of which depends only on $O(1)$ previous sub-task results. Beyond motivating contemporary empirical efforts for incorporating intermediate supervision in sequence-to-sequence language models, our positive theoretical result is the first of its kind among results on the benefits of intermediate supervision for neural-network learning: until now, all theoretical results on the subject have been negative, i.e., they show cases where learning is impossible without intermediate supervision, whereas our result is positive, showing that learning is facilitated in the presence of intermediate supervision.
https://openreview.net/pdf/0e2acc3ed9aaaff91e94533aa1eb2cec3a27915b.pdf
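The input-augmentation recipe described above, concatenating decomposed sub-task labels to the original input of a sequence-to-sequence example, can be illustrated with a toy multi-digit addition task. The separators, zero-padding, and decomposition format below are assumptions chosen for readability, not the paper's exact encoding.

```python
def decompose_addition(a, b):
    """Intermediate sub-task results for column-wise addition with carries."""
    steps, carry = [], 0
    for da, db in zip(reversed(str(a).zfill(4)), reversed(str(b).zfill(4))):
        s = int(da) + int(db) + carry
        carry = s // 10
        steps.append(f"{da}+{db}+carry={s % 10} carry={carry}")
    return steps

def make_example(a, b, with_intermediate):
    """Build (source, target) pairs; optionally augment the source."""
    source = f"{a} + {b} ="
    if with_intermediate:
        # sub-task labels are simply concatenated to the original input
        source += " | " + " ; ".join(decompose_addition(a, b))
    return source, str(a + b)

print(make_example(1234, 5678, with_intermediate=False))
print(make_example(1234, 5678, with_intermediate=True))
```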
Evaluating Long-Term Memory in 3D Mazes
https://openreview.net/forum?id=yHLvIlE9RGN
https://openreview.net/forum?id=yHLvIlE9RGN
Jurgis Pašukonis,Timothy P Lillicrap,Danijar Hafner
ICLR 2023,Poster
Intelligent agents need to remember salient information to reason in partially-observed environments. For example, agents with a first-person view should remember the positions of relevant objects even if they go out of view. Similarly, to effectively navigate through rooms agents need to remember the floor plan of how rooms are connected. However, most benchmark tasks in reinforcement learning do not test long-term memory in agents, slowing down progress in this important research direction. In this paper, we introduce the Memory Maze, a 3D domain of randomized mazes specifically designed for evaluating long-term memory in agents. Unlike existing benchmarks, Memory Maze measures long-term memory separate from confounding agent abilities and requires the agent to localize itself by integrating information over time. With Memory Maze, we propose an online reinforcement learning benchmark, a diverse offline dataset, and an offline probing evaluation. Recording a human player establishes a strong baseline and verifies the need to build up and retain memories, which is reflected in their gradually increasing rewards within each episode. We find that current algorithms benefit from training with truncated backpropagation through time and succeed on small mazes, but fall short of human performance on the large mazes, leaving room for future algorithmic designs to be evaluated on the Memory Maze.
https://openreview.net/pdf/51ee6a2d289af481a877c54cb8295fa6d6a0fc5f.pdf
Proactive Multi-Camera Collaboration for 3D Human Pose Estimation
https://openreview.net/forum?id=CPIy9TWFYBG
https://openreview.net/forum?id=CPIy9TWFYBG
Hai Ci,Mickel Liu,Xuehai Pan,fangwei zhong,Yizhou Wang
ICLR 2023,Poster
This paper presents a multi-agent reinforcement learning (MARL) scheme for proactive Multi-Camera Collaboration in 3D Human Pose Estimation in dynamic human crowds. Traditional fixed-viewpoint multi-camera solutions for human motion capture (MoCap) are limited in capture space and susceptible to dynamic occlusions. Active camera approaches proactively control camera poses to find optimal viewpoints for 3D reconstruction. However, current methods still face challenges with credit assignment and environment dynamics. To address these issues, our proposed method introduces a novel Collaborative Triangulation Contribution Reward (CTCR) that improves convergence and alleviates multi-agent credit assignment issues resulting from using 3D reconstruction accuracy as the shared reward. Additionally, we jointly train our model with multiple world dynamics learning tasks to better capture environment dynamics and encourage anticipatory behaviors for occlusion avoidance. We evaluate our proposed method in four photo-realistic UE4 environments to ensure validity and generalizability. Empirical results show that our method outperforms fixed and active baselines in various scenarios with different numbers of cameras and humans.
https://openreview.net/pdf/c0e6c42114afa223ad1ba354773bb35417716e80.pdf
Become a Proficient Player with Limited Data through Watching Pure Videos
https://openreview.net/forum?id=Sy-o2N0hF4f
https://openreview.net/forum?id=Sy-o2N0hF4f
Weirui Ye,Yunsheng Zhang,Pieter Abbeel,Yang Gao
ICLR 2023,Poster
Recently, RL has shown its strong ability on visually complex tasks. However, it suffers from low sample efficiency and poor generalization, which prevent RL from being useful in real-world scenarios. Inspired by the huge success of unsupervised pre-training methods in the language and vision domains, we propose to improve the sample efficiency via a novel pre-training method for model-based RL. Instead of using pre-recorded agent trajectories that come with their own actions, we consider the setting where the pre-training data are action-free videos, which are more common and readily available in the real world. We introduce a two-phase training pipeline as follows: in the pre-training phase, we implicitly extract the hidden action embedding from videos and pre-train the visual representation and the environment dynamics network through a novel forward-inverse cycle consistency (FICC) objective based on vector quantization; for downstream tasks, we finetune with a small amount of task data based on the learned models. Our framework can significantly improve the sample efficiency on Atari Games with data of only one hour of game playing. We achieve 118.4\% mean human performance and 36.0\% median performance with only 50k environment steps, which is 85.6\% and 65.1\% better than the scratch EfficientZero model. We believe such a pre-training approach can provide an option for solving real-world RL problems. The code is available at \url{https://github.com/YeWR/FICC.git}.
https://openreview.net/pdf/b12ad9eb8aa7095d47b4f5b0248dba72a72616bf.pdf
Human MotionFormer: Transferring Human Motions with Vision Transformers
https://openreview.net/forum?id=lQVpasnQS62
https://openreview.net/forum?id=lQVpasnQS62
Hongyu Liu,Xintong Han,Chenbin Jin,Lihui Qian,Huawei Wei,Zhe Lin,Faqiang Wang,Haoye Dong,Yibing Song,Jia Xu,Qifeng Chen
ICLR 2023,Poster
Human motion transfer aims to transfer motions from a target dynamic person to a source static one for motion synthesis. An accurate matching between the source person and the target motion in both large and subtle motion changes is vital for improving the transferred motion quality. In this paper, we propose Human MotionFormer, a hierarchical ViT framework that leverages global and local perceptions to capture large and subtle motion matching, respectively. It consists of two ViT encoders to extract input features (i.e., a target motion image and a source human image) and a ViT decoder with several cascaded blocks for feature matching and motion transfer. In each block, we set the target motion feature as Query and the source person as Key and Value, calculating the cross-attention maps to conduct a global feature matching. Further, we introduce a convolutional layer to improve the local perception after the global cross-attention computations. This matching process is implemented in both warping and generation branches to guide the motion transfer. During training, we propose a mutual learning loss to enable the co-supervision between warping and generation branches for better motion representations. Experiments show that our Human MotionFormer sets the new state-of-the-art performance both qualitatively and quantitatively. Project page: https://github.com/KumapowerLIU/Human-MotionFormer.
https://openreview.net/pdf/d7f8c668602a7ac86b440d7547893cb07f0881f6.pdf
Backstepping Temporal Difference Learning
https://openreview.net/forum?id=YPChvOgRXRA
https://openreview.net/forum?id=YPChvOgRXRA
Han-Dong Lim,Donghwan Lee
ICLR 2023,Poster
Off-policy learning ability is an important feature of reinforcement learning (RL) for practical applications. However, even one of the most elementary RL algorithms, temporal-difference (TD) learning, is known to suffer from a divergence issue when the off-policy scheme is used together with linear function approximation. To overcome the divergent behavior, several off-policy TD learning algorithms have been developed to date. In this work, we provide a unified view of such algorithms from a purely control-theoretic perspective. Our method relies on the backstepping technique, which is widely used in nonlinear control theory.
https://openreview.net/pdf/2e9905b6b2da1535ad686cdfabe136dd962adb12.pdf
A General Rank Preserving Framework for Asymmetric Image Retrieval
https://openreview.net/forum?id=dYHYXZ3uGdQ
https://openreview.net/forum?id=dYHYXZ3uGdQ
Hui Wu,Min Wang,Wengang Zhou,Houqiang Li
ICLR 2023,Poster
Asymmetric image retrieval aims to deploy compatible models on platforms of different resources to achieve a balance between computational efficiency and retrieval accuracy. The most critical issue is how to align the output features of different models. Despite great progress, existing approaches apply strong constraints so that features or neighbor structures are strictly aligned across different models. However, such a one-to-one constraint is too strict to be well preserved for query models with low capacity. Considering that the primary concern of users is the rank of the returned images, we propose a generic rank preserving framework, which achieves feature compatibility and order consistency between query and gallery models simultaneously. Specifically, we propose two alternatives to instantiate the framework. One realizes straightforward rank order preservation by directly preserving the consistency of the sorting results. To make the sorting process differentiable, the Heaviside step function in sorting is approximated by the sigmoid function. The other aims to preserve a learnable monotonic mapping relationship between the returned similarity scores of the query and gallery models. The mapped similarity scores of the gallery model serve as pseudo-supervision to guide the query model training. Extensive experiments on various large-scale datasets demonstrate the superiority of our two proposed methods.
https://openreview.net/pdf/7c38e05c89cf2c2dc6799815ab82d96a7d0e445d.pdf
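The sigmoid relaxation of the Heaviside step mentioned above can be sketched as follows: the hard rank of a similarity score is one plus a sum of step functions of score differences, and replacing the step with a sigmoid makes the ranks differentiable in the scores. The temperature and the simple rank-matching loss below are illustrative assumptions, not the paper's full framework.

```python
import numpy as np

def soft_ranks(scores, temperature=0.05):
    """Differentiable descending ranks (rank 1 = largest score).

    The hard rank of item i is 1 + sum_j H(s_j - s_i), with H the Heaviside
    step; a sigmoid of the score difference gives a smooth surrogate.
    """
    diff = scores[None, :] - scores[:, None]            # s_j - s_i
    soft_step = 1.0 / (1.0 + np.exp(-diff / temperature))
    np.fill_diagonal(soft_step, 0.0)
    return 1.0 + soft_step.sum(axis=1)

gallery_scores = np.array([0.91, 0.40, 0.73, 0.12])     # e.g. from a large gallery model
query_scores = np.array([0.80, 0.45, 0.70, 0.20])       # e.g. from a small query model
# a rank-preservation loss can simply match the two soft rankings
print(soft_ranks(gallery_scores))
print(soft_ranks(query_scores))
print(np.mean((soft_ranks(gallery_scores) - soft_ranks(query_scores)) ** 2))
```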
Mega: Moving Average Equipped Gated Attention
https://openreview.net/forum?id=qNLe3iq2El
https://openreview.net/forum?id=qNLe3iq2El
Xuezhe Ma,Chunting Zhou,Xiang Kong,Junxian He,Liangke Gui,Graham Neubig,Jonathan May,Luke Zettlemoyer
ICLR 2023,Poster
The design choices in the Transformer attention mechanism, including weak inductive bias and quadratic computational complexity, have limited its application for modeling long sequences. In this paper, we introduce Mega, a simple, theoretically grounded, single-head gated attention mechanism equipped with (exponential) moving average to incorporate inductive bias of position-aware local dependencies into the position-agnostic attention mechanism. We further propose a variant of Mega that offers linear time and space complexity yet yields only minimal quality loss, by efficiently splitting the whole sequence into multiple chunks with fixed length. Extensive experiments on a wide range of sequence modeling benchmarks, including the Long Range Arena, neural machine translation, auto-regressive language modeling, and image and speech classification, show that Mega achieves significant improvements over other sequence models, including variants of Transformers and recent state space models.
https://openreview.net/pdf/5bb259d4fc89a118041f31e8405c7ab23df0b4a7.pdf
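A minimal NumPy sketch of the two ingredients named in the Mega abstract above, a damped exponential moving average over the sequence followed by single-head gated attention, is given below. The EMA parameterization, gating form, and dimensions are simplifications assumed for illustration and do not follow the released Mega code.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def damped_ema(x, alpha=0.3, delta=0.9):
    """Per-dimension damped EMA along time: h_t = a*x_t + (1 - a*d)*h_{t-1}."""
    h, out = np.zeros(x.shape[1]), []
    for x_t in x:
        h = alpha * x_t + (1.0 - alpha * delta) * h
        out.append(h)
    return np.stack(out)

def gated_attention(x, rng):
    """Single-head attention on EMA-smoothed inputs, gated against the input."""
    seq_len, dim = x.shape
    wq, wk, wv = (rng.normal(scale=dim ** -0.5, size=(dim, dim)) for _ in range(3))
    wg = rng.normal(scale=dim ** -0.5, size=(dim, dim))

    x_ema = damped_ema(x)                       # position-aware local inductive bias
    q, k, v = x_ema @ wq, x_ema @ wk, x @ wv
    attn = softmax(q @ k.T / np.sqrt(dim), axis=-1) @ v

    gate = 1.0 / (1.0 + np.exp(-(x_ema @ wg)))  # sigmoid gate
    return gate * attn + (1.0 - gate) * x       # gated residual combination

rng = np.random.default_rng(0)
x = rng.normal(size=(12, 8))                    # (sequence length, model dim)
print(gated_attention(x, rng).shape)
```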
Parallel Deep Neural Networks Have Zero Duality Gap
https://openreview.net/forum?id=6zrOr_Rdhjs
https://openreview.net/forum?id=6zrOr_Rdhjs
Yifei Wang,Tolga Ergen,Mert Pilanci
ICLR 2023,Poster
Training deep neural networks is a challenging non-convex optimization problem. Recent work has proven that strong duality holds (i.e., the duality gap is zero) for regularized finite-width two-layer ReLU networks and consequently provided an equivalent convex training problem. However, extending this result to deeper networks remains an open problem. In this paper, we prove that the duality gap for deeper linear networks with vector outputs is non-zero. In contrast, we show that a zero duality gap can be obtained by stacking standard deep networks in parallel, which we call a parallel architecture, and modifying the regularization. Therefore, we prove strong duality and the existence of equivalent convex problems that enable globally optimal training of deep networks. As a by-product of our analysis, we demonstrate that weight decay regularization on the network parameters explicitly encourages low-rank solutions via closed-form expressions. In addition, we show that strong duality holds for three-layer standard ReLU networks given rank-1 data matrices.
https://openreview.net/pdf/49efda6feccac495256971139ea3c45e985d5e47.pdf
Information-Theoretic Analysis of Unsupervised Domain Adaptation
https://openreview.net/forum?id=c5tbxWXU9-y
https://openreview.net/forum?id=c5tbxWXU9-y
Ziqiao Wang,Yongyi Mao
ICLR 2023,Poster
This paper uses information-theoretic tools to analyze the generalization error in unsupervised domain adaptation (UDA). We present novel upper bounds for two notions of generalization errors. The first notion measures the gap between the population risk in the target domain and that in the source domain, and the second measures the gap between the population risk in the target domain and the empirical risk in the source domain. While our bounds for the first kind of error are in line with the traditional analysis and give similar insights, our bounds on the second kind of error are algorithm-dependent, which also provide insights into algorithm designs. Specifically, we present two simple techniques for improving generalization in UDA and validate them experimentally.
https://openreview.net/pdf/a95b76c0a74038a35b156f33bc0a432274fc4d96.pdf
Pessimism in the Face of Confounders: Provably Efficient Offline Reinforcement Learning in Partially Observable Markov Decision Processes
https://openreview.net/forum?id=PbkBDQ5_UbV
https://openreview.net/forum?id=PbkBDQ5_UbV
Miao Lu,Yifei Min,Zhaoran Wang,Zhuoran Yang
ICLR 2023,Poster
We study offline reinforcement learning (RL) in partially observable Markov decision processes. In particular, we aim to learn an optimal policy from a dataset collected by a behavior policy which possibly depends on the latent state. Such a dataset is confounded in the sense that the latent state simultaneously affects the action and the observation, which is prohibitive for existing offline RL algorithms. To this end, we propose the \underline{P}roxy variable \underline{P}essimistic \underline{P}olicy \underline{O}ptimization (\texttt{P3O}) algorithm, which addresses the confounding bias and the distributional shift between the optimal and behavior policies in the context of general function approximation. At the core of \texttt{P3O} is a coupled sequence of pessimistic confidence regions constructed via proximal causal inference, which is formulated as minimax estimation. Under a partial coverage assumption on the confounded dataset, we prove that \texttt{P3O} achieves a $n^{-1/2}$-suboptimality, where $n$ is the number of trajectories in the dataset. To our best knowledge, \texttt{P3O} is the first provably efficient offline RL algorithm for POMDPs with a confounded dataset.
https://openreview.net/pdf/50f1a65a8ac6404dd76325fe84b34c5695ba7360.pdf
Understanding Zero-shot Adversarial Robustness for Large-Scale Models
https://openreview.net/forum?id=P4bXCawRi5J
https://openreview.net/forum?id=P4bXCawRi5J
Chengzhi Mao,Scott Geng,Junfeng Yang,Xin Wang,Carl Vondrick
ICLR 2023,Poster
Pretrained large-scale vision-language models like CLIP have exhibited strong generalization over unseen tasks. Yet imperceptible adversarial perturbations can significantly reduce CLIP's performance on new tasks. In this work, we identify and explore the problem of adapting large-scale models for zero-shot adversarial robustness. We first identify two key factors during model adaptation--training losses and adaptation methods--that affect the model's zero-shot adversarial robustness. We then propose a text-guided contrastive adversarial training loss, which aligns the text embeddings and the adversarial visual features with contrastive learning on a small set of training data. We apply this training loss to two adaptation methods, model finetuning and visual prompt tuning. We find that visual prompt tuning is more effective in the absence of text, while finetuning wins in the presence of text guidance. Overall, our approach significantly improves the zero-shot adversarial robustness over CLIP, achieving an average improvement of 31 points over ImageNet and 15 zero-shot datasets. We hope this work can shed light on understanding the zero-shot adversarial robustness of large-scale models.
https://openreview.net/pdf/3260fd25cd7bfa3805e9cf3a86f91b87e701984d.pdf
Can We Faithfully Represent Absence States to Compute Shapley Values on a DNN?
https://openreview.net/forum?id=YV8tP7bW6Kt
https://openreview.net/forum?id=YV8tP7bW6Kt
Jie Ren,Zhanpeng Zhou,Qirui Chen,Quanshi Zhang
ICLR 2023,Poster
Masking some input variables of a deep neural network (DNN) and computing output changes on the masked input sample represent a typical way to compute attributions of input variables in the sample. People usually mask an input variable using its baseline value. However, there is no theory to examine whether the baseline value faithfully represents the absence of an input variable, i.e., removing all signals from the input variable. Fortunately, recent studies (Ren et al., 2023a; Deng et al., 2022a) show that the inference score of a DNN can be strictly disentangled into a set of causal patterns (or concepts) encoded by the DNN. Therefore, we propose to use causal patterns to examine the faithfulness of baseline values. More crucially, it is proven that causal patterns can be explained as the elementary rationale of the Shapley value. Furthermore, we propose a method to learn optimal baseline values, and experimental results have demonstrated its effectiveness.
https://openreview.net/pdf/e2a1d8064ef92b75c5a2601cf5660a43e0a3c2b5.pdf
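For reference, below is a minimal sketch of the standard masking-based Monte Carlo Shapley estimator that the abstract above builds on, where absent variables are replaced by a baseline value. The toy model and zero baseline are assumptions; the paper's causal-pattern analysis of which baselines are faithful is not reproduced here.

```python
import numpy as np

def shapley_mc(model, x, baseline, n_samples=2000, rng=None):
    """Monte Carlo Shapley values: absent features are set to the baseline."""
    rng = rng or np.random.default_rng(0)
    d = len(x)
    phi = np.zeros(d)
    for _ in range(n_samples):
        order = rng.permutation(d)
        masked = baseline.copy()
        prev = model(masked)
        for i in order:              # add features one by one in a random order
            masked[i] = x[i]
            cur = model(masked)
            phi[i] += cur - prev     # marginal contribution of feature i
            prev = cur
    return phi / n_samples

# toy model with an interaction term, baseline = all zeros
model = lambda z: 2.0 * z[0] + z[1] * z[2]
x = np.array([1.0, 2.0, 3.0])
baseline = np.zeros(3)
phi = shapley_mc(model, x, baseline)
print(phi, phi.sum(), model(x) - model(baseline))   # efficiency: the sums match
```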
Dataless Knowledge Fusion by Merging Weights of Language Models
https://openreview.net/forum?id=FCnohuR6AnM
https://openreview.net/forum?id=FCnohuR6AnM
Xisen Jin,Xiang Ren,Daniel Preotiuc-Pietro,Pengxiang Cheng
ICLR 2023,Poster
Fine-tuning pre-trained language models has become the prevalent paradigm for building downstream NLP models. Oftentimes fine-tuned models are readily available but their training data is not, due to data privacy or intellectual property concerns. This creates a barrier to fusing knowledge across individual models to yield a better single model. In this paper, we study the problem of merging individual models built on different training data sets to obtain a single model that performs well both across all data set domains and can generalize on out-of-domain data. We propose a data-less knowledge fusion method that merges models in their parameter space, guided by weights that minimize prediction differences between the merged model and the individual models. Over a battery of evaluation settings, we show that the proposed method significantly outperforms baselines such as Fisher-weighted averaging or model ensembling. Further, we find that our method is a promising alternative to multi-task learning that can preserve or sometimes improve over the individual models without access to the training data. Finally, model merging is more efficient than training a multi-task model, thus making it applicable to a wider set of scenarios.
https://openreview.net/pdf/21be6dbfbc3e4e121e97e4185dbd267199c7a4a4.pdf
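As a simple point of comparison for the parameter-space merging discussed above, here is a minimal sketch of plain and importance-weighted (Fisher-style) averaging of per-layer weights, i.e., the baselines the abstract compares against rather than the paper's prediction-difference objective. The parameter names and toy Fisher estimates are placeholders.

```python
import numpy as np

def merge_models(models, weights=None):
    """Average each parameter across models, optionally with per-model weights.

    models:  list of dicts mapping parameter name -> np.ndarray
    weights: optional list of dicts with the same keys giving non-negative
             importance weights (e.g. Fisher information estimates).
    """
    merged = {}
    for name in models[0]:
        if weights is None:
            merged[name] = np.mean([m[name] for m in models], axis=0)
        else:
            num = sum(w[name] * m[name] for m, w in zip(models, weights))
            den = sum(w[name] for w in weights) + 1e-12
            merged[name] = num / den
    return merged

rng = np.random.default_rng(0)
model_a = {"linear.weight": rng.normal(size=(4, 4)), "linear.bias": rng.normal(size=4)}
model_b = {"linear.weight": rng.normal(size=(4, 4)), "linear.bias": rng.normal(size=4)}
fisher_a = {k: np.abs(rng.normal(size=v.shape)) for k, v in model_a.items()}
fisher_b = {k: np.abs(rng.normal(size=v.shape)) for k, v in model_b.items()}

plain = merge_models([model_a, model_b])
fisher_weighted = merge_models([model_a, model_b], [fisher_a, fisher_b])
print(plain["linear.bias"], fisher_weighted["linear.bias"])
```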
Universal Vision-Language Dense Retrieval: Learning A Unified Representation Space for Multi-Modal Retrieval
https://openreview.net/forum?id=PQOlkgsBsik
https://openreview.net/forum?id=PQOlkgsBsik
Zhenghao Liu,Chenyan Xiong,Yuanhuiyi Lv,Zhiyuan Liu,Ge Yu
ICLR 2023,Poster
This paper presents Universal Vision-Language Dense Retrieval (UniVL-DR), which builds a unified model for multi-modal retrieval. UniVL-DR encodes queries and multi-modality resources in an embedding space for searching candidates from different modalities. To learn a unified embedding space for multi-modal retrieval, UniVL-DR proposes two techniques: 1) Universal embedding optimization strategy, which contrastively optimizes the embedding space using the modality-balanced hard negatives; 2) Image verbalization method, which bridges the modality gap between images and texts in the raw data space. UniVL-DR achieves the state-of-the-art on the multi-modal open-domain question answering benchmark, WebQA, and outperforms all retrieval models on the two subtasks, text-text retrieval and text-image retrieval. It demonstrates that universal multi-modal search is feasible to replace the divide-and-conquer pipeline with a united model and also benefits single/cross modality tasks. All source codes of this work are available at https://github.com/OpenMatch/UniVL-DR.
https://openreview.net/pdf/b896aabb513ac882a484a404912e8485dbb11cc8.pdf
DFlow: Learning to Synthesize Better Optical Flow Datasets via a Differentiable Pipeline
https://openreview.net/forum?id=5O2uzDusEN5
https://openreview.net/forum?id=5O2uzDusEN5
Kwon Byung-Ki,Nam Hyeon-Woo,Ji-Yun Kim,Tae-Hyun Oh
ICLR 2023,Poster
Comprehensive studies of synthetic optical flow datasets have attempted to reveal what properties lead to accuracy improvement in learning-based optical flow estimation. However, manually identifying and verifying the properties that contribute to accurate optical flow estimation requires large-scale trial-and-error experiments that iteratively generate whole synthetic datasets and train on them, which is impractical. To address this challenge, we propose a differentiable optical flow data generation pipeline and a loss function to drive the pipeline, called DFlow. DFlow efficiently synthesizes a dataset effective for a target domain without the need for cumbersome trial and error. This favorable property is achieved by an efficient dataset comparison method that uses neural networks to approximately encode each dataset and compares the proxy networks instead of explicitly comparing datasets in a pairwise way. Our experiments show the competitive performance of DFlow against prior art in pre-training. Furthermore, compared to competing datasets, DFlow achieves the best fine-tuning performance on the Sintel public benchmark with RAFT.
https://openreview.net/pdf/0ef1c24f1cf3ba80131634c7b890f2eb921fef95.pdf
Sparse Random Networks for Communication-Efficient Federated Learning
https://openreview.net/forum?id=k1FHgri5y3-
https://openreview.net/forum?id=k1FHgri5y3-
Berivan Isik,Francesco Pase,Deniz Gunduz,Tsachy Weissman,Zorzi Michele
ICLR 2023,Poster
One main challenge in federated learning is the large communication cost of exchanging weight updates from clients to the server at each round. While prior work has made great progress in compressing the weight updates through gradient compression methods, we propose a radically different approach that does not update the weights at all. Instead, our method freezes the weights at their initial \emph{random} values and learns how to sparsify the random network for the best performance. To this end, the clients collaborate in training a \emph{stochastic} binary mask to find the optimal sparse random network within the original one. At the end of the training, the final model is a sparse network with random weights -- or a subnetwork inside the dense random network. We show improvements in accuracy, communication (less than $1$ bit per parameter (bpp)), convergence speed, and final model size (less than $1$ bpp) over relevant baselines on MNIST, EMNIST, CIFAR-10, and CIFAR-100 datasets, in the low bitrate regime.
https://openreview.net/pdf/59128a762bccd9ac786cf20c698a39364cbe709c.pdf
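The core idea above, freezing random weights and learning a keep-probability per weight, can be sketched in a few lines of NumPy. The relaxed gradient update, the toy regression task, and the absence of any federated aggregation are simplifying assumptions; this is not the paper's training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, lr = 8, 1, 0.5

W = rng.normal(size=(d_in, d_out))          # frozen random weights (never updated)
scores = np.zeros((d_in, d_out))            # learnable logits of keep-probabilities

def forward(x, sample=True):
    p = 1.0 / (1.0 + np.exp(-scores))       # keep-probability per weight
    mask = (rng.random(p.shape) < p) if sample else (p > 0.5)  # sampled vs. thresholded
    return x @ (W * mask), p

# toy regression target produced by a hidden sparse sub-network of W
true_mask = rng.random((d_in, d_out)) < 0.5
for step in range(500):
    x = rng.normal(size=(32, d_in))
    y = x @ (W * true_mask)
    pred, p = forward(x)
    err = pred - y
    # relaxed, straight-through-style update: gradient w.r.t. the mask,
    # pushed onto the scores through the sigmoid derivative
    grad_mask = x.T @ err / len(x) * W
    scores -= lr * grad_mask * p * (1 - p)

print("recovered mask agrees on", np.mean((scores > 0) == true_mask), "of weights")
```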
A General Framework For Proving The Equivariant Strong Lottery Ticket Hypothesis
https://openreview.net/forum?id=vVJZtlZB9D
https://openreview.net/forum?id=vVJZtlZB9D
Damien Ferbach,Christos Tsirigotis,Gauthier Gidel,Joey Bose
ICLR 2023,Poster
The Strong Lottery Ticket Hypothesis (SLTH) stipulates the existence of a subnetwork within a sufficiently overparameterized (dense) neural network that---when initialized randomly and without any training---achieves the accuracy of a fully trained target network. Recent works by Da Cunha et al. (2022) and Burkholz (2022) demonstrate that the SLTH can be extended to translation equivariant networks---i.e. CNNs---with the same level of overparametrization as needed for the SLTs in dense networks. However, modern neural networks are capable of incorporating more than just translation symmetry, and developing architectures equivariant to more general symmetries such as rotation and permutation has been a powerful design principle. In this paper, we generalize the SLTH to functions that preserve the action of the group $G$---i.e. $G$-equivariant networks---and prove, with high probability, that one can approximate any $G$-equivariant network of fixed width and depth by pruning a randomly initialized overparametrized $G$-equivariant network to a $G$-equivariant subnetwork. We further prove that our prescribed overparametrization scheme is optimal and provide a lower bound on the number of effective parameters as a function of the error tolerance. We develop our theory for a large range of groups, including subgroups of the Euclidean group $\text{E}(2)$ and the symmetric group $G \leq \mathcal{S}_n$---allowing us to find SLTs for MLPs, CNNs, $\text{E}(2)$-steerable CNNs, and permutation equivariant networks as specific instantiations of our unified framework. Empirically, we verify our theory by pruning overparametrized $\text{E}(2)$-steerable CNNs, $k$-order GNNs, and message passing GNNs to match the performance of trained target networks.
https://openreview.net/pdf/d98e7ce58fd6265c2396874cce80881ba1f699a3.pdf
Robust Fair Clustering: A Novel Fairness Attack and Defense Framework
https://openreview.net/forum?id=4LMIZY7gt7h
https://openreview.net/forum?id=4LMIZY7gt7h
Anshuman Chhabra,Peizhao Li,Prasant Mohapatra,Hongfu Liu
ICLR 2023,Poster
Clustering algorithms are widely used in many societal resource allocation applications, such as loan approvals and candidate recruitment, among others, and hence, biased or unfair model outputs can adversely impact individuals that rely on these applications. To this end, many $\textit{fair}$ clustering approaches have been recently proposed to counteract this issue. Due to the potential for significant harm, it is essential to ensure that fair clustering algorithms provide consistently fair outputs even under adversarial influence. However, fair clustering algorithms have not been studied from an adversarial attack perspective. In contrast to previous research, we seek to bridge this gap and conduct a robustness analysis against fair clustering by proposing a novel $\textit{black-box fairness attack}$. Through comprehensive experiments, we find that state-of-the-art models are highly susceptible to our attack as it can reduce their fairness performance significantly. Finally, we propose Consensus Fair Clustering (CFC), the first $\textit{robust fair clustering}$ approach that transforms consensus clustering into a fair graph partitioning problem, and iteratively learns to generate fair cluster outputs. Experimentally, we observe that CFC is highly robust to the proposed attack and is thus a truly robust fair clustering alternative.
https://openreview.net/pdf/d59b24dc62f96828d592ff6b7f4251f3516e72b4.pdf
Learning to Jointly Share and Prune Weights for Grounding Based Vision and Language Models
https://openreview.net/forum?id=UMERaIHMwB3
https://openreview.net/forum?id=UMERaIHMwB3
Shangqian Gao,Burak Uzkent,Yilin Shen,Heng Huang,Hongxia Jin
ICLR 2023,Poster
Transformers have seen growing interest in processing different modalities, including language and image data. As a result, we can process vision and language data using transformers that are architecturally similar. Leveraging this feature of transformers, we propose weight sharing across two transformer backbones and within the same transformer backbone, together with pruning across the two backbones, in a unified framework. More specifically, we investigate weight sharing and pruning for two components of the transformers: (1) Multi-head Self-Attention (MSA) and (2) Feed-Forward Network (FFN) layers. To jointly perform weight sharing and pruning, we propose to use a regularization term to align model weights and the desired structure during the multimodal pre-training step. The structure vectors for sharing and pruning are generated by a hypernetwork, which can capture complex interactions between pruning and sharing across layers and modalities. We train the hypernetwork and model weights iteratively so that the learned structure evolves along with the model weights. After minimizing the proposed objective in the pre-training step, we perform weight sharing and pruning and fine-tune the compressed model on downstream tasks. Finally, we perform experiments on vision and language tasks, including Referring Expression Comprehension (REC), Visual Question Answering (VQA), and Object Detection, using the state-of-the-art grounding-based models MDETR and GLIP. Our experiments show that we can compress these models by $35-40\%$ by sharing and pruning MSA and FFN weights with almost no loss in accuracy.
https://openreview.net/pdf/090eb3d3debddf7f3522e5122d1f1f190e4f4082.pdf
Spatial Attention Kinetic Networks with E(n)-Equivariance
https://openreview.net/forum?id=3DIpIf3wQMC
https://openreview.net/forum?id=3DIpIf3wQMC
Yuanqing Wang,John Chodera
ICLR 2023,Poster
Neural networks that are equivariant to rotations, translations, reflections, and permutations on $n$-dimensional geometric space have shown promise in physical modeling, for tasks ranging from accurately yet inexpensively modeling complex potential energy surfaces to guiding the sampling of complex dynamical systems or forecasting their time evolution. Current state-of-the-art methods employ spherical harmonics to encode higher-order interactions among particles, which are computationally expensive. In this paper, we propose a simple alternative functional form that uses neurally parametrized linear combinations of edge vectors to achieve equivariance while still universally approximating node environments. Incorporating this insight, we design \emph{spatial attention kinetic networks} with E(n)-equivariance, or SAKE, which are competitive in many-body system modeling tasks while being significantly faster.
https://openreview.net/pdf/59dd02196536ef6b1db22a709e5420ec5b6e6798.pdf
Graph Domain Adaptation via Theory-Grounded Spectral Regularization
https://openreview.net/forum?id=OysfLgrk8mk
https://openreview.net/forum?id=OysfLgrk8mk
Yuning You,Tianlong Chen,Zhangyang Wang,Yang Shen
ICLR 2023,Poster
Transfer learning on graphs drawn from varied distributions (domains) is in great demand across many applications. Emerging methods attempt to learn domain-invariant representations using graph neural networks (GNNs), yet the empirical performances vary and the theoretical foundation is limited. This paper aims at designing theory-grounded algorithms for graph domain adaptation (GDA). (i) As the first attempt, we derive a model-based GDA bound closely related to two GNN spectral properties: spectral smoothness (SS) and maximum frequency response (MFR). This is achieved by cross-pollinating between the OT-based (optimal transport) DA and graph filter theories. (ii) Inspired by the theoretical results, we propose algorithms regularizing spectral properties of SS and MFR to improve GNN transferability. We further extend the GDA theory into the more challenging scenario of conditional shift, where spectral regularization still applies. (iii) More importantly, our analyses of the theory reveal which regularization would improve performance of what transfer learning scenario, (iv) with numerical agreement with extensive real-world experiments: SS and MFR regularizations bring more benefits to the scenarios of node transfer and link transfer, respectively. In a nutshell, our study paves the way toward explicitly constructing and training GNNs that can capture more transferable representations across graph domains. Codes are released at https://github.com/Shen-Lab/GDA-SpecReg.
https://openreview.net/pdf/5ef3f0814aa54be345c2417d32d98e274e9114bd.pdf
CLARE: Conservative Model-Based Reward Learning for Offline Inverse Reinforcement Learning
https://openreview.net/forum?id=5aT4ganOd98
https://openreview.net/forum?id=5aT4ganOd98
Sheng Yue,Guanbo Wang,Wei Shao,Zhaofeng Zhang,Sen Lin,Ju Ren,Junshan Zhang
ICLR 2023,Poster
This work aims to tackle a major challenge in offline Inverse Reinforcement Learning (IRL), namely the reward extrapolation error, where the learned reward function may fail to explain the task correctly and misguide the agent in unseen environments due to the intrinsic covariate shift. Leveraging both expert data and lower-quality diverse data, we devise a principled algorithm (namely CLARE) that solves offline IRL efficiently via integrating "conservatism" into a learned reward function and utilizing an estimated dynamics model. Our theoretical analysis provides an upper bound on the return gap between the learned policy and the expert policy, based on which we characterize the impact of covariate shift by examining subtle two-tier tradeoffs between the exploitation (on both expert and diverse data) and exploration (on the estimated dynamics model). We show that CLARE can provably alleviate the reward extrapolation error by striking the right exploitation-exploration balance therein. Extensive experiments corroborate the significant performance gains of CLARE over existing state-of-the-art algorithms on MuJoCo continuous control tasks (especially with a small offline dataset), and the learned reward is highly instructive for further learning.
https://openreview.net/pdf/ab0104788311808f8c526bb4a5471ed9eb68e476.pdf
Data-Free One-Shot Federated Learning Under Very High Statistical Heterogeneity
https://openreview.net/forum?id=_hb4vM3jspB
https://openreview.net/forum?id=_hb4vM3jspB
Clare Elizabeth Heinbaugh,Emilio Luz-Ricca,Huajie Shao
ICLR 2023,Poster
Federated learning (FL) is an emerging distributed learning framework that collaboratively trains a shared model without transferring the local clients' data to a centralized server. Motivated by concerns stemming from extended communication and potential attacks, one-shot FL limits communication to a single round while attempting to retain performance. However, one-shot FL methods often degrade under high statistical heterogeneity, fail to promote pipeline security, or require an auxiliary public dataset. To address these limitations, we propose two novel data-free one-shot FL methods: FedCVAE-Ens and its extension FedCVAE-KD. Both approaches reframe the local learning task using a conditional variational autoencoder (CVAE) to address high statistical heterogeneity. Furthermore, FedCVAE-KD leverages knowledge distillation to compress the ensemble of client decoders into a single decoder. We propose a method that shifts the center of the CVAE prior distribution and experimentally demonstrate that this promotes security, and show how either method can incorporate heterogeneous local models. We confirm the efficacy of the proposed methods over baselines under high statistical heterogeneity using multiple benchmark datasets. In particular, at the highest levels of statistical heterogeneity, both FedCVAE-Ens and FedCVAE-KD typically more than double the accuracy of the baselines.
https://openreview.net/pdf/901da9f189ab8197e3781bf2163cd89298ff148e.pdf
GReTo: Remedying dynamic graph topology-task discordance via target homophily
https://openreview.net/forum?id=8duT3mi_5n
https://openreview.net/forum?id=8duT3mi_5n
Zhengyang Zhou,Qihe Huang,Gengyu Lin,Kuo Yang,LEI BAI,Yang Wang
ICLR 2023,Poster
Dynamic graphs are ubiquitous across disciplines where observations usually change over time. Regressions on dynamic graphs often contribute to diverse critical tasks, such as climate early warning and traffic control. Existing homophily Graph Neural Networks (GNNs) adopt physical connections or feature similarity as the adjacency matrix to perform node-level aggregations. However, on dynamic graphs with diverse node-wise relations, exploiting a pre-defined fixed topology for message passing inevitably leads to the aggregation of target-deviated neighbors. We designate this phenomenon as topology-task discordance, which naturally challenges the homophily assumption. In this work, we revisit node-wise relationships and explore novel homophily measurements on dynamic graphs with both signs and distances, capturing multiple node-level spatial relations and temporal evolutions. We discover that advancing homophily aggregations to signed target-oriented message passing can effectively resolve the discordance and promote aggregation capacity. Therefore, we propose GReTo, which performs signed message passing in the immediate neighborhood, and exploits both local environments and target awareness to realize high-order message propagation. Empirically, our solution achieves significant improvements against the best baselines, notably improving performance by 24.79% on KnowAir and 3.60% on Metr-LA.
https://openreview.net/pdf/48156cfb8ea4e505cf281af3482fc7dbf314ac48.pdf
Deep Reinforcement Learning for Cost-Effective Medical Diagnosis
https://openreview.net/forum?id=0WVNuEnqVu
https://openreview.net/forum?id=0WVNuEnqVu
Zheng Yu,Yikuan Li,Joseph Chahn Kim,Kaixuan Huang,Yuan Luo,Mengdi Wang
ICLR 2023,Poster
Dynamic diagnosis is desirable when medical tests are costly or time-consuming. In this work, we use reinforcement learning (RL) to find a dynamic policy that selects lab test panels sequentially based on previous observations, ensuring accurate testing at a low cost. Clinical diagnostic data are often highly imbalanced; therefore, we aim to maximize the $F_1$ score instead of minimizing the error rate. However, optimizing the non-concave $F_1$ score is not a classic RL problem, thus invalidating standard RL methods. To remedy this issue, we develop a reward shaping approach, leveraging properties of the $F_1$ score and duality of policy optimization, to provably find the set of all Pareto-optimal policies for budget-constrained $F_1$ score maximization. To handle the combinatorially complex state space, we propose a Semi-Model-based Deep Diagnosis Policy Optimization (SM-DDPO) framework that is compatible with end-to-end training and online learning. SM-DDPO is tested on diverse clinical tasks: ferritin abnormality detection, sepsis mortality prediction, and acute kidney injury diagnosis. Experiments with real-world data validate that SM-DDPO trains efficiently and identifies all Pareto-front solutions. Across all tasks, SM-DDPO achieves state-of-the-art diagnosis accuracy (in some cases higher than conventional methods) with up to $85\%$ reduction in testing cost. Core code is available at https://github.com/Zheng321/Deep-Reinforcement-Learning-for-Cost-Effective-Medical-Diagnosis.
https://openreview.net/pdf/97d4ec3502e11b43bdc708cf3305416092c7863c.pdf
POPGym: Benchmarking Partially Observable Reinforcement Learning
https://openreview.net/forum?id=chDrutUTs0K
https://openreview.net/forum?id=chDrutUTs0K
Steven Morad,Ryan Kortvelesy,Matteo Bettini,Stephan Liwicki,Amanda Prorok
ICLR 2023,Poster
Real world applications of Reinforcement Learning (RL) are often partially observable, thus requiring memory. Despite this, partial observability is still largely ignored by contemporary RL benchmarks and libraries. We introduce Partially Observable Process Gym (POPGym), a two-part library containing (1) a diverse collection of 15 partially observable environments, each with multiple difficulties and (2) implementations of 13 memory model baselines -- the most in a single RL library. Existing partially observable benchmarks tend to fixate on 3D visual navigation, which is computationally expensive and only one type of POMDP. In contrast, POPGym environments are diverse, produce smaller observations, use less memory, and often converge within two hours of training on a consumer-grade GPU. We implement our high-level memory API and memory baselines on top of the popular RLlib framework, providing plug-and-play compatibility with various training algorithms, exploration strategies, and distributed training paradigms. Using POPGym, we execute the largest comparison across RL memory models to date. POPGym is available at https://github.com/proroklab/popgym.
https://openreview.net/pdf/9dfc2ae9e672bdd44ff589054c88b18924af8483.pdf
Everybody Needs Good Neighbours: An Unsupervised Locality-based Method for Bias Mitigation
https://openreview.net/forum?id=pOnhudsvzR
https://openreview.net/forum?id=pOnhudsvzR
Xudong Han,Timothy Baldwin,Trevor Cohn
ICLR 2023,Poster
Learning models from human behavioural data often leads to outputs that are biased with respect to user demographics, such as gender or race. This effect can be controlled by explicit mitigation methods, but this typically presupposes access to demographically-labelled training data. Such data is often not available, motivating the need for unsupervised debiasing methods. To this end, we propose a new meta-algorithm for debiasing representation learning models, which combines the notions of data locality and accuracy of model fit, such that a supervised debiasing method can optimise fairness between neighbourhoods of poorly vs. well modelled instances as identified by our method. Results over five datasets, spanning natural language processing and structured data classification tasks, show that our technique recovers proxy labels that correlate with unknown demographic data, and that our method outperforms all unsupervised baselines, while also achieving competitive performance with state-of-the-art supervised methods which are given access to demographic labels.
https://openreview.net/pdf/c708a67bd8ca9f1274201759dc59f0064c85d1fe.pdf
Particle-based Variational Inference with Preconditioned Functional Gradient Flow
https://openreview.net/forum?id=6OphWWAE3cS
https://openreview.net/forum?id=6OphWWAE3cS
Hanze Dong,Xi Wang,LIN Yong,Tong Zhang
ICLR 2023,Poster
Particle-based variational inference (VI) minimizes the KL divergence between model samples and the target posterior with gradient flow estimates. With the popularity of Stein variational gradient descent (SVGD), the focus of particle-based VI algorithms has been on the properties of functions in Reproducing Kernel Hilbert Space (RKHS) to approximate the gradient flow. However, the requirement of RKHS restricts the function class and algorithmic flexibility. This paper offers a general solution to this problem by introducing a functional regularization term that encompasses the RKHS norm as a special case. This allows us to propose a new particle-based VI algorithm called preconditioned functional gradient flow (PFG). Compared to SVGD, PFG has several advantages. It has a larger function class, improved scalability in large particle-size scenarios, better adaptation to ill-conditioned distributions, and provable continuous-time convergence in KL divergence. Additionally, non-linear function classes such as neural networks can be incorporated to estimate the gradient flow. Our theory and experiments demonstrate the effectiveness of the proposed framework.
https://openreview.net/pdf/d72dcf7ba6f107d684a7f50a7f5421f1fb4d0d34.pdf
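Since the abstract above positions PFG against SVGD, here is a minimal NumPy sketch of the standard SVGD particle update with an RBF kernel, i.e., the RKHS special case rather than the preconditioned functional flow itself. The bandwidth, step size, and Gaussian target are illustrative assumptions.

```python
import numpy as np

def svgd_step(particles, grad_log_p, step_size=0.1, bandwidth=1.0):
    """One SVGD update:
    phi(x_i) = mean_j [ k(x_j, x_i) * grad log p(x_j) + grad_{x_j} k(x_j, x_i) ]."""
    diff = particles[:, None, :] - particles[None, :, :]        # x_i - x_j
    sq_dist = (diff ** 2).sum(-1)
    k = np.exp(-sq_dist / (2 * bandwidth ** 2))                 # RBF kernel
    grads = grad_log_p(particles)                               # (n, d) score values
    # attractive term (kernel-weighted scores) + repulsive term (kernel gradient)
    phi = (k @ grads + (k[:, :, None] * diff).sum(axis=1) / bandwidth ** 2) / len(particles)
    return particles + step_size * phi

# target: standard 2D Gaussian, so grad log p(x) = -x
rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, size=(200, 2))                          # badly initialized particles
for _ in range(500):
    x = svgd_step(x, lambda z: -z)
print(x.mean(axis=0), x.std(axis=0))                            # roughly [0, 0] and [1, 1]
```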
Learning Locality and Isotropy in Dialogue Modeling
https://openreview.net/forum?id=dPs6BGO2QT0
https://openreview.net/forum?id=dPs6BGO2QT0
Han Wu,Haochen Tan,Mingjie Zhan,Gangming Zhao,Shaoqing Lu,Ding Liang,Linqi Song
ICLR 2023,Poster
Existing dialogue modeling methods have achieved promising performance on various dialogue tasks with the aid of Transformers and large-scale pre-trained language models. However, some recent studies have revealed that the context representations produced by these methods suffer from the problem of anisotropy. In this paper, we find that the generated representations are also not conversational, losing the conversation structure information during the context modeling stage. To this end, we identify two properties in dialogue modeling, i.e., locality and isotropy, and present a simple method for dialogue representation calibration, namely SimDRC, to build isotropic and conversational feature spaces. Experimental results show that our approach significantly outperforms current state-of-the-art models on three open-domain dialogue tasks with eight benchmarks. More in-depth analyses further confirm the effectiveness of our proposed approach. We release the code at https://github.com/hahahawu/SimDRC.
https://openreview.net/pdf/1bf27b66e1ac5124b7cf6726e833f2902ebec00d.pdf
Combating Exacerbated Heterogeneity for Robust Models in Federated Learning
https://openreview.net/forum?id=eKllxpLOOm
https://openreview.net/forum?id=eKllxpLOOm
Jianing Zhu,Jiangchao Yao,Tongliang Liu,quanming yao,Jianliang Xu,Bo Han
ICLR 2023,Poster
Privacy and security concerns in real-world applications have led to the development of adversarially robust federated models. However, the straightforward combination of adversarial training and federated learning in one framework can lead to undesired robustness deterioration. We find that the reason behind this phenomenon is that the generated adversarial data can exacerbate the data heterogeneity among local clients, causing the wrapped federated learning to perform poorly. To deal with this problem, we propose a novel framework called Slack Federated Adversarial Training (SFAT), which assigns client-wise slack during aggregation to combat the intensified heterogeneity. Theoretically, we analyze the convergence of the proposed method to properly relax the objective when combining federated learning and adversarial training. Experimentally, we verify the rationality and effectiveness of SFAT on various benchmark and real-world datasets with different adversarial training and federated optimization methods. The code is publicly available at: https://github.com/ZFancy/SFAT.
https://openreview.net/pdf/be55361c7545539cbbf76ba1cb0bf7b7d99ec94e.pdf
Towards Robust Object Detection Invariant to Real-World Domain Shifts
https://openreview.net/forum?id=vqSyt8D3ny
https://openreview.net/forum?id=vqSyt8D3ny
Qi Fan,Mattia Segu,Yu-Wing Tai,Fisher Yu,Chi-Keung Tang,Bernt Schiele,Dengxin Dai
ICLR 2023,Poster
Safety-critical applications such as autonomous driving require robust object detection invariant to real-world domain shifts. Such shifts can be regarded as different domain styles, which can vary substantially due to environment changes and sensor noises, but deep models only know the training domain style. Such domain style gap impedes object detection generalization on diverse real-world domains. Existing classification domain generalization (DG) methods cannot effectively solve the robust object detection problem, because they either rely on multiple source domains with large style variance or destroy the content structures of the original images. In this paper, we analyze and investigate effective solutions to overcome domain style overfitting for robust object detection without the above shortcomings. Our method, dubbed as Normalization Perturbation (NP), perturbs the channel statistics of source domain low-level features to synthesize various latent styles, so that the trained deep model can perceive diverse potential domains and generalizes well even without observations of target domain data in training. This approach is motivated by the observation that feature channel statistics of the target domain images deviate around the source domain statistics. We further explore the style-sensitive channels for effective style synthesis. Normalization Perturbation only relies on a single source domain and is surprisingly simple and effective, contributing a practical solution by effectively adapting or generalizing classification DG methods to robust object detection. Extensive experiments demonstrate the effectiveness of our method for generalizing object detectors under real-world domain shifts.
https://openreview.net/pdf/2bcffe27e982ea323f59ee62f175ae43573b76f7.pdf
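A minimal sketch of the style-perturbation idea above, randomly rescaling per-channel feature statistics to synthesize latent styles during training, is given below. The noise scale, noise distribution, and where the perturbation is applied are assumptions; the paper's style-sensitive channel selection is not included.

```python
import numpy as np

def normalization_perturbation(features, noise_std=0.5, rng=None):
    """Perturb per-channel mean/std of low-level features of shape (N, C, H, W).

    Each sample's channel statistics are rescaled by random factors, which
    synthesizes new "styles" while leaving the content structure untouched.
    """
    rng = rng or np.random.default_rng(0)
    n, c = features.shape[:2]
    mu = features.mean(axis=(2, 3), keepdims=True)                 # (N, C, 1, 1)
    sigma = features.std(axis=(2, 3), keepdims=True) + 1e-6
    alpha = 1.0 + noise_std * rng.normal(size=(n, c, 1, 1))        # perturb the std
    beta = 1.0 + noise_std * rng.normal(size=(n, c, 1, 1))         # perturb the mean
    return (features - mu) / sigma * (sigma * alpha) + mu * beta

feats = np.random.default_rng(1).normal(size=(2, 8, 16, 16))
perturbed = normalization_perturbation(feats)
print(feats.mean(axis=(2, 3))[0, :3], perturbed.mean(axis=(2, 3))[0, :3])
```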
Light Sampling Field and BRDF Representation for Physically-based Neural Rendering
https://openreview.net/forum?id=yYEb8v65X8
https://openreview.net/forum?id=yYEb8v65X8
Jing Yang,Hanyuan Xiao,Wenbin Teng,Yunxuan Cai,Yajie Zhao
ICLR 2023,Poster
Physically-based rendering (PBR) is key to the immersive rendering effects used widely in industry to showcase detailed realistic scenes from computer graphics assets. A well-known caveat is that doing so is computationally heavy and relies on complex capture devices. Inspired by the success in quality and efficiency of recent volumetric neural rendering, we want to develop a physically-based neural shader to eliminate device dependency and significantly boost performance. However, no existing lighting and material models in current neural rendering approaches can accurately represent the comprehensive lighting models and BRDF properties required by the PBR process. Thus, this paper proposes a novel lighting representation that models direct and indirect light locally through a light sampling strategy in a learned light sampling field. We also propose BRDF models that separately represent surface/subsurface scattering details to enable complex objects such as translucent materials (e.g., skin, jade). We then implement our proposed representations with an end-to-end physically-based neural face skin shader, which takes a standard face asset (i.e., geometry, albedo map, and normal map) and an HDRI for illumination as inputs and generates a photo-realistic rendering as output. Extensive experiments showcase the quality and efficiency of our PBR face skin shader, indicating the effectiveness of our proposed lighting and material representations.
https://openreview.net/pdf/2a3094715331dabb097856833e60b76e42ed5029.pdf
Policy Pre-training for Autonomous Driving via Self-supervised Geometric Modeling
https://openreview.net/forum?id=X5SUR7g2vVw
https://openreview.net/forum?id=X5SUR7g2vVw
Penghao Wu,Li Chen,Hongyang Li,Xiaosong Jia,Junchi Yan,Yu Qiao
ICLR 2023,Poster
Witnessing the impressive achievements of pre-training techniques on large-scale data in the field of computer vision and natural language processing, we wonder whether this idea could be adapted in a grab-and-go spirit, and mitigate the sample inefficiency problem for visuomotor driving. Given the highly dynamic and variant nature of the input, the visuomotor driving task inherently lacks the view and translation invariance, and the visual input contains massive irrelevant information for decision making, resulting in predominant pre-training approaches from general vision less suitable for the autonomous driving task. To this end, we propose PPGeo (Policy Pre-training via Geometric modeling), an intuitive and straightforward fully self-supervised framework curated for the policy pre-training in visuomotor driving. We aim at learning policy representations as a powerful abstraction by modeling 3D geometric scenes on large-scale unlabeled and uncalibrated YouTube driving videos. The proposed PPGeo is performed in two stages to support effective self-supervised training. In the first stage, the geometric modeling framework generates pose and depth predictions simultaneously, with two consecutive frames as input. In the second stage, the visual encoder learns driving policy representation by predicting the future ego-motion and optimizing with the photometric error based on current visual observation only. As such, the pre-trained visual encoder is equipped with rich driving policy related representations and thereby competent for multiple visuomotor driving tasks. As a side product, the pre-trained geometric modeling networks could bring further improvement to the depth and odometry estimation tasks. Extensive experiments covering a wide span of challenging scenarios have demonstrated the superiority of our proposed approach, where improvements range from 2% to even over 100% with very limited data.
https://openreview.net/pdf/50b703c41097f12073158fc5a6019697ae01df4e.pdf
TimesNet: Temporal 2D-Variation Modeling for General Time Series Analysis
https://openreview.net/forum?id=ju_Uqw384Oq
https://openreview.net/forum?id=ju_Uqw384Oq
Haixu Wu,Tengge Hu,Yong Liu,Hang Zhou,Jianmin Wang,Mingsheng Long
ICLR 2023,Poster
Time series analysis is of immense importance in extensive applications, such as weather forecasting, anomaly detection, and action recognition. This paper focuses on temporal variation modeling, which is the common key problem of extensive analysis tasks. Previous methods attempt to accomplish this directly from the 1D time series, which is extremely challenging due to the intricate temporal patterns. Based on the observation of multi-periodicity in time series, we disentangle the complex temporal variations into multiple intraperiod and interperiod variations. To tackle the limitations of 1D time series in representation capability, we extend the analysis of temporal variations into the 2D space by transforming the 1D time series into a set of 2D tensors based on multiple periods. This transformation embeds the intraperiod and interperiod variations into the columns and rows of the 2D tensors, respectively, making the 2D variations easy to model with 2D kernels. Technically, we propose TimesNet, with TimesBlock as a task-general backbone for time series analysis. TimesBlock can discover the multi-periodicity adaptively and extract the complex temporal variations from the transformed 2D tensors with a parameter-efficient inception block. Our proposed TimesNet achieves consistent state-of-the-art performance in five mainstream time series analysis tasks, including short- and long-term forecasting, imputation, classification, and anomaly detection. Code is available at this repository: https://github.com/thuml/TimesNet.
https://openreview.net/pdf/98c0a5bad8225b6d1baf5c74047c4d04bacfcfa1.pdf
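The 1D-to-2D transformation described above can be sketched as follows: pick dominant periods from the FFT amplitude spectrum and fold the series so that each column holds one full cycle (intraperiod variation) and each row holds the same phase across cycles (interperiod variation). The top-k selection, truncation, and single-variable setting below are simplifications of the TimesNet pipeline.

```python
import numpy as np

def fold_to_2d(series, k=1):
    """Fold a 1D series into 2D tensors using its k dominant FFT periods.

    Each returned array has shape (period, n_cycles): a column holds one full
    cycle, a row holds the same phase position across consecutive cycles.
    """
    n = len(series)
    amp = np.abs(np.fft.rfft(series))
    amp[0] = 0.0                                   # drop the DC component
    top_freqs = np.argsort(amp)[::-1][:k]
    tensors = []
    for f in top_freqs:
        period = max(1, n // f)
        n_cycles = n // period
        folded = series[: n_cycles * period].reshape(n_cycles, period).T
        tensors.append(folded)
    return tensors

t = np.arange(0, 400)
series = np.sin(2 * np.pi * t / 25) + 0.1 * np.random.default_rng(0).normal(size=t.size)
(tensor,) = fold_to_2d(series, k=1)
print(tensor.shape)                                # (25, 16): period x number of cycles
```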
Learning without Prejudices: Continual Unbiased Learning via Benign and Malignant Forgetting
https://openreview.net/forum?id=gfPUokHsW-
https://openreview.net/forum?id=gfPUokHsW-
Myeongho Jeon,Hyoje Lee,Yedarm Seong,Myungjoo Kang
ICLR 2023,Poster
Although machine learning algorithms have achieved state-of-the-art status in image classification, recent studies have substantiated that the ability of the models to learn several tasks in sequence, termed continual learning (CL), often suffers from abrupt degradation of performance on previous tasks. A large body of CL frameworks has been devoted to alleviating this issue. However, we observe that forgetting phenomena in CL are not always unfavorable, especially when there is bias (spurious correlation) in the training data. We term such forgetting benign forgetting, and categorize detrimental forgetting as malignant forgetting. Based on this finding, our objective in this study is twofold: (a) to discourage malignant forgetting by generating previous representations, and (b) to encourage benign forgetting by employing contrastive learning in conjunction with feature-level augmentation. Extensive evaluations on biased experimental setups demonstrate that our proposed method, Learning without Prejudices, is effective for continual unbiased learning.
https://openreview.net/pdf/ab51561ee336a34e1760b429c76c558f699ad838.pdf
FINDE: Neural Differential Equations for Finding and Preserving Invariant Quantities
https://openreview.net/forum?id=tLScKVhcCR
https://openreview.net/forum?id=tLScKVhcCR
Takashi Matsubara,Takaharu Yaguchi
ICLR 2023,Poster
Many real-world dynamical systems are associated with first integrals (a.k.a. invariant quantities), which are quantities that remain unchanged over time. The discovery and understanding of first integrals are fundamental and important topics both in the natural sciences and in industrial applications. First integrals arise from conservation laws of system energy, momentum, and mass, and from constraints on states; these are typically related to specific geometric structures of the governing equations. Existing neural networks designed to ensure such first integrals have shown excellent accuracy in modeling from data. However, these models require the underlying structures to be built in, and in most situations where neural networks learn unknown systems, these structures are also unknown. This limitation needs to be overcome for scientific discovery and modeling of unknown systems. To this end, we propose the first integral-preserving neural differential equation (FINDE). By leveraging the projection method and the discrete gradient method, FINDE finds and preserves first integrals from data, even in the absence of prior knowledge about the underlying structures. Experimental results demonstrate that FINDE can predict future states of target systems over much longer horizons and find various quantities consistent with well-known first integrals in a unified manner.
https://openreview.net/pdf/29b6bd029ab884b3018dc43a9a3a6cfecbd66e53.pdf
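The projection method that the abstract mentions can be illustrated in a few lines: given any differentiable scalar function H, learned or known, each integrator step is corrected by projecting the state back onto the level set of H along its gradient. The code below is a generic projection-method sketch under that assumption, not the authors' FINDE architecture; names are illustrative.

```python
import torch

def project_onto_level_set(x, first_integral, target, n_iter=3):
    """Project a state x back onto {H(x) = target} along grad H.

    `first_integral` is any differentiable scalar function H; each iteration
    applies a first-order correction x <- x - lam * grad H with
    lam = (H(x) - target) / ||grad H||^2.
    """
    for _ in range(n_iter):
        x = x.detach().requires_grad_(True)
        h = first_integral(x)
        (grad,) = torch.autograd.grad(h, x)
        lam = (h - target) / (grad.pow(2).sum() + 1e-12)
        x = x - lam * grad
    return x.detach()

# toy usage: keep the energy of a harmonic oscillator fixed after an Euler step
def energy(state):            # state = (q, p)
    q, p = state
    return 0.5 * (q ** 2 + p ** 2)

state = torch.tensor([1.0, 0.0])
h0 = energy(state)
dt = 0.1
q, p = state
state = torch.stack([q + dt * p, p - dt * q])   # explicit Euler drifts in energy
state = project_onto_level_set(state, energy, h0)
print(float(energy(state)))                      # approximately 0.5, energy restored
```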
Approximate Vanishing Ideal Computations at Scale
https://openreview.net/forum?id=3ZPESALKXO
https://openreview.net/forum?id=3ZPESALKXO
Elias Samuel Wirth,Hiroshi Kera,Sebastian Pokutta
ICLR 2023,Poster
The vanishing ideal of a set of points $X = \{\mathbf{x}_1, \ldots, \mathbf{x}_m\}\subseteq \mathbb{R}^n$ is the set of polynomials that evaluate to $0$ over all points $\mathbf{x} \in X$ and admits an efficient representation by a finite subset of generators. In practice, to accommodate noise in the data, algorithms that construct generators of the approximate vanishing ideal are widely studied but their computational complexities remain expensive. In this paper, we scale up the oracle approximate vanishing ideal algorithm (OAVI), the only generator-constructing algorithm with known learning guarantees. We prove that the computational complexity of OAVI is not superlinear, as previously claimed, but linear in the number of samples $m$. In addition, we propose two modifications that accelerate OAVI's training time: Our analysis reveals that replacing the pairwise conditional gradients algorithm, one of the solvers used in OAVI, with the faster blended pairwise conditional gradients algorithm leads to an exponential speed-up in the number of features $n$. Finally, using a new inverse Hessian boosting approach, intermediate convex optimization problems can be solved almost instantly, improving OAVI's training time by multiple orders of magnitude in a variety of numerical experiments.
https://openreview.net/pdf/a350117990edf6c0bf6726a2356f6df19d03d250.pdf
Selective Annotation Makes Language Models Better Few-Shot Learners
https://openreview.net/forum?id=qY1hlv7gwg
https://openreview.net/forum?id=qY1hlv7gwg
Hongjin SU,Jungo Kasai,Chen Henry Wu,Weijia Shi,Tianlu Wang,Jiayi Xin,Rui Zhang,Mari Ostendorf,Luke Zettlemoyer,Noah A. Smith,Tao Yu
ICLR 2023,Poster
Many recent approaches to natural language tasks are built on the remarkable abilities of large language models. Large language models can perform in-context learning, where they learn a new task from a few task demonstrations, without any parameter updates. This work examines the implications of in-context learning for the creation of datasets for new natural language tasks. Departing from recent in-context learning methods, we formulate an annotation-efficient, two-step framework: selective annotation that chooses a pool of examples to annotate from unlabeled data in advance, followed by prompt retrieval that retrieves task examples from the annotated pool at test time. Based on this framework, we propose an unsupervised, graph-based selective annotation method, vote-k, to select diverse, representative examples to annotate. Extensive experiments on 10 datasets (covering classification, commonsense reasoning, dialogue, and text/code generation) demonstrate that our selective annotation method improves the task performance by a large margin. On average, vote-k achieves a 12.9%/11.4% relative gain under an annotation budget of 18/100, as compared to randomly selecting examples to annotate. Compared to state-of-the-art supervised finetuning approaches, it yields similar performance with 10-100x less annotation cost across 10 tasks. We further analyze the effectiveness of our framework in various scenarios: language models with varying sizes, alternative selective annotation methods, and cases where there is a test data domain shift. We hope that our studies will serve as a basis for data annotations as large language models are increasingly applied to new tasks.
https://openreview.net/pdf/6e58f3c9c108d2d4b15102e79b138f8a4177b0ad.pdf
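A rough sketch of the graph-based selection idea follows: candidates vote for their nearest neighbours on an embedding kNN graph, and votes from already-covered neighbours are exponentially discounted so the selected pool stays diverse and representative. This is a simplified reading of vote-k for illustration only; the paper's exact scoring, confidence-based stratification, and prompt-retrieval stage are omitted, and all names and parameter values are assumptions.

```python
import numpy as np

def diverse_select(embeddings, budget, k=10, discount=10.0):
    """Greedy diverse selection on a kNN graph (simplified vote-k-style sketch).

    Each candidate scores the discounted votes of its k nearest neighbours;
    neighbours of already-selected points contribute exponentially less,
    pushing later picks towards uncovered regions of the embedding space.
    """
    x = embeddings / (np.linalg.norm(embeddings, axis=1, keepdims=True) + 1e-12)
    sim = x @ x.T
    np.fill_diagonal(sim, -np.inf)
    knn = np.argsort(-sim, axis=1)[:, :k]          # neighbour lists

    n = len(x)
    covered = np.zeros(n)                          # how often each point was covered
    selected = []
    for _ in range(budget):
        scores = np.zeros(n)
        for i in range(n):
            if i in selected:
                scores[i] = -np.inf
                continue
            scores[i] = np.sum(discount ** (-covered[knn[i]]))
        best = int(np.argmax(scores))
        selected.append(best)
        covered[knn[best]] += 1                    # neighbours of the pick are now covered
        covered[best] += 1
    return selected

# toy usage with random embeddings
rng = np.random.default_rng(0)
emb = rng.normal(size=(200, 32))
print(diverse_select(emb, budget=5))
```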
Switch-NeRF: Learning Scene Decomposition with Mixture of Experts for Large-scale Neural Radiance Fields
https://openreview.net/forum?id=PQ2zoIZqvm
https://openreview.net/forum?id=PQ2zoIZqvm
Zhenxing MI,Dan Xu
ICLR 2023,Poster
The Neural Radiance Fields (NeRF) have been recently applied to reconstruct building-scale and even city-scale scenes. To model a large-scale scene efficiently, a dominant strategy is to employ a divide-and-conquer paradigm via performing scene decomposition, which decomposes a complex scene into parts that are further processed by different sub-networks. Existing large-scale NeRFs mainly use heuristic hand-crafted scene decomposition, with regular 3D-distance-based or physical-street-block-based schemes. Although achieving promising results, the hand-crafted schemes limit the capabilities of NeRF in large-scale scene modeling in several aspects. Manually designing a universal scene decomposition rule for different complex scenes is challenging, leading to adaptation issues for different scenarios. The decomposition procedure is not learnable, hindering the network from jointly optimizing the scene decomposition and the radiance fields in an end-to-end manner. The different sub-networks are typically optimized independently, and thus hand-crafted rules are required to composite them to achieve a better consistency. To tackle these issues, we propose Switch-NeRF, a novel end-to-end large-scale NeRF with learning-based scene decomposition. We design a gating network to dispatch 3D points to different NeRF sub-networks. The gating network can be optimized together with the NeRF sub-networks for different scene partitions, by a design with the Sparsely Gated Mixture of Experts (MoE). The outputs from different sub-networks can also be fused in a learnable way in the unified framework to effectively guarantee the consistency of the whole scene. Furthermore, the proposed MoE-based Switch-NeRF model is carefully implemented and optimized to achieve both high-fidelity scene reconstruction and efficient computation. Our method establishes clear state-of-the-art performances on several large-scale datasets. To the best of our knowledge, we are the first to propose an applicable end-to-end sparse NeRF network with learning-based decomposition for large-scale scenes. Codes are released at https://github.com/MiZhenxing/Switch-NeRF.
https://openreview.net/pdf/c7f6c91fac4a50757b3e89da49e55fb795f0119d.pdf
NORM: Knowledge Distillation via N-to-One Representation Matching
https://openreview.net/forum?id=CRNwGauQpb6
https://openreview.net/forum?id=CRNwGauQpb6
Xiaolong Liu,LUKING LI,Chao Li,Anbang Yao
ICLR 2023,Poster
Existing feature distillation methods commonly adopt one-to-one representation matching between any pre-selected teacher-student layer pair. In this paper, we present N-to-One Representation Matching (NORM), a new two-stage knowledge distillation method, which relies on a simple Feature Transform (FT) module consisting of two linear layers. To preserve the intact information learnt by the teacher network, our FT module is merely inserted after the last convolutional layer of the student network during training. The first linear layer projects the student representation to a feature space having $N$ times as many feature channels as the teacher representation from the last convolutional layer, and the second linear layer contracts the expanded output back to the original feature space. By sequentially splitting the expanded student representation into $N$ non-overlapping feature segments, each having the same number of feature channels as the teacher's, they can be readily forced to approximate the intact teacher representation simultaneously, formulating a novel many-to-one representation matching mechanism conditioned on a single teacher-student layer pair. After training, such an FT module is naturally merged into the subsequent fully connected layer thanks to its linear property, introducing no extra parameters or architectural modifications to the student network at inference. Extensive experiments on different visual recognition benchmarks demonstrate the leading performance of our method. For instance, the ResNet18|MobileNet|ResNet50-1/4 model trained by NORM reaches 72.14%|74.26%|68.03% top-1 accuracy on the ImageNet dataset when using a pre-trained ResNet34|ResNet50|ResNet50 model as the teacher, achieving an absolute improvement of 2.01%|4.63%|3.03% over the individually trained counterpart. Code is available at https://github.com/OSVAI/NORM.
https://openreview.net/pdf/0788e1e89189e5c6c54dfc29c3a06c8c32c669af.pdf
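The N-to-one matching mechanism can be sketched as a small module: a first linear layer expands the student feature to N times the teacher's channel count, the expansion is split into N segments that are all matched to the same teacher feature, and a second linear layer contracts back to the student's space. The sketch below operates on pooled feature vectors with a plain MSE matching loss for simplicity; the paper works on the output of the last convolutional layer and its exact loss and training recipe are not reproduced here, so treat channel counts and names as assumptions.

```python
import torch
import torch.nn as nn

class FeatureTransform(nn.Module):
    """Minimal sketch of a NORM-style two-layer FT module."""

    def __init__(self, student_dim, teacher_dim, n=4):
        super().__init__()
        self.n, self.teacher_dim = n, teacher_dim
        self.expand = nn.Linear(student_dim, n * teacher_dim, bias=False)
        self.contract = nn.Linear(n * teacher_dim, student_dim, bias=False)

    def forward(self, student_feat, teacher_feat):
        z = self.expand(student_feat)                       # (B, N * C_t)
        segments = z.view(-1, self.n, self.teacher_dim)     # N segments of C_t channels
        # every segment approximates the same intact teacher representation
        match_loss = ((segments - teacher_feat.unsqueeze(1)) ** 2).mean()
        out = self.contract(z)                              # fed to the student's classifier
        return out, match_loss

# toy usage
ft = FeatureTransform(student_dim=512, teacher_dim=2048, n=4)
s = torch.randn(8, 512)
t = torch.randn(8, 2048)
out, loss = ft(s, t)
print(out.shape, float(loss))
```

Because both layers are linear, the contraction can be folded into the following fully connected layer after training, which is why no extra inference cost is incurred.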
Critic Sequential Monte Carlo
https://openreview.net/forum?id=ObtGcyKmwna
https://openreview.net/forum?id=ObtGcyKmwna
Vasileios Lioutas,Jonathan Wilder Lavington,Justice Sefas,Matthew Niedoba,Yunpeng Liu,Berend Zwartsenberg,Setareh Dabiri,Frank Wood,Adam Scibior
ICLR 2023,Poster
We introduce CriticSMC, a new algorithm for planning as inference built from a composition of sequential Monte Carlo with learned Soft-Q function heuristic factors. These heuristic factors, obtained from parametric approximations of the marginal likelihood ahead, more effectively guide SMC towards the desired target distribution, which is particularly helpful for planning in environments with hard constraints placed sparsely in time. Compared with previous work, we modify the placement of such heuristic factors, which allows us to cheaply propose and evaluate large numbers of putative action particles, greatly increasing inference and planning efficiency. CriticSMC is compatible with informative priors, whose density function need not be known, and can be used as a model-free control algorithm. Our experiments on collision avoidance in a high-dimensional simulated driving task show that CriticSMC significantly reduces collision rates at a low computational cost while maintaining realism and diversity of driving behaviors across vehicles and environment scenarios.
https://openreview.net/pdf/e7242778951d90ac631dfb0d5a76dd58ce662af1.pdf
Autoencoders as Cross-Modal Teachers: Can Pretrained 2D Image Transformers Help 3D Representation Learning?
https://openreview.net/forum?id=8Oun8ZUVe8N
https://openreview.net/forum?id=8Oun8ZUVe8N
Runpei Dong,Zekun Qi,Linfeng Zhang,Junbo Zhang,Jianjian Sun,Zheng Ge,Li Yi,Kaisheng Ma
ICLR 2023,Poster
The success of deep learning heavily relies on large-scale data with comprehensive labels, which are more expensive and time-consuming to fetch in 3D than for 2D images or natural language. This promotes the potential of utilizing models pretrained on data other than 3D as teachers for cross-modal knowledge transfer. In this paper, we revisit masked modeling in a unified fashion of knowledge distillation, and we show that foundational Transformers pretrained with 2D images or natural language can help self-supervised 3D representation learning through training Autoencoders as Cross-Modal Teachers (ACT). The pretrained Transformers are transferred as cross-modal 3D teachers using discrete variational autoencoding self-supervision, during which the Transformers are frozen with prompt tuning for better knowledge inheritance. The latent features encoded by the 3D teachers are used as the target of masked point modeling, wherein the dark knowledge is distilled to the 3D Transformer students as foundational geometry understanding. Our ACT pretrained 3D learner achieves state-of-the-art generalization capacity across various downstream benchmarks, e.g., 88.21% overall accuracy on ScanObjectNN. Codes have been released at https://github.com/RunpeiDong/ACT.
https://openreview.net/pdf/79fadd685bdeb6df98bf287dbb0824d4b1885bb9.pdf
Deep Learning meets Nonparametric Regression: Are Weight-Decayed DNNs Locally Adaptive?
https://openreview.net/forum?id=0Q9H_Pgx132
https://openreview.net/forum?id=0Q9H_Pgx132
Kaiqi Zhang,Yu-Xiang Wang
ICLR 2023,Poster
We study the theory of neural networks (NNs) through the lens of classical nonparametric regression problems, with a focus on NNs' ability to adaptively estimate functions with heterogeneous smoothness — a property of functions in Besov or Bounded Variation (BV) classes. Existing work on this problem requires tuning the NN architecture based on the function space and sample size. We consider a “Parallel NN” variant of deep ReLU networks and show that standard weight decay is equivalent to promoting the $\ell_p$-sparsity ($0 < p < 1$) of the coefficient vector of an end-to-end learned function basis, i.e., a dictionary. Using this equivalence, we further establish that by tuning only the weight decay, such a Parallel NN achieves an estimation error arbitrarily close to the minimax rates for both the Besov and BV classes. Notably, it gets exponentially closer to minimax optimal as the NN gets deeper. Our research sheds new light on why depth matters and how NNs are more powerful than kernel methods.
https://openreview.net/pdf/7ec584f820fe1526e1a4e38e68788a5492e342ae.pdf
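The "weight decay promotes sparsity" equivalence rests on a standard rescaling argument for positively homogeneous units; the following worked instance is a simplified illustration consistent with the abstract, not the paper's full derivation.

```latex
% Rescaling argument behind "weight decay = sparsity" for a single unit.
% A unit contributes f(x) = a\,\sigma(w^\top x) with positively homogeneous
% \sigma (e.g. ReLU), so f is invariant under (w, a) \mapsto (t w, a/t), t>0.
% Minimizing the weight-decay penalty over this invariance gives
\begin{align}
  \min_{t>0}\ \frac{\lambda}{2}\left(\|t w\|_2^2 + \frac{a^2}{t^2}\right)
  \;=\; \lambda\,|a|\,\|w\|_2 ,
\end{align}
% i.e. an \ell_1-type penalty on the effective output coefficients.
% Optimizing the same squared penalty over a depth-L chain of homogeneous
% factors whose product equals c yields a penalty proportional to |c|^{2/L},
% an \ell_p quasi-norm with p = 2/L < 1 for L > 2 -- the sparsity-promoting
% effect the abstract attributes to deeper parallel networks.
```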
Sparse Token Transformer with Attention Back Tracking
https://openreview.net/forum?id=VV0hSE8AxCw
https://openreview.net/forum?id=VV0hSE8AxCw
Heejun Lee,Minki Kang,Youngwan Lee,Sung Ju Hwang
ICLR 2023,Poster
Despite the success of Transformers in various applications across the text, vision, and speech domains, they are yet to become standard architectures for mobile and edge device applications due to their heavy memory and computational requirements. While there exist many different approaches to reduce the complexity of Transformers, such as the pruning of weights/attentions/tokens, quantization, and distillation, we focus on token pruning, which reduces not only the complexity of the attention operations, but also that of the linear layers, which have non-negligible computational costs. However, previous token pruning approaches often remove tokens during the feed-forward stage without considering their impact on later layers' attentions, which carries a potential risk of dropping tokens important for the given task. To tackle this issue, we propose an attention back-tracking method that tracks the importance of each attention in a Transformer architecture from the outputs to the inputs, to preserve the tokens that have a large impact on the final predictions. We experimentally validate the effectiveness of the method on both NLP and CV benchmarks, using Transformer architectures for both domains, and the results show that the proposed attention back-tracking allows the model to better retain the full model's performance even at high sparsity rates, significantly outperforming all baselines. Qualitative analysis of the examples further shows that our method does preserve semantically meaningful tokens.
https://openreview.net/pdf/374c9ac59ff86090712676f28b7100fc83ff8705.pdf
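The back-tracking idea can be sketched as propagating an importance vector from the final layer's outputs backwards through the (head-averaged) attention matrices and keeping the highest-scoring input tokens. The snippet below is a simplified illustration of that mechanism, not the paper's exact scoring rule or pruning schedule; the keep ratio and names are assumptions.

```python
import torch

def backtrack_token_importance(attn_maps, keep_ratio=0.5):
    """Trace token importance from the outputs back to the input tokens.

    attn_maps: list of per-layer attention tensors of shape (B, H, T, T),
    ordered from the first to the last layer. Starting from uniform
    importance over the final-layer outputs, importance is pushed backwards
    through each attention matrix (importance of input token j is the
    attention-weighted sum of the importances of the outputs it feeds).
    """
    b, _, t, _ = attn_maps[-1].shape
    importance = torch.full((b, t), 1.0 / t)
    for attn in reversed(attn_maps):
        a = attn.mean(dim=1)                                  # average over heads: (B, T, T)
        importance = torch.bmm(importance.unsqueeze(1), a).squeeze(1)
    n_keep = max(1, int(keep_ratio * t))
    keep_idx = importance.topk(n_keep, dim=-1).indices
    return importance, keep_idx

# toy usage with random attention maps from a 4-layer, 8-head model
maps = [torch.softmax(torch.randn(2, 8, 16, 16), dim=-1) for _ in range(4)]
imp, keep = backtrack_token_importance(maps)
print(imp.shape, keep.shape)   # torch.Size([2, 16]) torch.Size([2, 8])
```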
Robust Active Distillation
https://openreview.net/forum?id=ALDM5SN2r7M
https://openreview.net/forum?id=ALDM5SN2r7M
Cenk Baykal,Khoa Trinh,Fotis Iliopoulos,Gaurav Menghani,Erik Vee
ICLR 2023,Poster
Distilling knowledge from a large teacher model to a lightweight one is a widely successful approach for generating compact, powerful models in the semi-supervised learning setting where a limited amount of labeled data is available. In large-scale applications, however, the teacher tends to provide a large number of incorrect soft-labels that impair student performance. The sheer size of the teacher additionally constrains the number of soft-labels that can be queried due to prohibitive computational and/or financial costs. The difficulty in achieving simultaneous \emph{efficiency} (i.e., minimizing soft-label queries) and \emph{robustness} (i.e., avoiding student inaccuracies due to incorrect labels) hurts the widespread application of knowledge distillation to many modern tasks. In this paper, we present a parameter-free approach with provable guarantees to query the soft-labels of points that are simultaneously informative and correctly labeled by the teacher. At the core of our work lies a game-theoretic formulation that explicitly considers the inherent trade-off between the informativeness and correctness of input instances. We establish bounds on the expected performance of our approach that hold even in worst-case distillation instances. We present empirical evaluations on popular benchmarks that demonstrate the improved distillation performance enabled by our work relative to that of state-of-the-art active learning and active distillation methods.
https://openreview.net/pdf/bc5a4490e40e444a9ccefb1e166f7bb0cfcc9cbe.pdf
Kernel Neural Optimal Transport
https://openreview.net/forum?id=Zuc_MHtUma4
https://openreview.net/forum?id=Zuc_MHtUma4
Alexander Korotin,Daniil Selikhanovych,Evgeny Burnaev
ICLR 2023,Poster
We study the Neural Optimal Transport (NOT) algorithm which uses the general optimal transport formulation and learns stochastic transport plans. We show that NOT with the weak quadratic cost may learn fake plans which are not optimal. To resolve this issue, we introduce kernel weak quadratic costs. We show that they provide improved theoretical guarantees and practical performance. We test NOT with kernel costs on the unpaired image-to-image translation task.
https://openreview.net/pdf/70b7c8efcb690e77a2ae3aeac0944a66253f5a01.pdf
SeaFormer: Squeeze-enhanced Axial Transformer for Mobile Semantic Segmentation
https://openreview.net/forum?id=-qg8MQNrxZw
https://openreview.net/forum?id=-qg8MQNrxZw
Qiang Wan,Zilong Huang,Jiachen Lu,Gang YU,Li Zhang
ICLR 2023,Poster
Since the introduction of Vision Transformers, the landscape of many computer vision tasks (e.g., semantic segmentation), which had been overwhelmingly dominated by CNNs, has recently been significantly revolutionized. However, the computational cost and memory requirements render these methods unsuitable for mobile devices, especially for the high-resolution per-pixel semantic segmentation task. In this paper, we introduce a new method, the squeeze-enhanced Axial Transformer (SeaFormer), for mobile semantic segmentation. Specifically, we design a generic attention block characterized by the formulation of squeeze Axial and spatial enhancement. It can be further used to create a family of backbone architectures with superior cost-effectiveness. Coupled with a light segmentation head, we demonstrate state-of-the-art results on the ADE20K, Pascal Context and COCO-stuff datasets. Critically, we beat both mobile-friendly rivals and Transformer-based counterparts with better performance and lower latency, without bells and whistles. Beyond semantic segmentation, we further apply the proposed SeaFormer architecture to the image classification problem, demonstrating its potential to serve as a versatile mobile-friendly backbone.
https://openreview.net/pdf/a78aac637cc78aa9bba377e1b3c903a8aa5cd916.pdf
Joint Edge-Model Sparse Learning is Provably Efficient for Graph Neural Networks
https://openreview.net/forum?id=4UldFtZ_CVF
https://openreview.net/forum?id=4UldFtZ_CVF
Shuai Zhang,Meng Wang,Pin-Yu Chen,Sijia Liu,Songtao Lu,Miao Liu
ICLR 2023,Poster
Due to the significant computational challenge of training large-scale graph neural networks (GNNs), various sparse learning techniques have been exploited to reduce memory and storage costs. Examples include graph sparsification, which samples a subgraph to reduce the amount of data aggregation, and model sparsification, which prunes the neural network to reduce the number of trainable weights. Despite the empirical successes in reducing the training cost while maintaining the test accuracy, the theoretical generalization analysis of sparse learning for GNNs remains elusive. To the best of our knowledge, this paper provides the first theoretical characterization of joint edge-model sparse learning from the perspective of sample complexity and convergence rate in achieving zero generalization error. It proves analytically that both sampling important nodes and pruning the lowest-magnitude neurons can reduce the sample complexity and improve convergence without compromising the test accuracy. Although the analysis is centered on two-layer GNNs with structural constraints on data, the insights are applicable to more general setups and justified by both synthetic and practical citation datasets.
https://openreview.net/pdf/00006f5bb32f82b464f8ded7c219a7fd58ebfe86.pdf
Learning Sparse and Low-Rank Priors for Image Recovery via Iterative Reweighted Least Squares Minimization
https://openreview.net/forum?id=TXPN6MtdSE4
https://openreview.net/forum?id=TXPN6MtdSE4
Stamatios Lefkimmiatis,Iaroslav Sergeevich Koshelev
ICLR 2023,Poster
In this work we introduce a novel optimization algorithm for image recovery under learned sparse and low-rank constraints, which are parameterized with weighted extensions of the $\ell_p^p$-vector and $\mathcal{S}_p^p$ Schatten-matrix quasi-norms for $0\!<p\!\le1$, respectively. Our proposed algorithm generalizes the Iteratively Reweighted Least Squares (IRLS) method, used for signal recovery under $\ell_1$ and nuclear-norm constrained minimization. Further, we interpret our overall minimization approach as a recurrent network that we then employ to deal with inverse low-level computer vision problems. Thanks to the convergence guarantees that our IRLS strategy offers, we are able to train the derived reconstruction networks using a memory-efficient implicit back-propagation scheme, which does not pose any restrictions on their effective depth. To assess our networks' performance, we compare them against other existing reconstruction methods on several inverse problems, namely image deblurring, super-resolution, demosaicking and sparse recovery. Our reconstruction results are shown to be very competitive and in many cases outperform those of existing unrolled networks, whose number of parameters is orders of magnitude higher than that of our learned models.
https://openreview.net/pdf/c3a1293bb62246a6819045c59e222fa95041a535.pdf
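For readers unfamiliar with IRLS, the classical (non-learned) variant that this work generalizes replaces an $\ell_p$ objective with a sequence of weighted least-squares problems whose weights are recomputed from the current iterate. The sketch below shows this baseline for sparse recovery only; the paper's learned weighted $\ell_p^p$/Schatten priors, convergence analysis, and implicit back-propagation are not reproduced, and the smoothing and regularization constants are illustrative.

```python
import numpy as np

def irls_lp(A, y, p=0.8, n_iter=50, eps=1e-3, lam=1e-3):
    """Iteratively Reweighted Least Squares for an l_p-sparse solution of Ax = y.

    Each iteration minimizes sum_i w_i x_i^2 subject to Ax ~ y, with weights
    w_i = (x_i^2 + eps)^(p/2 - 1) derived from the current iterate, which is
    the classical smoothed surrogate for the l_p quasi-norm.
    """
    m, n = A.shape
    x = np.linalg.lstsq(A, y, rcond=None)[0]         # minimum-norm initialization
    for _ in range(n_iter):
        w = (x ** 2 + eps) ** (p / 2 - 1)
        W_inv = 1.0 / w
        # closed form: x = W^{-1} A^T (A W^{-1} A^T + lam I)^{-1} y
        AWinv = A * W_inv                            # scales the columns of A
        K = AWinv @ A.T + lam * np.eye(m)
        x = W_inv * (A.T @ np.linalg.solve(K, y))
    return x

# toy sparse-recovery usage
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100))
x_true = np.zeros(100)
x_true[[3, 17, 58]] = [2.0, -1.5, 1.0]
y = A @ x_true
x_hat = irls_lp(A, y)
print(np.round(x_hat[[3, 17, 58]], 2), float(np.abs(x_hat).max()))
```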
Spherical Sliced-Wasserstein
https://openreview.net/forum?id=jXQ0ipgMdU
https://openreview.net/forum?id=jXQ0ipgMdU
Clément Bonet,Paul Berg,Nicolas Courty,François Septier,Lucas Drumetz,Minh Tan Pham
ICLR 2023,Poster
Many variants of the Wasserstein distance have been introduced to reduce its original computational burden. In particular, the Sliced-Wasserstein distance (SW), which leverages one-dimensional projections for which a closed-form solution of the Wasserstein distance is available, has received a lot of interest. Yet, it is restricted to data living in Euclidean spaces, while the Wasserstein distance has been studied and used recently on manifolds. We focus more specifically on the sphere, for which we define a novel SW discrepancy, which we call spherical Sliced-Wasserstein, making a first step towards defining SW discrepancies on manifolds. Our construction is notably based on closed-form solutions of the Wasserstein distance on the circle, together with a new spherical Radon transform. Along with efficient algorithms and the corresponding implementations, we illustrate its properties in several machine learning use cases where spherical representations of data are at stake: sampling on the sphere, density estimation on real earth data, and hyperspherical auto-encoders.
https://openreview.net/pdf/36d5cf6ac0eb166de70b353929632780e2078ae1.pdf
InPL: Pseudo-labeling the Inliers First for Imbalanced Semi-supervised Learning
https://openreview.net/forum?id=m6ahb1mpwwX
https://openreview.net/forum?id=m6ahb1mpwwX
Zhuoran Yu,Yin Li,Yong Jae Lee
ICLR 2023,Poster
Recent state-of-the-art methods in imbalanced semi-supervised learning (SSL) rely on confidence-based pseudo-labeling with consistency regularization. To obtain high-quality pseudo-labels, a high confidence threshold is typically adopted. However, it has been shown that softmax-based confidence scores in deep networks can be arbitrarily high for samples far from the training data, and thus, the pseudo-labels for even high-confidence unlabeled samples may still be unreliable. In this work, we present a new perspective of pseudo-labeling for imbalanced SSL. Without relying on model confidence, we propose to measure whether an unlabeled sample is likely to be "in-distribution''; i.e., close to the current training data. To decide whether an unlabeled sample is "in-distribution'' or "out-of-distribution'', we adopt the energy score from out-of-distribution detection literature. As training progresses and more unlabeled samples become in-distribution and contribute to training, the combined labeled and pseudo-labeled data can better approximate the true class distribution to improve the model. Experiments demonstrate that our energy-based pseudo-labeling method, InPL, albeit conceptually simple, significantly outperforms confidence-based methods on imbalanced SSL benchmarks. For example, it produces a 4-6% absolute accuracy improvement on CIFAR10-LT when the imbalance ratio is higher than 50. When combined with state-of-the-art long-tailed SSL methods, further improvements are attained. In particular, in one of the most challenging scenarios, InPL achieves a 6.9% accuracy improvement over the best competitor.
https://openreview.net/pdf/28067dde30cf45081461f843391290cdf29d0146.pdf
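The energy score used to decide whether an unlabeled sample is "in-distribution" is a one-line computation on the model logits; the sketch below shows the pseudo-labeling gate in isolation. The threshold and temperature values are illustrative, and the consistency-regularization training loop around it is omitted.

```python
import torch

def energy_pseudo_labels(logits, threshold=-8.0, temperature=1.0):
    """Assign pseudo-labels only to 'in-distribution' unlabeled samples.

    Uses the standard energy score E(x) = -T * logsumexp(logits / T) from the
    OOD-detection literature; samples with energy below the threshold are
    treated as inliers and receive their argmax pseudo-label, the rest are
    masked out of the unlabeled loss.
    """
    energy = -temperature * torch.logsumexp(logits / temperature, dim=-1)
    mask = energy < threshold                     # lower energy = more in-distribution
    pseudo = logits.argmax(dim=-1)
    return pseudo, mask

# toy usage
logits = torch.randn(5, 10) * 4.0
labels, mask = energy_pseudo_labels(logits)
print(labels.tolist(), mask.tolist())
```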
Maximizing Communication Efficiency for Large-scale Training via 0/1 Adam
https://openreview.net/forum?id=-CefY2EOupj
https://openreview.net/forum?id=-CefY2EOupj
Yucheng Lu,Conglong Li,Minjia Zhang,Christopher De Sa,Yuxiong He
ICLR 2023,Poster
1-bit gradient compression and local steps are two representative techniques that enable drastic communication reduction in distributed SGD. Their benefits, however, remain an open question for Adam-based large model pre-training (e.g. BERT and GPT). In this paper, we demonstrate that the non-linearity in Adam causes slow convergence even when 1-bit compression or local steps are individually applied. To alleviate this limitation, we propose \textbf{0/1 Adam} that linearizes each Adam step via approximating its optimizer states using their stale estimates and linear correlation. \textbf{0/1 Adam} performs an Adam-like step to preserve the adaptivity, while its linearity allows utilizing 1-bit compression and local steps simultaneously for wall-clock time speed up. We provide convergence guarantees for \textbf{0/1 Adam} on smooth non-convex objectives. On various large-scale benchmarks such as BERT-Base, BERT-Large, GPT-2 pre-training and ImageNet, we demonstrate on up to 128 GPUs that \textbf{0/1 Adam} is able to reduce up to 87\% of data volume and 54\% of communication rounds, and achieve up to 2$\times$ higher training throughput and end-to-end training time reduction compared to the state-of-the-art baseline 1-bit Adam, while enjoying the same statistical convergence speed and end task model accuracy on the GLUE dataset and ImageNet validation set.
https://openreview.net/pdf/71e3ec182ebfd2023724f92a59fe1d09a34cb114.pdf
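As context for the communication numbers above, the snippet below shows generic 1-bit sign compression with error feedback of a per-worker update, the basic primitive that 1-bit and 0/1 Adam-style methods build on. It is explicitly not the 0/1 Adam algorithm itself, which additionally linearizes the Adam step via stale optimizer-state estimates and interleaves local steps; all names and values here are toy.

```python
import torch

def compress_with_error_feedback(update, error):
    """1-bit (sign) compression with error feedback of a dense update.

    The update is reduced to one sign bit per element plus a single scale;
    whatever the compression loses is remembered in `error` and added back
    in the next round, which is what keeps such schemes convergent.
    """
    corrected = update + error
    scale = corrected.abs().mean()                # one float per tensor
    compressed = scale * corrected.sign()         # 1 bit per element + scale
    new_error = corrected - compressed            # carry the residual forward
    return compressed, new_error

# toy usage over a few rounds
torch.manual_seed(0)
error = torch.zeros(6)
for step in range(3):
    update = torch.randn(6) * 0.01
    msg, error = compress_with_error_feedback(update, error)
    print(step, [round(v, 4) for v in msg.tolist()[:3]], float(error.norm()))
```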
Truthful Self-Play
https://openreview.net/forum?id=WVRb98rwbv9
https://openreview.net/forum?id=WVRb98rwbv9
Shohei Ohsawa
ICLR 2023,Poster
We present a general framework for evolutionary learning of emergent unbiased state representations without any supervision. Evolutionary frameworks such as self-play converge to bad local optima in the case of multi-agent reinforcement learning in non-cooperative, partially observable environments with communication, due to information asymmetry. Our proposed framework is a simple modification of self-play inspired by mechanism design, also known as {\em reverse game theory}, to elicit truthful signals and make the agents cooperative. The key idea is to add imaginary rewards using the peer prediction method, i.e., a mechanism for evaluating the validity of information exchanged between agents in a decentralized environment. Numerical experiments with predator-prey, traffic junction and StarCraft tasks demonstrate the state-of-the-art performance of our framework.
https://openreview.net/pdf/595f528bafd828c8e449b148ac29a727b9ccaa1d.pdf
Strategic Classification with Graph Neural Networks
https://openreview.net/forum?id=TuHkVOjSAR
https://openreview.net/forum?id=TuHkVOjSAR
Itay Eilat,Ben Finkelshtein,Chaim Baskin,Nir Rosenfeld
ICLR 2023,Poster
Strategic classification studies learning in settings where users can modify their features to obtain favorable predictions. Most current works focus on simple classifiers that trigger independent user responses. Here we examine the implications of learning with more elaborate models that break the independence assumption. Motivated by the idea that applications of strategic classification are often social in nature, we focus on graph neural networks, which make use of social relations between users to improve predictions. Using a graph for learning introduces inter-user dependencies in prediction; our key point is that strategic users can exploit these to promote their goals. As we show through analysis and simulation, this can work either against the system---or for it. Based on this, we propose a differentiable framework for strategically-robust learning of graph-based classifiers. Experiments on several real networked datasets demonstrate the utility of our approach.
https://openreview.net/pdf/94454ab31ce26737d20982a622b5992572d6c4b3.pdf
Continual Transformers: Redundancy-Free Attention for Online Inference
https://openreview.net/forum?id=PolHquob8M7
https://openreview.net/forum?id=PolHquob8M7
Lukas Hedegaard,Arian Bakhtiarnia,Alexandros Iosifidis
ICLR 2023,Poster
Transformers in their common form are inherently limited to operate on whole token sequences rather than on one token at a time. Consequently, their use during online inference on time-series data entails considerable redundancy due to the overlap in successive token sequences. In this work, we propose novel formulations of the Scaled Dot-Product Attention, which enable Transformers to perform efficient online token-by-token inference on a continual input stream. Importantly, our modifications are purely to the order of computations, while the outputs and learned weights are identical to those of the original Transformer Encoder. We validate our Continual Transformer Encoder with experiments on the THUMOS14, TVSeries and GTZAN datasets with remarkable results: Our Continual one- and two-block architectures reduce the floating point operations per prediction by up to 63x and 2.6x, respectively, while retaining predictive performance.
https://openreview.net/pdf/24823d84c68d2321555336724867855b12d26d31.pdf
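A plain cached-attention sketch conveys the redundancy-free idea: only the newest token's key and value are computed per step, and attention for the new query is taken over a rolling cache, so no work is repeated across overlapping windows. This illustrates continual token-by-token inference in general; it is not the paper's Continual Retroactive or Single-Output attention formulations, whose outputs match the original Transformer Encoder exactly.

```python
import torch

class ContinualAttention:
    """Single-query scaled dot-product attention over a rolling key/value cache."""

    def __init__(self, window, dim):
        self.window, self.dim = window, dim
        self.keys = torch.empty(0, dim)
        self.values = torch.empty(0, dim)

    def step(self, q, k, v):
        # append the new token's key/value, keeping only the last `window` entries
        self.keys = torch.cat([self.keys, k.unsqueeze(0)])[-self.window:]
        self.values = torch.cat([self.values, v.unsqueeze(0)])[-self.window:]
        scores = (self.keys @ q) / self.dim ** 0.5       # (t,)
        weights = torch.softmax(scores, dim=0)
        return weights @ self.values                      # (dim,)

# toy usage: stream 5 tokens through a window of size 3
attn = ContinualAttention(window=3, dim=8)
for t in range(5):
    q = k = v = torch.randn(8)
    out = attn.step(q, k, v)
print(out.shape)   # torch.Size([8])
```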
Learning Symbolic Models for Graph-structured Physical Mechanism
https://openreview.net/forum?id=f2wN4v_2__W
https://openreview.net/forum?id=f2wN4v_2__W
Hongzhi Shi,Jingtao Ding,Yufan Cao,quanming yao,Li Liu,Yong Li
ICLR 2023,Poster
Graph-structured physical mechanisms are ubiquitous in real-world scenarios, thus revealing the underlying formulas is of great importance for scientific discovery. However, classical symbolic regression methods fail on this task since they can only handle input-output pairs that are not graph-structured. In this paper, we propose a new approach that generalizes symbolic regression to graph-structured physical mechanisms. The essence of our method is to model the formula skeleton with a message-passing flow, which helps transform the discovery of the skeleton into the search for the message-passing flow. Such a transformation guarantees that we are able to search for a message-passing flow that is efficient and Pareto-optimal in terms of both accuracy and simplicity. Subsequently, the underlying formulas can be identified by interpreting component functions of the searched message-passing flow, reusing classical symbolic regression methods. We conduct extensive experiments on datasets from different physical domains, including mechanics, electricity, and thermology, and on real-world datasets of pedestrian dynamics without ground-truth formulas. The experimental results not only verify the rationale of our design but also demonstrate that the proposed method can automatically learn precise and interpretable formulas for graph-structured physical mechanisms.
https://openreview.net/pdf/49039ff9dc3318b356a8ed79f748feab42a6990e.pdf
Priors, Hierarchy, and Information Asymmetry for Skill Transfer in Reinforcement Learning
https://openreview.net/forum?id=0v4VkCSkHNm
https://openreview.net/forum?id=0v4VkCSkHNm
Sasha Salter,Kristian Hartikainen,Walter Goodwin,Ingmar Posner
ICLR 2023,Poster
The ability to discover behaviours from past experience and transfer them to new tasks is a hallmark of intelligent agents acting sample-efficiently in the real world. Equipping embodied reinforcement learners with the same ability may be crucial for their successful deployment in robotics. While hierarchical and KL-regularized reinforcement learning individually hold promise here, arguably a hybrid approach could combine their respective benefits. Key to these fields is the use of information asymmetry across architectural modules to bias which skills are learnt. While asymmetry choice has a large influence on transferability, existing methods base their choice primarily on intuition in a domain-independent, potentially sub-optimal, manner. In this paper, we theoretically and empirically show the crucial expressivity-transferability trade-off of skills across sequential tasks, controlled by information asymmetry. Given this insight, we introduce Attentive Priors for Expressive and Transferable Skills (APES), a hierarchical KL-regularized method, heavily benefiting from both priors and hierarchy. Unlike existing approaches, APES automates the choice of asymmetry by learning it in a data-driven, domain-dependent, way based on our expressivity-transferability theorems. Experiments over complex transfer domains of varying levels of extrapolation and sparsity, such as robot block stacking, demonstrate the criticality of the correct asymmetric choice, with APES drastically outperforming previous methods.
https://openreview.net/pdf/c363ed9425c2f59718edea214b3ce8625578da08.pdf
Self-Supervised Set Representation Learning for Unsupervised Meta-Learning
https://openreview.net/forum?id=kIAx30hYi_p
https://openreview.net/forum?id=kIAx30hYi_p
Dong Bok Lee,Seanie Lee,Kenji Kawaguchi,Yunji Kim,Jihwan Bang,Jung-Woo Ha,Sung Ju Hwang
ICLR 2023,Poster
Unsupervised meta-learning (UML) essentially shares the spirit of self-supervised learning (SSL) in that both aim at learning models without any human supervision so that the models can be adapted to downstream tasks. Further, the learning objective of self-supervised learning, which pulls positive pairs closer and repels negative pairs, also resembles metric-based meta-learning. Metric-based meta-learning is one of the most successful meta-learning methods, which learns to minimize the distance between representations from the same class. One notable aspect of metric-based meta-learning, however, is that it is widely interpreted as a set-level problem, since the inference of discriminative class prototypes (or set representations) from few examples is crucial for the performance of downstream tasks. Motivated by this, we propose Set-SimCLR, a novel self-supervised set representation learning framework targeting the UML problem. Specifically, our Set-SimCLR learns a set encoder on top of instance representations to maximize the agreement between two sets of augmented samples, which are generated by applying stochastic augmentations to a given image. We theoretically analyze how our proposed set representation learning can potentially improve generalization performance at meta-test time. We also empirically validate its effectiveness on various benchmark datasets, showing that Set-SimCLR largely outperforms both UML and instance-level self-supervised learning baselines.
https://openreview.net/pdf/9ce55edaecb2d3fc88d5f57116eae4648e79ecf4.pdf
Causal Representation Learning for Instantaneous and Temporal Effects in Interactive Systems
https://openreview.net/forum?id=itZ6ggvMnzS
https://openreview.net/forum?id=itZ6ggvMnzS
Phillip Lippe,Sara Magliacane,Sindy Löwe,Yuki M Asano,Taco Cohen,Efstratios Gavves
ICLR 2023,Poster
Causal representation learning is the task of identifying the underlying causal variables and their relations from high-dimensional observations, such as images. Recent work has shown that one can reconstruct the causal variables from temporal sequences of observations under the assumption that there are no instantaneous causal relations between them. In practical applications, however, our measurement or frame rate might be slower than many of the causal effects. This effectively creates ``instantaneous'' effects and invalidates previous identifiability results. To address this issue, we propose iCITRIS, a causal representation learning method that allows for instantaneous effects in intervened temporal sequences when intervention targets can be observed, e.g., as actions of an agent. iCITRIS identifies the potentially multidimensional causal variables from temporal observations, while simultaneously using a differentiable causal discovery method to learn their causal graph. In experiments on three datasets of interactive systems, iCITRIS accurately identifies the causal variables and their causal graph.
https://openreview.net/pdf/4fc22cb9d6d8f26d83b6ff14379f6e08f167c4c2.pdf
Visual Imitation Learning with Patch Rewards
https://openreview.net/forum?id=OnM3R47KIiU
https://openreview.net/forum?id=OnM3R47KIiU
Minghuan Liu,Tairan He,Weinan Zhang,Shuicheng YAN,Zhongwen Xu
ICLR 2023,Poster
Visual imitation learning enables reinforcement learning agents to learn to behave from expert visual demonstrations such as videos or image sequences, without explicit, well-defined rewards. Previous studies either adopt supervised learning techniques or induce simple and coarse scalar rewards from pixels, neglecting the dense information contained in the image demonstrations. In this work, we propose to measure the expertise of various local regions of image samples, called patches, and recover multi-dimensional patch rewards accordingly. The patch reward is a more precise rewarding characterization that serves as a fine-grained expertise measurement and a visual explainability tool. Specifically, we present Adversarial Imitation Learning with Patch Rewards (PatchAIL), which employs a patch-based discriminator to measure the expertise of different local parts of given images and provide patch rewards. The patch-based knowledge is also used to regularize the aggregated reward and stabilize training. We evaluate our method on the standard pixel-based benchmark DeepMind Control Suite. The experimental results demonstrate that PatchAIL outperforms baseline methods and provides valuable interpretations for visual demonstrations.
https://openreview.net/pdf/c288073bd4cad20bf8058f578c66b24863f2944d.pdf
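The patch-reward idea can be sketched with a fully convolutional discriminator that outputs a logit per local receptive field instead of a single scalar per image; a per-patch reward map is then derived from those logits and aggregated for the policy. The architecture below is a generic PatchGAN-style stack and the log D - log(1 - D) reward shape is one common adversarial-imitation choice; it is an illustration in the spirit of PatchAIL, not the paper's exact network, reward, or regularizer.

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Fully convolutional discriminator that scores local image patches."""

    def __init__(self, in_channels=9):        # e.g. 3 stacked RGB frames
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 3, stride=1, padding=1),
        )

    def forward(self, obs):
        return self.net(obs)                   # patch logits, shape (B, 1, H', W')

def patch_rewards(disc, obs):
    logits = disc(obs)
    d = torch.sigmoid(logits)
    reward_map = torch.log(d + 1e-8) - torch.log(1 - d + 1e-8)
    return reward_map, reward_map.mean(dim=(1, 2, 3))   # per-patch map + aggregated scalar

# toy usage
disc = PatchDiscriminator()
obs = torch.randn(2, 9, 84, 84)
rmap, r = patch_rewards(disc, obs)
print(rmap.shape, r.shape)   # torch.Size([2, 1, 21, 21]) torch.Size([2])
```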
CodeT: Code Generation with Generated Tests
https://openreview.net/forum?id=ktrw68Cmu9c
https://openreview.net/forum?id=ktrw68Cmu9c
Bei Chen,Fengji Zhang,Anh Nguyen,Daoguang Zan,Zeqi Lin,Jian-Guang Lou,Weizhu Chen
ICLR 2023,Poster
The task of generating code solutions for a given programming problem can benefit from the use of pre-trained language models such as Codex, which can produce multiple diverse samples. However, a major challenge for this task is to select the most appropriate solution from the multiple samples generated by the pre-trained language models. A natural way to evaluate the quality and correctness of a code solution is to run it against a set of test cases, but the manual creation of such test cases is often costly and time-consuming. In this paper, we propose a novel method, CodeT, that leverages the same pre-trained language models to automatically generate test cases for the code samples, thus reducing the human effort and increasing the coverage of the test scenarios. CodeT then executes the code samples using the generated test cases, and performs a dual execution agreement, which considers both the consistency of the outputs against the generated test cases and the agreement of the outputs with other code samples. We conduct comprehensive experiments on four benchmarks, HumanEval, MBPP, APPS and CodeContests, using five different pre-trained language models with varying sizes and capabilities. Our results show that CodeT can significantly improve the performance of code solution selection over previous methods, achieving remarkable and consistent gains across different models and benchmarks. For instance, CodeT improves the pass@1 metric on HumanEval to 65.8%, which represents an absolute improvement of 18.8% over the code-davinci-002 model, and an absolute improvement of more than 20% over the previous state-of-the-art results.
https://openreview.net/pdf/39c039dd3e5f58baafde9d304a11e04a606dda0b.pdf
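The dual execution agreement can be sketched as grouping code samples by the exact set of generated tests they pass and scoring each group by how many samples agree and how many tests they satisfy. This is a simplified reading of the paper's scoring for illustration; the `run` callback below stands in for a real sandboxed executor, and all names are assumptions.

```python
from collections import defaultdict

def dual_execution_agreement(code_samples, test_cases, run):
    """Rank code samples by a CodeT-style dual execution agreement score.

    `run(code, test)` should return True if the sample passes the test.
    Samples are grouped by the set of tests they pass; a group's score is
    (#samples in the group) * (#tests the group passes), rewarding solutions
    that both agree with many other samples and satisfy many tests.
    """
    groups = defaultdict(list)
    for code in code_samples:
        passed = frozenset(t for t in test_cases if run(code, t))
        groups[passed].append(code)

    scored = [(len(samples) * len(passed), samples)
              for passed, samples in groups.items()]
    scored.sort(key=lambda item: item[0], reverse=True)
    return scored[0][1][0] if scored else None      # a sample from the best group

# toy usage: "code" = candidate add functions, "tests" = (a, b, expected)
candidates = [lambda a, b: a + b, lambda a, b: a + b, lambda a, b: a - b]
tests = [(1, 2, 3), (2, 2, 4), (5, 1, 6)]
best = dual_execution_agreement(
    candidates, tests, run=lambda f, t: f(t[0], t[1]) == t[2])
print(best(10, 5))   # 15
```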
Learning to Generate Columns with Application to Vertex Coloring
https://openreview.net/forum?id=JHW30A4DXtO
https://openreview.net/forum?id=JHW30A4DXtO
Yuan Sun,Andreas T Ernst,Xiaodong Li,Jake Weiner
ICLR 2023,Poster
We present a new column generation approach based on Machine Learning (ML) for solving combinatorial optimization problems. The aim of our method is to generate high-quality columns that belong to an optimal integer solution, in contrast to the traditional approach that aims at solving linear programming relaxations. To achieve this aim, we design novel features to characterize a column, and develop an effective ML model to predict whether a column belongs to an optimal integer solution. We then use the ML model as a filter to select high-quality columns generated from a sampling method and use the selected columns to construct an integer solution. Our method is computationally fast compared to the traditional methods that generate columns by repeatedly solving a pricing problem. We demonstrate the efficacy of our method on the vertex coloring problem, by empirically showing that the columns selected by our ML model are significantly better, in terms of the integer solution that can be constructed from them, than those selected randomly or based only on their reduced cost. Further, we show that the columns generated by our method can be used as a warm start to boost the performance of a column generation-based heuristic.
https://openreview.net/pdf/d01af810369c47ba35939f31eeca144538d779b4.pdf