| title | url | authors | detail_url | tags | Bibtex | Paper | Supplemental | abstract | Errata |
|---|---|---|---|---|---|---|---|---|---|
Saddle-to-Saddle Dynamics in Diagonal Linear Networks
|
https://papers.nips.cc/paper_files/paper/2023/hash/17a9ab4190289f0e1504bbb98d1d111a-Abstract-Conference.html
|
Scott Pesme, Nicolas Flammarion
|
https://papers.nips.cc/paper_files/paper/2023/hash/17a9ab4190289f0e1504bbb98d1d111a-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/21501-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/17a9ab4190289f0e1504bbb98d1d111a-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/17a9ab4190289f0e1504bbb98d1d111a-Supplemental-Conference.pdf
|
In this paper we fully describe the trajectory of gradient flow over $2$-layer diagonal linear networks for the regression setting in the limit of vanishing initialisation. We show that the limiting flow successively jumps from one saddle of the training loss to another until reaching the minimum $\ell_1$-norm solution. We explicitly characterise the visited saddles as well as the jump times through a recursive algorithm reminiscent of the LARS algorithm used for computing the Lasso path. Starting from the zero vector, coordinates are successively activated until the minimum $\ell_1$-norm solution is recovered, revealing an incremental learning process. Our proof leverages a convenient arc-length time-reparametrisation which enables us to keep track of the transitions between the jumps. Our analysis requires negligible assumptions on the data, applies to both under- and overparametrised settings, and covers complex cases where there is no monotonicity of the number of active coordinates. We provide numerical experiments to support our findings.
| null |
Encoding Human Behavior in Information Design through Deep Learning
|
https://papers.nips.cc/paper_files/paper/2023/hash/17d0a21da4ec2c12b4f07fa2e34e4d6c-Abstract-Conference.html
|
Guanghui Yu, Wei Tang, Saumik Narayanan, Chien-Ju Ho
|
https://papers.nips.cc/paper_files/paper/2023/hash/17d0a21da4ec2c12b4f07fa2e34e4d6c-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/22814-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/17d0a21da4ec2c12b4f07fa2e34e4d6c-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/17d0a21da4ec2c12b4f07fa2e34e4d6c-Supplemental-Conference.zip
|
We initiate the study of $\textit{behavioral information design}$ through deep learning. In information design, a $\textit{sender}$ aims to persuade a $\textit{receiver}$ to take certain actions by strategically revealing information. We address scenarios in which the receiver may exhibit behavior patterns that deviate from the standard Bayesian rationality assumption. We propose HAIDNet, a neural-network-based optimization framework for information design that can adapt to multiple representations of human behavior. Through extensive simulation, we show that HAIDNet can not only recover information policies that are near-optimal compared with known analytical solutions, but can also extend to designing information policies for settings that are computationally challenging (e.g., when there are multiple receivers) or for settings where there are no known solutions in general (e.g., when the receiver behavior does not follow the Bayesian rationality assumption). We also conduct real-world human-subject experiments and demonstrate that our framework can capture human behavior from data and lead to more effective information policies for real-world human receivers.
| null |
Collaboratively Learning Linear Models with Structured Missing Data
|
https://papers.nips.cc/paper_files/paper/2023/hash/17f158c25b08758cf650130f7f173e51-Abstract-Conference.html
|
Chen Cheng, Gary Cheng, John C. Duchi
|
https://papers.nips.cc/paper_files/paper/2023/hash/17f158c25b08758cf650130f7f173e51-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/22209-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/17f158c25b08758cf650130f7f173e51-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/17f158c25b08758cf650130f7f173e51-Supplemental-Conference.pdf
|
We study the problem of collaboratively learning least squares estimates for $m$ agents. Each agent observes a different subset of the features---e.g., containing data collected from sensors of varying resolution. Our goal is to determine how to coordinate the agents in order to produce the best estimator for each agent. We propose Collab, a distributed, semi-supervised algorithm consisting of three steps: local training, aggregation, and distribution. Our procedure does not require communicating the labeled data, making it communication efficient and useful in settings where the labeled data is inaccessible. Despite this handicap, our procedure is nearly asymptotically local-minimax optimal---even among estimators allowed to communicate the labeled data, such as imputation methods. We test our method on US Census data. We also discuss generalizations of our method to non-Gaussian feature settings, non-linear settings, and Federated Learning.
| null |
Generating Behaviorally Diverse Policies with Latent Diffusion Models
|
https://papers.nips.cc/paper_files/paper/2023/hash/180d4373aca26bd86bf45fc50d1a709f-Abstract-Conference.html
|
Shashank Hegde, Sumeet Batra, K.R. Zentner, Gaurav Sukhatme
|
https://papers.nips.cc/paper_files/paper/2023/hash/180d4373aca26bd86bf45fc50d1a709f-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/22414-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/180d4373aca26bd86bf45fc50d1a709f-Paper-Conference.pdf
| null |
Recent progress in Quality Diversity Reinforcement Learning (QD-RL) has enabled learning a collection of behaviorally diverse, high performing policies. However, these methods typically involve storing thousands of policies, which results in high space-complexity and poor scaling to additional behaviors. Condensing the archive into a single model while retaining the performance and coverage of the original collection of policies has proved challenging. In this work, we propose using diffusion models to distill the archive into a single generative model over policy parameters. We show that our method achieves a compression ratio of 13x while recovering 98% of the original rewards and 89% of the original humanoid archive coverage. Further, the conditioning mechanism of diffusion models allows for flexibly selecting and sequencing behaviors, including using language. Project website: https://sites.google.com/view/policydiffusion/home.
| null |
Incentives in Private Collaborative Machine Learning
|
https://papers.nips.cc/paper_files/paper/2023/hash/180f1a1de4244c009ff0848c55ae54a5-Abstract-Conference.html
|
Rachael Sim, Yehong Zhang, Nghia Hoang, Xinyi Xu, Bryan Kian Hsiang Low, Patrick Jaillet
|
https://papers.nips.cc/paper_files/paper/2023/hash/180f1a1de4244c009ff0848c55ae54a5-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/19931-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/180f1a1de4244c009ff0848c55ae54a5-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/180f1a1de4244c009ff0848c55ae54a5-Supplemental-Conference.zip
|
Collaborative machine learning involves training models on data from multiple parties but must incentivize their participation. Existing data valuation methods fairly value and reward each party based on shared data or model parameters but neglect the privacy risks involved. To address this, we introduce differential privacy (DP) as an incentive. Each party can select its required DP guarantee and perturb its sufficient statistic (SS) accordingly. The mediator values the perturbed SS by the Bayesian surprise it elicits about the model parameters. As our valuation function enforces a privacy-valuation trade-off, parties are deterred from selecting excessive DP guarantees that reduce the utility of the grand coalition's model. Finally, the mediator rewards each party with different posterior samples of the model parameters. Such rewards still satisfy existing incentives like fairness but additionally preserve DP and a high similarity to the grand coalition's posterior. We empirically demonstrate the effectiveness and practicality of our approach on synthetic and real-world datasets.
| null |
VideoComposer: Compositional Video Synthesis with Motion Controllability
|
https://papers.nips.cc/paper_files/paper/2023/hash/180f6184a3458fa19c28c5483bc61877-Abstract-Conference.html
|
Xiang Wang, Hangjie Yuan, Shiwei Zhang, Dayou Chen, Jiuniu Wang, Yingya Zhang, Yujun Shen, Deli Zhao, Jingren Zhou
|
https://papers.nips.cc/paper_files/paper/2023/hash/180f6184a3458fa19c28c5483bc61877-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/21545-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/180f6184a3458fa19c28c5483bc61877-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/180f6184a3458fa19c28c5483bc61877-Supplemental-Conference.zip
|
The pursuit of controllability as a higher standard of visual content creation has yielded remarkable progress in customizable image synthesis. However, achieving controllable video synthesis remains challenging due to the large variation of temporal dynamics and the requirement of cross-frame temporal consistency. Based on the paradigm of compositional generation, this work presents VideoComposer, which allows users to flexibly compose a video with textual conditions, spatial conditions, and, more importantly, temporal conditions. Specifically, considering the characteristic of video data, we introduce the motion vector from compressed videos as an explicit control signal to provide guidance regarding temporal dynamics. In addition, we develop a Spatio-Temporal Condition encoder (STC-encoder) that serves as a unified interface to effectively incorporate the spatial and temporal relations of sequential inputs, with which the model could make better use of temporal conditions and hence achieve higher inter-frame consistency. Extensive experimental results suggest that VideoComposer is able to control the spatial and temporal patterns simultaneously within a synthesized video in various forms, such as text description, sketch sequence, reference video, or even simply hand-crafted motions. The code and models are publicly available at https://videocomposer.github.io.
| null |
Look Beneath the Surface: Exploiting Fundamental Symmetry for Sample-Efficient Offline RL
|
https://papers.nips.cc/paper_files/paper/2023/hash/181a027913d36bc0a8857c0da661d621-Abstract-Conference.html
|
Peng Cheng, Xianyuan Zhan, zhihao wu, Wenjia Zhang, Youfang Lin, Shou cheng Song, Han Wang, Li Jiang
|
https://papers.nips.cc/paper_files/paper/2023/hash/181a027913d36bc0a8857c0da661d621-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/19897-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/181a027913d36bc0a8857c0da661d621-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/181a027913d36bc0a8857c0da661d621-Supplemental-Conference.pdf
|
Offline reinforcement learning (RL) offers an appealing approach to real-world tasks by learning policies from pre-collected datasets without interacting with the environment. However, the performance of existing offline RL algorithms heavily depends on the scale and state-action space coverage of datasets. Real-world data collection is often expensive and uncontrollable, leading to small and narrowly covered datasets and posing significant challenges for practical deployments of offline RL. In this paper, we provide a new insight that leveraging the fundamental symmetry of system dynamics can substantially enhance offline RL performance under small datasets. Specifically, we propose a Time-reversal symmetry (T-symmetry) enforced Dynamics Model (TDM), which establishes consistency between a pair of forward and reverse latent dynamics. TDM provides both well-behaved representations for small datasets and a new reliability measure for OOD samples based on compliance with the T-symmetry. These can be readily used to construct a new offline RL algorithm (TSRL) with less conservative policy constraints and a reliable latent space data augmentation procedure. Based on extensive experiments, we find that TSRL achieves strong performance on small benchmark datasets with as few as 1% of the original samples, which significantly outperforms recent offline RL algorithms in terms of data efficiency and generalizability. Code is available at: https://github.com/pcheng2/TSRL
| null |
Initialization-Dependent Sample Complexity of Linear Predictors and Neural Networks
|
https://papers.nips.cc/paper_files/paper/2023/hash/18210aa6209b9adfc97b8c17c3741d95-Abstract-Conference.html
|
Roey Magen, Ohad Shamir
|
https://papers.nips.cc/paper_files/paper/2023/hash/18210aa6209b9adfc97b8c17c3741d95-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/22246-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/18210aa6209b9adfc97b8c17c3741d95-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/18210aa6209b9adfc97b8c17c3741d95-Supplemental-Conference.pdf
|
We provide several new results on the sample complexity of vector-valued linear predictors (parameterized by a matrix), and more generally neural networks. Focusing on size-independent bounds, where only the Frobenius norm distance of the parameters from some fixed reference matrix $W_0$ is controlled, we show that the sample complexity behavior can be surprisingly different than what we may expect considering the well-studied setting of scalar-valued linear predictors. This also leads to new sample complexity bounds for feed-forward neural networks, tackling some open questions in the literature, and establishing a new convex linear prediction problem that is provably learnable without uniform convergence.
| null |
Incentivizing Honesty among Competitors in Collaborative Learning and Optimization
|
https://papers.nips.cc/paper_files/paper/2023/hash/182b39a4458fb4a9a8d6871a6671ff3e-Abstract-Conference.html
|
Florian E. Dorner, Nikola Konstantinov, Georgi Pashaliev, Martin Vechev
|
https://papers.nips.cc/paper_files/paper/2023/hash/182b39a4458fb4a9a8d6871a6671ff3e-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/20441-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/182b39a4458fb4a9a8d6871a6671ff3e-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/182b39a4458fb4a9a8d6871a6671ff3e-Supplemental-Conference.zip
|
Collaborative learning techniques have the potential to enable training machine learning models that are superior to models trained on a single entity’s data. However, in many cases, potential participants in such collaborative schemes are competitors on a downstream task, such as firms that each aim to attract customers by providing the best recommendations. This can incentivize dishonest updates that damage other participants' models, potentially undermining the benefits of collaboration. In this work, we formulate a game that models such interactions and study two learning tasks within this framework: single-round mean estimation and multi-round SGD on strongly-convex objectives. For a natural class of player actions, we show that rational clients are incentivized to strongly manipulate their updates, preventing learning. We then propose mechanisms that incentivize honest communication and ensure learning quality comparable to full cooperation. Lastly, we empirically demonstrate the effectiveness of our incentive scheme on a standard non-convex federated learning benchmark. Our work shows that explicitly modeling the incentives and actions of dishonest clients, rather than assuming them malicious, can enable strong robustness guarantees for collaborative learning.
| null |
SNAP: Self-Supervised Neural Maps for Visual Positioning and Semantic Understanding
|
https://papers.nips.cc/paper_files/paper/2023/hash/182c433412b33c14e32a7c4fc2c3e290-Abstract-Conference.html
|
Paul-Edouard Sarlin, Eduard Trulls, Marc Pollefeys, Jan Hosang, Simon Lynen
|
https://papers.nips.cc/paper_files/paper/2023/hash/182c433412b33c14e32a7c4fc2c3e290-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/19927-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/182c433412b33c14e32a7c4fc2c3e290-Paper-Conference.pdf
| null |
Semantic 2D maps are commonly used by humans and machines for navigation purposes, whether it's walking or driving. However, these maps have limitations: they lack detail, often contain inaccuracies, and are difficult to create and maintain, especially in an automated fashion. Can we use raw imagery to automatically create better maps that can be easily interpreted by both humans and machines? We introduce SNAP, a deep network that learns rich 2D neural maps from ground-level and overhead images. We train our model to align neural maps estimated from different inputs, supervised only with camera poses over tens of millions of StreetView images. SNAP can resolve the location of challenging image queries beyond the reach of traditional methods, outperforming the state of the art in localization by a large margin. Moreover, our neural maps encode not only geometry and appearance but also high-level semantics, discovered without explicit supervision. This enables effective pre-training for data-efficient semantic scene understanding, with the potential to unlock cost-efficient creation of more detailed maps.
| null |
Equal Opportunity of Coverage in Fair Regression
|
https://papers.nips.cc/paper_files/paper/2023/hash/1849b94ed817ae7043a6b6934ef410c1-Abstract-Conference.html
|
Fangxin Wang, Lu Cheng, Ruocheng Guo, Kay Liu, Philip S Yu
|
https://papers.nips.cc/paper_files/paper/2023/hash/1849b94ed817ae7043a6b6934ef410c1-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/20000-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1849b94ed817ae7043a6b6934ef410c1-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1849b94ed817ae7043a6b6934ef410c1-Supplemental-Conference.pdf
|
We study fair machine learning (ML) under predictive uncertainty to enable reliable and trustworthy decision-making. The seminal work of 'equalized coverage' proposed an uncertainty-aware fairness notion. However, it does not guarantee equal coverage rates across more fine-grained groups (e.g., low-income females) when conditioning on the true label, and is biased in the assessment of uncertainty. To tackle these limitations, we propose a new uncertainty-aware fairness notion -- Equal Opportunity of Coverage (EOC) -- that aims to achieve two properties: (1) coverage rates for different groups with similar outcomes are close, and (2) the coverage rate for the entire population remains at a predetermined level. Further, the prediction intervals should be narrow to be informative. We propose Binned Fair Quantile Regression (BFQR), a distribution-free post-processing method to improve EOC with reasonable width for any trained ML model. It first calibrates a hold-out set to bound deviation from EOC, then leverages conformal prediction to maintain EOC on a test set while optimizing prediction interval width. Experimental results demonstrate the effectiveness of our method in improving EOC.
| null |
Nonparametric Teaching for Multiple Learners
|
https://papers.nips.cc/paper_files/paper/2023/hash/184a03a3ad07e8897c62461c02634b02-Abstract-Conference.html
|
Chen Zhang, Xiaofeng Cao, Weiyang Liu, Ivor Tsang, James Kwok
|
https://papers.nips.cc/paper_files/paper/2023/hash/184a03a3ad07e8897c62461c02634b02-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/20523-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/184a03a3ad07e8897c62461c02634b02-Paper-Conference.pdf
| null |
We study the problem of teaching multiple learners simultaneously in the nonparametric iterative teaching setting, where the teacher iteratively provides examples to the learner for accelerating the acquisition of a target concept. This problem is motivated by the gap between the current single-learner teaching setting and the real-world scenario of human instruction, where a teacher typically imparts knowledge to multiple students. Under the new problem formulation, we introduce a novel framework -- Multi-learner Nonparametric Teaching (MINT). In MINT, the teacher aims to instruct multiple learners, with each learner focusing on learning a scalar-valued target model. To achieve this, we frame the problem as teaching a vector-valued target model and extend the target model space from a scalar-valued reproducing kernel Hilbert space used in single-learner scenarios to a vector-valued space. Furthermore, we demonstrate that MINT offers significant teaching speed-up over repeated single-learner teaching, particularly when the multiple learners can communicate with each other. Lastly, we conduct extensive experiments to validate the practicality and efficiency of MINT.
| null |
EvoPrompting: Language Models for Code-Level Neural Architecture Search
|
https://papers.nips.cc/paper_files/paper/2023/hash/184c1e18d00d7752805324da48ad25be-Abstract-Conference.html
|
Angelica Chen, David Dohan, David So
|
https://papers.nips.cc/paper_files/paper/2023/hash/184c1e18d00d7752805324da48ad25be-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/20436-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/184c1e18d00d7752805324da48ad25be-Paper-Conference.pdf
| null |
Given the recent impressive accomplishments of language models (LMs) for code generation, we explore the use of LMs as general adaptive mutation and crossover operators for an evolutionary neural architecture search (NAS) algorithm. While NAS still proves too difficult a task for LMs to succeed at solely through prompting, we find that the combination of evolutionary prompt engineering with soft prompt-tuning, a method we term EvoPrompting, consistently finds diverse and high performing models. We first demonstrate that EvoPrompting is effective on the computationally efficient MNIST-1D dataset, where EvoPrompting produces convolutional architecture variants that outperform both those designed by human experts and naive few-shot prompting in terms of accuracy and model size. We then apply our method to searching for graph neural networks on the CLRS Algorithmic Reasoning Benchmark, where EvoPrompting is able to design novel architectures that outperform current state-of-the-art models on 21 out of 30 algorithmic reasoning tasks while maintaining similar model size. EvoPrompting is successful at designing accurate and efficient neural network architectures across a variety of machine learning tasks, while also being general enough for easy adaptation to other tasks beyond neural network design.
| null |
Global-correlated 3D-decoupling Transformer for Clothed Avatar Reconstruction
|
https://papers.nips.cc/paper_files/paper/2023/hash/1857d2e8f51ed219ca0c2663239b38e5-Abstract-Conference.html
|
Zechuan Zhang, Li Sun, Zongxin Yang, Ling Chen, Yi Yang
|
https://papers.nips.cc/paper_files/paper/2023/hash/1857d2e8f51ed219ca0c2663239b38e5-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/20702-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1857d2e8f51ed219ca0c2663239b38e5-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1857d2e8f51ed219ca0c2663239b38e5-Supplemental-Conference.pdf
|
Reconstructing 3D clothed human avatars from single images is a challenging task, especially when encountering complex poses and loose clothing. Current methods exhibit limitations in performance, largely attributable to their dependence on insufficient 2D image features and inconsistent query methods. Owing to this, we present the Global-correlated 3D-decoupling Transformer for clothed Avatar reconstruction (GTA), a novel transformer-based architecture that reconstructs clothed human avatars from monocular images. Our approach leverages transformer architectures by utilizing a Vision Transformer model as an encoder for capturing global-correlated image features. Subsequently, our innovative 3D-decoupling decoder employs cross-attention to decouple tri-plane features, using learnable embeddings as queries for cross-plane generation. To effectively enhance feature fusion with the tri-plane 3D feature and human body prior, we propose a hybrid prior fusion strategy combining spatial and prior-enhanced queries, leveraging the benefits of spatial localization and human body prior knowledge. Comprehensive experiments on CAPE and THuman2.0 datasets illustrate that our method outperforms state-of-the-art approaches in both geometry and texture reconstruction, exhibiting high robustness to challenging poses and loose clothing, and producing higher-resolution textures. Codes are available at https://github.com/River-Zhang/GTA.
| null |
TopP&R: Robust Support Estimation Approach for Evaluating Fidelity and Diversity in Generative Models
|
https://papers.nips.cc/paper_files/paper/2023/hash/185969291540b3cd86e70c51e8af5d08-Abstract-Conference.html
|
Pum Jun Kim, Yoojin Jang, Jisu Kim, Jaejun Yoo
|
https://papers.nips.cc/paper_files/paper/2023/hash/185969291540b3cd86e70c51e8af5d08-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/22140-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/185969291540b3cd86e70c51e8af5d08-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/185969291540b3cd86e70c51e8af5d08-Supplemental-Conference.pdf
|
We propose a robust and reliable evaluation metric for generative models called Topological Precision and Recall (TopP&R, pronounced “topper”), which systematically estimates supports by retaining only topologically and statistically significant features with a certain level of confidence. Existing metrics, such as Inception Score (IS), Frechet Inception Distance (FID), and various Precision and Recall (P&R) variants, rely heavily on support estimates derived from sample features. However, the reliability of these estimates has been overlooked, even though the quality of the evaluation hinges entirely on their accuracy. In this paper, we demonstrate that current methods not only fail to accurately assess sample quality when support estimation is unreliable, but also yield inconsistent results. In contrast, TopP&R reliably evaluates the sample quality and ensures statistical consistency in its results. Our theoretical and experimental findings reveal that TopP&R provides a robust evaluation, accurately capturing the true trend of change in samples, even in the presence of outliers and non-independent and identically distributed (Non-IID) perturbations where other methods result in inaccurate support estimations. To our knowledge, TopP&R is the first evaluation metric specifically focused on the robust estimation of supports, offering statistical consistency under noise conditions.
| null |
A Unified Detection Framework for Inference-Stage Backdoor Defenses
|
https://papers.nips.cc/paper_files/paper/2023/hash/1868a3c73d0d2a44c42458575fa8514c-Abstract-Conference.html
|
Xun Xian, Ganghua Wang, Jayanth Srinivasa, Ashish Kundu, Xuan Bi, Mingyi Hong, Jie Ding
|
https://papers.nips.cc/paper_files/paper/2023/hash/1868a3c73d0d2a44c42458575fa8514c-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/22081-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1868a3c73d0d2a44c42458575fa8514c-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1868a3c73d0d2a44c42458575fa8514c-Supplemental-Conference.zip
|
Backdoor attacks involve inserting poisoned samples during training, resulting in a model containing a hidden backdoor that can trigger specific behaviors without impacting performance on normal samples. These attacks are challenging to detect, as the backdoored model appears normal until activated by the backdoor trigger, rendering them particularly stealthy. In this study, we devise a unified inference-stage detection framework to defend against backdoor attacks. We first rigorously formulate the inference-stage backdoor detection problem, encompassing various existing methods, and discuss several challenges and limitations. We then propose a framework with provable guarantees on the false positive rate or the probability of misclassifying a clean sample. Further, we derive the most powerful detection rule to maximize the detection power, namely the rate of accurately identifying a backdoor sample, given a false positive rate under classical learning scenarios. Based on the theoretically optimal detection rule, we suggest a practical and effective approach for real-world applications based on the latent representations of backdoored deep nets. We extensively evaluate our method on 14 different backdoor attacks using Computer Vision (CV) and Natural Language Processing (NLP) benchmark datasets. The experimental findings align with our theoretical results. We significantly surpass the state-of-the-art methods, e.g., up to 300\% improvement on the detection power as evaluated by AUCROC, over the state-of-the-art defense against advanced adaptive backdoor attacks.
| null |
Non-Stationary Bandits with Auto-Regressive Temporal Dependency
|
https://papers.nips.cc/paper_files/paper/2023/hash/186a213d720568b31f9b59c085a23e5a-Abstract-Conference.html
|
Qinyi Chen, Negin Golrezaei, Djallel Bouneffouf
|
https://papers.nips.cc/paper_files/paper/2023/hash/186a213d720568b31f9b59c085a23e5a-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/20892-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/186a213d720568b31f9b59c085a23e5a-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/186a213d720568b31f9b59c085a23e5a-Supplemental-Conference.zip
|
Traditional multi-armed bandit (MAB) frameworks, predominantly examined under stochastic or adversarial settings, often overlook the temporal dynamics inherent in many real-world applications such as recommendation systems and online advertising. This paper introduces a novel non-stationary MAB framework that captures the temporal structure of these real-world dynamics through an auto-regressive (AR) reward structure. We propose an algorithm that integrates two key mechanisms: (i) an alternation mechanism adept at leveraging temporal dependencies to dynamically balance exploration and exploitation, and (ii) a restarting mechanism designed to discard out-of-date information. Our algorithm achieves a regret upper bound that nearly matches the lower bound, with regret measured against a robust dynamic benchmark. Finally, via a real-world case study on tourism demand prediction, we demonstrate both the efficacy of our algorithm and the broader applicability of our techniques to more complex, rapidly evolving time series.
| null |
Globally solving the Gromov-Wasserstein problem for point clouds in low dimensional Euclidean spaces
|
https://papers.nips.cc/paper_files/paper/2023/hash/188409d2ad91db4fb13644d024d99074-Abstract-Conference.html
|
Martin Ryner, Jan Kronqvist, Johan Karlsson
|
https://papers.nips.cc/paper_files/paper/2023/hash/188409d2ad91db4fb13644d024d99074-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/20426-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/188409d2ad91db4fb13644d024d99074-Paper-Conference.pdf
| null |
This paper presents a framework for computing the Gromov-Wasserstein problem between two sets of points in low dimensional spaces, where the discrepancy is the squared Euclidean norm. The Gromov-Wasserstein problem is a generalization of the optimal transport problem that finds the assignment between two sets preserving pairwise distances as much as possible. This can be used to quantify the similarity between two formations or shapes, a common problem in AI and machine learning. The problem can be formulated as a Quadratic Assignment Problem (QAP), which is in general computationally intractable even for small problems. Our framework addresses this challenge by reformulating the QAP as an optimization problem with a low-dimensional domain, leveraging the fact that the problem can be expressed as a concave quadratic optimization problem with low rank. The method scales well with the number of points, and it can be used to find the global solution for large-scale problems with thousands of points. We compare the computational complexity of our approach with state-of-the-art methods on synthetic problems and apply it to a near-symmetrical problem which is of particular interest in computational biology.
| null |
Combinatorial Optimization with Policy Adaptation using Latent Space Search
|
https://papers.nips.cc/paper_files/paper/2023/hash/18d3a2f3068d6c669dcae19ceca1bc24-Abstract-Conference.html
|
Felix Chalumeau, Shikha Surana, Clément Bonnet, Nathan Grinsztajn, Arnu Pretorius, Alexandre Laterre, Tom Barrett
|
https://papers.nips.cc/paper_files/paper/2023/hash/18d3a2f3068d6c669dcae19ceca1bc24-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/19649-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/18d3a2f3068d6c669dcae19ceca1bc24-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/18d3a2f3068d6c669dcae19ceca1bc24-Supplemental-Conference.zip
|
Combinatorial Optimization underpins many real-world applications and yet, designing performant algorithms to solve these complex, typically NP-hard, problems remains a significant research challenge. Reinforcement Learning (RL) provides a versatile framework for designing heuristics across a broad spectrum of problem domains. However, despite notable progress, RL has not yet supplanted industrial solvers as the go-to solution. Current approaches emphasize pre-training heuristics that construct solutions, but often rely on search procedures with limited variance, such as stochastically sampling numerous solutions from a single policy, or employing computationally expensive fine-tuning of the policy on individual problem instances. Building on the intuition that performant search at inference time should be anticipated during pre-training, we propose COMPASS, a novel RL approach that parameterizes a distribution of diverse and specialized policies conditioned on a continuous latent space. We evaluate COMPASS across three canonical problems - Travelling Salesman, Capacitated Vehicle Routing, and Job-Shop Scheduling - and demonstrate that our search strategy (i) outperforms state-of-the-art approaches in 9 out of 11 standard benchmarking tasks and (ii) generalizes better, surpassing all other approaches on a set of 18 procedurally transformed instance distributions.
| null |
Adversarial Resilience in Sequential Prediction via Abstention
|
https://papers.nips.cc/paper_files/paper/2023/hash/1967f962c7c2083618236d80eeb9d1ac-Abstract-Conference.html
|
Surbhi Goel, Steve Hanneke, Shay Moran, Abhishek Shetty
|
https://papers.nips.cc/paper_files/paper/2023/hash/1967f962c7c2083618236d80eeb9d1ac-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/21568-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1967f962c7c2083618236d80eeb9d1ac-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1967f962c7c2083618236d80eeb9d1ac-Supplemental-Conference.pdf
|
We study the problem of sequential prediction in the stochastic setting with an adversary that is allowed to inject clean-label adversarial (or out-of-distribution) examples. Algorithms designed to handle purely stochastic data tend to fail in the presence of such adversarial examples, often leading to erroneous predictions. This is undesirable in many high-stakes applications such as medical recommendations, where abstaining from predictions on adversarial examples is preferable to misclassification. On the other hand, assuming fully adversarial data leads to very pessimistic bounds that are often vacuous in practice. To move away from these pessimistic guarantees, we propose a new model of sequential prediction that sits between the purely stochastic and fully adversarial settings by allowing the learner to abstain from making a prediction at no cost on adversarial examples, thereby asking the learner to make predictions with certainty. Assuming access to the marginal distribution on the non-adversarial examples, we design a learner whose error scales with the VC dimension (mirroring the stochastic setting) of the hypothesis class, as opposed to the Littlestone dimension which characterizes the fully adversarial setting. Furthermore, we design learners for VC dimension 1 classes and the class of axis-aligned rectangles, which work even in the absence of access to the marginal distribution. Our key technical contribution is a novel measure for quantifying uncertainty for learning VC classes, which may be of independent interest.
| null |
Simplicity Bias in 1-Hidden Layer Neural Networks
|
https://papers.nips.cc/paper_files/paper/2023/hash/196c4e02b7464c554f0f5646af5d502e-Abstract-Conference.html
|
Depen Morwani, Jatin Batra, Prateek Jain, Praneeth Netrapalli
|
https://papers.nips.cc/paper_files/paper/2023/hash/196c4e02b7464c554f0f5646af5d502e-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/20366-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/196c4e02b7464c554f0f5646af5d502e-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/196c4e02b7464c554f0f5646af5d502e-Supplemental-Conference.zip
|
Recent works have demonstrated that neural networks exhibit extreme *simplicity bias* (SB). That is, they learn *only the simplest* features to solve a task at hand, even in the presence of other, more robust but more complex features. Due to the lack of a general and rigorous definition of *features*, these works showcase SB on *semi-synthetic* datasets such as Color-MNIST and MNIST-CIFAR, where defining features is relatively easier. In this work, we rigorously define as well as thoroughly establish SB for *one hidden layer* neural networks in the infinite width regime. More concretely, (i) we define SB as the network essentially being a function of a low dimensional projection of the inputs, (ii) theoretically, we show that when the data is linearly separable, the network primarily depends on only the linearly separable ($1$-dimensional) subspace even in the presence of an arbitrarily large number of other, more complex features which could have led to a significantly more robust classifier, (iii) empirically, we show that models trained on *real* datasets such as Imagenet and Waterbirds-Landbirds indeed depend on a low dimensional projection of the inputs, thereby demonstrating SB on these datasets, and (iv) finally, we present a natural ensemble approach that encourages diversity in models by training successive models on features not used by earlier models, and demonstrate that it yields models that are significantly more robust to Gaussian noise.
| null |
Temporally Disentangled Representation Learning under Unknown Nonstationarity
|
https://papers.nips.cc/paper_files/paper/2023/hash/19a567abaec3990cb40d7a013556fecd-Abstract-Conference.html
|
Xiangchen Song, Weiran Yao, Yewen Fan, Xinshuai Dong, Guangyi Chen, Juan Carlos Niebles, Eric Xing, Kun Zhang
|
https://papers.nips.cc/paper_files/paper/2023/hash/19a567abaec3990cb40d7a013556fecd-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/22934-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/19a567abaec3990cb40d7a013556fecd-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/19a567abaec3990cb40d7a013556fecd-Supplemental-Conference.zip
|
In unsupervised causal representation learning for sequential data with time-delayed latent causal influences, strong identifiability results for the disentanglement of causally-related latent variables have been established in stationary settings by leveraging temporal structure. However, in the nonstationary setting, existing work has only partially addressed the problem by either utilizing observed auxiliary variables (e.g., class labels and/or domain indexes) as side information or assuming simplified latent causal dynamics. Both constrain the method to a limited range of scenarios. In this study, we further explore the Markov assumption under a time-delayed causally related process in the nonstationary setting and show that under mild conditions, the independent latent components can be recovered from their nonlinear mixture up to a permutation and a component-wise transformation, without the observation of auxiliary variables. We then introduce NCTRL, a principled estimation framework, to reconstruct time-delayed latent causal variables and identify their relations from measured sequential data only. Empirical evaluations demonstrate reliable identification of time-delayed latent causal influences, with our methodology substantially outperforming existing baselines that fail to exploit the nonstationarity adequately and consequently cannot distinguish distribution shifts.
| null |
Accelerated Quasi-Newton Proximal Extragradient: Faster Rate for Smooth Convex Optimization
|
https://papers.nips.cc/paper_files/paper/2023/hash/19c9708f31ec44b5b1cbd67f91d05d95-Abstract-Conference.html
|
Ruichen Jiang, Aryan Mokhtari
|
https://papers.nips.cc/paper_files/paper/2023/hash/19c9708f31ec44b5b1cbd67f91d05d95-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/21330-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/19c9708f31ec44b5b1cbd67f91d05d95-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/19c9708f31ec44b5b1cbd67f91d05d95-Supplemental-Conference.zip
|
In this paper, we propose an accelerated quasi-Newton proximal extragradient method for solving unconstrained smooth convex optimization problems. With access only to the gradients of the objective, we prove that our method can achieve a convergence rate of $\mathcal{O}\bigl(\min\\{\frac{1}{k^2}, \frac{\sqrt{d\log k}}{k^{2.5}}\\}\bigr)$, where $d$ is the problem dimension and $k$ is the number of iterations. In particular, in the regime where $k = \mathcal{O}(d)$, our method matches the _optimal rate_ of $\mathcal{O}(\frac{1}{k^2})$ by Nesterov's accelerated gradient (NAG). Moreover, in the regime where $k = \Omega(d \log d)$, it outperforms NAG and converges at a _faster rate_ of $\mathcal{O}\bigl(\frac{\sqrt{d\log k}}{k^{2.5}}\bigr)$. To the best of our knowledge, this result is the first to demonstrate a provable gain for a quasi-Newton-type method over NAG in the convex setting. To achieve such results, we build our method on a recent variant of the Monteiro-Svaiter acceleration framework and adopt an online learning perspective to update the Hessian approximation matrices, in which we relate the convergence rate of our method to the dynamic regret of a specific online convex optimization problem in the space of matrices.
| null |
Conditional Adapters: Parameter-efficient Transfer Learning with Fast Inference
|
https://papers.nips.cc/paper_files/paper/2023/hash/19d7204af519eae9993f7f72377a0ec0-Abstract-Conference.html
|
Tao Lei, Junwen Bai, Siddhartha Brahma, Joshua Ainslie, Kenton Lee, Yanqi Zhou, Nan Du, Vincent Zhao, Yuexin Wu, Bo Li, Yu Zhang, Ming-Wei Chang
|
https://papers.nips.cc/paper_files/paper/2023/hash/19d7204af519eae9993f7f72377a0ec0-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/20804-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/19d7204af519eae9993f7f72377a0ec0-Paper-Conference.pdf
| null |
We propose Conditional Adapter (CoDA), a parameter-efficient transfer learning method that also improves inference efficiency. CoDA generalizes beyond standard adapter approaches to enable a new way of balancing speed and accuracy using conditional computation. Starting with an existing dense pretrained model, CoDA adds sparse activation together with a small number of new parameters and a light-weight training phase. Our experiments demonstrate that the CoDA approach provides an unexpectedly efficient way to transfer knowledge. Across a variety of language, vision, and speech tasks, CoDA achieves a 2x to 8x inference speed-up compared to the state-of-the-art Adapter approaches with moderate to no accuracy loss and the same parameter efficiency.
| null |
Time-Independent Information-Theoretic Generalization Bounds for SGLD
|
https://papers.nips.cc/paper_files/paper/2023/hash/19dbb86f771ddbf9986cf0c9b1c61c17-Abstract-Conference.html
|
Futoshi Futami, Masahiro Fujisawa
|
https://papers.nips.cc/paper_files/paper/2023/hash/19dbb86f771ddbf9986cf0c9b1c61c17-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/21534-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/19dbb86f771ddbf9986cf0c9b1c61c17-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/19dbb86f771ddbf9986cf0c9b1c61c17-Supplemental-Conference.pdf
|
We provide novel information-theoretic generalization bounds for stochastic gradient Langevin dynamics (SGLD) under the assumptions of smoothness and dissipativity, which are widely used in sampling and non-convex optimization studies. Our bounds are time-independent and decay to zero as the sample size increases, regardless of the number of iterations and whether the step size is fixed. Unlike previous studies, we derive the generalization error bounds by focusing on the time evolution of the Kullback--Leibler divergence, which is related to the stability of datasets and is an upper bound on the mutual information between output parameters and an input dataset. Additionally, we establish the first information-theoretic generalization bound when the training and test loss are the same by showing that a loss function of SGLD is sub-exponential. This bound is also time-independent and removes the problematic step size dependence in existing work, leading to an improved excess risk bound by combining our analysis with the existing non-convex optimization error bounds.
| null |
Topology-Aware Uncertainty for Image Segmentation
|
https://papers.nips.cc/paper_files/paper/2023/hash/19ded4cfc36a7feb7fce975393d378fd-Abstract-Conference.html
|
Saumya Gupta, Yikai Zhang, Xiaoling Hu, Prateek Prasanna, Chao Chen
|
https://papers.nips.cc/paper_files/paper/2023/hash/19ded4cfc36a7feb7fce975393d378fd-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/22490-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/19ded4cfc36a7feb7fce975393d378fd-Paper-Conference.pdf
| null |
Segmentation of curvilinear structures such as vasculature and road networks is challenging due to relatively weak signals and complex geometry/topology. To facilitate and accelerate large scale annotation, one has to adopt semi-automatic approaches such as proofreading by experts. In this work, we focus on uncertainty estimation for such tasks, so that highly uncertain, and thus error-prone structures can be identified for human annotators to verify. Unlike most existing works, which provide pixel-wise uncertainty maps, we stipulate it is crucial to estimate uncertainty in the units of topological structures, e.g., small pieces of connections and branches. To achieve this, we leverage tools from topological data analysis, specifically discrete Morse theory (DMT), to first capture the structures, and then reason about their uncertainties. To model the uncertainty, we (1) propose a joint prediction model that estimates the uncertainty of a structure while taking the neighboring structures into consideration (inter-structural uncertainty); (2) propose a novel Probabilistic DMT to model the inherent uncertainty within each structure (intra-structural uncertainty) by sampling its representations via a perturb-and-walk scheme. On various 2D and 3D datasets, our method produces better structure-wise uncertainty maps compared to existing works. Code available at: https://github.com/Saumya-Gupta-26/struct-uncertainty
| null |
Multiplication-Free Transformer Training via Piecewise Affine Operations
|
https://papers.nips.cc/paper_files/paper/2023/hash/19df21cd4931bd0caaa4d8480e9a59cd-Abstract-Conference.html
|
Atli Kosson, Martin Jaggi
|
https://papers.nips.cc/paper_files/paper/2023/hash/19df21cd4931bd0caaa4d8480e9a59cd-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/21056-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/19df21cd4931bd0caaa4d8480e9a59cd-Paper-Conference.pdf
| null |
Multiplications are responsible for most of the computational cost involved in neural network training and inference. Recent research has thus looked for ways to reduce the cost associated with them. Inspired by Mogami 2020, we replace multiplication with a cheap piecewise affine approximation that is achieved by adding the bit representation of the floating point numbers together as integers. We show that transformers can be trained with the resulting modified matrix multiplications on both vision and language tasks with little to no performance impact, and without changes to the training hyperparameters. We further replace all non-linearities in the networks making them fully and jointly piecewise affine in both inputs and weights. Finally, we show that we can eliminate all multiplications in the entire training process, including operations in the forward pass, backward pass and optimizer update, demonstrating the first successful training of modern neural network architectures in a fully multiplication-free fashion.
| null |
A Unified Framework for Uniform Signal Recovery in Nonlinear Generative Compressed Sensing
|
https://papers.nips.cc/paper_files/paper/2023/hash/1a04df6a405210aab4986994b873db9b-Abstract-Conference.html
|
Junren Chen, Jonathan Scarlett, Michael Ng, Zhaoqiang Liu
|
https://papers.nips.cc/paper_files/paper/2023/hash/1a04df6a405210aab4986994b873db9b-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/22626-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1a04df6a405210aab4986994b873db9b-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1a04df6a405210aab4986994b873db9b-Supplemental-Conference.zip
|
In generative compressed sensing (GCS), we want to recover a signal $\mathbf{x^*}\in\mathbb{R}^n$ from $m$ measurements ($m\ll n$) using a generative prior $\mathbf{x^*}\in G(\mathbb{B}_2^k(r))$, where $G$ is typically an $L$-Lipschitz continuous generative model and $\mathbb{B}_2^k(r)$ represents the radius-$r$ $\ell_2$-ball in $\mathbb{R}^k$. Under nonlinear measurements, most prior results are non-uniform, i.e., they hold with high probability for a fixed $\mathbf{x^*}$ rather than for all $\mathbf{x^*}$ simultaneously. In this paper, we build a unified framework to derive uniform recovery guarantees for nonlinear GCS where the observation model is nonlinear and possibly discontinuous or unknown. Our framework accommodates GCS with 1-bit/uniformly quantized observations and the single index model as canonical examples. Specifically, using a single realization of the sensing ensemble and generalized Lasso, all $\mathbf{x^*}\in G(\mathbb{B}_2^k(r))$ can be recovered up to an $\ell_2$-error at most $\epsilon$ using roughly $\tilde{O}({k}/{\epsilon^2})$ samples, with omitted logarithmic factors typically being dominated by $\log L$. Notably, this almost coincides with existing non-uniform guarantees up to logarithmic factors, hence the uniformity costs very little. As part of our technical contributions, we introduce Lipschitz approximation to handle discontinuous observation models. We also develop a concentration inequality that produces a tighter bound for product processes whose index sets have low metric entropy. Experimental results are presented to corroborate our theory.
| null |
Tempo Adaptation in Non-stationary Reinforcement Learning
|
https://papers.nips.cc/paper_files/paper/2023/hash/1a0672689a693e0764f93f900488b3d9-Abstract-Conference.html
|
Hyunin Lee, Yuhao Ding, Jongmin Lee, Ming Jin, Javad Lavaei, Somayeh Sojoudi
|
https://papers.nips.cc/paper_files/paper/2023/hash/1a0672689a693e0764f93f900488b3d9-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/20522-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1a0672689a693e0764f93f900488b3d9-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1a0672689a693e0764f93f900488b3d9-Supplemental-Conference.zip
|
We first raise and tackle a "time synchronization" issue between the agent and the environment in non-stationary reinforcement learning (RL), a crucial factor hindering its real-world applications. In reality, environmental changes occur over wall-clock time ($t$) rather than episode progress ($k$), where wall-clock time signifies the actual elapsed time within the fixed duration $t \in [0, T]$. In existing works, at episode $k$, the agent rolls a trajectory and trains a policy before transitioning to episode $k+1$. In the context of the time-desynchronized environment, however, the agent at time $t_{k}$ allocates $\Delta t$ for trajectory generation and training, and subsequently moves to the next episode at $t_{k+1}=t_{k}+\Delta t$. Despite a fixed total number of episodes ($K$), the agent accumulates different trajectories influenced by the choice of interaction times ($t_1,t_2,...,t_K$), significantly impacting the suboptimality gap of the policy. We propose a Proactively Synchronizing Tempo ($\texttt{ProST}$) framework that computes a suboptimal sequence {$t_1,t_2,...,t_K$} (= {$t_{1:K}$}) by minimizing an upper bound on its performance measure, i.e., the dynamic regret. Our main contribution is that we show that a suboptimal {$t_{1:K}$} trades off between the policy training time (agent tempo) and how fast the environment changes (environment tempo). Theoretically, this work develops a suboptimal {$t_{1:K}$} as a function of the degree of the environment's non-stationarity while also achieving a sublinear dynamic regret. Our experimental evaluation on various high-dimensional non-stationary environments shows that the $\texttt{ProST}$ framework achieves a higher online return at suboptimal {$t_{1:K}$} than the existing methods.
| null |
Unsupervised Semantic Correspondence Using Stable Diffusion
|
https://papers.nips.cc/paper_files/paper/2023/hash/1a074a28c3a6f2056562d00649ae6416-Abstract-Conference.html
|
Eric Hedlin, Gopal Sharma, Shweta Mahajan, Hossam Isack, Abhishek Kar, Andrea Tagliasacchi, Kwang Moo Yi
|
https://papers.nips.cc/paper_files/paper/2023/hash/1a074a28c3a6f2056562d00649ae6416-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/20916-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1a074a28c3a6f2056562d00649ae6416-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1a074a28c3a6f2056562d00649ae6416-Supplemental-Conference.pdf
|
Text-to-image diffusion models are now capable of generating images that are often indistinguishable from real images. To generate such images, these models must understand the semantics of the objects they are asked to generate. In this work we show that, without any training, one can leverage this semantic knowledge within diffusion models to find semantic correspondences – locations in multiple images that have the same semantic meaning. Specifically, given an image, we optimize the prompt embeddings of these models for maximum attention on the regions of interest. These optimized embeddings capture semantic information about the location, which can then be transferred to another image. By doing so we obtain results on par with the strongly supervised state of the art on the PF-Willow dataset and significantly outperform (20.9% relative for the SPair-71k dataset) any existing weakly- or unsupervised method on PF-Willow, CUB-200 and SPair-71k datasets.
| null |
Efficient Subgame Refinement for Extensive-form Games
|
https://papers.nips.cc/paper_files/paper/2023/hash/1a2b4aba905a16733ff199888ac8eec4-Abstract-Conference.html
|
Zhenxing Ge, Zheng Xu, Tianyu Ding, Wenbin Li, Yang Gao
|
https://papers.nips.cc/paper_files/paper/2023/hash/1a2b4aba905a16733ff199888ac8eec4-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/22755-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1a2b4aba905a16733ff199888ac8eec4-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1a2b4aba905a16733ff199888ac8eec4-Supplemental-Conference.pdf
|
Subgame solving is an essential technique in addressing large imperfect information games, with various approaches developed to enhance the performance of refined strategies in the abstraction of the target subgame. However, directly applying existing subgame solving techniques may be difficult, due to the intricate nature and substantial size of many real-world games. To overcome this issue, recent subgame solving methods allow for subgame solving on limited knowledge order subgames, increasing their applicability in large games; yet this may still face obstacles due to extensive information set sizes. To address this challenge, we propose a generative subgame solving (GS2) framework, which utilizes a generation function to identify a subset of the earliest-reached nodes, reducing the size of the subgame. Our method is supported by a theoretical analysis and employs a diversity-based generation function to enhance safety. Experiments conducted on medium-sized games as well as the challenging large game of GuanDan demonstrate a significant improvement over the blueprint.
| null |
NeRF-IBVS: Visual Servo Based on NeRF for Visual Localization and Navigation
|
https://papers.nips.cc/paper_files/paper/2023/hash/1a57081f257da7b440b8eda72a0b12d4-Abstract-Conference.html
|
Yuanze Wang, Yichao Yan, Dianxi Shi, Wenhan Zhu, Jianqiang Xia, Tan Jeff, Songchang Jin, KE GAO, XIAOBO LI, Xiaokang Yang
|
https://papers.nips.cc/paper_files/paper/2023/hash/1a57081f257da7b440b8eda72a0b12d4-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/21112-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1a57081f257da7b440b8eda72a0b12d4-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1a57081f257da7b440b8eda72a0b12d4-Supplemental-Conference.zip
|
Visual localization is a fundamental task in computer vision and robotics. Training existing visual localization methods requires a large number of posed images to generalize to novel views, while state-of-the-art methods generally require dense ground truth 3D labels for supervision. However, acquiring a large number of posed images and dense 3D labels in the real world is challenging and costly. In this paper, we present a novel visual localization method that achieves accurate localization while using only a few posed images compared to other localization methods. To achieve this, we first use a few posed images with coarse pseudo-3D labels provided by NeRF to train a coordinate regression network. Then a coarse pose is estimated from the regression network with PnP. Finally, we use the image-based visual servo (IBVS) with the scene prior provided by NeRF for pose optimization. Furthermore, our method can provide an effective navigation prior, which enables navigation based on IBVS without using custom markers or a depth sensor. Extensive experiments on the 7-Scenes and 12-Scenes datasets demonstrate that our method outperforms state-of-the-art methods under the same setting, with only 5\% to 25\% of the training data. Furthermore, our framework can be naturally extended to the visual navigation task based on IBVS, and its effectiveness is verified in simulation experiments.
| null |
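As context for the coarse-pose step mentioned above, here is a hedged sketch using OpenCV's standard PnP solver on synthetic 2D-3D correspondences; the intrinsics, point count, and the EPnP flag are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
import cv2

# Synthetic 2D-3D correspondences standing in for the coordinate-regression
# network's predictions (pseudo-3D labels from NeRF plus their 2D projections).
np.random.seed(0)
pts_3d = np.random.rand(20, 3).astype(np.float32)
K = np.array([[500, 0, 320], [0, 500, 240], [0, 0, 1]], dtype=np.float32)
rvec_gt = np.array([0.1, -0.2, 0.05], dtype=np.float32)
tvec_gt = np.array([0.0, 0.1, 2.0], dtype=np.float32)
pts_2d, _ = cv2.projectPoints(pts_3d, rvec_gt, tvec_gt, K, None)

# Coarse pose from PnP; the paper then refines it with IBVS using NeRF priors.
ok, rvec, tvec = cv2.solvePnP(pts_3d, pts_2d, K, None, flags=cv2.SOLVEPNP_EPNP)
print(ok, rvec.ravel(), tvec.ravel())
```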
How Does Adaptive Optimization Impact Local Neural Network Geometry?
|
https://papers.nips.cc/paper_files/paper/2023/hash/1a5e6d0441a8e1eda9a50717b0870f94-Abstract-Conference.html
|
Kaiqi Jiang, Dhruv Malik, Yuanzhi Li
|
https://papers.nips.cc/paper_files/paper/2023/hash/1a5e6d0441a8e1eda9a50717b0870f94-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/20201-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1a5e6d0441a8e1eda9a50717b0870f94-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1a5e6d0441a8e1eda9a50717b0870f94-Supplemental-Conference.zip
|
Adaptive optimization methods are well known to achieve superior convergence relative to vanilla gradient methods. The traditional viewpoint in optimization, particularly in convex optimization, explains this improved performance by arguing that, unlike vanilla gradient schemes, adaptive algorithms mimic the behavior of a second-order method by adapting to the *global* geometry of the loss function. We argue that in the context of neural network optimization, this traditional viewpoint is insufficient. Instead, we advocate for a *local* trajectory analysis. For iterate trajectories produced by running a generic optimization algorithm OPT, we introduce $R^{\text{OPT}}_{\text{med}}$, a statistic that is analogous to the condition number of the loss Hessian evaluated at the iterates. Through extensive experiments on language models where adaptive algorithms converge faster than vanilla gradient methods like SGD, we show that adaptive methods such as Adam bias the trajectories towards regions where $R^{\text{Adam}}_{\text{med}}$ is small, where one might expect faster optimization. By contrast, SGD (with momentum) biases the trajectories towards regions where $R^{\text{SGD}}_{\text{med}}$ is comparatively large. We complement these empirical observations with a theoretical result that provably demonstrates this phenomenon in the simplified setting of a two-layer linear network. We view our findings as evidence of the need for a new explanation of the success of adaptive methods, one that differs from the conventional wisdom.
| null |
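As a rough, schematic analogue of the local-geometry statistic discussed above, the snippet below computes a condition-number-like ratio (largest to median absolute Hessian eigenvalue) for a toy quadratic loss at a single point; the actual statistic is defined over iterate trajectories of real networks, so this is only meant to show the kind of quantity being tracked.

```python
import torch

def loss(w):
    # Toy ill-conditioned quadratic standing in for a network's training loss.
    return w[0] ** 2 + 10.0 * w[1] ** 2 + w[0] * w[1]

w = torch.tensor([1.0, -2.0])                       # one "iterate"
H = torch.autograd.functional.hessian(loss, w)      # 2x2 Hessian at that point
eig = torch.linalg.eigvalsh(H).abs()
r_med = (eig.max() / eig.median()).item()
print(f"max/median |eigenvalue| at this iterate: {r_med:.2f}")
```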
Are Diffusion Models Vision-And-Language Reasoners?
|
https://papers.nips.cc/paper_files/paper/2023/hash/1a675d804f50509b8e21d0d3ca709d03-Abstract-Conference.html
|
Benno Krojer, Elinor Poole-Dayan, Vikram Voleti, Chris Pal, Siva Reddy
|
https://papers.nips.cc/paper_files/paper/2023/hash/1a675d804f50509b8e21d0d3ca709d03-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/21114-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1a675d804f50509b8e21d0d3ca709d03-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1a675d804f50509b8e21d0d3ca709d03-Supplemental-Conference.pdf
|
Text-conditioned image generation models have recently shown immense qualitative success using denoising diffusion processes. However, unlike discriminative vision-and-language models, it is a non-trivial task to subject these diffusion-based generative models to automatic fine-grained quantitative evaluation of high-level phenomena such as compositionality. Towards this goal, we make two innovations. First, we transform diffusion-based models (in our case, Stable Diffusion) for any image-text matching (ITM) task using a novel method called DiffusionITM. Second, we introduce the Generative-Discriminative Evaluation Benchmark (GDBench) with 7 complex vision-and-language tasks, bias evaluation and detailed analysis. We find that Stable Diffusion + DiffusionITM is competitive on many tasks and outperforms CLIP on compositional tasks like CLEVR and Winoground. We further boost its compositional performance with a transfer setup by fine-tuning on MS-COCO while retaining generative capabilities. We also measure the stereotypical bias in diffusion models, and find that Stable Diffusion 2.1 is, for the most part, less biased than Stable Diffusion 1.5. Overall, our results point in an exciting direction bringing discriminative and generative model evaluation closer. We will release code and benchmark setup soon.
| null |
ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation
|
https://papers.nips.cc/paper_files/paper/2023/hash/1a87980b9853e84dfb295855b425c262-Abstract-Conference.html
|
Zhengyi Wang, Cheng Lu, Yikai Wang, Fan Bao, Chongxuan LI, Hang Su, Jun Zhu
|
https://papers.nips.cc/paper_files/paper/2023/hash/1a87980b9853e84dfb295855b425c262-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/21386-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1a87980b9853e84dfb295855b425c262-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1a87980b9853e84dfb295855b425c262-Supplemental-Conference.zip
|
Score distillation sampling (SDS) has shown great promise in text-to-3D generation by distilling pretrained large-scale text-to-image diffusion models, but suffers from over-saturation, over-smoothing, and low-diversity problems. In this work, we propose to model the 3D parameter as a random variable instead of a constant as in SDS and present *variational score distillation* (VSD), a principled particle-based variational framework to explain and address the aforementioned issues in text-to-3D generation. We show that SDS is a special case of VSD and leads to poor samples with both small and large CFG weights. In comparison, VSD works well with various CFG weights as ancestral sampling from diffusion models and simultaneously improves the diversity and sample quality with a common CFG weight (i.e., 7.5). We further present various improvements in the design space for text-to-3D such as distillation time schedule and density initialization, which are orthogonal to the distillation algorithm yet not well explored. Our overall approach, dubbed *ProlificDreamer*, can generate high rendering resolution (i.e., 512$\times$512) and high-fidelity NeRF with rich structure and complex effects (e.g., smoke and drops). Further, initialized from NeRF, meshes fine-tuned by VSD are meticulously detailed and photo-realistic.
| null |
SAMoSSA: Multivariate Singular Spectrum Analysis with Stochastic Autoregressive Noise
|
https://papers.nips.cc/paper_files/paper/2023/hash/1a8d295871250443f9747d239925b89d-Abstract-Conference.html
|
Abdullah Alomar, Munther Dahleh, Sean Mann, Devavrat Shah
|
https://papers.nips.cc/paper_files/paper/2023/hash/1a8d295871250443f9747d239925b89d-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/22512-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1a8d295871250443f9747d239925b89d-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1a8d295871250443f9747d239925b89d-Supplemental-Conference.zip
|
The well-established practice of time series analysis involves estimating deterministic, non-stationary trend and seasonality components followed by learning the residual stochastic, stationary components. Recently, it has been shown that one can learn the deterministic non-stationary components accurately using multivariate Singular Spectrum Analysis (mSSA) in the absence of a correlated stationary component; meanwhile, in the absence of deterministic non-stationary components, the Autoregressive (AR) stationary component can also be learnt readily, e.g. via Ordinary Least Squares (OLS). However, a theoretical underpinning of multi-stage learning algorithms involving both deterministic and stationary components has been absent in the literature despite its pervasiveness. We resolve this open question by establishing desirable theoretical guarantees for a natural two-stage algorithm, where mSSA is first applied to estimate the non-stationary components despite the presence of a correlated stationary AR component, which is subsequently learned from the residual time series. We provide a finite-sample forecasting consistency bound for the proposed algorithm, SAMoSSA, which is data-driven and thus requires minimal parameter tuning. To establish theoretical guarantees, we overcome three hurdles: (i) we characterize the spectra of Page matrices of stable AR processes, thus extending the analysis of mSSA; (ii) we extend the analysis of AR process identification in the presence of arbitrary bounded perturbations; (iii) we characterize the out-of-sample or forecasting error, as opposed to solely considering model identification. Through representative empirical studies, we validate the superior performance of SAMoSSA compared to existing baselines. Notably, SAMoSSA's ability to account for AR noise structure yields improvements ranging from 5% to 37% across various benchmark datasets.
| null |
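A compact numpy sketch of the two-stage recipe described above: an SSA-style low-rank reconstruction of a Hankel/Page-type matrix estimates the deterministic component, and an AR model is then fit to the residual by ordinary least squares. The window length, rank, AR order, and textbook diagonal-averaging step are illustrative choices rather than the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n, L, r, p = 400, 50, 4, 2
t = np.arange(n)
trend = 0.01 * t + np.sin(2 * np.pi * t / 50)        # deterministic component
ar = np.zeros(n)
for i in range(2, n):                                 # AR(2) stationary component
    ar[i] = 0.5 * ar[i - 1] - 0.2 * ar[i - 2] + 0.3 * rng.standard_normal()
x = trend + ar

# Stage 1: low-rank (SSA-style) estimate of the deterministic component.
hankel = np.stack([x[i:i + L] for i in range(n - L + 1)])
U, s, Vt = np.linalg.svd(hankel, full_matrices=False)
low_rank = (U[:, :r] * s[:r]) @ Vt[:r]
est_trend = np.zeros(n)
counts = np.zeros(n)
for i in range(n - L + 1):                            # diagonal averaging
    est_trend[i:i + L] += low_rank[i]
    counts[i:i + L] += 1
est_trend /= counts

# Stage 2: OLS fit of an AR(p) model on the residual.
resid = x - est_trend
X = np.stack([resid[p - k - 1:n - k - 1] for k in range(p)], axis=1)
coef, *_ = np.linalg.lstsq(X, resid[p:], rcond=None)
print("estimated AR coefficients:", coef)             # roughly [0.5, -0.2]
```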
Hierarchical Vector Quantized Transformer for Multi-class Unsupervised Anomaly Detection
|
https://papers.nips.cc/paper_files/paper/2023/hash/1abc87c67cc400a67b869358e627fe37-Abstract-Conference.html
|
Ruiying Lu, YuJie Wu, Long Tian, Dongsheng Wang, Bo Chen, Xiyang Liu, Ruimin Hu
|
https://papers.nips.cc/paper_files/paper/2023/hash/1abc87c67cc400a67b869358e627fe37-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/22023-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1abc87c67cc400a67b869358e627fe37-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1abc87c67cc400a67b869358e627fe37-Supplemental-Conference.pdf
|
Unsupervised image Anomaly Detection (UAD) aims to learn robust and discriminative representations of normal samples. While separate solutions per class incur expensive computation and limited generalizability, this paper focuses on building a unified framework for multiple classes. Under such a challenging setting, popular reconstruction-based networks with a continuous latent representation assumption always suffer from the "identical shortcut" issue, where both normal and abnormal samples can be well recovered and are difficult to distinguish. To address this pivotal issue, we propose a hierarchical vector quantized prototype-oriented Transformer under a probabilistic framework. First, instead of learning the continuous representations, we preserve the typical normal patterns as discrete iconic prototypes, and confirm the importance of Vector Quantization in preventing the model from falling into the shortcut. The vector quantized iconic prototypes are integrated into the Transformer for reconstruction, such that the abnormal data point is flipped to a normal data point. Second, we investigate an exquisite hierarchical framework to relieve the codebook collapse issue and replenish frail normal patterns. Third, a prototype-oriented optimal transport method is proposed to better regulate the prototypes and hierarchically evaluate the abnormal score. By evaluating on MVTec-AD and VisA datasets, our model surpasses the state-of-the-art alternatives and possesses good interpretability. The code is available at https://github.com/RuiyingLu/HVQ-Trans.
| null |
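To make the discrete-prototype idea above concrete, here is a generic vector-quantization step (nearest-prototype lookup with a straight-through estimator and the usual codebook/commitment losses). The sizes and the 0.25 commitment weight are conventional VQ defaults, not values from this paper, and the hierarchical and Transformer components are omitted.

```python
import torch

num_prototypes, d = 128, 64
codebook = torch.nn.Embedding(num_prototypes, d)          # iconic prototypes
z = torch.randn(32, d, requires_grad=True)                # encoder features

dists = torch.cdist(z, codebook.weight)                   # distance to every prototype
z_q = codebook(dists.argmin(dim=1))                       # nearest-prototype lookup

z_q_st = z + (z_q - z).detach()                           # straight-through estimator
codebook_loss = torch.mean((z_q - z.detach()) ** 2)       # pull prototypes toward features
commit_loss = 0.25 * torch.mean((z - z_q.detach()) ** 2)  # keep features near prototypes
```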
MCUFormer: Deploying Vision Transformers on Microcontrollers with Limited Memory
|
https://papers.nips.cc/paper_files/paper/2023/hash/1ae4999aefb509d75d8608e07280922c-Abstract-Conference.html
|
Yinan Liang, Ziwei Wang, Xiuwei Xu, Yansong Tang, Jie Zhou, Jiwen Lu
|
https://papers.nips.cc/paper_files/paper/2023/hash/1ae4999aefb509d75d8608e07280922c-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/20868-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1ae4999aefb509d75d8608e07280922c-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1ae4999aefb509d75d8608e07280922c-Supplemental-Conference.pdf
|
Due to the high price and heavy energy consumption of GPUs, deploying deep models on IoT devices such as microcontrollers makes significant contributions to ecological AI. Conventional methods successfully enable convolutional neural network inference of high resolution images on microcontrollers, while a framework for vision transformers, which achieve state-of-the-art performance in many vision applications, still remains unexplored. In this paper, we propose a hardware-algorithm co-optimization method called MCUFormer to deploy vision transformers on microcontrollers with extremely limited memory, where we jointly design the transformer architecture and construct the inference operator library to fit the memory resource constraint. More specifically, we generalize the one-shot network architecture search (NAS) to discover the optimal architecture with the highest task performance given the memory budget from the microcontrollers, where we enlarge the existing search space of vision transformers by considering the low-rank decomposition dimensions and patch resolution for memory reduction. For the construction of the inference operator library of vision transformers, we schedule the memory buffer during inference through operator integration, patch embedding decomposition, and token overwriting, allowing the memory buffer to be fully utilized to adapt to the forward pass of the vision transformer. Experimental results demonstrate that our MCUFormer achieves 73.62\% top-1 accuracy on ImageNet for image classification with 320KB memory on the STM32F746 microcontroller. Code is available at https://github.com/liangyn22/MCUFormer.
| null |
Towards Accelerated Model Training via Bayesian Data Selection
|
https://papers.nips.cc/paper_files/paper/2023/hash/1af3e0bf5905e33789979f666c31192d-Abstract-Conference.html
|
Zhijie Deng, Peng Cui, Jun Zhu
|
https://papers.nips.cc/paper_files/paper/2023/hash/1af3e0bf5905e33789979f666c31192d-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/20107-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1af3e0bf5905e33789979f666c31192d-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1af3e0bf5905e33789979f666c31192d-Supplemental-Conference.zip
|
Mislabeled, duplicated, or biased data in real-world scenarios can lead to prolonged training and even hinder model convergence. Traditional solutions prioritizing easy or hard samples lack the flexibility to handle such a variety simultaneously. Recent work has proposed a more reasonable data selection principle by examining the data's impact on the model's generalization loss. However, its practical adoption relies on less principled approximations and additional holdout data. This work solves these problems by leveraging a lightweight Bayesian treatment and incorporating off-the-shelf zero-shot predictors built on large-scale pre-trained models. The resulting algorithm is efficient and easy to implement. We perform extensive empirical studies on challenging benchmarks with considerable data noise and imbalance in the online batch selection scenario, and observe superior training efficiency over competitive baselines. Notably, on the challenging WebVision benchmark, our method can achieve similar predictive performance with significantly fewer training iterations than leading data selection methods.
| null |
CSOT: Curriculum and Structure-Aware Optimal Transport for Learning with Noisy Labels
|
https://papers.nips.cc/paper_files/paper/2023/hash/1b0da24d136f46bfaee78e8da907127e-Abstract-Conference.html
|
Wanxing Chang, Ye Shi, Jingya Wang
|
https://papers.nips.cc/paper_files/paper/2023/hash/1b0da24d136f46bfaee78e8da907127e-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/21938-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1b0da24d136f46bfaee78e8da907127e-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1b0da24d136f46bfaee78e8da907127e-Supplemental-Conference.pdf
|
Learning with noisy labels (LNL) poses a significant challenge in training a well-generalized model while avoiding overfitting to corrupted labels. Recent advances have achieved impressive performance by identifying clean labels and correcting corrupted labels for training. However, the current approaches rely heavily on the model’s predictions and evaluate each sample independently without considering either the global or local structure of the sample distribution. These limitations typically result in a suboptimal solution for the identification and correction processes, which eventually leads to models overfitting to incorrect labels. In this paper, we propose a novel optimal transport (OT) formulation, called Curriculum and Structure-aware Optimal Transport (CSOT). CSOT concurrently considers the inter- and intra-distribution structure of the samples to construct a robust denoising and relabeling allocator. During the training process, the allocator incrementally assigns reliable labels to a fraction of the samples with the highest confidence. These labels have both global discriminability and local coherence. Notably, CSOT is a new OT formulation with a nonconvex objective function and curriculum constraints, so it is not directly compatible with classical OT solvers. Here, we develop a lightspeed computational method that involves a scaling iteration within a generalized conditional gradient framework to solve CSOT efficiently. Extensive experiments demonstrate the superiority of our method over the current state of the art in LNL.
| null |
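For background on the scaling iterations mentioned above, the following is a plain entropic-OT Sinkhorn solver on a toy cost matrix. CSOT itself layers curriculum constraints and a nonconvex objective on top of this kind of scaling step inside a generalized conditional gradient scheme, none of which is shown here.

```python
import numpy as np

def sinkhorn(C, a, b, eps=0.05, iters=500):
    """Classical Sinkhorn scaling for entropic optimal transport."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]          # transport plan

rng = np.random.default_rng(0)
C = rng.random((5, 4))                          # toy cost matrix
a = np.ones(5) / 5                              # source marginal
b = np.ones(4) / 4                              # target marginal
P = sinkhorn(C, a, b)
print(P.sum(axis=1), P.sum(axis=0))             # marginals match a and b
```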
In-Context Learning Unlocked for Diffusion Models
|
https://papers.nips.cc/paper_files/paper/2023/hash/1b3750390ca8b931fb9ca988647940cb-Abstract-Conference.html
|
Zhendong Wang, Yifan Jiang, Yadong Lu, yelong shen, Pengcheng He, Weizhu Chen, Zhangyang "Atlas" Wang, Mingyuan Zhou
|
https://papers.nips.cc/paper_files/paper/2023/hash/1b3750390ca8b931fb9ca988647940cb-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/22255-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1b3750390ca8b931fb9ca988647940cb-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1b3750390ca8b931fb9ca988647940cb-Supplemental-Conference.zip
|
We present Prompt Diffusion, a framework for enabling in-context learning in diffusion-based generative models. Given a pair of task-specific example images, such as depth from/to image and scribble from/to image, and text guidance, our model automatically understands the underlying task and performs the same task on a new query image following the text guidance. To achieve this, we propose a vision-language prompt that can model a wide range of vision-language tasks and a diffusion model that takes it as input. The diffusion model is trained jointly on six different tasks using these prompts. The resulting Prompt Diffusion model becomes the first diffusion-based vision-language foundation model capable of in-context learning. It demonstrates high-quality in-context generation for the trained tasks and effectively generalizes to new, unseen vision tasks using their respective prompts. Our model also shows compelling text-guided image editing results. Our framework aims to facilitate research into in-context learning for computer vision. We share our code and pre-trained models at https://github.com/Zhendong-Wang/Prompt-Diffusion.
| null |
Object-Centric Slot Diffusion
|
https://papers.nips.cc/paper_files/paper/2023/hash/1b3ceb8a495a63ced4a48f8429ccdcd8-Abstract-Conference.html
|
Jindong Jiang, Fei Deng, Gautam Singh, Sungjin Ahn
|
https://papers.nips.cc/paper_files/paper/2023/hash/1b3ceb8a495a63ced4a48f8429ccdcd8-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/21917-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1b3ceb8a495a63ced4a48f8429ccdcd8-Paper-Conference.pdf
| null |
The recent success of transformer-based image generative models in object-centric learning highlights the importance of powerful image generators for handling complex scenes. However, despite the high expressiveness of diffusion models in image generation, their integration into object-centric learning remains largely unexplored in this domain. In this paper, we explore the feasibility and potential of integrating diffusion models into object-centric learning and investigate the pros and cons of this approach. We introduce Latent Slot Diffusion (LSD), a novel model that serves dual purposes: it is the first object-centric learning model to replace conventional slot decoders with a latent diffusion model conditioned on object slots, and it is also the first unsupervised compositional conditional diffusion model that operates without the need for supervised annotations like text. Through experiments on various object-centric tasks, including the first application of the FFHQ dataset in this field, we demonstrate that LSD significantly outperforms state-of-the-art transformer-based decoders, particularly in more complex scenes, and exhibits superior unsupervised compositional generation quality. In addition, we conduct a preliminary investigation into the integration of pre-trained diffusion models in LSD and demonstrate its effectiveness in real-world image segmentation and generation. Project page is available at https://latentslotdiffusion.github.io
| null |
NAS-X: Neural Adaptive Smoothing via Twisting
|
https://papers.nips.cc/paper_files/paper/2023/hash/1b3d005a2cb0e71e698e0b13ac657473-Abstract-Conference.html
|
Dieterich Lawson, Michael Li, Scott Linderman
|
https://papers.nips.cc/paper_files/paper/2023/hash/1b3d005a2cb0e71e698e0b13ac657473-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/20616-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1b3d005a2cb0e71e698e0b13ac657473-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1b3d005a2cb0e71e698e0b13ac657473-Supplemental-Conference.pdf
|
Sequential latent variable models (SLVMs) are essential tools in statistics and machine learning, with applications ranging from healthcare to neuroscience. As their flexibility increases, analytic inference and model learning can become challenging, necessitating approximate methods. Here we introduce neural adaptive smoothing via twisting (NAS-X), a method that extends reweighted wake-sleep (RWS) to the sequential setting by using smoothing sequential Monte Carlo (SMC) to estimate intractable posterior expectations. Combining RWS and smoothing SMC allows NAS-X to provide low-bias and low-variance gradient estimates, and fit both discrete and continuous latent variable models. We illustrate the theoretical advantages of NAS-X over previous methods and explore these advantages empirically in a variety of tasks, including a challenging application to mechanistic models of neuronal dynamics. These experiments show that NAS-X substantially outperforms previous VI- and RWS-based methods in inference and model learning, achieving lower parameter error and tighter likelihood bounds.
| null |
Reflexion: language agents with verbal reinforcement learning
|
https://papers.nips.cc/paper_files/paper/2023/hash/1b44b878bb782e6954cd888628510e90-Abstract-Conference.html
|
Noah Shinn, Federico Cassano, Ashwin Gopinath, Karthik Narasimhan, Shunyu Yao
|
https://papers.nips.cc/paper_files/paper/2023/hash/1b44b878bb782e6954cd888628510e90-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/20995-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1b44b878bb782e6954cd888628510e90-Paper-Conference.pdf
| null |
Large language models (LLMs) have been increasingly used to interact with external environments (e.g., games, compilers, APIs) as goal-driven agents. However, it remains challenging for these language agents to quickly and efficiently learn from trial-and-error as traditional reinforcement learning methods require extensive training samples and expensive model fine-tuning. We propose \emph{Reflexion}, a novel framework to reinforce language agents not by updating weights, but instead through linguistic feedback. Concretely, Reflexion agents verbally reflect on task feedback signals, then maintain their own reflective text in an episodic memory buffer to induce better decision-making in subsequent trials. Reflexion is flexible enough to incorporate various types (scalar values or free-form language) and sources (external or internally simulated) of feedback signals, and obtains significant improvements over a baseline agent across diverse tasks (sequential decision-making, coding, language reasoning). For example, Reflexion achieves a 91\% pass@1 accuracy on the HumanEval coding benchmark, surpassing the previous state-of-the-art GPT-4 that achieves 80\%. We also conduct ablation and analysis studies using different feedback signals, feedback incorporation methods, and agent types, and provide insights into how they affect performance. We release all code, demos, and datasets at \url{https://github.com/noahshinn024/reflexion}.
| null |
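A schematic of the verbal-reinforcement loop described above. `llm` and `run_task` are hypothetical placeholders (not the authors' API): the agent attempts the task, turns environment feedback into a textual self-reflection, stores it in an episodic memory buffer, and retries with that memory in the prompt, with no weight updates anywhere.

```python
def llm(prompt: str) -> str:
    raise NotImplementedError("plug in your language model here")

def run_task(attempt: str) -> tuple[bool, str]:
    raise NotImplementedError("environment / unit tests / evaluator go here")

def reflexion_loop(task: str, max_trials: int = 3) -> str:
    memory: list[str] = []                       # episodic buffer of self-reflections
    attempt = ""
    for _ in range(max_trials):
        prompt = f"Task: {task}\nPast reflections: {memory}\nAttempt:"
        attempt = llm(prompt)
        success, feedback = run_task(attempt)    # scalar or free-form feedback
        if success:
            break
        # Verbally reflect on the failure and remember it for the next trial.
        memory.append(llm(
            f"Task: {task}\nAttempt: {attempt}\nFeedback: {feedback}\n"
            "Write a short reflection on what to do differently:"
        ))
    return attempt
```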
Demographic Parity Constrained Minimax Optimal Regression under Linear Model
|
https://papers.nips.cc/paper_files/paper/2023/hash/1b4acad19cc425a7352a71d4e4468393-Abstract-Conference.html
|
Kazuto Fukuchi, Jun Sakuma
|
https://papers.nips.cc/paper_files/paper/2023/hash/1b4acad19cc425a7352a71d4e4468393-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/22216-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1b4acad19cc425a7352a71d4e4468393-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1b4acad19cc425a7352a71d4e4468393-Supplemental-Conference.pdf
|
We explore the minimax optimal error associated with a demographic parity-constrained regression problem within the context of a linear model. Our proposed model encompasses a broader range of discriminatory bias sources compared to the model presented by Chzhen and Schreuder. Our analysis reveals that the minimax optimal error for the demographic parity-constrained regression problem under our model is characterized by $\Theta(\frac{dM}{n})$, where $n$ denotes the sample size, $d$ represents the dimensionality, and $M$ signifies the number of demographic groups arising from sensitive attributes. Moreover, we demonstrate that the minimax error increases in conjunction with a larger bias present in the model.
| null |
GeoCLIP: Clip-Inspired Alignment between Locations and Images for Effective Worldwide Geo-localization
|
https://papers.nips.cc/paper_files/paper/2023/hash/1b57aaddf85ab01a2445a79c9edc1f4b-Abstract-Conference.html
|
Vicente Vivanco Cepeda, Gaurav Kumar Nayak, Mubarak Shah
|
https://papers.nips.cc/paper_files/paper/2023/hash/1b57aaddf85ab01a2445a79c9edc1f4b-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/20046-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1b57aaddf85ab01a2445a79c9edc1f4b-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1b57aaddf85ab01a2445a79c9edc1f4b-Supplemental-Conference.pdf
|
Worldwide Geo-localization aims to pinpoint the precise location of images taken anywhere on Earth. This task has considerable challenges due to the immense variation in geographic landscapes. Image-to-image retrieval-based approaches fail to solve this problem on a global scale, as it is not feasible to construct a large gallery of images covering the entire world. Instead, existing approaches divide the globe into discrete geographic cells, transforming the problem into a classification task. However, their performance is limited by the predefined classes and often results in inaccurate localizations when an image's location significantly deviates from its class center. To overcome these limitations, we propose GeoCLIP, a novel CLIP-inspired Image-to-GPS retrieval approach that enforces alignment between the image and its corresponding GPS locations. GeoCLIP's location encoder models the Earth as a continuous function by employing positional encoding through random Fourier features and constructing a hierarchical representation that captures information at varying resolutions to yield a semantically rich high-dimensional feature suitable for use even beyond geo-localization. To the best of our knowledge, this is the first work employing GPS encoding for geo-localization. We demonstrate the efficacy of our method via extensive experiments and ablations on benchmark datasets. We achieve competitive performance with just 20% of the training data, highlighting its effectiveness even in limited-data settings. Furthermore, we qualitatively demonstrate geo-localization using a text query by leveraging the CLIP backbone of our image encoder. The project webpage is available at: https://vicentevivan.github.io/GeoCLIP
| null |
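A small illustration of the random-Fourier-feature GPS encoding mentioned above; the normalization, frequency scale, and output width are assumptions made for the sketch, and the paper's hierarchical multi-resolution construction is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, sigma = 256, 1.0
W = rng.normal(0.0, sigma, size=(2, d_out // 2))      # random frequencies for (lat, lon)

def encode_gps(lat_deg: float, lon_deg: float) -> np.ndarray:
    """Map a GPS coordinate to a random-Fourier-feature vector."""
    x = np.array([lat_deg / 90.0, lon_deg / 180.0])    # crude normalization to [-1, 1]
    proj = x @ W
    return np.concatenate([np.sin(2 * np.pi * proj), np.cos(2 * np.pi * proj)])

print(encode_gps(48.8566, 2.3522).shape)               # (256,)
```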
RECESS Vaccine for Federated Learning: Proactive Defense Against Model Poisoning Attacks
|
https://papers.nips.cc/paper_files/paper/2023/hash/1b80fe066fdbceb3a2960117bac33917-Abstract-Conference.html
|
Haonan Yan, Wenjing Zhang, Qian Chen, Xiaoguang Li, Wenhai Sun, HUI LI, Xiaodong Lin
|
https://papers.nips.cc/paper_files/paper/2023/hash/1b80fe066fdbceb3a2960117bac33917-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/20450-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1b80fe066fdbceb3a2960117bac33917-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1b80fe066fdbceb3a2960117bac33917-Supplemental-Conference.pdf
|
Model poisoning attacks greatly jeopardize the application of federated learning (FL). The effectiveness of existing defenses is susceptible to the latest model poisoning attacks, leading to a decrease in prediction accuracy. Besides, these defenses find it intractable to distinguish benign outliers from malicious gradients, which further compromises the model generalization. In this work, we propose a novel defense including detection and aggregation, named RECESS, to serve as a “vaccine” for FL against model poisoning attacks. Different from the passive analysis in previous defenses, RECESS proactively queries each participating client with a delicately constructed aggregation gradient, accompanied by detection of malicious clients according to their responses, with higher accuracy. Further, RECESS adopts a newly proposed trust-scoring-based mechanism to robustly aggregate gradients. Rather than scoring each iteration in isolation as previous methods do, RECESS takes into account the correlation of clients’ performance over multiple iterations to estimate the trust score, bringing a significant increase in detection fault tolerance. Finally, we extensively evaluate RECESS on typical model architectures and four datasets under various settings including white/black-box, cross-silo/device FL, etc. Experimental results show the superiority of RECESS in terms of reducing accuracy loss caused by the latest model poisoning attacks over five classic and two state-of-the-art defenses.
| null |
Minimum norm interpolation by perceptra: Explicit regularization and implicit bias
|
https://papers.nips.cc/paper_files/paper/2023/hash/1b8612e11c75456c90963fd408d75c4d-Abstract-Conference.html
|
Jiyoung Park, Ian Pelakh, Stephan Wojtowytsch
|
https://papers.nips.cc/paper_files/paper/2023/hash/1b8612e11c75456c90963fd408d75c4d-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/20001-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1b8612e11c75456c90963fd408d75c4d-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1b8612e11c75456c90963fd408d75c4d-Supplemental-Conference.zip
|
We investigate how shallow ReLU networks interpolate between known regions. Our analysis shows that empirical risk minimizers converge to a minimum norm interpolant as the number of data points and parameters tends to infinity when a weight decay regularizer is penalized with a coefficient which vanishes at a precise rate as the network width and the number of data points grow. With and without explicit regularization, we numerically study the implicit bias of common optimization algorithms towards known minimum norm interpolants.
| null |
Spectral Co-Distillation for Personalized Federated Learning
|
https://papers.nips.cc/paper_files/paper/2023/hash/1b86cf4b15cd83b6520d851eb7298228-Abstract-Conference.html
|
Zihan Chen, Howard Yang, Tony Quek, Kai Fong Ernest Chong
|
https://papers.nips.cc/paper_files/paper/2023/hash/1b86cf4b15cd83b6520d851eb7298228-Abstract-Conference.html
|
NIPS 2023
| null | null | null | null | null |
Gradient Informed Proximal Policy Optimization
|
https://papers.nips.cc/paper_files/paper/2023/hash/1bd8cfc0e4c53869b7f1d0ed4b1e78e1-Abstract-Conference.html
|
Sanghyun Son, Laura Zheng, Ryan Sullivan, Yi-Ling Qiao, Ming Lin
|
https://papers.nips.cc/paper_files/paper/2023/hash/1bd8cfc0e4c53869b7f1d0ed4b1e78e1-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/20435-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1bd8cfc0e4c53869b7f1d0ed4b1e78e1-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1bd8cfc0e4c53869b7f1d0ed4b1e78e1-Supplemental-Conference.zip
|
We introduce a novel policy learning method that integrates analytical gradients from differentiable environments with the Proximal Policy Optimization (PPO) algorithm. To incorporate analytical gradients into the PPO framework, we introduce the concept of an α-policy that stands as a locally superior policy. By adaptively modifying the α value, we can effectively manage the influence of analytical policy gradients during learning. To this end, we suggest metrics for assessing the variance and bias of analytical gradients, reducing dependence on these gradients when high variance or bias is detected. Our proposed approach outperforms baseline algorithms in various scenarios, such as function optimization, physics simulations, and traffic control environments. Our code can be found online: https://github.com/SonSang/gippo.
| null |
Blockwise Parallel Transformers for Large Context Models
|
https://papers.nips.cc/paper_files/paper/2023/hash/1bfd87d2d92f0556819467dc08034f76-Abstract-Conference.html
|
Hao Liu, Pieter Abbeel
|
https://papers.nips.cc/paper_files/paper/2023/hash/1bfd87d2d92f0556819467dc08034f76-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/21352-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1bfd87d2d92f0556819467dc08034f76-Paper-Conference.pdf
| null |
Transformers have emerged as the cornerstone of state-of-the-art natural language processing models, showcasing exceptional performance across a wide range of AI applications. However, the memory demands posed by the self-attention mechanism and the large feedforward network in Transformers limit their ability to handle long sequences, thereby creating challenges for tasks involving multiple long sequences or long-term dependencies. We present a distinct approach, Blockwise Parallel Transformer (BPT), that leverages blockwise computation of self-attention and feedforward network fusion to minimize memory costs. By processing longer input sequences while maintaining memory efficiency, BPT enables training sequences 32 times longer than vanilla Transformers and up to 4 times longer than previous memory-efficient methods. Extensive experiments on language modeling and reinforcement learning tasks demonstrate the effectiveness of BPT in reducing memory requirements and improving performance.
| null |
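A minimal sketch of the blockwise idea above: queries are processed one block at a time so the full T-by-T attention matrix is never materialized at once. Shapes and block size are illustrative, and the fused feedforward computation that BPT also performs is omitted.

```python
import torch

def blockwise_attention(q, k, v, block=256):
    """Standard softmax attention computed over query blocks to save memory."""
    T, d = q.shape
    out = torch.empty_like(q)
    for s in range(0, T, block):
        scores = q[s:s + block] @ k.T / d ** 0.5          # only a (block, T) slice
        out[s:s + block] = torch.softmax(scores, dim=-1) @ v
    return out

q = k = v = torch.randn(1024, 64)
ref = torch.softmax(q @ k.T / 64 ** 0.5, dim=-1) @ v       # full attention reference
assert torch.allclose(blockwise_attention(q, k, v), ref, atol=1e-5)
```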
Neural Combinatorial Optimization with Heavy Decoder: Toward Large Scale Generalization
|
https://papers.nips.cc/paper_files/paper/2023/hash/1c10d0c087c14689628124bbc8fa69f6-Abstract-Conference.html
|
Fu Luo, Xi Lin, Fei Liu, Qingfu Zhang, Zhenkun Wang
|
https://papers.nips.cc/paper_files/paper/2023/hash/1c10d0c087c14689628124bbc8fa69f6-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/22021-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1c10d0c087c14689628124bbc8fa69f6-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1c10d0c087c14689628124bbc8fa69f6-Supplemental-Conference.pdf
|
Neural combinatorial optimization (NCO) is a promising learning-based approach for solving challenging combinatorial optimization problems without specialized algorithm design by experts. However, most constructive NCO methods cannot solve problems with large-scale instance sizes, which significantly diminishes their usefulness for real-world applications. In this work, we propose a novel Light Encoder and Heavy Decoder (LEHD) model with a strong generalization ability to address this critical issue. The LEHD model can learn to dynamically capture the relationships between all available nodes of varying sizes, which is beneficial for model generalization to problems of various scales. Moreover, we develop a data-efficient training scheme and a flexible solution construction mechanism for the proposed LEHD model. By training on small-scale problem instances, the LEHD model can generate nearly optimal solutions for the Travelling Salesman Problem (TSP) and the Capacitated Vehicle Routing Problem (CVRP) with up to 1000 nodes, and also generalizes well to solve real-world TSPLib and CVRPLib problems. These results confirm our proposed LEHD model can significantly improve the state-of-the-art performance for constructive NCO.
| null |
Topological Obstructions and How to Avoid Them
|
https://papers.nips.cc/paper_files/paper/2023/hash/1c12ccfc7720f6b680edea17300bfc2b-Abstract-Conference.html
|
Babak Esmaeili, Robin Walters, Heiko Zimmermann, Jan-Willem van de Meent
|
https://papers.nips.cc/paper_files/paper/2023/hash/1c12ccfc7720f6b680edea17300bfc2b-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/21553-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1c12ccfc7720f6b680edea17300bfc2b-Paper-Conference.pdf
| null |
Incorporating geometric inductive biases into models can aid interpretability and generalization, but encoding to a specific geometric structure can be challenging due to the imposed topological constraints. In this paper, we theoretically and empirically characterize obstructions to training encoders with geometric latent spaces. We show that local optima can arise due to singularities (e.g. self-intersection) or due to an incorrect degree or winding number. We then discuss how normalizing flows can potentially circumvent these obstructions by defining multimodal variational distributions. Inspired by this observation, we propose a new flow-based model that maps data points to multimodal distributions over geometric spaces and empirically evaluate our model on 2 domains. We observe improved stability during training and a higher chance of converging to a homeomorphic encoder.
| null |
The Double-Edged Sword of Implicit Bias: Generalization vs. Robustness in ReLU Networks
|
https://papers.nips.cc/paper_files/paper/2023/hash/1c26c389d60ec419fd24b5fee5b35796-Abstract-Conference.html
|
Spencer Frei, Gal Vardi, Peter Bartlett, Nati Srebro
|
https://papers.nips.cc/paper_files/paper/2023/hash/1c26c389d60ec419fd24b5fee5b35796-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/21839-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1c26c389d60ec419fd24b5fee5b35796-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1c26c389d60ec419fd24b5fee5b35796-Supplemental-Conference.pdf
|
In this work, we study the implications of the implicit bias of gradient flow on generalization and adversarial robustness in ReLU networks. We focus on a setting where the data consists of clusters and the correlations between cluster means are small, and show that in two-layer ReLU networks gradient flow is biased towards solutions that generalize well, but are vulnerable to adversarial examples. Our results hold even in cases where the network is highly overparameterized. Despite the potential for harmful overfitting in such settings, we prove that the implicit bias of gradient flow prevents it. However, the implicit bias also leads to non-robust solutions (susceptible to small adversarial $\ell_2$-perturbations), even though robust networks that fit the data exist.
| null |
PromptRestorer: A Prompting Image Restoration Method with Degradation Perception
|
https://papers.nips.cc/paper_files/paper/2023/hash/1c364d98a5cdc426fd8c76fbb2c10e34-Abstract-Conference.html
|
Cong Wang, Jinshan Pan, Wei Wang, Jiangxin Dong, Mengzhu Wang, Yakun Ju, Junyang Chen
|
https://papers.nips.cc/paper_files/paper/2023/hash/1c364d98a5cdc426fd8c76fbb2c10e34-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/20367-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1c364d98a5cdc426fd8c76fbb2c10e34-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1c364d98a5cdc426fd8c76fbb2c10e34-Supplemental-Conference.pdf
|
We show that raw degradation features can effectively guide deep restoration models, providing accurate degradation priors to facilitate better restoration. Networks that do not take degradation into account tend to gradually forget it during the learning process, which severely hinders model capacity. To address this, we propose a Prompting image Restorer, termed PromptRestorer. Specifically, PromptRestorer contains two branches: a restoration branch and a prompting branch. The former is used to restore images, while the latter perceives degradation priors to prompt the restoration branch with reliable perceived content to guide the restoration process for better recovery. To better perceive the degradation, which is extracted by a pre-trained model from given degradation observations, we propose a prompting degradation perception modulator, which adequately considers the characteristics of the self-attention mechanism and pixel-wise modulation, to better perceive the degradation priors from global and local perspectives. To control the propagation of the perceived content for the restoration branch, we propose gated degradation perception propagation, enabling the restoration branch to adaptively learn more useful features for better recovery. Extensive experimental results show that our PromptRestorer achieves state-of-the-art results on 4 image restoration tasks, including image deraining, deblurring, dehazing, and desnowing.
| null |
Beyond MLE: Convex Learning for Text Generation
|
https://papers.nips.cc/paper_files/paper/2023/hash/1c3d419b754cb4de0a67a453cb28d959-Abstract-Conference.html
|
Chenze Shao, Zhengrui Ma, Min Zhang, Yang Feng
|
https://papers.nips.cc/paper_files/paper/2023/hash/1c3d419b754cb4de0a67a453cb28d959-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/22100-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1c3d419b754cb4de0a67a453cb28d959-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1c3d419b754cb4de0a67a453cb28d959-Supplemental-Conference.zip
|
Maximum likelihood estimation (MLE) is a statistical method used to estimate the parameters of a probability distribution that best explain the observed data. In the context of text generation, MLE is often used to train generative language models, which can then be used to generate new text. However, we argue that MLE is not always necessary or optimal, especially for closed-ended text generation tasks like machine translation. In these tasks, the goal of the model is to generate the most appropriate response, which does not necessarily require it to estimate the entire data distribution with MLE. To this end, we propose a novel class of training objectives based on convex functions, which enables text generation models to focus on highly probable outputs without having to estimate the entire data distribution. We investigate the theoretical properties of the optimal predicted distribution when applying convex functions to the loss, demonstrating that convex functions can sharpen the optimal distribution, thereby enabling the model to better capture outputs with high probabilities. Experiments on various text generation tasks and models show the effectiveness of our approach. It enables autoregressive models to bridge the gap between greedy and beam search, and facilitates the learning of non-autoregressive models with a maximum improvement of 9+ BLEU points. Moreover, our approach also exhibits significant impact on large language models (LLMs), substantially enhancing their generative capability on various tasks. Source code is available at \url{https://github.com/ictnlp/Convex-Learning}.
| null |
Bandit Task Assignment with Unknown Processing Time
|
https://papers.nips.cc/paper_files/paper/2023/hash/1c5ee7343f396954377c2c16dda33a96-Abstract-Conference.html
|
Shinji Ito, Daisuke Hatano, Hanna Sumita, Kei Takemura, Takuro Fukunaga, Naonori Kakimura, Ken-Ichi Kawarabayashi
|
https://papers.nips.cc/paper_files/paper/2023/hash/1c5ee7343f396954377c2c16dda33a96-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/19720-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1c5ee7343f396954377c2c16dda33a96-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1c5ee7343f396954377c2c16dda33a96-Supplemental-Conference.zip
|
This study considers a novel problem setting, referred to as \textit{bandit task assignment}, that incorporates the processing time of each task in the bandit setting. In this problem setting, a player sequentially chooses a set of tasks to start so that the set of processing tasks satisfies a given combinatorial constraint. The reward and processing time for each task follow unknown distributions, values of which are revealed only after the task has been completed. The problem generalizes the stochastic combinatorial semi-bandit problem and the budget-constrained bandit problem. For this problem setting, we propose an algorithm based on upper confidence bounds~(UCB) combined with a phased-update approach. The proposed algorithm admits a gap-dependent regret upper bound of $O(MN(1/\Delta)\log T)$ and a gap-free regret upper bound of $\tilde{O}( \sqrt{MNT} )$, where $N$ is the number of tasks, $M$ is the maximum number of tasks run at the same time, $T$ is the time horizon, and $\Delta$ is the gap between expected per-round rewards of the optimal and best suboptimal sets of tasks. These regret bounds nearly match lower bounds.
| null |
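For reference, a generic UCB index computation of the sort the proposed algorithm builds on; the actual method additionally handles unknown processing times, combinatorial constraints on the running set, and phased updates, none of which appear in this sketch.

```python
import numpy as np

def ucb_indices(reward_sums, pull_counts, t):
    """Optimistic indices: empirical mean plus an exploration bonus."""
    means = reward_sums / np.maximum(pull_counts, 1)
    bonus = np.sqrt(2.0 * np.log(max(t, 2)) / np.maximum(pull_counts, 1))
    idx = means + bonus
    idx[pull_counts == 0] = np.inf        # force initial exploration of untried tasks
    return idx

print(ucb_indices(np.array([3.0, 1.0, 0.0]), np.array([10, 4, 0]), t=20))
```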
Towards Self-Interpretable Graph-Level Anomaly Detection
|
https://papers.nips.cc/paper_files/paper/2023/hash/1c6f06863df46de009a7a41b41c95cad-Abstract-Conference.html
|
Yixin Liu, Kaize Ding, Qinghua Lu, Fuyi Li, Leo Yu Zhang, Shirui Pan
|
https://papers.nips.cc/paper_files/paper/2023/hash/1c6f06863df46de009a7a41b41c95cad-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/21673-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1c6f06863df46de009a7a41b41c95cad-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1c6f06863df46de009a7a41b41c95cad-Supplemental-Conference.pdf
|
Graph-level anomaly detection (GLAD) aims to identify graphs that exhibit notable dissimilarity compared to the majority in a collection. However, current works primarily focus on evaluating graph-level abnormality while failing to provide meaningful explanations for the predictions, which largely limits their reliability and application scope. In this paper, we investigate a new challenging problem, explainable GLAD, where the learning objective is to predict the abnormality of each graph sample with corresponding explanations, i.e., the vital subgraph that leads to the predictions. To address this challenging problem, we propose a Self-Interpretable Graph aNomaly dETection model (SIGNET for short) that detects anomalous graphs as well as generates informative explanations simultaneously. Specifically, we first introduce the multi-view subgraph information bottleneck (MSIB) framework, serving as the design basis of our self-interpretable GLAD approach. This way SIGNET is able to not only measure the abnormality of each graph based on cross-view mutual information but also provide informative graph rationales by extracting bottleneck subgraphs from the input graph and its dual hypergraph in a self-supervised way. Extensive experiments on 16 datasets demonstrate the anomaly detection capability and self-interpretability of SIGNET.
| null |
AMAG: Additive, Multiplicative and Adaptive Graph Neural Network For Forecasting Neuron Activity
|
https://papers.nips.cc/paper_files/paper/2023/hash/1c70ba3591d0694a535089e1c25888d7-Abstract-Conference.html
|
Jingyuan Li, Leo Scholl, Trung Le, Pavithra Rajeswaran, Amy Orsborn, Eli Shlizerman
|
https://papers.nips.cc/paper_files/paper/2023/hash/1c70ba3591d0694a535089e1c25888d7-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/20451-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1c70ba3591d0694a535089e1c25888d7-Paper-Conference.pdf
| null |
Latent Variable Models (LVMs) propose to model the dynamics of neural populations by capturing low-dimensional structures that represent features involved in neural activity. Recent LVMs are based on deep learning methodology where a deep neural network is trained to reconstruct the same neural activity given as input and as a result to build the latent representation. Without taking past or future activity into account, such a task is non-causal. In contrast, the task of forecasting neural activity based on given input extends the reconstruction task. LVMs that are trained on such a task could potentially capture temporal causality constraints within their latent representations. Forecasting has received less attention than reconstruction due to recording challenges such as limited neural measurements and trials. In this work, we address modeling neural population dynamics via the forecasting task and improve forecasting performance by including a prior, which consists of pairwise neural unit interaction as a multivariate dynamic system. Our proposed model---Additive, Multiplicative, and Adaptive Graph Neural Network (AMAG)---leverages additive and multiplicative message-passing operations analogous to the interactions in neuronal systems and adaptively learns the interaction among neural units to forecast their future activity. We demonstrate the advantage of AMAG compared to non-GNN based methods on synthetic data and multiple modalities of neural recordings (field potentials from penetrating electrodes or surface-level micro-electrocorticography) from four rhesus macaques. Our results show the ability of AMAG to recover ground truth spatial interactions and yield estimates of the future dynamics of the neural population.
| null |
PackQViT: Faster Sub-8-bit Vision Transformers via Full and Packed Quantization on the Mobile
|
https://papers.nips.cc/paper_files/paper/2023/hash/1c92edb990a05f2269f0cc3afbb4c952-Abstract-Conference.html
|
Peiyan Dong, LEI LU, Chao Wu, Cheng Lyu, Geng Yuan, Hao Tang, Yanzhi Wang
|
https://papers.nips.cc/paper_files/paper/2023/hash/1c92edb990a05f2269f0cc3afbb4c952-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/20887-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1c92edb990a05f2269f0cc3afbb4c952-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1c92edb990a05f2269f0cc3afbb4c952-Supplemental-Conference.pdf
|
While Vision Transformers (ViTs) have undoubtedly made impressive strides in computer vision (CV), their intricate network structures necessitate substantial computation and memory resources. A decision-making process for CV tasks typically entails performing computations with low latency, which is a tricky problem for ViT models. Model quantization is a widely-used technique to optimize the hardware efficiency of deep neural networks. Full quantization under sub-8-bit precision, in particular, is a promising solution to reduce inference latency significantly. Unfortunately, current commodity hardware, such as CPUs and GPUs, still struggles to efficiently execute these sub-8-bit quantized networks, as their SIMD instructions only support a granularity of 8 bits or wider. Also, there is a scarcity of literature that presents a full quantization paradigm for ViTs. In this paper, we propose an activation-aware fully sub-8-bit quantization-aware training (QAT) framework called PackQViT for efficient yet accurate ViT acceleration on mobile devices to facilitate real-time AI-powered decision-making. Specifically, in revisiting data activation within the ViT dataflow, two characteristics are relevant to quantization strategy and precision: the long-tailed distribution and systematic channel-wise outliers. In response, we employ either log2 quantization or clipping to address the long-tailed distribution and incorporate outlier-aware training for residual link quantization to regulate the various channel-wise outliers more consistently. Notably, due to the systematic fixed pattern, the outlier-aware training approach can predict the channel indices and regularized scales of outliers in advance, thus avoiding the runtime data-adaptive selection during inference. Furthermore, we employ Int-$2^{n}$-Softmax, Int-LayerNorm, and Integer GELU to enable an integer-only computation flow. Finally, we develop a SIMD-based 4-bit packed multiplier to achieve end-to-end ViT acceleration on mobile phones. Compared to prior studies on ViT quantization using 8-bit precision, PackQViT surpasses other works with accuracy improvements ranging from 0.4\% to 17.9\% for various widely used ViTs on the ImageNet dataset; under 4-bit precision, PackQViT demonstrates 0.4\%$\sim$2.8\% higher accuracy. Compared to the baseline multiplier, our implementations on the Realme GT Android smartphone with Snapdragon 870 SoC CPU achieve 2.6x$\sim$3.7x speedup under the 8-bit scenario and 3.8x$\sim$5.9x speedup under 4-bit, which ensures practical real-time performance.
| null |
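One concrete ingredient named in the PackQViT abstract above is log2 quantization for long-tailed activations. The NumPy snippet below is only a hedged illustration of power-of-two quantization within a sub-8-bit budget; the exponent range, scale handling, and dead zone are assumptions, not the paper's exact scheme.

```python
import numpy as np

def log2_quantize(x: np.ndarray, num_bits: int = 4, scale: float = 1.0) -> np.ndarray:
    """Quantize values to signed powers of two, sign(x) * 2^e with a bounded
    integer exponent, a common way to handle long-tailed activations."""
    levels = 2 ** (num_bits - 1) - 1                      # exponent range under the bit budget
    mag = np.abs(x) / scale
    exp = np.round(np.log2(np.clip(mag, 1e-8, None)))     # nearest power-of-two exponent
    exp = np.clip(exp, -levels, 0)                        # keep magnitudes in (0, 1] * scale
    q = np.sign(x) * (2.0 ** exp) * scale
    q[np.abs(x) < (2.0 ** (-levels)) * scale / 2] = 0.0   # dead zone around zero
    return q

x = np.random.laplace(scale=0.3, size=8)                  # long-tailed toy activations
print(np.round(x, 3), "->", np.round(log2_quantize(x), 3))
```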
Extending the Design Space of Graph Neural Networks by Rethinking Folklore Weisfeiler-Lehman
|
https://papers.nips.cc/paper_files/paper/2023/hash/1cac8326ce3fbe79171db9754211530c-Abstract-Conference.html
|
Jiarui Feng, Lecheng Kong, Hao Liu, Dacheng Tao, Fuhai Li, Muhan Zhang, Yixin Chen
|
https://papers.nips.cc/paper_files/paper/2023/hash/1cac8326ce3fbe79171db9754211530c-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/19623-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1cac8326ce3fbe79171db9754211530c-Paper-Conference.pdf
| null |
Message passing neural networks (MPNNs) have emerged as the most popular framework of graph neural networks (GNNs) in recent years. However, their expressive power is limited by the 1-dimensional Weisfeiler-Lehman (1-WL) test. Some works are inspired by $k$-WL/FWL (Folklore WL) and design the corresponding neural versions. Despite the high expressive power, there are serious limitations in this line of research. In particular, (1) $k$-WL/FWL requires at least $O(n^k)$ space complexity, which is impractical for large graphs even when $k=3$; (2) The design space of $k$-WL/FWL is rigid, with the only adjustable hyper-parameter being $k$. To tackle the first limitation, we propose an extension, $(k, t)$-FWL. We theoretically prove that even if we fix the space complexity to $O(n^k)$ (for any $k \geq 2$) in $(k, t)$-FWL, we can construct an expressiveness hierarchy up to solving the graph isomorphism problem. To tackle the second problem, we propose $k$-FWL+, which considers any equivariant set as neighbors instead of all nodes, thereby greatly expanding the design space of $k$-FWL. Combining these two modifications results in a flexible and powerful framework $(k, t)$-FWL+. We demonstrate $(k, t)$-FWL+ can implement most existing models with matching expressiveness. We then introduce an instance of $(k,t)$-FWL+ called Neighborhood$^2$-FWL (N$^2$-FWL), which is practically and theoretically sound. We prove that N$^2$-FWL is no less powerful than 3-WL, and can encode many substructures while only requiring $O(n^2)$ space. Finally, we design its neural version named **N$^2$-GNN** and evaluate its performance on various tasks. N$^2$-GNN achieves record-breaking results on ZINC-Subset (**0.059**), outperforming previous SOTA results by 10.6\%. Moreover, N$^2$-GNN achieves new SOTA results on the BREC dataset (**71.8\%**) among all existing high-expressive GNN methods.
| null |
Off-Policy Evaluation for Human Feedback
|
https://papers.nips.cc/paper_files/paper/2023/hash/1cb57fcf7ff3f6d37eebae5becc9ea6d-Abstract-Conference.html
|
Qitong Gao, Ge Gao, Juncheng Dong, Vahid Tarokh, Min Chi, Miroslav Pajic
|
https://papers.nips.cc/paper_files/paper/2023/hash/1cb57fcf7ff3f6d37eebae5becc9ea6d-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/19667-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1cb57fcf7ff3f6d37eebae5becc9ea6d-Paper-Conference.pdf
| null |
Off-policy evaluation (OPE) is important for closing the gap between offline training and evaluation of reinforcement learning (RL), by estimating performance and/or rank of target (evaluation) policies using offline trajectories only. It can improve the safety and efficiency of data collection and policy testing procedures in situations where online deployments are expensive, such as healthcare. However, existing OPE methods fall short in estimating human feedback (HF) signals, as HF signals may be conditioned on multiple underlying factors and are only sparsely available, as opposed to the agent-defined environmental rewards (used in policy optimization), which are usually determined by parametric functions or distributions. Consequently, the nature of HF signals makes it challenging to extrapolate accurate OPE estimations. To resolve this, we introduce an OPE for HF (OPEHF) framework that revives existing OPE methods in order to accurately evaluate the HF signals. Specifically, we develop an immediate human reward (IHR) reconstruction approach, regularized by environmental knowledge distilled in a latent space that captures the underlying dynamics of state transitions as well as issuing HF signals. Our approach has been tested in two real-world experiments, adaptive in-vivo neurostimulation and intelligent tutoring, and a simulation environment (visual Q&A). Results show that our approach significantly improves the performance toward estimating HF signals accurately, compared to directly applying (variants of) existing OPE methods.
| null |
Contrastive Lift: 3D Object Instance Segmentation by Slow-Fast Contrastive Fusion
|
https://papers.nips.cc/paper_files/paper/2023/hash/1cb5b3d64bdf3c6642c8d9a8fbecd019-Abstract-Conference.html
|
Yash Bhalgat, Iro Laina, João F. Henriques, Andrea Vedaldi, Andrew Zisserman
|
https://papers.nips.cc/paper_files/paper/2023/hash/1cb5b3d64bdf3c6642c8d9a8fbecd019-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/20180-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1cb5b3d64bdf3c6642c8d9a8fbecd019-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1cb5b3d64bdf3c6642c8d9a8fbecd019-Supplemental-Conference.zip
|
Instance segmentation in 3D is a challenging task due to the lack of large-scale annotated datasets. In this paper, we show that this task can be addressed effectively by leveraging instead 2D pre-trained models for instance segmentation. We propose a novel approach to lift 2D segments to 3D and fuse them by means of a neural field representation, which encourages multi-view consistency across frames. The core of our approach is a slow-fast clustering objective function, which is scalable and well-suited for scenes with a large number of objects. Unlike previous approaches, our method does not require an upper bound on the number of objects or object tracking across frames. To demonstrate the scalability of the slow-fast clustering, we create a new semi-realistic dataset called the Messy Rooms dataset, which features scenes with up to 500 objects per scene. Our approach outperforms the state-of-the-art on challenging scenes from the ScanNet, Hypersim, and Replica datasets, as well as on our newly created Messy Rooms dataset, demonstrating the effectiveness and scalability of our slow-fast clustering method.
| null |
GALOPA: Graph Transport Learning with Optimal Plan Alignment
|
https://papers.nips.cc/paper_files/paper/2023/hash/1d35af80e775e342f4cd3792e4405837-Abstract-Conference.html
|
Yejiang Wang, Yuhai Zhao, Daniel Zhengkui Wang, Ling Li
|
https://papers.nips.cc/paper_files/paper/2023/hash/1d35af80e775e342f4cd3792e4405837-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/20355-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1d35af80e775e342f4cd3792e4405837-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1d35af80e775e342f4cd3792e4405837-Supplemental-Conference.zip
|
Self-supervised learning on graphs aims to learn graph representations in an unsupervised manner. While graph contrastive learning (GCL - relying on graph augmentation for creating perturbation views of anchor graphs and maximizing/minimizing similarity for positive/negative pairs) is a popular self-supervised method, it faces challenges in finding label-invariant augmented graphs and determining the exact extent of similarity between sample pairs to be achieved. In this work, we propose an alternative self-supervised solution that (i) goes beyond the label invariance assumption without distinguishing between positive/negative samples, (ii) can calibrate the encoder for preserving not only the structural information inside the graph, but also the matching information between different graphs, (iii) learns isometric embeddings that preserve the distance between graphs, a by-product of our objective. Motivated by optimal transport theory, this scheme relies on the observation that the optimal transport plans between node representations at the output space, which measure the matching probability between two distributions, should be consistent with the plans between the corresponding graphs at the input space. The experimental findings include: (i) The plan alignment strategy significantly outperforms the counterpart using the transport distance; (ii) The proposed model shows superior performance using only node attributes as calibration signals, without relying on edge information; (iii) Our model maintains robust results even under high perturbation rates; (iv) Extensive experiments on various benchmarks validate the effectiveness of the proposed method.
| null |
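The GALOPA objective above compares optimal transport plans computed in the input space and in the embedding space. A minimal NumPy illustration follows, using entropic (Sinkhorn) plans between two small point sets; the cost functions, the entropic regularization, and the Frobenius alignment loss are assumptions made for this sketch, not the paper's exact formulation.

```python
import numpy as np

def sinkhorn_plan(cost: np.ndarray, reg: float = 0.1, n_iter: int = 200) -> np.ndarray:
    """Entropy-regularized optimal transport plan between two uniform distributions."""
    n, m = cost.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    K = np.exp(-cost / reg)
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iter):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return np.diag(u) @ K @ np.diag(v)

def pairwise_sq_dists(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    return ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)

# Node attributes of two toy graphs (input space) and their embeddings (output space).
X1, X2 = np.random.randn(5, 3), np.random.randn(6, 3)
Z1, Z2 = np.random.randn(5, 8), np.random.randn(6, 8)   # produced by an encoder in practice

plan_in = sinkhorn_plan(pairwise_sq_dists(X1, X2))      # matching implied by raw attributes
plan_out = sinkhorn_plan(pairwise_sq_dists(Z1, Z2))     # matching implied by embeddings
alignment_loss = np.sum((plan_in - plan_out) ** 2)      # encourage consistent plans
print("plan alignment loss:", alignment_loss)
```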
Adaptive Topological Feature via Persistent Homology: Filtration Learning for Point Clouds
|
https://papers.nips.cc/paper_files/paper/2023/hash/1d49235669869ab737c1da9d64b7c769-Abstract-Conference.html
|
Naoki Nishikawa, Yuichi Ike, Kenji Yamanishi
|
https://papers.nips.cc/paper_files/paper/2023/hash/1d49235669869ab737c1da9d64b7c769-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/20064-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1d49235669869ab737c1da9d64b7c769-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1d49235669869ab737c1da9d64b7c769-Supplemental-Conference.zip
|
Machine learning for point clouds has been attracting much attention, with many applications in various fields, such as shape recognition and material science. For enhancing the accuracy of such machine learning methods, it is often effective to incorporate global topological features, which are typically extracted by persistent homology. In the calculation of persistent homology for a point cloud, we choose a filtration for the point cloud, an increasing sequence of spaces. Since the performance of machine learning methods combined with persistent homology is highly affected by the choice of a filtration, we need to tune it depending on data and tasks. In this paper, we propose a framework that learns a filtration adaptively with the use of neural networks. In order to make the resulting persistent homology isometry-invariant, we develop a neural network architecture with such invariance. Additionally, we show a theoretical result on a finite-dimensional approximation of filtration functions, which justifies the proposed network architecture. Experimental results demonstrated the efficacy of our framework in several classification tasks.
| null |
Accurate Interpolation for Scattered Data through Hierarchical Residual Refinement
|
https://papers.nips.cc/paper_files/paper/2023/hash/1d5a92867cf463fad136cfa23395840b-Abstract-Conference.html
|
Shizhe Ding, Boyang Xia, Dongbo Bu
|
https://papers.nips.cc/paper_files/paper/2023/hash/1d5a92867cf463fad136cfa23395840b-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/19700-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1d5a92867cf463fad136cfa23395840b-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1d5a92867cf463fad136cfa23395840b-Supplemental-Conference.pdf
|
Accurate interpolation algorithms are highly desired in various theoretical and engineering scenarios. Unlike the traditional numerical algorithms that have exact zero-residual constraints on observed points, the neural network-based interpolation methods exhibit non-zero residuals at these points. These residuals, which provide observations of an underlying residual function, can guide the prediction of interpolation functions, but have not been exploited by the existing approaches. To fill this gap, we propose the Hierarchical INTerpolation Network (HINT), which utilizes the residuals on observed points to guide target function estimation in a hierarchical fashion. HINT consists of several sequentially arranged lightweight interpolation blocks. The first interpolation block estimates the main component of the target function, while subsequent blocks predict the residual components using the residuals of the preceding blocks at the observed points. The main component and residual components are accumulated to form the final interpolation results. Furthermore, under the assumption that finer residual prediction requires a more focused attention range on observed points, we utilize hierarchical local constraints in correlation modeling between observed and target points. Extensive experiments demonstrate that HINT outperforms existing interpolation algorithms significantly in terms of interpolation accuracy across a wide variety of datasets, which underscores its potential for practical scenarios.
| null |
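The hierarchical residual idea in the HINT abstract above can be illustrated with a toy, non-neural stand-in: each "block" is a kernel smoother that is not exact at the observed points, and each subsequent block fits the residuals that the previous blocks leave there, with a progressively narrower bandwidth echoing the "more focused attention range" assumption. The interpolator choice and bandwidth schedule are assumptions for illustration only.

```python
import numpy as np

def kernel_block(x_obs, y_obs, x_query, bandwidth=0.6):
    """Nadaraya-Watson kernel smoother, a stand-in for one lightweight block.
    Unlike exact interpolators, it leaves non-zero residuals at observed points."""
    w = np.exp(-((x_query[:, None] - x_obs[None, :]) ** 2) / (2 * bandwidth ** 2))
    return (w * y_obs[None, :]).sum(1) / w.sum(1)

def hierarchical_residual_interpolate(x_obs, y_obs, x_query, bandwidths=(1.0, 0.4, 0.15)):
    """Main component plus progressively finer residual components, accumulated."""
    pred_query = np.zeros_like(x_query, dtype=float)
    pred_obs = np.zeros_like(x_obs, dtype=float)
    for bw in bandwidths:
        residual = y_obs - pred_obs                          # residuals at observed points
        pred_query += kernel_block(x_obs, residual, x_query, bw)
        pred_obs += kernel_block(x_obs, residual, x_obs, bw)
    return pred_query

x_obs = np.linspace(0, 2 * np.pi, 12)
y_obs = np.sin(x_obs) + 0.3 * np.sin(5 * x_obs)
x_query = np.linspace(0, 2 * np.pi, 100)
y_hat = hierarchical_residual_interpolate(x_obs, y_obs, x_query)
print("prediction range:", y_hat.min(), y_hat.max())
```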
Learning Universal Policies via Text-Guided Video Generation
|
https://papers.nips.cc/paper_files/paper/2023/hash/1d5b9233ad716a43be5c0d3023cb82d0-Abstract-Conference.html
|
Yilun Du, Sherry Yang, Bo Dai, Hanjun Dai, Ofir Nachum, Josh Tenenbaum, Dale Schuurmans, Pieter Abbeel
|
https://papers.nips.cc/paper_files/paper/2023/hash/1d5b9233ad716a43be5c0d3023cb82d0-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/20898-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1d5b9233ad716a43be5c0d3023cb82d0-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1d5b9233ad716a43be5c0d3023cb82d0-Supplemental-Conference.zip
|
A goal of artificial intelligence is to construct an agent that can solve a wide variety of tasks. Recent progress in text-guided image synthesis has yielded models with an impressive ability to generate complex novel images, exhibiting combinatorial generalization across domains. Motivated by this success, we investigate whether such tools can be used to construct more general-purpose agents. Specifically, we cast the sequential decision making problem as a text-conditioned video generation problem, where, given a text-encoded specification of a desired goal, a planner synthesizes a set of future frames depicting its planned actions in the future, after which control actions are extracted from the generated video. By leveraging text as the underlying goal specification, we are able to naturally and combinatorially generalize to novel goals. The proposed policy-as-video formulation can further represent environments with different state and action spaces in a unified space of images, which, for example, enables learning and generalization across a variety of robot manipulation tasks. Finally, by leveraging pretrained language embeddings and widely available videos from the internet, the approach enables knowledge transfer through predicting highly realistic video plans for real robots.
| null |
Necessary and Sufficient Conditions for Optimal Decision Trees using Dynamic Programming
|
https://papers.nips.cc/paper_files/paper/2023/hash/1d5fce9627e15c84db572a66e029b1fc-Abstract-Conference.html
|
Jacobus van der Linden, Mathijs de Weerdt, Emir Demirović
|
https://papers.nips.cc/paper_files/paper/2023/hash/1d5fce9627e15c84db572a66e029b1fc-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/20698-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1d5fce9627e15c84db572a66e029b1fc-Paper-Conference.pdf
| null |
Global optimization of decision trees has been shown to be promising in terms of accuracy, size, and consequently human comprehensibility. However, many of the methods used rely on general-purpose solvers, for which scalability remains an issue. Dynamic programming methods have been shown to scale much better because they exploit the tree structure by solving subtrees as independent subproblems. However, this only works when an objective can be optimized separately for subtrees. We explore this relationship in detail and show the necessary and sufficient conditions for such separability and generalize previous dynamic programming approaches into a framework that can optimize any combination of separable objectives and constraints. Experiments on five application domains show the general applicability of this framework, while outperforming the scalability of general-purpose solvers by a large margin.
| null |
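A minimal illustration of the separability that makes the dynamic programming above work: with a separable objective such as misclassification count, the optimal cost at a node is the best split plus the independently optimal subtree costs of its children. The recursion below, on binary features and labels, is a textbook-style sketch under those assumptions, not the paper's optimized framework.

```python
from functools import lru_cache

def best_tree_cost(X, y, depth):
    """Minimum misclassifications of any depth-bounded tree on (X, y).
    Separability: cost(node) = min over splits of cost(left) + cost(right)."""
    def leaf_cost(idx):
        ones = sum(y[i] for i in idx)
        return min(ones, len(idx) - ones)          # predict the majority label

    @lru_cache(maxsize=None)
    def solve(idx, d):
        idx = list(idx)
        best = leaf_cost(idx)
        if d == 0 or best == 0:
            return best
        for f in range(len(X[0])):
            left = tuple(i for i in idx if X[i][f] == 0)
            right = tuple(i for i in idx if X[i][f] == 1)
            if not left or not right:
                continue                            # degenerate split
            best = min(best, solve(left, d - 1) + solve(right, d - 1))
        return best

    return solve(tuple(range(len(y))), depth)

# Toy data: the label is the XOR of the first two binary features.
X = [(0, 0, 1), (0, 1, 0), (1, 0, 1), (1, 1, 0), (0, 0, 0), (1, 1, 1)]
y = [0, 1, 1, 0, 0, 0]
print(best_tree_cost(X, y, depth=2))   # 0: a depth-2 tree fits XOR exactly
```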
Polyhedron Attention Module: Learning Adaptive-order Interactions
|
https://papers.nips.cc/paper_files/paper/2023/hash/1d83ad88759cef8192451543e5d59bf6-Abstract-Conference.html
|
Tan Zhu, Fei Dou, Xinyu Wang, Jin Lu, Jinbo Bi
|
https://papers.nips.cc/paper_files/paper/2023/hash/1d83ad88759cef8192451543e5d59bf6-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/21815-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1d83ad88759cef8192451543e5d59bf6-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1d83ad88759cef8192451543e5d59bf6-Supplemental-Conference.pdf
|
Learning feature interactions can be the key for multivariate predictive modeling. ReLU-activated neural networks create piecewise linear prediction models, and other nonlinear activation functions lead to models with only high-order feature interactions. Recent methods incorporate candidate polynomial terms of fixed orders into deep learning, which is subject to the issue of combinatorial explosion, or learn the orders that are difficult to adapt to different regions of the feature space. We propose a Polyhedron Attention Module (PAM) to create piecewise polynomial models where the input space is split into polyhedrons which define the different pieces and on each piece the hyperplanes that define the polyhedron boundary multiply to form the interactive terms, resulting in interactions of adaptive order to each piece. PAM is interpretable to identify important interactions in predicting a target. Theoretic analysis shows that PAM has stronger expression capability than ReLU-activated networks. Extensive experimental results demonstrate the superior classification performance of PAM on massive datasets of the click-through rate prediction and PAM can learn meaningful interaction effects in a medical problem.
| null |
Faster Query Times for Fully Dynamic $k$-Center Clustering with Outliers
|
https://papers.nips.cc/paper_files/paper/2023/hash/1d8e261c241aa72f9b4a02af7f52587e-Abstract-Conference.html
|
Leyla Biabani, Annika Hennes, Morteza Monemizadeh, Melanie Schmidt
|
https://papers.nips.cc/paper_files/paper/2023/hash/1d8e261c241aa72f9b4a02af7f52587e-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/21092-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1d8e261c241aa72f9b4a02af7f52587e-Paper-Conference.pdf
| null |
Given a point set $P\subseteq M$ from a metric space $(M,d)$ and numbers $k, z \in \mathbb{N}$, the *metric $k$-center problem with $z$ outliers* is to find a set $C^\ast\subseteq P$ of $k$ points such that the maximum distance of all but at most $z$ outlier points of $P$ to their nearest center in ${C}^\ast$ is minimized. We consider this problem in the fully dynamic model, i.e., under insertions and deletions of points, for the case that the metric space has a bounded doubling dimension $dim$. We utilize a hierarchical data structure to maintain the points and their neighborhoods, which enables us to efficiently find the clusters. In particular, our data structure can be queried at any time to generate a $(3+\varepsilon)$-approximate solution for input values of $k$ and $z$ in worst-case query time $\varepsilon^{-O(dim)}k \log{n} \log\log{\Delta}$, where $\Delta$ is the ratio between the maximum and minimum distance between two points in $P$. Moreover, it allows insertion/deletion of a point in worst-case update time $\varepsilon^{-O(dim)}\log{n}\log{\Delta}$. Our result achieves a significantly faster query time with respect to $k$ and $z$ than the current state-of-the-art by Pellizzoni, Pietracaprina, and Pucci, which uses $\varepsilon^{-O(dim)}(k+z)^2\log{\Delta}$ query time to obtain a $(3+\varepsilon)$-approximation.
| null |
Natural Language Instruction-following with Task-related Language Development and Translation
|
https://papers.nips.cc/paper_files/paper/2023/hash/1dc2fe8d9ae956616f86bab3ce5edc59-Abstract-Conference.html
|
Jing-Cheng Pang, Xin-Yu Yang, Si-Hang Yang, Xiong-Hui Chen, Yang Yu
|
https://papers.nips.cc/paper_files/paper/2023/hash/1dc2fe8d9ae956616f86bab3ce5edc59-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/19913-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1dc2fe8d9ae956616f86bab3ce5edc59-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1dc2fe8d9ae956616f86bab3ce5edc59-Supplemental-Conference.zip
|
Natural language-conditioned reinforcement learning (RL) enables agents to follow human instructions. Previous approaches generally implemented language-conditioned RL by providing the policy with human instructions in natural language (NL) and training the policy to follow instructions. In this outside-in approach, the policy must comprehend the NL and manage the task simultaneously. However, the unbounded NL examples often bring much extra complexity for solving concrete RL tasks, which can distract policy learning from completing the task. To ease the learning burden of the policy, we investigate an inside-out scheme for natural language-conditioned RL by developing a task language (TL) that is task-related and easily understood by the policy, thus reducing the policy learning burden. In addition, we employ a translator to translate natural language into the TL, which is used in RL to achieve efficient policy training. We implement this scheme as TALAR (TAsk Language with predicAte Representation), which learns multiple predicates to model object relationships as the TL. Experiments indicate that TALAR not only better comprehends NL instructions but also leads to a better instruction-following policy that significantly improves the success rate over baselines and adapts to unseen expressions of NL instruction. Moreover, the TL is also an effective sub-task abstraction compatible with hierarchical RL.
| null |
Convergence of Actor-Critic with Multi-Layer Neural Networks
|
https://papers.nips.cc/paper_files/paper/2023/hash/1dc9fbdb6b4d9955ad377cb983232c9f-Abstract-Conference.html
|
Haoxing Tian, Alex Olshevsky, Yannis Paschalidis
|
https://papers.nips.cc/paper_files/paper/2023/hash/1dc9fbdb6b4d9955ad377cb983232c9f-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/22701-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1dc9fbdb6b4d9955ad377cb983232c9f-Paper-Conference.pdf
| null |
The early theory of actor-critic methods considered convergence using linear function approximators for the policy and value functions. Recent work has established convergence using neural network approximators with a single hidden layer. In this work we are taking the natural next step and establish convergence using deep neural networks with an arbitrary number of hidden layers, thus closing a gap between theory and practice. We show that actor-critic updates projected on a ball around the initial condition will converge to a neighborhood where the average of the squared gradients is $\tilde{O} \left( 1/\sqrt{m} \right) + O \left( \epsilon \right)$, with $m$ being the width of the neural network and $\epsilon$ the approximation quality of the best critic neural network over the projected set.
| null |
Percentile Criterion Optimization in Offline Reinforcement Learning
|
https://papers.nips.cc/paper_files/paper/2023/hash/1dec73169509c223220744b2c9b2df37-Abstract-Conference.html
|
Cyrus Cousins, Elita Lobo, Marek Petrik, Yair Zick
|
https://papers.nips.cc/paper_files/paper/2023/hash/1dec73169509c223220744b2c9b2df37-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/22821-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1dec73169509c223220744b2c9b2df37-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1dec73169509c223220744b2c9b2df37-Supplemental-Conference.zip
|
In reinforcement learning, robust policies for high-stakes decision-making problems with limited data are usually computed by optimizing the percentile criterion. The percentile criterion is optimized by constructing an uncertainty set that contains the true model with high probability and optimizing the policy for the worst model in the set. Since the percentile criterion is non-convex, constructing these sets itself is challenging. Existing works use Bayesian credible regions as uncertainty sets, but they are often unnecessarily large and result in learning overly conservative policies. To overcome these shortcomings, we propose a novel Value-at-Risk based dynamic programming algorithm to optimize the percentile criterion without explicitly constructing any uncertainty sets. Our theoretical and empirical results show that our algorithm implicitly constructs much smaller uncertainty sets and learns less-conservative robust policies.
| null |
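To make the Value-at-Risk flavour of the dynamic program above concrete, here is a hedged tabular sketch: instead of optimizing against the worst model in an explicit uncertainty set, each Bellman backup takes a lower quantile (VaR) of the backed-up values across posterior samples of the transition model. The quantile level, posterior sampling, and tabular setting are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def var_value_iteration(P_samples, R, gamma=0.95, alpha=0.1, n_iter=200):
    """Robust value iteration where each backup uses the alpha-quantile (VaR)
    of returns over sampled transition models P_samples: (n_models, S, A, S')."""
    V = np.zeros(P_samples.shape[1])
    for _ in range(n_iter):
        # Backed-up Q-values under every sampled model: (n_models, S, A)
        Q = R[None, :, :] + gamma * np.einsum("msat,t->msa", P_samples, V)
        Q_var = np.quantile(Q, alpha, axis=0)     # pessimistic alpha-quantile per (s, a)
        V = Q_var.max(axis=1)                     # greedy over actions
    return V, Q_var.argmax(axis=1)                # robust values and policy

# Toy posterior: 50 sampled models of a 4-state, 2-action MDP.
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(4), size=(50, 4, 2))    # (models, S, A, S'); rows sum to 1
R = rng.uniform(0, 1, size=(4, 2))
V, pi = var_value_iteration(P, R)
print("robust values:", np.round(V, 3), "policy:", pi)
```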
TextDiffuser: Diffusion Models as Text Painters
|
https://papers.nips.cc/paper_files/paper/2023/hash/1df4afb0b4ebf492a41218ce16b6d8df-Abstract-Conference.html
|
Jingye Chen, Yupan Huang, Tengchao Lv, Lei Cui, Qifeng Chen, Furu Wei
|
https://papers.nips.cc/paper_files/paper/2023/hash/1df4afb0b4ebf492a41218ce16b6d8df-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/20405-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1df4afb0b4ebf492a41218ce16b6d8df-Paper-Conference.pdf
| null |
Diffusion models have gained increasing attention for their impressive generation abilities but currently struggle with rendering accurate and coherent text. To address this issue, we introduce TextDiffuser, focusing on generating images with visually appealing text that is coherent with backgrounds. TextDiffuser consists of two stages: first, a Transformer model generates the layout of keywords extracted from text prompts, and then diffusion models generate images conditioned on the text prompt and the generated layout. Additionally, we contribute the first large-scale text images dataset with OCR annotations, MARIO-10M, containing 10 million image-text pairs with text recognition, detection, and character-level segmentation annotations. We further collect the MARIO-Eval benchmark to serve as a comprehensive tool for evaluating text rendering quality. Through experiments and user studies, we demonstrate that TextDiffuser is flexible and controllable to create high-quality text images using text prompts alone or together with text template images, and conduct text inpainting to reconstruct incomplete images with text. We will make the code, model and dataset publicly available.
| null |
Object-centric Learning with Cyclic Walks between Parts and Whole
|
https://papers.nips.cc/paper_files/paper/2023/hash/1e0d38c676d5855bcfab7f6d29d20ad9-Abstract-Conference.html
|
Ziyu Wang, Mike Zheng Shou, Mengmi Zhang
|
https://papers.nips.cc/paper_files/paper/2023/hash/1e0d38c676d5855bcfab7f6d29d20ad9-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/20264-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1e0d38c676d5855bcfab7f6d29d20ad9-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1e0d38c676d5855bcfab7f6d29d20ad9-Supplemental-Conference.pdf
|
Learning object-centric representations from complex natural environments equips both humans and machines with the ability to reason from low-level perceptual features. To capture compositional entities of the scene, we propose cyclic walks between perceptual features extracted from vision transformers and object entities. First, a slot-attention module interfaces with these perceptual features and produces a finite set of slot representations. These slots can bind to any object entities in the scene via inter-slot competition for attention. Next, we establish entity-feature correspondence with cyclic walks along paths of high transition probability, based on the pairwise similarity between perceptual features (aka "parts") and slot-bound object representations (aka "whole"). The whole is greater than its parts and the parts constitute the whole. The part-whole interactions form cycle consistencies, as supervisory signals, to train the slot-attention module. Our rigorous experiments on \textit{seven} image datasets in \textit{three} \textit{unsupervised} tasks demonstrate that the networks trained with our cyclic walks can disentangle foregrounds and backgrounds, discover objects, and segment semantic objects in complex scenes. In contrast to object-centric models attached with a decoder for pixel-level or feature-level reconstruction, our cyclic walks provide strong learning signals, avoiding computation overheads and enhancing memory efficiency. Our source code and data are available at: \href{https://github.com/ZhangLab-DeepNeuroCogLab/Parts-Whole-Object-Centric-Learning/}{link}.
| null |
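The cycle-consistency signal described above can be sketched as a two-step random walk from slots ("whole") to perceptual features ("parts") and back: transition probabilities come from pairwise similarity, and the round-trip matrix is pushed toward the identity so each slot returns to itself. The temperature and the negative-log-likelihood form below are assumptions of this minimal PyTorch sketch, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def cyclic_walk_loss(features: torch.Tensor, slots: torch.Tensor, tau: float = 0.1):
    """features: (N, D) perceptual features ("parts"); slots: (K, D) object slots ("whole").
    Walk whole -> parts -> whole and require the round trip to return to the start."""
    sim = slots @ features.t() / tau                 # (K, N) pairwise similarity
    p_slot_to_part = F.softmax(sim, dim=1)           # transition probabilities whole -> parts
    p_part_to_slot = F.softmax(sim.t(), dim=1)       # transition probabilities parts -> whole
    round_trip = p_slot_to_part @ p_part_to_slot     # (K, K) two-step walk, rows sum to 1
    target = torch.arange(slots.size(0))             # each slot should return to itself
    return F.nll_loss(torch.log(round_trip + 1e-8), target)

features = F.normalize(torch.randn(64, 32), dim=1)  # e.g. ViT patch features
slots = F.normalize(torch.randn(7, 32), dim=1)      # e.g. slot-attention outputs
print(cyclic_walk_loss(features, slots))
```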
Experiment Planning with Function Approximation
|
https://papers.nips.cc/paper_files/paper/2023/hash/1e0d9f30c100129259f66660403fb1e2-Abstract-Conference.html
|
Aldo Pacchiano, Jonathan Lee, Emma Brunskill
|
https://papers.nips.cc/paper_files/paper/2023/hash/1e0d9f30c100129259f66660403fb1e2-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/20119-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1e0d9f30c100129259f66660403fb1e2-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1e0d9f30c100129259f66660403fb1e2-Supplemental-Conference.pdf
|
We study the problem of experiment planning with function approximation in contextual bandit problems. In settings where there is a significant overhead to deploying adaptive algorithms---for example, when the execution of the data collection policies is required to be distributed, or a human in the loop is needed to implement these policies---producing in advance a set of policies for data collection is paramount. We study the setting where a large dataset of contexts, but not rewards, is available and may be used by the learner to design an effective data collection strategy. Although this problem has been well studied when rewards are linear, results are still missing for more complex reward models. In this work we propose two experiment planning strategies compatible with function approximation. The first is an eluder planning and sampling procedure that can recover optimality guarantees depending on the eluder dimension of the reward function class. For the second, we show that a uniform sampler achieves competitive optimality rates in the setting where the number of actions is small. We conclude by introducing a statistical gap that fleshes out the fundamental differences between planning and adaptive learning, and we provide results for planning with model selection.
| null |
White-Box Transformers via Sparse Rate Reduction
|
https://papers.nips.cc/paper_files/paper/2023/hash/1e118ba9ee76c20df728b42a35fb4704-Abstract-Conference.html
|
Yaodong Yu, Sam Buchanan, Druv Pai, Tianzhe Chu, Ziyang Wu, Shengbang Tong, Benjamin Haeffele, Yi Ma
|
https://papers.nips.cc/paper_files/paper/2023/hash/1e118ba9ee76c20df728b42a35fb4704-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/19521-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1e118ba9ee76c20df728b42a35fb4704-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1e118ba9ee76c20df728b42a35fb4704-Supplemental-Conference.pdf
|
In this paper, we contend that the objective of representation learning is to compress and transform the distribution of the data, say sets of tokens, towards a mixture of low-dimensional Gaussian distributions supported on incoherent subspaces. The quality of the final representation can be measured by a unified objective function called sparse rate reduction. From this perspective, popular deep networks such as transformers can be naturally viewed as realizing iterative schemes to optimize this objective incrementally. Particularly, we show that the standard transformer block can be derived from alternating optimization on complementary parts of this objective: the multi-head self-attention operator can be viewed as a gradient descent step to compress the token sets by minimizing their lossy coding rate, and the subsequent multi-layer perceptron can be viewed as attempting to sparsify the representation of the tokens. This leads to a family of white-box transformer-like deep network architectures which are mathematically fully interpretable. Despite their simplicity, experiments show that these networks indeed learn to optimize the designed objective: they compress and sparsify representations of large-scale real-world vision datasets such as ImageNet, and achieve performance very close to thoroughly engineered transformers such as ViT. Code is at https://github.com/Ma-Lab-Berkeley/CRATE.
| null |
Task-Robust Pre-Training for Worst-Case Downstream Adaptation
|
https://papers.nips.cc/paper_files/paper/2023/hash/1e4322fddd833f83c855660ac65e428d-Abstract-Conference.html
|
Jianghui Wang, Yang Chen, Xingyu Xie, Cong Fang, Zhouchen Lin
|
https://papers.nips.cc/paper_files/paper/2023/hash/1e4322fddd833f83c855660ac65e428d-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/19858-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1e4322fddd833f83c855660ac65e428d-Paper-Conference.pdf
| null |
Pre-training has achieved remarkable success when transferred to downstream tasks. In machine learning, we care not only about the good performance of a model but also about its behavior under reasonable shifts of condition. The same philosophy holds when pre-training a foundation model. However, the foundation model may not behave uniformly well across a series of related downstream tasks. This happens, for example, in mask-recovery regression when the recovery abilities or the training instances diverge: pattern features are extracted dominantly during pre-training, but semantic features are also required by a downstream task. This paper considers pre-training a model that guarantees uniformly good performance over the downstream tasks. We call this goal downstream-task robustness. Our method first separates the upstream task into several representative ones and applies a simple minimax loss for pre-training. We then design an efficient algorithm to solve the minimax loss and prove its convergence in the convex setting. In the experiments, we show on both large-scale natural language processing and computer vision datasets that our method improves the metrics on worst-case downstream tasks. Additionally, some theoretical explanations for why our loss is beneficial are provided. Specifically, we show that fewer samples are inherently required for the most challenging downstream task in some cases.
| null |
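A hedged sketch of the minimax idea above: split the pre-training data into a few representative tasks and, at each step, take a gradient step on the currently worst task loss. The toy model, the tasks, and the plain max are assumptions made for illustration; the paper designs a dedicated algorithm for solving its minimax loss.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

# Three representative pre-training tasks, each a toy regression dataset.
tasks = [(torch.randn(128, 20), torch.randn(128, 1)) for _ in range(3)]

for step in range(100):
    task_losses = torch.stack([loss_fn(model(x), y) for x, y in tasks])
    worst_loss = task_losses.max()      # minimax objective: minimize the worst task loss
    opt.zero_grad()
    worst_loss.backward()
    opt.step()

with torch.no_grad():
    print("final per-task losses:",
          [round(loss_fn(model(x), y).item(), 3) for x, y in tasks])
```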
Inconsistency, Instability, and Generalization Gap of Deep Neural Network Training
|
https://papers.nips.cc/paper_files/paper/2023/hash/1e58b1bf9f218fcd19e4539e982752a5-Abstract-Conference.html
|
Rie Johnson, Tong Zhang
|
https://papers.nips.cc/paper_files/paper/2023/hash/1e58b1bf9f218fcd19e4539e982752a5-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/20303-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1e58b1bf9f218fcd19e4539e982752a5-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1e58b1bf9f218fcd19e4539e982752a5-Supplemental-Conference.pdf
|
As deep neural networks are highly expressive, it is important to find solutions with small generalization gap (the difference between the performance on the training data and unseen data). Focusing on the stochastic nature of training, we first present a theoretical analysis in which the bound of generalization gap depends on what we call inconsistency and instability of model outputs, which can be estimated on unlabeled data. Our empirical study based on this analysis shows that instability and inconsistency are strongly predictive of generalization gap in various settings. In particular, our finding indicates that inconsistency is a more reliable indicator of generalization gap than the sharpness of the loss landscape. Furthermore, we show that algorithmic reduction of inconsistency leads to superior performance. The results also provide a theoretical basis for existing methods such as co-distillation and ensemble.
| null |
Neural approximation of Wasserstein distance via a universal architecture for symmetric and factorwise group invariant functions
|
https://papers.nips.cc/paper_files/paper/2023/hash/1e5f58d98523298cba093f658cfdf2d6-Abstract-Conference.html
|
Samantha Chen, Yusu Wang
|
https://papers.nips.cc/paper_files/paper/2023/hash/1e5f58d98523298cba093f658cfdf2d6-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/20312-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1e5f58d98523298cba093f658cfdf2d6-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1e5f58d98523298cba093f658cfdf2d6-Supplemental-Conference.zip
|
Learning distance functions between complex objects, such as the Wasserstein distance to compare point sets, is a common goal in machine learning applications. However, functions on such complex objects (e.g., point sets and graphs) are often required to be invariant to a wide variety of group actions, e.g., permutation or rigid transformation. Therefore, continuous and symmetric *product* functions (such as distance functions) on such complex objects must also be invariant to the *product* of such group actions. We call these functions symmetric and factor-wise group invariant functions (or SFGI functions in short). In this paper, we first present a general neural network architecture for approximating SFGI functions. The main contribution of this paper combines this general NN with a sketching idea in order to develop a specific and efficient neural network which can approximate the $p$-th Wasserstein distance between point sets. Very importantly, the required model complexity is *independent* of the sizes of input point sets. On the theoretical front, to the best of our knowledge, this is the first result showing that there exists a neural network with the capacity to approximate the Wasserstein distance with bounded model complexity. Our work provides an interesting integration of sketching ideas for geometric problems with universal approximation of symmetric functions. On the empirical front, we present a range of results showing that our newly proposed neural network architecture performs comparably or better than other models (including a SOTA Siamese Autoencoder based approach). In particular, our NN generalizes significantly better and trains much faster than the SOTA Siamese AE. Finally, this line of investigation could be useful in exploring effective neural network design for solving a broad range of geometric optimization problems (e.g., $k$-means in a metric space).
| null |
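To make the overall recipe above concrete without reproducing the paper's architecture, here is a hedged sketch: exact 1-Wasserstein distances between small equal-size point sets (via an assignment solver) serve as regression targets, and a siamese sum-pooling encoder, whose parameter count does not grow with the number of input points, is trained to predict them. Everything about the encoder and training loop is an illustrative assumption.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def wasserstein_1(X: np.ndarray, Y: np.ndarray) -> float:
    """Exact 1-Wasserstein distance between equal-size point sets with uniform weights."""
    C = cdist(X, Y)
    r, c = linear_sum_assignment(C)
    return C[r, c].mean()

class SetSketch(nn.Module):
    """Permutation-invariant encoder: per-point MLP + sum pooling.
    Its parameter count is independent of the input set sizes."""
    def __init__(self, d_in=2, d_hidden=64, d_sketch=32):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU(), nn.Linear(d_hidden, d_sketch))
        self.rho = nn.Sequential(nn.Linear(2 * d_sketch, d_hidden), nn.ReLU(), nn.Linear(d_hidden, 1))

    def forward(self, X, Y):                      # X, Y: (n, d) and (m, d) point sets
        sx, sy = self.phi(X).sum(0), self.phi(Y).sum(0)
        return torch.relu(self.rho(torch.cat([sx, sy])))   # non-negative predicted distance

model = SetSketch()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):
    X, Y = np.random.randn(16, 2), np.random.randn(16, 2)
    target = torch.tensor(wasserstein_1(X, Y), dtype=torch.float32)
    pred = model(torch.tensor(X, dtype=torch.float32), torch.tensor(Y, dtype=torch.float32))
    loss = (pred.squeeze() - target) ** 2
    opt.zero_grad()
    loss.backward()
    opt.step()
print("final squared error:", loss.item())
```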
What Do Deep Saliency Models Learn about Visual Attention?
|
https://papers.nips.cc/paper_files/paper/2023/hash/1e680f115a22d60cbc228a0c6dae5936-Abstract-Conference.html
|
Shi Chen, Ming Jiang, Qi Zhao
|
https://papers.nips.cc/paper_files/paper/2023/hash/1e680f115a22d60cbc228a0c6dae5936-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/20044-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1e680f115a22d60cbc228a0c6dae5936-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1e680f115a22d60cbc228a0c6dae5936-Supplemental-Conference.pdf
|
In recent years, deep saliency models have made significant progress in predicting human visual attention. However, the mechanisms behind their success remain largely unexplained due to the opaque nature of deep neural networks. In this paper, we present a novel analytic framework that sheds light on the implicit features learned by saliency models and provides principled interpretation and quantification of their contributions to saliency prediction. Our approach decomposes these implicit features into interpretable bases that are explicitly aligned with semantic attributes and reformulates saliency prediction as a weighted combination of probability maps connecting the bases and saliency. By applying our framework, we conduct extensive analyses from various perspectives, including the positive and negative weights of semantics, the impact of training data and architectural designs, the progressive influences of fine-tuning, and common error patterns of state-of-the-art deep saliency models. Additionally, we demonstrate the effectiveness of our framework by exploring visual attention characteristics in various application scenarios, such as the atypical attention of people with autism spectrum disorder, attention to emotion-eliciting stimuli, and attention evolution over time. Our code is publicly available at \url{https://github.com/szzexpoi/saliency_analysis}.
| null |
Three Iterations of (d − 1)-WL Test Distinguish Non Isometric Clouds of d-dimensional Points
|
https://papers.nips.cc/paper_files/paper/2023/hash/1e6cf8f77bd8e907f53babcd7664c710-Abstract-Conference.html
|
Valentino Delle Rose, Alexander Kozachinskiy, Cristobal Rojas, Mircea Petrache, Pablo Barceló
|
https://papers.nips.cc/paper_files/paper/2023/hash/1e6cf8f77bd8e907f53babcd7664c710-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/19525-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1e6cf8f77bd8e907f53babcd7664c710-Paper-Conference.pdf
| null |
The Weisfeiler-Lehman (WL) test is a fundamental iterative algorithm for checking the isomorphism of graphs. It has also been observed that it underlies the design of several graph neural network architectures, whose capabilities and performance can be understood in terms of the expressive power of this test. Motivated by recent developments in machine learning applications to datasets involving three-dimensional objects, we study when the WL test is {\em complete} for clouds of Euclidean points represented by complete distance graphs, i.e., when it can distinguish, up to isometry, any arbitrary such cloud. Our main result states that the $(d-1)$-dimensional WL test is complete for point clouds in $d$-dimensional Euclidean space, for any $d\ge 2$, and only three iterations of the test suffice. Our result is tight for $d = 2, 3$. We also observe that the $d$-dimensional WL test only requires one iteration to achieve completeness.
| null |
Puzzlefusion: Unleashing the Power of Diffusion Models for Spatial Puzzle Solving
|
https://papers.nips.cc/paper_files/paper/2023/hash/1e70ac91ad26ba5b24cf11b12a1f90fe-Abstract-Conference.html
|
Sepidehsadat (Sepid) Hossieni, Mohammad Amin Shabani, Saghar Irandoust, Yasutaka Furukawa
|
https://papers.nips.cc/paper_files/paper/2023/hash/1e70ac91ad26ba5b24cf11b12a1f90fe-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/22445-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1e70ac91ad26ba5b24cf11b12a1f90fe-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1e70ac91ad26ba5b24cf11b12a1f90fe-Supplemental-Conference.zip
|
This paper presents an end-to-end neural architecture based on Diffusion Models for spatial puzzle solving, particularly jigsaw puzzle and room arrangement tasks. In the latter task, for instance, the proposed system ``PuzzleFusion'' takes a set of room layouts as polygonal curves in the top-down view and aligns the room layout pieces by estimating their 2D translations and rotations, akin to solving the jigsaw puzzle of room layouts. A surprising discovery of the paper is that the simple use of a Diffusion Model effectively solves these challenging spatial puzzle tasks as a conditional generation process. To enable learning of an end-to-end neural system, the paper introduces new datasets with ground-truth arrangements: 1) the 2D Voronoi Jigsaw Dataset, a synthetic one where pieces are generated by the Voronoi diagram of a 2D point set; and 2) the MagicPlan Dataset, a real one from a production pipeline by MagicPlan, where pieces are room layouts constructed with an augmented-reality app by real-estate consumers. The qualitative and quantitative evaluations demonstrate that the proposed approach outperforms the competing methods by significant margins in all three spatial puzzle tasks. We have provided code and data at https://sepidsh.github.io/puzzlefusion.
| null |
Visual Instruction Inversion: Image Editing via Image Prompting
|
https://papers.nips.cc/paper_files/paper/2023/hash/1e75f7539cbde5de895fab238ff42519-Abstract-Conference.html
|
Thao Nguyen, Yuheng Li, Utkarsh Ojha, Yong Jae Lee
|
https://papers.nips.cc/paper_files/paper/2023/hash/1e75f7539cbde5de895fab238ff42519-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/22822-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1e75f7539cbde5de895fab238ff42519-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1e75f7539cbde5de895fab238ff42519-Supplemental-Conference.zip
|
Text-conditioned image editing has emerged as a powerful tool for editing images. However, in many situations, language can be ambiguous and ineffective in describing specific image edits. When faced with such challenges, visual prompts can be a more informative and intuitive way to convey ideas. We present a method for image editing via visual prompting. Given example pairs that represent the "before" and "after" images of an edit, our goal is to learn a text-based editing direction that can be used to perform the same edit on new images. We leverage the rich, pretrained editing capabilities of text-to-image diffusion models by inverting visual prompts into editing instructions. Our results show that with just one example pair, we can achieve competitive results compared to state-of-the-art text-conditioned image editing frameworks.
| null |
Algorithm Selection for Deep Active Learning with Imbalanced Datasets
|
https://papers.nips.cc/paper_files/paper/2023/hash/1e77af93008ee6cd248a31723ce357d8-Abstract-Conference.html
|
Jifan Zhang, Shuai Shao, Saurabh Verma, Robert Nowak
|
https://papers.nips.cc/paper_files/paper/2023/hash/1e77af93008ee6cd248a31723ce357d8-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/20700-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1e77af93008ee6cd248a31723ce357d8-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1e77af93008ee6cd248a31723ce357d8-Supplemental-Conference.zip
|
Label efficiency has become an increasingly important objective in deep learning applications. Active learning aims to reduce the number of labeled examples needed to train deep networks, but the empirical performance of active learning algorithms can vary dramatically across datasets and applications. It is difficult to know in advance which active learning strategy will perform well or best in a given application. To address this, we propose the first adaptive algorithm selection strategy for deep active learning. For any unlabeled dataset, our (meta) algorithm TAILOR (Thompson ActIve Learning algORithm selection) iteratively and adaptively chooses among a set of candidate active learning algorithms. TAILOR uses novel reward functions aimed at gathering class-balanced examples. Extensive experiments in multi-class and multi-label applications demonstrate TAILOR's effectiveness in achieving accuracy comparable or better than that of the best of the candidate algorithms. Our implementation of TAILOR is open-sourced at https://github.com/jifanz/TAILOR.
| null |
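A hedged sketch of the meta-level idea in TAILOR above: treat each candidate active learning strategy as an arm, keep a Beta posterior over the chance that a batch it selects yields under-represented (rare-class) labels, and Thompson-sample which strategy picks the next batch. The Bernoulli reward, the Beta model, and the placeholder strategies are simplifications of TAILOR's class-balance rewards, not its exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
candidates = ["uncertainty", "margin", "coreset", "random"]   # stand-in AL strategies
alpha = np.ones(len(candidates))    # Beta posterior successes (rare-class labels found)
beta = np.ones(len(candidates))     # Beta posterior failures

def run_strategy_and_get_reward(name: str) -> float:
    """Placeholder: in practice, label the batch the chosen strategy selects and
    return whether it contributed examples from under-represented classes."""
    quality = {"uncertainty": 0.35, "margin": 0.30, "coreset": 0.25, "random": 0.10}[name]
    return float(rng.binomial(1, quality))

for round_ in range(200):
    samples = rng.beta(alpha, beta)            # Thompson sampling over strategies
    choice = int(np.argmax(samples))
    reward = run_strategy_and_get_reward(candidates[choice])
    alpha[choice] += reward
    beta[choice] += 1 - reward

print("times each strategy was chosen:", np.round(alpha + beta - 2))
```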
Federated Compositional Deep AUC Maximization
|
https://papers.nips.cc/paper_files/paper/2023/hash/1e7b192fc8b3acb93749c5accfa60e0c-Abstract-Conference.html
|
Xinwen Zhang, Yihan Zhang, Tianbao Yang, Richard Souvenir, Hongchang Gao
|
https://papers.nips.cc/paper_files/paper/2023/hash/1e7b192fc8b3acb93749c5accfa60e0c-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/22352-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1e7b192fc8b3acb93749c5accfa60e0c-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1e7b192fc8b3acb93749c5accfa60e0c-Supplemental-Conference.pdf
|
Federated learning has attracted increasing attention due to the promise of balancing privacy and large-scale learning; numerous approaches have been proposed. However, most existing approaches focus on problems with balanced data, and prediction performance is far from satisfactory for many real-world applications where the number of samples in different classes is highly imbalanced. To address this challenging problem, we developed a novel federated learning method for imbalanced data by directly optimizing the area under curve (AUC) score. In particular, we formulate the AUC maximization problem as a federated compositional minimax optimization problem, develop a local stochastic compositional gradient descent ascent with momentum algorithm, and provide bounds on the computational and communication complexities of our algorithm. To the best of our knowledge, this is the first work to achieve such favorable theoretical results. Finally, extensive experimental results confirm the efficacy of our method.
| null |
On Learning Latent Models with Multi-Instance Weak Supervision
|
https://papers.nips.cc/paper_files/paper/2023/hash/1e83498c3eafe109a44b12979c2c73db-Abstract-Conference.html
|
Kaifu Wang, Efthymia Tsamoura, Dan Roth
|
https://papers.nips.cc/paper_files/paper/2023/hash/1e83498c3eafe109a44b12979c2c73db-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/20079-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1e83498c3eafe109a44b12979c2c73db-Paper-Conference.pdf
| null |
We consider a weakly supervised learning scenario where the supervision signal is generated by a transition function $\sigma$ of labels associated with multiple input instances. We formulate this problem as *multi-instance Partial Label Learning (multi-instance PLL)*, which is an extension to the standard PLL problem. Our problem is met in different fields, including latent structural learning and neuro-symbolic integration. Despite the existence of many learning techniques, limited theoretical analysis has been dedicated to this problem. In this paper, we provide the first theoretical study of multi-instance PLL with possibly an unknown transition $\sigma$. Our main contributions are as follows: First, we proposed a necessary and sufficient condition for the learnability of the problem. This condition nontrivially generalizes and relaxes the existing *small ambiguity degree* in PLL literature since we allow the transition to be deterministic. Second, we derived Rademacher-style error bounds based on the top-$k$ surrogate loss that is widely used in the neuro-symbolic literature. Furthermore, we conclude with empirical experiments for learning with an unknown transition. The empirical results align with our theoretical findings; however, they also expose the issue of scalability in the weak supervision literature.
| null |
Dynamics of Finite Width Kernel and Prediction Fluctuations in Mean Field Neural Networks
|
https://papers.nips.cc/paper_files/paper/2023/hash/1ec69275e9f002ee068f5d68380f3290-Abstract-Conference.html
|
Blake Bordelon, Cengiz Pehlevan
|
https://papers.nips.cc/paper_files/paper/2023/hash/1ec69275e9f002ee068f5d68380f3290-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/19464-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1ec69275e9f002ee068f5d68380f3290-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1ec69275e9f002ee068f5d68380f3290-Supplemental-Conference.zip
|
We analyze the dynamics of finite width effects in wide but finite feature learning neural networks. Starting from a dynamical mean field theory description of infinite width deep neural network kernel and prediction dynamics, we provide a characterization of the $\mathcal{O}(1/\sqrt{\text{width}})$ fluctuations of the DMFT order parameters over random initializations of the network weights. Our results, while perturbative in width, unlike prior analyses, are non-perturbative in the strength of feature learning. In the lazy limit of network training, all kernels are random but static in time and the prediction variance has a universal form. However, in the rich, feature learning regime, the fluctuations of the kernels and predictions are dynamically coupled with a variance that can be computed self-consistently. In two layer networks, we show how feature learning can dynamically reduce the variance of the final tangent kernel and final network predictions. We also show how initialization variance can slow down online learning in wide but finite networks. In deeper networks, kernel variance can dramatically accumulate through subsequent layers at large feature learning strengths, but feature learning continues to improve the signal-to-noise ratio of the feature kernels. In discrete time, we demonstrate that large learning rate phenomena such as edge of stability effects can be well captured by infinite width dynamics and that initialization variance can decrease dynamically. For CNNs trained on CIFAR-10, we empirically find significant corrections to both the bias and variance of network dynamics due to finite width.
| null |
TART: A plug-and-play Transformer module for task-agnostic reasoning
|
https://papers.nips.cc/paper_files/paper/2023/hash/1ece70d2259b8e9510e2d4ca8754cecf-Abstract-Conference.html
|
Kush Bhatia, Avanika Narayan, Christopher M. De Sa, Christopher Ré
|
https://papers.nips.cc/paper_files/paper/2023/hash/1ece70d2259b8e9510e2d4ca8754cecf-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/22915-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1ece70d2259b8e9510e2d4ca8754cecf-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1ece70d2259b8e9510e2d4ca8754cecf-Supplemental-Conference.zip
|
Large language models (LLMs) exhibit in-context learning abilities which enable the same model to perform several tasks without any task-specific training. In contrast, traditional adaptation approaches, such as fine-tuning, modify the underlying models for each specific task. In-context learning, however, consistently underperforms task-specific tuning approaches even when presented with the same examples. While most existing approaches (e.g., prompt engineering) focus on the LLM's learned representations to patch this performance gap, our experiments actually reveal that LLM representations contain sufficient information to make good predictions. As such, we focus on the LLM's reasoning abilities and demonstrate that this performance gap exists due to their inability to perform simple probabilistic reasoning tasks. This raises an intriguing question: Are LLMs actually capable of learning how to reason in a task-agnostic manner? We answer this in the affirmative and, as a proof of concept, propose TART which generically improves an LLM's reasoning abilities using a synthetically trained reasoning module. TART trains this Transformer-based reasoning module in a task-agnostic manner using only synthetic logistic regression tasks and composes it with an arbitrary real-world pre-trained model without any additional training. With a single inference module, TART improves performance across different model families (GPT-Neo, Pythia, Bloom), model sizes (100M - 6B), tasks (14 NLP classification tasks), and even across different modalities (audio and vision). On the RAFT Benchmark, TART improves GPT-Neo (125M)'s performance such that it outperforms Bloom (176B), and is within $4$% of GPT-3.
| null |
Navigating the Pitfalls of Active Learning Evaluation: A Systematic Framework for Meaningful Performance Assessment
|
https://papers.nips.cc/paper_files/paper/2023/hash/1ed4723f12853cbd02aecb8160f5e0c9-Abstract-Conference.html
|
Carsten Lüth, Till Bungert, Lukas Klein, Paul Jaeger
|
https://papers.nips.cc/paper_files/paper/2023/hash/1ed4723f12853cbd02aecb8160f5e0c9-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/22850-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1ed4723f12853cbd02aecb8160f5e0c9-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1ed4723f12853cbd02aecb8160f5e0c9-Supplemental-Conference.pdf
|
Active Learning (AL) aims to reduce the labeling burden by interactively selecting the most informative samples from a pool of unlabeled data. While there has been extensive research on improving AL query methods in recent years, some studies have questioned the effectiveness of AL compared to emerging paradigms such as semi-supervised (Semi-SL) and self-supervised learning (Self-SL), or a simple optimization of classifier configurations. Thus, today’s AL literature presents an inconsistent and contradictory landscape, leaving practitioners uncertain about whether and how to use AL in their tasks. In this work, we make the case that this inconsistency arises from a lack of systematic and realistic evaluation of AL methods. Specifically, we identify five key pitfalls in the current literature that reflect the delicate considerations required for AL evaluation. Further, we present an evaluation framework that overcomes these pitfalls and thus enables meaningful statements about the performance of AL methods. To demonstrate the relevance of our protocol, we present a large-scale empirical study and benchmark for image classification spanning various data sets, query methods, AL settings, and training paradigms. Our findings clarify the inconsistent picture in the literature and enable us to give hands-on recommendations for practitioners. The benchmark is hosted at https://github.com/IML-DKFZ/realistic-al.
| null |
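To make the "selecting the most informative samples" step in the abstract above concrete, here is a minimal sketch of one common AL query method, least-confidence uncertainty sampling. It illustrates the generic pool-based query step only; it is not the paper's benchmarked configuration, and the variable names are placeholders.

```python
import numpy as np

def least_confidence_query(probs, batch_size=10):
    """Pick the unlabeled-pool indices whose top predicted class probability
    is lowest, i.e., where the current classifier is least confident."""
    confidence = probs.max(axis=1)          # (n_pool,)
    return np.argsort(confidence)[:batch_size]

# probs: softmax outputs of the current classifier on the unlabeled pool.
probs = np.random.dirichlet(alpha=np.ones(5), size=1000)
query_idx = least_confidence_query(probs, batch_size=10)
print(query_idx)
```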
Semantic HELM: A Human-Readable Memory for Reinforcement Learning
|
https://papers.nips.cc/paper_files/paper/2023/hash/1eeacdf8770e6dd5164cdeec8bcfa8cc-Abstract-Conference.html
|
Fabian Paischer, Thomas Adler, Markus Hofmarcher, Sepp Hochreiter
|
https://papers.nips.cc/paper_files/paper/2023/hash/1eeacdf8770e6dd5164cdeec8bcfa8cc-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/19734-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1eeacdf8770e6dd5164cdeec8bcfa8cc-Paper-Conference.pdf
| null |
Reinforcement learning agents deployed in the real world often have to cope with partially observable environments. Therefore, most agents employ memory mechanisms to approximate the state of the environment. Recently, there have been impressive success stories in mastering partially observable environments, mostly in the realm of computer games like Dota 2, StarCraft II, or MineCraft. However, existing methods lack interpretability in the sense that it is not comprehensible for humans what the agent stores in its memory. In this regard, we propose a novel memory mechanism that represents past events in human language. Our method uses CLIP to associate visual inputs with language tokens. Then we feed these tokens to a pretrained language model that serves the agent as memory and provides it with a coherent and human-readable representation of the past. We train our memory mechanism on a set of partially observable environments and find that it excels on tasks that require a memory component, while mostly attaining performance on-par with strong baselines on tasks that do not. On a challenging continuous recognition task, where memorizing the past is crucial, our memory mechanism converges two orders of magnitude faster than prior methods. Since our memory mechanism is human-readable, we can peek at an agent's memory and check whether crucial pieces of information have been stored. This significantly enhances troubleshooting and paves the way toward more interpretable agents.
| null |
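The Semantic HELM abstract above describes retrieving language tokens whose CLIP embeddings match the current visual input and handing them to a language-model memory. The sketch below shows that retrieval step generically with cosine similarity over placeholder embeddings; the vocabulary, embedding sizes, and top-k rule are assumptions, and no actual CLIP API is used here.

```python
import numpy as np

def tokens_for_observation(image_emb, token_embs, vocab, k=8):
    """Return the k vocabulary tokens whose (CLIP-style) embeddings are most
    similar to the current visual embedding; these tokens would then be fed
    to a pretrained language model as the agent's human-readable memory."""
    img = image_emb / np.linalg.norm(image_emb)
    tok = token_embs / np.linalg.norm(token_embs, axis=1, keepdims=True)
    sims = tok @ img                          # cosine similarities
    top = np.argsort(-sims)[:k]
    return [vocab[i] for i in top]

vocab = ["key", "door", "ladder", "enemy", "coin", "wall", "chest", "torch"]
rng = np.random.default_rng(0)
token_embs = rng.normal(size=(len(vocab), 64))   # placeholder token embeddings
image_emb = rng.normal(size=64)                  # placeholder visual embedding
print(tokens_for_observation(image_emb, token_embs, vocab, k=3))
```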
Empowering Convolutional Neural Nets with MetaSin Activation
|
https://papers.nips.cc/paper_files/paper/2023/hash/1f05584d537c92c8271699f207677475-Abstract-Conference.html
|
Farnood Salehi, Tunç Aydin, André Gaillard, Guglielmo Camporese, Yuxuan Wang
|
https://papers.nips.cc/paper_files/paper/2023/hash/1f05584d537c92c8271699f207677475-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/23001-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1f05584d537c92c8271699f207677475-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1f05584d537c92c8271699f207677475-Supplemental-Conference.pdf
|
ReLU networks have remained the default choice for models in the area of image prediction despite their well-established spectral bias towards learning low frequencies faster, and consequently their difficulty of reproducing high frequency visual details. As an alternative, sin networks showed promising results in learning implicit representations of visual data. However, training these networks in practically relevant settings proved to be difficult, requiring careful initialization and dealing with issues due to inconsistent gradients and degenerate local minima. In this work, we instead propose replacing a baseline network’s existing activations with a novel ensemble function with trainable parameters. The proposed MetaSin activation can be trained reliably without requiring intricate initialization schemes, and results in consistently lower test loss compared to alternatives. We demonstrate our method in the areas of Monte-Carlo denoising and image resampling where we set new state-of-the-art through a knowledge distillation based training procedure. We present ablations on hyper-parameter settings, comparisons with alternative activation function formulations, and discuss the use of our method in other domains, such as image classification.
| null |
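The abstract above only says that MetaSin is an ensemble activation with trainable parameters, so the sketch below is one plausible form: a trainable weighted sum of sinusoids plus a pass-through term. The class name, number of terms, and parametrisation are guesses for illustration and may differ from the paper's actual MetaSin definition.

```python
import torch
import torch.nn as nn

class MetaSinLike(nn.Module):
    """A trainable ensemble activation: a weighted sum of sinusoids plus a
    pass-through term. A guess at the general shape described in the
    abstract, not the paper's exact MetaSin formulation."""
    def __init__(self, n_terms=4):
        super().__init__()
        self.amp = nn.Parameter(torch.randn(n_terms) * 0.1)
        self.freq = nn.Parameter(torch.linspace(1.0, float(n_terms), n_terms))
        self.phase = nn.Parameter(torch.zeros(n_terms))
        self.skip = nn.Parameter(torch.tensor(1.0))

    def forward(self, x):
        # x: (..., features); broadcast the ensemble over a trailing dimension.
        s = torch.sin(x.unsqueeze(-1) * self.freq + self.phase)
        return self.skip * x + (s * self.amp).sum(dim=-1)

act = MetaSinLike()
print(act(torch.randn(2, 8)).shape)  # torch.Size([2, 8])
```

Because every parameter is an `nn.Parameter`, such an activation could in principle be dropped into an existing ReLU baseline and trained end-to-end, which is the replacement strategy the abstract describes.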
When Does Confidence-Based Cascade Deferral Suffice?
|
https://papers.nips.cc/paper_files/paper/2023/hash/1f09e1ee5035a4c3fe38a5681cae5815-Abstract-Conference.html
|
Wittawat Jitkrittum, Neha Gupta, Aditya K. Menon, Harikrishna Narasimhan, Ankit Rawat, Sanjiv Kumar
|
https://papers.nips.cc/paper_files/paper/2023/hash/1f09e1ee5035a4c3fe38a5681cae5815-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/22846-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1f09e1ee5035a4c3fe38a5681cae5815-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1f09e1ee5035a4c3fe38a5681cae5815-Supplemental-Conference.pdf
|
Cascades are a classical strategy to enable inference cost to vary adaptively across samples, wherein a sequence of classifiers is invoked in turn. A deferral rule determines whether to invoke the next classifier in the sequence, or to terminate prediction. One simple deferral rule employs the confidence of the current classifier, e.g., based on the maximum predicted softmax probability. Despite being oblivious to the structure of the cascade --- e.g., not modelling the errors of downstream models --- such confidence-based deferral often works remarkably well in practice. In this paper, we seek to better understand the conditions under which confidence-based deferral may fail, and when alternate deferral strategies can perform better. We first present a theoretical characterisation of the optimal deferral rule, which precisely characterises settings under which confidence-based deferral may suffer. We then study post-hoc deferral mechanisms, and demonstrate they can significantly improve upon confidence-based deferral in settings where (i) downstream models are specialists that only work well on a subset of inputs, (ii) samples are subject to label noise, and (iii) there is distribution shift between the train and test set.
| null |
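The confidence-based deferral rule analysed in the abstract above is simple enough to state in a few lines: use the small model when its maximum softmax probability clears a threshold, otherwise invoke the next model. A minimal sketch follows; the toy models and threshold are placeholders, and real cascades would batch this over many inputs.

```python
import numpy as np

def cascade_predict(x, small_model, large_model, threshold=0.9):
    """Confidence-based deferral for a two-model cascade: keep the small
    model's prediction when its max softmax probability is high enough,
    otherwise defer to the large model."""
    probs = small_model(x)                    # softmax probabilities, shape (n_classes,)
    if probs.max() >= threshold:
        return int(probs.argmax()), "small"
    return int(large_model(x).argmax()), "large"

# Toy stand-ins for the two classifiers (any callables returning softmax vectors).
small = lambda x: np.array([0.7, 0.2, 0.1])
large = lambda x: np.array([0.1, 0.85, 0.05])
print(cascade_predict(None, small, large, threshold=0.9))  # -> (1, 'large'): deferred
```

Note that the rule never looks at the large model's behaviour when deciding to defer, which is exactly the "obliviousness" the paper probes.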
DeWave: Discrete Encoding of EEG Waves for EEG to Text Translation
|
https://papers.nips.cc/paper_files/paper/2023/hash/1f2fd23309a5b2d2537d063b29ec1b52-Abstract-Conference.html
|
Yiqun Duan, Jinzhao Zhou, Zhen Wang, Yu-Kai Wang, Chin-teng Lin
|
https://papers.nips.cc/paper_files/paper/2023/hash/1f2fd23309a5b2d2537d063b29ec1b52-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/22572-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1f2fd23309a5b2d2537d063b29ec1b52-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1f2fd23309a5b2d2537d063b29ec1b52-Supplemental-Conference.pdf
|
The translation of brain dynamics into natural language is pivotal for brain-computer interfaces (BCIs), a field that has seen substantial growth in recent years. With the swift advancement of large language models, such as ChatGPT, the need to bridge the gap between the brain and languages becomes increasingly pressing. Current methods, however, require eye-tracking fixations or event markers to segment brain dynamics into word-level features, which can restrict the practical application of these systems. These event markers may not be readily available or could be challenging to acquire during real-time inference, and the sequence of eye fixations may not align with the order of spoken words. To tackle these issues, we introduce a novel framework, DeWave, that integrates discrete encoding sequences into open-vocabulary EEG-to-text translation tasks. DeWave uses a quantized variational encoder to derive discrete codex encoding and align it with pre-trained language models. This discrete codex representation brings forth two advantages: 1) it alleviates the order mismatch between eye fixations and spoken words by introducing text-EEG contrastive alignment training, and 2) it minimizes the interference caused by individual differences in EEG waves through an invariant discrete codex. Our model surpasses the previous baseline (40.1 and 31.7) by 3.06% and 6.34%, respectively, achieving 41.35 BLEU-1 and 33.71 Rouge-F on the ZuCo Dataset. Furthermore, this work is the first to facilitate the translation of entire EEG signal periods without the need for word-level order markers (e.g., eye fixations), scoring 20.5 BLEU-1 and 29.5 Rouge-1 on the ZuCo Dataset, respectively.
| null |
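The "discrete codex" in the DeWave abstract above amounts to mapping continuous EEG features onto entries of a learned codebook. The sketch below shows only that generic vector-quantisation lookup with random placeholder features and codes; DeWave's actual quantized variational encoder, codebook training, and alignment with language models are considerably more involved.

```python
import numpy as np

def quantize_to_codex(features, codebook):
    """Map each continuous EEG feature vector to the index of its nearest
    codebook entry (a generic VQ step; not DeWave's exact encoder)."""
    # features: (T, d) windowed EEG embeddings; codebook: (K, d) learned codes.
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)      # (T,) discrete codex indices

rng = np.random.default_rng(0)
features = rng.normal(size=(10, 32))    # placeholder EEG window embeddings
codebook = rng.normal(size=(256, 32))   # placeholder codebook
print(quantize_to_codex(features, codebook))
```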
SpatialRank: Urban Event Ranking with NDCG Optimization on Spatiotemporal Data
|
https://papers.nips.cc/paper_files/paper/2023/hash/1f3cbee17170c3ffff3e413d2df54f6b-Abstract-Conference.html
|
BANG AN, Xun Zhou, YONGJIAN ZHONG, Tianbao Yang
|
https://papers.nips.cc/paper_files/paper/2023/hash/1f3cbee17170c3ffff3e413d2df54f6b-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/22576-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1f3cbee17170c3ffff3e413d2df54f6b-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1f3cbee17170c3ffff3e413d2df54f6b-Supplemental-Conference.zip
|
The problem of urban event ranking aims at predicting the top-$k$ most risky locations of future events such as traffic accidents and crimes. This problem is of fundamental importance to public safety and urban administration especially when limited resources are available. The problem is, however, challenging due to complex and dynamic spatio-temporal correlations between locations, uneven distribution of urban events in space, and the difficulty of correctly ranking nearby locations with similar features. Prior works on event forecasting mostly aim at accurately predicting the actual risk score or counts of events for all the locations. Rankings obtained as such usually have low quality due to prediction errors. Learning-to-rank methods directly optimize measures such as Normalized Discounted Cumulative Gain (NDCG), but cannot handle the spatiotemporal autocorrelation existing among locations, due to the common assumption that items are independent. In this paper, we bridge the gap by proposing a novel spatial event ranking approach named SpatialRank. SpatialRank features adaptive graph convolution layers that dynamically learn the spatiotemporal dependencies across locations from data. In addition, the model optimizes through surrogates a hybrid NDCG loss with a spatial component to better rank neighboring spatial locations. We design an importance-sampling algorithm with spatial filtering to effectively evaluate the loss during training. Comprehensive experiments on three real-world datasets demonstrate that SpatialRank can effectively identify the top riskiest locations of crimes and traffic accidents and outperform state-of-the-art methods in terms of NDCG by up to 12.7%.
| null |
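Since the abstract above evaluates rankings with NDCG, here is the standard NDCG@k computation applied to the urban-event setting (locations ranked by predicted risk, gains given by true event counts). This is the conventional metric definition, not the paper's differentiable surrogate loss, and the toy data is illustrative.

```python
import numpy as np

def ndcg_at_k(true_risk, predicted_scores, k=10):
    """NDCG@k: how close the k locations ranked highest by the model are
    to the truly riskiest locations."""
    order = np.argsort(-predicted_scores)[:k]
    discounts = np.log2(np.arange(2, k + 2))
    dcg = ((2.0 ** true_risk[order] - 1) / discounts).sum()
    ideal = np.sort(true_risk)[::-1][:k]
    idcg = ((2.0 ** ideal - 1) / discounts).sum()
    return dcg / idcg

rng = np.random.default_rng(0)
true_risk = rng.poisson(2.0, size=100).astype(float)   # e.g., accident counts per grid cell
scores = true_risk + rng.normal(scale=2.0, size=100)   # noisy model predictions
print(round(ndcg_at_k(true_risk, scores, k=10), 3))
```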
An Information-Theoretic Evaluation of Generative Models in Learning Multi-modal Distributions
|
https://papers.nips.cc/paper_files/paper/2023/hash/1f5c5cd01b864d53cc5fa0a3472e152e-Abstract-Conference.html
|
Mohammad Jalali, Cheuk Ting Li, Farzan Farnia
|
https://papers.nips.cc/paper_files/paper/2023/hash/1f5c5cd01b864d53cc5fa0a3472e152e-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/21499-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1f5c5cd01b864d53cc5fa0a3472e152e-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1f5c5cd01b864d53cc5fa0a3472e152e-Supplemental-Conference.pdf
|
The evaluation of generative models has received significant attention in the machine learning community. When applied to a multi-modal distribution which is common among image datasets, an intuitive evaluation criterion is the number of modes captured by the generative model. While several scores have been proposed to evaluate the quality and diversity of a model's generated data, the correspondence between existing scores and the number of modes in the distribution is unclear. In this work, we propose an information-theoretic diversity evaluation method for multi-modal underlying distributions. We utilize the R\'enyi Kernel Entropy (RKE) as an evaluation score based on quantum information theory to measure the number of modes in generated samples. To interpret the proposed evaluation method, we show that the RKE score can output the number of modes of a mixture of sub-Gaussian components. We also prove estimation error bounds for estimating the RKE score from limited data, suggesting a fast convergence of the empirical RKE score to the score for the underlying data distribution. Utilizing the RKE score, we conduct an extensive evaluation of state-of-the-art generative models over standard image datasets. The numerical results indicate that while the recent algorithms for training generative models manage to improve the mode-based diversity over the earlier architectures, they remain incapable of capturing the full diversity of real data. Our empirical results provide a ranking of widely-used generative models based on the RKE score of their generated samples.
| null |
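The abstract above does not spell out the RKE formula, so the sketch below follows one reading of it: an order-2 Rényi entropy computed from the eigenvalue spectrum of a trace-normalised Gaussian kernel matrix over generated samples, exponentiated to give an effective mode count. Treat the bandwidth, kernel choice, and normalisation as assumptions and consult the paper for the exact estimator.

```python
import numpy as np

def rke_mode_estimate(samples, bandwidth=1.0):
    """Effective mode count from the order-2 Renyi entropy of a normalized
    Gaussian-kernel spectrum. One reading of the abstract, not the paper's
    exact RKE definition."""
    d2 = ((samples[:, None, :] - samples[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * bandwidth ** 2))
    K = K / np.trace(K)                          # eigenvalues now sum to 1
    eigvals = np.linalg.eigvalsh(K)
    entropy = -np.log((eigvals ** 2).sum())      # order-2 Renyi entropy
    return float(np.exp(entropy))                # effective number of modes

rng = np.random.default_rng(0)
two_modes = np.vstack([rng.normal(-5, 0.1, (100, 2)), rng.normal(5, 0.1, (100, 2))])
print(round(rke_mode_estimate(two_modes), 2))    # roughly 2 for two tight, separated clusters
```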
A Cross-Moment Approach for Causal Effect Estimation
|
https://papers.nips.cc/paper_files/paper/2023/hash/1f6100363156cced8633f4e89dd8ceb1-Abstract-Conference.html
|
Yaroslav Kivva, Saber Salehkaleybar, Negar Kiyavash
|
https://papers.nips.cc/paper_files/paper/2023/hash/1f6100363156cced8633f4e89dd8ceb1-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/19727-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1f6100363156cced8633f4e89dd8ceb1-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1f6100363156cced8633f4e89dd8ceb1-Supplemental-Conference.pdf
|
We consider the problem of estimating the causal effect of a treatment on an outcome in linear structural causal models (SCM) with latent confounders when we have access to a single proxy variable. Several methods (such as the difference-in-difference (DiD) estimator or negative outcome control) have been proposed in this setting in the literature. However, these approaches require either restrictive assumptions on the data generating model or having access to at least two proxy variables. We propose a method to estimate the causal effect using cross moments between the treatment, the outcome, and the proxy variable. In particular, we show that the causal effect can be identified with simple arithmetic operations on the cross moments if the latent confounder in linear SCM is non-Gaussian. In this setting, the DiD estimator provides an unbiased estimate only in the special case where the latent confounder has exactly the same direct causal effects on the outcomes in the pre-treatment and post-treatment phases. This translates to the common trend assumption in DiD, which we effectively relax. Additionally, we provide an impossibility result that shows the causal effect cannot be identified if the observational distribution over the treatment, the outcome, and the proxy is jointly Gaussian. Our experiments on both synthetic and real-world datasets showcase the effectiveness of the proposed approach in estimating the causal effect.
| null |
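To ground the setting described in the abstract above, the sketch below simulates a linear SCM with a non-Gaussian (exponential) latent confounder, a single proxy, a treatment, and an outcome, and shows how a naive regression slope is biased. It only generates data; the paper's cross-moment identification formula is not reproduced here, and all coefficients are arbitrary illustrative values.

```python
import numpy as np

def simulate_linear_scm(n=100_000, effect=2.0, seed=0):
    """Linear SCM with a non-Gaussian latent confounder U, a single proxy W,
    a treatment T, and an outcome Y. Data generation only; the cross-moment
    estimator itself is in the paper."""
    rng = np.random.default_rng(seed)
    U = rng.exponential(1.0, n)                       # latent confounder, non-Gaussian
    W = 1.5 * U + rng.normal(0, 1, n)                 # proxy variable
    T = 0.8 * U + rng.normal(0, 1, n)                 # treatment, confounded by U
    Y = effect * T + 1.2 * U + rng.normal(0, 1, n)    # outcome; true effect = 2.0
    return T, Y, W

T, Y, W = simulate_linear_scm()
naive = np.cov(T, Y)[0, 1] / np.var(T)                # confounded slope, biased above 2.0
print(round(naive, 3))
```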
Combining Behaviors with the Successor Features Keyboard
|
https://papers.nips.cc/paper_files/paper/2023/hash/1f69928210578f4cf5b538a8c8806798-Abstract-Conference.html
|
Wilka Carvalho Carvalho, Andre Saraiva, Angelos Filos, Andrew Lampinen, Loic Matthey, Richard L Lewis, Honglak Lee, Satinder Singh, Danilo Jimenez Rezende, Daniel Zoran
|
https://papers.nips.cc/paper_files/paper/2023/hash/1f69928210578f4cf5b538a8c8806798-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/20981-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1f69928210578f4cf5b538a8c8806798-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1f69928210578f4cf5b538a8c8806798-Supplemental-Conference.zip
|
The Option Keyboard (OK) was recently proposed as a method for transferring behavioral knowledge across tasks. OK transfers knowledge by adaptively combining subsets of known behaviors using Successor Features (SFs) and Generalized Policy Improvement (GPI). However, it relies on hand-designed state-features and task encodings which are cumbersome to design for every new environment. In this work, we propose the "Successor Features Keyboard" (SFK), which enables transfer with discovered state-features and task encodings. To enable discovery, we propose the "Categorical Successor Feature Approximator" (CSFA), a novel learning algorithm for estimating SFs while jointly discovering state-features and task encodings. With SFK and CSFA, we achieve the first demonstration of transfer with SFs in a challenging 3D environment where all the necessary representations are discovered. We first compare CSFA against other methods for approximating SFs and show that only CSFA discovers representations compatible with SF&GPI at this scale. We then compare SFK against transfer learning baselines and show that it transfers most quickly to long-horizon tasks.
| null |
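The SF&GPI combination mentioned in the abstract above rests on a simple rule: evaluate each known behaviour's successor features against the new task's encoding and act greedily with respect to the best of them. A minimal sketch follows; the successor features and task vector here are random placeholders rather than learned quantities.

```python
import numpy as np

def gpi_action(psi, task_w):
    """Generalized Policy Improvement with successor features:
    Q_i(s, a) = psi[i, a] . task_w; act greedily w.r.t. the max over policies i."""
    # psi: (n_policies, n_actions, d) successor features at the current state
    # task_w: (d,) encoding of the task to solve now
    q = psi @ task_w                  # (n_policies, n_actions)
    return int(q.max(axis=0).argmax())

rng = np.random.default_rng(0)
psi = rng.normal(size=(3, 4, 8))      # 3 known behaviours, 4 actions, 8 features
task_w = rng.normal(size=8)
print(gpi_action(psi, task_w))
```

The paper's contribution is discovering the state-features behind psi and the task encodings w rather than hand-designing them; the combination rule itself stays as above.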
Rethinking Semi-Supervised Medical Image Segmentation: A Variance-Reduction Perspective
|
https://papers.nips.cc/paper_files/paper/2023/hash/1f7e6d5c84b0ed286d0e69b7d2c79b47-Abstract-Conference.html
|
Chenyu You, Weicheng Dai, Yifei Min, Fenglin Liu, David Clifton, S. Kevin Zhou, Lawrence Staib, James Duncan
|
https://papers.nips.cc/paper_files/paper/2023/hash/1f7e6d5c84b0ed286d0e69b7d2c79b47-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/20885-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1f7e6d5c84b0ed286d0e69b7d2c79b47-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1f7e6d5c84b0ed286d0e69b7d2c79b47-Supplemental-Conference.pdf
|
For medical image segmentation, contrastive learning is the dominant practice to improve the quality of visual representations by contrasting semantically similar and dissimilar pairs of samples. This is enabled by the observation that without accessing ground truth labels, negative examples with truly dissimilar anatomical features, if sampled, can significantly improve the performance. In reality, however, these samples may come from similar anatomical features and the models may struggle to distinguish the minority tail-class samples, making the tail classes more prone to misclassification, both of which typically lead to model collapse. In this paper, we propose $\texttt{ARCO}$, a semi-supervised contrastive learning (CL) framework with stratified group theory for medical image segmentation. In particular, we first propose building $\texttt{ARCO}$ through the concept of variance-reduced estimation, and show that certain variance-reduction techniques are particularly beneficial in pixel/voxel-level segmentation tasks with extremely limited labels. Furthermore, we theoretically prove these sampling techniques are universal in variance reduction. Finally, we experimentally validate our approaches on eight benchmarks, i.e., five 2D/3D medical and three semantic segmentation datasets, with different label settings, and our methods consistently outperform state-of-the-art semi-supervised methods. Additionally, we augment the CL frameworks with these sampling techniques and demonstrate significant gains over previous methods. We believe our work is an important step towards semi-supervised medical image segmentation by quantifying the limitation of current self-supervision objectives for accomplishing such challenging safety-critical tasks.
| null |
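The abstract above attributes ARCO's gains to variance-reduced sampling of pixel/voxel examples. As a generic illustration of one variance-reducing idea, the sketch below draws an equal number of pixel indices per pseudo-label group (plain stratified sampling); it is not ARCO's actual estimator, and the group definition and sample sizes are assumptions.

```python
import numpy as np

def stratified_sample(pseudo_labels, per_group=8, rng=None):
    """Sample an equal number of pixel indices from each pseudo-label group,
    a basic stratified (variance-reducing) alternative to uniform sampling."""
    rng = rng or np.random.default_rng()
    picks = []
    for g in np.unique(pseudo_labels):
        idx = np.flatnonzero(pseudo_labels == g)
        picks.append(rng.choice(idx, size=min(per_group, len(idx)), replace=False))
    return np.concatenate(picks)

pseudo_labels = np.random.default_rng(0).integers(0, 4, size=10_000)  # 4 pixel classes
print(stratified_sample(pseudo_labels, per_group=4))
```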
Boosting Verification of Deep Reinforcement Learning via Piece-Wise Linear Decision Neural Networks
|
https://papers.nips.cc/paper_files/paper/2023/hash/1f96b24df4b06f5d68389845a9a13ed9-Abstract-Conference.html
|
Jiaxu Tian, Dapeng Zhi, Si Liu, Peixin Wang, Cheng Chen, Min Zhang
|
https://papers.nips.cc/paper_files/paper/2023/hash/1f96b24df4b06f5d68389845a9a13ed9-Abstract-Conference.html
|
NIPS 2023
|
https://papers.nips.cc/paper_files/paper/22135-/bibtex
|
https://papers.nips.cc/paper_files/paper/2023/file/1f96b24df4b06f5d68389845a9a13ed9-Paper-Conference.pdf
|
https://papers.nips.cc/paper_files/paper/2023/file/1f96b24df4b06f5d68389845a9a13ed9-Supplemental-Conference.zip
|
Formally verifying deep reinforcement learning (DRL) systems suffers from both inaccurate verification results and limited scalability. The major obstacle lies in the large overestimation introduced inherently during training and when transforming the inexplicable decision-making models, i.e., deep neural networks (DNNs), into easy-to-verify models. In this paper, we propose an inverse transform-then-train approach, which first encodes a DNN into an equivalent set of efficiently and tightly verifiable linear control policies and then optimizes them via reinforcement learning. We accompany our inverse approach with a novel neural network model called piece-wise linear decision neural networks (PLDNNs), which are compatible with most existing DRL training algorithms with comparable performance against conventional DNNs. Our extensive experiments show that, compared to DNN-based DRL systems, PLDNN-based systems can be more efficiently and tightly verified with up to $438$ times speedup and a significant reduction in overestimation. In particular, even a complex $12$-dimensional DRL system is efficiently verified with up to 7 times deeper computation steps.
| null |
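The abstract above describes encoding a policy as a set of tightly verifiable linear controllers. The sketch below shows the general idea of a piece-wise linear policy (a region rule selecting one linear map per state); it is a hedged illustration of why such policies are verification-friendly, not the paper's PLDNN architecture or training procedure, and the regions and gains are made up.

```python
import numpy as np

def piecewise_linear_policy(state, gains, biases, region_of):
    """Pick a linear controller u = K_i @ s + b_i based on which region the
    state falls into. Each piece being an affine map is what keeps the
    closed-loop system amenable to tight reachability-style verification."""
    i = region_of(state)
    return gains[i] @ state + biases[i]

# Two illustrative regions split on the sign of the first state coordinate.
gains = [np.array([[1.0, 0.0], [0.0, 0.5]]), np.array([[-0.5, 0.2], [0.1, 1.0]])]
biases = [np.zeros(2), np.ones(2) * 0.1]
region_of = lambda s: 0 if s[0] >= 0 else 1
print(piecewise_linear_policy(np.array([0.3, -1.0]), gains, biases, region_of))
```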