| venue | paper_openreview_id | title | abstract | paper_decision | paper_pdf_link |
|---|---|---|---|---|---|
ICLR.cc/2025/Conference
|
Dem5LyVk8R
|
Efficient Policy Evaluation with Safety Constraint for Reinforcement Learning
|
In reinforcement learning, classic on-policy evaluation methods often suffer from high variance and require massive online data to attain the desired accuracy. Previous studies attempt to reduce evaluation variance by searching for or designing proper behavior policies to collect data. However, these approaches ignore the safety of such behavior policies---the designed behavior policies have no safety guarantee and may lead to severe damage during online executions. In this paper, to address the challenge of reducing variance while ensuring safety simultaneously, we propose an optimal variance-minimizing behavior policy under safety constraints. Theoretically, while ensuring safety constraints, our evaluation method is unbiased and has lower variance than on-policy evaluation. Empirically, our method is the only existing method to achieve both substantial variance reduction and safety constraint satisfaction. Furthermore, we show our method is even superior to previous methods in both variance reduction and execution safety.
|
ICLR 2025 Poster
|
/pdf/3940db9e28d831ea16e7e2c68d05d240086b854c.pdf
|
ICLR.cc/2025/Conference
|
oWy06SBgt4
|
1-Bit FQT: Pushing the Limit of Fully Quantized Training to 1-bit
|
Fully quantized training (FQT) accelerates the training of deep neural networks by quantizing the activations, weights, and gradients into lower precision. To explore the ultimate limit of FQT (the lowest achievable precision), we make a first attempt to 1-bit FQT. We provide a theoretical analysis of FQT based on Adam and SGD, revealing that the gradient variance influences the convergence of FQT. Building on these theoretical results, we introduce an Average 1-bit Quantization (AQ) strategy. The strategy leverages the heterogeneity of gradients to mitigate gradient variance by pruning less informative gradients and enhancing the numerical precision of remaining gradients. Additionally, we propose Sample Channel joint Quantization (SCQ), which utilizes different quantization strategies in the computation of weight gradients and activation gradients to ensure that the method is friendly to low-bitwidth hardware. Finally, we present a framework to deploy our algorithm. For fine-tuning VGGNet-16 and ResNet-18 on multiple datasets, our algorithm achieves an average accuracy improvement of approximately 6\%, compared to per-sample quantization. Moreover, our training speedup can reach a maximum of 5.13× compared to full precision training.
|
Rejected_Submission
|
/pdf/00c7b6fdd5e84938921faccb0cadd118ddc5b985.pdf
|
ICLR.cc/2025/Conference
|
d465apqCqc
|
BA-LoRA: Bias-Alleviating Low-Rank Adaptation to Mitigate Catastrophic Inheritance in Large Language Models
|
Large language models (LLMs) have demonstrated remarkable proficiency across various natural language processing (NLP) tasks. However, adapting LLMs to downstream applications requires computationally intensive and memory-demanding fine-tuning procedures. To alleviate these burdens, parameter-efficient fine-tuning (PEFT) techniques have emerged as a promising approach to tailor LLMs with minimal computational overhead. While PEFT methods offer substantial advantages, they do not fully address the pervasive issue of bias propagation from pre-training data. This work introduces Bias-Alleviating Low-Rank Adaptation (BA-LoRA), a novel PEFT method designed to counteract bias inheritance. BA-LoRA incorporates three distinct regularization terms: (1) a consistency regularizer, (2) a diversity regularizer, and (3) a singular value decomposition regularizer. These regularizers aim to enhance the models' consistency, diversity, and generalization capabilities during fine-tuning. We conduct extensive experiments on natural language understanding (NLU) and natural language generation (NLG) tasks using prominent LLMs such as LLaMA, Mistral, and Gemma. The results demonstrate that BA-LoRA outperforms LoRA and its state-of-the-art variants. Moreover, our method effectively mitigates the adverse effects of pre-training bias, leading to more reliable and robust model outputs.
|
Rejected_Submission
|
/pdf/c7d1f800ba1c780d7db12d768ba6e8dae656976b.pdf
|
ICLR.cc/2024/Conference
|
Kuj5gVp5GQ
|
Accelerating Sinkhorn algorithm with sparse Newton iterations
|
Computing the optimal transport distance between statistical distributions is a fundamental task in machine learning. One remarkable recent advancement is entropic regularization and the Sinkhorn algorithm, which utilizes only matrix scaling and guarantees an approximated solution with near-linear runtime. Despite the success of the Sinkhorn algorithm, its runtime may still be slow due to the potentially large number of iterations needed for convergence. To achieve possibly super-exponential convergence, we introduce Sinkhorn-Newton-Sparse (SNS), an extension to the Sinkhorn algorithm, by introducing early stopping for the matrix scaling steps and a second stage featuring a Newton-type subroutine. Adopting the variational viewpoint that the Sinkhorn algorithm maximizes a concave Lyapunov potential, we offer the insight that the Hessian matrix of the potential function is approximately sparse. Sparsification of the Hessian results in a fast $O(n^2)$ per-iteration complexity, the same as the Sinkhorn algorithm. In terms of total iteration count, we observe that the SNS algorithm converges orders of magnitude faster across a wide range of practical cases, including optimal transportation between empirical distributions and calculating the Wasserstein $W_1, W_2$ distance of discretized continuous densities. The empirical performance is corroborated by a rigorous bound on the approximate sparsity of the Hessian matrix.
|
ICLR 2024 poster
|
/pdf/ad0c50781fe7c375ece5d9a170b96bf3e63272b4.pdf
|
ICLR.cc/2024/Conference
|
wk77w7DG1N
|
Evaluating and Improving Generation Consistency of Large Language Models via A Divide-Conquer-Reasoning Approach
|
Evaluating the quality and variability of text generated by Large Language Models (LLMs) poses a significant, yet unresolved research challenge. Traditional evaluation methods, such as ROUGE and BERTScore, which measure token similarity, often fail to capture the holistic semantic equivalence. This results in a low correlation with human judgments and intuition, which is especially problematic in high-stakes applications like healthcare and finance where reliability, safety, and robust decision-making are highly critical. This work proposes an automated framework for evaluating the consistency of LLM-generated texts using a divide-and-conquer strategy. Unlike existing LLM-based evaluators that operate at the paragraph level, our method employs a divide-and-conquer evaluator (DCE) that breaks down the comparison between two generated responses into individual sentences, each evaluated based on predefined criteria. To facilitate this approach, we introduce an automatic metric converter (AMC) that translates the output from DCE into an interpretable numeric score. Beyond the consistency evaluation, we further present a reason-assisted improver (RAI) that leverages the analytical reasons with explanations identified by DCE to generate new responses aimed at reducing these inconsistencies. Through comprehensive and systematic empirical analysis, we show that our approach outperforms state-of-the-art methods by a large margin (e.g., +19.3\% and +24.3\% on the SummEval dataset) in evaluating the consistency of LLM generation across multiple benchmarks in semantic, factual, and summarization consistency tasks. Our approach also substantially reduces nearly 90\% output inconsistencies, showing promise for effective hallucination mitigation and reduction.
|
Rejected_Submission
|
/pdf/ec446bfe830974126bf8c579f0befbf76aa519f2.pdf
|
ICLR.cc/2025/Conference
|
tBom4xOW1H
|
Adversarial Generative Flow Network for Solving Vehicle Routing Problems
|
Recent research into solving vehicle routing problems (VRPs) has gained significant traction, particularly through the application of deep (reinforcement) learning for end-to-end solution construction. However, many current construction-based neural solvers predominantly utilize Transformer architectures, which can face scalability challenges and struggle to produce diverse solutions. To address these limitations, we introduce a novel framework beyond Transformer-based approaches, i.e., Adversarial Generative Flow Networks (AGFN). This framework integrates the generative flow network (GFlowNet)—a probabilistic model inherently adept at generating diverse solutions (routes)—with a complementary model for discriminating (or evaluating) the solutions. These models are trained alternately in an adversarial manner to improve the overall solution quality, followed by a proposed hybrid decoding method to construct the solution. We apply the AGFN framework to solve the capacitated vehicle routing problem (CVRP) and travelling salesman problem (TSP), and our experimental results demonstrate that AGFN surpasses the popular construction-based neural solvers, showcasing strong generalization capabilities on synthetic and real-world benchmark instances.
|
ICLR 2025 Poster
|
/pdf/7ed63f33b382b506946c4ff7ec299ff90023f804.pdf
|
ICLR.cc/2025/Conference
|
kO0DgO07hW
|
Semantic Loss Guided Data Efficient Supervised Fine Tuning for Safe Responses in LLMs
|
Large Language Models (LLMs) generating unsafe responses to toxic prompts is a significant issue in their applications. While various efforts aim to address this safety concern, previous approaches often demand substantial human data collection or rely on the less dependable option of using another LLM to generate corrective data. In this paper, we address this problem and overcome the limitation of requiring large amounts of high-quality human data. Our method requires only a small set of unsafe responses to toxic prompts, easily obtained from the unsafe LLM itself. By employing a semantic cost combined with a negative Earth Mover Distance (EMD) loss, we guide the LLM away from generating unsafe responses. Additionally, we propose a novel lower bound for the EMD loss, enabling more efficient optimization. Our results demonstrate superior performance and data efficiency compared to baselines, and we further examine the nuanced effects of over-alignment and potential degradation of language capabilities when using contrastive data.
|
ICLR 2025 Poster
|
/pdf/423703df9e3cce20127a48400d60d498f821fb53.pdf
|
ICLR.cc/2025/Conference
|
o1Et3MogPw
|
Internet of Agents: Weaving a Web of Heterogeneous Agents for Collaborative Intelligence
|
The rapid advancement of large language models (LLMs) has paved the way for the development of highly capable autonomous agents. However, existing multi-agent frameworks often struggle with integrating diverse capable third-party agents due to reliance on agents defined within their own ecosystems. They also face challenges in simulating distributed environments, as most frameworks are limited to single-device setups. Furthermore, these frameworks often rely on hard-coded communication pipelines, limiting their adaptability to dynamic task requirements. Inspired by the concept of the Internet, we propose the Internet of Agents (IoA), a novel framework that addresses these limitations by providing a flexible and scalable platform for LLM-based multi-agent collaboration. IoA introduces an agent integration protocol, an instant-messaging-like architecture design, and dynamic mechanisms for agent teaming and conversation flow control. Through extensive experiments on general assistant tasks, embodied AI tasks, and retrieval-augmented generation benchmarks, we demonstrate that IoA consistently outperforms state-of-the-art baselines, showcasing its ability to facilitate effective collaboration among heterogeneous agents. IoA represents a step towards linking diverse agents in an Internet-like environment, where agents can seamlessly collaborate to achieve greater intelligence and capabilities. We will release our code to facilitate further research.
|
ICLR 2025 Spotlight
|
/pdf/1006483e763807a740f78d0096898fc8d8a8424b.pdf
|
ICLR.cc/2024/Conference
|
vfEqSWpMfj
|
Word Importance Explains How Prompts Affect Language Model Outputs
|
The emergence of large language models has revolutionized numerous applications across industries. However, their ``black box'' nature often hinders the understanding of how they make specific decisions, raising concerns about their transparency, reliability, and ethical use. This study presents a method to improve the explainability of LLMs by varying prompt words to uncover their statistical impact on the model outputs. This approach, inspired by permutation importance for tabular data, masks each word in the system prompt and evaluates its effect on the outputs based on the available text scores aggregated over multiple user inputs. Unlike classical attention, word importance measures the impact of prompt words on arbitrarily-defined text scores, which enables decomposing the importance of words into the specific measures of interest--including bias, reading level, verbosity, etc. This procedure also enables measuring impact when attention is not available. To test the fidelity of this approach, we explore the effect of adding different suffixes to multiple different system prompts and comparing subsequent generations with GPT-3.5 Turbo. Results show that word importance scores are closely related to the expected suffix importances for multiple scoring functions. Finally, we share a Python project for computing these scores and discuss how it could assist developing generative AI use-cases in different industry applications.
|
Rejected_Submission
|
/pdf/fee35c412a11456f2bb46a89def348588fcfe956.pdf
|
ICLR.cc/2024/Conference
|
Giwj9cgAIl
|
Mechanistic Neural Networks
|
We present Mechanistic Neural Networks, a new neural module that represents the evolution of its input data in the form of explicit differential equations. Similar to regular neural networks, Mechanistic Neural Networks $F(x)$ receive as input system observations $x$, \emph{e.g.} $n$-body trajectories or fluid dynamics recordings. However, unlike regular neural network modules that return vector-valued outputs, mechanistic neural networks output (the parameters of) a \emph{mechanism} $\mathcal{U}_x=F(x)$ in the form of an explicit symbolic ordinary differential equation $\mathcal{U}_x$ (and not the numerical solution of the differential equation), which can be solved in the forward pass to solve arbitrary tasks, supervised and unsupervised. By providing explicit equations as part of multi-layer architectures, they differ from Neural ODEs, UDEs, and symbolic regression methods like SINDy. To learn explicit differential equations as representations, Mechanistic Neural Networks employ a new parallel and differentiable ODE solver design that (i) is able to solve large batches of independent ODEs in parallel on GPU, (ii) does so for hundreds of steps at once, and (iii) uses \emph{learnable} step sizes. The new solver overcomes the limitations of traditional ODE solvers that proceed sequentially and do not scale to large numbers of independent ODEs. Mechanistic Neural Networks can be employed in diverse settings, including governing equation discovery, prediction for dynamical systems, and PDE solving, and yield competitive or state-of-the-art results.
|
Rejected_Submission
|
/pdf/bce8d87c46286a90e756af406e60606e58f2a368.pdf
|
ICLR.cc/2025/Conference
|
6ADnEk90R2
|
CoMMIT: Coordinated Instruction Tuning for Multimodal Large Language Models
|
Instruction tuning in multimodal large language models (MLLMs) generally involves smooth integration of a backbone LLM and a feature encoder that has non-text input modalities. The major challenge is how to efficiently find the synergy through cooperative learning, so that LLMs can adapt their reasoning abilities in downstream tasks while feature encoders can adjust to provide more relevant modality-specific information. In this paper, we analyze MLLM instruction tuning from both theoretical and empirical perspectives, where we find that unbalanced learning between the two modules, i.e., the feature encoder and the LLM, can cause problems of oscillation learning and insufficient training with diminishing learning gradients. Inspired by our findings, we propose a Multimodal Balance Coefficient that enables quantitative measurement of the learning balance. Based on this, we further design a dynamic learning scheduler that better coordinates the learning between the LLM and feature encoder, alleviating the oscillation and insufficient training. In addition, we introduce an auxiliary regularization on the gradient to promote updating with larger step sizes, which potentially enables a more accurate estimation of the learning balance coefficient and further improves the training sufficiency. Our techniques are agnostic to the architecture of the LLM and feature encoder, so they can be generically integrated with various MLLMs. Experimental results on multiple downstream tasks and modalities in vision and audio demonstrate the proposed method's better efficiency and effectiveness in MLLM instruction tuning.
|
Rejected_Submission
|
/pdf/ab24d4a199d32cb3dfc336d26914f3112511bbd0.pdf
|
ICLR.cc/2025/Conference
|
hKcDOfDxgn
|
Brain-Like Replay Naturally Emerges in Reinforcement Learning Agents
|
Replay is a powerful strategy to promote learning in artificial intelligence and the brain. However, the conditions required to generate it and its functional advantages have not been fully recognized. In this study, we develop a modular reinforcement learning model that can generate replay. We prove that replay generated in this way helps complete the task. We also analyze the information contained in the representation and provide a mechanism for how replay makes a difference. Our design avoids complex assumptions and enables replay to emerge naturally within a task-optimized paradigm. Our model also reproduces key phenomena observed in biological agents. This research explores the structural biases in modular ANNs that give rise to replay and its potential utility in developing efficient RL.
|
Rejected_Submission
|
/pdf/eb6c87a8b2a8b89d29c331fe65cb7bcfcf83dfaf.pdf
|
ICLR.cc/2025/Conference
|
0a7TRHhhcS
|
Preference-Driven Spatial-Temporal Counting Process Models
|
Traditional spatial-temporal models often overlook the complex decision-making processes and social factors that shape spatial-temporal event data generated by humans. This paper introduces a novel framework that integrates choice theory with social intelligence to model and analyze counting processes, such as crime occurrences or bike-sharing activity, where the observed discrete events result from individual decisions influenced by social dynamics.
Our approach aims to uncover latent human preference patterns, represented by utility functions, to capture the diverse decision-making factors within a population that result in the observed event counts. These latent factors help explain how choices—such as where and when to commit a crime—are shaped by personal preferences, environmental conditions, and social influences. By modeling the aggregate outcomes of these individual choices, we can better understand and predict patterns in counting processes. The proposed model adopts a preference-driven approach to counting data, providing interpretable insights at a detailed level. It also enables in-depth analysis of how external interventions, like law enforcement actions or policy changes, influence individual decisions and how these effects spread through the system. Empirical evaluation of crime and bike-sharing datasets demonstrates our model's ability to offer clear insights and achieve high predictive accuracy.
|
Rejected_Submission
|
/pdf/86613e09ce211fc3be0554c2bbb0c4a3a40b275d.pdf
|
ICLR.cc/2024/Conference
|
Nshk5YpdWE
|
Lagrangian Flow Networks for Conservation Laws
|
We introduce Lagrangian Flow Networks (LFlows) for modeling fluid densities and velocities continuously in space and time. By construction, the proposed LFlows satisfy the continuity equation, a PDE describing mass conservation in its differential form. Our model is based on the insight that solutions to the continuity equation can be expressed as time-dependent density transformations via differentiable and invertible maps. This follows from classical theory of the existence and uniqueness of Lagrangian flows for smooth vector fields. Hence, we model fluid densities by transforming a base density with parameterized diffeomorphisms conditioned on time. The key benefit compared to methods relying on numerical ODE solvers or PINNs is that the analytic expression of the velocity is always consistent with changes in density. Furthermore, we require neither expensive numerical solvers, nor additional penalties to enforce the PDE. LFlows show higher predictive accuracy in density modeling tasks compared to competing models in 2D and 3D, while being computationally efficient. As a real-world application, we model bird migration based on sparse weather radar measurements.
|
ICLR 2024 spotlight
|
/pdf/53607a671ffb6900fcf4870e6bd3c5866146e9b4.pdf
|
ICLR.cc/2025/Conference
|
zwuemuTiN8
|
TACD-GRU: Time-Aware Context-Dependent Autoregressive Model for Irregularly Sampled Time Series
|
Multivariate time series data and their models are extremely important for understanding the behavior of various natural and man-made systems. Development of accurate time series models often requires capturing intricate relationships among the variables and their dynamics. Particularly challenging to model and learn are time series with irregular and sparse observations, that may arise in domains as diverse as healthcare, sensor and communication networks. In this work, we propose and study TACD-GRU, a new Time-Aware Context-Dependent Gated Recurrent Unit framework for multivariate time series prediction (or forecasting) that accounts for irregularities in observation times of individual time series variables and their dependencies. Our framework defines a novel sequential unit that is triggered by the arrival of a new observation to update its state, and a prediction module that supports time series predictions at any future time. The current prediction module consists of and combines two novel prediction models: (i) a context-based model (TACD-GRU-CONTEXT) that relies on a set of tunable latent decay functions of time and their linear combinations to support the prediction, and (ii) an attention-based model (TACD-GRU-ATTENTION) that models dependencies among variables and their most recent values using a temporal attention mechanism. Our model shows highly competitive performance when powered by both individual and combined prediction functions, outperforming existing state-of-the-art (SOTA) models on both single-step and multi-step prediction tasks across three real-world datasets.
|
Rejected_Submission
|
/pdf/1ae992f3a1df7aaf063a4bb9989d6e2a10edaca0.pdf
|
ICLR.cc/2025/Conference
|
Dl5JaX7zoN
|
UrbanPlanBench: A Comprehensive Assessment of Urban Planning Abilities in Large Language Models
|
Urban planning is a professional discipline that shapes our daily surroundings, which demands multifaceted domain knowledge and relies heavily on human expertise. The advent of Large Language Models (LLMs) holds promise for revolutionizing such a field by the pre-trained world knowledge. However, the extent to which these models can assist human practitioners remains largely unexplored. In this paper, we introduce a comprehensive benchmark, PlanBench, tailored to evaluate the efficacy of LLMs in urban planning, which encompasses fundamental principles, professional knowledge, and management and regulations, aligning closely with the qualifications expected of human planners. Through extensive evaluation, we reveal a significant imbalance in the acquisition of planning knowledge among LLMs, with even the most proficient models falling short of meeting professional standards. For instance, we observe that 70% of LLMs achieve subpar performance in understanding planning regulations compared to other aspects. Besides the benchmark, we present the largest-ever supervised fine-tuning (SFT) dataset, PlanText, for LLMs in urban planning, comprising over 30,000 instruction pairs sourced from urban planning exams and textbooks. Our findings demonstrate that fine-tuned models exhibit enhanced performance in memorization tests and comprehension of urban planning knowledge, while there exists significant room for improvement, particularly in tasks requiring domain-specific terminology and reasoning. Our benchmark, dataset, and associated evaluation and fine-tuning toolsets aim to catalyze the integration of LLMs into practical urban computing, fostering a symbiotic relationship between human expertise and machine intelligence.
|
Rejected_Submission
|
/pdf/ef7b2769876065e7414d9adf92f94c5a53f37e0a.pdf
|
ICLR.cc/2025/Conference
|
eJVrwDE086
|
Layer-Wise Quantization: A Pragmatic and Effective Method for Quantizing LLMs
|
We present a simple meta quantization approach that quantizes different layers of a large language model (LLM) at different bit levels, and is independent of the underlying quantization technique. Specifically, we quantize the most important layers to higher bit precision and less important layers to lower bits. We propose two effective strategies to measure the importance of layers within LLMs: the first measures the importance of a layer based on how different its output embeddings are from the input embeddings (higher is better); the second estimates the importance of a layer using the number of layer weights that are much larger than average (smaller is better). We show that quantizing different layers at varying bits according to our importance scores results in minimal performance drop with a far more compressed model size. Finally, we present several practical key takeaways from our variable layer-wise quantization experiments: (a) LLM performance under variable quantization remains close to the original model until 25–50% of layers are moved in lower quantization using our proposed ordering but only until 5–10% if moved using no specific ordering; (b) Adding layer importance to inherently dynamic quantization techniques can further improve their performance, showing that our approach is complementary to other dynamic quantization methods; (c) Quantizing LLMs to lower bits performs substantially better than pruning unless extreme quantization (2-bit) is used; and (d) Layer-wise quantization to lower bits works better in the case of larger LLMs with more layers compared to smaller LLMs with fewer layers.
|
Desk_Rejected_Submission
|
/pdf/b1862d650ad71eab1fb88cf708a02617952cbc33.pdf
|
ICLR.cc/2024/Conference
|
tDuQNUQN6q
|
Algorithm and Hardness for Dynamic Attention Maintenance in Large Language Models
|
Large language models (LLMs) have made fundamental changes in human life. The attention scheme is one of the key components across all LLMs, such as BERT, GPT-1, Transformers, GPT-2, 3, 3.5 and 4. Inspired by previous theoretical studies of the static version of the attention multiplication problem [Zandieh, Han, Daliri, and Karbasi ICML 2023; Alman and Song NeurIPS 2023], in this work we formally define a dynamic version of the attention matrix multiplication problem.
There are matrices $Q, K, V \in \mathbb{R}^{n \times d}$, which represent the query, key and value in LLMs. In each iteration we update one entry in $K$ or $V$. In the query stage, we receive $(i,j) \in [n] \times [d]$ as input, and want to answer $(D^{-1} A V)_{i,j}$, where $A:=\exp(QK^\top) \in \mathbb{R}^{n \times n}$ is a square matrix and $D := \mathrm{diag}(A {\bf 1}_n) \in \mathbb{R}^{n \times n}$ is a diagonal matrix. Here ${\bf 1}_n$ denotes the length-$n$ vector whose entries are all ones.
We provide two results: an algorithm and a conditional lower bound.
$\bullet$ On one hand, inspired by the lazy update idea from [Demetrescu and Italiano FOCS 2000, Sankowski FOCS 2004, Cohen, Lee and Song STOC 2019, Brand SODA 2020], we provide a data-structure that uses $O(n^{\omega(1,1,\tau)-\tau})$ amortized update time,
and $O(n^{1+\tau})$ worst-case query time.
$\bullet$ On the other hand, we show that unless the hinted matrix vector multiplication conjecture [Brand, Nanongkai and Saranurak FOCS 2019] is false, there is no algorithm that can use both $O(n^{\omega(1,1,\tau) - \tau - \Omega(1)})$ amortized update time and $O(n^{1+\tau-\Omega(1)})$ worst-case query time.
In conclusion, our algorithmic result is conditionally optimal unless the hinted matrix vector multiplication conjecture is false.
One notable difference between prior work [Alman and Song NeurIPS 2023] and our work is that their techniques are from the area of fine-grained complexity, while ours are not. Our algorithmic techniques are from recent work in convex optimization, e.g., solving linear programming. Our hardness techniques are from the area of dynamic algorithms.
|
Rejected_Submission
|
/pdf/79a01cd48d805baca99330b788b4f72d3c93cdf8.pdf
|
ICLR.cc/2018/Conference
|
HJaDJZ-0W
|
Block-Sparse Recurrent Neural Networks
|
Recurrent Neural Networks (RNNs) are used in state-of-the-art models in domains such as speech recognition, machine translation, and language modelling. Sparsity is a technique to reduce compute and memory requirements of deep learning models. Sparse RNNs are easier to deploy on devices and high-end server processors. Even though sparse operations need less compute and memory relative to their dense counterparts, the speed-up observed by using sparse operations is less than expected on different hardware platforms. In order to address this issue, we investigate two different approaches to induce block sparsity in RNNs: pruning blocks of weights in a layer and using group lasso regularization with pruning to create blocks of weights with zeros. Using these techniques, we can create block-sparse RNNs with sparsity ranging from 80% to 90% with a small loss in accuracy. This technique allows us to reduce the model size by roughly 10x. Additionally, we can prune a larger dense network to recover this loss in accuracy while maintaining high block sparsity and reducing the overall parameter count. Our technique works with a variety of block sizes up to 32x32. Block-sparse RNNs eliminate overheads related to data storage and irregular memory accesses while increasing hardware efficiency compared to unstructured sparsity.
|
Reject
|
/pdf/4f0dc412fb2b26acbd716e503d71e6ab1428f749.pdf
|
ICLR.cc/2025/Conference
|
v593OaNePQ
|
Learning to Search from Demonstration Sequences
|
Search and planning are essential for solving many real-world problems. However, in numerous learning scenarios, only action-observation sequences, such as demonstrations or instruction sequences, are available for learning. Relying solely on supervised learning with these sequences can lead to sub-optimal performance due to the vast, unseen search space encountered during training. In this paper, we introduce Differentiable Tree Search Network (D-TSN), a novel neural network architecture that learns to construct search trees from just sequences of demonstrations by performing gradient descent on a best-first search tree construction algorithm. D-TSN enables the joint learning of submodules, including an encoder, value function, and world model, which are essential for planning. To construct the search tree, we employ a stochastic tree expansion policy and formulate it as another decision-making task. Then, we optimize the tree expansion policy via REINFORCE with an effective variance reduction technique for the gradient computation. D-TSN can be applied to problems with a known world model or to scenarios where it needs to jointly learn a world model with a latent state space. We study problems from these two scenarios, including Game of 24, 2D grid navigation, and Procgen games, to understand when D-TSN is more helpful. Through our experiments, we show that D-TSN is effective, especially when the world model with a latent state space is jointly learned. The code is available at https://github.com/dixantmittal/differentiable-tree-search-network.
|
ICLR 2025 Oral
|
/pdf/0035d08afe5262daf4fe1ee8c8c2d52957d3dfd3.pdf
|
ICLR.cc/2025/Conference
|
IbCvnpJ4py
|
RoFt-Mol: Benchmarking Robust Fine-tuning with Molecular Graph Foundation Models
|
In the era of foundation models, fine-tuning pre-trained models for specific downstream tasks has become crucial. This drives the need for robust fine-tuning methods to address challenges such as model overfitting and sparse labeling. Molecular graph foundation models (MGFMs) face unique difficulties that complicate fine-tuning. These models are limited by smaller pre-training datasets and more severe data scarcity for downstream tasks, both of which require enhanced model generalization. Moreover, MGFMs must accommodate diverse pre-training objectives, including both regression and classification tasks. To better understand and improve fine-tuning techniques under these conditions, we classify eight fine-tuning methods into three mechanisms: weight-based fine-tuning, representation-based fine-tuning, and partial fine-tuning. We benchmark these methods on downstream regression and classification tasks across both supervised and self-supervised pre-trained models in diverse labeling settings. This extensive evaluation provides valuable insights and informs the design of a refined robust fine-tuning method, DWiSE-FT. This approach combines the strengths of simple post-hoc weight interpolation with more complex weight ensemble fine-tuning methods, delivering improved performance across both task types while maintaining the ease of use inherent in post-hoc weight interpolation.
|
Rejected_Submission
|
/pdf/198c6b2fd383c96f2b6f362b62a0103e4deff510.pdf
|
ICLR.cc/2025/Conference
|
wxPnuFp8fZ
|
Self-Supervised Diffusion MRI Denoising via Iterative and Stable Refinement
|
Magnetic Resonance Imaging (MRI), including diffusion MRI (dMRI), serves as a ``microscope'' for anatomical structures and routinely mitigates the influence of low signal-to-noise ratio scans by compromising temporal or spatial resolution. However, these compromises fail to meet clinical demands for both efficiency and precision. Consequently, denoising is a vital preprocessing step, particularly for dMRI, where clean data is unavailable. In this paper, we introduce Di-Fusion, a fully self-supervised denoising method that leverages the latter diffusion steps and an adaptive sampling process. Unlike previous approaches, our single-stage framework achieves efficient and stable training without extra noise model training and offers adaptive and controllable results in the sampling process. Our thorough experiments on real and simulated data demonstrate that Di-Fusion achieves state-of-the-art performance in microstructure modeling, tractography tracking, and other downstream tasks. Code is available at https://github.com/FouierL/Di-Fusion.
|
ICLR 2025 Poster
|
/pdf/6bd7c307edad6542cdf2945234d9af92905ebf8f.pdf
|
ICLR.cc/2024/Conference
|
9Gvs64deOj
|
Rendering Wireless Environments Useful for Gradient Estimators: A Zero-Order Stochastic Federated Learning Method
|
Federated learning (FL) is a novel approach to machine learning that allows multiple edge devices to collaboratively train a model without disclosing their raw data. However, several challenges hinder the practical implementation of this approach, especially when devices and the server communicate over wireless channels, as it suffers from communication and computation bottlenecks in this case. By utilizing a communication-efficient framework, we propose a novel zero-order (ZO) method with two types of gradient estimators, one-point and two-point, that harnesses the nature of the wireless communication channel without requiring the knowledge of the channel state coefficient. It is the first method that includes the wireless channel in the learning algorithm itself instead of wasting resources to analyze it and remove its impact. The two main difficulties of this work are that in FL, the objective function is usually not convex, which makes the extension of FL to ZO methods challenging, and that including the impact of wireless channels requires extra attention. However, we overcome these difficulties and comprehensively analyze the proposed zero-order federated learning (ZOFL) framework. We establish its convergence theoretically, and we prove a convergence rate of $O(\frac{1}{\sqrt[3]{K}})$ with the one-point estimate and $O(\frac{1}{\sqrt{K}})$ with the two-point one in the nonconvex setting. We further demonstrate the potential of our algorithms with experimental results, taking into account independent and identically distributed (IID) and non-IID device data distributions.
|
Rejected_Submission
|
/pdf/7ab43947f24592b7f909663681348b398b2623b3.pdf
|
ICLR.cc/2025/Conference
|
Es4RPNDtmq
|
Robust Weight Initialization for Tanh Neural Networks with Fixed Point Analysis
|
As a neural network's depth increases, it can improve generalization performance. However, training deep networks is challenging due to gradient and signal propagation issues. To address these challenges, extensive theoretical research and various methods have been introduced. Despite these advances, effective weight initialization methods for tanh neural networks remain insufficiently investigated. This paper presents a novel weight initialization method for neural networks with tanh activation function. Based on an analysis of the fixed points of the function $\tanh(ax)$, the proposed method aims to determine values of $a$ that mitigate activation saturation. A series of experiments on various classification datasets and physics-informed neural networks demonstrates that the proposed method outperforms Xavier initialization methods (with or without normalization) in terms of robustness across different network sizes, data efficiency, and convergence speed. Code is available at https://github.com/1HyunwooLee/Tanh-Init.
|
ICLR 2025 Poster
|
/pdf/98a97d928200d2a62876799c0c7fd6a5cbe062cb.pdf
|
ICLR.cc/2025/Conference
|
84WmbzikPP
|
Stiefel Flow Matching for Moment-Constrained Structure Elucidation
|
Molecular structure elucidation is a fundamental step in understanding chemical phenomena, with applications in identifying molecules in natural products, lab syntheses, forensic samples, and the interstellar medium.
We consider the task of predicting a molecule's all-atom 3D structure given only its molecular formula and moments of inertia, motivated by the ability of rotational spectroscopy to measure these moments.
While existing generative models can conditionally sample 3D structures with approximately correct moments, this soft conditioning fails to leverage the many digits of precision afforded by experimental rotational spectroscopy.
To address this, we first show that the space of $n$-atom point clouds with a fixed set of moments of inertia is embedded in the Stiefel manifold $\mathrm{St}(n, 4)$.
We then propose Stiefel Flow Matching as a generative model for elucidating 3D structure under exact moment constraints.
Additionally, we learn simpler and shorter flows by finding approximate solutions for equivariant optimal transport on the Stiefel manifold.
Empirically, enforcing exact moment constraints allows Stiefel Flow Matching to achieve higher success rates and faster sampling than Euclidean diffusion models, even on high-dimensional manifolds corresponding to large molecules in the GEOM dataset.
|
ICLR 2025 Poster
|
/pdf/48437bde94e3e6e0bc10c12ac74393c4c5538f1e.pdf
|
ICLR.cc/2025/Conference
|
M4qNIzQYpd
|
OpenRCA: Can Large Language Models Locate the Root Cause of Software Failures?
|
Large language models (LLMs) are driving substantial advancements in software engineering, with successful applications like Copilot and Cursor transforming real-world development practices. However, current research predominantly focuses on the early stages of development, such as code generation, while overlooking the post-development phases that are crucial to user experience. To explore the potential of LLMs in this direction, we propose OpenRCA, a benchmark dataset and evaluation framework for assessing LLMs’ ability to identify the root cause of software failures. OpenRCA includes 335 failures from three enterprise software systems, along with over 68 GB of telemetry data (logs, metrics, and traces). Given a failure case and its associated telemetry, the LLM is tasked to identify the root cause that triggered the failure, requiring comprehension of software dependencies and reasoning over heterogeneous, long-context telemetry data. Our results show substantial room for improvement, as current models can only handle the simplest cases. Even with the specially designed RCA-agent, the best-performing model, Claude 3.5, solved only 11.34% failure cases. Our work paves the way for future research in this direction.
|
ICLR 2025 Poster
|
/pdf/388fe964a16cdde844404051497859daabbf6fe0.pdf
|
ICLR.cc/2025/Conference
|
z1ohBxWeL2
|
SwiftKV: Fast Prefill-Optimized Inference with Knowledge-Preserving Model Transformation
|
LLM inference for popular enterprise use cases, such as summarization, RAG, and code-generation, typically observes orders of magnitude longer prompt lengths than generation lengths. This characteristic leads to high cost of prefill and increased response latency.
In this paper, we present SwiftKV, a novel model transformation and distillation procedure specifically designed to reduce the time and cost of processing prompt tokens while preserving high quality of generated tokens. SwiftKV combines three key mechanisms: i) SingleInputKV, which prefills later layers' KV cache using a much earlier layer's output, allowing prompt tokens to skip much of the model computation, ii) AcrossKV, which merges the KV caches of neighboring layers to reduce the memory footprint and support larger batch size for higher throughput, and iii) a knowledge-preserving distillation procedure that can adapt existing LLMs for SwiftKV with minimal accuracy impact and low compute and data requirement. For Llama-3.1-8B and 70B, SwiftKV reduces the compute requirement of prefill by 50% and the memory requirement of the KV cache by 62.5% while incurring minimum quality degradation across a wide range of tasks. In the end-to-end inference serving using an optimized vLLM implementation, SwiftKV realizes up to 2x higher aggregate throughput and 60% lower time per output token. It can achieve a staggering 560 TFlops/GPU of normalized inference throughput, which translates to 16K tokens/s for Llama-3.1-70B in 16-bit precision on 4x H100 GPUs. Our training, inference, and model implementations are open-sourced at https://anonymized.link.
|
Rejected_Submission
|
/pdf/5a8a36759311dcf73c40bd41a210800d23585985.pdf
|
ICLR.cc/2024/Conference
|
tUtGjQEDd4
|
Generative Modeling with Phase Stochastic Bridge
|
Diffusion models (DMs) represent state-of-the-art generative models for continuous inputs. DMs work by constructing a Stochastic Differential Equation (SDE) in the input space (i.e., position space), and using a neural network to reverse it. In this work, we introduce a novel generative modeling framework grounded in \textbf{phase space dynamics}, where a phase space is defined as an augmented space encompassing both position and velocity. Leveraging insights from Stochastic Optimal Control, we construct a path measure in the phase space that enables efficient sampling. In contrast to DMs, our framework demonstrates the capability to generate realistic data points at an early stage of dynamics propagation. This early prediction sets the stage for efficient data generation by leveraging additional velocity information along the trajectory. On standard image generation benchmarks, our model yields favorable performance over baselines in the regime of small Number of Function Evaluations (NFEs). Furthermore, our approach rivals the performance of diffusion models equipped with efficient sampling techniques, underscoring its potential as a new tool for generative modeling.
|
ICLR 2024 oral
|
/pdf/5d5ddf9cd03dbc97896ca72e62060b33d19f59e7.pdf
|
ICLR.cc/2025/Conference
|
pRIPRDALBV
|
Open-World Planning via Lifted Regression with LLM-based Affordances for Embodied Agents
|
Open-world planning is crucial for embodied AI agents that must make decisions with incomplete task-relevant knowledge. In fact, the main challenges lie in reasoning about objects and their affordances that are unknown to the agent. Large Language Models (LLMs), pre-trained on vast internet-scale data, have emerged as potential solutions for open-world planning. However, LLMs have limitations in long-horizon planning tasks and face problems related to interpretability, reliability, and cost-efficiency. Symbolic planning methods, on the other hand, offer structured and verifiable approaches to long-horizon tasks, but often struggle to generate feasible plans in an open-world setting. In this work, we propose a novel approach, called LLM-Regress, which combines the strengths of lifted symbolic regression planning with LLM-based affordances. The lifted representation allows us to generate plans capable of handling arbitrary unknown objects, while regression planning is the only planning paradigm that guarantees complete solutions using lifted representations. For such tasks, we leverage LLMs to supplement missing affordances knowledge for unknown objects. The regression nature of our approach enables the agent to focus on actions and objects relevant to the goal, thus avoiding the need for costly LLM calls for every decision. We evaluate our approach on the ALFWorld dataset and introduce a new ALFWorld-Afford dataset with higher planning complexity and more affordances types. The empirical results demonstrate that our method outperforms existing approaches in terms of success rates, planning duration, and number of LLM Tokens. Finally, we show that our approach is resilient to domain shifts in affordances and generalizes effectively to unseen tasks. This work underscores the importance of integrating symbolic reasoning with LLM knowledge for open-world decision-making in embodied AI.
|
Rejected_Submission
|
/pdf/6a6e13df07ee8729cfa801be61a5b77d9b6ba098.pdf
|
ICLR.cc/2025/Conference
|
OyAMxlDikl
|
Neural Network Adaptive Quantization based on Bayesian Deep Learning
|
We propose a novel approach to solve the adaptive quantization problem in neural networks based on epistemic uncertainty analysis. The quantized model is treated as a Bayesian neural network with stochastic weights, where the mean values are employed to estimate the corresponding weights. Standard deviations serve as an indicator of uncertainty and the number of corresponding bits — i.e., a larger number of bits indicate lower uncertainty, and vice versa. We perform an extensive analysis of several algorithms within a novel framework for different convolutional and fully connected neural networks based on open datasets demonstrating the main advantages of the proposed approach. In particular, we introduce two novel algorithms for mixed-precision quantization. Quantile Inform utilizes uncertainty to allocate bit-width across layers, while Random Bits employs stochastic gradient-based optimization techniques to maximize the full likelihood of quantization. Using our approach, we reduce the average bit-width of the VGG-16 model to 3.05 with the 90.5% accuracy on the CIFAR-10 dataset compared to 91.9% for the non-quantized model. For the LeNet model trained on the MNIST dataset, we reduce the average bit-width to 3.16 and achieve 99.0% accuracy, almost equal to 99.2% for the non-quantized model.
|
Rejected_Submission
|
/pdf/9a6d5532be4c06a7fbda86639f781db451cc3e9d.pdf
|
ICLR.cc/2025/Conference
|
m30uro534c
|
T-Graphormer: Using Transformers for Spatiotemporal Forecasting
|
Time series data is ubiquitous and appears in all fields of study. In multivariate time series, observations are interconnected both temporally and across components. For instance, in traffic flow analysis, traffic speeds at different intersections exhibit complex spatiotemporal correlations. Modelling this dual structure poses significant challenges. Most existing forecasting methods tackle these challenges by separately learning spatial and temporal dependencies. In this work, we introduce T-Graphormer, a Transformer-based approach designed to model spatiotemporal correlations directly. Extending the Graphormer architecture to incorporate temporal dynamics, our method updates each node representation by selectively attending to all other nodes within a graph sequence. This design enables the model to capture rich spatiotemporal patterns with minimal reliance on predefined spacetime inductive biases. We validate the effectiveness of T-Graphormer on real-world traffic prediction benchmark datasets, achieving up to 10% reductions in both root mean squared error (RMSE) and mean absolute percentage error (MAPE) compared to state-of-the-art methods.
|
Rejected_Submission
|
/pdf/d81da1b54712b1bac55014503fe35fb8411b6211.pdf
|
ICLR.cc/2025/Conference
|
7El7K1DoyX
|
Lawma: The Power of Specialization for Legal Annotation
|
Annotation and classification of legal text are central components of empirical legal research. Traditionally, these tasks are often delegated to trained research assistants. Motivated by the advances in language modeling, empirical legal scholars are increasingly turning to commercial models, hoping that it will alleviate the significant cost of human annotation. In this work, we present a comprehensive analysis of large language models’ current abilities to perform legal annotation tasks. To do so, we construct CaselawQA, a benchmark comprising 260 legal text classification tasks, nearly all new to the machine learning community. We demonstrate that commercial models, such as GPT-4.5 and Claude 3.7 Sonnet, achieve non-trivial accuracy but generally fall short of the performance required for legal work. We then demonstrate that small, lightly fine-tuned models vastly outperform commercial models. A few dozen to a few hundred labeled examples are usually enough to achieve higher accuracy. Our work points to a viable alternative to the predominant practice of prompting commercial models. For concrete legal annotation tasks with some available labeled data, researchers are likely better off using a fine-tuned open-source model. Code, datasets, and fine-tuned models are available at https://github.com/socialfoundations/lawma.
|
ICLR 2025 Poster
|
/pdf/20d98c1045c82ded375298c1694040f731031937.pdf
|
ICLR.cc/2024/Conference
|
Zww4Xqmk38
|
Tree-based Ensemble Learning for Out-of-distribution Detection
|
Determining whether testing samples have a distribution similar to that of the training samples is a fundamental question to address before we can safely deploy most machine learning models into practice. In this paper, we propose TOOD detection, a simple yet effective tree-based out-of-distribution (TOOD) detection mechanism to determine whether a set of unseen samples has a distribution similar to that of the training samples. The TOOD detection mechanism is based on computing the pairwise Hamming distance of testing samples' tree embeddings, which are obtained by fitting a tree-based ensemble model on in-distribution training samples. Our approach is interpretable and robust owing to its tree-based nature. Furthermore, our approach is efficient, flexible across various machine learning tasks, and can be easily generalized to the unsupervised setting. Extensive experiments show that the proposed method outperforms other state-of-the-art out-of-distribution detection methods in distinguishing in-distribution from out-of-distribution samples on various tabular, image, and text data.
|
Rejected_Submission
|
/pdf/e018334001525fcbbe38c06cee396b1ee473e6aa.pdf
|
ICLR.cc/2024/Conference
|
BxHgpC6FNv
|
Benign Overfitting and Grokking in ReLU Networks for XOR Cluster Data
|
Neural networks trained by gradient descent (GD) have exhibited a number of surprising generalization behaviors. First, they can achieve a perfect fit to noisy training data and still generalize near-optimally, showing that overfitting can sometimes be benign. Second, they can undergo a period of classical, harmful overfitting---achieving a perfect fit to training data with near-random performance on test data---before transitioning (''grokking'') to near-optimal generalization later in training. In this work, we show that both of these phenomena provably occur in two-layer ReLU networks trained by GD on XOR cluster data where a constant fraction of the training labels are flipped. In this setting, we show that after the first step of GD, the network achieves 100\% training accuracy, perfectly fitting the noisy labels in the training data, but achieves near-random test accuracy. At a later training step, the network achieves near-optimal test accuracy while still fitting the random labels in the training data, exhibiting a ''grokking'' phenomenon. This provides the first theoretical result of benign overfitting in neural network classification when the data distribution is not linearly separable. Our proofs rely on analyzing the feature learning process under GD, which reveals that the network implements a non-generalizable linear classifier after one step and gradually learns generalizable features in later steps.
|
ICLR 2024 poster
|
/pdf/39dd4b0c8c813b57e8116197b229d998b5770c44.pdf
|
ICLR.cc/2025/Conference
|
bFYST1MaGh
|
Communicating Activations Between Language Model Agents
|
Communication between multiple language model (LM) agents has been shown to scale up the reasoning ability of LMs. While natural language has been the dominant medium for inter-LM communication, it is not obvious this should be the standard: not only does natural language communication incur high inference costs that scale quickly with the number of both agents and messages, but also the decoding process abstracts away too much rich information that could be otherwise accessed from the internal activations. In this work, we propose a simple technique whereby LMs communicate via *activations*; concretely, we pause an LM $B$'s computation at an intermediate layer, combine its current activation with another LM $A$'s intermediate activation via some function $f$, then pass $f$'s output into the next layer of $B$ and continue the forward pass till decoding is complete. This approach scales up LMs on new tasks with *zero* additional parameters and data, and saves a *substantial amount of compute* over natural language communication. We test our method with various functional forms $f$ on two experimental setups—multi-player coordination games and reasoning benchmarks—and find that it achieves up to $27.0$% improvement over natural language communication across datasets with $<$$1/4$ the compute, illustrating the superiority and robustness of activations as an alternative "language" for communication between LMs.
|
Rejected_Submission
|
/pdf/2b196ef9b77586c21aae0e70803bb8ecbb9e94c4.pdf
|
ICLR.cc/2025/Conference
|
eIK4ojL2QM
|
Model Comparisons: XNet Outperforms KAN
|
In the fields of computational mathematics and artificial intelligence, the need for precise data modeling is crucial, especially for predictive machine learning tasks. This paper further explores XNet, a novel algorithm that employs the complex-valued Cauchy integral formula, offering a superior network architecture that surpasses traditional Multi-Layer Perceptrons (MLPs) and Kolmogorov-Arnold Networks (KANs). XNet significantly improves speed and accuracy across various tasks in both low- and high-dimensional spaces, redefining the scope of data-driven model development and providing substantial improvements over established time series models like LSTMs.
|
Desk_Rejected_Submission
|
/pdf/6f5816ce0f484ae1ed45f680865599d11bc7292c.pdf
|
ICLR.cc/2025/Conference
|
OhauMUNW8T
|
Wavelet-based Positional Representation for Long Context
|
In the realm of large-scale language models, a significant challenge arises when extrapolating sequences beyond the maximum allowable length.
This is because the model's position embedding mechanisms are limited to positions encountered during training, thus preventing effective representation of positions in longer sequences.
We analyzed conventional position encoding methods for long contexts and found the following characteristics.
(1) When the representation dimension is regarded as the time axis, Rotary Position Embedding (RoPE) can be interpreted as a restricted wavelet transform using Haar-like wavelets.
However, because it uses only a fixed scale parameter, it does not fully exploit the advantages of wavelet transforms, which capture the fine movements of non-stationary signals using multiple scales (window sizes).
This limitation could explain why RoPE performs poorly in extrapolation.
(2) Previous research as well as our own analysis indicates that Attention with Linear Biases (ALiBi) functions similarly to windowed attention, using windows of varying sizes.
However, it has limitations in capturing deep dependencies because it restricts the receptive field of the model.
From these insights, we propose a new position representation method that captures multiple scales (i.e., window sizes) by leveraging wavelet transforms without limiting the model's attention field.
Experimental results show that this new method improves the performance of the model in both short and long contexts.
In particular, our method allows extrapolation of position information without limiting the model's attention field.
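For reference, a small sketch of standard RoPE, the fixed-scale rotation the abstract reinterprets as a restricted Haar-like wavelet transform; the proposed multi-scale wavelet representation itself is not reproduced here:

```python
import numpy as np

def rope(x, base=10000.0):
    """Apply Rotary Position Embedding to x of shape (seq_len, dim), dim even.
    Each channel pair (2i, 2i+1) is rotated by angle pos * base**(-2i/dim),
    i.e. a single fixed scale per pair (the restriction discussed above)."""
    seq_len, dim = x.shape
    half = dim // 2
    inv_freq = base ** (-2.0 * np.arange(half) / dim)      # (half,)
    angles = np.outer(np.arange(seq_len), inv_freq)        # (seq_len, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

q = np.random.randn(8, 64)
q_rotated = rope(q)    # position information is now encoded in the rotation
```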
|
ICLR 2025 Poster
|
/pdf/d5ed2fc8f4eb31801279781b07970b112229814c.pdf
|
ICLR.cc/2024/Conference
|
RxhOEngX8s
|
Expecting The Unexpected: Towards Broad Out-Of-Distribution Detection
|
Deployed machine learning systems can be improved using methods detecting out-of-distribution (OOD) inputs. Existing research mainly focuses on one type of distribution shift: detecting samples from novel classes, absent from the training set. However, real-world systems encounter a broad variety of anomalous inputs, and the OOD literature neglects this diversity. This work categorizes five distinct types of distribution shifts and critically evaluates the performance of recent OOD detection methods on each of them. We publicly release our benchmark under the name BROAD (Benchmarking Resilience Over Anomaly Diversity). We find that while these methods excel in detecting novel classes, their performances are inconsistent across other types of distribution shifts. In other words, they can only reliably detect unexpected inputs that they have been specifically designed to expect. As a first step toward broad OOD detection, we learn a Gaussian mixture generative model for existing detection scores, enabling an ensemble detection approach that is more consistent and comprehensive for broad OOD detection, with improved performances over existing methods. Our code to download BROAD and reproduce our experiments will be released upon publication.
|
Rejected_Submission
|
/pdf/97b269dbc178360c97a7daa7e4a280a38f665f9d.pdf
|
ICLR.cc/2024/Conference
|
EArTDUmILF
|
VBH-GNN: Variational Bayesian Heterogeneous Graph Neural Networks for Cross-subject Emotion Recognition
|
The research on human emotion under electroencephalogram (EEG) is an emerging field in which cross-subject emotion recognition (ER) is a promising but challenging task. Many approaches attempt to find emotionally relevant domain-invariant features using domain adaptation (DA) to improve the accuracy of cross-subject ER. However, two problems still exist with these methods. First, only single-modal data (EEG) is utilized, ignoring the complementarity between multi-modal physiological signals. Second, these methods aim to completely match the signal features between different domains, which is difficult due to the extreme individual differences of EEG. To solve these problems, we introduce the complementarity of multi-modal physiological signals and propose a new method for cross-subject ER that does not align the distribution of signal features but rather the distribution of spatio-temporal relationships between features. We design a Variational Bayesian Heterogeneous Graph Neural Network (VBH-GNN) with Relationship Distribution Adaptation (RDA). The RDA first aligns the domains by expressing the model space as a posterior distribution of a heterogeneous graph for a given source domain. Then, the RDA transforms the heterogeneous graph into an emotion-specific graph to further align the domains for the downstream ER task. Extensive experiments on two public datasets, DEAP and Dreamer, show that our VBH-GNN outperforms state-of-the-art methods in cross-subject scenarios.
|
ICLR 2024 poster
|
/pdf/8a045a606abf89ad24e81dea7ff199c2cb5fe88a.pdf
|
ICLR.cc/2018/Conference
|
HJC2SzZCW
|
Sensitivity and Generalization in Neural Networks: an Empirical Study
|
In practice it is often found that large over-parameterized neural networks generalize better than their smaller counterparts, an observation that appears to conflict with classical notions of function complexity, which typically favor smaller models. In this work, we investigate this tension between complexity and generalization through an extensive empirical exploration of two natural metrics of complexity related to sensitivity to input perturbations. Our experiments survey thousands of models with different architectures, optimizers, and other hyper-parameters, as well as four different image classification datasets. We find that trained neural networks are more robust to input perturbations in the vicinity of the training data manifold, as measured by the input-output Jacobian of the network, and that this correlates well with generalization. We further establish that factors associated with poor generalization -- such as full-batch training or using random labels -- correspond to higher sensitivity, while factors associated with good generalization -- such as data augmentation and ReLU non-linearities -- give rise to more robust functions. Finally, we demonstrate how the input-output Jacobian norm can be predictive of generalization at the level of individual test points.
|
Accept (Poster)
|
/pdf/412ea5b104cf0536b53489bb956d72ac8de77e2e.pdf
|
ICLR.cc/2025/Conference
|
Lb414Rdzs8
|
NODE-SAT: Temporal Graph Learning with Neural ODE-Guided Self-Attention
|
We propose NODE-SAT, a novel temporal graph learning model that integrates Neural Ordinary Differential Equations (NODEs) with self-attention mechanisms. NODE-SAT's design requires only historical 1-hop neighbors as input and comprises three key components: a temporal link processing module utilizing NODE-guided self-attention layers to capture temporal link information, a node representation module summarizing neighbor information, and a prediction layer. Extensive experiments across thirteen temporal link prediction datasets demonstrate that NODE-SAT achieves state-of-the-art performance on most datasets with significantly faster convergence. The model demonstrates high accuracy, rapid convergence, robustness across varying dataset complexities, and strong generalization capabilities in both transductive and inductive settings in temporal link prediction. These findings highlight NODE-SAT's effectiveness in capturing node correlations and temporal link dynamics.
|
Rejected_Submission
|
/pdf/c0c884f8bb8908fa9e97090c7df2f18404e6d33b.pdf
|
ICLR.cc/2025/Conference
|
NUD03NBDOE
|
ActionReasoningBench: Reasoning about Actions with and without Ramification Constraints
|
Reasoning about Actions and Change (RAC) has historically played a pivotal role in solving foundational AI problems, such as the frame problem. It has driven advancements in AI fields, such as non-monotonic and commonsense reasoning. RAC remains crucial for AI systems that operate in dynamic environments, engage in interactive scenarios, or rely on commonsense reasoning. Despite substantial advances made by Large Language Models (LLMs) in various AI domains, their performance in RAC remains underexplored. To address this gap, we introduce a new diagnostic benchmark, $\textbf{ActionReasoningBench}$, which encompasses 8 domains and includes questions for up to 19 action sequences. This benchmark rigorously evaluates LLMs across six key RAC dimensions: $\textit{Fluent Tracking}$, $\textit{State Tracking}$, $\textit{Action Executability}$, $\textit{Effects of Actions}$, $\textit{Numerical RAC}$, and $\textit{Composite Questions}$. LLMs demonstrate average accuracy rates of 73.55%, 65.63%, 58.73%, and 62.38% on the former four dimensions, which are frequently discussed in RAC literature. However, on the latter two dimensions, which introduce complex and novel reasoning questions, the average performance of LLMs drops to 33.16% and 51.19%, respectively, reflecting a 17.9% performance decline. We also introduce new ramification constraints to capture the indirect effects of actions, providing deeper insights into RAC challenges. Our evaluation of state-of-the-art LLMs, including both open-source and commercial models, reveals challenges across all RAC dimensions, particularly in handling ramifications, with GPT-4o failing to solve any question and o1-preview achieving a score of only 18.4%.
|
ICLR 2025 Poster
|
/pdf/83736ef168f97ad552f160a8513145ada3403953.pdf
|
ICLR.cc/2024/Conference
|
kKxvFpvV04
|
Towards Exact Computation of Inductive Bias
|
Much research in machine learning involves finding appropriate inductive biases (e.g. convolutional neural networks, momentum-based optimizers, transformers) to promote generalization on tasks. However, quantification of the amount of inductive bias associated with these architectures and hyperparameters has been limited. We propose a novel method for efficiently computing the inductive bias required for generalization on a task with a fixed training data budget; formally, this corresponds to the amount of information required to specify well-generalizing models within a specific hypothesis space of models. Our approach involves sampling from the hypothesis space and modeling the loss distribution of hypotheses to estimate the required inductive bias for a task. Unlike prior work, our method provides a direct estimate of inductive bias without using bounds and is applicable to diverse hypothesis spaces. Moreover, we derive approximation error bounds for our estimation approach in terms of the number of sampled hypotheses. Consistent with prior results, our empirical results demonstrate that higher dimensional tasks require greater inductive bias. We show that relative to other expressive model classes, neural networks as a model class encode massive amounts of inductive bias. Furthermore, our measure quantifies the relative difference in inductive bias between different neural network architectures (e.g. with varying width and depth). Our proposed inductive bias metric provides an information-theoretic interpretation of the benefits of specific model architectures for certain tasks and provides a quantitative guide to developing tasks requiring greater inductive bias, thereby encouraging the development of more powerful inductive biases.
|
Rejected_Submission
|
/pdf/80185e3e04a9e9cfb373bcbef59dc95758d25ca2.pdf
|
ICLR.cc/2024/Conference
|
qTui9aQ3VW
|
How Robust Are Energy-Based Models Trained With Equilibrium Propagation?
|
Deep neural networks (DNNs) are easily fooled by adversarial perturbations that are imperceptible to humans. Adversarial training, a process where adversarial examples are added to the training set, is the current state-of-the-art defense against adversarial attacks, but it lowers the model's accuracy on clean inputs, is computationally expensive, and offers less robustness to natural noise. In contrast, energy-based models (EBMs), which were designed for efficient implementation in neuromorphic hardware and physical systems, incorporate feedback connections from each layer to the previous layer, yielding a recurrent, deep-attractor architecture which we hypothesize should make them naturally robust. Our work is the first to explore the robustness of EBMs to both natural corruptions and adversarial attacks, which we do using the CIFAR-10 and CIFAR-100 datasets. We demonstrate that EBMs are more robust than transformers and display comparable robustness to adversarially-trained DNNs on white-box, black-box, and natural perturbations without sacrificing clean accuracy, and without the need for adversarial training or additional training techniques.
|
Rejected_Submission
|
/pdf/a9f3cabbdac3e79a0b6c7ced381fe0a188d59011.pdf
|
ICLR.cc/2024/Conference
|
WNQjN5HzXt
|
AUGCAL: Improving Sim2Real Adaptation by Uncertainty Calibration on Augmented Synthetic Images
|
Synthetic data (Sim) drawn from simulators have emerged as a popular alternative for training models where acquiring annotated real-world images is difficult. However, transferring models trained on synthetic images to real-world applications can be challenging due to appearance disparities. A commonly employed solution to counter this Sim2Real gap is unsupervised domain adaptation, where models are trained using labeled Sim data and unlabeled Real data. Mispredictions made by such Sim2Real adapted models are often associated with miscalibration – stemming from overconfident predictions on real data. In this paper, we introduce AUGCAL, a simple training-time patch for unsupervised adaptation that improves Sim2Real adapted models by – (1) reducing overall miscalibration, (2) reducing overconfidence in incorrect predictions and (3) improving confidence score reliability by better guiding misclassification detection – all while retaining or improving Sim2Real performance. Given a base Sim2Real adaptation algorithm, at training time, AUGCAL involves replacing vanilla Sim images with strongly augmented views (AUG intervention) and additionally optimizing for a training time calibration loss on augmented Sim predictions (CAL intervention). We motivate AUGCAL using a brief analytical justification of how to reduce miscalibration on unlabeled REAL data. Through our experiments, we empirically show the efficacy of AUGCAL across multiple adaptation methods, backbones, tasks and shifts.
|
ICLR 2024 poster
|
/pdf/4f20af9c38d0b7f42f702485207a3f1ec51ee183.pdf
|
ICLR.cc/2025/Conference
|
qVtfN6NoJi
|
Layer-Varying Deep Reservoir Computing Architecture
|
Data loss and corruption are common incidents that often lead to catastrophic consequences in both theoretical and experimental facets of data analytics. The aspiration to minimize the impacts of such consequences drives the demand for the development of effective data analytic tools and imputation methods to replace missing, corrupted, or artifacted data.
The focus of this paper is on multivariate time series imputation, for which we develop a dynamical systems-theoretic deep learning approach. The central idea is to view a multivariate time series as a trajectory of a dynamical system. Then, we construct a deep reservoir computing architecture to model the temporal evolution of the system by using existing data in the time series. In particular, this architecture is composed of a cascade of echo state network (ESN) layers with diminishing reservoir sizes. We then propose a layer-by-layer training scheme, which gives rise to a deep learning-based time series imputation algorithm. We further provide a rigorous convergence analysis of this algorithm by exploiting the echo state property of ESN, and demonstrate the imputation performance as well as the efficiency of the training process by utilizing both synthetic and real-world datasets arising from diverse applications.
|
Rejected_Submission
|
/pdf/884e457c91d2daad41c15914c0ad0a5f8c608928.pdf
|
ICLR.cc/2025/Conference
|
z2QdVmhtAP
|
Efficient Multi Subject Visual Reconstruction from fMRI Using Aligned Representations
|
Reconstructing visual images from fMRI data presents a challenging task, particularly when dealing with limited data and compute availability. This work introduces a novel approach to fMRI-based visual image reconstruction using a subject-agnostic common representation space. We show that subjects' brain signals naturally align in this common space during training, without the need for explicit alignment. This is leveraged to demonstrate that aligning subject-specific adapters to a reference subject is significantly more efficient than traditional end-to-end training methods. Our approach excels in low-data scenarios, where training the adapter with limited data achieves faster and better performance. We also introduce a novel method to select the most representative subset of images for a new subject, allowing for fine-tuning with 40\% less data while maintaining performance. These advancements make fMRI data collection more efficient and practical, reducing the burden on subjects and improving the generalization of fMRI reconstruction models.
|
Rejected_Submission
|
/pdf/4c481bc805855943cdd332c3de81b4fb5b961cad.pdf
|
ICLR.cc/2024/Conference
|
Qqu5mMgIBV
|
Castor: Causal Temporal Regime Structure Learning
|
The task of uncovering causal relationships among variables from time series data stands as an essential and challenging objective that cuts across a broad array of disciplines ranging from climate science to healthcare. Time series entail linear or non-linear relationships, and usually follow multiple a priori unknown regimes. Existing causal discovery methods can infer summary causal graphs from heterogeneous data with known regime indices, but they fall short in comprehensively learning both regime indices and the full temporal causal graph.
In this paper, we introduce CASTOR, a novel framework designed to learn causal relationships in heterogeneous time series data composed of various regimes, each governed by a distinct causal graph. Through the maximization of a score function via the EM algorithm, CASTOR infers the number of regimes and learns linear or non-linear causal relationships inherent in each regime. We demonstrate the robust convergence properties of CASTOR, specifically highlighting its proficiency in accurately identifying unique regimes. Empirical evidence, garnered from exhaustive synthetic experiments and two real-world benchmarks, confirms CASTOR's superior performance in causal discovery compared to relevant baselines.
By learning a full temporal causal graph for each regime, CASTOR establishes itself as a distinctly interpretable method for causal discovery in heterogeneous time series.
|
Rejected_Submission
|
/pdf/c564e817c1051c9b2e863ec7ea4fe967f1f852de.pdf
|
ICLR.cc/2025/Conference
|
UW0zetsx8X
|
Prompt Optimization with Human Feedback
|
Large language models (LLMs) have demonstrated remarkable performances in various tasks. However, the performances of LLMs heavily depend on the input prompt. This has given rise to a number of recent works on prompt optimization. However, the previous works often require the availability of a numeric score to assess the quality of every prompt. Unfortunately, when a human user interacts with a black-box LLM, it is often infeasible and unreliable to attain such a score. Instead, it is usually significantly easier and more reliable to obtain preference feedback from a human user, i.e., showing the user the responses generated from a pair of prompts and asking the user which one is preferred. Therefore, in this paper, we study the problem of prompt optimization with human feedback (POHF), in which we aim to optimize the prompt for a black-box LLM using only human preference feedback. By drawing inspirations from dueling bandits, we design a theoretically principled strategy to select a pair of prompts to query for preference feedback in every iteration, and hence introduce our algorithm named automated POHF (APOHF). We apply our APOHF algorithm to a variety of tasks, including optimizing user instructions, prompt optimization for text-to-image generative models, and response optimization with human feedback (i.e., further refining the response using a variant of our APOHF). The results demonstrate that our APOHF can efficiently find a good prompt using a small number of preference feedback instances.
|
Rejected_Submission
|
/pdf/12e81a4db660560ab7a87aeae5f271d41f77c032.pdf
|
ICLR.cc/2025/Conference
|
Iyrtb9EJBp
|
Measuring and Enhancing Trustworthiness of LLMs in RAG through Grounded Attributions and Learning to Refuse
|
LLMs are an integral component of retrieval-augmented generation (RAG) systems. While many studies focus on evaluating the overall quality of end-to-end RAG systems, there is a gap in understanding the appropriateness of LLMs for the RAG task. To address this, we introduce Trust-Score, a holistic metric that evaluates the trustworthiness of LLMs within the RAG framework. Our results show that various prompting methods, such as in-context learning, fail to effectively adapt LLMs to the RAG task as measured by Trust-Score. Consequently, we propose Trust-Align, a method to align LLMs for improved Trust-Score performance. 26 out of 27 models aligned using Trust-Align substantially outperform competitive baselines on ASQA, QAMPARI, and ELI5. Specifically, in LLaMA-3-8b, Trust-Align outperforms FRONT on ASQA (↑12.56), QAMPARI (↑36.04), and ELI5 (↑17.69). Trust-Align also significantly enhances models’ ability to correctly refuse and provide quality citations. We also demonstrate the effectiveness of Trust-Align across different open-weight models, including the LLaMA series (1b to 8b), Qwen-2.5 series (0.5b to 7b), and Phi3.5 (3.8b). We release our code at https://github.com/declare-lab/trust-align.
|
ICLR 2025 Oral
|
/pdf/29703dfd9cef0f6afde425618f603cca393e0979.pdf
|
ICLR.cc/2025/Conference
|
XrsOu4KgDE
|
Attributing Culture-Conditioned Generations to Pretraining Corpora
|
In open-ended generative tasks like narrative writing or dialogue, large language models often exhibit cultural biases, showing limited knowledge and generating templated outputs for less prevalent cultures. Recent works show that these biases may stem from uneven cultural representation in pretraining corpora. This work investigates how pretraining leads to biased culture-conditioned generations
by analyzing how models associate entities with cultures based on pretraining data patterns. We propose the MEMOED framework (MEMOrization from prEtraining Document) to determine whether a generation for a culture arises from memorization. Using MEMOED on culture-conditioned generations about food and clothing for 110 cultures, we find that high-frequency cultures in pretraining data yield more generations with memorized symbols, while some low-frequency cultures produce none. Additionally, the model favors generating entities with extraordinarily high frequency regardless of the conditioned culture, reflecting biases toward frequent pretraining terms irrespective of relevance. We hope that the MEMOED framework and our insights will inspire more works on attributing model performance on pretraining data.
|
ICLR 2025 Poster
|
/pdf/d4ad6ebf55597935dddd75a72863d66152715ca2.pdf
|
ICLR.cc/2025/Conference
|
exgLs4snap
|
Parameter Expanded Stochastic Gradient Markov Chain Monte Carlo
|
Bayesian Neural Networks (BNNs) provide a promising framework for modeling predictive uncertainty and enhancing out-of-distribution robustness (OOD) by estimating the posterior distribution of network parameters. Stochastic Gradient Markov Chain Monte Carlo (SGMCMC) is one of the most powerful methods for scalable posterior sampling in BNNs, achieving efficiency by combining stochastic gradient descent with second-order Langevin dynamics. However, SGMCMC often suffers from limited sample diversity in practice, which affects uncertainty estimation and model performance. We propose a simple yet effective approach to enhance sample diversity in SGMCMC without the need for tempering or running multiple chains. Our approach reparameterizes the neural network by decomposing each of its weight matrices into a product of matrices, resulting in a sampling trajectory that better explores the target parameter space. This approach produces a more diverse set of samples, allowing faster mixing within the same computational budget. Notably, our sampler achieves these improvements without increasing the inference cost compared to the standard SGMCMC. Extensive experiments on image classification tasks, including OOD robustness, diversity, loss surface analyses, and a comparative study with Hamiltonian Monte Carlo, demonstrate the superiority of the proposed approach.
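A toy sketch of the expanded parameterization idea under the assumption that each weight matrix is replaced by a plain two-factor product `W = A @ B` and sampled with vanilla SGLD; the factor shapes, step size, and temperature are illustrative stand-ins:

```python
import torch

def sgld_step(params, loss, lr=1e-4, temperature=1.0):
    """One SGLD update: gradient step plus Gaussian exploration noise."""
    grads = torch.autograd.grad(loss, params)
    with torch.no_grad():
        for p, g in zip(params, grads):
            p -= lr * g + (2.0 * lr * temperature) ** 0.5 * torch.randn_like(p)

# expanded parameterization: sample the factors of W = A @ B instead of W itself
d_in, d_out, rank = 32, 16, 32
A = (0.1 * torch.randn(d_in, rank)).requires_grad_()
B = (0.1 * torch.randn(rank, d_out)).requires_grad_()
x, y = torch.randn(64, d_in), torch.randn(64, d_out)

for _ in range(100):
    W = A @ B                              # effective weight matrix
    loss = ((x @ W - y) ** 2).mean()       # stand-in for a minibatch loss
    sgld_step([A, B], loss)                # posterior samples of W are snapshots of A @ B
```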
|
ICLR 2025 Poster
|
/pdf/caa782ba6b8081e4224455d2ba79db20f6eb72ab.pdf
|
ICLR.cc/2024/Conference
|
qr4ECbGcSj
|
On the Expressivity of Objective-Specification Formalisms in Reinforcement Learning
|
Most algorithms in reinforcement learning (RL) require that the objective is formalised with a Markovian reward function. However, it is well-known that certain tasks cannot be expressed by means of an objective in the Markov rewards formalism, motivating the study of alternative objective-specification formalisms in RL such as Linear Temporal Logic and Multi-Objective Reinforcement Learning. To date, there has not yet been any thorough analysis of how these formalisms relate to each other in terms of their expressivity. We fill this gap in the existing literature by providing a comprehensive comparison of 17 salient objective-specification formalisms. We place these formalisms in a preorder based on their expressive power, and present this preorder as a Hasse diagram. We find a variety of limitations for the different formalisms, and argue that no formalism is both dominantly expressive and straightforward to optimise with current techniques. For example, we prove that each of Regularised RL, (Outer) Nonlinear Markov Rewards, Reward Machines, Linear Temporal Logic, and Limit Average Rewards can express a task that the others cannot. The significance of our results is twofold. First, we identify important expressivity limitations to consider when specifying objectives for policy optimization. Second, our results highlight the need for future research which adapts reward learning to work with a greater variety of formalisms, since many existing reward learning methods assume that the desired objective takes a Markovian form. Our work contributes towards a more cohesive understanding of the costs and benefits of different RL objective-specification formalisms.
|
ICLR 2024 poster
|
/pdf/f2d5834dbd03ec4ae433785ac6a4ad914f56d6a6.pdf
|
ICLR.cc/2024/Conference
|
auUngos7eR
|
Implicit Maximum a Posteriori Filtering via Adaptive Optimization
|
Bayesian filtering approximates the true underlying behavior of a time-varying system by inverting an explicit generative model to convert noisy measurements into state estimates. This process typically requires matrix storage, inversion, and multiplication or Monte Carlo estimation, none of which are practical in high-dimensional state spaces such as the weight spaces of artificial neural networks. Here, we consider the standard Bayesian filtering problem as optimization over a time-varying objective. Instead of maintaining matrices for the filtering equations or simulating particles, we specify an optimizer that defines the Bayesian filter implicitly. In the linear-Gaussian setting, we show that every Kalman filter has an equivalent formulation using K steps of gradient descent. In the nonlinear setting, our experiments demonstrate that our framework results in filters that are effective, robust, and scalable to high-dimensional systems, comparing well against the standard toolbox of Bayesian filtering solutions. We suggest that it is easier to fine-tune an optimizer than it is to specify the correct filtering equations, making our framework an attractive option for high-dimensional filtering problems.
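An illustrative sketch of the implicit-filtering view, assuming a linear-Gaussian model and plain gradient descent on the per-step MAP objective; the matrices, step size, and number of steps `K` below are made-up toy values:

```python
import numpy as np

def implicit_map_filter_step(x_prev, y, F, H, R_inv, P_inv, lr=0.1, K=50):
    """Estimate the current state by K gradient steps on the per-step MAP objective
    J(x) = (y - Hx)^T R_inv (y - Hx) + (x - F x_prev)^T P_inv (x - F x_prev),
    instead of applying the explicit Kalman update equations."""
    x = F @ x_prev                                  # start from the predicted state
    for _ in range(K):
        grad = -2 * H.T @ R_inv @ (y - H @ x) + 2 * P_inv @ (x - F @ x_prev)
        x = x - lr * grad
    return x

# toy constant-velocity tracking with scalar position measurements
F = np.array([[1.0, 1.0], [0.0, 1.0]])              # state transition
H = np.array([[1.0, 0.0]])                           # measurement model
R_inv, P_inv = np.eye(1) / 0.5, np.eye(2)            # inverse noise covariances
x = np.zeros(2)
for y in [1.0, 2.1, 2.9, 4.2]:
    x = implicit_map_filter_step(x, np.array([y]), F, H, R_inv, P_inv)
```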
|
ICLR 2024 poster
|
/pdf/d2aed4c74551cad3f1c952f10a85be61674493ea.pdf
|
ICLR.cc/2025/Conference
|
mBrAuyd26J
|
Enhance Reasoning for Large Language Models with Reinforcement Learning in the Game Werewolf
|
Despite their success across a broad spectrum of general tasks, Large Language Models (LLMs) often underperform in domain-specific tasks not well-represented in their pre-training corpora. We introduce an innovative framework integrating general-purpose LLMs with an external \emph{Thinker} module to enhance the reasoning capabilities of LLM-based agents. Unlike augmenting LLMs with prompt engineering, our Thinker module directly accesses knowledge from domain databases and employs supervised or reinforcement learning (RL). We establish a reasoning hierarchy where LLMs handle intuitive System-1 tasks that are domain-agnostic, while the Thinker focuses on System-2 tasks that require complex logical analysis and domain-specific knowledge. Our framework is demonstrated through a 9-player Werewolf game that necessitates dual-system reasoning. We design a communication protocol between LLMs and the Thinker, then optimize the Thinker through online RL and refine it by imitation learning. Drawing from 18800 human games, this work also contributes to the largest dataset for social deduction games to date. Experiments show that GPT-3.5 and GPT-4, augmented with the Thinker, significantly improve in deductive reasoning, speech generation, and online gameplay evaluated by human players. Further, integrating a fine-tuned 6B Werewolf-specific LLM with the Thinker achieves performance on par with GPT-4.
|
Rejected_Submission
|
/pdf/cc6b9665cd85f7343ab6c520f0936229e949c2ef.pdf
|
ICLR.cc/2025/Conference
|
q6CM6UdP3K
|
$\textit{One Stone Three Birds:}$ Three-Dimensional Implicit Neural Network for Compression and Continuous Representation of Multi-Altitude Climate Data
|
Wind energy stands out as a promising clean and renewable energy alternative, not only for its potential to combat global warming but also for its capacity to meet the ever-growing demand for energy. However, analysis of wind data to fully harness the benefits of wind energy demands tackling several related challenges:
(1) Current data resolution is inadequate for capturing the detailed information needed across diverse climatic conditions;
(2) Efficient management and storage of real-time measurements are currently lacking;
(3) Extrapolating wind data across spatial specifications, which enables analysis at costly-to-measure, unobserved points, is necessary.
In response to these challenges, we introduce a modality-agnostic learning framework utilizing implicit neural networks. Our model effectively compresses a large volume of climate data into a manageable latent codec. It also learns underlying continuous climate patterns, enabling reconstruction at any scale and supporting modality transfer and fusion. Extensive experimental results show consistent performance improvements over existing baselines.
|
Rejected_Submission
|
/pdf/65157a31db2b8f3cf2002fdd2a934ead0396ed1e.pdf
|
ICLR.cc/2025/Conference
|
p5RsCkE9sz
|
Using Multimodal Deep Neural Networks to Disentangle Language from Visual Aesthetic Experience
|
When we experience a visual stimulus as beautiful, how much of that experience derives from perceptual computations we cannot describe versus conceptual knowledge we can readily translate into natural language? Disentangling perception from language in affective experiences through behavioral paradigms or neuroimaging is often empirically intractable. Here, we circumnavigate this challenge by using linear decoding over the learned representations of unimodal vision, unimodal language, and multimodal (language-aligned) deep neural network (DNN) models to predict human beauty ratings of naturalistic images. We find that unimodal vision models (e.g. SimCLR) account for the vast majority of explainable variance in these ratings. Language-aligned vision models (e.g. SLIP) yield small gains relative to unimodal vision. Unimodal language models (e.g. GPT2) conditioned on visual embeddings to generate captions (via CLIPCap) yield no further gains. Pure-language model embeddings of machine-generated captions alone yield lower predictions. Taken together, these results suggest that whatever words we may eventually find to describe our experiences of beauty, the ineffable computations of feedforward perception likely remain the dominant basis of our judgment.
|
Rejected_Submission
|
/pdf/7de561985c3638b49a2fc93c31dbd0c41c8f87cb.pdf
|
ICLR.cc/2024/Conference
|
93LoCyww8o
|
Hybrid Internal Model: Learning Agile Legged Locomotion with Simulated Robot Response
|
Robust locomotion control depends on accurate state estimations. However, the sensors of most legged robots can only provide partial and noisy observations, making the estimation particularly challenging, especially for external states like terrain frictions and elevation maps. Inspired by the classical Internal Model Control principle, we consider these external states as disturbances and introduce Hybrid Internal Model (HIM) to estimate them according to the response of the robot. The response, which we refer to as the hybrid internal embedding, contains the robot’s explicit velocity and implicit stability representation, corresponding to two primary goals for locomotion tasks: explicitly tracking velocity and implicitly maintaining stability. We use contrastive learning to optimize the embedding to be close to the robot’s successor state, in which the response is naturally embedded. HIM has several appealing benefits: It only needs the robot’s proprioceptions, i.e., those from joint encoders and IMU as observations. It innovatively maintains consistent observations between simulation reference and reality that avoids information loss in mimicking learning. It exploits batch-level information that is more robust to noises and keeps better sample efficiency. It only requires 1 hour of training on an RTX 4090 to enable a quadruped robot to traverse any terrain under any disturbances. A wealth of real-world experiments demonstrates its agility, even in high-difficulty tasks and cases never occurred during the training process, revealing remarkable open-world generalizability.
|
ICLR 2024 poster
|
/pdf/7ebfa7daae934eacc0cd05b7ee6107d2ac5b30dd.pdf
|
ICLR.cc/2024/Conference
|
xhEN0kJh4q
|
Robust Model-Based Optimization for Challenging Fitness Landscapes
|
Protein design, a grand challenge of the day, involves optimization on a fitness landscape, and leading methods adopt a model-based approach where a model is trained on a training set (protein sequences and fitness) and proposes candidates to explore next. These methods are challenged by sparsity of high-fitness samples in the training set, a problem that has been recognized in the literature. A less recognized but equally important problem stems from the distribution of training samples in the design space: leading methods are not designed for scenarios where the desired optimum is in a region that is not only poorly represented in training data, but also relatively far from the highly represented low-fitness regions. We show that this problem of “separation” in the design space is a significant bottleneck in existing model-based optimization tools and propose a new approach that uses a novel VAE as its search model to overcome the problem. We demonstrate its advantage over prior methods in robustly finding improved samples, regardless of the imbalance and separation between low- and high-fitness samples. Our comprehensive benchmark on real and semi-synthetic protein datasets as well as solution design for physics-informed neural networks, showcases the generality of our approach in discrete and continuous design spaces. Our implementation is available at https://github.com/sabagh1994/PGVAE.
|
ICLR 2024 poster
|
/pdf/fb0639e5f082d1a2381a60dc9bb079cb3f5de709.pdf
|
ICLR.cc/2024/Conference
|
q4AEBLHuA6
|
Solving High Frequency and Multi-Scale PDEs with Gaussian Processes
|
Machine learning based solvers have garnered much attention in physical simulation and scientific computing, with a prominent example, physics-informed neural networks (PINNs). However, PINNs often struggle to solve high-frequency and multi-scale PDEs, which can be due to spectral bias during neural network training. To address this problem, we resort to the Gaussian process (GP) framework. To flexibly capture the dominant frequencies, we model the power spectrum of the PDE solution with a student $t$ mixture or Gaussian mixture. We apply the inverse Fourier transform to obtain the covariance function (by Wiener-Khinchin theorem). The covariance derived from the Gaussian mixture spectrum corresponds to the known spectral mixture kernel. Next, we estimate the mixture weights in the log domain, which we show is equivalent to placing a Jeffreys prior. It automatically induces sparsity, prunes excessive frequencies, and adjusts the remaining toward the ground truth. Third, to enable efficient and scalable computation on massive collocation points, which are critical to capture high frequencies, we place the collocation points on a grid, and multiply our covariance function at each input dimension. We use the GP conditional mean to predict the solution and its derivatives so as to fit the boundary condition and the equation itself.
As a result, we can derive a Kronecker product structure in the covariance matrix. We use Kronecker product properties and multilinear algebra to promote computational efficiency and scalability, without low-rank approximations. We show the advantage of our method in systematic experiments. The code is released at {https://github.com/xuangu-fang/Gaussian-Process-Slover-for-High-Freq-PDE}.
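For reference, a sketch of the Gaussian-mixture-spectrum covariance mentioned above (the standard spectral mixture kernel obtained via the Wiener-Khinchin theorem); the Student-t spectrum variant, the Jeffreys prior on mixture weights, and the Kronecker-structured solver are not shown:

```python
import numpy as np

def spectral_mixture_kernel(tau, weights, means, variances):
    """1-D spectral mixture covariance
    k(tau) = sum_q w_q * exp(-2 * pi^2 * tau^2 * v_q) * cos(2 * pi * mu_q * tau),
    i.e. the inverse Fourier transform of a Gaussian-mixture power spectrum."""
    tau = np.asarray(tau, dtype=float)[..., None]
    return (weights * np.exp(-2.0 * np.pi**2 * tau**2 * variances)
            * np.cos(2.0 * np.pi * means * tau)).sum(-1)

# evaluate the covariance on a grid of lags for a two-component spectrum
taus = np.linspace(0.0, 1.0, 5)
k_vals = spectral_mixture_kernel(taus,
                                 weights=np.array([1.0, 0.5]),
                                 means=np.array([2.0, 7.0]),       # dominant frequencies
                                 variances=np.array([0.1, 0.3]))
```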
|
ICLR 2024 poster
|
/pdf/76e862d3a8b7e1e2b17390f88070152bfd1bc40a.pdf
|
ICLR.cc/2025/Conference
|
0bcRCD7YUx
|
VALL-E 2: Neural Codec Language Models are Human Parity Zero-Shot Text to Speech Synthesizers
|
This paper introduces VALL-E 2, the latest advancement in neural codec language models that marks a milestone in zero-shot text-to-speech synthesis (TTS), achieving human parity for the first time. Based on its predecessor, VALL-E, this work introduces two significant enhancements: Repetition Aware Sampling refines the original nucleus sampling process by accounting for token repetition in the decoding history. It not only stabilizes the decoding but also circumvents the infinite loop issue. Grouped Code Modeling organizes codec codes into groups to effectively shorten the sequence length, which not only boosts inference speed but also addresses the challenges of long sequence modeling. Our experiments on the LibriSpeech and VCTK datasets show that VALL-E 2 surpasses previous systems in speech robustness, naturalness, and speaker similarity. It is the first of its kind to reach human parity on these benchmarks. Moreover, VALL-E 2 consistently synthesizes high-quality speech, even for sentences that are traditionally challenging due to their complexity or repetitive phrases. The advantages of this work could contribute to valuable endeavors, such as generating speech for individuals with aphasia or people with amyotrophic lateral sclerosis. See https://anonymous/valle2 for demos of VALL-E 2.
|
Rejected_Submission
|
/pdf/579fe66d05f6a3deb73edd5cbcfb8b8a4acf66e6.pdf
|
ICLR.cc/2025/Conference
|
7FQDHv9fD4
|
Decomposing heterogeneous dynamical systems with graph neural networks
|
Natural physical, chemical, and biological dynamical systems are often complex, with heterogeneous components interacting in diverse ways. We show how simple graph neural networks can be designed to jointly learn the interaction rules and the latent heterogeneity from observable dynamics. The learned latent heterogeneity and dynamics can be used to virtually decompose the complex system which is necessary to infer and parameterize the underlying governing equations. We tested the approach with simulation experiments of interacting moving particles, vector fields, and signaling networks. While our current aim is to better understand and validate the approach with simulated data, we anticipate it to become a generally applicable tool to uncover the governing rules underlying complex dynamics observed in nature.
|
Rejected_Submission
|
/pdf/7fe76b9b33733b68e6890d030f74d79cc7b0c68f.pdf
|
ICLR.cc/2024/Conference
|
jKHmjlpViu
|
OpenWebMath: An Open Dataset of High-Quality Mathematical Web Text
|
There is growing evidence that pretraining on high quality, carefully thought-out tokens such as code or mathematics plays an important role in improving the reasoning abilities of large language models. For example, Minerva, a PaLM model finetuned on billions of tokens of mathematical documents from arXiv and the web, reported dramatically improved performance on problems that require quantitative reasoning. However, because all known open source web datasets employ preprocessing that does not faithfully preserve mathematical notation, the benefits of large scale training on quantitative web documents are unavailable to the research community. We introduce OpenWebMath, an open dataset inspired by these works containing 14.7B tokens of mathematical webpages from Common Crawl. We describe in detail our method for extracting text and LaTeX content and removing boilerplate from HTML documents, as well as our methods for quality filtering and deduplication. Additionally, we run small-scale experiments by training 1.4B language models on OpenWebMath, showing that models trained on 14.7B tokens of our dataset surpass the performance of models trained on over 20x the amount of general language data. We hope that our dataset, open-sourced and released on the Hugging Face Hub, will help spur advances in the reasoning abilities of large language models.
|
ICLR 2024 poster
|
/pdf/50b73568b958d08494c43e6d23be4be76f859f62.pdf
|
ICLR.cc/2025/Conference
|
gU4ZgQNsOC
|
Dynamic Loss-Based Sample Reweighting for Improved Large Language Model Pretraining
|
Pretraining large language models (LLMs) on vast and heterogeneous datasets is crucial for achieving state-of-the-art performance across diverse downstream tasks. However, current training paradigms treat all samples equally, overlooking the importance or relevance of individual samples throughout the training process. Existing reweighting strategies, which primarily focus on group-level data importance, fail to leverage fine-grained instance-level information and do not adapt dynamically to individual sample importance as training progresses. In this paper, we introduce novel algorithms for dynamic, instance-level data reweighting aimed at improving both the efficiency and effectiveness of LLM pretraining. Our methods adjust the weight of each training sample based on its loss value in an online fashion, allowing the model to dynamically focus on more informative or important samples at the current training stage. In particular, our framework allows us to systematically devise reweighting strategies deprioritizing redundant or uninformative data, which we find tend to work best.
Furthermore, we develop a new theoretical framework for analyzing the impact of loss-based reweighting on the convergence of gradient-based optimization, providing the first formal characterization of how these strategies affect convergence bounds. We empirically validate our approach across a spectrum of tasks, from pretraining 7B and 1.4B parameter LLMs to smaller-scale language models and linear regression problems, demonstrating that our loss-based reweighting approach can lead to faster convergence and significantly improved performance.
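A minimal sketch of online, loss-based instance reweighting, assuming a softmax-over-losses weighting rule; the specific weighting scheme and temperature here are illustrative stand-ins rather than the paper's exact strategy:

```python
import torch
import torch.nn.functional as F

def reweighted_loss(logits, targets, temperature=1.0):
    """Per-sample cross-entropy combined with online, loss-dependent weights.
    Weights are a softmax over the (detached) per-sample losses, so higher-loss
    samples count more and easy, low-loss ones are deprioritized."""
    per_sample = F.cross_entropy(logits, targets, reduction="none")     # (batch,)
    weights = torch.softmax(per_sample.detach() / temperature, dim=0)   # no grad through weights
    return (weights * per_sample).sum()

# usage inside an ordinary training step
model = torch.nn.Linear(10, 3)
x, y = torch.randn(8, 10), torch.randint(0, 3, (8,))
loss = reweighted_loss(model(x), y)
loss.backward()
```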
|
ICLR 2025 Poster
|
/pdf/c555e59407ab06c1d2bbfabea197b2ebbea4c16c.pdf
|
ICLR.cc/2025/Conference
|
tmwR707odU
|
Curriculum GNN-LLM Alignment for Text-Attributed Graphs
|
Aligning Graph Neural Networks (GNNs) and Large Language Models (LLMs) benefits in leveraging both textual and structural knowledge for Text-attributed Graphs (TAGs) learning, which has attracted an increasing amount of attention in the research community. Most existing literature assumes a uniformly identical level of learning difficulties across texts and structures in TAGs, however, we discover the $\textit{text-structure imbalance}$ problem in real-world TAGs, $\textit{i.e.}$, nodes exhibit various levels of difficulties when learning different textual and structural information. Existing works ignoring these different difficulties may result in under-optimized GNNs and LLMs with over-reliance on either simplistic text or structure, thus failing to conduct node classifications that involve simultaneously learning complex text and structural information for nodes in TAGs. To address this problem, we propose a novel Curriculum GNN-LLM Alignment ($\textbf{CurGL}$) method, which strategically balances the learning difficulties of textual and structural information on a node-by-node basis to enhance the alignment between GNNs and LLMs. Specifically, we first propose a text-structure difficulty measurer to estimate the learning difficulty of both text and structure in a node-wise manner. Then, we propose a class-based node selection strategy to balance the training process via gradually scheduling more nodes. Finally, we propose the curriculum co-play alignment by iteratively promoting useful information from GNNs and LLMs, to progressively enhance both components with balanced textual and structural information. Extensive experiments on real-world datasets demonstrate that our proposed $\textbf{CurGL}$ method is able to outperform state-of-the-art GraphLLM, curriculum learning, as well as GNN baselines. To the best of our knowledge, this is the first study of curriculum alignment on TAGs.
|
Rejected_Submission
|
/pdf/74962d85d505a6c26a84e954415172b5803c0222.pdf
|
ICLR.cc/2025/Conference
|
gVVoZtiQlt
|
The Phase Transition Phenomenon of Shuffled Regression
|
We study the phase transition
phenomenon inherent in the shuffled (permuted) regression problem, which has found numerous applications in databases, privacy, data analysis, etc. For the permuted regression task: $\mathbf{Y} = \mathbf{\Pi}\mathbf{X}\mathbf{B}$, the goal is to recover the permutation matrix $\mathbf{\Pi}$ as well as the coefficient matrix $\mathbf{B}$. It has been empirically observed in prior studies that when recovering $\mathbf{\Pi}$, there exists a phase transition phenomenon: the error rate drops to zero rapidly once the parameters reach certain thresholds. In this study, we aim to precisely identify the locations of the phase transition points by leveraging techniques from {\em message passing} (MP).
In our analysis, we first transform the permutation recovery problem into a probabilistic graphical model. Then, we leverage the analytical tools rooted in the message passing (MP) algorithm and derive an equation to track the convergence of the MP algorithm. By linking this equation to the branching random walk process, we are able to characterize the impact of the \emph{signal-to-noise-ratio} ($\mathsf{snr}$) on the permutation recovery. Depending on whether the signal is given or not, we separately investigate the oracle case and the non-oracle case. The bottleneck in identifying the phase transition regimes lies in deriving closed-form formulas for the corresponding critical points, but only in rare scenarios can one obtain such precise expressions. To tackle this challenge, we propose the Gaussian approximation method, which allows us to obtain the closed-form formulas in almost all scenarios. In the oracle case, our method can fairly accurately predict the phase transition $\mathsf{snr}$. In the non-oracle case, our proposed algorithm can predict the maximum allowed number of permuted rows and uncover its dependency on the sample number.
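A small oracle-case sketch, assuming $\mathbf{X}$ and $\mathbf{B}$ are known, that recovers the permutation with a linear assignment baseline; it only illustrates the recovery task and does not implement the message-passing analysis:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def recover_permutation(Y, XB):
    """Match each row of Y to a row of X @ B by minimizing squared-Euclidean
    cost with a linear assignment solver, returning the permutation matrix."""
    cost = ((Y[:, None, :] - XB[None, :, :]) ** 2).sum(-1)   # (n, n) pairwise costs
    rows, cols = linear_sum_assignment(cost)
    Pi = np.zeros_like(cost)
    Pi[rows, cols] = 1.0
    return Pi

# simulate Y = Pi X B + noise and measure the row-wise recovery error
n, d, m = 50, 5, 3
X, B = np.random.randn(n, d), np.random.randn(d, m)
perm = np.random.permutation(n)
Y = (X @ B)[perm] + 0.01 * np.random.randn(n, m)
Pi_hat = recover_permutation(Y, X @ B)
error_rate = (Pi_hat.argmax(axis=1) != perm).mean()   # drops sharply at high snr
```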
|
Rejected_Submission
|
/pdf/808fe3f9ca9f306674d282b52bd2477f0703382b.pdf
|
ICLR.cc/2025/Conference
|
kwY3eL3QVh
|
Feature-guided score diffusion for sampling conditional densities
|
Score diffusion methods can learn probability densities from samples. The score of the noise-corrupted density is estimated using a deep neural network, which is then used to iteratively transport a Gaussian white noise density to a target density. Variants for conditional densities have been developed, but correct estimation of the corresponding scores is difficult.
We avoid these difficulties by introducing an algorithm that operates by projecting the score onto the target class mean in a learned feature space.
The features and the projected score are computed using the same network, which is trained by optimizing a single denoising loss.
Learned feature vectors of same-class images are tightly clustered relative to those of different classes.
We show that feature class centroids provide a low-dimensional Euclidean embedding of the class conditional densities.
We demonstrate that, when trained on a dataset of mixed image classes,
this projected score can generate high quality and diverse samples from the conditioning class.
Conditional generation can be performed using feature vectors interpolated between those of the training set, demonstrating out-of-distribution generalization.
|
Rejected_Submission
|
/pdf/e05dc56dc75450483af41194ada1ed59b2ec6c0f.pdf
|
ICLR.cc/2024/Conference
|
TzE7EG7S4i
|
High-Dimensional Geometric Streaming for Nearly Low Rank Data
|
We study streaming algorithms for the outer $(d-k)$-radius estimation of a set of points $a_1, \ldots ,a_n \in \mathbb{R}^d$. The problem asks to compute the minimum over all $k$-dimensional flats $F$ of $\max_i d(a_i, F)$, where $d(u, F)$ denotes the distance of a point $u$ from the flat $F$. This problem has been extensively studied in earlier works (Varadarajan et al., SIAM J. Comput. 2006) over a wide range of values of $d$, $k$ and $d-k$. The earlier algorithms are based on SDP relaxations of the problem and are not applicable in the streaming setting where we do not have space to store all the rows that we see. We give an efficient streaming coreset algorithm that selects $\text{poly}(k, \log n)$ rows and at the end outputs a $\text{poly}(k, \log n)$ approximation to the outer $(d-k)$-radius. The algorithm only uses $d \cdot \text{poly}(k, \log n)$ bits of space and runs in an overall time of $O(\text{nnz}(A) \cdot \log n + \text{poly}(d, \log n))$, where $\text{nnz}(A)$ denotes the number of nonzero entries in the $n \times d$ matrix $A$ with rows given by $a_1, \ldots, a_n \in \mathbb{R}^d$.
In a recent work, Woodruff and Yasuda (FOCS 2022) give streaming algorithms for a number of high-dimensional geometric problems such as width estimation, convex hull estimation, volume estimation etc. Their algorithms require $\Omega(d^2)$ bits of space and have an $\Omega(\sqrt{d})$ multiplicative approximation factor even when the rows $a_1,\ldots, a_n$ are “almost” spanned by a $k$ dimensional subspace. We show that when the rows $a_1,\ldots,a_n$ are “almost” spanned by a $k$ dimensional space, our streaming coreset construction algorithm can be used to obtain algorithms that use only $O(d \cdot \text{poly}(k, \log n))$ bits of space and have a multiplicative error of $O(\text{poly}(k, \log n))$. When $d$ is large and $k$ is much smaller than $d$, our algorithms use a much smaller amount of space while guaranteeing a better approximation. We pay an additive error depending on how close the rows $a_1,\ldots,a_n$ are to being spanned by a rank $k$ subspace.
As another application of our algorithm, we show that our streaming coreset can also be used to obtain approximations to the $\ell_p$ subspace approximation problem using exponential random variables to embed the $\ell_p$ subspace approximation problem into an instance of the $\ell_{\infty}$ subspace approximation problem.
|
Rejected_Submission
|
/pdf/b97aa5fd29e6fab363e0e6f3ce175ec3a439ad4a.pdf
|
ICLR.cc/2024/Conference
|
YkCjojDG3l
|
PolySketchFormer: Fast Transformers via Sketches for Polynomial Kernels
|
The quadratic complexity of attention in transformer architectures remains a big bottleneck in scaling up large foundation models for long context. In fact, recent theoretical results show the hardness of approximating the output of the softmax attention mechanism in sub-quadratic time assuming the Strong Exponential Time Hypothesis. In this paper, we show how to break this theoretical barrier by replacing softmax with a polynomial function and polynomial sketching. In particular, we show that sketches for the Polynomial Kernel from the randomized numerical linear algebra literature can be used to approximate the polynomial attention, which leads to a significantly faster attention mechanism without assuming any sparse structure for the attention matrix, as has been done in many previous works.
In addition, we propose an efficient block-based algorithm that lets us apply the causal mask to the attention matrix without explicitly realizing the $n \times n$ attention matrix and compute the output of the polynomial attention mechanism in time linear in the context length. The block-based algorithm gives significant speedups over the *cumulative sum* algorithm used by Performer to apply the causal mask to the attention matrix. These observations help us design *PolySketchFormer*, a practical linear-time transformer architecture for language modeling with provable guarantees.
We validate our design empirically by training language models with long context lengths. We first show that the eval perplexities of our models are comparable to that of models trained with softmax attention. We then show that for large context lengths our training times are significantly faster than FlashAttention.
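A toy sketch of linear-time polynomial attention with a degree-2 kernel computed through an explicit feature map; the paper instead uses randomized sketches of the polynomial kernel and a block-based causal mask, neither of which is shown here:

```python
import numpy as np

def degree2_poly_attention(Q, K, V):
    """Attention with weights proportional to (q . k)^2 instead of softmax.
    Using the explicit feature map phi(x) = vec(x x^T), the output is computed
    as phi(Q) @ (phi(K)^T V), which is linear in the sequence length n."""
    n, d = Q.shape
    phi = lambda X: np.einsum("nd,ne->nde", X, X).reshape(n, d * d)
    phiQ, phiK = phi(Q), phi(K)
    numer = phiQ @ (phiK.T @ V)          # (n, d^2) @ (d^2, d_v), no n x n matrix
    denom = phiQ @ phiK.sum(axis=0)      # row-wise normalizer sum_j (q_i . k_j)^2
    return numer / denom[:, None]

Q = K = V = np.random.randn(6, 4)
out = degree2_poly_attention(Q, K, V)
```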
|
Rejected_Submission
|
/pdf/6f4cf0692becb9eb65efd6e0a331593437181c3f.pdf
|
ICLR.cc/2024/Conference
|
MQrFaQC3kj
|
Dataset Fairness: Achievable Fairness On Your Data With Utility Guarantees
|
In machine learning fairness, training models which minimize disparity across different sensitive groups often leads to diminished accuracy, a phenomenon known as the fairness-accuracy trade-off. The severity of this trade-off fundamentally depends on dataset characteristics such as dataset imbalances or biases, and therefore using a universal fairness requirement across datasets remains questionable and can often lead to models with varying and substantially low utility. To address this, we present a computationally efficient approach to approximate the fairness-accuracy trade-off curve tailored to individual datasets, backed by rigorous statistical guarantees. By utilizing the You-Only-Train-Once (YOTO) framework, our approach mitigates the computational burden of having to train multiple models when approximating the trade-off curve. Moreover, we introduce confidence intervals around this curve, offering a statistically grounded perspective on acceptable range of fairness violations for any given accuracy threshold. Our empirical evaluation which includes applications to tabular data, computer vision and natural language datasets, underscores that our approach can guide practitioners in accuracy-constrained fairness decisions across various data modalities.
|
Rejected_Submission
|
/pdf/1d2e9e0624d323f4f45aa471f7195aa6b466edaf.pdf
|
ICLR.cc/2025/Conference
|
LNL7zKvm7e
|
Frame-Voyager: Learning to Query Frames for Video Large Language Models
|
Video Large Language Models (Video-LLMs) have made remarkable progress in video understanding tasks. However, they are constrained by the maximum length of input tokens, making it impractical to input entire videos. Existing frame selection approaches, such as uniform frame sampling and text-frame retrieval, fail to account for the information density variations in the videos or the complex instructions in the tasks, leading to sub-optimal performance. In this paper, we propose Frame-Voyager that learns to query informative frame combinations, based on the given textual queries in the task. To train Frame-Voyager, we introduce a new data collection and labeling pipeline, by ranking frame combinations using a pre-trained Video-LLM. Given a video of M frames, we traverse its T-frame combinations, feed them into a Video-LLM, and rank them based on Video-LLM's prediction losses. Using this ranking as supervision, we train Frame-Voyager to query the frame combinations with lower losses. In experiments, we evaluate Frame-Voyager on four Video Question Answering benchmarks by plugging it into two different Video-LLMs. The experimental results demonstrate that Frame-Voyager achieves impressive results in all settings, highlighting its potential as a plug-and-play solution for Video-LLMs.
|
ICLR 2025 Poster
|
/pdf/154af3dfa9ab115b4d4a01fa334c6ab45c7ad3af.pdf
|
ICLR.cc/2025/Conference
|
0e2pcSxQJS
|
PN-GAIL: Leveraging Non-optimal Information from Imperfect Demonstrations
|
Imitation learning aims at constructing an optimal policy by emulating expert demonstrations. However, the prevailing approaches in this domain typically presume that the demonstrations are optimal, an assumption that seldom holds true in the complexities of real-world applications. The data collected in practical scenarios often contains imperfections, encompassing both optimal and non-optimal examples. In this study, we propose Positive-Negative Generative Adversarial Imitation Learning (PN-GAIL), a novel approach that falls within the framework of Generative Adversarial Imitation Learning (GAIL). PN-GAIL innovatively leverages non-optimal information from imperfect demonstrations, allowing the discriminator to comprehensively assess the positive and negative risks associated with these demonstrations. Furthermore, it requires only a small subset of labeled confidence scores. Theoretical analysis indicates that PN-GAIL deviates from the non-optimal data while mimicking imperfect demonstrations. Experimental results demonstrate that PN-GAIL surpasses conventional baseline methods in dealing with imperfect demonstrations, thereby significantly augmenting the practical utility of imitation learning in real-world contexts. Our codes are available at https://github.com/QiangLiuT/PN-GAIL.
|
ICLR 2025 Poster
|
/pdf/eb36df713e1b82b2a60b5674944b7f5da9b61a75.pdf
|
ICLR.cc/2024/Conference
|
fx8AJDQRVB
|
Image Super-Resolution via Latent Diffusion: A Sampling-Space Mixture of Experts and Frequency-Augmented Decoder Approach
|
The recent use of diffusion prior, enhanced by pre-trained text-image models, has markedly elevated the performance of image super-resolution (SR). To alleviate the huge computational cost required by pixel-based diffusion SR, latent-based methods utilize a feature encoder to transform the image and then implement the SR image generation in a compact latent space. Nevertheless, there are two major issues that limit the performance of latent-based diffusion. First, the compression of latent space usually causes reconstruction distortion. Second, huge computational cost still constrains the parameter scale of the diffusion model. To counteract these issues, we first propose a frequency compensation module that enhances the frequency components from latent space to pixel space. The reconstruction distortion (especially for high-frequency information) can be significantly decreased. Then, we propose to use Sample-Space Mixture of Experts (SS-MoE) to achieve more powerful latent-based SR, which steadily improves the capacity of the model without a significant increase in inference costs. These carefully crafted designs contribute to performance improvements in largely explored 4× blind super-resolution benchmarks and extend to large magnification factors, i.e., 8× image SR benchmarks.
|
Rejected_Submission
|
/pdf/b0faec93c0d2eb587c8eeeba5e1b8346e71b01b6.pdf
|
ICLR.cc/2024/Conference
|
KUz8QXAgFV
|
Bridging Autoregressive and Masked Modeling for Enhanced Visual Representation Learning
|
Autoregressive models have demonstrated superior performance in natural language processing due to their ability to handle large-scale training and generating ability. However, their potential in computer vision has not been fully explored due to some key challenges they still face. Currently, masked modeling methods such as MAE are dominant in this field. By analyzing autoregressive and masked modeling methods in a probabilistic way, we find that they can complement each other. Based on this, we propose a general formulation and modeling framework that combines the benefits of both, named \textbf{G}enerative \textbf{V}isual \textbf{P}retraining (GVP). Our unified probabilistic framework allows for different training strategies, including masked modeling and autoregressive modeling, to be realized simultaneously. Our framework can be adapted for various downstream tasks and outperform existing methods in several benchmarks, including linear probing, fine-tuning and transfer learning. This work provides a promising direction for future research in generative masked visual representation learning.
|
Rejected_Submission
|
/pdf/b2dc37a8ab9c031ae2c8744bed1f92916bf685bd.pdf
|
ICLR.cc/2025/Conference
|
PHg4rAXFVH
|
RTop-K: Ultra-Fast Row-Wise Top-K Selection for Neural Network Acceleration on GPUs
|
Top-k selection algorithms are fundamental in a wide range of applications, including high-performance computing, information retrieval, big data processing, and neural network model training. In this paper, we present RTop-K, a highly efficient parallel row-wise top-k selection algorithm specifically designed for GPUs. RTop-K leverages a binary search-based approach to optimize row-wise top-k selection, providing a scalable and accelerated solution.
We conduct a detailed analysis of early stopping in our algorithm, showing that it effectively maintains the testing accuracy of neural network models while substantially improving performance. Our GPU implementation of RTop-K demonstrates superior performance over state-of-the-art row-wise top-k GPU implementations, achieving an average speed-up of up to 11.49× with early stopping and 7.29× without early stopping. Moreover, RTop-K accelerates the overall training workflow of MaxK-GNNs, delivering speed-ups ranging from 11.97% to 33.29% across different models and datasets.
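As a rough illustration of what a binary-search-based row-wise top-k can look like, here is a hedged NumPy sketch; the paper's GPU kernel, early-stopping rule, and tie handling are not reproduced, and `max_iters` and the bisection on value thresholds are assumptions made here for clarity.

```python
# Hedged sketch: per-row bisection on a value threshold until roughly k entries remain.
import numpy as np

def rowwise_topk_mask(x: np.ndarray, k: int, max_iters: int = 32) -> np.ndarray:
    """Boolean mask selecting approximately the top-k entries of each row of x."""
    lo = x.min(axis=1, keepdims=True)  # thresholds at or below this keep every entry
    hi = x.max(axis=1, keepdims=True)  # thresholds above this keep (almost) none
    for _ in range(max_iters):
        mid = (lo + hi) / 2.0
        counts = (x >= mid).sum(axis=1, keepdims=True)
        lo = np.where(counts > k, mid, lo)  # too many survivors: raise the floor
        hi = np.where(counts > k, hi, mid)  # few enough survivors: lower the ceiling
    return x >= hi  # about k True entries per row (exact up to ties / iteration budget)

rng = np.random.default_rng(0)
scores = rng.standard_normal((4, 1000)).astype(np.float32)
print(rowwise_topk_mask(scores, k=16).sum(axis=1))
```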
|
ICLR 2025 Poster
|
/pdf/dd458214d8ab9259ee817a3caa21a7cfb0180c33.pdf
|
ICLR.cc/2025/Conference
|
l0fn10vSyM
|
Semi-Parametric Retrieval via Binary Bag-of-Tokens Index
|
Information retrieval has transitioned from standalone systems into essential components across broader applications, with indexing efficiency, cost-effectiveness, and freshness becoming increasingly critical yet often overlooked. In this paper, we introduce SemI-parametric Disentangled Retrieval (SiDR), a bi-encoder retrieval framework that decouples retrieval index from neural parameters to enable efficient, low-cost, and parameter-agnostic indexing for emerging use cases. Specifically, in addition to using embeddings as indexes like existing neural retrieval methods, SiDR supports a non-parametric tokenization index for search, achieving BM25-like indexing complexity with significantly better effectiveness. Our comprehensive evaluation across 16 retrieval benchmarks demonstrates that SiDR outperforms both neural and term-based retrieval baselines under the same indexing workload: (i) When using an embedding-based index, SiDR exceeds the performance of conventional neural retrievers while maintaining similar training complexity; (ii) When using a tokenization-based index, SiDR drastically reduces indexing cost and time, matching the complexity of traditional term-based retrieval, while consistently outperforming BM25 on all in-domain datasets; (iii) Additionally, we introduce a late parametric mechanism that matches BM25 index preparation time while outperforming other neural retrieval baselines in effectiveness.
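For intuition about the tokenization-based index mode described above, here is a hedged toy sketch of a binary bag-of-tokens index; the whitespace tokenizer, vocabulary, and dot-product scoring are illustrative assumptions rather than SiDR's actual components.

```python
# Hedged sketch: documents indexed as 0/1 token-presence vectors, queried by overlap.
import numpy as np

def build_binary_index(docs, vocab):
    tok2id = {t: i for i, t in enumerate(vocab)}
    index = np.zeros((len(docs), len(vocab)), dtype=np.float32)
    for d, doc in enumerate(docs):
        for tok in doc.lower().split():
            index[d, tok2id[tok]] = 1.0  # presence only, no term weighting
    return index, tok2id

def search(query, index, tok2id, topn=2):
    q = np.zeros(index.shape[1], dtype=np.float32)
    for tok in query.lower().split():
        if tok in tok2id:
            q[tok2id[tok]] = 1.0
    scores = index @ q  # a parametric query encoder could produce q instead
    return np.argsort(-scores)[:topn]

docs = ["neural retrieval with embeddings", "bm25 term based retrieval", "binary bag of tokens index"]
vocab = sorted({t for d in docs for t in d.split()})
index, tok2id = build_binary_index(docs, vocab)
print(search("binary tokens retrieval", index, tok2id))
```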
|
ICLR 2025 Poster
|
/pdf/a4da394aaa515748e010a6bfef21dbe28410d80b.pdf
|
ICLR.cc/2025/Conference
|
hoYFLRNbhc
|
DelTA: An Online Document-Level Translation Agent Based on Multi-Level Memory
|
Large language models (LLMs) have achieved reasonable quality improvements in machine translation (MT).
However, most current research on MT-LLMs still faces significant challenges in maintaining translation consistency and accuracy when processing entire documents.
In this paper, we introduce DelTA, a Document-levEL Translation Agent designed to overcome these limitations.
DelTA features a multi-level memory structure that stores information across various granularities and spans, including Proper Noun Records, Bilingual Summary, Long-Term Memory, and Short-Term Memory, which are continuously retrieved and updated by auxiliary LLM-based components.
Experimental results indicate that DelTA significantly outperforms strong baselines in terms of translation consistency and quality across four open/closed-source LLMs and two representative document translation datasets, achieving an increase in consistency scores by up to 4.58 percentage points and in COMET scores by up to 3.16 points on average.
DelTA employs a sentence-by-sentence translation strategy, ensuring no sentence omissions and offering a memory-efficient solution compared to the mainstream method.
Furthermore, DelTA improves pronoun and context-dependent translation accuracy, and the summary component of the agent also shows promise as a tool for query-based summarization tasks.
The code and data of our approach are released at https://github.com/YutongWang1216/DocMTAgent.
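As a minimal illustration of the memory levels listed above, the following sketch names the four components as plain Python containers; the field names follow the abstract, but how DelTA actually retrieves and updates them with auxiliary LLM-based components is not reproduced here.

```python
# Hedged sketch of a multi-level translation memory with the components named in the abstract.
from dataclasses import dataclass, field

@dataclass
class MultiLevelMemory:
    proper_noun_records: dict = field(default_factory=dict)  # source term -> fixed translation
    bilingual_summary: str = ""                              # running document summary
    long_term_memory: list = field(default_factory=list)     # distilled document-level context
    short_term_memory: list = field(default_factory=list)    # most recent sentence pairs

    def remember(self, src: str, tgt: str, window: int = 5) -> None:
        """Keep only the last `window` sentence pairs in short-term memory."""
        self.short_term_memory.append((src, tgt))
        self.short_term_memory = self.short_term_memory[-window:]

mem = MultiLevelMemory()
mem.proper_noun_records["DelTA"] = "DelTA"
mem.remember("Bonjour le monde.", "Hello world.")
print(mem.short_term_memory)
```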
|
ICLR 2025 Poster
|
/pdf/46c09c6621cdb1818c886adb57ebecfae7302a2d.pdf
|
ICLR.cc/2024/Conference
|
MXI8aSgl53
|
NAG-GS: Semi-Implicit, Accelerated and Robust Stochastic Optimizer
|
Classical machine learning models such as deep neural networks are usually trained by using Stochastic Gradient Descent-based (SGD) algorithms. The classical SGD can be interpreted as a discretization of the stochastic gradient flow. In this paper, we propose a novel, robust and accelerated stochastic optimizer that relies on two key elements: (1) an accelerated Nesterov-like Stochastic Differential Equation (SDE) and (2) its semi-implicit Gauss-Seidel type discretization. The convergence and stability of the obtained method, referred to as NAG-GS, are first studied extensively in the case of the minimization of a quadratic function. This analysis allows us to come up with an optimal learning rate in terms of the convergence rate while ensuring the stability of NAG-GS. This is achieved by the careful analysis of the spectral radius of the iteration matrix and the covariance matrix at stationarity with respect to all hyperparameters of our method. Further, we show that NAG-GS is competitive with state-of-the-art methods such as momentum SGD with weight decay and AdamW for the training of machine learning models such as the logistic regression model, the residual network models on standard computer vision datasets, Transformers within the framework of the GLUE benchmark, and the recent Vision Transformers.
|
Rejected_Submission
|
/pdf/888a3ffa21f8cd4f7073b5542598e6b869c160ae.pdf
|
ICLR.cc/2024/Conference
|
OZWHYyfPwY
|
Don't trust your eyes: on the (un)reliability of feature visualizations
|
How do neural networks extract patterns from pixels? Feature visualizations attempt to answer this important question by visualizing highly activating patterns through optimization. Today, visualization methods form the foundation of our knowledge about the internal workings of neural networks, as a type of mechanistic interpretability. Here we ask: How reliable are feature visualizations? We start our investigation by developing network circuits that trick feature visualizations into showing arbitrary patterns that are completely disconnected from normal network behavior on natural input. We then provide evidence for a similar phenomenon occurring in standard, unmanipulated networks: feature visualizations are processed very differently from standard input, casting doubt on their ability to "explain" how neural networks process natural images. This can be used as a sanity check for feature visualizations. We underpin our empirical findings by theory proving that the set of functions that can be reliably understood by feature visualization is extremely small and does not include general black-box neural networks. Therefore, a promising way forward could be the development of networks that enforce certain structures in order to ensure more reliable feature visualizations.
|
Rejected_Submission
|
/pdf/ffec45105400f9c6588b26214d24ad100af48cc1.pdf
|
ICLR.cc/2025/Conference
|
t717joHHSc
|
Mitigate Position Bias in Large Language Models via Scaling a Single Dimension
|
Large Language Models (LLMs) are increasingly applied in various real-world scenarios due to their excellent generalization capabilities and robust generative abilities. However, they exhibit position bias, also known as "lost in the middle", a phenomenon that is especially pronounced in long-context scenarios, which indicates the placement of the key information in different positions of a prompt can significantly affect accuracy. This paper first explores the micro-level manifestations of position bias, concluding that attention weights are a micro-level expression of position bias. It further identifies that, in addition to position embeddings, causal attention mask also contributes to position bias by creating position-specific hidden states. Based on these insights, we propose a method to mitigate position bias by scaling this positional hidden states. Experiments on the NaturalQuestions Multi-document QA, KV retrieval, LongBench and timeline reorder tasks, using various models including RoPE models, context window-extended models, and Alibi models, demonstrate the effectiveness and generalizability of our approach. Our method can improve performance by up to 15.2% by modifying just one dimension of hidden states.
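The intervention described above can be pictured with a tiny sketch that rescales one dimension of the hidden states; which dimension carries positional information and what scaling factor to use are model-specific findings in the paper, so both are placeholders here.

```python
# Hedged sketch: shrink a single (assumed) positional dimension of the hidden states.
import numpy as np

def scale_positional_dimension(hidden: np.ndarray, dim: int, factor: float) -> np.ndarray:
    """hidden: (seq_len, hidden_size); returns a copy with one dimension rescaled."""
    out = hidden.copy()
    out[:, dim] *= factor
    return out

h = np.random.default_rng(0).standard_normal((16, 64)).astype(np.float32)
h_scaled = scale_positional_dimension(h, dim=3, factor=0.5)  # dim and factor are placeholders
print(float(h[:, 3].std()), float(h_scaled[:, 3].std()))
```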
|
Rejected_Submission
|
/pdf/6ad1427fc6845504bfa1ced6d3de9d58b6933c74.pdf
|
ICLR.cc/2025/Conference
|
NdNuKMEv9y
|
Improving Adaptive Moment Optimization via Preconditioner Diagonalization
|
Modern adaptive optimization methods, such as Adam and its variants, have emerged as the most widely used tools in deep learning over recent years. These algorithms offer automatic mechanisms for dynamically adjusting the update step based on estimates of gradient statistics. Compared to traditional algorithms like Stochastic Gradient Descent, these adaptive methods are typically more robust to model scale and hyperparameter tuning. However, the gradient statistics employed by these methods often do not leverage sufficient gradient covariance information, leading to suboptimal updates in certain directions of the parameter space and potentially slower convergence. In this work, we keep track of such covariance statistics in the form of a structured preconditioner matrix. Unlike other works, our approach does not apply direct approximations to estimate this matrix. We instead implement an invertible transformation that maps the preconditioner matrix into a new space where it becomes approximately diagonal. This enables a diagonal approximation of the preconditioner matrix in the transformed space, offering several computational advantages. Empirical results show that our approach can substantially enhance the convergence speed of the modern adaptive optimizers. Notably, for large language models like LLaMA, we can achieve a speedup of 2x compared to the baseline Adam. Additionally, our method can be integrated with memory-efficient optimizers like Adafactor to manage computational overhead.
|
Rejected_Submission
|
/pdf/b184a61f514278a4970a0ecd39a35354f609147e.pdf
|
ICLR.cc/2025/Conference
|
Vf5ZUalFk8
|
Conformal Reasoning: Uncertainty Estimation in Interactive Environments
|
We introduce conformal reasoning, a principled method for models in interactive environments to reason about their uncertainty and decide whether to seek out more information or to return a prediction. The challenge with standard conformal prediction---a popular statistical framework for uncertainty estimation that constructs prediction sets with formal coverage guarantees---is that it relies on a fixed set of calibration data points. In interactive environments, however, the calibration trajectories require certain termination criteria determined a priori, introducing heuristic bias and/or circular dependency that break the assumptions needed for coverage guarantees. We address this issue by building on adaptive conformal inference techniques. On two real-world tasks on medical diagnosis and embodied question answering, we show that conformal reasoning empirically achieves its theoretical coverage guarantees---in contrast with standard conformal prediction approaches that can significantly over- or under-cover---while improving exploration efficiency by approximately 20% on both tasks.
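Since the abstract says the method builds on adaptive conformal inference, a one-line version of that style of update may help; the step size `gamma` and the way coverage errors are observed in an interactive environment are assumptions here, not the paper's procedure.

```python
# Hedged sketch of an adaptive-conformal-style update of the working miscoverage level.
def aci_update(alpha_t: float, missed: bool, alpha_target: float = 0.1, gamma: float = 0.01) -> float:
    """Lower alpha (widen sets) after a coverage miss, raise it after a success."""
    err = 1.0 if missed else 0.0
    return alpha_t + gamma * (alpha_target - err)

alpha = 0.1
for missed in [False, False, True, False, True, True, False]:
    alpha = aci_update(alpha, missed)
print(round(alpha, 4))  # drifts so that long-run miscoverage tracks alpha_target
```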
|
Rejected_Submission
|
/pdf/b6dc068fce2a35722d1ada14eb4c6486d4fc8604.pdf
|
ICLR.cc/2024/Conference
|
W0zgCR6FIE
|
Spawrious: A Benchmark for Fine Control of Spurious Correlation Biases
|
Spurious correlations (SCs) occur when a classifier relies on non-predictive features that happen to be correlated with the labels in the training data. For example, a classifier may misclassify dog breeds based on the background of dog images. This happens when the backgrounds are correlated with other breeds in the training data, leading to misclassifications during test time. Previous SC benchmark datasets suffer from varying issues, e.g., over-saturation or only containing one-to-one (O2O) SCs, but no many-to-many (M2M) SCs arising between groups of spurious attributes and classes.
In this paper, we present Spawrious-{O2O, M2M}-{Easy, Medium, Hard}, an image classification benchmark suite containing spurious correlations among different dog breeds and background locations. To create this dataset, we employ a text-to-image model to generate photo-realistic images, and an image captioning model to filter out unsuitable ones. The resulting dataset is of high quality, containing approximately 152,000 images.
Our experimental results demonstrate that state-of-the-art group robustness methods struggle with Spawrious, most notably on the hardest split with <73% accuracy. By examining model misclassifications, we detect reliances on spurious backgrounds, demonstrating that our dataset provides a significant challenge to drive future research.
|
Rejected_Submission
|
/pdf/593ba66e897abc2be09fcdc5ee0fda34aab2bcfd.pdf
|
ICLR.cc/2024/Conference
|
r0BcyqWAcj
|
Loci-Segmented: Improving Scene Segmentation Learning
|
Slot-oriented processing approaches for compositional scene representation have recently undergone a tremendous development. We present Loci-Segmented (Loci-s), an advanced scene segmentation neural network that extends the slot-based location and identity tracking architecture Loci (Traub et al., ICLR 2023). The main advancements are (i) the addition of a pre-trained dynamic background module; (ii) a hyper-convolution encoder module, which enables object-focused bottom-up processing; and (iii) a cascaded decoder module, which successively generates object masks, masked depth maps, and masked, depth-map-informed RGB reconstructions. The background module features the learning of both a foreground identifying module and a background re-generator. We further improve performance via (a) the integration of depth information as well as improved slot assignments via (b) slot-location-entity regularization and (c) a prior segmentation network. Even without these latter improvements, the results reveal superior segmentation performance in the MOVi datasets and in another established dataset collection. With all improvements, Loci-s achieves a 32% better intersection over union (IoU) score in MOVi-E than the previous best. We furthermore show that Loci-s generates well-interpretable latent representations. We believe that these representations may serve as a foundation-model-like interpretable basis for solving downstream tasks, such as grounding language and context- and goal-conditioned event processing.
|
Rejected_Submission
|
/pdf/b5f90ef00a0788b80eb3b4956a0e1dc9402f0bd7.pdf
|
ICLR.cc/2024/Conference
|
aqTipMg9CZ
|
Contextual Molecule Representation Learning from Chemical Reaction Knowledge
|
Self-supervised learning has emerged as a powerful tool for harnessing large amounts of unlabelled data to obtain meaningful representations. However, prevailing techniques such as reconstructing masked sub-units are inapplicable to Molecular Representation Learning (MRL) due to the high degree of freedom in possible combinations of atoms in molecules. In this work, we propose a self-supervised learning framework, \textit{REMO}, which pre-trains graph/Transformer encoders on 1.7 million chemical reactions by taking advantage of well-defined rules of atom combinations in common chemical reactions. Specifically, two pre-training objectives are proposed, including masked reaction centre reconstruction and reaction centre identification. \textit{REMO} offers a novel solution to MRL by leveraging the unique characteristics of chemical reactions as knowledge context for pre-training, which effectively supports diverse downstream molecular tasks with minimal finetuning. Experimental results show that \textit{REMO} outperforms masked modeling of single molecules in various downstream tasks.
|
Rejected_Submission
|
/pdf/c16a8da5995125c7b85c29be785afff82919297d.pdf
|
ICLR.cc/2025/Conference
|
fO0YO9giQV
|
AnyECG: Foundational Models for Electrocardiogram Analysis
|
Electrocardiogram (ECG), a non-invasive and affordable tool for cardiac monitoring, is highly sensitive in detecting acute heart attacks. However, due to the lengthy nature of ECG recordings, numerous machine learning methods have been developed for automated heart disease detection to reduce human workload. Despite these efforts, performance remains suboptimal. A key obstacle is the inherent complexity of ECG data, which includes heterogeneity (e.g., varying sampling rates), high levels of noise, demographic-related pattern shifts, and intricate rhythm-event associations. To overcome these challenges, this paper introduces AnyECG, a foundational model designed to extract robust representations from any real-world ECG data. Specifically, a tailored ECG Tokenizer encodes each fixed-duration ECG fragment into a token and, guided by proxy tasks, converts noisy, continuous ECG features into discrete, compact, and clinically meaningful local rhythm codes. These codes encapsulate basic morphological, frequency, and demographic information (e.g., sex), effectively mitigating signal noise. We further pre-train the AnyECG to learn rhythmic pattern associations across ECG tokens, enabling the capture of cardiac event semantics. By being jointly pre-trained on diverse ECG data sources, AnyECG is capable of generalizing across a wide range of downstream tasks where ECG signals are recorded from various devices and scenarios. Experimental results in anomaly detection, arrhythmia detection, corrupted lead generation, and ultra-long ECG signal analysis demonstrate that AnyECG learns common ECG knowledge from data and significantly outperforms cutting-edge methods in each respective task.
|
Rejected_Submission
|
/pdf/f6f79b7c812bb3bcdb8d0c75dbab3331a8744433.pdf
|
ICLR.cc/2025/Conference
|
6fDjUoEQvm
|
HyperDAS: Towards Automating Mechanistic Interpretability with Hypernetworks
|
Mechanistic interpretability has made great strides in identifying neural network features (e.g., directions in hidden activation space) that mediate concepts (e.g., *the birth year of a Nobel laureate*) and enable predictable manipulation. Distributed alignment search (DAS) leverages supervision from counterfactual data to learn concept features within hidden states, but DAS assumes we can afford to conduct a brute force search over potential feature locations. To address this, we present HyperDAS, a transformer-based hypernetwork architecture that (1) automatically locates the token-positions of the residual stream that a concept is realized in and (2) learns features of those residual stream vectors for the concept. In experiments with Llama3-8B, HyperDAS achieves state-of-the-art performance on the RAVEL benchmark for disentangling concepts in hidden states. In addition, we review the design decisions we made to mitigate the concern that HyperDAS (like all powerful interpretability methods) might inject new information into the target model rather than faithfully interpreting it.
|
ICLR 2025 Poster
|
/pdf/db66788476e93fff1d62b45467b167b9158fdfd6.pdf
|
ICLR.cc/2024/Conference
|
rmXXKxQpOR
|
On the Provable Advantage of Unsupervised Pretraining
|
Unsupervised pretraining, which learns a useful representation using a large amount of unlabeled data to facilitate the learning of downstream tasks, is a critical component of modern large-scale machine learning systems. Despite its tremendous empirical success, the rigorous theoretical understanding of why unsupervised pretraining generally helps remains rather limited---most existing results are restricted to particular methods or approaches for unsupervised pretraining with specialized structural assumptions. This paper studies a generic framework,
where the unsupervised representation learning task is specified by an abstract class of latent variable models $\Phi$ and the downstream task is specified by a class of prediction functions $\Psi$. We consider a natural approach of using Maximum Likelihood Estimation (MLE) for unsupervised pretraining and Empirical Risk Minimization (ERM) for learning downstream tasks. We prove that, under a mild ``informative'' condition, our algorithm achieves an excess risk of $\tilde{\mathcal{O}}(\sqrt{\mathcal{C}_\Phi/m} + \sqrt{\mathcal{C}_\Psi/n})$ for downstream tasks, where $\mathcal{C}_\Phi, \mathcal{C}_\Psi$ are complexity measures of function classes $\Phi, \Psi$, and $m, n$ are the number of unlabeled and labeled data respectively. Compared to the baseline of $\tilde{\mathcal{O}}(\sqrt{\mathcal{C}_{\Phi \circ \Psi}/n})$ achieved by performing supervised learning using only the labeled data, our result rigorously shows the benefit of unsupervised pretraining when $m \gg n$ and $\mathcal{C}_{\Phi\circ \Psi} > \mathcal{C}_\Psi$. This paper further shows that our generic framework covers a wide range of approaches for unsupervised pretraining, including factor models, Gaussian mixture models, and contrastive learning.
|
ICLR 2024 spotlight
|
/pdf/0acbd62d42fa6cae47b05447238cec98212499f9.pdf
|
ICLR.cc/2024/Conference
|
1IIiQnLRe8
|
Diversity Modeling for Semantic Shift Detection
|
Semantic shift detection faces a big challenge of modeling non-semantic feature diversity while suppressing generalization to unseen semantic shifts. Existing reconstruction-based approaches are either not constrained well to avoid over-generalization or not general enough to model diversity-agnostic in-distribution samples. Both may lead to feature confusion near the decision boundary and fail to identify various semantic shifts. In this work, we propose Bi-directional Regularized Diversity Modulation (BiRDM) to model restricted feature diversity for semantic shift detection so as to address the challenging issues in reconstruction-based detection methods. BiRDM modulates feature diversity by controlling spatial transformation with learnable dynamic modulation parameters in latent space. Smoothness Regularization (SmoReg) is introduced to avoid undesired generalization to semantic shift samples. Furthermore, Batch Normalization Simulation (BNSim) coordinating with auxiliary data is leveraged to separately transform different semantic distributions and push potential semantic shift samples away implicitly, making the feature more discriminative. Compared with previous works, BiRDM can successfully model diversity-agnostic non-semantic patterns while alleviating feature confusion in latent space. Experimental results demonstrate the effectiveness of our method.
|
Rejected_Submission
|
/pdf/89108a26a977a996ee3ee4bbe3c614c7c7d4d07a.pdf
|
ICLR.cc/2025/Conference
|
6bKEWevgSd
|
ManiSkill-HAB: A Benchmark for Low-Level Manipulation in Home Rearrangement Tasks
|
High-quality benchmarks are the foundation for embodied AI research, enabling significant advancements in long-horizon navigation, manipulation and rearrangement tasks. However, as frontier tasks in robotics get more advanced, they require faster simulation speed, more intricate test environments, and larger demonstration datasets. To this end, we present MS-HAB, a holistic benchmark for low-level manipulation and in-home object rearrangement. First, we provide a GPU-accelerated implementation of the Home Assistant Benchmark (HAB). We support realistic low-level control and achieve over 3x the speed of prior magical grasp implementations at a fraction of the GPU memory usage. Second, we train extensive reinforcement learning (RL) and imitation learning (IL) baselines for future work to compare against. Finally, we develop a rule-based trajectory filtering system to sample specific demonstrations from our RL policies which match predefined criteria for robot behavior and safety. Combining demonstration filtering with our fast environments enables efficient, controlled data generation at scale.
|
ICLR 2025 Poster
|
/pdf/bcafead0bf180b4a8da019c355309931525f5b2a.pdf
|
ICLR.cc/2025/Conference
|
todLTYB1I7
|
A Principled Evaluation Framework for Neuron Explanations
|
Understanding the function of individual units in a neural network is an important building block for mechanistic interpretability. This is often done by generating a simple text explanation of the behavior of individual neurons or units. However, for these explanations to be useful, we must understand how reliable and truthful they are. In this work we unify many existing explanation evaluation methods under one mathematical framework. This allows us to compare and contrast existing evaluation metrics and understand the evaluation pipeline with increased clarity. We propose two simple sanity checks on the evaluation metrics and show that many commonly used metrics fail these tests and do not change their score after massive changes to the concept labels. Based on our experimental and theoretical results, we propose guidelines that future evaluations should follow and identify good evaluation metrics such as correlation.
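As a small illustration of the correlation-style metric the abstract recommends, the sketch below scores an explanation by the Pearson correlation between a neuron's activations and a binary concept-label vector over probe inputs; the probe data are synthetic, and the paper's exact metric definitions and sanity checks are not reproduced.

```python
# Hedged sketch: correlation between neuron activations and concept-presence labels.
import numpy as np

def correlation_score(activations: np.ndarray, concept_labels: np.ndarray) -> float:
    a = activations - activations.mean()
    c = concept_labels - concept_labels.mean()
    denom = np.sqrt((a * a).sum() * (c * c).sum())
    return float((a * c).sum() / denom) if denom > 0 else 0.0

rng = np.random.default_rng(0)
labels = (rng.random(200) < 0.3).astype(float)          # which probe inputs contain the concept
acts = 2.0 * labels + rng.normal(scale=0.5, size=200)   # neuron fires more when the concept is present
print(round(correlation_score(acts, labels), 3))        # high value -> explanation matches behavior
```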
|
Rejected_Submission
|
/pdf/68f3203a7db99a2ded4aa565ac02d54560eb6404.pdf
|
ICLR.cc/2025/Conference
|
qPw5D0Xahv
|
Minimax Based Fast-training Defense against Adversarial Policy in Two-player Competitive Games
|
Adversarial policies have been shown to exploit vulnerabilities in agents during two-player competitive games, significantly undermining their performance. While existing approaches model the challenge of training robust policies in such environments as the search for Nash equilibrium points in the policy space, this often leads to substantial computational overhead. In this work, we propose MM-FATROL, a novel robust policy training method grounded in the Minimax Theorem, which significantly reduces computational overhead by efficiently identifying promising policy updates. We provide a formal analysis of the speedup achieved by our method. Extensive experiments demonstrate that MM-FATROL not only enhances efficiency but also surpasses the state-of-the-art method in terms of generalization and robustness. Additionally, we discuss the limitations of our approach and the challenges that remain in developing robust policies for more complex game environments.
|
Rejected_Submission
|
/pdf/6607c2bd1859df45eec8b8c8e1876e338e4812d4.pdf
|
ICLR.cc/2024/Conference
|
VdOaaDzDD6
|
Bandits with Ranking Feedback
|
In this paper, we introduce a novel variation of multi-armed bandits called bandits with ranking feedback. Unlike traditional bandits, this variation provides feedback to the learner that allows them to rank the arms based on previous pulls, without quantifying numerically the difference in performance. This type of feedback is well-suited for scenarios where the arms' values cannot be precisely measured using metrics such as monetary scores, probabilities, or occurrences. Common examples include human preferences in matchmaking problems. Furthermore, its investigation answers the theoretical question of how crucial numerical rewards are in bandit settings. In particular, we study the problem of designing no-regret algorithms with ranking feedback both in the stochastic and adversarial settings. We show that, with stochastic rewards, differently from what happens with non-ranking feedback, no algorithm can suffer a logarithmic regret in the time horizon $T$ in the instance-dependent case. Furthermore, we provide two algorithms. The first, namely DREE, guarantees a superlogarithmic regret in $T$ in the instance-dependent case thus matching our lower bound, while the second, namely R-LPE, guarantees a regret of $\mathcal{\widetilde O}(\sqrt{T})$ in the instance-independent case. Remarkably, we show that no algorithm can have an optimal regret bound in both instance-dependent and instance-independent cases. We also prove that no algorithm can achieve a sublinear regret when the rewards are adversarial. Finally, we numerically evaluate our algorithms in a testbed, and we compare their performance with several baselines from the literature.
|
Rejected_Submission
|
/pdf/7c494796a1487e28f965415614f73fb2fd98f6c0.pdf
|
ICLR.cc/2025/Conference
|
YaBiGjuDiC
|
A Common Pitfall of Margin-based Language Model Alignment: Gradient Entanglement
|
Reinforcement Learning from Human Feedback (RLHF) has become the predominant approach for aligning language models (LMs) to be more helpful and less harmful.
At its core, RLHF uses a margin-based loss for preference optimization, which specifies the ideal LM behavior only in terms of the difference between preferred and dispreferred responses. In this paper, we identify a common pitfall of margin-based methods---the under-specification of ideal LM behavior on preferred and dispreferred responses individually, which results in two unintended consequences as the margin increases:
(1) The probability of dispreferred (e.g., unsafe) responses may increase, resulting in potential safety alignment failures.
(2) The probability of preferred responses may decrease, even when those responses are ideal.
We demystify the reasons behind these problematic behaviors: margin-based losses couple the change in the preferred probability with the gradient of the dispreferred one, and vice versa, often preventing the preferred probability from increasing while the dispreferred one decreases, and thus causing a synchronized increase or decrease in both probabilities. We term this effect, inherent in margin-based objectives, gradient entanglement.
Formally, we derive conditions for general margin-based alignment objectives under which gradient entanglement becomes concerning: the inner product between the gradient of preferred log-probability and the gradient of dispreferred log-probability is large relative to the individual gradient norms. Furthermore, we theoretically investigate why such inner products can be large when aligning language models and empirically validate our findings. Empirical implications of our framework further extend to explaining important differences in the training dynamics of various preference optimization algorithms and suggesting future directions for improvement.
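One hedged way to write the condition stated above, using generic notation for a policy $\pi_\theta$ with preferred response $y^{+}$ and dispreferred response $y^{-}$ given prompt $x$ (these symbols are notational assumptions, not necessarily the paper's):

$$
\big\langle \nabla_\theta \log \pi_\theta(y^{+}\mid x),\; \nabla_\theta \log \pi_\theta(y^{-}\mid x) \big\rangle \;\gtrsim\; \big\| \nabla_\theta \log \pi_\theta(y^{+}\mid x) \big\|\, \big\| \nabla_\theta \log \pi_\theta(y^{-}\mid x) \big\|
$$

That is, the two gradients are nearly parallel, so an update that raises the preferred log-probability tends to raise the dispreferred one as well, and an update that lowers the dispreferred one tends to lower the preferred one.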
|
ICLR 2025 Poster
|
/pdf/5eed7557c3f28a8e2f5fb142bf9e58397e0080ff.pdf
|
ICLR.cc/2025/Conference
|
K9Elg2JrvY
|
FROM LOW TO HIGH-VALUE DESIGNS: OFFLINE OPTIMIZATION VIA GENERALIZED DIFFUSION
|
This paper studies the black-box optimization task which aims to find the maxima of a black-box function using only a static set of its observed input-output data. This is often achieved via learning and optimizing a surrogate function using such offline dataset. Alternatively, it can also be framed as an inverse modeling task which maps a desired performance to potential input candidates that achieve it. Both approaches are constrained by the limited amount of offline data. To mitigate this limitation, we introduce a new perspective which casts offline optimization as a diffusion process mapping between an implicit distribution of low-value inputs (i.e., offline data) and a superior distribution of high-value inputs (i.e., solution candidates). Such diffusion process can be learned using low- and high-value inputs sampled from synthetic functions that resemble the target function. These synthetic functions are constructed as the mean posterior of multiple Gaussian processes fitted with different parameterizations on the offline data, alleviating the data bottleneck. Experimental results demonstrate that our approach consistently outperforms previous methods, establishing a new state-of-the-art performance.
|
Rejected_Submission
|
/pdf/2b3881c0dfbdca1a041a5109d3a3817ec9440286.pdf
|
ICLR.cc/2025/Conference
|
OeHSkJ58TG
|
Incidental Polysemanticity: A New Obstacle for Mechanistic Interpretability
|
Polysemantic neurons — neurons that activate for a set of unrelated features — have been seen as a significant obstacle towards interpretability of task-optimized deep networks, with implications for AI safety. The classic origin story of polysemanticity is that the data contains more "features" than neurons, such that learning to perform a task forces the network to co-allocate multiple unrelated features to the same neuron, endangering our ability to understand networks' internal processing. In this work, we present a second and non-mutually exclusive origin story of polysemanticity. We show that polysemanticity can arise incidentally, even when there are ample neurons to represent all features in the data, a phenomenon we term incidental polysemanticity. Using a combination of theory and experiments, we show that incidental polysemanticity can arise due to multiple reasons including regularization and neural noise; this incidental polysemanticity occurs because random initialization can, by chance alone, initially assign multiple features to the same neuron, and the training dynamics then strengthen such overlap. Our paper concludes by calling for further research quantifying the performance-polysemanticity tradeoff in task-optimized deep neural networks to better understand to what extent polysemanticity is avoidable.
|
Rejected_Submission
|
/pdf/c0dcd041a28d06e922d0fe30cd80b6887eb9996e.pdf
|
ICLR.cc/2025/Conference
|
4xBew7kuYB
|
Studying the Effects of Training Data on Small Language Models
|
Prior work has found that training very small language models (SLMs) on synthetic children's stories allows them to generate coherent text, comparable to much larger models. These stories are claimed to encompass the vocabulary and factual knowledge base of a 3-4-year-old child, capturing the "essence of natural language."
Because of these claims, it is tempting to attribute the findings to the high readability (i.e., simple language) of children's stories, drawing a parallel to how children learn language.
Is the human concept of readability relevant in the context of language model training, or are these findings better explained by other properties of the data?
In this study, we investigate this by first validating several automatic readability measures. We then create synthetic corpora with varying levels of readability and assess the coherence of text generated by SLMs trained on these corpora.
We find that training on high readability text is not a prerequisite for coherent SLMs. Specifically, SLMs trained on data with substantially more complex language also exhibit the same abilities as those trained on simple language. Moreover, training on simple language does not lead to the earlier development of coherence during training.
|
Rejected_Submission
|
/pdf/a31aafe473dd5a31b3046399cd173b02abc7f6e9.pdf
|
ICLR.cc/2024/Conference
|
ledQ1BCrwc
|
GraphMaker: Can Diffusion Models Generate Large Attributed Graphs?
|
Large-scale graphs with node attributes are fundamental in real-world scenarios, such as social and financial networks. The generation of synthetic graphs that emulate real-world ones is pivotal in graph machine learning, aiding network evolution understanding and data utility preservation when original data cannot be shared. Traditional models for graph generation suffer from limited model capacity. Recent developments in diffusion models have shown promise in merely graph structure generation or the generation of small molecular graphs with attributes. However, their applicability to large attributed graphs remains unaddressed due to challenges in capturing intricate patterns and scalability. This paper introduces GraphMaker, a novel diffusion model tailored for generating large attributed graphs. We study the diffusion models that either couple or decouple graph structure and node attribute generation to address their complex correlation. We also employ node-level conditioning and adopt a minibatch strategy for scalability. We further propose a new evaluation pipeline using models trained on generated synthetic graphs and tested on original graphs to evaluate the quality of synthetic data. Empirical evaluations on real-world datasets showcase GraphMaker's superiority in generating realistic and diverse large-attributed graphs beneficial for downstream tasks.
|
Rejected_Submission
|
/pdf/42730bddafffbf536615e98ccbb41358699e34e1.pdf
|
ICLR.cc/2024/Conference
|
Piod76RSrx
|
Slicing Mutual Information Generalization Bounds for Neural Networks
|
The ability of machine learning (ML) algorithms to generalize to unseen data has been studied through the lens of information theory, by bounding the generalization error with the input-output mutual information (MI), i.e. the MI between the training data and the learned hypothesis. These bounds have limited empirical use for modern ML applications (e.g., deep learning) since the evaluation of MI is difficult in high-dimensional settings. Motivated by recent reports of significant low-loss compressibility of neural networks, we study the generalization capacity of algorithms that slice the parameter space, i.e. train on a random lower-dimensional subspace. We derive information-theoretic bounds on generalization error in this regime and discuss an intriguing connection to the $k$-Sliced Mutual Information, an alternative measure of statistical dependence that scales well with dimension. We also propose a rate-distortion framework that allows generalization bounds to be obtained if the weights are simply close to the random subspace, and we propose a training procedure that exploits this flexibility. The computational and statistical benefits of our approach allow us to empirically estimate the input-output information of these neural networks and compute their information-theoretic generalization bounds, a task which was previously out of reach.
|
Rejected_Submission
|
/pdf/729ffaa0e943f05a17f51c6ad96e281bb0b5cd77.pdf
|
ICLR.cc/2025/Conference
|
Mv3GAYJGcW
|
MetaDesigner: Advancing Artistic Typography through AI-Driven, User-Centric, and Multilingual WordArt Synthesis
|
MetaDesigner introduces a transformative framework for artistic typography synthesis, powered by Large Language Models (LLMs) and grounded in a user-centric design paradigm. Its foundation is a multi-agent system comprising the Pipeline, Glyph, and Texture agents, which collectively orchestrate the creation of customizable WordArt, ranging from semantic enhancements to intricate textural elements. A central feedback mechanism leverages insights from both multimodal models and user evaluations, enabling iterative refinement of design parameters. Through this iterative process, MetaDesigner dynamically adjusts hyperparameters to align with user-defined stylistic and thematic preferences, consistently delivering WordArt that excels in visual quality and contextual resonance. Empirical evaluations underscore the system's versatility and effectiveness across diverse WordArt applications, yielding outputs that are both aesthetically compelling and context-sensitive.
|
ICLR 2025 Poster
|
/pdf/3c2e25dd89cde196ab62eccc9eb1ed2197e3cb20.pdf
|