venue: stringclasses (11 values)
paper_openreview_id: stringlengths (9 to 13)
title: stringlengths (4 to 179)
abstract: stringlengths (2 to 4.99k)
paper_decision: stringclasses (29 values)
paper_pdf_link: stringlengths (31 to 63)
ICLR.cc/2025/Conference
VZC9aJoI6a
PromptWizard: Task-Aware Prompt Optimization Framework
Large language models (LLMs) have transformed AI across diverse domains, with \textit{prompting} being central to their success in guiding model outputs. However, manual prompt engineering is both labor-intensive and domain-specific, necessitating automated solutions. We introduce PromptWizard, a novel, fully automated framework for discrete prompt optimization, utilizing a self-evolving, self-adapting mechanism. Through a feedback-driven critique and synthesis process, PromptWizard achieves an effective balance between exploration and exploitation, iteratively refining both prompt instructions and in-context examples to generate human-readable, task-specific prompts. This guided approach systematically improves prompt quality, resulting in superior performance across 45 tasks. PromptWizard excels even with limited training data, smaller LLMs, and various LLM architectures. Additionally, our cost analysis reveals a substantial reduction in API calls, token usage, and overall cost, demonstrating PromptWizard's efficiency, scalability, and advantages over existing prompt optimization strategies.
Rejected_Submission
/pdf/14afa7892f356a55ca11b170e1ebeb75e2592e49.pdf
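The critique-and-synthesis loop described in the PromptWizard entry above can be pictured with a minimal Python sketch. The `score_prompt`, `critique`, and `synthesize` callables are hypothetical placeholders standing in for LLM calls, not the framework's actual API.

```python
def optimize_prompt(seed_prompt, train_examples, score_prompt, critique, synthesize,
                    rounds=5, candidates_per_round=4):
    """Sketch of a feedback-driven critique-and-synthesis loop (assumed interfaces):
      score_prompt(prompt, examples) -> float  # accuracy of the prompt on a small train split
      critique(prompt, examples)     -> str    # LLM-written feedback on the prompt's failures
      synthesize(prompt, feedback)   -> str    # LLM-refined prompt incorporating the feedback
    """
    best_prompt = seed_prompt
    best_score = score_prompt(best_prompt, train_examples)
    for _ in range(rounds):
        feedback = critique(best_prompt, train_examples)             # exploration signal
        candidates = [synthesize(best_prompt, feedback)
                      for _ in range(candidates_per_round)]
        for cand in candidates:                                       # exploitation: keep the best
            s = score_prompt(cand, train_examples)
            if s > best_score:
                best_prompt, best_score = cand, s
    return best_prompt, best_score
```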
ICLR.cc/2025/Conference
g6syfIrVuS
Local Loss Optimization in the Infinite Width: Stable Parameterization of Predictive Coding Networks and Target Propagation
Local learning, which trains a network through layer-wise local targets and losses, has been studied as an alternative to backpropagation (BP) in neural computation. However, its algorithms often become more complex or require additional hyperparameters due to the locality, making it challenging to identify desirable settings where the algorithm progresses in a stable manner. To provide theoretical and quantitative insights, we introduce maximal update parameterization ($\mu$P) in the infinite-width limit for two representative designs of local targets: predictive coding (PC) and target propagation (TP). We verify that $\mu$P enables hyperparameter transfer across models of different widths. Furthermore, our analysis reveals unique and intriguing properties of $\mu$P that are not present in conventional BP. By analyzing deep linear networks, we find that PC's gradients interpolate between first-order and Gauss-Newton-like gradients, depending on the parameterization. We demonstrate that, in specific standard settings, PC in the infinite-width limit behaves more similarly to the first-order gradient. For TP, even with the standard scaling of the last layer differing from classical $\mu$P, its local loss optimization favors the feature learning regime over the kernel regime.
ICLR 2025 Poster
/pdf/8a1ccf4e448997c7251dad971f6cebb28a74acd4.pdf
ICLR.cc/2025/Conference
CMMpcs9prj
Towards Faster Decentralized Stochastic Optimization with Communication Compression
Communication efficiency has garnered significant attention as it is considered the main bottleneck for large-scale decentralized Machine Learning applications in distributed and federated settings. In this regime, clients are restricted to transmitting small amounts of compressed information to their neighbors over a communication graph. Numerous endeavors have been made to address this challenging problem by developing algorithms with compressed communication for decentralized non-convex optimization problems. Despite considerable efforts, the current theoretical understanding of the problem is still very limited, and existing algorithms all suffer from various limitations. In particular, these algorithms typically rely on strong and often infeasible assumptions, such as bounded data heterogeneity, or require large-batch access while failing to achieve linear speedup with the number of clients. In this paper, we introduce MoTEF, a novel approach that integrates communication compression with $\textbf{Mo}$mentum $\textbf{T}$racking and $\textbf{E}$rror $\textbf{F}$eedback. MoTEF is the first algorithm to achieve an asymptotic rate matching that of distributed SGD under arbitrary data heterogeneity, hence resolving a long-standing theoretical obstacle in decentralized optimization with compressed communication. We provide numerical experiments to validate our theoretical findings and confirm the practical superiority of MoTEF.
ICLR 2025 Poster
/pdf/db91a1f9b5191d22f46819f74495e8037584f11a.pdf
ICLR.cc/2024/Conference
SIojR1ruNQ
TIGERScore: Building Explainable Metric for All Text Generation Task
We present TIGERScore, a \textbf{T}rained metric that follows \textbf{I}nstruction \textbf{G}uidance to perform \textbf{E}xplainable and \textbf{R}eference-free evaluation over a wide spectrum of text generation tasks. Different from other automatic evaluation methods that only provide arcane scores, TIGERScore is guided by natural language instructions to provide error analysis that pinpoints the mistakes in the generated text. Our metric is based on LLaMA, trained on our meticulously curated instruction-tuning dataset MetricInstruct, which covers 6 text generation tasks and 23 text generation datasets. The dataset consists of 48K quadruples in the form of (instruction, input, system output $\rightarrow$ error analysis). We collected the `system outputs' through diverse channels to cover different types of errors. To quantitatively assess our metric, we evaluate its correlation with human ratings on 5 held-in datasets and 2 held-out datasets, and show that TIGERScore achieves the highest overall Spearman's correlation with human ratings across these datasets, significantly outperforming other metrics. As a reference-free metric, its correlation can even surpass the best existing reference-based metrics. To further qualitatively assess the rationale generated by our metric, we conduct a human evaluation of the generated explanations and find that the explanations are 70.8\% accurate. Through these experimental results, we believe TIGERScore demonstrates the possibility of building universal explainable metrics to evaluate any text generation task.
Rejected_Submission
/pdf/9ce48bccffb0dac34c0dcb44f51fdf08f2dafe74.pdf
ICLR.cc/2025/Conference
xP1radUi32
Endless Jailbreaks with Bijection Learning
Despite extensive safety measures, LLMs are vulnerable to adversarial inputs, or jailbreaks, which can elicit unsafe behaviors. In this work, we introduce bijection learning, a powerful attack algorithm which automatically fuzzes LLMs for safety vulnerabilities using randomly-generated encodings whose complexity can be tightly controlled. We leverage in-context learning to teach models bijective encodings, pass encoded queries to the model to bypass built-in safety mechanisms, and finally decode responses back into English. Our attack is extremely effective on a wide range of frontier language models. By controlling complexity parameters such as number of key-value mappings in the encodings, we find a close relationship between the capability level of the attacked LLM and the average complexity of the most effective bijection attacks. Our work highlights that new vulnerabilities in frontier models can emerge with scale: more capable models are more severely jailbroken by bijection attacks.
ICLR 2025 Poster
/pdf/af895e970cc73ca7ef723e0e8be29e7afefa84d8.pdf
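The bijection-learning entry above describes encodings whose complexity is controlled by the number of key-value mappings. Below is a rough sketch assuming a character-level bijection over lowercase letters; the actual attack teaches the mapping to the model in context and submits encoded queries, which is not shown here.

```python
import random
import string

def make_bijection(num_mappings, seed=0):
    """Build a random bijection over lowercase letters.

    num_mappings controls complexity: only that many characters are remapped;
    the rest map to themselves (an identity-heavy, easier encoding).
    """
    rng = random.Random(seed)
    letters = list(string.ascii_lowercase)
    chosen = rng.sample(letters, num_mappings)
    shuffled = chosen[:]
    rng.shuffle(shuffled)
    mapping = {c: c for c in letters}       # identity on unmapped characters
    mapping.update(dict(zip(chosen, shuffled)))
    return mapping

def encode(text, mapping):
    return "".join(mapping.get(ch, ch) for ch in text.lower())

def decode(text, mapping):
    inverse = {v: k for k, v in mapping.items()}
    return "".join(inverse.get(ch, ch) for ch in text)

m = make_bijection(num_mappings=10)
assert decode(encode("hello world", m), m) == "hello world"
```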
ICLR.cc/2025/Conference
5wxCQDtbMo
GotenNet: Rethinking Efficient 3D Equivariant Graph Neural Networks
Understanding complex three-dimensional (3D) structures of graphs is essential for accurately modeling various properties, yet many existing approaches struggle with fully capturing the intricate spatial relationships and symmetries inherent in such systems, especially in large-scale, dynamic molecular datasets. These methods often must balance trade-offs between expressiveness and computational efficiency, limiting their scalability. To address this gap, we propose a novel Geometric Tensor Network (GotenNet) that effectively models the geometric intricacies of 3D graphs while ensuring strict equivariance under the Euclidean group E(3). Our approach directly tackles the expressiveness-efficiency trade-off by leveraging effective geometric tensor representations without relying on irreducible representations or Clebsch-Gordan transforms, thereby reducing computational overhead. We introduce a unified structural embedding, incorporating geometry-aware tensor attention and hierarchical tensor refinement that iteratively updates edge representations through inner product operations on high-degree steerable features, allowing for flexible and efficient representations for various tasks. We evaluated models on the QM9, rMD17, MD22, and Molecule3D datasets, where the proposed model consistently outperforms state-of-the-art methods in both scalar and high-degree property predictions, demonstrating exceptional robustness across diverse datasets and establishing GotenNet as a versatile and scalable framework for 3D equivariant Graph Neural Networks.
ICLR 2025 Poster
/pdf/a1396f1d1e7975177c314f3bddd7e718fc87796e.pdf
ICLR.cc/2025/Conference
41HlN8XYM5
Efficient Automated Circuit Discovery in Transformers using Contextual Decomposition
Automated mechanistic interpretation research has attracted great interest due to its potential to scale explanations of neural network internals to large models. Existing automated circuit discovery work relies on activation patching or its approximations to identify subgraphs in models for specific tasks (circuits). They often suffer from slow runtime, approximation errors, and specific requirements of metrics, such as non-zero gradients. In this work, we introduce contextual decomposition for transformers (CD-T) to build interpretable circuits in large language models. CD-T can produce circuits at any level of abstraction and is the first to efficiently produce circuits as fine-grained as attention heads at specific sequence positions. CD-T is compatible with all transformer types, and requires no training or manually-crafted examples. CD-T consists of a set of mathematical equations to isolate the contribution of model features. Through recursively computing the contribution of all nodes in a computational graph of a model using CD-T followed by pruning, we are able to reduce circuit discovery runtime from hours to seconds compared to state-of-the-art baselines. On three standard circuit evaluation datasets (indirect object identification, greater-than comparisons, and docstring completion), we demonstrate that CD-T outperforms ACDC and EAP by better recovering the manual circuits with an average of 97% ROC AUC under low runtimes. In addition, we provide evidence that the faithfulness of CD-T circuits is not due to random chance by showing our circuits are 80% more faithful than random circuits of up to 60% of the original model size. Finally, we show CD-T circuits are able to perfectly replicate original models' behavior (faithfulness = 1) using fewer nodes than the baselines for all tasks. Our results underscore the great promise of CD-T for efficient automated mechanistic interpretability, paving the way for new insights into the workings of large language models.
ICLR 2025 Poster
/pdf/a99a07e7b8fea3e243e0aa4fc594b5908bb86c24.pdf
ICLR.cc/2025/Conference
kvCKoKfqTd
Non-Commutative Spectral Geometry for Adaptive Quantum-Classical Drug-Target Interaction Prediction
Drug-target interactions (DTIs) are fundamental and intricate processes essential for the advancement of drug discovery and design. We present a groundbreaking unified framework for drug-target interaction (DTI) prediction that seamlessly integrates advanced concepts from non-commutative geometry, optimal transport theory, and quantum information science. Our approach, Non-Commutative Geometric Adaptation for Molecular Interactions (NCGAMI), reframes the DTI prediction problem within the context of a non-commutative pharmacological manifold, enabling a profound synthesis of classical and quantum perspectives. By leveraging the spectral action principle, we develop a novel domain adaptation technique that minimizes a geometrically motivated functional, yielding optimal transport maps between pharmacological domains. We establish a deep connection between our framework and non-equilibrium statistical mechanics through a fluctuation theorem for domain adaptation, providing fundamental insights into the thermodynamics of the adaptation process. Our unified variational objective, formulated using geometric quantization, incorporates quantum relative entropy and Liouville volume forms, bridging information-theoretic and geometric aspects of the problem. We introduce a quantum adiabatic optimization algorithm for solving this objective, guaranteeing convergence to the optimal solution under specified conditions. Furthermore, we prove that the algebra of observables generated by our model forms a hyperfinite type III$_1$ factor, revealing a profound link between the algebraic structure of DTI prediction and the geometry of optimal transport. This result enables us to characterize the modular automorphism group governing the evolution of adapted distributions. Extensive numerical experiments demonstrate that NCGAMI significantly outperforms existing state-of-the-art methods across a wide range of DTI prediction tasks, achieving unprecedented accuracy and robustness.
Rejected_Submission
/pdf/364cff344169009be8d555862d0db6f792af2ced.pdf
ICLR.cc/2024/Conference
XUZ2S0JVJP
FrugalGPT: How to Use Large Language Models While Reducing Cost and Improving Performance
The rapid adoption of large language models (LLMs) has led to a growing number of companies offering generative LLMs as callable services at varying costs. We find that popular generative LLM APIs, such as GPT-4, ChatGPT, and J1-Jumbo, exhibit heterogeneous pricing structures, with fees that can differ by two orders of magnitude, and heterogeneous performance across tasks and input queries. This makes it challenging for users to decide which generative LLM APIs to utilize for their applications and budget. Motivated by these findings, we propose FrugalGPT, an algorithmic framework that adaptively selects which generative LLMs to use for different queries to reduce cost and improve accuracy. Our experiments demonstrate that, for a range of natural language tasks including news classification, reading comprehension, and scientific question answering, FrugalGPT can match the performance of the best individual generative LLM (e.g., GPT-4) with up to a 98% cost reduction or improve the accuracy over GPT-4 by 4% at the same cost. The ideas and findings presented in this paper lay a foundation for using LLMs sustainably and efficiently.
Rejected_Submission
/pdf/44f9acd4b03577917238a232b64b9793f40a8b74.pdf
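The adaptive model selection described in the FrugalGPT entry above is often realized as an LLM cascade; the sketch below illustrates that idea under assumed components. The ordered model list, per-call costs, and the `answer_is_reliable` judge are placeholders, not the paper's actual pipeline.

```python
def cascade_answer(query, models, answer_is_reliable):
    """Query cheaper models first and escalate only when needed (sketch of an LLM cascade).

    models: list of (name, cost_per_call, generate_fn), ordered from cheapest to most expensive.
    answer_is_reliable(query, answer) -> bool  # assumed quality judge, e.g. a small learned scorer
    """
    answer, name = None, None
    total_cost = 0.0
    for name, cost, generate in models:
        answer = generate(query)
        total_cost += cost
        if answer_is_reliable(query, answer):
            return answer, name, total_cost
    # Fall back to the last (most capable) model's answer if nothing was accepted.
    return answer, name, total_cost
```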
ICLR.cc/2024/Conference
QQt0MwXA81
Do LLMs exhibit human-like response biases? A case study in survey design
As large language models (LLMs) become more capable, there is growing excitement about the possibility of using LLMs as proxies for humans in real-world tasks where subjective labels serve as the ground truth. A barrier to the adoption of LLMs as human proxies is their sensitivity to prompt wording. But interestingly, humans also suffer from issues of sensitivity to instruction changes. As such, it is necessary to investigate the extent to which LLMs also reflect human sensitivities, if at all. In this work, we use survey design as a case study, where human response biases caused by permutations in the wording of "prompts" have been extensively studied. Drawing from prior work in social psychology, we design a dataset and propose a framework to evaluate whether LLMs exhibit human-like response biases in survey questionnaires. Across the seven models we evaluated, we find that all but one (Llama2-70b), in particular the instruction fine-tuned models, do not consistently display human-like response biases, and sometimes even show a significant change in the direction opposite to that expected in humans. Furthermore, even if a model shows a significant change in the same direction as humans, we find that perturbations that are not meant to elicit biased behavior may also result in a similar change, suggesting that such a result could be partially due to other spurious correlations. These results highlight the potential pitfalls of using LLMs to substitute humans in parts of the annotation pipeline, and further underscore the importance of finer-grained characterizations of model behavior.
Rejected_Submission
/pdf/6545b49f4d06d459f67f8ad64245981ae18db2e1.pdf
ICLR.cc/2024/Conference
p79lnC36CO
Automatic Calibration Diagnosis: Interpreting Probability Integral Transform (PIT) Histograms
Uncertainty quantification in predictive models is essential for safe decision-making and risk assessment. The predictive uncertainty is often represented by a predictive distribution because this is its most general representation. Optimising the sharpness of the distribution subject to its calibration is necessary. This work addresses the proper calibration of predictive distributions in regression tasks. We particularly focus on machine learning models, which are increasingly prevalent in real-world applications. We employ the probability integral transform (PIT) histogram to evaluate calibration quality. It can be used to diagnose calibration problems, e.g. under- or over-estimation, under- or over-dispersion, or an incorrect number of modes. However, PIT histograms are often difficult to interpret because multiple calibration problems may occur simultaneously. To tackle this issue, we present a methodological concept for the automatic interpretation of PIT histograms. It is based on a mixture density network interpreter trained with a synthetic data set of PIT histograms. Given a predictive model, data set, and corresponding PIT histogram, the interpreter can identify a probable observation-generating distribution. This allows us to diagnose a potential calibration problem by comparing the predictive with the probable observation-generating distribution. To showcase the power of the proposed concept in the automatic interpretation of PIT histograms, we applied it to regression tasks on standard data sets. As a result, we could achieve notable improvements in the calibration of machine learning models.
Rejected_Submission
/pdf/f07634746a15c4d260d5f2df5eef279419854d12.pdf
ICLR.cc/2025/Conference
cbttLtO94Q
How to Evaluate Reward Models for RLHF
We introduce a new benchmark for reward models that quantifies their ability to produce strong language models through RLHF (Reinforcement Learning from Human Feedback). The gold-standard approach is to run a full RLHF training pipeline and directly probe downstream LLM performance. However, this process is prohibitively expensive. To address this, we build a predictive model of downstream LLM performance by evaluating the reward model on proxy tasks. These proxy tasks consist of a large-scale human preference dataset and a verifiable correctness preference dataset, on which we measure 12 metrics across 12 domains. To investigate which reward model metrics are most correlated to gold-standard RLHF outcomes, we launch an end-to-end RLHF experiment on a large-scale crowd-sourced human preference platform to view real reward model downstream performance as ground truth. Ultimately, we compile our data and findings into Preference Proxy Evaluations (PPE), the first reward model benchmark explicitly linked to post-RLHF real-world human preference performance, which we open-source for public use and further development at https://github.com/lmarena/PPE.
ICLR 2025 Poster
/pdf/36c1a131f5c6d861080f6e1a655f78927b42578c.pdf
ICLR.cc/2025/Conference
I0To0G5J7g
On the Surprising Efficacy of Online Self-Improvement for Embodied Multimodal Foundation Models
Foundation models trained on web-scale data have revolutionized robotics, but their application to low-level control remains largely limited to behavioral cloning. Drawing inspiration from the sample efficiency and success of reinforcement learning (RL) fine-tuning in large language models (LLMs), we propose a two-stage approach suited to robotics. The first stage, Supervised Fine-Tuning (SFT), fine-tunes pre-trained foundation models using goal-conditioned behavioral cloning and “steps-to-go” prediction objectives. In the second stage, this foundation enables the extraction of a well-shaped reward function and a success detector, eliminating the need for manual reward engineering and real-world instrumentation, and allowing robots to practice autonomously with minimal human supervision. Our experiments on both real-world and simulated robots demonstrate that the combination of SFT and online Self-Improvement is significantly more sample-efficient than supervised learning alone. Furthermore, the combination of our proposed approach with web-scale pre-trained foundation models enables rapid acquisition of new skills, allowing robots to generalize far beyond the behaviors observed in the imitation learning datasets used during training. These findings highlight the transformative potential of combining pre-trained foundation models with online fine-tuning to unlock new levels of autonomy and skill acquisition in robotics.
Rejected_Submission
/pdf/5c7f31dd49a1e36f55aec3e556feb3eed16fb175.pdf
ICLR.cc/2024/Conference
C4CxQmp9wc
Jumanji: a Diverse Suite of Scalable Reinforcement Learning Environments in JAX
Open-source reinforcement learning (RL) environments have played a crucial role in driving progress in the development of AI algorithms. In modern RL research, there is a need for simulated environments that are performant, scalable, and modular to enable their utilization in a wider range of potential real-world applications. Therefore, we present Jumanji, a suite of diverse RL environments specifically designed to be fast, flexible, and scalable. Jumanji provides a suite of environments focusing on combinatorial problems frequently encountered in industry, as well as challenging general decision-making tasks. By leveraging the efficiency of JAX and hardware accelerators like GPUs and TPUs, Jumanji enables rapid iteration of research ideas and large-scale experimentation, ultimately empowering more capable agents. Unlike existing RL environment suites, Jumanji is highly customizable, allowing users to tailor the initial state distribution and problem complexity to their needs. Furthermore, we provide actor-critic baselines for each environment, accompanied by preliminary findings on scaling and generalization scenarios. Jumanji aims to set a new standard for speed, adaptability, and scalability of RL environments.
ICLR 2024 poster
/pdf/6e98848b7cc0fcbe27dd8490b8d2cd2f7175911b.pdf
ICLR.cc/2025/Conference
C5w86qtcgY
Decentralized Finite-Sum Optimization over Time-Varying Networks
We consider decentralized time-varying stochastic optimization problems where each of the functions held by the nodes has a finite-sum structure. Such problems can be efficiently solved using variance reduction techniques. Our aim is to explore the lower complexity bounds (for communication and number of stochastic oracle calls) and find optimal algorithms. The paper studies strongly convex and nonconvex scenarios. To the best of our knowledge, variance-reduced schemes and lower bounds for time-varying graphs have not been studied in the literature. For nonconvex objectives, we obtain lower bounds and develop an optimal method GT-PAGE. For strongly convex objectives, we propose the first decentralized time-varying variance-reduction method ADOM+VR and establish a lower bound in this scenario, highlighting the open question of matching algorithm complexity and lower bounds even in the static network case.
Rejected_Submission
/pdf/57219a191feb2f9c72806ea8c65e81313b31bc88.pdf
ICLR.cc/2025/Conference
2bIQBDSfRk
DenseAttention: No-Compromise Exact All $N \times N$ Interactions Algorithm with $O(N)$ Space and Time Complexity
The ubiquitous Transformer architecture suffers from two main bottlenecks: 1) low computational and memory efficiency, leading to suboptimal hardware utilization, and 2) quadratic time complexity with respect to sequence length $N$, making it slow and costly for large data contexts. We propose a novel DenseAttention Network architecture, a straightforward simplification of the standard Transformer block that addresses these issues and serves as a drop-in replacement for language modeling tasks. We eliminate memory-bound components in DenseAttention, including Softmax, masking, one skip connection, and both LayerNorms, as well as key, value, and output projection matrices, as they become redundant. Despite these removals, it maintains exact $N \times N$ pairwise interactions between tokens. By exploiting the associativity of matrix multiplications, DenseAttention can be computed with $O(N^2d)$ or $O(Nd^2)$ time and space complexity, depending on the context. To handle the absence of Softmax and prevent numerical instability, we introduce MaxNormActivation at both ends of the Transformer block. We also devise Cosine Relative Positional Embeddings as a computationally efficient replacement for RoPE, and simple LocalAttention variations of the block to help the model focus on details in extremely long contexts. DenseAttention competes with FlashAttention in speed on small sequences and outperforms it by orders of magnitude on large contexts. We pre-train encoder language models on sequences up to 16K in length, which perform similarly or better than baseline BERT-large, while significantly improving speed and efficiency. Finally, we achieve state-of-the-art on the LRA benchmark among the Transformer-based architectures.
Rejected_Submission
/pdf/efeb6787f0c4f6394dce7d2e3d58e8e6dae7e3ee.pdf
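The complexity claim in the DenseAttention entry above rests on the associativity of matrix multiplication: with the softmax removed, $QK^\top V$ can be evaluated as $Q(K^\top V)$ in $O(Nd^2)$ instead of $(QK^\top)V$ in $O(N^2d)$, while producing exactly the same all-pairs interactions. Below is a minimal numerical check of that identity, not the paper's full block (which also adds MaxNormActivation and positional embeddings).

```python
import numpy as np

# Toy sizes: sequence length N and head dimension d.
N, d = 2048, 64
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((N, d)) for _ in range(3))

# Quadratic ordering: materializes an N x N interaction matrix, O(N^2 d) time, O(N^2) memory.
out_quadratic = (Q @ K.T) @ V

# Linear ordering: contracts K and V first, O(N d^2) time, only O(d^2) extra memory.
out_linear = Q @ (K.T @ V)

# Associativity guarantees both orderings realize the same exact N x N interactions.
assert np.allclose(out_quadratic, out_linear)
```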
ICLR.cc/2025/Conference
fQoYYtPJFX
Weakly-supervised & Uncertainty-aware 3D Gaze Estimation with Geometry-guided Constraints
3D eye gaze estimation from monocular images remains a challenging task due to model sensitivity to illumination, occlusion, and head pose changes. With growing interest in and demand for in-the-wild 3D gaze estimation under unconstrained environments, generalization ability has come to be considered a crucial performance metric of 3D gaze estimation models. In this work, we present UGaze-Geo, an uncertainty-aware weakly-supervised framework for 3D gaze estimation. We leverage general knowledge of human eyeball anatomy and develop multiple geometric constraints. The proposed geometric constraints are of two types: the first is formulated by constructing the mapping function from anatomical 3D eyeball parameters to eye appearance features (eyelid \& iris landmarks). The second is based on the relationship among head rotation, eyeball rotation, and gaze, where we learn a variable that describes "relative eyeball rotation" conditioned on the current head pose. Both types of constraints are free of gaze labels and are general to any subjects and environmental conditions. We formulate these constraints as loss functions in a probabilistic framework. We evaluate the UGaze-Geo framework on within-domain and four cross-domain gaze estimation tasks to validate the effectiveness of each constraint and the advantage of performing probabilistic gaze estimation. Experimental results indicate that our model achieves SOTA performance on different datasets.
Rejected_Submission
/pdf/f75fd1aa742ec9896552db38ce1cc28c6e594f6f.pdf
ICLR.cc/2025/Conference
AJM52ygi6Y
Decentralized Optimization with Coupled Constraints
We consider the decentralized minimization of a separable objective $\sum_{i=1}^{n} f_i(x_i)$, where the variables are coupled through an affine constraint $\sum_{i=1}^n\left(\mathbf{A}_i x_i - b_i\right) = 0$. We assume that the functions $f_i$, matrices $\mathbf{A}_i$, and vectors $b_i$ are stored locally by the nodes of a computational network, and that the functions $f_i$ are smooth and strongly convex. This problem has significant applications in resource allocation and systems control and can also arise in distributed machine learning. We propose lower complexity bounds for decentralized optimization problems with coupled constraints and a first-order algorithm achieving the lower bounds. To the best of our knowledge, our method is also the first linearly convergent first-order decentralized algorithm for problems with general affine coupled constraints.
ICLR 2025 Poster
/pdf/955f27be28ce3c755cafccab6cfe814f005c05d4.pdf
ICLR.cc/2024/Conference
L7KDMsqWl9
HHD-Ethiopic: A Historical Handwritten Dataset for Ethiopic OCR with Baseline Models and Human-level Performance
This paper introduces HHD-Ethiopic, a new OCR dataset for historical handwritten Ethiopic script, characterized by a unique syllabic writing system, low resource availability, and complex orthographic diacritics. The dataset consists of roughly 80,000 annotated text-line images from 1700 pages of $18^{th}$ to $20^{th}$ century documents, including a training set with text-line images from the $19^{th}$ to $20^{th}$ century and two test sets. One is distributed similarly to the training set with nearly 6,000 text-line images, and the other contains only images from the $18^{th}$ century manuscripts, with around 16,000 images. The former test set allows us to check baseline performance in the classical IID setting (Independently and Identically Distributed), while the latter addresses a more realistic setting in which the test set is drawn from a different distribution than the training set (Out-Of-Distribution or OOD). Multiple annotators labeled all text-line images for the HHD-Ethiopic dataset, and an expert supervisor double-checked them. We assessed human-level recognition performance and compared it with state-of-the-art (SOTA) OCR models using the Character Error Rate (CER) and Normalized Edit Distance (NED) metrics. Our results show that the model performed comparably to human-level recognition on the $18^{th}$ century test set and outperformed humans on the IID test set. However, the unique challenges posed by the Ethiopic script, such as detecting complex diacritics, still present difficulties for the models. Our baseline evaluation and HHD-Ethiopic dataset will encourage further research on Ethiopic script recognition. The dataset and source code can be accessed at https://github.com/ethopic/hhd-ethiopic-I.
Rejected_Submission
/pdf/f906f1437718b8d620cd3f1ed9afb4c0a621279a.pdf
ICLR.cc/2025/Conference
4A9IdSa1ul
FreDF: Learning to Forecast in the Frequency Domain
Time series modeling presents unique challenges due to autocorrelation in both historical data and future sequences. While current research predominantly addresses autocorrelation within historical data, the correlations among future labels are often overlooked. Specifically, modern forecasting models primarily adhere to the Direct Forecast (DF) paradigm, generating multi-step forecasts independently and disregarding label correlations over time. In this work, we demonstrate that the learning objective of DF is biased in the presence of label correlation. To address this issue, we propose the Frequency-enhanced Direct Forecast (FreDF), which mitigates label correlation by learning to forecast in the frequency domain, thereby reducing estimation bias. Our experiments show that FreDF significantly outperforms existing state-of-the-art methods and is compatible with a variety of forecast models. Code is available at https://github.com/Master-PLC/FreDF.
ICLR 2025 Poster
/pdf/b7792b9899874fb7331b44155bda3c993ffd2f02.pdf
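The FreDF entry above proposes learning the forecast objective in the frequency domain. A hedged sketch of such a loss follows; the FFT-based comparison and the `alpha` mixing weight are illustrative assumptions rather than the paper's exact objective.

```python
import torch

def fredf_style_loss(pred, target, alpha=0.5):
    """Sketch of a frequency-enhanced direct-forecast loss.

    pred, target: (batch, horizon) forecast and ground-truth sequences.
    alpha: assumed mixing weight between time-domain and frequency-domain errors.
    """
    # Standard time-domain error over the forecast horizon.
    time_loss = torch.mean((pred - target) ** 2)
    # Compare complex spectra of the horizon; autocorrelated step-wise labels become
    # (approximately) decorrelated frequency coefficients.
    pred_f = torch.fft.rfft(pred, dim=-1)
    target_f = torch.fft.rfft(target, dim=-1)
    freq_loss = torch.mean(torch.abs(pred_f - target_f))
    return (1 - alpha) * time_loss + alpha * freq_loss

# Toy usage on random tensors standing in for model output and labels.
loss = fredf_style_loss(torch.randn(8, 96), torch.randn(8, 96))
```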
ICLR.cc/2025/Conference
US2UCMvzvP
Why Not Transform Chat Large Language Models to Non-English?
Large language models (LLMs) excel in various tasks, but their performance in non-English languages remains limited due to imbalanced training data. To address this limitation, we explore how to transform chat LLMs to non-English. Chat LLMs offer more advanced capabilities than base LLMs, such as multi-turn conversation and alignment with human preferences. However, transforming chat LLMs presents greater challenges than base LLMs. First, how can we effectively transfer advanced capabilities without their supervised data in target languages? Second, how can we prevent the original capabilities from catastrophic forgetting without replaying their training procedure in English? We target these issues by introducing a simple framework called TransLLM. TransLLM divides the transfer problem into some common sub-tasks with the translation chain-of-thought, eliminating the need for complex training data. More importantly, TransLLM uses two key strategies to prevent catastrophic forgetting: Low-rank adaptation, which preserves the original LLM parameters during training, and recovery KD, which utilizes data generated by the chat LLM itself to recover the original knowledge from the frozen parameters. Experiments conducted across five languages and three LLMs demonstrate the superiority of TransLLM. Notably, TransLLM outperforms GPT-4 in Thai, demonstrating higher levels of helpfulness and safety, using just 8B parameters and publicly accessible data. Our analysis demonstrates how recovery KD combined with LoRA helps mitigate catastrophic forgetting.
Rejected_Submission
/pdf/32f67736d621c0bcba1762a879cff7f5e621b228.pdf
ICLR.cc/2025/Conference
Exnt2DcdKD
NIRANTAR: Continual Learning with New Languages and Domains on Real-world Speech Data
We present Nirantar based on a large-scale effort to collect extempore and conversational speech data from participants spanning 22 languages across diverse locations in India. Given the extensive number of languages and locations involved, data is collected in incremental batches. Each batch introduces new languages, new domains (locations), or both, creating a practical playground for continual learning (CL). Nirantar contains a total of 3250 hours of human-transcribed speech data covering 208 Indian districts across 22 languages, with 1720 hours newly released as a part of this work. The data inflow and resulting multilingual multi-domain episodes are based on real-world data collection rather than simulated episodes commonly found in existing CL datasets. In particular, the amount of data collected and the number of languages and domains involved are not uniform across episodes, reflecting a practical and real-world continual learning scenario. This dataset serves as a playground for training and evaluating CL approaches in three different scenarios: Language-Incremental Learning (LIL), Domain-Incremental Learning (DIL), and the novel Language-Incremental Domain-Incremental Learning (LIDIL), which has not been studied before. To establish the dataset's usefulness, we evaluate several existing CL approaches within these scenarios. Our findings indicate that the behaviour of these algorithms varies across the three scenarios, emphasizing the need for detailed independent studies of each.
Rejected_Submission
/pdf/0fb68f112db4dfcaee1af1551988918106ccbe5a.pdf
ICLR.cc/2024/Conference
oTl1ABwM4n
Improving length generalization in transformers via task hinting
It has been observed in recent years that transformers have problems with length generalization for certain types of reasoning and arithmetic tasks. In particular, the performance of a transformer model trained on tasks (say addition) up to a certain length (e.g., 5 digit numbers) drops sharply when applied to longer instances of the same problem. This work proposes an approach based on task hinting towards addressing length generalization. Our key idea is that while training the model on task-specific data, it is helpful to simultaneously train the model to solve a simpler but related auxiliary task as well. We study the classical sorting problem as a canonical example to evaluate our approach. We design a multitask training framework and show that models trained via task hinting significantly improve length generalization. In particular, for sorting we show that it is possible to train models on data consisting of sequences having length at most $20$, and improve the test accuracy on sequences of length $100$ from less than $1$% (for standard training) to more than $92$% (via task hinting). Our study uncovers several interesting aspects of length generalization. We observe that while several auxiliary tasks may seem natural a priori, their effectiveness in improving length generalization differs dramatically. We further use probing and visualization-based techniques to understand the internal mechanisms via which the model performs the task, and propose a theoretical construction consistent with the observed learning behaviors of the model. Based on our construction, we show that introducing a small number of length dependent parameters into the training procedure can further boost the performance on unseen lengths. Finally, we also show the efficacy of our task hinting based approach beyond sorting, giving hope that these techniques will be applicable in broader contexts.
Rejected_Submission
/pdf/a1771310a38972d05a53e3480e72580ff661f5a4.pdf
ICLR.cc/2024/Conference
gWHiS8Z867
Routing with Rich Text Queries via Next-Vertex Prediction Models
Autoregressive modeling of text via transformers has led to recent breakthroughs in language. In this work, we study the effectiveness of this framework for routing problems on graphs. In particular, we aim to develop a learning based routing system that can process rich natural language based queries indicating various desired criteria and produce near optimal routes from the source to the destination. Furthermore, the system should be able to generalize to new geographies not seen during training time. Solving the above problem via combinatorial approaches is challenging since one has to learn specific cost functions over the edges of the graphs for each possible type of query. We instead investigate the efficacy of autoregressive modeling for routing. We propose a multimodal architecture that jointly encodes text and graph data and present a simple way of training the architecture via {\em next token prediction}. In particular, given a text query and a prefix of a ground truth path, we train the network to predict the next vertex on the path. While a priori this approach may seem suboptimal due to the local nature of the predictions made, we show that when done at scale, this yields near optimal performance. We demonstrate the effectiveness of our approach via extensive experiments on synthetic graphs as well as graphs from the OpenStreetMap repository. We also present recommendations for the training techniques, architecture choices and the inference algorithms needed to get the desired performance for such problems.
Rejected_Submission
/pdf/ef33086706656a23b6d655c32e08f95522ece802.pdf
ICLR.cc/2025/Conference
Frhj9T7ihK
All Models are Biased, Some are More Transparent about it: Fully Interpretable and Adjustable Model for Mental Disorder Diagnosis
Recent advances in machine learning have enabled AI applications in mental disorder diagnosis, but many methods remain black-box or rely on post-hoc explanations which are not straightforward or actionable for mental health practitioners. Meanwhile, interpretable methods, such as k-nearest neighbors (k-NN) classification, struggle with complex or high-dimensional data. A network-based k-NN model (NN-kNN) combines this interpretability with the predictive power of neural networks. The model prediction can be fully explained in terms of activated features and neighboring cases. We experimented with the model to predict the risk of depression and interviewed practitioners. The practitioners' feedback emphasized the model's adaptability, integration of clinical expertise, and transparency in the diagnostic process, highlighting its potential to ethically improve the diagnostic precision and confidence of the practitioner.
Rejected_Submission
/pdf/a0381c28bc7f6b723ff75bca86e449f30ae3aa13.pdf
ICLR.cc/2025/Conference
qeY25DwmKO
Foundation Models for Boolean Logic
Boolean logic is fundamental to solving various computational problems, such as Boolean satisfiability (SAT) and model counting, but existing machine learning (ML) approaches for automating algorithm design are computationally expensive and data-intensive. We propose the first foundation model for Boolean logic, leveraging a multi-task dataset of one million instances spanning sixteen tasks and using graph neural networks (GNNs). We evaluated the generalization of the foundation models on held-out tasks; we found that models fine-tuned from the foundation model were substantially more sample efficient and converged much faster than models trained from scratch. We identified a number of crucial design components for training these models, in particular the choice of normalization layer. We showed that a hybrid of different normalization techniques across layers is much more effective than any single normalization layer.
Rejected_Submission
/pdf/11d2f906b403e4689c9848d8a4ee69704ec959c1.pdf
ICLR.cc/2025/Conference
uHkfU4TaPh
DynamicKV: Task-Aware Adaptive KV Cache Compression for Long Context LLMs
Efficiently managing the KV cache in Large Language Models (LLMs) is a critical challenge for long-context processing tasks such as retrieval-augmented generation (RAG), long text summarization, and multi-document analysis. Extending the context length substantially increases the KV cache size, leading to excessive memory consumption. Existing KV cache compression methods enforce a fixed pattern, neglecting task-specific characteristics, which hampers the effective retention of essential information while discarding less important tokens. In this paper, we introduce a novel Task-Aware KV cache mechanism that dynamically adjusts the KV cache size across different layers based on the characteristics of the tasks. Our approach builds on the significant observation of distinct activation patterns across layers in various tasks, which highlights the need for adaptive strategies tailored to each task's unique demands. Based on this insight, we propose DynamicKV, a method that dynamically optimizes token retention by adjusting the number of tokens retained at each layer, adapting to the specific task. DynamicKV establishes global and per-layer maximum KV cache budgets, temporarily retaining the maximum budget for the current layer, and periodically updating the KV cache sizes of all preceding layers during inference. Our method demonstrates exceptional performance on the LongBench dataset, retaining only 1.7\% of the KV cache while preserving 90\%, 87\%, 78\%, and 83\% of the original accuracy for LlaMA-3-8B-Instruct, Mistral-7B-Instruct-v0.2, Qwen2-7B-Instruct, and InternLM-2.5-7B-Chat-1M, respectively. When the retained KV cache size is increased to 6.9\%, the performance becomes nearly indistinguishable from that without any KV cache compression. Notably, even under extreme compression (0.9\%), DynamicKV surpasses state-of-the-art (SOTA) methods by 11\% in the Needle-in-a-Haystack test using Mistral-7B-Instruct-v0.2. The code will be released to the public.
Rejected_Submission
/pdf/c84c757390b87b1ac47363018a12ca385d2da080.pdf
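The DynamicKV entry above adapts per-layer KV-cache budgets to task-specific activation patterns. The sketch below allocates a global token budget across layers in that spirit; the attention-mass statistic and the capping rule are assumptions, not the paper's algorithm.

```python
import numpy as np

def allocate_kv_budgets(attn_scores_per_layer, global_budget, per_layer_max):
    """Distribute a global KV-cache token budget across layers.

    attn_scores_per_layer: list of 1-D arrays, one per layer, giving a per-token
        importance statistic (e.g. accumulated attention received by each cached token).
    global_budget: total number of tokens that may be retained across all layers.
    per_layer_max: hard cap on tokens retained in any single layer.
    Returns the number of tokens each layer keeps.
    """
    layer_mass = np.array([scores.sum() for scores in attn_scores_per_layer], dtype=float)
    weights = layer_mass / layer_mass.sum()
    budgets = np.minimum(np.round(weights * global_budget).astype(int), per_layer_max)
    return budgets.tolist()

# Toy usage: 4 layers with different attention concentration.
scores = [np.random.rand(1024) * w for w in (0.2, 1.0, 3.0, 0.5)]
print(allocate_kv_budgets(scores, global_budget=512, per_layer_max=256))
```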
ICLR.cc/2024/Conference
YUefWMfPoc
How to fix a broken confidence estimator: Evaluating post-hoc methods for selective classification with deep neural networks
This paper addresses the problem of selective classification for deep neural networks, where a model is allowed to abstain from low-confidence predictions to avoid potential errors. We focus on so-called post-hoc methods, which replace the confidence estimator of a given classifier without retraining or modifying it, thus being practically appealing. Considering neural networks with softmax outputs, our goal is to identify the best confidence estimator that can be computed directly from the unnormalized logits. This problem is motivated by the intriguing observation in recent work that many classifiers appear to have a ``broken'' confidence estimator, in the sense that their selective classification performance is much worse than what could be expected by their corresponding accuracies. We perform an extensive experimental study of many existing and proposed confidence estimators applied to 84 pretrained ImageNet classifiers available from popular repositories. Our results show that a simple $p$-norm normalization of the logits, followed by taking the maximum logit as the confidence estimator, can lead to considerable gains in selective classification performance, completely fixing the pathological behavior observed in many classifiers. As a consequence, the selective classification performance of any classifier becomes almost entirely determined by its corresponding accuracy. Moreover, these results are shown to be consistent under distribution shift. We also investigate why certain classifiers innately have a good confidence estimator that apparently cannot be improved by post-hoc methods.
Rejected_Submission
/pdf/0159b5f26a34fe393d971b3c0433aedbc2a97640.pdf
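The post-hoc estimator described in the entry above, a p-norm normalization of the logits followed by taking the maximum logit, is simple enough to sketch directly; the choice p=2 below is illustrative, whereas the paper treats p as a tunable hyperparameter.

```python
import numpy as np

def max_logit_pnorm_confidence(logits, p=2):
    """Post-hoc confidence: normalize the unnormalized logits by their p-norm,
    then take the maximum normalized logit as the confidence score.
    logits: (batch, num_classes) array of raw network outputs.
    """
    norms = np.linalg.norm(logits, ord=p, axis=-1, keepdims=True)
    normalized = logits / (norms + 1e-12)
    return normalized.max(axis=-1)

def selective_predict(logits, threshold):
    """Abstain (return None) whenever the confidence falls below the threshold."""
    conf = max_logit_pnorm_confidence(logits)
    preds = logits.argmax(axis=-1)
    return [int(c) if s >= threshold else None for c, s in zip(preds, conf)]
```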
ICLR.cc/2025/Conference
F4meTCwlxZ
Consistency Guaranteed Causal Graph Recovery with Large Language Models
Causal graph recovery traditionally relies on statistical estimation of observable variables or individual knowledge, both of which suffer from data collection biases and the knowledge limitations of individuals. Leveraging the broad knowledge in scientific corpora, we propose a novel method for causal graph recovery that deduces causal relationships with large language models (LLMs) as a knowledge extractor. Our method extracts associational relationships among variables and then eliminates inconsistent relationships to recover a causal graph using constraint-based causal discovery methods. Compared to other LLM-based methods that directly instruct LLMs to do highly complex causal reasoning, our method shows advantages in causal graph quality on benchmark datasets. More importantly, as causal graphs may evolve when new research results emerge, our method is sensitive to new evidence in the literature and can provide useful information to update causal graphs accordingly.
Rejected_Submission
/pdf/eeece7bd4f1164d30dda00d5c56ed87662ab2321.pdf
ICLR.cc/2025/Conference
599F4CZ0HB
Bench-O-Matic: Automating Benchmark Curation from Crowdsourced Data
The rapid evolution of Large Language Models (LLMs) has outpaced the development of model evaluation, highlighting the need for continuous curation of new, challenging benchmarks. However, manual curation of high-quality, human-aligned benchmarks is expensive and time-consuming. To address this, we introduce Bench-O-Matic, an automated pipeline that leverages LLMs to curate high-quality, open-ended prompts from large, crowd-sourced datasets, enabling continuous benchmark updates without a human in the loop. We apply Bench-O-Matic to datasets such as Chatbot Arena and WildChat-1M, extracting challenging prompts and utilizing LLM-as-a-Judge for automatic model evaluation. To validate benchmark quality, we propose new metrics to measure a benchmark's alignment with human preferences and ability to separate models. We release Eval-O-Matic, a benchmark consisting of 500 challenging prompts curated by Bench-O-Matic. Eval-O-Matic provides 3x higher separation of model performances compared to MT-Bench and achieves 98.6% correlation with human preference rankings, all at a cost of $20. Our work sets a new framework for the scalable curation of automated benchmarks from extensive data.
Rejected_Submission
/pdf/49edb452c360143cb7da6d2a5ca85004241a37e3.pdf
ICLR.cc/2025/Conference
LzycEbgLoi
3D-CT-GPT++: Enhancing 3D Radiology Report Generation with Direct Preference Optimization and Large Vision-Language Models
Automatically generating radiology reports from three-dimensional medical images, such as 3D CT scans, plays a crucial role in modern diagnostics. Current approaches for generating 3D reports often adopt video processing methods, which struggle to effectively capture the relationships along the Z-axis. Additionally, multimodal large language model-based methods for generating 3D image reports face significant limitations, particularly in terms of the image encoder’s ability to represent 3D structures and the hallucinations that arise in generated content. To address these challenges, we propose the 3D-CT-GPT++ model. This model integrates the optimized 3D image encoder CTViT-V, specifically designed for chest CT scans, and builds upon the LLaVA-1.5 architecture. Furthermore, we introduce \textit{Direct Preference Optimization (DPO)}, where GPT-4 is used to score the outputs of our fully fine-tuned (SFT) model, creating a preference dataset for subsequent DPO training. DPO significantly reduces hallucinations in the report generation process, ensuring the generated reports are more aligned with clinical needs. We fine-tuned the model on both high-quality private and public datasets to ensure clinical relevance. Extensive experiments were conducted using standard natural language generation (NLG) evaluation metrics, including BLEU, METEOR, ROUGE-L, and GREEN, to assess the report generation performance. Experimental results demonstrate that 3D-CT-GPT++ significantly outperforms existing methods in terms of accuracy, fluency, clinical factual consistency, and clinical relevance, advancing the automation of 3D medical report generation.
Rejected_Submission
/pdf/ede312009898ee5bdf5979461144169d4c265f31.pdf
ICLR.cc/2025/Conference
RzdtpxL0H5
DDAD: A Two-Pronged Adversarial Defense Based on Distributional Discrepancy
Statistical adversarial data detection (SADD) detects whether an upcoming batch contains adversarial examples (AEs) by measuring the distributional discrepancies between clean examples (CEs) and AEs. In this paper, we reveal the potential strength of SADD-based methods by theoretically showing that minimizing distributional discrepancy can help reduce the expected loss on AEs. Nevertheless, despite these advantages, SADD-based methods have a potential limitation: they discard inputs detected as AEs, leading to the loss of clean information within those inputs. To address this limitation, we propose a two-pronged adversarial defense method, named Distributional-Discrepancy-based Adversarial Defense (DDAD). In the training phase, DDAD first optimizes the test power of the maximum mean discrepancy (MMD) to derive MMD-OPT, and then trains a denoiser by minimizing the MMD-OPT between CEs and AEs. In the inference phase, DDAD first leverages MMD-OPT to differentiate CEs and AEs, and then applies a two-pronged process: (1) directly feeding the detected CEs into the classifier, and (2) removing noise from the detected AEs by the distributional-discrepancy-based denoiser. Extensive experiments show that DDAD outperforms current state-of-the-art (SOTA) defense methods by notably improving clean and robust accuracy on CIFAR-10 and ImageNet-1K against adaptive white-box attacks. The code is available at: https://anonymous.4open.science/r/DDAD-DB60.
Rejected_Submission
/pdf/f566e57d0ed1f63e2292e9e393b270c9e0235961.pdf
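The two-pronged inference routine in the DDAD entry above can be sketched as follows; `mmd_opt_score`, `denoiser`, `classifier`, and the thresholding rule are assumed stand-ins for the trained components, not the paper's exact implementation.

```python
def ddad_inference(batch, mmd_opt_score, denoiser, classifier, threshold):
    """Sketch of distributional-discrepancy-based two-pronged inference.

    mmd_opt_score(batch) -> float   # optimized-MMD statistic against a clean reference set
    denoiser(batch)      -> batch   # removes adversarial noise from detected AEs
    classifier(batch)    -> preds
    If the batch looks adversarial, denoise before classifying; otherwise classify directly.
    """
    if mmd_opt_score(batch) > threshold:   # batch detected as adversarial examples
        return classifier(denoiser(batch))
    return classifier(batch)               # batch detected as clean examples
```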
ICLR.cc/2024/Conference
v1VvCWJAL8
Towards Characterizing Domain Counterfactuals for Invertible Latent Causal Models
Answering counterfactual queries has important applications such as explainability, robustness, and fairness but is challenging when the causal variables are unobserved and the observations are non-linear mixtures of these latent variables, such as pixels in images. One approach is to recover the latent Structural Causal Model (SCM), which may be infeasible in practice due to requiring strong assumptions, e.g., linearity of the causal mechanisms or perfect atomic interventions. Meanwhile, more practical ML-based approaches using naive domain translation models to generate counterfactual samples lack theoretical grounding and may construct invalid counterfactuals. In this work, we strive to strike a balance between practicality and theoretical guarantees by analyzing a specific type of causal query called *domain counterfactuals*, which hypothesizes what a sample would have looked like if it had been generated in a different domain (or environment). We show that recovering the latent SCM is unnecessary for estimating domain counterfactuals, thereby sidestepping some of the theoretic challenges. By assuming invertibility and sparsity of intervention, we prove domain counterfactual estimation error can be bounded by a data fit term and intervention sparsity term. Building upon our theoretical results, we develop a theoretically grounded practical algorithm that simplifies the modeling process to generative model estimation under autoregressive and shared parameter constraints that enforce intervention sparsity. Finally, we show an improvement in counterfactual estimation over baseline methods through extensive simulated and image-based experiments.
ICLR 2024 poster
/pdf/82798bd26dba629968e475108998135f769de862.pdf
ICLR.cc/2025/Conference
Vn23PakSbM
Stacking Small Language Models for Generalizability
Recent advances show that large language models (LLMs) generalize strong performance across different natural language benchmarks. However, the large size of LLMs makes training and inference expensive and impractical to run in resource-limited settings. This paper introduces a new approach called fine-tuning stacks of language models (FSLM), which involves stacking small language models (SLM) as an alternative to LLMs. By fine-tuning each SLM to perform a specific task, this approach breaks down high level reasoning into multiple lower-level steps that specific SLMs are responsible for. As a result, FSLM allows for lower training and inference costs, and also improves model interpretability as each SLM communicates with the subsequent one through natural language. By evaluating FSLM on common natural language benchmarks, this paper highlights promising early results toward generalizable performance using FSLM as a cost-effective alternative to LLMs.
Desk_Rejected_Submission
/pdf/17c8e89fd9432a0264ac114aaa9fcf2a4b4a61e9.pdf
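The stacking idea in the FSLM entry above, where each small model handles one sub-task and passes natural-language output to the next, reduces to a simple pipeline. The sketch below assumes the stacked models are plain text-to-text callables; it is not the paper's training or evaluation code.

```python
def fslm_pipeline(question, stacked_models):
    """Sketch of fine-tuned stacks of small language models (FSLM):
    each SLM handles one sub-task and hands its natural-language output to
    the next, so every intermediate step stays human-readable.
    stacked_models: list of callables mapping text -> text (assumed fine-tuned SLMs).
    """
    intermediate = question
    trace = [intermediate]
    for slm in stacked_models:
        intermediate = slm(intermediate)   # each hop is inspectable natural language
        trace.append(intermediate)
    return intermediate, trace
```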
ICLR.cc/2025/Conference
ytvWZEiywp
EVINCE: Optimizing Adversarial LLM Dialogues via Conditional Statistics and Information Theory
This paper introduces EVINCE (Entropy and Variation IN Conditional Exchanges), a framework that optimizes multi-LLM dialogues using conditional statistics and information theory. EVINCE introduces dual entropy optimization to balance perspective diversity with prior knowledge, providing quantitative measures for modulating LLM interactions. Through information-theoretic metrics and mutual information optimization, the framework demonstrates consistent improvement over single-LLM performance in applications ranging from disease diagnosis to news debiasing. We present theoretical foundations and empirical validation for this structured approach to LLM collaboration.
Rejected_Submission
/pdf/72a407f64293ee860edbed1d03f9869d1363287d.pdf
ICLR.cc/2025/Conference
eHehzSDUFp
Knowledge Entropy Decay during Language Model Pretraining Hinders New Knowledge Acquisition
In this work, we investigate how a model's tendency to broadly integrate its parametric knowledge evolves throughout pretraining, and how this behavior affects overall performance, particularly in terms of knowledge acquisition and forgetting. We introduce the concept of knowledge entropy, which quantifies the range of memory sources the model engages with; high knowledge entropy indicates that the model utilizes a wide range of memory sources, while low knowledge entropy suggests reliance on specific sources with greater certainty. Our analysis reveals a consistent decline in knowledge entropy as pretraining advances. We also find that the decline is closely associated with a reduction in the model's ability to acquire and retain knowledge, leading us to conclude that diminishing knowledge entropy (smaller number of active memory sources) impairs the model's knowledge acquisition and retention capabilities. We find further support for this by demonstrating that increasing the activity of inactive memory sources enhances the model's capacity for knowledge acquisition and retention.
ICLR 2025 Oral
/pdf/e18ed346a39708e4595ffb10abd9cd002de66359.pdf
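Knowledge entropy, as described in the entry above, measures how broadly a model spreads its use of parametric memory sources. Below is a hedged sketch of such a measure; the per-source coefficients are an illustrative stand-in (e.g. averaged feed-forward key activations over a probe corpus), not the paper's exact quantity.

```python
import numpy as np

def knowledge_entropy(memory_coefficients, eps=1e-12):
    """Entropy of how a model's parametric memory usage is distributed.

    memory_coefficients: non-negative scores, one per memory source.
    High entropy -> usage spread across many sources; low entropy -> a few dominant sources.
    """
    coeffs = np.abs(np.asarray(memory_coefficients, dtype=float))
    probs = coeffs / (coeffs.sum() + eps)
    return float(-(probs * np.log(probs + eps)).sum())

broad = knowledge_entropy(np.ones(1000))          # uniform usage -> ~log(1000)
narrow = knowledge_entropy([1.0] + [1e-6] * 999)  # one dominant source -> near 0
print(broad, narrow)
```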
ICLR.cc/2025/Conference
jZffxvubJ9
Treatment Rule Optimization Under Counterfactual Temporal Point Processes with Latent States
In high-stakes areas like healthcare, retrospective counterfactual analysis—such as evaluating what might have happened if treatments were administered earlier, later, or differently—is vital for refining treatment strategies. This paper proposes a counterfactual treatment optimization framework using temporal point processes to model outcome event sequences. By sampling potential outcome events under new treatment decision rules, our approach seeks to optimize treatment strategies in a counterfactual setting. To achieve accurate counterfactual evaluation of new decision rules, we explicitly introduce latent states into the modeling of temporal point processes. Our method first infers the latent states and associated noise, followed by counterfactual sampling of outcome events. This approach rigorously addresses the complexities introduced by latent states, effectively removing biases in the evaluation of treatment strategies. By proving the identifiability of model parameters in the presence of these states, we provide theoretical guarantees that enhance the reliability and robustness of the counterfactual analysis. By incorporating latent states and proving identifiability, our framework not only improves the accuracy and robustness of treatment decision rules but also offers actionable insights for optimizing healthcare interventions. This method holds significant potential for improving treatment strategies, particularly in healthcare scenarios where patient symptoms are complex and high-dimensional.
Rejected_Submission
/pdf/8ec6cecd1a2587bcb490d1f2196da2d6587e21ca.pdf
ICLR.cc/2018/Conference
H1Xw62kRZ
Leveraging Grammar and Reinforcement Learning for Neural Program Synthesis
Program synthesis is the task of automatically generating a program consistent with a specification. Recent years have seen proposal of a number of neural approaches for program synthesis, many of which adopt a sequence generation paradigm similar to neural machine translation, in which sequence-to-sequence models are trained to maximize the likelihood of known reference programs. While achieving impressive results, this strategy has two key limitations. First, it ignores Program Aliasing: the fact that many different programs may satisfy a given specification (especially with incomplete specifications such as a few input-output examples). By maximizing the likelihood of only a single reference program, it penalizes many semantically correct programs, which can adversely affect the synthesizer performance. Second, this strategy overlooks the fact that programs have a strict syntax that can be efficiently checked. To address the first limitation, we perform reinforcement learning on top of a supervised model with an objective that explicitly maximizes the likelihood of generating semantically correct programs. For addressing the second limitation, we introduce a training procedure that directly maximizes the probability of generating syntactically correct programs that fulfill the specification. We show that our contributions lead to improved accuracy of the models, especially in cases where the training data is limited.
Accept (Poster)
/pdf/f58500ef2f2e08832b2b72e534cc740ee50ac0b0.pdf
ICLR.cc/2025/Conference
4sJ2FYE65U
Neural Multi-Objective Combinatorial Optimization via Graph-Image Multimodal Fusion
Existing neural multi-objective combinatorial optimization (MOCO) methods still exhibit an optimality gap since they fail to fully exploit the intrinsic features of problem instances. A significant factor contributing to this shortfall is their reliance solely on graph-modal information. To overcome this, we propose a novel graph-image multimodal fusion (GIMF) framework that enhances neural MOCO methods by integrating graph and image information of the problem instances. Our GIMF framework comprises three key components: (1) a constructed coordinate image to better represent the spatial structure of the problem instance, (2) a problem-size adaptive resolution strategy during the image construction process to improve the cross-size generalization of the model, and (3) a multimodal fusion mechanism with modality-specific bottlenecks to efficiently couple graph and image information. We demonstrate the versatility of our GIMF by implementing it with two state-of-the-art neural MOCO backbones. Experimental results on classic MOCO problems show that our GIMF significantly outperforms state-of-the-art neural MOCO methods and exhibits superior generalization capability.
ICLR 2025 Poster
/pdf/f294b500eea45eec61844c76ebe9269406cfce3a.pdf
ICLR.cc/2025/Conference
7652tHbbVE
FlexMotion: Lightweight, Physics-Aware, and Controllable Human Motion Generation
Lightweight, controllable, and physically plausible human motion synthesis is crucial for animation, virtual reality, robotics, and human-computer interaction applications. Existing methods often compromise between computational efficiency, physical realism, or spatial controllability. We propose FlexMotion, a novel framework that leverages a computationally lightweight diffusion model operating in the latent space, eliminating the need for physics simulators and enabling fast and efficient training. FlexMotion employs a multimodal pre-trained Transformer encoder-decoder, integrating joint locations, contact forces, joint actuations and muscle activations to ensure the physical plausibility of the generated motions. FlexMotion also introduces a plug-and-play module, which adds spatial controllability over a range of motion parameters (e.g., joint locations, joint actuations, contact forces, and muscle activations). Our framework achieves realistic motion generation with improved efficiency and control, setting a new benchmark for human motion synthesis. We evaluate FlexMotion on extended datasets and demonstrate its superior performance in terms of realism, physical plausibility, and controllability.
Rejected_Submission
/pdf/55acf31294a70f74ab80cd15e304566fda5d445e.pdf
ICLR.cc/2025/Conference
UuZDosomkp
ConML: A Universal Meta-Learning Framework with Task-Level Contrastive Learning
Meta-learning enables learning systems to adapt quickly to new tasks, similar to humans. To emulate this human-like rapid learning and enhance alignment and discrimination abilities, we propose ConML, a universal meta-learning framework that can be applied to various meta-learning algorithms without relying on specific model architectures or target models. The core of ConML is task-level contrastive learning, which extends contrastive learning from the representation space in unsupervised learning to the model space in meta-learning. By leveraging task identity as an additional supervision signal during meta-training, we contrast the outputs of the meta-learner in the model space, minimizing inner-task distance (between models trained on different subsets of the same task) and maximizing inter-task distance (between models from different tasks). We demonstrate that ConML integrates seamlessly with optimization-based, metric-based, and amortization-based meta-learning algorithms, as well as in-context learning, resulting in performance improvements across diverse few-shot learning tasks.
Rejected_Submission
/pdf/1ebc12eabc3a24de9538ac60673fca1162fe33a7.pdf
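An illustrative sketch of the task-level contrastive idea in the ConML abstract above, under the assumption that adapted models can be summarized as vectors: pairs produced from subsets of the same task are pulled together, and pairs from different tasks are pushed apart with a hinge margin. The vectorization and the specific loss form are invented for illustration, not ConML's actual objective.

```python
import numpy as np

def task_contrastive_loss(model_vecs, task_ids, margin=1.0):
    """Toy task-level contrastive loss over vectorized model outputs.

    Pulls together models produced from subsets of the same task and pushes
    apart models from different tasks. model_vecs: (n, d) array-like.
    """
    vecs = np.asarray(model_vecs, dtype=float)
    loss, pairs = 0.0, 0
    for i in range(len(vecs)):
        for j in range(i + 1, len(vecs)):
            d = np.linalg.norm(vecs[i] - vecs[j])
            if task_ids[i] == task_ids[j]:
                loss += d ** 2                        # inner-task: minimize distance
            else:
                loss += max(0.0, margin - d) ** 2     # inter-task: enforce a margin
            pairs += 1
    return loss / pairs

# Four hypothetical adapted-model embeddings from two tasks.
vecs = [[0.10, 0.20], [0.12, 0.18], [0.90, -0.40], [0.88, -0.35]]
print(task_contrastive_loss(vecs, task_ids=[0, 0, 1, 1]))
```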
ICLR.cc/2024/Conference
cZttUMTiPL
Uncertainty Quantification via Stable Distribution Propagation
We propose a new approach for propagating stable probability distributions through neural networks. Our method is based on local linearization, which we show to be an optimal approximation in terms of total variation distance for the ReLU non-linearity. This allows propagating Gaussian and Cauchy input uncertainties through neural networks to quantify their output uncertainties. To demonstrate the utility of propagating distributions, we apply the proposed method to predicting calibrated confidence intervals and selective prediction on out-of-distribution data. The results demonstrate a broad applicability of propagating distributions and show the advantages of our method over other approaches such as moment matching.
ICLR 2024 poster
/pdf/15a305fcc44fb161532065b647dc099ab1772f45.pdf
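A rough numerical sketch of the local-linearization idea in the abstract above, for the Gaussian case: moments are propagated exactly through an affine layer, and the ReLU is linearized at the input mean so the output variance is scaled by the squared local slope. This only illustrates the general principle (diagonal covariance, no Cauchy case) and is not the paper's full method.

```python
import numpy as np

def propagate_affine(mu, var, W, b):
    """Mean/variance propagation through y = W x + b for independent Gaussian inputs."""
    return W @ mu + b, (W ** 2) @ var

def propagate_relu_linearized(mu, var):
    """Local linearization of ReLU at the mean: slope is 1 where mu > 0, else 0."""
    slope = (mu > 0).astype(float)
    return np.maximum(mu, 0.0), (slope ** 2) * var

rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 2)), np.zeros(3)
mu, var = np.array([0.5, -1.0]), np.array([0.1, 0.2])   # diagonal input uncertainty
mu, var = propagate_affine(mu, var, W, b)
mu, var = propagate_relu_linearized(mu, var)
print(mu, var)
```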
ICLR.cc/2025/Conference
bhD0EQWNut
Naturality-Guided Hyperedge Disentanglement for Message Passing Hypergraph Neural Network
Hypergraph data structure has been widely used to store information or meaning derived from group interactions, meaning that each hyperedge inherently contains the context of its interactions. For example, a set of genes or a genetic pathway can be represented as a hyperedge to express the interaction of multiple genes that collaboratively perform a biological function (i.e., interaction context). However, most existing hypergraph neural networks cannot reflect the interaction context of each hyperedge due to their limited capability in capturing important or relevant factors therein. In this paper, we propose a \textbf{simple but effective} hyperedge disentangling method, \textbf{Natural-HNN}, that captures inherent hyperedge types or the interaction context of a hyperedge. We devised a novel guidance for hyperedge disentanglement based on the naturality condition in category theory. In our experiments, we applied our model to hypergraphs of genetic pathways for the cancer subtype classification task, and showed that our model outperforms baselines by capturing the functional semantic similarity of genetic pathways.
Rejected_Submission
/pdf/4b33111b7f18696e7566965c78875fe6b4e285d9.pdf
ICLR.cc/2024/Conference
vt5mnLVIVo
Grokking as the transition from lazy to rich training dynamics
We propose that the grokking phenomenon, where the train loss of a neural network decreases much earlier than its test loss, can arise due to a neural network transitioning from lazy training dynamics to a rich, feature learning regime. To illustrate this mechanism, we study the simple setting of vanilla gradient descent on a polynomial regression problem with a two layer neural network which exhibits grokking without regularization in a way that cannot be explained by existing theories. We identify sufficient statistics for the test loss of such a network, and tracking these over training reveals that grokking arises in this setting when the network first attempts to fit a kernel regression solution with its initial features, followed by late-time feature learning where a generalizing solution is identified after train loss is already low. We find that the key determinants of grokking are the rate of feature learning---which can be controlled precisely by parameters that scale the network output---and the alignment of the initial features with the target function $y(x)$. We argue this delayed generalization arises when (1) the top eigenvectors of the initial neural tangent kernel and the task labels $y(x)$ are misaligned, but (2) the dataset size is large enough so that it is possible for the network to generalize eventually, but not so large that train loss perfectly tracks test loss at all epochs, and (3) the network begins training in the lazy regime so does not learn features immediately. We conclude with evidence that this transition from lazy (linear model) to rich training (feature learning) can control grokking in more general settings, like on MNIST, one-layer Transformers, and student-teacher networks.
ICLR 2024 poster
/pdf/7234569d5729cb29c90979a56635327e1789c3a6.pdf
ICLR.cc/2024/Conference
9RLC0J2N9n
SynBench: Evaluating Pretrained Representations for Image Classification using Synthetic Data
Fine-tuning large models pretrained at scale on broad data for solving downstream tasks has made considerable success in recent years. There seems to be indeed an ongoing paradigm shift in deep learning from task-centric model design to task-agnostic representation learning and task-specific fine-tuning. Specifically, the representations of pretrained models are used as a foundation for different downstream tasks. This paper proposes a new task-agnostic framework, \textit{SynBench}, to measure the quality of pretrained representations for image classification using synthetic data. To address the challenge of task-agnostic data-free evaluation, we design synthetic binary classification proxy tasks with class-conditional Gaussian mixtures. This way we probe and compare the robustness-accuracy performance on pretrained representations and input synthetic data. SynBench offers a holistic quantitative evaluation, informs the model designers of the intrinsic performance, and spares efforts on task-specific finetuning with real-life data. Evaluated with various pretrained vision models for different downstream image classification tasks, the experimental results show that our SynBench score matches well the actual linear probing performance of the pretrained model when fine-tuned on downstream tasks using real-life data. Finally, SynBench can also be used in robust linear probing to mitigate the robustness-accuracy tradeoff in downstream tasks.
Rejected_Submission
/pdf/2bc269dae5687c921cfd9bac91953db3c72438e9.pdf
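A toy sketch of the class-conditional Gaussian-mixture proxy task mentioned in the SynBench abstract above: synthetic binary data are drawn from two shifted Gaussians, and a stand-in encoder's representations are scored with a least-squares linear probe. The encoder and the probing/scoring details are placeholders, not SynBench's actual metric.

```python
import numpy as np

def make_gaussian_mixture(n, dim, sep=2.0, seed=0):
    """Synthetic binary task: two class-conditional Gaussians separated along one axis."""
    rng = np.random.default_rng(seed)
    shift = np.zeros(dim); shift[0] = sep
    x0 = rng.normal(size=(n // 2, dim)) - shift / 2
    x1 = rng.normal(size=(n // 2, dim)) + shift / 2
    x = np.vstack([x0, x1])
    y = np.array([0] * (n // 2) + [1] * (n // 2))
    return x, y

def linear_probe_accuracy(z, y):
    """Least-squares linear probe on representations z; crude accuracy estimate."""
    Z = np.hstack([z, np.ones((len(z), 1))])            # add bias column
    w, *_ = np.linalg.lstsq(Z, 2 * y - 1, rcond=None)   # regress onto {-1, +1}
    return float(((Z @ w > 0).astype(int) == y).mean())

encoder = lambda x: np.tanh(x)                           # placeholder "pretrained" encoder
x, y = make_gaussian_mixture(400, dim=8)
print(linear_probe_accuracy(encoder(x), y))
```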
ICLR.cc/2025/Conference
pXlmOmlHJZ
ICLR: In-Context Learning of Representations
Recent work demonstrates that structured patterns in pretraining data influence how representations of different concepts are organized in a large language model’s (LLM) internals, with such representations then driving downstream abilities. Given the open-ended nature of LLMs, e.g., their ability to in-context learn novel tasks, we ask whether models can flexibly alter their semantically grounded organization of concepts. Specifically, if we provide in-context exemplars wherein a concept plays a different role than what the pretraining data suggests, can models infer these novel semantics and reorganize representations in accordance with them? To answer this question, we define a toy “graph tracing” task wherein the nodes of the graph are referenced via concepts seen during training (e.g., apple, bird, etc.), and the connectivity of the graph is defined via some predefined structure (e.g., a square grid). Given exemplars that indicate traces of random walks on the graph, we analyze intermediate representations of the model and find that as the amount of context is scaled, there is a sudden re-organization of representations according to the graph’s structure. Further, we find that when reference concepts have correlations in their semantics (e.g., Monday, Tuesday, etc.), the context-specified graph structure is still present in the representations, but is unable to dominate the pretrained structure. To explain these results, we analogize our task to energy minimization for a predefined graph topology, which shows that getting non-trivial performance on the task requires the model to infer a connected component. Overall, our findings indicate context-size may be an underappreciated scaling axis that can flexibly re-organize model representations, unlocking novel capabilities.
ICLR 2025 Poster
/pdf/68f0954324b0bb9a57a8d23b52d90953165dd8a1.pdf
ICLR.cc/2025/Conference
E5YnuidZ9W
Understanding Mode Connectivity via Parameter Space Symmetry
Neural network minima have been observed to be connected by curves along which train and test loss remain nearly constant, a phenomenon known as mode connectivity. While this has enabled applications such as model merging and fine-tuning, its theoretical explanation remains unclear. We propose a new approach to exploring the connectedness of minima using parameter space symmetry. By linking the topology of symmetry groups to that of the minima, we derive the number of connected components of the minima of linear networks and show that skip connections reduce this number. We then examine when mode connectivity and linear mode connectivity hold or fail, using parameter symmetries which account for a significant part of the minimum. Finally, we provide explicit expressions for connecting curves in the minima induced by symmetry. Using the curvature of these curves, we derive conditions under which linear mode connectivity approximately holds. Our analysis highlights the role of continuous symmetries in understanding the neural network loss landscape.
Rejected_Submission
/pdf/5a0b99f55bf68ddfae521b3f9050556ffaf77b75.pdf
ICLR.cc/2024/Conference
M0xK8nPGvt
Exploiting Causal Graph Priors with Posterior Sampling for Reinforcement Learning
Posterior sampling allows exploitation of prior knowledge on the environment's transition dynamics to improve the sample efficiency of reinforcement learning. The prior is typically specified as a class of parametric distributions, the design of which can be cumbersome in practice, often resulting in the choice of uninformative priors. In this work, we propose a novel posterior sampling approach in which the prior is given as a (partial) causal graph over the environment's variables. The latter is often more natural to design, such as listing known causal dependencies between biometric features in a medical treatment study. Specifically, we propose a hierarchical Bayesian procedure, called C-PSRL, simultaneously learning the full causal graph at the higher level and the parameters of the resulting factored dynamics at the lower level. We provide an analysis of the Bayesian regret of C-PSRL that explicitly connects the regret rate with the degree of prior knowledge. Our numerical evaluation conducted in illustrative domains confirms that C-PSRL strongly improves the efficiency of posterior sampling with an uninformative prior while performing close to posterior sampling with the full causal graph.
ICLR 2024 poster
/pdf/e1855e2abd067d42d9cd6076af71eb313492a804.pdf
ICLR.cc/2024/Conference
KgaBScZ4VI
Language Model Cascades: Token-Level Uncertainty And Beyond
Recent advances in language models (LMs) have led to significant improvements in quality on complex NLP tasks, but at the expense of increased inference costs. A simple strategy to achieve more favorable cost-quality tradeoffs is cascading: here, a small model is invoked for most “easy” instances, while a few “hard” instances are deferred to the large model. While the principles underpinning effective cascading are well-studied for classification tasks — with deferral based on predicted class uncertainty favored theoretically and practically — a similar understanding is lacking for generative LM tasks. In this work, we initiate a systematic study of deferral rules for LM cascades. We begin by examining the natural extension of predicted class uncertainty to generative LM tasks, namely, the predicted sequence uncertainty. We show that this measure suffers from the length bias problem, either over- or under-emphasizing outputs based on their lengths. This is because LMs produce a sequence of uncertainty values, one for each output token; and moreover, the number of output tokens is variable across different examples. To mitigate the length bias, we propose to exploit the richer token-level uncertainty information implicit in generative LMs. We argue that naive predicted sequence uncertainty corresponds to a simple aggregation of these uncertainties. By contrast, we show that incorporating token-level uncertainty through learned post-hoc deferral rules can significantly outperform such simple aggregation strategies, via experiments on a range of natural language benchmarks with FLAN-T5 models. We further show that incorporating embeddings from the smaller model and intermediate layers of the larger model can give an additional boost in the overall cost-quality tradeoff.
ICLR 2024 poster
/pdf/c12ed09310cf9e30337a888bd967a35d838624dc.pdf
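A small sketch of the length-bias problem discussed in the cascade abstract above: summing per-token log-probabilities makes a longer output look far less certain than a shorter one even when every token is equally confident, whereas a per-token average does not. The numbers are invented for illustration; learned post-hoc deferral rules over the token-level values are not reproduced here.

```python
def seq_logprob(token_logprobs):
    """Joint sequence log-probability: sum of per-token log-probabilities."""
    return sum(token_logprobs)

def mean_token_logprob(token_logprobs):
    """Length-normalized aggregation of the same token-level values."""
    return sum(token_logprobs) / len(token_logprobs)

short = [-0.1, -0.2]       # confident short answer
long = [-0.1] * 30         # equally confident per token, but much longer

# Summed log-prob penalizes the longer output purely for its length ...
print(seq_logprob(short), seq_logprob(long))              # -0.3 vs -3.0
# ... while the per-token average treats both as similarly confident.
print(mean_token_logprob(short), mean_token_logprob(long))
```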
ICLR.cc/2018/Conference
HJ3d2Ax0-
Benefits of Depth for Long-Term Memory of Recurrent Networks
The key attribute that drives the unprecedented success of modern Recurrent Neural Networks (RNNs) on learning tasks which involve sequential data, is their ever-improving ability to model intricate long-term temporal dependencies. However, a well established measure of RNNs' long-term memory capacity is lacking, and thus formal understanding of their ability to correlate data throughout time is limited. Though depth efficiency in convolutional networks is well established by now, it does not suffice in order to account for the success of deep RNNs on inputs of varying lengths, and the need to address their 'time-series expressive power' arises. In this paper, we analyze the effect of depth on the ability of recurrent networks to express correlations ranging over long time-scales. To meet the above need, we introduce a measure of the information flow across time that can be supported by the network, referred to as the Start-End separation rank. Essentially, this measure reflects the distance of the function realized by the recurrent network from a function that models no interaction whatsoever between the beginning and end of the input sequence. We prove that deep recurrent networks support Start-End separation ranks which are exponentially higher than those supported by their shallow counterparts. Moreover, we show that the ability of deep recurrent networks to correlate different parts of the input sequence increases exponentially as the input sequence extends, while that of vanilla shallow recurrent networks does not adapt to the sequence length at all. Thus, we establish that depth brings forth an overwhelming advantage in the ability of recurrent networks to model long-term dependencies, and provide an exemplar of quantifying this key attribute which may be readily extended to other RNN architectures of interest, e.g. variants of LSTM networks. We obtain our results by considering a class of recurrent networks referred to as Recurrent Arithmetic Circuits (RACs), which merge the hidden state with the input via the Multiplicative Integration operation.
Invite to Workshop Track
/pdf/7152f854fbc6f08a01756c79f52aaf2eb2e28afa.pdf
ICLR.cc/2025/Conference
MsUhByb3CM
Extracting Symbolic Sequences from Visual Representations via Self-Supervised Learning
In this paper, we explore the potential of abstracting complex visual information into discrete, structured symbolic sequences using self-supervised learning (SSL). Inspired by how language abstracts and organizes information to enable better reasoning and generalization, we propose a novel approach for generating symbolic representations from visual data. To learn these sequences, we extend the DINO framework to handle both visual and symbolic information. Initial experiments suggest that the generated symbolic sequences capture a meaningful level of abstraction, though further refinement is required. An advantage of our method is its interpretability: the sequences are produced by a decoder transformer using cross-attention, allowing attention maps to be linked to specific symbols and offering insight into how these representations correspond to image regions. This approach lays the foundation for creating interpretable symbolic representations with potential applications in high-level scene understanding.
Rejected_Submission
/pdf/7a4483250a66910a8ab97f5d919432cfce9b7c3a.pdf
ICLR.cc/2025/Conference
QqziJAdev9
$\alpha$-DPO: Adaptive Reward Margin is What Direct Preference Optimization Needs
Aligning large language models (LLMs) with human values and intentions is crucial for their utility, honesty, and safety. Reinforcement learning from human feedback (RLHF) is a popular approach to achieve this alignment, but it faces challenges in computational efficiency and training stability. Recent methods like Direct Preference Optimization (DPO) and Simple Preference Optimization (SimPO) have proposed offline alternatives to RLHF, simplifying the process by reparameterizing the reward function. However, DPO depends on a potentially suboptimal reference model, and SimPO's assumption of a fixed target reward margin may lead to suboptimal decisions in diverse data settings. In this work, we propose \(\alpha\)-DPO, an adaptive preference optimization algorithm designed to address these limitations by introducing a dynamic reward margin. Specifically, \(\alpha\)-DPO employs an adaptive preference distribution, balancing the policy model and the reference model to achieve personalized reward margins. We provide theoretical guarantees for \(\alpha\)-DPO, demonstrating its effectiveness as a surrogate optimization objective and its ability to balance alignment and diversity through KL divergence control. Empirical evaluations on AlpacaEval 2 and Arena-Hard show that \(\alpha\)-DPO consistently outperforms DPO and SimPO across various model settings, establishing it as a robust approach for fine-tuning LLMs. Our method achieves significant improvements in win rates, highlighting its potential as a powerful tool for LLM alignment.
Rejected_Submission
/pdf/2200c1aa8a06e092ba6d592e97670e5a09dca5cc.pdf
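A hedged sketch of the margin idea discussed in the abstract above: a generic DPO/SimPO-style pairwise loss in which the reward margin can be zero (DPO-like), a fixed constant (SimPO-like), or supplied per example. The reward terms and the adaptive-margin rule below are placeholders; the abstract does not specify $\alpha$-DPO's actual margin computation.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def preference_loss(logp_chosen, logp_rejected, beta=2.0, margin=0.0):
    """Generic pairwise preference loss: -log sigma(beta * (r_w - r_l) - margin).

    margin=0 mimics a plain DPO-style objective (rewards here are placeholder
    log-probabilities); a fixed margin mimics SimPO's constant target; a
    per-example margin is the kind of adaptive term the abstract alludes to.
    """
    return -math.log(sigmoid(beta * (logp_chosen - logp_rejected) - margin))

# Hypothetical policy log-probabilities for a chosen/rejected completion pair.
lp_w, lp_l = -3.2, -4.1
print(preference_loss(lp_w, lp_l, margin=0.0))            # no margin
print(preference_loss(lp_w, lp_l, margin=0.5))            # fixed target margin
adaptive_margin = 0.3 * abs(lp_w - lp_l)                  # invented per-pair margin
print(preference_loss(lp_w, lp_l, margin=adaptive_margin))
```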
ICLR.cc/2024/Conference
3mnWvUZIXt
Towards Principled Representation Learning from Videos for Reinforcement Learning
We study pre-training representations for decision-making using video data, which is abundantly available for tasks such as game agents and software testing. Even though significant empirical advances have been made on this problem, a theoretical understanding remains absent. We initiate the theoretical investigation into principled approaches for representation learning and focus on learning the latent state representations of the underlying MDP using video data. We study two types of settings: one where there is iid noise in the observation, and a more challenging setting where there is also the presence of exogenous noise, which is non-iid noise that is temporally correlated, such as the motion of people or cars in the background. We study three commonly used approaches: autoencoding, temporal contrastive learning, and forward modeling. We prove upper bounds for temporal contrastive learning and forward modeling in the presence of only iid noise. We show that these approaches can learn the latent state and use it to do efficient downstream RL with polynomial sample complexity. When exogenous noise is also present, we establish a lower bound result showing that the sample complexity of learning from video data can be exponentially worse than learning from action-labeled trajectory data. This partially explains why reinforcement learning with video pre-training is hard. We evaluate these representational learning methods in two visual domains, yielding results that are consistent with our theoretical findings.
ICLR 2024 spotlight
/pdf/030d9ee1692e81df35ac143b32eb5beb1c384730.pdf
ICLR.cc/2024/Conference
gyfXuRfxW2
Learning Polynomial Problems with $SL(2, \mathbb{R})$-Equivariance
Optimizing and certifying the positivity of polynomials are fundamental primitives across mathematics and engineering applications, from dynamical systems to operations research. However, solving these problems in practice requires large semidefinite programs, with poor scaling in dimension and degree. In this work, we demonstrate for the first time that neural networks can effectively solve such problems in a data-driven fashion, achieving tenfold speedups while retaining high accuracy. Moreover, we observe that these polynomial learning problems are equivariant to the non-compact group $SL(2,\mathbb{R})$, which consists of area-preserving linear transformations. We therefore adapt our learning pipelines to accommodate this structure, including data augmentation, a new $SL(2,\mathbb{R})$-equivariant architecture, and an architecture equivariant with respect to its maximal compact subgroup, $SO(2, \mathbb{R})$. Surprisingly, the most successful approaches in practice do not enforce equivariance to the entire group, which we prove arises from an unusual lack of architecture universality for $SL(2,\mathbb{R})$ in particular. A consequence of this result, which is of independent interest, is that there exists an equivariant function for which there is no sequence of equivariant polynomials multiplied by arbitrary invariants that approximates the original function. This is a rare example of a symmetric problem where data augmentation outperforms a fully equivariant architecture, and provides interesting lessons in both theory and practice for other problems with non-compact symmetries.
ICLR 2024 poster
/pdf/eb555a545e6faf35eb426d297e554f50d4687419.pdf
ICLR.cc/2018/Conference
HJ4IhxZAb
Meta-Learning Transferable Active Learning Policies by Deep Reinforcement Learning
Active learning (AL) aims to enable training high performance classifiers with low annotation cost by predicting which subset of unlabelled instances would be most beneficial to label. The importance of AL has motivated extensive research, proposing a wide variety of manually designed AL algorithms with diverse theoretical and intuitive motivations. In contrast to this body of research, we propose to treat active learning algorithm design as a meta-learning problem and learn the best criterion from data. We model an active learning algorithm as a deep neural network that inputs the base learner state and the unlabelled point set and predicts the best point to annotate next. Training this active query policy network with reinforcement learning, produces the best non-myopic policy for a given dataset. The key challenge in achieving a general solution to AL then becomes that of learner generalisation, particularly across heterogeneous datasets. We propose a multi-task dataset-embedding approach that allows dataset-agnostic active learners to be trained. Our evaluation shows that AL algorithms trained in this way can directly generalize across diverse problems.
Reject
/pdf/667b2dc6585b9e6f09bfc409b5558469556484ba.pdf
ICLR.cc/2025/Conference
nf4v09zw6O
Self-Supervised Learning of Intertwined Content and Positional Features for Object Detection
We present a novel self-supervised feature learning method using Vision Transformers (ViT) as the backbone, specifically designed for object detection and instance segmentation. Our approach addresses the challenge of extracting features that capture both class and positional information, which are crucial for these tasks. The method introduces two key components: (1) a positional encoding tied to the cropping process in contrastive learning, which utilizes a novel vector field representation for positional embeddings; and (2) masking and prediction, similar to conventional Masked Image Modeling (MIM), applied in parallel to both content and positional embeddings of image patches. These components enable the effective learning of intertwined content and positional features. We evaluate our method against state-of-the-art approaches, pre-training on ImageNet-1K and fine-tuning on downstream tasks. Our method outperforms the state-of-the-art SSL methods on the COCO object detection benchmark, achieving significant improvements with fewer pre-training epochs. These results suggest that better integration of positional information into self-supervised learning can improve performance on dense prediction tasks.
Rejected_Submission
/pdf/e008a56e1a87fbb4899328051a4f12b767d804ef.pdf
ICLR.cc/2025/Conference
ZJj1r4gWIy
Counterfactual Delayed Feedback Learning
Estimation of heterogeneous treatment effects has gathered much attention in recent years and has been widely adopted in medicine, economics, and marketing. Previous studies assumed that one of the potential outcomes of interest could be observed timely and accurately. However, a more practical scenario is that treatment takes time to produce causal effects on the outcomes. For example, drugs take time to produce medical utility for patients and users take time to purchase items after being recommended, and ignoring such delays in feedback can lead to biased estimates of heterogeneous treatment effects. To address the above problem, we study the impact of observation time on estimating heterogeneous treatment effects by further considering the potential response time that potential outcomes have. We theoretically prove the identifiability results and further propose a principled learning approach, known as CFR-DF (Counterfactual Regression with Delayed Feedback), to simultaneously learn potential response times and potential outcomes of interest. Results on both simulated and real-world datasets demonstrate the effectiveness of our method.
Rejected_Submission
/pdf/995ec360cba08391069258103fa4bb0eb8bb0f60.pdf
ICLR.cc/2018/Conference
HJ5AUm-CZ
The Variational Homoencoder: Learning to Infer High-Capacity Generative Models from Few Examples
Hierarchical Bayesian methods have the potential to unify many related tasks (e.g. k-shot classification, conditional, and unconditional generation) by framing each as inference within a single generative model. We show that existing approaches for learning such models can fail on expressive generative networks such as PixelCNNs, by describing the global distribution with little reliance on latent variables. To address this, we develop a modification of the Variational Autoencoder in which encoded observations are decoded to new elements from the same class; the result, which we call a Variational Homoencoder (VHE), may be understood as training a hierarchical latent variable model which better utilises latent variables in these cases. Using this framework enables us to train a hierarchical PixelCNN for the Omniglot dataset, outperforming all existing models on test set likelihood. With a single model we achieve both strong one-shot generation and near human-level classification, competitive with state-of-the-art discriminative classifiers. The VHE objective extends naturally to richer dataset structures such as factorial or hierarchical categories, as we illustrate by training models to separate character content from simple variations in drawing style, and to generalise the style of an alphabet to new characters.
Reject
/pdf/9d1763e5a3c2061239bfa1612bbd2869f2b7350a.pdf
ICLR.cc/2025/Conference
7tOc6h8bea
Adaptive Inference-Time Compute: LLMs Can Predict if They Can Do Better, Even Mid-Generation
Inference-time computation is a powerful paradigm to enhance the performance of large language models (LLMs), with Best-of-N sampling being a widely used technique. However, this method is computationally expensive, requiring both (1) an external reward model and (2) the generation of multiple samples. In this work, we introduce a new generative self-evaluation scheme designed to adaptively reduce the number of generated samples while maintaining or even improving performance. We use a generative reward model formulation, allowing the LLM to predict mid-generation the probability that restarting the generation will yield a better response. These predictions are obtained without an external reward model and can be used to decide whether to generate more samples, prune unpromising samples early on, or pick the best sample. This capability is very inexpensive as it involves generating a single predefined token. Trained using a dataset constructed with real unfiltered LMSYS user prompts, Llama 3.1 8B's win rate against GPT-4 on AlpacaEval increases from 21\% to 34\% with 16 samples and math performance on GSM8K improves from 84\% to 91\%. By sampling only when the LLM determines that it is beneficial to do so and adaptively adjusting temperature annealing, we demonstrate that 74\% of the improvement from using 16 samples can be achieved with only 1.2 samples on average. We further demonstrate that 50–75\% of samples can be pruned early in generation with minimal degradation in performance. Overall, our methods enable more efficient and scalable compute utilization during inference for LLMs.
Rejected_Submission
/pdf/cc4b5393c86977d81468f182bf0787cfd4f051cd.pdf
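A minimal sketch of the adaptive sampling loop suggested by the abstract above: after each candidate, a self-evaluation score (the predicted probability that regenerating would do better) decides whether to keep sampling. `generate` and `self_eval` are placeholder callables rather than a real model API, and the stopping rule is an assumption made for illustration.

```python
import random

def adaptive_best_of_n(generate, self_eval, max_samples=16, restart_threshold=0.3):
    """Keep sampling only while the model predicts a restart is likely to improve.

    generate()          -> returns a candidate response (placeholder callable)
    self_eval(response) -> predicted probability that regenerating would be better
    """
    best, best_score = None, -1.0
    for _ in range(max_samples):
        response = generate()
        p_restart_better = self_eval(response)
        score = 1.0 - p_restart_better               # treat it as a quality proxy
        if score > best_score:
            best, best_score = response, score
        if p_restart_better < restart_threshold:     # model thinks it cannot do better
            break
    return best

# Toy stand-ins for illustration only.
drafts = ["draft A", "draft B", "draft C"]
print(adaptive_best_of_n(lambda: random.choice(drafts), lambda r: random.random()))
```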
ICLR.cc/2025/Conference
i8ynYkfoRg
Model Entanglement for solving Privacy Preserving in Federated Learning
Federated learning (FL) is widely adopted as a secure and reliable distributed machine learning system because it allows participants to retain their training data locally, transmitting only model updates, such as gradients or parameters. However, the transmission process to the server can still lead to privacy leakage, as the updated information may be exploited to launch various privacy attacks. In this work, we present a key observation that the middle layer outputs, referred to as data representations, can exhibit independence in value distribution across different types of data. This enables us to capture the intrinsic relationship between data representations and private data, and inspires us to propose a Model Entanglement (ME) strategy aimed at enhancing privacy preservation by obfuscating the data representations of private models in a fine-grained manner, while improving the balance between privacy preservation and model accuracy. We compare our approach to the baseline FedAvg and two state-of-the-art defense methods. Our method demonstrates strong defense capabilities against mainstream privacy attacks, reducing global model accuracy by less than 0.7\% and training efficiency by only 6.8\% on the widely used dataset, excelling in both accuracy and privacy preservation.
Rejected_Submission
/pdf/b366952e91487d1a17a079f9795dca7d921676f8.pdf
ICLR.cc/2025/Conference
MF7ljU8xcf
Compute-Optimal LLMs Provably Generalize Better with Scale
Why do larger language models generalize better? To explore this question, we develop generalization bounds on the pretraining objective of large language models (LLMs) in the compute-optimal regime, as described by the Chinchilla scaling laws. We introduce a novel, fully empirical Freedman-type martingale concentration inequality that tightens existing bounds by accounting for the variance of the loss function. The generalization bound can be broken into three contributions: the number of parameters per token, the loss variance, and the quantization error at a fixed bitrate. As language models are scaled up, the number of parameters per data point stays constant; however, both the loss variance and the quantization error decrease, implying that larger models should have \emph{smaller} generalization gaps. We examine why larger models tend to be more quantizable from an information theoretic perspective, showing that the rate at which they can integrate new information grows slower than their capacity on the compute optimal frontier. From these findings we produce a scaling law for the generalization gap, showing that our bounds decrease in a predictable way.
ICLR 2025 Poster
/pdf/e35f09f288013a64bdf237be6a81cf0891307411.pdf
ICLR.cc/2025/Conference
IHRQif8VQC
Ensemble everything everywhere: Multi-scale aggregation for adversarial robustness
Adversarial examples pose a significant challenge to the robustness, reliability and alignment of deep neural networks. We propose a novel, easy-to-use approach to achieving high-quality representations that lead to adversarial robustness through the use of multi-resolution input representations and dynamic self-ensembling of intermediate layer predictions. We demonstrate that intermediate layer predictions exhibit inherent robustness to adversarial attacks crafted to fool the full classifier, and propose a robust aggregation mechanism based on Vickrey auction that we call \textit{CrossMax} to dynamically ensemble them. By combining multi-resolution inputs and robust ensembling, we achieve significant adversarial robustness on CIFAR-10 and CIFAR-100 datasets without any adversarial training or extra data, reaching an adversarial accuracy of ≈72% (CIFAR-10) and ≈48% (CIFAR-100) on the RobustBench AutoAttack suite (L∞=8/255) with a finetuned ImageNet-pretrained ResNet152. This represents a result comparable with the top three models on CIFAR-10 and a +5% gain compared to the best current dedicated approach on CIFAR-100. Adding simple adversarial training on top, we get ≈78% on CIFAR-10 and ≈51% on CIFAR-100, improving SOTA by 5% and 9%, respectively, and seeing greater gains on the harder dataset. We validate our approach through extensive experiments and provide insights into the interplay between adversarial robustness and the hierarchical nature of deep representations. We show that simple gradient-based attacks against our model lead to human-interpretable images of the target classes as well as interpretable image changes. As a byproduct, using our multi-resolution prior, we turn pre-trained classifiers and CLIP models into controllable image generators and develop successful transferable attacks on large vision language models.
Rejected_Submission
/pdf/a5b91ece9f183034a82b94ed2da5ac97c55bc0d4.pdf
ICLR.cc/2024/Conference
Zz61cEY84L
Meta-Learning Strategies through Value Maximization in Neural Networks
Biological and artificial learning agents face numerous choices about how to learn, ranging from hyperparameter selection to aspects of task distributions like curricula. Understanding how to make these `meta-learning’ choices could improve engineered systems and offer normative accounts of cognitive control functions in biological learners. Yet optimal strategies remain challenging to compute in modern deep networks due to the complexity of optimizing through the entire learning process. Here we theoretically investigate optimal strategies in a tractable setting. We present a learning effort framework capable of efficiently optimizing control signals on a fully normative objective: discounted cumulative performance throughout learning. We obtain computational tractability by using average dynamical equations for gradient descent, available for simple neural network architectures. Our framework accommodates a range of meta-learning and automatic curriculum learning methods in a unified normative setting. We apply this framework to investigate the effect of approximations in common meta-learning algorithms; infer aspects of optimal curricula; and compute optimal neuronal resource allocation in a continual learning setting. Across settings, we find that control effort is most beneficial when applied to easier aspects of a task early in learning; followed by sustained effort on harder aspects. Overall, the learning effort framework provides a tractable theoretical test bed to study normative benefits of interventions in a variety of learning systems, as well as a formal account of optimal cognitive control strategies over learning trajectories posited by established theories in cognitive neuroscience.
Rejected_Submission
/pdf/a7821487e738be94bde39e8996db672862a4df07.pdf
ICLR.cc/2025/Conference
WVLBWiKxjM
Deep Learning for Micro-Scale Crack Detection on Imbalanced Datasets Using Key Point Localization
Internal crack detection has been a subject of focus in structural health monitoring. By focusing on crack detection in structural datasets, it is demonstrated that deep learning (DL) methods can effectively analyse seismic wave fields interacting with micro-scale cracks, which are beyond the resolution of conventional visual inspection. This work explores a novel application of a DL-based key point detection technique, where cracks are localized by predicting the coordinates of four key points that define a bounding region of the crack. The study not only opens new research directions for non-visual applications but also effectively mitigates the impact of imbalanced data, which poses a challenge for previous DL models as they can be biased toward predicting the majority class (non-crack regions). Popular DL techniques, such as Inception blocks, are used and investigated. The model shows an overall reduction in loss when applied to micro-scale crack detection, which is reflected in the lower average deviation between the locations of actual and predicted cracks, with an average IoU of 0.511 for all micro cracks (> 0.00 µm) and 0.631 for larger micro cracks (> 4 µm).
Rejected_Submission
/pdf/be965f9fefb76574f2495e9971d88c94806d0a8b.pdf
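A small sketch of the evaluation idea implied by the abstract above: four predicted key points define a bounding region, and prediction quality is measured by intersection-over-union against the ground-truth region. The axis-aligned box construction and the coordinates below are assumptions made for illustration.

```python
def box_from_keypoints(points):
    """Axis-aligned bounding region spanned by four (x, y) key points."""
    xs, ys = zip(*points)
    return min(xs), min(ys), max(xs), max(ys)

def iou(box_a, box_b):
    """Intersection-over-union of two (x_min, y_min, x_max, y_max) boxes."""
    ix0, iy0 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix1, iy1 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_a) + area(box_b) - inter
    return inter / union if union > 0 else 0.0

# Hypothetical predicted vs. ground-truth key points for one micro crack.
pred = [(1.0, 1.0), (4.0, 1.2), (4.1, 3.0), (1.1, 2.9)]
true = [(1.2, 1.1), (4.3, 1.0), (4.2, 3.1), (1.0, 3.0)]
print(iou(box_from_keypoints(pred), box_from_keypoints(true)))
```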
ICLR.cc/2025/Conference
7ZToWPWUlO
Solving Normalized Cut Problem with Constrained Action Space
We address the problem of Normalized Cut (NC) in weighted graphs where the shape of the partitions follows an a priori pattern, namely they must approximately be shaped like rings and wedges on a planar graph. Classical methods like spectral clustering and METIS do not have a provision to specify such constraints, and neither do newer methods that combine GNNs and Reinforcement Learning, as they are based on initialization from classical methods. The key insight that underpins our approach, Wedge and Ring Transformers (WRT), is based on representing a graph using polar coordinates and then using a multi-head transformer with a PPO objective to optimize the non-differentiable NC objective. To the best of our knowledge, WRT is the first method to explicitly constrain the shape of NC and opens up the possibility of providing a principled approach for fine-grained shape-controlled generation of graph partitions. On the theoretical front we provide new Cheeger inequalities that connect the spectral properties of a graph with algebraic properties that capture the shape of the partitions. Comparisons with adaptations of strong baselines attest to the strength of WRT.
Rejected_Submission
/pdf/69d221b8199c4911cdf2eb29793d907048857a86.pdf
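A toy sketch of the polar-coordinate representation mentioned in the WRT abstract above: planar node coordinates are converted to (r, theta), and a wedge-shaped partition is obtained by binning nodes by angle. This is only a geometric illustration; WRT's transformer policy and PPO training are not reproduced.

```python
import math

def to_polar(coords):
    """Polar representation (r, theta) of planar node coordinates."""
    return [(math.hypot(x, y), math.atan2(y, x)) for x, y in coords]

def wedge_partition(coords, n_wedges):
    """Assign each node to one of n_wedges angular sectors (a toy wedge-shaped cut)."""
    labels = []
    for r, theta in to_polar(coords):
        sector = int(((theta + math.pi) / (2 * math.pi)) * n_wedges) % n_wedges
        labels.append(sector)
    return labels

# Hypothetical planar node positions.
nodes = [(1.0, 0.1), (-0.5, 0.9), (-0.8, -0.7), (0.3, -1.2)]
print(wedge_partition(nodes, n_wedges=4))
```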
ICLR.cc/2025/Conference
ig2wk7kK9J
SafeDiffuser: Safe Planning with Diffusion Probabilistic Models
Diffusion models have shown promise in data-driven planning. While these planners are commonly employed in applications where decisions are critical, they still lack established safety guarantees. In this paper, we address this limitation by introducing SafeDiffuser, a method to equip diffusion models with safety guarantees via control barrier functions. The key idea of our approach is to embed finite-time diffusion invariance, i.e., a form of specification consisting of safety constraints, into the denoising diffusion procedure. This way we enable data generation under safety constraints. We show that SafeDiffusers maintain the generative performance of diffusion models while also providing robustness in safe data generation. We evaluate our method on a series of tasks, including maze path generation, legged robot locomotion, and 3D space manipulation, and demonstrate the advantages of robustness over vanilla diffusion models.
ICLR 2025 Poster
/pdf/97b1bd5e50999f2d4d388a86da1b1b72e9d6e545.pdf
ICLR.cc/2025/Conference
DF5TVzpTW0
Detecting and Perturbing Privacy-Sensitive Neurons to Defend Embedding Inversion Attacks
This paper introduces Defense through Perturbing Privacy Neurons (DPPN), a novel approach to protect text embeddings against inversion attacks. Unlike existing methods that add noise to all embedding dimensions for general protection, DPPN identifies and perturbs only a small portion of privacy-sensitive neurons. We present a differentiable neuron mask learning framework to detect these neurons and a neuron-suppressing perturbation function for targeted noise injection. Experiments across six datasets show DPPN achieves superior privacy-utility trade-offs. Compared to baseline methods, DPPN reduces privacy leakage by an additional 5-78% while improving downstream task performance by 14-40%. Tests on real-world sensitive datasets demonstrate DPPN’s effectiveness in mitigating sensitive information leakage to 17%, while baseline methods reduce it only to 43%.
Rejected_Submission
/pdf/594c6e54fba519d898acc5d52501cf5c787b4907.pdf
ICLR.cc/2024/Conference
nji0ztL5rP
Best Arm Identification for Stochastic Rising Bandits
Stochastic Rising Bandits (SRBs) model sequential decision-making problems in which the expected reward of the available options increases every time they are selected. This setting captures a wide range of scenarios in which the available options are learning entities whose performance improves (in expectation) over time. While previous works addressed the regret minimization problem, this paper focuses on the fixed-budget Best Arm Identification (BAI) problem for SRBs. In this scenario, given a fixed budget of rounds, we are asked to provide a recommendation about the best option at the end of the identification process. We propose two algorithms to tackle the above-mentioned setting, namely R-UCBE, which resorts to a UCB-like approach, and R-SR, which employs a successive reject procedure. Then, we prove that, with a sufficiently large budget, they provide guarantees on the probability of properly identifying the optimal option at the end of the learning process. Furthermore, we derive a lower bound on the error probability, matched by our R-SR (up to constant factors), and illustrate how the need for a sufficiently large budget is unavoidable in the SRB setting. Finally, we numerically validate the proposed algorithms in synthetic and real-world environments and compare them with the currently available BAI strategies.
Rejected_Submission
/pdf/d5c9d7853abd79b405db490c048b3d54c9d7260b.pdf
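For context on the successive-reject procedure that R-SR builds on, here is a sketch of the classical fixed-budget Successive Rejects algorithm for stationary bandits; the rising-reward adaptation analyzed in the abstract above is not reproduced.

```python
import math
import random

def successive_rejects(pull, n_arms, budget):
    """Classical fixed-budget Successive Rejects for best arm identification.

    pull(i) -> one stochastic reward from arm i. Returns the surviving arm index.
    """
    log_bar = 0.5 + sum(1.0 / i for i in range(2, n_arms + 1))
    active = list(range(n_arms))
    counts, sums = [0] * n_arms, [0.0] * n_arms
    n_prev = 0
    for k in range(1, n_arms):                                   # K-1 elimination phases
        n_k = math.ceil((budget - n_arms) / (log_bar * (n_arms + 1 - k)))
        for arm in active:
            for _ in range(n_k - n_prev):                        # top up each survivor
                sums[arm] += pull(arm)
                counts[arm] += 1
        n_prev = n_k
        active.remove(min(active, key=lambda a: sums[a] / counts[a]))  # drop the worst
    return active[0]

# Toy stationary Gaussian arms for illustration.
means = [0.3, 0.5, 0.7]
print(successive_rejects(lambda i: random.gauss(means[i], 0.1), 3, budget=300))
```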
ICLR.cc/2024/Conference
6pPYRXKPpw
Towards Diverse Behaviors: A Benchmark for Imitation Learning with Human Demonstrations
Imitation learning with human data has demonstrated remarkable success in teaching robots a wide range of skills. However, the inherent diversity in human behavior leads to the emergence of multi-modal data distributions, thereby presenting a formidable challenge for existing imitation learning algorithms. Quantifying a model's capacity to capture and replicate this diversity effectively is still an open problem. In this work, we introduce simulation benchmark environments and the corresponding *Datasets with Diverse human Demonstrations for Imitation Learning (D3IL)*, designed explicitly to evaluate a model's ability to learn multi-modal behavior. Our environments are designed to involve multiple sub-tasks that need to be solved, to consider the manipulation of multiple objects, which increases the diversity of the behavior, and to be solvable only by policies that rely on closed-loop sensory feedback. Other available datasets are missing at least one of these challenging properties. To address the challenge of diversity quantification, we introduce tractable metrics that provide valuable insights into a model's ability to acquire and reproduce diverse behaviors. These metrics offer a practical means to assess the robustness and versatility of imitation learning algorithms. Furthermore, we conduct a thorough evaluation of state-of-the-art methods on the proposed task suite. This evaluation serves as a benchmark for assessing their capability to learn diverse behaviors. Our findings shed light on the effectiveness of these methods in tackling the intricate problem of capturing and generalizing multi-modal human behaviors, offering a valuable reference for the design of future imitation learning algorithms.
ICLR 2024 poster
/pdf/2f3fdf0514ecf5071b833dfeb74b36176a0de0db.pdf
ICLR.cc/2025/Conference
Y8Kwl7GFAd
RNA FrameFlow: Flow Matching for de novo 3D RNA Backbone Generation
We introduce RNA-FrameFlow, the first generative model for de novo 3D RNA backbone design. We build upon $SE(3)$ flow matching for protein backbone generation and establish protocols for data preparation and evaluation to address unique challenges posed by RNA modeling. We formulate RNA structures as a set of rigid-body frames and associated loss functions which account for larger, more conformationally flexible RNA backbones (13 atoms per nucleotide) vs. proteins (4 atoms per residue). Toward tackling the lack of diversity in 3D RNA datasets, we explore training with structural clustering and cropping augmentations. Additionally, we define a suite of evaluation metrics to measure whether the generated RNA structures are globally self-consistent (via inverse folding followed by forward folding) and locally recover RNA-specific structural descriptors. The most performant version of RNA-FrameFlow generates locally realistic RNA backbones of 40-150 nucleotides, over 40\% of which pass our validity criteria as measured by a self-consistency TM-score $\geq0.45$, at which two RNAs have the same global fold.
Rejected_Submission
/pdf/28e37f632b8c77a58077355b1901a03198d260fc.pdf
ICLR.cc/2025/Conference
HN0CYZbAPw
Efficient Online Reinforcement Learning Fine-Tuning Need Not Retain Offline Data
The modern paradigm in machine learning involves pre-training on diverse data, followed by task-specific fine-tuning. In reinforcement learning (RL), this translates to learning via offline RL on a diverse historical dataset, followed by rapid online RL fine-tuning using interaction data. Most RL fine-tuning methods require continued training on offline data for stability and performance. However, this is undesirable because training on diverse offline data is slow and expensive for large datasets, and should, in principle, also limit the performance improvement possible because of constraints or pessimism on offline data. In this paper, we show that retaining offline data is unnecessary as long as we use a properly-designed online RL approach for fine-tuning offline RL initializations. To build this approach, we start by analyzing the role of retaining offline data in online fine-tuning. We find that continued training on offline data is mostly useful for preventing a sudden divergence in the value function at the onset of fine-tuning, caused by a distribution mismatch between the offline data and online rollouts. This divergence typically results in unlearning and forgetting the benefits of offline pre-training. Our approach, Warm-start RL (WSRL), mitigates the catastrophic forgetting of pre-trained initializations using a very simple idea. WSRL employs a warmup phase that seeds the online RL run with a very small number of rollouts from the pre-trained policy to do fast online RL. The data collected during warmup bridges the distribution mismatch, and helps ``recalibrate'' the offline Q-function to the online distribution, allowing us to completely discard offline data without destabilizing the online RL fine-tuning. We show that WSRL is able to fine-tune without retaining any offline data, and is able to learn faster and attains higher performance than existing algorithms irrespective of whether they do or do not retain offline data.
ICLR 2025 Poster
/pdf/02ea87bd0277223354a8716440d0a30c06a5b7dd.pdf
ICLR.cc/2025/Conference
wfLuiDjQ0u
Making Text Embedders Few-Shot Learners
Large language models (LLMs) with decoder-only architectures have demonstrated exceptional text-generation capabilities across a variety of tasks. Some researchers have also adapted these models for text representation tasks. However, in text representation tasks, these models often face performance degradation on unseen tasks. In-context learning (ICL), which leverages examples provided in the input context, enables LLMs to handle unseen tasks effectively. Inspired by this, we aim to fully utilize the inherent properties of LLMs to enhance text representation performance across different tasks through the ICL approach. In this paper, we introduce a simple yet effective training strategy, which significantly improves text representation capabilities. Unlike previous models that prepend task instructions to the text, our method randomly samples a varying number of examples during training, endowing the embedding model with in-context learning abilities while maintaining its zero-shot capabilities. This approach does not require additional data construction or modifications to the model architecture. On the contrary, we find that some popular modifications to the model, such as bidirectional attention, can degrade performance, undermining the inherent characteristics of LLMs. We have publicly released our method at this \href{https://github.com/FlagOpen/FlagEmbedding}{repo}.
ICLR 2025 Poster
/pdf/e5da7c368734ad33d198e1c8c7ad11d86fdc531d.pdf
ICLR.cc/2024/Conference
jjA4O1vJRz
LLM Augmented LLMs: Expanding Capabilities through Composition
Foundational models with billions of parameters which have been trained on a large corpus of data have demonstrated non-trivial skills in a variety of domains. However, due to their monolithic structure, it is challenging and expensive to augment them or impart new skills. On the other hand, due to their adaptation abilities, several new instances of these models are being trained towards new domains and tasks. In this work, we study the problem of efficient and practical composition of existing foundation models with more specific models to enable newer capabilities. To this end, we propose CALM—Composition to Augment Language Models—which introduces cross-attention between models to compose their representations and enable new capabilities. Salient features of CALM are: (i) Scales up LLMs on new tasks by ‘re-using’ existing LLMs along with a few additional parameters and data, (ii) Existing model weights are kept intact, and hence preserves existing capabilities, and (iii) Applies to diverse domains and settings. We illustrate that augmenting PaLM2-S with a smaller model trained on low-resource languages results in an absolute improvement of up to 13% on tasks like translation into English and arithmetic reasoning for low-resource languages. Similarly, when PaLM2-S is augmented with a code-specific model, we see a relative improvement of 40% over the base model for code generation and explanation tasks—on-par with fully fine-tuned counterparts.
ICLR 2024 poster
/pdf/d0316195c2919ffce4f1eddace8ffdb6fdc491cc.pdf
ICLR.cc/2024/Conference
TdyfmCM8iR
Latent Concept-based Explanation of NLP Models
Interpreting and understanding the predictions made by deep learning models poses a formidable challenge due to their inherently opaque nature. Many previous efforts aimed at explaining these predictions rely on input features—specifically, the words within NLP models. However, such explanations are often less informative due to the discrete nature of these words and their lack of contextual verbosity. To address this limitation, we introduce the Latent Concept Attribution method (LACOAT), which generates explanations for predictions based on latent concepts. Our founding intuition is that a word can exhibit multiple facets, contingent upon the context in which it is used. Therefore, given a word in context, the latent space derived from our training process reflects a specific facet of that word. LACOAT functions by mapping the representations of salient input words into the training latent space, allowing it to provide predictions with context-based explanations within this latent space. We will make the code of LACOAT available to the research community.
Rejected_Submission
/pdf/87a47ba97225ff994cc474466de102e85d093531.pdf
ICLR.cc/2024/Conference
Aemqy6Hjdj
Enhancing Compositional Generalization via Compositional Feature Alignment
Real-world applications of machine learning (ML) models often confront data distribution shifts, wherein discrepancies exist between the training and test data distributions. In the common multi-domain multi-class setup, as the number of classes and domains scales up, it becomes infeasible to gather training data for every domain-class combination. This challenge naturally leads to the quest for models with Compositional Generalization (CG) ability, where models can generalize to unseen domain-class combinations. To delve into the CG challenge, we develop CG-Bench, a suite of CG benchmarks derived from existing real-world image datasets, and observe that the prevalent pretraining-finetuning paradigm on foundational models, such as CLIP and DINOv2, struggles with the challenge. To address this challenge, we propose Compositional Feature Alignment (CFA), a simple two-stage finetuning technique that i) learns two orthogonal linear heads on a pretrained encoder with respect to class and domain labels, and ii) fine-tunes the encoder with the newly learned heads frozen. We theoretically and empirically justify that CFA encourages compositional feature learning of pretrained models. We further conduct extensive experiments on CG-Bench for CLIP and DINOv2, two powerful pretrained vision foundation models. Experiment results show that CFA outperforms common finetuning techniques in compositional generalization, corroborating CFA's efficacy in compositional feature learning.
Rejected_Submission
/pdf/888e029dc35dd718770b33280ecde20a1f6d44af.pdf
ICLR.cc/2025/Conference
7BDUTI6aS7
Risk Quadrangle and Robust Optimization Based on $\varphi$-Divergence
The Fundamental Risk Quadrangle (FRQ) is a unified framework linking risk management, statistical estimation, and optimization. Distributionally robust optimization (DRO) based on $\varphi$-divergence minimizes the maximal expected loss, where the maximum is over a $\varphi$-divergence uncertainty set. This paper introduces the \emph{extended} $\varphi$-divergence and the extended $\varphi$-divergence quadrangle, which integrates DRO into the FRQ framework. We derive the primal and dual representations of the quadrangle elements (risk, deviation, regret, error, and statistic). The dual representation provides an interpretation of classification, portfolio optimization, and regression as robust optimization based on the extended $\varphi$-divergence. The primal representation offers tractable formulations of these robust optimizations as convex optimization. We provide illustrative examples showing that many common problems, such as least-squares regression, quantile regression, support vector machines, and CVaR optimization, fall within this framework. Additionally, we conduct a case study to visualize the optimal solution of the inner maximization in robust optimization.
Rejected_Submission
/pdf/e737cefd67f933b6cf6cfa8315022f310dab8709.pdf
ICLR.cc/2025/Conference
BgcapX9ers
Hierarchical Object-Oriented POMDP Planning for Object Rearrangement
We present an online planning framework for solving multi-object rearrangement problems in partially observable, multi-room environments. Current object rearrangement solutions, primarily based on Reinforcement Learning or hand-coded planning methods, often lack adaptability to diverse challenges. To address this limitation, we introduce a novel Hierarchical Object-Oriented Partially Observable Markov Decision Process (HOO-POMDP) planning approach. This approach comprises (a) an object-oriented POMDP planner generating sub-goals, (b) a set of low-level policies for sub-goal achievement, and (c) an abstraction system converting the continuous low-level world into a representation suitable for abstract planning. We evaluate our system on varying numbers of objects, rooms, and problem types in AI2-THOR simulated environments with promising results.
Rejected_Submission
/pdf/af02671da770ad984f190ed65d5131aa1e6b3a7e.pdf
ICLR.cc/2025/Conference
8VXWQmNrca
Conformal Bounds on Full-Reference Image Quality for Imaging Inverse Problems
In imaging inverse problems, we would like to know how close the recovered image is to the true image in terms of full-reference image quality (FRIQ) metrics like PSNR, SSIM, LPIPS, etc. This is especially important in safety-critical applications like medical imaging, where knowing that, say, the SSIM was poor could potentially avoid a costly misdiagnosis. But since we don't know the true image, computing FRIQ is non-trivial. In this work, we combine conformal prediction with approximate posterior sampling to construct bounds on FRIQ that are guaranteed to hold up to a user-specified error probability. We demonstrate our approach on image denoising and accelerated magnetic resonance imaging (MRI) problems.
Rejected_Submission
/pdf/4a7f5ba3288586099b7522b863318b4325f5a742.pdf
ICLR.cc/2024/Conference
ViPtjIVzUw
T-MARS: Improving Visual Representations by Circumventing Text Feature Learning
Large web-crawled multimodal datasets have powered a slew of new methods for learning general-purpose visual representations, advancing the state of the art in computer vision and revolutionizing zero- and few-shot recognition. One crucial decision facing practitioners is how, if at all, to curate these ever-larger datasets. For example, the creators of the LAION-5B dataset chose to retain only image-caption pairs whose CLIP similarity score exceeded a designated threshold. In this paper, we propose a new state-of-the-art data filtering approach motivated by our observation that nearly $40\%$ of LAION's images contain text that overlaps significantly with the caption. Intuitively, such data could be wasteful as it incentivizes models to perform optical character recognition rather than learning visual features. However, naively removing all such data could also be wasteful, as it throws away images that contain visual features (in addition to overlapping text). Our simple and scalable approach, T-MARS (Text Masking and Re-Scoring), filters out only those pairs where the text dominates the remaining visual features---by first masking out the text and then filtering out those with a low CLIP similarity score of the masked image with original captions. Experimentally, T-MARS is the top-ranked approach on ImageNet at the ``medium scale'' of DataComp (a data filtering benchmark), and outperforms CLIP filtering by a margin of $6.5\%$ on ImageNet and $4.7\%$ on VTAB. Additionally, we show that the accuracy gains enjoyed by T-MARS increase linearly as data and compute are scaled exponentially.
ICLR 2024 poster
/pdf/005e4cf730479d9444395283ce9cdc93c46738de.pdf
ICLR.cc/2024/Conference
vVoWRFV5Y4
Solving the Quadratic Assignment Problem With Deep Reinforcement Learning
The Quadratic Assignment Problem (QAP) is an NP-hard problem which has proven particularly challenging to solve: unlike other combinatorial problems like the traveling salesman problem (TSP), which can be solved to optimality for instances with hundreds or even thousands of locations using advanced integer programming techniques, no methods are known to exactly solve QAP instances of size greater than 30. Solving the QAP is nevertheless important because of its many critical applications, such as electronic wiring design and facility layout selection. We propose a method to solve the original Koopmans-Beckman formulation of the QAP using deep reinforcement learning. Our approach relies on a novel double pointer network, which alternates between selecting a location in which to place the next facility and a facility to place in the previous location. We train our model using A2C on a large dataset of synthetic instances, producing solutions with no instance-specific retraining necessary. Out of sample, our solutions are on average within 7.5% of a high-quality local search baseline, and even outperform it on 1.2% of instances.
Rejected_Submission
/pdf/67b1c0ecd0c0fa3703012881e766ec68adad52c2.pdf
ICLR.cc/2025/Conference
e2NRNQ0sZe
Efficient Reinforcement Learning with Large Language Model Priors
In sequential decision-making (SDM) tasks, methods like reinforcement learning (RL) and heuristic search have made notable advances in specific cases. However, they often require extensive exploration and face challenges in generalizing across diverse environments due to their limited grasp of the underlying decision dynamics. In contrast, large language models (LLMs) have recently emerged as powerful general-purpose tools, due to their capacity to maintain vast amounts of domain-specific knowledge. To harness this rich prior knowledge for efficiently solving complex SDM tasks, we propose treating LLMs as prior action distributions and integrating them into RL frameworks through Bayesian inference methods, making use of variational inference and direct posterior sampling. The proposed approaches facilitate the seamless incorporation of fixed LLM priors into both policy-based and value-based RL frameworks. Our experiments show that incorporating LLM-based action priors significantly reduces exploration and optimization complexity, substantially improving sample efficiency compared to traditional RL techniques, e.g., using LLM priors decreases the number of required samples by over 90\% in offline learning scenarios.
ICLR 2025 Poster
/pdf/8351c758f6bc0579d30528bda3f2d96250aa1d7d.pdf
ICLR.cc/2025/Conference
pOq9vDIYev
Diverse Preference Learning for Capabilities and Alignment
As LLMs increasingly impact society, their ability to represent diverse perspectives is critical. However, recent studies reveal that alignment algorithms such as RLHF and DPO significantly reduce the diversity of LLM outputs. Not only do aligned LLMs generate text with repetitive structure and word choice, they also approach problems in more uniform ways, and their responses reflect a narrower range of societal perspectives. We attribute this problem to the KL divergence regularizer employed in preference learning algorithms. This causes the model to overweight majority opinions and sacrifice diversity in exchange for optimal reward. To address this, we propose Soft Preference Learning, which decouples the entropy and cross-entropy terms in the KL penalty — allowing for fine-grained control over LLM generation diversity. From a capabilities perspective, LLMs trained using Soft Preference Learning attain higher accuracy on difficult repeated sampling tasks and produce outputs with greater semantic and lexical diversity. From an alignment perspective, they are capable of representing a wider range of societal viewpoints and display improved logit calibration. Notably, Soft Preference Learning resembles, but is a Pareto improvement over, standard temperature scaling.
ICLR 2025 Poster
/pdf/2c1b1d62f868031970a42629ecefb76a107db304.pdf
ICLR.cc/2025/Conference
PDnM7mSO7M
Large Language Model Evaluation via Matrix Nuclear-Norm
As large language models (LLMs) continue to evolve, efficient evaluation metrics are vital for assessing their ability to compress information and reduce redundancy. While traditional metrics like Matrix Entropy offer valuable insights, they are computationally intensive for large-scale models due to their $O(n^3)$ time complexity with Singular Value Decomposition (SVD). To mitigate this issue, we introduce the Matrix Nuclear-Norm, which not only serves as a metric to quantify the data compression proficiency of LLMs but also provides a convex approximation of matrix rank to capture both predictive discriminability and diversity. By employing the $L_{1,2}$-norm to further approximate the nuclear norm, we can effectively assess the model's information compression capabilities. This approach reduces the time complexity to $O(n^2)$ and eliminates the need for SVD computation. Consequently, the Matrix Nuclear-Norm achieves speeds 8 to 24 times faster than Matrix Entropy for the CEREBRAS-GPT model as sizes increase from 111M to 6.7B. This performance gap becomes more pronounced with larger models, as validated in tests with other models like Pythia. Additionally, evaluations on benchmarks and model responses confirm that our proposed Matrix Nuclear-Norm is a reliable, scalable, and efficient tool for assessing LLMs' performance, striking a balance between accuracy and computational efficiency.
Rejected_Submission
/pdf/3ea23b6db3ce4b1ffe8170a9a716ca8dd866ed15.pdf
ICLR.cc/2024/Conference
2dLMPOY0HW
When Do MLPs Excel in Node Classification? An Information-Theoretic Perspective
Recent research has shed light on the competitiveness of MLP-structured methods in node-level tasks. Nevertheless, there remains a gap in our understanding regarding why MLPs perform well and how their performance varies across different datasets. This paper addresses this lacuna by emphasizing the pivotal role of mutual information in the performance variations between MLPs and GNNs. We first introduce a tractable metric to quantify the mutual information between node features and graph structure, based on which we observe different characteristics of various datasets, aligning with empirical results. Subsequently, we present InfoMLP, which optimizes node embeddings’ mutual information with the graph’s structure, i.e., the adjacency matrix. Our info-max objective comprises two sub-objectives: the first focuses on non-parametric preprocessing to identify the optimal graph-augmented node feature matrix that encapsulates the most graph-related information. The second sub-objective aims to enhance mutual information between node embeddings derived from the original node features and those from the graph-augmented features. This integration of message-passing during preprocessing maintains the efficiency of InfoMLP, ensuring it remains as efficient as a standard MLP during both training and testing. We validate the effectiveness of our approach through experiments on real-world datasets of varying scales supplemented by comprehensive ablation studies. Our results affirm our analysis and underscore the success of our innovative approach.
Rejected_Submission
/pdf/cefeddd3a4f47a2c8536937a3b6566df0e911099.pdf
ICLR.cc/2024/Conference
2qLSkTuqrb
Translating cognitive models into neural and statistical descriptions of real-world multi-agent foraging behavior
Foraging is a multi-agent social behavior that has been studied from many perspectives, including cognitive science, neuroscience, and statistics. We start from a specific type of cognitive description -- agents with internal preferences expressed as value functions -- and implement it as a biologically plausible neural network. We also present an equivalent statistical model where statistical predictors correspond to components of the value function. We use the neural network to simulate foraging agents in various environmental conditions and use the statistical model to discover which features in the environment best predict the agent's behavior. Our intended primary application is the study of multi-species groups of birds foraging in real-world environments. To test the viability of the statistical approach, we simulate bird agents with different preferences, and use Bayesian inference to recover what each type of agent values. In the multi-agent context, we investigate how communication of information about reward location affects group foraging behavior. We also test our modeling technique on a previously published locust foraging dataset (Gunzel et al., 2023). After evaluating the effectiveness of our method on both synthetic and previously published data, we analyze new multi-agent foraging bird data we captured through high-resolution video recordings. Our method distinguishes between proximity preferences of ducks and sparrows within foraging groups. This analysis framework provides a principled, interpretable, and parametric approach for reasoning about how birds' preferences relate to their decisions about where to move in a complex multi-agent environment.
Rejected_Submission
/pdf/6c982b6d2ae1731c1e9d4282d47d63c93a2d0faf.pdf
ICLR.cc/2025/Conference
FvQsk3la17
Langevin Soft Actor-Critic: Efficient Exploration through Uncertainty-Driven Critic Learning
Existing actor-critic algorithms, which are popular for continuous control reinforcement learning (RL) tasks, suffer from poor sample efficiency due to the lack of a principled exploration mechanism. Motivated by the success of Thompson sampling for efficient exploration in RL, we propose a novel model-free RL algorithm, \emph{Langevin Soft Actor Critic} (LSAC), which prioritizes enhancing critic learning through uncertainty estimation over policy optimization. LSAC employs three key innovations: approximate Thompson sampling through distributional Langevin Monte Carlo (LMC) based $Q$ updates, parallel tempering for exploring multiple modes of the posterior of the $Q$ function, and diffusion-synthesized state-action samples regularized with $Q$ action gradients. Our extensive experiments demonstrate that LSAC outperforms or matches the performance of mainstream model-free RL algorithms for continuous control tasks. Notably, LSAC marks the first successful application of LMC-based Thompson sampling in continuous control tasks with continuous action spaces.
ICLR 2025 Poster
/pdf/cd0c8d7ae2ff9e613598dc4f01b80fb409713495.pdf
ICLR.cc/2025/Conference
1DIdt2YOPw
Uncertainty-Based Abstention in LLMs Improves Safety and Reduces Hallucinations
A major barrier to the practical deployment of large language models (LLMs) is their lack of reliability. Three situations where this is particularly apparent are correctness, hallucinations when given unanswerable questions, and safety, where responses are harmful or offensive. In all three cases, models should ideally abstain from responding---much like humans refrain from answering questions when uncertain. Inspired by analogous approaches in classification, this study explores the feasibility and efficacy of LLMs abstaining when uncertain in the domain of question-answering. We investigate two kinds of uncertainties: statistical uncertainty metrics and a distinct verbalized measure, termed In Dialogue Uncertainty (InDU), which measures hedge words such as `I don't know' in responses. Using these uncertainty measures combined with models with and without reinforcement learning with human feedback (RLHF), we show that, in all three situations, abstention based on the right kind of uncertainty measure can boost the reliability of LLMs. By abstaining for a few highly uncertain samples we improve correctness by up to 8\%, avoid 50\% of hallucinations by correctly identifying unanswerable questions, and in particular increase safety by 70-99\% with almost no additional computational overhead.
Rejected_Submission
/pdf/66216b5fd69ab306b3caa73bd9d7d3a92e0c88b6.pdf
ICLR.cc/2025/Conference
Oeb0I3JcVc
Geometry-Aware Approaches for Balancing Performance and Theoretical Guarantees in Linear Bandits
This paper is motivated by recent research in the $d$-dimensional stochastic linear bandit literature, which has revealed an unsettling discrepancy: algorithms like Thompson sampling and Greedy demonstrate promising empirical performance, yet this contrasts with their pessimistic theoretical regret bounds. The challenge arises from the fact that while these algorithms may perform poorly in certain problem instances, they generally excel in typical instances. To address this, we propose a new data-driven technique that tracks the geometric properties of the uncertainty ellipsoid around the main problem parameter. This methodology enables us to formulate a data-driven frequentist regret bound, which incorporates the geometric information, for a broad class of base algorithms, including Greedy, OFUL, and Thompson sampling. This result allows us to identify and ``course-correct'' problem instances in which the base algorithms perform poorly. The course-corrected algorithms achieve the minimax optimal regret of order $\tilde{\mathcal{O}}(d\sqrt{T})$ for a $T$-period decision-making scenario, effectively maintaining the desirable attributes of the base algorithms, including their empirical efficacy. We present simulation results to validate our findings using synthetic and real data.
ICLR 2025 Poster
/pdf/83b325bb45438add11efd2a65fabbbb887598945.pdf
ICLR.cc/2024/Conference
iShM3YolRY
On the Tool Manipulation Capability of Open-sourced Large Language Models
Recent studies on software tool manipulation with large language models (LLMs) mostly rely on closed model APIs. The industrial adoption of these models is substantially constrained due to the security and robustness risks in exposing information to closed LLM API services. In this paper, we ask whether we can enhance open-source LLMs to be competitive with leading closed LLM APIs in tool manipulation, with a practical amount of human supervision. By analyzing common tool manipulation failures, we first demonstrate that open-source LLMs may require training with usage examples, in-context demonstration and generation style regulation to resolve failures. These insights motivate us to revisit classical methods in the LLM literature, and demonstrate that we can adapt them as model alignment with programmatic data generation, system prompts and in-context demonstration retrievers to enhance open-source LLMs for tool manipulation. To evaluate these techniques, we create ToolBench, a tool manipulation benchmark consisting of diverse software tools for real-world tasks. We demonstrate that our techniques can boost leading open-source LLMs by up to a 94% success rate, showing capabilities competitive with OpenAI GPT-4 in 4 out of 8 ToolBench tasks. We show that such enhancement typically requires about one developer day to curate data for each tool, rendering a recipe with a practical amount of human supervision.
Rejected_Submission
/pdf/26e92ad1a6fe7c03d08ee3f71a8e6390c790edc5.pdf
ICLR.cc/2024/Conference
LVFoynuAQn
A universal metric of dataset similarity for multi-source learning
Multi-source learning is a machine learning approach that involves training on data from multiple sources. Applied domains such as healthcare and finance have been increasingly using multi-source learning to improve model performance. However, datasets collected from different sources can be non-identically distributed, leading to degradation in model performance. Most existing methods for assessing dataset similarity are limited by being dataset or task-specific. They propose similarity metrics that are either unbounded and dependent on dataset dimension and scale, or require model-training. Moreover, these metrics can only be calculated by exchanging data across sources, which can be a privacy concern in domains such as healthcare and finance. To address these challenges, we propose a novel bounded metric for assessing dataset similarity. Our metric exhibits several desirable properties: it is dataset-agnostic, considers label information, and requires no model training. First, we establish a theoretical connection between our metric and the learning process. Next, we extensively evaluate our metric on a range of real-world datasets and demonstrate that our cost metric assigns scores that align with how these data were collected. Further, we show a robust and interpretable relationship between our metric and multi-source learning performance. Finally, we provide a privacy-preserving method to calculate our metric. Our metric can provide valuable insights for deep learning practitioners using multi-source datasets.
Rejected_Submission
/pdf/1b5618b5c87b4f851c580454f9a3a5060e345f8f.pdf
ICLR.cc/2024/Conference
IuXR1CCrSi
Talk like a Graph: Encoding Graphs for Large Language Models
Graphs are a powerful tool for representing and analyzing complex relationships in real-world applications such as social networks, recommender systems, and computational finance. Reasoning on graphs is essential for drawing inferences about the relationships between entities in a complex system, and to identify hidden patterns and trends. Despite the remarkable progress in automated reasoning with natural text, reasoning on graphs with large language models (LLMs) remains an understudied problem. In this work, we perform the first comprehensive study of encoding graph-structured data as text for consumption by LLMs. We show that LLM performance on graph reasoning tasks varies on three fundamental levels: (1) the graph encoding method, (2) the nature of the graph task itself, and (3) interestingly, the very structure of the graph considered. These novel results provide valuable insight on strategies for encoding graphs as text. Using these insights we illustrate how the correct choice of encoders can boost performance on graph reasoning tasks inside LLMs by 4.8% to 61.8%, depending on the task.
ICLR 2024 poster
/pdf/ef301edfe05ea96f66b94b4ffcf33ee657cd4af2.pdf
ICLR.cc/2025/Conference
M922KJFO7O
ClusterGen: Token Generation in Sublinear Time and Memory with Clustering KV Cache
Despite the significant success of large language models (LLMs), their extensive memory requirements pose challenges for deploying them in long-context token generation. The substantial memory footprint of LLM decoders arises from the necessity to store all previous tokens in the attention module, a requirement imposed by key-value (KV) caching. In this work, our focus is on developing an efficient compression technique for the KV cache. Empirical evidence indicates a significant clustering tendency within key embeddings in the attention module. Building on this key insight, we have devised a novel caching method with sublinear complexity, employing online clustering on key tokens and online sampling on values. The result is a provably accurate and efficient attention decoding algorithm, termed ClusterGen. Not only does this algorithm ensure a sublinear memory footprint and sublinear time complexity, but we also establish a tight error bound for our approach. Empirical evaluations on long-context question-answering tasks demonstrate that ClusterGen significantly outperforms existing and state-of-the-art KV cache compression methods in terms of performance and efficiency.
Rejected_Submission
/pdf/3e04eac1d99f4ccb46e0ab59726b90441e799ce1.pdf
ICLR.cc/2025/Conference
gXV84CnMUm
Outward Odyssey: Improving Reward Models with Proximal Policy Exploration for Preference-Based Reinforcement Learning
Reinforcement learning (RL) heavily depends on well-designed reward functions, which can be challenging to create and may introduce biases, especially for complex behaviors. Preference-based RL (PbRL) addresses this by using human feedback to construct a reward model that reflects human preferences, though it still requires considerable human involvement. To alleviate this, several PbRL methods aim to select queries that need minimal feedback. However, these methods do not directly enhance the data coverage within the preference buffer. In this paper, to emphasize the critical role of preference buffer coverage in determining the quality of the reward model, we first investigate and find that a reward model's evaluative accuracy is highest for trajectories within the preference buffer's distribution and significantly decreases for out-of-distribution trajectories. Motivated by this phenomenon, we introduce the **Proximal Policy Exploration (PPE)** algorithm, which consists of a *proximal-policy extension* method and a *mixture distribution query* method. To achieve higher preference buffer coverage, the *proximal-policy extension* method encourages active exploration of data within near-policy regions that fall outside the preference buffer's distribution. To balance the inclusion of in-distribution and out-of-distribution data, the *mixture distribution query* method proactively selects a mix of data from both outside and within the preference buffer's distribution for querying. PPE not only expands the preference buffer's coverage but also ensures the reward model's evaluative capability for in-distribution data. Our comprehensive experiments demonstrate that PPE achieves significant improvement in both human feedback efficiency and RL sample efficiency, underscoring the importance of preference buffer coverage in PbRL tasks.
Rejected_Submission
/pdf/60439a146349b7ec264d8c07c6c59d64e75005dc.pdf
ICLR.cc/2024/Conference
BWlSNtViSA
Coupling Fairness and Pruning in a Single Run: a Bi-level Optimization Perspective
Deep neural networks have demonstrated remarkable performance in various tasks. With a growing need for sparse deep learning, model compression techniques, especially pruning, have gained significant attention. However, conventional pruning techniques can inadvertently exacerbate algorithmic bias, resulting in unequal predictions. To address this, we define a fair pruning task where a sparse model is derived subject to fairness requirements. In particular, we propose a framework to jointly optimize the pruning mask and weight update processes with fairness constraints. This framework is engineered to compress models that maintain performance while ensuring fairness in a single execution. To this end, we formulate the fair pruning problem as a novel constrained bi-level optimization task and derive efficient and effective solving strategies. We design experiments spanning various datasets and settings to validate our proposed method. Our empirical analysis contrasts our framework with several mainstream pruning strategies, emphasizing our method's superiority in maintaining model fairness, performance, and efficiency.
Rejected_Submission
/pdf/41e3667b99d36a69a734e122c240b67ae95769c4.pdf
ICLR.cc/2025/Conference
IEs29RYxfK
VisionTS: Visual Masked Autoencoders Are Free-Lunch Zero-Shot Time Series Forecasters
Foundation models have emerged as a promising approach in time series forecasting (TSF). Existing approaches either repurpose large language models (LLMs) or build large-scale time series datasets to develop TSF foundation models for universal forecasting. However, these methods face challenges due to the severe cross-domain gap or in-domain heterogeneity. This paper explores a new road to building a TSF foundation model from rich and high-quality natural images. Our key insight is that a visual masked autoencoder, pre-trained on the ImageNet dataset, can naturally be a numeric series forecaster. By reformulating TSF as an image reconstruction task, we bridge the gap between image pre-training and TSF downstream tasks. Surprisingly, without further adaptation in the time-series domain, the proposed VisionTS could achieve superior zero-shot forecasting performance compared to existing TSF foundation models. With fine-tuning for one epoch, VisionTS could further improve the forecasting and achieve state-of-the-art performance in most cases. Extensive experiments reveal intrinsic similarities between images and real-world time series, suggesting visual models may offer a ``free lunch'' for TSF and highlight the potential for future cross-modality research. Our code is available in the Supplementary Material.
Rejected_Submission
/pdf/f1547776c84413ecdf662491b97fb576eecf7de2.pdf
ICLR.cc/2025/Conference
R2OzZWOkjz
Retrieval-Augmented Editing Generation: Impact of Knowledge Editing and Fine-Tuning on RAG
The knowledge embedded in Large Language Models (LLMs) is static, tied to the time when the training data was collected. While Retrieval-Augmented Generation (RAG) methods are widely used to introduce new knowledge, they simply rely on retrieved information for reasoning without integrating it into the model's parameters. This limits the model's ability for long-term knowledge retention and autonomous learning. To overcome this, in this work, we propose the \textbf{R}etrieval-\textbf{A}ugmented \textbf{E}diting \textbf{G}eneration (RAEG) framework for open-domain question answering (ODQA) tasks. RAEG enhances model generation performance by first editing the retrieved paragraphs to inject necessary knowledge, followed by an augmented generation phase. This dual mechanism—combining knowledge injection and retrieval augmentation—provides complementary advantages in the reasoning process. When the injected knowledge alone is insufficient for accurate generation, the model can rely on the retrieved information to compensate, and conversely, when retrieval yields suboptimal results, the injected knowledge ensures continuity and accuracy in the response. This interplay between internalized and externally sourced knowledge reinforces the model's ability to produce correct answers, thereby enhancing overall task performance. We explore the impact of two key methods for knowledge injection: Knowledge Editing (KE) and Parameter-Efficient Fine-Tuning (PEFT), and analyze how modifying the model's parameters influences its reasoning abilities and generation outcomes. To further improve RAEG's performance, we introduce a re-ranking mechanism to optimize the integration of external knowledge and apply parameter pruning to mitigate the potential drawbacks of parameter modifications during KE. Evaluations on two authoritative ODQA benchmarks show that RAEG can serve as a competitive replacement for RAG. Our data and code will be available at \url{https://github.com/XXX/XXX}.
Rejected_Submission
/pdf/930ef6df77cf2a9d44c25bf9f7c228cefdedd583.pdf
ICLR.cc/2025/Conference
3MDmM0rMPQ
Inverse Prompt Engineering for Task-Specific LLM Safety
Most real-world deployments of large language models (LLMs) operate within well-scoped tasks, yet current safety measures are general-purpose and fail to leverage this information. As a result, even in narrowly-scoped tasks, LLM applications remain vulnerable to adversarial jailbreaks. In these settings, we argue that task-specific safety guardrails solve a more tractable problem than general-purpose methods. We introduce Inverse Prompt Engineering (IPE) as an initial approach to building automatic, task-specific safety guardrails around LLMs. Our key insight is that robust safety guardrails can be derived from prompt engineering data that is already on hand. IPE operationalizes the principle of least privilege from computer security, restricting LLM functionality to only what is necessary for the task. We evaluate our approach in two settings. First, in an example chatbot application, IPE outperforms existing methods against both human-written and automated adversarial attacks. Second, on TensorTrust, a crowdsourced dataset of prompt-based attacks and defenses, IPE improves average defense robustness by 93\%, using real-world prompt engineering data.
Rejected_Submission
/pdf/2b3bc714b1bd54d8f1af82f18900a8fbe4a56881.pdf
ICLR.cc/2025/Conference
Wxl0JMgDoU
Understanding Skill Adaptation in Transformers Using Sparse Autoencoders: Chess as a Model System
Understanding how skill shapes decision-making in complex environments is a challenging problem in AI interpretability. We investigate this question by applying Sparse Autoencoders (SAEs) to the internal representations of Maia-2, a human-like chess model that simulates human play across varying skill levels. Maia-2 incorporates a skill-aware transformer that integrates position features with categorical skill inputs, capturing nuanced relationships between player expertise and move selection. By training SAEs on these modulated representations, we identify latent features that reveal how the model's threat response policy adapts to different levels of play. We then use these features to intervene on the internal activations of Maia-2, eliciting both higher skill and lower skill play in specific contexts. We also apply mediated intervention with targeted SAE features to effectively enhance and sabotage the model's understanding and decision-making on context-specific chess tasks. Our findings suggest that SAE features can help shed light on how skill-specific information is encoded within a model to produce human-like behavior, and that these insights can be applied to steer the model's performance on specific sub-tasks. Our work is available at \url{https://anonymous.4open.science/r/chess-sae-3C06/}
Rejected_Submission
/pdf/b76494bb8a0657fcf61f815fdde90ed5a6b683e8.pdf
ICLR.cc/2024/Conference
pK7V0glCdj
BOtied: Multi-objective Bayesian optimization with tied multivariate ranks
Many scientific and industrial applications require the joint optimization of multiple, potentially competing objectives. Multi-objective Bayesian optimization (MOBO) is a sample-efficient framework for identifying Pareto-optimal solutions. At the heart of MOBO is the acquisition function, which determines the next candidate to evaluate by navigating the best compromises among the objectives. Multi-objective acquisition functions that rely on box decomposition of the objective space, such as the expected hypervolume improvement (EHVI) and entropy search, scale poorly to a large number of objectives. We begin by showing a natural connection between non-dominated solutions and the highest multivariate rank, which coincides with the outermost level line of the joint cumulative distribution function (CDF). Motivated by this link, we propose the CDF indicator, a Pareto-compliant metric for evaluating the quality of approximate Pareto sets that complements the popular hypervolume indicator. We then propose an acquisition function based on the CDF indicator, called BOtied. BOtied can be implemented efficiently with copulas, a statistical tool for modeling complex, high-dimensional distributions. We benchmark BOtied against common acquisition functions, including EHVI, entropy search, and random scalarization, in a series of synthetic and real-data experiments. BOtied performs on par with the baselines across datasets and metrics while being computationally efficient.
Rejected_Submission
/pdf/14f020ce4bccbcf34a57be81a0e80d8e33065c2a.pdf
ICLR.cc/2025/Conference
WyZT4ZmMzf
Evaluating Representational Similarity Measures from the Lens of Functional Correspondence
Neuroscience and artificial intelligence (AI) both face the challenge of interpreting high-dimensional neural data, where the comparative analysis of such data is crucial for revealing shared mechanisms and differences between these complex systems. Despite the widespread use of representational comparisons and the abundance of comparison methods, a critical question remains: which metrics are most suitable for these comparisons? While some studies evaluate metrics based on their ability to differentiate models of different origins or constructions (e.g., various architectures), another approach is to assess how well they distinguish models that exhibit distinct behaviors. To investigate this, we examine the degree of alignment between various representational similarity measures and behavioral outcomes, employing group statistics and a comprehensive suite of behavioral metrics for comparison. In our evaluation of eight commonly used representational similarity metrics in the visual domain—spanning alignment-based, CCA-based, inner product kernel-based, and nearest-neighbor methods—we found that metrics like linear CKA and Procrustes, which emphasize the overall geometric structure or shape of representations, excelled in differentiating trained from untrained models and aligning with behavioral measures, whereas metrics such as linear predictivity, commonly used in neuroscience, demonstrated only moderate alignment with behavior. These insights are crucial for selecting metrics that emphasize behaviorally meaningful comparisons in NeuroAI research.
Rejected_Submission
/pdf/6d65a429ae46c4c29921fc52415d83e193b5c500.pdf