| field | dtype | range |
| --- | --- | --- |
| venue | stringclasses | 11 values |
| paper_openreview_id | stringlengths | 9–13 |
| title | stringlengths | 4–179 |
| abstract | stringlengths | 2–4.99k |
| paper_decision | stringclasses | 29 values |
| paper_pdf_link | stringlengths | 31–63 |
ICLR.cc/2025/Conference
FCMpUOZkxi
On Stochastic Contextual Bandits with Knapsacks in Small Budget Regime
This paper studies stochastic contextual bandits with knapsack constraints (CBwK), where a learner observes a context, takes an action, receives a reward, and incurs a vector of costs at every round. The learner aims to maximize the cumulative rewards across $T$ rounds under the knapsack constraints with an initial budget of $B$. We study CBwK in the small budget regime where the budget $B = \Omega(\sqrt{T})$ and propose an Adaptive and Universal Primal--Dual algorithm (AUPD) that achieves strong regret performance: i) AUPD achieves $\tilde{O}((1 + \frac{\nu^*}{\delta b})\sqrt{T})$ regret under the strict feasibility assumption without any prior information, matching the best-known bounds; ii) AUPD achieves $\tilde{O}(\sqrt{T}+ \frac{\nu^*}{\sqrt{b}}T^{\frac{3}{4}})$ regret without strict feasibility assumption, which, to the best of our knowledge, is the first result in the literature. Here, the parameter $\nu^*$ represents the optimal average reward; $b=B/T$ is the average budget and $\delta b$ is the feasibility/safety margin. We establish these strong results through the adaptive budget-aware design, which effectively balances reward maximization and budget consumption. We provide a new perspective on analyzing budget consumption using the Lyapunov drift method, along with a refined analysis of its cumulative variance. Our theory is further supported by experiments conducted on a large-scale dataset.
ICLR 2025 Poster
/pdf/c9a72759526a202bc84cc6763d54b9cf7da9dd6b.pdf
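To make the primal-dual mechanism behind AUPD-style algorithms concrete, here is a minimal bandits-with-knapsacks sketch: a dual variable prices the budget, and each round's action maximizes the price-adjusted value. The reward/cost means, step size `eta`, and stopping rule below are illustrative assumptions, not the paper's estimators or adaptive design.

```python
import numpy as np

rng = np.random.default_rng(0)
T, K, B = 10_000, 5, 2_000           # rounds, actions, total budget
b = B / T                            # average per-round budget
true_reward = rng.uniform(0.2, 0.8, K)
true_cost = rng.uniform(0.1, 0.9, K)

lam = 0.0                            # dual variable (budget price)
eta = 1.0 / np.sqrt(T)               # dual step size (assumed)
budget, total_reward = B, 0.0

for t in range(T):
    if budget < 1.0:                 # stop when the budget is exhausted
        break
    # Primal step: pick the action with the best price-adjusted value.
    # (A full CBwK learner would use context-dependent estimates here;
    # we cheat with the true means to keep the sketch self-contained.)
    a = int(np.argmax(true_reward - lam * true_cost))
    reward = float(rng.random() < true_reward[a])
    cost = float(rng.random() < true_cost[a])
    total_reward += reward
    budget -= cost
    # Dual step: raise the price when spending exceeds the per-round budget.
    lam = max(0.0, lam + eta * (cost - b))

print(f"reward={total_reward:.0f}, leftover budget={budget:.0f}")
```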
ICLR.cc/2025/Conference
JBXO05r4AV
From Few to Many: Self-Improving Many-Shot Reasoners Through Iterative Optimization and Generation
Recent advances in long-context large language models (LLMs) have led to the emerging paradigm of many-shot in-context learning (ICL), where it is observed that scaling to many more demonstration examples in context, beyond the conventional few-shot setup, can lead to performance benefits. However, despite its promise, it is unclear which aspects dominate the benefits and whether simply scaling to more examples is the most effective way of improving many-shot ICL. In this work, we first provide an analysis of the factors driving many-shot ICL, and we find that 1) many-shot performance can often still be attributed to a few disproportionately influential examples and 2) identifying such influential examples ("optimize") and using them as demonstrations to regenerate new examples ("generate") can lead to further improvements. Inspired by these findings, we propose BRIDGE, an algorithm that alternates between the optimize step, which uses Bayesian optimization to discover the influential sets of examples, and the generate step, which reuses this set to expand the reasoning paths of the examples back to the many-shot regime automatically. On Gemini, Claude, and Mistral LLMs of different sizes, we show that BRIDGE leads to significant improvements across a diverse set of tasks, including symbolic reasoning, numerical reasoning, and code generation.
ICLR 2025 Poster
/pdf/7496b87c72aeb373439fecdaa5d7d5c0193634d8.pdf
ICLR.cc/2024/Conference
t8vJSIsLhC
SMPE: A Framework for Multi-Dimensional Permutation Equivariance
Permutation equivariance (PE) is an important inductive prior for addressing tasks such as point cloud segmentation, where permuting objects in the input set maintains the output features of each object. However, state-of-the-art PE methods have mainly focused on the one-dimensional case, which cannot meet the requirements of multi-dimensional tasks such as auction design, pseudo-inverse computation, and multiuser resource allocation in wireless networks. Directly incorporating high-dimensional equivariance into network design necessitates tensor operations and complicated parameter-sharing patterns, which has limited its exploration. In this paper, we propose a novel serial multi-dimensional permutation equivariance (SMPE) framework to address these challenges. By serially composing multiple one-dimensional equivariant layers and incorporating dense connections for feature reuse, the proposed SMPE framework enables cross-dimensional interactions among objects while maintaining multi-dimensional equivariance. Additionally, we extend the SMPE framework to permutation-invariance scenarios, as well as hybrid equivariance and invariance, through pooling operations. We use an extensive set of experiments to evaluate the framework on contextual auction design, pseudo-inverse computation, and multiuser wireless communication tasks. We observe that the SMPE framework not only maintains the equivariance property needed to support variable set sizes, but also outperforms state-of-the-art models. For example, SMPE gains improvements as high as 8.4% and 14.4% over state-of-the-art methods in two typical multiuser resource allocation scenarios.
Rejected_Submission
/pdf/c0d39979692f8d9688cd6e03b17c2ae71c2e2466.pdf
ICLR.cc/2024/Conference
PXD3FAVHJT
Understanding the Effects of RLHF on LLM Generalisation and Diversity
Large language models (LLMs) fine-tuned with reinforcement learning from human feedback (RLHF) have been used in some of the most widely deployed AI models to date, such as OpenAI's ChatGPT or Anthropic's Claude. While there has been significant work developing these methods, our understanding of the benefits and downsides of each stage in RLHF is still limited. To fill this gap, we present an extensive analysis of how each stage of the process (i.e. supervised fine-tuning (SFT), reward modelling, and RLHF) affects two key properties: out-of-distribution (OOD) generalisation and output diversity. OOD generalisation is crucial given the wide range of real-world scenarios in which these models are being used, while output diversity refers to the model's ability to generate varied outputs and is important for a variety of use cases. We perform our analysis across two base models on both summarisation and instruction following tasks, the latter being highly relevant for current LLM use cases. We find that RLHF generalises better than SFT to new inputs, particularly as the distribution shift between train and test becomes larger. However, RLHF significantly reduces output diversity compared to SFT across a variety of measures, implying a tradeoff in current LLM fine-tuning methods between generalisation and diversity. Our results provide guidance on which fine-tuning method should be used depending on the application, and show that more research is needed to improve the tradeoff between generalisation and diversity.
ICLR 2024 poster
/pdf/6f1e5006fcc3a70f6fecd669f989e44c7c7c8e08.pdf
ICLR.cc/2025/Conference
q2Lnyegkr8
Forgetting Transformer: Softmax Attention with a Forget Gate
An essential component of modern recurrent sequence models is the forget gate. While Transformers do not have an explicit recurrent form, we show that a forget gate can be naturally incorporated into Transformers by down-weighting the unnormalized attention scores in a data-dependent way. We name this attention mechanism Forgetting Attention and the resulting model the Forgetting Transformer (FoX). We show that FoX outperforms the Transformer on long-context language modeling, length extrapolation, and short-context downstream tasks, while performing on par with the Transformer on long-context downstream tasks. Moreover, it is compatible with the FlashAttention algorithm and does not require any positional embeddings. Several analyses, including the needle-in-the-haystack test, show that FoX also retains the Transformer's superior long-context capabilities over recurrent sequence models such as Mamba-2, HGRN2, and DeltaNet. We also introduce a "Pro" block design that incorporates some common architectural components in recurrent sequence models and find it significantly improves the performance of both FoX and the Transformer. Our code is available at [`https://github.com/zhixuan-lin/forgetting-transformer`](https://github.com/zhixuan-lin/forgetting-transformer).
ICLR 2025 Poster
/pdf/bb7fdc5c6bd81ed3bc4388437015627c8aad8934.pdf
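A minimal NumPy sketch of the forgetting-attention idea described above: each unnormalized causal attention score is shifted by the accumulated log forget gates between key and query positions, so older keys are down-weighted in a data-dependent way. The gate parameterization here (a sigmoid of a random projection) is a stand-in; see the linked repository for the actual FoX implementation.

```python
import numpy as np

def forgetting_attention(q, k, v, f):
    """Causal attention where score(i, j) = q_i . k_j / sqrt(d) plus the
    sum of log forget gates f_t over t in (j, i], so distant past keys
    are down-weighted in a data-dependent way."""
    T, d = q.shape
    scores = q @ k.T / np.sqrt(d)
    cum = np.concatenate([[0.0], np.cumsum(np.log(f))])  # cum[m] = sum_{t<m} log f_t
    scores = scores + (cum[1:, None] - cum[None, 1:])    # adds sum_{t=j+1..i} log f_t
    scores = np.where(np.tril(np.ones((T, T), dtype=bool)), scores, -np.inf)
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ v

rng = np.random.default_rng(0)
T, d = 6, 4
x = rng.normal(size=(T, d))
f = 1.0 / (1.0 + np.exp(-(x @ rng.normal(size=d))))  # sigmoid gate per timestep
print(forgetting_attention(x, x, x, f).shape)        # (6, 4)
```

Setting all gates to 1 recovers standard softmax attention, which is why the mechanism is a drop-in modification of the score matrix.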
ICLR.cc/2025/Conference
R4q3cY3kQf
MaxInfoRL: Boosting exploration in reinforcement learning through information gain maximization
Reinforcement learning (RL) algorithms aim to balance exploiting the current best strategy with exploring new options that could lead to higher rewards. The most common RL algorithms use undirected exploration, i.e., they select random sequences of actions. Exploration can also be directed using intrinsic rewards, such as curiosity or model epistemic uncertainty. However, effectively balancing task and intrinsic rewards is challenging and often task-dependent. In this work, we introduce MaxInfoRL, a framework for balancing intrinsic and extrinsic exploration. MaxInfoRL steers exploration towards informative transitions by maximizing intrinsic rewards such as the information gain about the underlying task. When combined with Boltzmann exploration, this approach naturally trades off maximization of the value function with that of the entropy over states, rewards, and actions. We show that our approach achieves sublinear regret in the simplified setting of multi-armed bandits. We then apply this general formulation to a variety of off-policy model-free RL methods for continuous state-action spaces, yielding novel algorithms that achieve superior performance across hard exploration problems and complex scenarios such as visual control tasks.
ICLR 2025 Poster
/pdf/9ebe006d2f1243528a11ce0940decaf67ce91184.pdf
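A toy multi-armed-bandit sketch of the exploration recipe the abstract describes: an intrinsic information-gain bonus added to the value estimates inside a Boltzmann policy. The 1/counts bonus, bonus weight, and temperature are illustrative assumptions for this simplified setting; the paper uses model epistemic uncertainty in the general RL case.

```python
import numpy as np

rng = np.random.default_rng(0)
K, T = 5, 2_000
true_mean = rng.uniform(0, 1, K)
counts, means = np.ones(K), np.zeros(K)   # visit counts, empirical means

for _ in range(T):
    # Intrinsic bonus: a crude information-gain surrogate that shrinks
    # as an arm is visited more often (an assumption for this toy setting).
    info_gain = 1.0 / counts
    logits = (means + 2.0 * info_gain) / 0.1   # Boltzmann policy, temperature 0.1
    p = np.exp(logits - logits.max())
    p /= p.sum()
    a = rng.choice(K, p=p)
    r = float(rng.random() < true_mean[a])
    counts[a] += 1
    means[a] += (r - means[a]) / counts[a]

print(true_mean.argmax(), counts.argmax())  # the best arm should dominate play
```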
ICLR.cc/2024/Conference
ATEawsFUj4
GAIA: Zero-shot Talking Avatar Generation
Zero-shot talking avatar generation aims at synthesizing natural talking videos from speech and a single portrait image. Previous methods have relied on domain-specific heuristics such as warping-based motion representation and 3D Morphable Models, which limit the naturalness and diversity of the generated avatars. In this work, we introduce GAIA (Generative AI for Avatar), which eliminates the domain priors in talking avatar generation. In light of the observation that the speech only drives the motion of the avatar while the appearance of the avatar and the background typically remain the same throughout the entire video, we divide our approach into two stages: 1) disentangling each frame into motion and appearance representations; 2) generating motion sequences conditioned on the speech and reference portrait image. We collect a large-scale high-quality talking avatar dataset and train the model on it with different scales (up to 2B parameters). Experimental results verify the superiority, scalability, and flexibility of GAIA as 1) the resulting model beats previous baseline models in terms of naturalness, diversity, lip-sync quality, and visual quality; 2) the framework is scalable since larger models yield better results; 3) it is general and enables different applications like controllable talking avatar generation and text-instructed avatar generation.
ICLR 2024 poster
/pdf/8b2a0bcb53dd30b0eaf30797acfc1d2a0099a1a1.pdf
ICLR.cc/2024/Conference
VIEbRFp6s3
Off-the-Grid MARL: Datasets with Baselines for Offline Multi-Agent Reinforcement Learning
Being able to harness the power of large datasets for developing cooperative multi-agent controllers promises to unlock enormous value for real-world applications. Many important industrial systems are multi-agent in nature and are difficult to model using bespoke simulators. However, in industry, distributed processes can often be recorded during operation, and large quantities of demonstrative data stored. Offline multi-agent reinforcement learning (MARL) provides a promising paradigm for building effective decentralised controllers from such datasets. However, offline MARL is still in its infancy and therefore lacks standardised benchmark datasets and baselines typically found in more mature subfields of reinforcement learning (RL). These deficiencies make it difficult for the community to sensibly measure progress. In this work, we aim to fill this gap by releasing off-the-grid MARL (OG-MARL): a growing repository of high-quality datasets with baselines for cooperative offline MARL research. Our datasets provide settings that are characteristic of real-world systems, including complex environment dynamics, heterogeneous agents, non-stationarity, many agents, partial observability, suboptimality, sparse rewards and demonstrated coordination. For each setting, we provide a range of different dataset types (e.g. Good, Medium, Poor, and Replay) and profile the composition of experiences for each dataset. We hope that OG-MARL will serve the community as a reliable source of datasets and help drive progress, while also providing an accessible entry point for researchers new to the field.
Rejected_Submission
/pdf/d66970b456eb31be75650468506651951f3ce007.pdf
ICLR.cc/2025/Conference
aKJr5NnN8U
Toward Understanding In-context vs. In-weight Learning
It has recently been demonstrated empirically that in-context learning emerges in transformers when certain distributional properties are present in the training data, but this ability can also diminish upon further training. We provide a new theoretical understanding of these phenomena by identifying simplified distributional properties that give rise to the emergence and eventual disappearance of in-context learning. We do so by first analyzing a simplified model that uses a gating mechanism to choose between an in-weight and an in-context predictor. Through a combination of generalization error and regret analyses, we identify conditions under which in-context and in-weight learning emerge. These theoretical findings are then corroborated experimentally by comparing the behaviour of a full transformer on the simplified distributions to that of the stylized model, demonstrating aligned results. We then extend the study to a full large language model, showing how fine-tuning on various collections of natural language prompts can elicit similar in-context and in-weight learning behaviour.
ICLR 2025 Poster
/pdf/2e4ac39ebd8708acb95ef5a5f8fff63dd2917992.pdf
ICLR.cc/2025/Conference
8sSqNntaMr
RouteLLM: Learning to Route LLMs from Preference Data
Large language models (LLMs) excel at a wide range of tasks, but choosing the right model often involves balancing performance and cost. Powerful models offer better results but are expensive, while smaller models are more cost-effective but less capable. To address this trade-off, we introduce a training framework for learning efficient router models that dynamically select between a stronger and a weaker LLM during inference. Our framework leverages human preference data and employs data augmentation techniques to enhance performance. Evaluations on public benchmarks show that our approach can reduce costs by more than a factor of two without sacrificing response quality. Moreover, our routers exhibit strong generalization capabilities, maintaining performance even when routing between LLMs not included in training. This highlights the potential of our framework to deliver cost-effective, high-performance LLM solutions.
ICLR 2025 Poster
/pdf/900c2f5b71fd9d4dc34d8f299cce9a11336d30b4.pdf
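A minimal sketch of the routing idea: a cheap classifier trained on preference data predicts the probability that the strong model's answer would win, and a threshold converts that probability into a cost/quality dial. The toy features, logistic router, and `threshold` value are assumptions for illustration, not the paper's trained routers.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy preference data: x = query features, y = 1 if the strong model's
# answer was preferred over the weak model's (stand-in for human labels).
n, d = 2_000, 8
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true + 0.5 * rng.normal(size=n) > 0).astype(float)

# Train a logistic-regression router: p(strong wins | query).
w = np.zeros(d)
for _ in range(500):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / n

def route(x, threshold=0.7):
    """Send the query to the expensive model only when the router is
    confident the weak model would lose; `threshold` is a hypothetical
    cost/quality knob, not the paper's calibrated one."""
    p_strong = 1 / (1 + np.exp(-x @ w))
    return "strong" if p_strong > threshold else "weak"

print(route(X[0]), route(X[1]))
```

Raising the threshold sends fewer queries to the strong model, trading some response quality for lower cost.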
ICLR.cc/2025/Conference
oos6KyAUsW
Mitigating Unobserved Confounding via Diffusion Probabilistic Models
Conditional average treatment effect estimation from observational data is a challenging task due to the existence of unobserved confounders. Previous methods mostly either rely on the ignorability assumption, ignoring the unobserved confounders, or overlook the impact of a priori knowledge on the generation process of the latent variables, which can be quite impractical in real-world scenarios. Motivated by recent advances in latent variable modeling, we propose to capture the unobserved latent space using a diffusion model and, accordingly, to estimate the causal effect. More concretely, we build the reverse diffusion process for the unobserved confounders as a Markov chain conditioned on a priori knowledge. In order to implement our model in a feasible way, we derive the variational bound in closed form. In the experiments, we compare our model with state-of-the-art methods on both synthetic and real-world datasets, demonstrating consistent improvements of our model.
Rejected_Submission
/pdf/9479808394c036bfc4ea6bdb7b7597f35a9cafba.pdf
ICLR.cc/2024/Conference
skcTCdJz0f
Probabilistic Self-supervised Representation Learning via Scoring Rules Minimization
Self-supervised learning methods have shown promising results across a wide range of tasks in computer vision, natural language processing, and multimodal analysis. However, self-supervised approaches come with a notable limitation, dimensional collapse, where a model does not fully utilize its capacity to encode information optimally. Motivated by this, we propose ProSMin, a novel probabilistic self-supervised learning approach that leverages the power of probabilistic models to enhance representation quality and mitigate collapsing representations. Our proposed approach involves two neural networks, the online network and the target network, which collaborate and learn the diverse distribution of representations from each other through probabilistic knowledge distillation. The two networks are trained via our new loss function based on proper scoring rules. We provide a theoretical justification for ProSMin and demonstrate its modified scoring rule. This insight validates the method's optimization process and contributes to its robustness and effectiveness in improving representation quality. We evaluate our probabilistic model on various downstream tasks, such as in-distribution generalization, out-of-distribution detection, dataset corruption, low-shot learning, and transfer learning. Our method achieves superior accuracy and calibration, outperforming the self-supervised baseline in a variety of experiments on large datasets such as ImageNet-O and ImageNet-C. ProSMin thus demonstrates its scalability and real-world applicability. Our code is publicly available: https://github.com/amirvhd/SSL-sore-rule.
ICLR 2024 poster
/pdf/a890fe173ee454dd20bc8d12ee039d25298b5ea7.pdf
ICLR.cc/2025/Conference
OXIIFZqiiN
A Dual-Modal Framework Utilizing Visual Prompts for Enhanced Patch Analysis
Patch representation learning has emerged as a crucial innovation in software development, leveraging machine learning techniques to advance software generation workflows. This approach has led to significant enhancements across various applications involving code alterations. However, existing methods often exhibit a tendency towards specialization, excelling predominantly in either predictive tasks such as security patch classification or in generative tasks like the automated creation of patch descriptions. This paper presents a groundbreaking approach to patch representation learning through the Image-Guided Code Patch Framework (IGCP), a novel architecture that bridges the gap between code analysis and image processing domains. We introduce a rigorous mathematical foundation for IGCP, leveraging measure theory, functional analysis, and information geometry to formalize the domain adaptation process in patch representation learning. The optimization dynamics of IGCP are rigorously analyzed through the lens of Stochastic Gradient Langevin Dynamics, providing convergence guarantees in both convex and non-convex loss landscapes. Empirical evaluations demonstrate that IGCP not only achieves state-of-the-art performance in patch description generation but also exhibits remarkable domain generalization capabilities.
Rejected_Submission
/pdf/1fb677403e24a04567a05498d3e41309e3346813.pdf
ICLR.cc/2025/Conference
qxobgbamw9
Output Alignment: A Top-down Approach to Length Generalization
Recently, large language models have exhibited impressive performance and surprising emergent properties. However, their abilities remain constrained by the preset context window of the Transformer architecture, and they continue to struggle with length generalization. In this work, we propose a new perspective on length generalization by focusing on the output distribution rather than the input, as most prior studies have done (e.g., through positional encodings or data structure). First, through case studies on simple synthetic tasks, we highlight the importance of **output alignment**---the consistency of output distributions across sequences of varying lengths. We then extend this observation to natural language tasks and introduce a metric named Long-Short Misalignment to quantify output alignment, finding a strong correlation between this metric and length generalization performance. Based on these insights, we propose a regularization loss during training that improves output alignment. Extensive experiments confirm the effectiveness of this approach. Overall, our work provides a novel perspective for understanding and enhancing length generalization in large language models.
Rejected_Submission
/pdf/c70fe4acfa661696d9e2d9d9999c3731155457f5.pdf
ICLR.cc/2025/Conference
zi8YBcmXqA
PokeChamp: an Expert-level Minimax Language Agent for Competitive Pokemon
We introduce \texttt{Pok\'eChamp}, a Large Language Model (LLM) powered, game-theory-aware agent for two-player competitive Pok\'emon battles that uses an LLM prior and collected high-Elo human data to model minimax search without any additional training. \texttt{Pok\'eChamp} uses a depth-limited minimax search online, where the LLM replaces three key components: 1) action sampling from the LLM guided by prompts (including from a damage-calculation tool), 2) opponent modeling via the historical likelihood of actions in our dataset to model the effect of LLM-predicted opponent actions, and 3) state value calculation for the LLM to reflect on each intrinsic state. \texttt{Pok\'eChamp} outperforms all existing AIs (76\%) and heuristic bots (84\%) by an enormous margin, including winning consistently (>50\%) against prior human-parity work run with a frontier model, GPT-4o, while using an open-source 8-billion-parameter Llama 3.1 model. \texttt{Pok\'eChamp} achieves expert performance in the top 10\% of players on the online ladder against competitive human players at an Elo of 1500. Finally, we collect the largest Pok\'emon battling dataset, including 1 million+ games with 150k+ high-Elo games, prepare a series of battling benchmarks based on real player data and puzzles to analyze specific battling abilities, and provide crucial updates to the local game engine. Our code is available \href{https://sites.google.com/view/pokechamp-llm}{online}.
Rejected_Submission
/pdf/9ff4aaebc17b3da377d16c204e657cd4947d6fab.pdf
ICLR.cc/2024/Conference
GsNp4ob8BY
Mark My Words: Repurposing LLMs for Specialized Domains via Ability Tokens
Large Language Models (LLMs) have demonstrated remarkable proficiency in natural language understanding and generation. However, their capabilities wane in highly specialized domains, such as biomedical sciences, which are sparsely represented in the pretraining corpus. In this work, we explore how to repurpose general LMs as specialized task solvers. We introduce a novel and systematic framework for adding markup-style language extensions (which we term *"ability tokens"*) to pretrained LMs. These tokens are learned embeddings appended to the LM's embedding matrix, preserving the pretrained weights and the model's original capabilities. We introduce two types of ability tokens: *domain markers*, which delimit and aid in the processing of specialized inputs (e.g., molecular formulas), and *functional tokens*, which guide the model on how to leverage these inputs to solve specific tasks (e.g., predicting molecule properties). During inference, these tokens are inserted into the input text to wrap specialized information and provide problem context. Experimental results show that (i) our markup extensions significantly boost performance in various specialized domains, such as protein and molecular property prediction, matching and outperforming expert models specifically tailored to these tasks, and (ii) we can learn the ability tokens separately and combine them in a modular fashion, achieving zero-shot generalization to unseen tasks. Overall, our framework offers a promising method to enhance LMs with domain-specific knowledge while maintaining their general capacities.
Rejected_Submission
/pdf/4c624ac8fcf29a8c5229fcde23c80e8ce1488a62.pdf
ICLR.cc/2024/Conference
bIHyMpzeuI
Sparse MoE as a New Treatment: Addressing Forgetting, Fitting, Learning Issues in Multi-Modal Multi-Task Learning
Sparse Mixture-of-Experts (SMoE) is a promising paradigm that can be easily tailored for multi-task learning. Its conditional computing nature allows us to organically allocate relevant parts of a model for performant and efficient predictions. However, several under-explored pain points persist, especially when considering scenarios with both multiple modalities and tasks: 1) $\textit{Modality Forgetting Issue.}$ Diverse modalities may prefer conflicting optimization directions, resulting in ineffective learning or knowledge forgetting; 2) $\textit{Modality Fitting Issue.}$ Current SMoE pipelines select a fixed number of experts for all modalities, which can end up over-fitting to simpler modalities or under-fitting complex modalities; 3) $\textit{Heterogeneous Learning Pace.}$ The varied modality attributes, task resources ($\textit{i.e.,}$ the number of input samples), and task objectives usually lead to distinct optimization difficulties and convergence. Given these issues, there is a clear need for a systematic approach to harmonizing multi-modal and multi-task objectives when using SMoE. We aim to address these pain points, and propose a new $\underline{S}$parse $\underline{M}$oE framework for $\underline{M}$ulti-$\underline{M}$odal $\underline{M}$ulti-task learning, $\textit{a.k.a.}$, $\texttt{SM$^4$}$, which ($1$) disentangles model spaces for different modalities to mitigate their optimization conflicts; ($2$) automatically determines the modality-specific model size ($\textit{i.e.}$, the number of experts) to improve fitting; and ($3$) synchronizes the learning paces of disparate modalities and tasks based on training dynamics in SMoE like the entropy of routing decisions. Comprehensive experiments validate the effectiveness of $\texttt{SM$^4$}$, which outperforms previous state-of-the-art across $3$ task groups and $11$ different modalities with a clear performance margin ($\textit{e.g.}$, $\ge 1.37\%$) and a substantial computation reduction ($46.49\% \sim 98.62\%$). Code is included in the supplement.
Rejected_Submission
/pdf/851622d9ff76474b2b1574a07a2667d0d666567f.pdf
ICLR.cc/2024/Conference
uvq4Nh8eZB
Protecting Sensitive Data through Federated Co-Training
In many critical applications, sensitive data is inherently distributed. Federated learning trains a model collaboratively by aggregating the parameters of locally trained models. This avoids exposing sensitive local data. It is possible, though, to infer information about the sensitive local data from the shared model parameters. At the same time, many types of machine learning models do not lend themselves to parameter aggregation, such as decision trees, or rule ensembles. It has been observed that in many applications, in particular healthcare, large unlabeled datasets are publicly available. They can be used to exchange information between clients by distributed distillation, i.e., co-regularizing local training via the discrepancy between the soft predictions of each local client on the unlabeled dataset. This, however, still discloses private information and restricts the types of models to those trainable via gradient-based methods. We propose to go one step further and use a form of federated co-training, where local hard labels on the public unlabeled datasets are shared and aggregated into a consensus label. This consensus label can be used for local training by any supervised machine learning model. We show that this federated co-training approach achieves a model quality comparable to both federated learning and distributed distillation on a set of benchmark datasets and real-world medical datasets. It improves privacy over both approaches, protecting against common membership inference attacks to the highest degree. Furthermore, we show that federated co-training can collaboratively train interpretable models, such as decision trees and rule ensembles, achieving a model quality comparable to centralized training.
Rejected_Submission
/pdf/8a390ee95285268903498891def0542c02bcec49.pdf
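A minimal sketch of the aggregation step the abstract describes: each client shares hard labels on a shared public unlabeled set, and a consensus pseudo-label is formed. Majority voting is one natural aggregation rule assumed here (ties break toward the lowest class index); the simulated client noise is purely illustrative.

```python
import numpy as np

def consensus_labels(client_hard_labels):
    """Aggregate each client's hard labels on a shared public unlabeled set
    into a consensus pseudo-label by majority vote."""
    votes = np.stack(client_hard_labels)          # (n_clients, n_public)
    n_classes = votes.max() + 1
    counts = np.apply_along_axis(
        lambda col: np.bincount(col, minlength=n_classes), 0, votes)
    return counts.argmax(axis=0)                  # (n_public,)

rng = np.random.default_rng(0)
true = rng.integers(0, 3, size=10)
# Five clients, each labeling the public set with 80% accuracy.
clients = [np.where(rng.random(10) < 0.8, true, rng.integers(0, 3, 10))
           for _ in range(5)]
pseudo = consensus_labels(clients)
print((pseudo == true).mean())  # consensus is usually cleaner than any one client
```

Because only hard labels on public data leave each client, the consensus can then train any supervised model locally, including non-differentiable ones like decision trees.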
ICLR.cc/2025/Conference
r5IXBlTCGc
Consistency Checks for Language Model Forecasters
Forecasting is a task that is difficult to evaluate: the ground truth can only be known in the future. Recent work showing LLM forecasters rapidly approaching human-level performance raises the question: how can we benchmark and evaluate these forecasters *instantaneously*? Following the consistency check framework, we measure the performance of forecasters in terms of the consistency of their predictions on different logically-related questions. We propose a new, general consistency metric based on *arbitrage*: for example, if a forecasting AI illogically predicts that the Democratic and Republican parties each have a 60\% probability of winning the 2024 US presidential election, an arbitrageur could trade against the forecaster's predictions and make a profit. We build an automated evaluation system that generates a set of base questions, instantiates consistency checks from these questions, elicits the predictions of the forecaster, and measures the consistency of the predictions. We then build a standard, proper-scoring-rule forecasting benchmark, and show that our (instantaneous) consistency metrics correlate strongly with LLM forecasters' ground truth Brier scores (which are only known in the future). We also release a consistency benchmark that resolves in 2028, providing a long-term evaluation tool for forecasting.
ICLR 2025 Oral
/pdf/389d3f91440e04c180ab71609410dccb834213d4.pdf
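The arbitrage idea from the abstract, as a tiny worked example: if a forecaster prices mutually exclusive, exhaustive outcomes so their probabilities do not sum to one, a bettor can lock in a riskless profit whose size measures the inconsistency. This sketch covers only that one check, not the paper's general metric over logically related questions.

```python
def arbitrage_profit(prices):
    """Guaranteed profit per $1 payout against a forecaster assigning
    `prices` to mutually exclusive, exhaustive outcomes. If prices sum
    above 1, sell every contract (collect sum(prices), pay out 1 no
    matter which outcome occurs); if they sum below 1, buy every contract."""
    total = sum(prices)
    return max(total - 1.0, 1.0 - total)  # > 0 quantifies the incoherence

print(arbitrage_profit([0.6, 0.6]))    # 0.2: the abstract's election example
print(arbitrage_profit([0.55, 0.45]))  # 0.0: coherent forecasts admit no arbitrage
```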
ICLR.cc/2025/Conference
RW37MMrNAi
Class-wise Autoencoders Measure Classification Difficulty And Detect Label Mistakes
We introduce a new framework for analyzing classification datasets based on the ratios of reconstruction errors between autoencoders trained on individual classes. This analysis framework enables efficient characterization of datasets on the sample, class, and entire dataset levels. We define reconstruction error ratios (RERs) that probe classification difficulty and allow its decomposition into (1) finite sample size and (2) Bayes error and decision-boundary complexity. Through systematic study across 19 popular visual datasets, we find that our RER-based dataset difficulty probe strongly correlates with error rate for state-of-the-art (SOTA) classification models. By interpreting sample-level classification difficulty as a label mistakenness score, we further find that RERs achieve SOTA performance on mislabel detection tasks on hard datasets under symmetric and asymmetric label noise.
Rejected_Submission
/pdf/18c38263be4947246ebcbbe782886e4a1217257d.pdf
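A minimal sketch of reconstruction error ratios, assuming per-class linear autoencoders (PCA) as the smallest possible autoencoder; the paper trains richer models and decomposes the resulting scores further.

```python
import numpy as np

def fit_linear_ae(X, k=2):
    """Fit a k-dimensional linear autoencoder (PCA) to one class's samples."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def recon_error(x, ae):
    mu, V = ae
    z = (x - mu) @ V.T                    # encode
    return np.linalg.norm(x - (mu + z @ V))  # decode and measure the error

rng = np.random.default_rng(0)
# Two toy classes with overlapping distributions.
X0 = rng.normal(0.0, 1.0, size=(200, 5))
X1 = rng.normal(0.8, 1.0, size=(200, 5))
aes = [fit_linear_ae(X0), fit_linear_ae(X1)]

def rer(x, label):
    """Reconstruction error ratio: own-class error over the best other-class
    error. Large values flag hard samples or likely label mistakes."""
    errs = [recon_error(x, ae) for ae in aes]
    own = errs[label]
    other = min(e for i, e in enumerate(errs) if i != label)
    return own / other

print(rer(X0[0], 0), rer(X0[0], 1))  # mislabeling X0[0] as class 1 raises its RER
```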
ICLR.cc/2024/Conference
3y2TfP966N
T-Rep: Representation Learning for Time Series using Time-Embeddings
Multivariate time series present challenges to standard machine learning techniques, as they are often unlabeled, high dimensional, noisy, and contain missing data. To address this, we propose T-Rep, a self-supervised method to learn time series representations at a timestep granularity. T-Rep learns vector embeddings of time alongside its feature extractor, to extract temporal features such as trend, periodicity, or distribution shifts from the signal. These time-embeddings are leveraged in pretext tasks, to incorporate smooth and fine-grained temporal dependencies in the representations, as well as reinforce robustness to missing data. We evaluate T-Rep on downstream classification, forecasting, and anomaly detection tasks. It is compared to existing self-supervised algorithms for time series, which it outperforms in all three tasks. We test T-Rep in missing data regimes, where it proves more resilient than its counterparts. Finally, we provide latent space visualisation experiments, highlighting the interpretability of the learned representations.
ICLR 2024 poster
/pdf/26d2dcb3b19b6bbc380c5a483b1a363f0acbdf2b.pdf
ICLR.cc/2018/Conference
HJewuJWCZ
Learning to Teach
Teaching plays a very important role in our society, by spreading human knowledge and educating our next generations. A good teacher will select appropriate teaching materials, impart suitable methodologies, and set up targeted examinations, according to the learning behaviors of the students. In the field of artificial intelligence, however, the role of teaching has not been fully explored, and most attention has been paid to machine \emph{learning}. In this paper, we argue that equal attention, if not more, should be paid to teaching, and furthermore, an optimization framework (instead of heuristics) should be used to obtain good teaching strategies. We call this approach ``learning to teach''. In the approach, two intelligent agents interact with each other: a student model (which corresponds to the learner in traditional machine learning algorithms), and a teacher model (which determines the appropriate data, loss function, and hypothesis space to facilitate the training of the student model). The teacher model leverages the feedback from the student model to optimize its own teaching strategies by means of reinforcement learning, so as to achieve teacher-student co-evolution. To demonstrate the practical value of our proposed approach, we take the training of deep neural networks (DNN) as an example, and show that by using the learning to teach techniques, we are able to use much less training data and fewer iterations to achieve almost the same accuracy for different kinds of DNN models (e.g., multi-layer perceptron, convolutional neural networks and recurrent neural networks) under various machine learning tasks (e.g., image classification and text understanding).
Accept (Poster)
/pdf/26f95314833534dd1753c76b97cfe2a6eca0ba7e.pdf
ICLR.cc/2025/Conference
fEEbTDoecM
RL$^3$: Boosting Meta Reinforcement Learning via RL inside RL$^2$
Meta reinforcement learning (meta-RL) methods such as RL$^2$ have emerged as promising approaches for learning data-efficient RL algorithms tailored to a given task distribution. However, they show poor asymptotic performance and struggle with out-of-distribution tasks because they rely on sequence models, such as recurrent neural networks or transformers, to process experiences rather than summarize them using general-purpose RL components such as value functions. In contrast, traditional RL algorithms are data-inefficient as they do not use domain knowledge, but do converge to an optimal policy in the limit. We propose RL$^3$, a principled hybrid approach that incorporates action-values, learned per task via traditional RL, in the inputs to meta-RL. We show that RL$^3$ earns greater cumulative reward in the long term compared to RL$^2$ while drastically reducing meta-training time and generalizes better to out-of-distribution tasks. Experiments are conducted on both custom and benchmark discrete domains from the meta-RL literature that exhibit a range of short-term, long-term, and complex dependencies.
Rejected_Submission
/pdf/91309ef0089b24b83654ff6e36ee91e7bb88fe2b.pdf
ICLR.cc/2025/Conference
98ASXp6oPg
Self-Explained Keywords Empower Large Language Models for Code Generation
Large language models (LLMs) have achieved impressive performance in code generation. Despite the remarkable success, we observed that LLMs often misunderstand or overlook some problem-specific undertrained keywords during code generation, compromising the accuracy of the generated code. After explicitly explaining these undertrained keywords using well-trained terms in the prompt, LLMs are more likely to generate correct code implementation. Inspired by this observation, we propose a novel technique named SEK (Self-Explained Keywords), which empowers an LLM for better code generation by extracting and explaining the key terms in the problem description with the LLM itself. Comprehensive experiments across three benchmarks, i.e., HumanEval(+), MBPP(+), and APPS, with five representative LLMs, show that SEK can significantly improve LLMs in code generation, yielding substantial and consistent gains. For instance, SEK improves the Pass@1 of DeepSeek-Coder-V2-Instruct from 85.4% to 93.3% on the HumanEval benchmark. Further analysis confirms that SEK enables the LLMs to shift their attention from low-frequency keywords to their corresponding high-frequency counterparts.
Rejected_Submission
/pdf/38e131450a5b1cbbe48a0f53b9ee194263809880.pdf
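A sketch of the two-stage prompting loop the abstract describes, with a hypothetical `llm` callable (prompt string in, completion string out) standing in for any chat or completion API; the exact prompt wording below is assumed, not taken from the paper.

```python
def sek_generate(problem, llm):
    """SEK-style code generation: first have the model extract and explain
    the problem's key terms, then generate code with those explanations
    prepended as context. `llm` is any text-completion callable."""
    # Stage 1: self-explain the problem-specific keywords.
    explain_prompt = (
        "Extract the problem-specific keywords from the task below and "
        "explain each one in plain, common terms.\n\nTask:\n" + problem
    )
    keyword_explanations = llm(explain_prompt)
    # Stage 2: generate code conditioned on the explanations.
    code_prompt = (
        "Keyword explanations:\n" + keyword_explanations +
        "\n\nUsing the explanations above, write a correct solution to:\n"
        + problem
    )
    return llm(code_prompt)
```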
ICLR.cc/2025/Conference
rbdlQE7HY7
Uniform Wrappers: Bridging Concave to Quadratizable Functions in Online Optimization
This paper presents novel contributions to the field of online optimization, particularly focusing on the adaptation of algorithms from concave optimization to more challenging classes of functions. Key contributions include the introduction of uniform wrappers, establishing a vital link between upper-quadratizable functions and algorithmic conversions. Through this framework, the paper demonstrates superior regret guarantees for various classes of up-concave functions under zeroth-order feedback. Furthermore, the paper extends zeroth-order online algorithms to bandit feedback counterparts and offline counterparts, achieving a notable improvement in regret/sample complexity compared to existing approaches.
Rejected_Submission
/pdf/500a9a42706b0c069ad9608009099d21daf49e40.pdf
ICLR.cc/2025/Conference
4D0f16Vwc3
ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing
Sparsely activated Mixture-of-Experts (MoE) models are widely adopted to scale up model capacity without increasing the computation budget. However, vanilla TopK routers are trained in a discontinuous, non-differentiable way, limiting their performance and scalability. To address this issue, we propose ReMoE, a fully differentiable MoE architecture that offers a simple yet effective drop-in replacement for the conventional TopK+Softmax routing, utilizing ReLU as the router instead. We further propose methods to regulate the router's sparsity while balancing the load among experts. ReMoE’s continuous nature enables efficient dynamic allocation of computation across tokens and layers, while also exhibiting domain specialization. Our experiments demonstrate that ReMoE consistently outperforms vanilla TopK-routed MoE across various model sizes, expert counts, and levels of granularity. Furthermore, ReMoE exhibits superior scalability with respect to the number of experts, surpassing traditional MoE architectures. The implementation based on Megatron-LM is available at https://github.com/thu-ml/ReMoE.
ICLR 2025 Poster
/pdf/f992c15598958e7e64fdac82d3d5285fe23b68d5.pdf
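A minimal NumPy sketch of ReLU routing: gates are `relu(router_logits)`, so expert selection is data-dependent, sparse, and fully differentiable without TopK. The L1 penalty shown is one simple way to regulate sparsity; the paper's load balancing and adaptive sparsity control are more involved.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_experts = 16, 8
W_router = rng.normal(scale=0.1, size=(d, n_experts))
experts = [rng.normal(scale=0.1, size=(d, d)) for _ in range(n_experts)]

def remoe_layer(x, l1_coef=0.01):
    """ReLU routing: nonzero gates pick the active experts, and the layer
    output is the gate-weighted sum of their outputs. Unlike TopK, the
    number of active experts varies per token and the gate is continuous."""
    gates = np.maximum(x @ W_router, 0.0)          # (n_experts,)
    active = np.nonzero(gates)[0]                  # only these experts run
    out = sum((gates[e] * (x @ experts[e]) for e in active), np.zeros(d))
    sparsity_loss = l1_coef * gates.sum()          # regulates how many fire
    return out, sparsity_loss, active

x = rng.normal(size=d)
out, reg, active = remoe_layer(x)
print(f"{len(active)}/{n_experts} experts active, reg={reg:.4f}")
```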
ICLR.cc/2024/Conference
Vy5aRVSbNo
Looping LOCI: Developing Object Permanence from Videos
Recent compositional scene representation learning models have become remarkably good at segmenting and tracking distinct objects within visual scenes. Yet, many of these models require that objects are continuously, at least partially, visible. Moreover, they tend to fail on intuitive physics tests, which infants learn to solve over the first months of their life. Our goal is to advance compositional scene representation algorithms with an embedded algorithm that fosters the progressive learning of intuitive physics, akin to infant development. As a fundamental component for such an algorithm, we introduce Loci-Looped, which advances a recently published unsupervised object location, identification, and tracking neural network architecture (Loci, Traub et al., ICLR 2023) with an internal processing loop. The loop is designed to adaptively blend pixel-space information with anticipations yielding information-fused activities as percepts. Moreover, it is designed to learn compositional representations of both individual object dynamics and between-objects interaction dynamics. We show that Loci-Looped learns to track objects through extended periods of object occlusions, indeed simulating their hidden trajectories and anticipating their reappearance, without the need for an explicit history buffer. We even find that Loci-Looped surpasses state-of-the-art models on the ADEPT and CLEVRER datasets, when confronted with object occlusions or temporary sensory data interruptions. This indicates that Loci-Looped is able to learn the physical concepts of object permanence and inertia in a fully unsupervised emergent manner. We believe that even further architectural advancements of the internal loop—also in other compositional scene representation learning models—can be developed in the near future.
Rejected_Submission
/pdf/df7f26b6b18336394c9c697fb5a29b5a627ffc83.pdf
ICLR.cc/2018/Conference
HJGv1Z-AW
Emergence of Linguistic Communication from Referential Games with Symbolic and Pixel Input
The ability of algorithms to evolve or learn (compositional) communication protocols has traditionally been studied in the language evolution literature through the use of emergent communication tasks. Here we scale up this research by using contemporary deep learning methods and by training reinforcement-learning neural network agents on referential communication games. We extend previous work, in which agents were trained in symbolic environments, by developing agents which are able to learn from raw pixel data, a more challenging and realistic input representation. We find that the degree of structure found in the input data affects the nature of the emerged protocols, and thereby corroborate the hypothesis that structured compositional language is most likely to emerge when agents perceive the world as being structured.
Accept (Oral)
/pdf/e3a31e61b0fb2ed042cca612b5d54e24eac59adc.pdf
ICLR.cc/2025/Conference
WfxPVtYRlL
Graph Neural Networks Gone Hogwild
Graph neural networks (GNNs) appear to be powerful tools to learn state representations for agents in distributed, decentralized multi-agent systems, but generate catastrophically incorrect predictions when nodes update asynchronously during inference. This failure under asynchrony effectively excludes these architectures from many potential applications where synchrony is difficult or impossible to enforce, e.g., robotic swarms or sensor networks. In this work we identify ''implicitly-defined'' GNNs as a class of architectures which is provably robust to asynchronous ''hogwild'' inference, adapting convergence guarantees from work in asynchronous and distributed optimization. We then propose a novel implicitly-defined GNN architecture, which we call an energy GNN. We show that this architecture outperforms other GNNs from this class on a variety of synthetic tasks inspired by multi-agent systems.
ICLR 2025 Poster
/pdf/e8bd1d55ad9f030a6e0db669b9e72967f01edacb.pdf
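A small numerical illustration of why implicitly-defined GNNs tolerate "hogwild" asynchrony: when the node update is a contraction, asynchronous single-node updates and synchronous sweeps converge to the same fixed point. The tanh/mean-aggregation update and small weight scale below are assumptions chosen to make the map contractive, not the paper's energy-GNN architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 12, 4
A = (rng.random((n, n)) < 0.3).astype(float)
np.fill_diagonal(A, 0)
A /= np.maximum(A.sum(1, keepdims=True), 1)   # row-normalized adjacency
W = rng.normal(scale=0.3, size=(d, d))        # small scale keeps the map contractive
U = rng.normal(size=(n, d))                   # node input features

def node_update(X, i):
    # Implicit layer: x_i = tanh(neighbor average @ W + u_i)
    return np.tanh(A[i] @ X @ W + U[i])

# "Hogwild" inference: nodes update one at a time in random order, each
# reading whatever (possibly stale) states its neighbors currently hold.
X = np.zeros((n, d))
for _ in range(2_000):
    i = rng.integers(n)
    X[i] = node_update(X, i)

X_sync = np.zeros((n, d))                     # synchronous reference
for _ in range(100):
    X_sync = np.array([node_update(X_sync, i) for i in range(n)])

print(np.abs(X - X_sync).max())  # tiny: both schedules reach the same fixed point
```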
ICLR.cc/2025/Conference
PD8JVDg8mB
Annotation Bootstrapping: Reinforcing Visual Pre-Training using Unlabelled Images
A common approach to learning from unlabeled images is to train models to satisfy invariances on these images, such as consistency under augmentations or crops. Despite successes on ImageNet, these approaches struggle to learn from larger uncurated datasets like web crawls or video, where such inductive biases only weakly hold. How can we more effectively learn from broader datasets? Instead of training models to be invariant across views, we study an alternative approach encouraging model representations to be \textit{predictive} of important semantics of adjacent views of an image. We concurrently train a model to predict semantic annotations from images (generated either self-supervised or drawn from auxiliary datasets); and bootstrap the model's semantics by predicting, given a cropped view of an image and the coordinates for a nearby crop, the model's annotation distribution for the neighboring view. A core strength of this approach is its ability to extract information universally from both unlabelled and labelled image data, incorporating captions, bounding boxes, and other annotations when they are present. Our experiments show that annotation propagation improves pre-training on unlabelled datasets in the wild, including video datasets like EpicKitchens, scene datasets like COCO, and uncurated web-scale image datasets like CC12M.
Rejected_Submission
/pdf/44b08b575db13b01e443403959a20ad4737a4d03.pdf
ICLR.cc/2025/Conference
m2kJuN1bKt
Reformer: A Deep Learning Model for Runtime Selection of Convolution Kernels
As neural networks grow larger, optimizing GPU kernel selection becomes increasingly essential to minimizing the time, cost, and energy demands of model training and inference. Current methods rely on hand-written rules-based heuristics, which often yield suboptimal performance, are labor-intensive to develop, and are difficult to adapt across hardware architectures and firmware releases. In this paper, we frame kernel selection as a sequence classification problem solved on the CPU, thereby leaving GPU resources free for user training and inference tasks. Traditional transformers are less effective in this context because CPU deployment limits the advantages of parallelism in attention mechanisms. In this regard, we propose the $\Gamma$-block, which performs only three matmul operations compared to the six required by a transformer block, while maintaining the same depth in terms of learnable layers. Our experiments on the IMDB and Reuters datasets demonstrate that a small model based on the $\Gamma$-block delivers comparable sequence classification accuracy to a similar model based on transformer blocks, while also providing faster inference times on the CPU. By stacking multiple $\Gamma$-blocks, we develop a lightweight model for kernel selection, named Reformer. To train the model, we propose a novel approach that assigns optimality probabilities to kernels based on their runtimes, offering a more robust alternative to one-hot probabilities. We demonstrate the effectiveness of Reformer by integrating it into MIOpen for convolution kernel selection, achieving an average speed-up of approximately 3x in convolution operations on the AMD Instinct™ MI100 GPU.
Rejected_Submission
/pdf/e2b884017c7d0bbc5527e76e84944f09c62b265f.pdf
ICLR.cc/2024/Conference
x36mCqVHnk
Improving Sample Efficiency of Model-Free Algorithms for Zero-Sum Markov Games
The problem of two-player zero-sum Markov games has recently attracted increasing interest in theoretical studies of multi-agent reinforcement learning (RL). In particular, for finite-horizon episodic Markov decision processes (MDPs), it has been shown that model-based algorithms can find an $\epsilon$-optimal Nash Equilibrium (NE) with the sample complexity of $O(H^3SAB/\epsilon^2)$, which is optimal in its dependence on the horizon $H$ and the number of states $S$ (where $A$ and $B$ denote the number of actions of the two players, respectively). However, none of the existing model-free algorithms can achieve such an optimality. In this work, we propose a model-free stage-based Q-learning algorithm and show that it achieves the same sample complexity as the best model-based algorithm, and hence for the first time demonstrate that model-free algorithms can enjoy the same optimality in the $H$ dependence as model-based algorithms. The main improvement of the dependency on $H$ arises by leveraging the popular variance reduction technique based on the reference-advantage decomposition previously used only for single-agent RL. However, such a technique relies on a critical monotonicity property of the value function, which does not hold in Markov games due to the update of the policy via the coarse correlated equilibrium (CCE) oracle. Thus, to extend such a technique to Markov games, our algorithm features a key novel design of updating the reference value functions as the pair of optimistic and pessimistic value functions whose value difference is the smallest in history in order to achieve the desired improvement in the sample efficiency.
Rejected_Submission
/pdf/c69f3c481c08fbb9ccc2dd8824e6f159acfeb0d0.pdf
ICLR.cc/2024/Conference
qDMyhAxok3
MorphGrower: A Synchronized Layer-by-layer Growing Approach for Plausible and Diverse Neuronal Morphology Generation
Neuronal morphology is essential for studying brain functioning and understanding neurodegenerative disorders, e.g., Alzheimer's disease. As acquiring real-world morphology data is expensive, computational approaches, especially learning-based ones such as MorphVAE, have recently been studied for morphology generation; these typically randomly augment a given authentic morphology to achieve both plausibility and diversity. Under such a setting, this paper proposes \textbf{MorphGrower}, which aims to generate more plausible morphology samples by mimicking the natural growth mechanism instead of the one-shot treatment used in MorphVAE. In particular, MorphGrower generates morphologies layer by layer synchronously, choosing a pair of sibling branches as the basic generation block; the generation of each layer is conditioned on the morphological structure of previous layers and carried out via a conditional variational autoencoder with a spherical latent space. Extensive experimental results on four real-world datasets demonstrate that MorphGrower outperforms MorphVAE by a notable margin. Our code will be publicly available to facilitate future research.
Rejected_Submission
/pdf/7e20c9cad8bc826ee0c9089a862eb263e776148b.pdf
ICLR.cc/2025/Conference
rpbzBXdo4x
Mind Your Step (by Step): Chain-of-Thought can Reduce Performance on Tasks where Thinking Makes Humans Worse
Chain-of-thought (CoT) prompting has become a widely used strategy for working with large language and multimodal models. While CoT has been shown to improve performance across many tasks, determining the settings in which it is effective remains an ongoing effort. In particular, it is still an open question in what settings CoT systematically reduces model performance. In this paper, we seek to identify the characteristics of tasks where CoT reduces performance by drawing inspiration from cognitive psychology, looking at cases where (i) verbal thinking or deliberation hurts performance in humans, and (ii) the constraints governing human performance generalize to language models. Three such cases are implicit statistical learning, visual recognition, and classifying with patterns containing exceptions. In extensive experiments across all three settings, we find that a diverse collection of state-of-the-art models exhibit significant drop-offs in performance (e.g., up to 36.3\% absolute accuracy for GPT-o1 compared to GPT-4o) when using CoT compared to zero-shot counterparts. We also identify three tasks that satisfy condition (i) but not (ii), and find that while verbal thinking reduces human performance in these tasks, CoT retains or increases model performance. Overall, our results show that even though there is not an exact parallel between the cognitive processes of models and those of humans, considering cases where thinking has negative consequences for human performance can help us identify settings where it has negative consequences for models. By connecting the literature on human deliberation with evaluation of CoT, we offer a new tool that can be used in understanding the impact of prompt choices and inference-time reasoning.
Rejected_Submission
/pdf/8eb96113a87f6152613fc37437c2b7f8aa9b4436.pdf
ICLR.cc/2024/Conference
Z2dVrgLpsF
On partial prototype collapse in clustering-based self-supervised learning
A prominent self-supervised learning paradigm is to model the representations as clusters, or more generally as a mixture model. Learning to map the data samples to compact representations and fitting the mixture model simultaneously leads to the representation collapse problem. Regularizing the distribution of data points over the clusters is the prevalent strategy to avoid this issue. While this is sufficient to prevent full representation collapse, we show that a partial prototype collapse problem still exists in these methods, which leads to significant redundancies in the prototypes. Such prototype redundancies serve as shortcuts for the method to achieve a marginal latent class distribution that matches the prescribed prior distribution. We show that by encouraging the model to use diverse prototypes, the partial prototype collapse can be mitigated. Effective utilization of the prototypes enables the methods to learn more fine-grained clusters, encouraging more informative representations. We demonstrate that this is especially beneficial when pre-training on a long-tailed fine-grained dataset.
Rejected_Submission
/pdf/b7b3ed841088cdeb6ee99c74d6f918ccf6279af5.pdf
ICLR.cc/2025/Conference
aQSbfKYXvo
The Effect of Personalization in FedProx: A Fine-grained Analysis on Statistical Accuracy and Communication Efficiency
FedProx is a simple yet effective federated learning method that enables model personalization via regularization. Despite remarkable success in practice, a rigorous analysis of how such regularization provably improves the statistical accuracy of each client's local model has not yet been fully established. Setting the regularization strength heuristically presents a risk, as an inappropriate choice may even degrade accuracy. This work fills this gap by analyzing the effect of regularization on statistical accuracy, thereby providing a theoretical guideline for setting the regularization strength to achieve personalization. We prove that by adaptively choosing the regularization strength under different statistical heterogeneity, FedProx can consistently outperform pure local training and achieve a minimax-optimal statistical rate. In addition, to shed light on resource allocation, we design an algorithm, provably showing that stronger personalization reduces communication complexity without increasing computational overhead. Finally, our theory is validated on both synthetic and real-world datasets and its generalizability is verified in a non-convex setting.
Rejected_Submission
/pdf/6245892a1331427412467befc42cdfd409142a08.pdf
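For concreteness, a minimal sketch of the FedProx local objective on a least-squares client: the proximal term `(mu / 2) * ||w - w_global||^2` is the personalization knob the abstract analyzes, with larger `mu` pulling the local solution toward the global model. The data, learning rate, and step count are illustrative assumptions.

```python
import numpy as np

def fedprox_local_update(w_global, X, y, mu, lr=0.1, steps=50):
    """Local FedProx step for least squares: minimize
    ||X w - y||^2 / n + (mu / 2) ||w - w_global||^2.
    mu = 0 recovers pure local training; mu -> inf recovers the global model."""
    w = w_global.copy()
    n = len(y)
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / n + mu * (w - w_global)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
d = 5
w_global = rng.normal(size=d)
w_local_true = w_global + 0.3 * rng.normal(size=d)   # heterogeneous client
X = rng.normal(size=(40, d))
y = X @ w_local_true + 0.1 * rng.normal(size=40)

for mu in (0.0, 0.1, 10.0):
    w = fedprox_local_update(w_global, X, y, mu)
    print(mu, np.linalg.norm(w - w_global))  # stronger mu => closer to global
```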
ICLR.cc/2024/Conference
Q8ibi56aM6
SINGLE-IMAGE COHERENT RECONSTRUCTION OF OBJECTS AND HUMANS
Existing methods for reconstruction of objects and humans from a monocular image suffer from severe mesh collisions and performance limitations for interacting occluding objects. In this paper, we introduce a method that deduces spatial configurations and achieves globally consistent 3D reconstruction for interacting objects and people captured within a single image. Our contributions encompass: 1) an optimization framework, featuring a novel collision loss, tailored to handle complex human-object and human-human interactions, ensuring spatially coherent scene reconstruction; and 2) a novel technique for robustly estimating 6 degrees of freedom (DOF) poses, particularly for heavily occluded objects, exploiting image inpainting. Notably, our proposed method operates effectively on images from real-world scenarios, without necessitating scene or object-level 3D supervision. Through both qualitative and quantitative assessments, we demonstrate the superior quality of our reconstructions, showcasing a significant reduction in collisions in scenes with multiple interacting humans and objects.
Rejected_Submission
/pdf/0b6e33152cac16e9ffffe1531b658b409e8dfb32.pdf
ICLR.cc/2024/Conference
1VeQ6VBbev
Beyond Stationarity: Convergence Analysis of Stochastic Softmax Policy Gradient Methods
Markov Decision Processes (MDPs) are a formal framework for modeling and solving sequential decision-making problems. With finite time horizons, such problems arise, for instance, in optimal stopping, specific supply-chain problems, and even the training of large language models. In contrast to infinite-horizon MDPs, optimal policies are not stationary; a policy must be learned for every single epoch. In practice, all parameters are often trained simultaneously, ignoring the inherent structure suggested by dynamic programming. This paper introduces a combination of dynamic programming and policy gradient called dynamic policy gradient, where the parameters are trained backwards in time. For the tabular softmax parametrisation, we carry out the convergence analysis for simultaneous and dynamic policy gradient towards global optima, both in the exact and sampled gradient settings without regularisation. It turns out that dynamic policy gradient training much better exploits the structure of finite-time problems, which is reflected in improved convergence bounds.
ICLR 2024 poster
/pdf/9a94ff52e2cc5d06bcf5c029ea21897b3ffc4690.pdf
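The backwards-in-time training scheme described in the dynamic policy gradient abstract above can be illustrated on a toy tabular finite-horizon MDP with exact gradients. Everything below (the random MDP, horizon, learning rate, and inner step count) is an illustrative assumption, not the paper's experimental setup.

```python
# Toy sketch of dynamic policy gradient: solve epoch H-1 first, then move backwards.
import numpy as np

S, A, H, lr = 4, 3, 5, 0.5
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(S), size=(S, A))   # transition kernel, shape (S, A, S)
R = rng.random((S, A))                       # rewards
theta = np.zeros((H, S, A))                  # one softmax table per epoch

def softmax(z):
    e = np.exp(z - z.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

V_next = np.zeros(S)                         # value of the already-solved future
for h in reversed(range(H)):                 # train backwards in time
    for _ in range(200):                     # exact policy gradient steps
        pi = softmax(theta[h])
        Q = R + P @ V_next                   # (S, A)
        adv = Q - (pi * Q).sum(-1, keepdims=True)
        theta[h] += lr * pi * adv            # tabular softmax PG update
    V_next = (softmax(theta[h]) * (R + P @ V_next)).sum(-1)
print(V_next.mean())                         # value of the learned policy at epoch 0
```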
ICLR.cc/2024/Conference
mOTiVzTgF2
ResiDual: Transformer with Dual Residual Connections
Transformer networks have become the preferred architecture for many tasks due to their state-of-the-art performance. However, the optimal way to implement residual connections in Transformers, which are essential for effective training, is still debated. Two widely used variants are the Post-Layer-Normalization (Post-LN) and Pre-Layer-Normalization (Pre-LN) Transformers, which apply layer normalization after each residual block's output or before each residual block's input, respectively. While both variants enjoy their advantages, they also suffer from severe limitations: Post-LN causes a gradient vanishing issue that hinders training deep Transformers, and Pre-LN causes a representation collapse issue that limits model capacity. In this paper, we propose ResiDual, a novel Transformer architecture with Pre-Post-LN (PPLN), which fuses the connections of Post-LN and Pre-LN and inherits their advantages while avoiding their limitations. We conduct both theoretical analyses and empirical experiments to verify the effectiveness of ResiDual. Theoretically, we prove that ResiDual has a lower bound on its gradient that avoids the vanishing issue, thanks to the residual connection from Pre-LN. Moreover, ResiDual also has diverse model representations that avoid the collapse issue, thanks to the residual connection from Post-LN. Empirically, ResiDual outperforms both Post-LN and Pre-LN on several machine translation benchmarks across different network depths and data sizes.
Rejected_Submission
/pdf/385c54609d4a930e46cd8f417c43d2022c9f0073.pdf
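A minimal sketch of the dual-stream idea behind ResiDual's Pre-Post-LN block, as described in the abstract above: one stream is normalized after each sublayer (Post-LN-like) while a second accumulates raw sublayer outputs (Pre-LN-like), and the two are fused at the output. The fusion rule and shapes here are assumptions for illustration, not the paper's verbatim architecture.

```python
# Sketch of a dual-residual (PPLN-style) block; sizes are illustrative.
import torch
import torch.nn as nn

class PPLNBlock(nn.Module):
    def __init__(self, d_model, sublayer):
        super().__init__()
        self.sublayer = sublayer              # e.g. attention or feed-forward
        self.ln = nn.LayerNorm(d_model)

    def forward(self, x_post, x_pre):
        out = self.sublayer(x_post)
        x_post = self.ln(x_post + out)        # Post-LN stream: normalize after residual
        x_pre = x_pre + out                   # Pre-LN-like stream: raw accumulation
        return x_post, x_pre

blocks = [PPLNBlock(64, nn.Linear(64, 64)) for _ in range(4)]
x_post = x_pre = torch.randn(2, 10, 64)
for blk in blocks:
    x_post, x_pre = blk(x_post, x_pre)
y = x_post + nn.LayerNorm(64)(x_pre)          # fuse the two streams at the output
```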
ICLR.cc/2025/Conference
SaqU2ca367
Explaining Hypergraph Neural Networks: From Local Explanations to Global Concepts
Hypergraph neural networks are a class of powerful models that leverage the message passing paradigm to learn over hypergraphs, a generalization of graphs well-suited to describing relational data with higher-order interactions. However, such models are not naturally interpretable, and their explainability has received very limited attention. We introduce SHypX, the first model-agnostic post-hoc explainer for hypergraph neural networks that provides both local and global explanations. At the instance level, it performs input attribution by discretely sampling explanation subhypergraphs optimized to be faithful and concise. At the model level, it produces global explanation subhypergraphs using unsupervised concept extraction. Extensive experiments across four real-world and four novel, synthetic hypergraph datasets demonstrate that our method finds high-quality explanations which can target a user-specified balance between faithfulness and concision, improving over baselines by 25 percentage points in fidelity on average.
Rejected_Submission
/pdf/cf01df2e88f46655b385b8ffdb75932beb278628.pdf
ICLR.cc/2025/Conference
bIoWuzFm6r
Machine Unlearning for Streaming Forgetting
Machine unlearning aims to remove knowledge derived from the specific training data that are requested to be forgotten in a well-trained model while preserving the knowledge learned from the remaining training data. Currently, machine unlearning methods typically handle all forgetting data in a single batch, removing the corresponding knowledge all at once upon request. However, in practical scenarios, requests for data removal often arise in a streaming manner rather than in a single batch, leading to reduced efficiency and effectiveness in existing methods. Such challenges of streaming forgetting have not been the focus of much research. In this paper, to address the challenges of performance maintenance, efficiency, and data access brought about by streaming unlearning requests, we introduce an online unlearning paradigm, formalizing the unlearning as a distribution shift problem. We then estimate the altered distribution and propose a novel online unlearning algorithm to achieve efficient streaming forgetting without requiring access to the original training data. Theoretical analyses confirm an $O(V_T\sqrt{T} + \Delta_T)$ error bound on the streaming unlearning regret, where $V_T$ represents the cumulative total variation in the optimal solution over $T$ learning rounds and $\Delta_T$ represents the cumulative total divergence between remaining and forgetting data distributions. This theoretical guarantee is achieved under mild conditions without the strong restriction of convex loss function. Experiments across various models and datasets validate the performance of our proposed method.
Rejected_Submission
/pdf/4da238026135fa77d5bbae7679e7bae6860b665c.pdf
ICLR.cc/2024/Conference
B37UmlxsaP
Revealing The Intrinsic Ability of Generative Text Summarizers for Outlier Paragraph Detection
Generative text summarizers are good at content encapsulation but falter when outlier paragraphs disrupt the primary narrative. We categorize these outliers into cross-document outliers that are thematically inconsistent but within the same domain, and cross-domain outliers, originating from distinct domains. Traditional methods lean on word embeddings and specialized classifiers, requiring extensive supervised fine-tuning. Confidence-based strategies, despite bypassing fine-tuning, are ill-suited due to the non-classification essence of summarization. Leveraging the encoder-decoder cross-attention framework, we introduce an approach emphasizing the unique characteristics of infrequent words in detection. We present CODE, a novel outlier detector using a closed-form expression rooted in cross-attention scores. Our experimental results validate the superiority of CODE under different datasets and architectures, e.g., achieving a 5.80\% FPR at 95\% TPR vs. 25.63\% by supervised baselines on T5-Large and Delve domain. We further underscore the significance of cross-attention, word frequency normalization and judicious integration of cross-document outliers during pretraining.
Rejected_Submission
/pdf/3714954efe3d0d8c069bc5f166c4fb9cc4a323cb.pdf
ICLR.cc/2024/Conference
d3xKPQVjSc
Bounds on Representation-Induced Confounding Bias for Treatment Effect Estimation
State-of-the-art methods for conditional average treatment effect (CATE) estimation make widespread use of representation learning. Here, the idea is to reduce the variance of the low-sample CATE estimation by a (potentially constrained) low-dimensional representation. However, low-dimensional representations can lose information about the observed confounders and thus lead to bias, which typically undermines the validity of representation learning for CATE estimation. In this paper, we propose a new, representation-agnostic refutation framework for estimating bounds on the representation-induced confounding bias that comes from dimensionality reduction (or other constraints on the representations) in CATE estimation. First, we establish theoretically under which conditions CATE is non-identifiable given low-dimensional (constrained) representations. Second, as our remedy, we propose a neural refutation framework which performs partial identification of CATE or, equivalently, aims at estimating lower and upper bounds of the representation-induced confounding bias. We demonstrate the effectiveness of our bounds in a series of experiments. In sum, our refutation framework is of direct relevance in practice, where the validity of CATE estimation is of importance.
ICLR 2024 spotlight
/pdf/d06dd3ea5318958c6924d08f905235b1512fde33.pdf
ICLR.cc/2024/Conference
DEJIDCmWOz
On the Reliability of Watermarks for Large Language Models
As LLMs become commonplace, machine-generated text has the potential to flood the internet with spam, social media bots, and valueless content. _Watermarking_ is a simple and effective strategy for mitigating such harms by enabling the detection and documentation of LLM-generated text. Yet a crucial question remains: How reliable is watermarking in realistic settings in the wild? There, watermarked text may be modified to suit a user's needs, or entirely rewritten to avoid detection. We study the robustness of watermarked text after it is re-written by humans, paraphrased by a non-watermarked LLM, or mixed into a longer hand-written document. We find that watermarks remain detectable even after human and machine paraphrasing. While these attacks dilute the strength of the watermark, paraphrases are statistically likely to leak n-grams or even longer fragments of the original text, resulting in high-confidence detections when enough tokens are observed. For example, after strong human paraphrasing the watermark is detectable after observing 800 tokens on average, when setting a $1\mathrm{e}{-5}$ false positive rate. We also consider a range of new detection schemes that are sensitive to short spans of watermarked text embedded inside a large document, and we compare the robustness of watermarking to other kinds of detectors.
ICLR 2024 poster
/pdf/8346f872a9e6321321db39a32310048c025726d6.pdf
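The detection setting in the watermarking abstract above follows the usual z-score test for soft watermarks: count how many tokens fall in a pseudorandom "green list" seeded by the preceding token and compare against the no-watermark null. The hash-based green list and the fraction `gamma` below are simplified stand-ins, not the paper's exact scheme.

```python
# Simplified green-list watermark detection via a binomial z-score.
import hashlib

def is_green(prev_tok: int, tok: int, gamma: float = 0.25, vocab: int = 50_000) -> bool:
    # Pseudorandom green list keyed on the previous token (simplified stand-in).
    h = int(hashlib.sha256(f"{prev_tok}:{tok}".encode()).hexdigest(), 16)
    return (h % vocab) < gamma * vocab

def watermark_z(tokens, gamma: float = 0.25) -> float:
    hits = sum(is_green(a, b, gamma) for a, b in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    # z-score against the null hypothesis of no watermark (hits ~ Binomial(n, gamma)).
    return (hits - gamma * n) / (gamma * (1 - gamma) * n) ** 0.5

# Unwatermarked text gives z near 0; watermarked text drifts upward as more
# tokens are observed, which is why detection improves with document length.
print(watermark_z(list(range(1000))))
```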
ICLR.cc/2025/Conference
hgjpO0H0id
On the interplay between learning and memory in deep state space models
Deep state-space models (SSMs) have emerged as a powerful deep learning architecture for sequence modeling, but the theory of how these models learn long-term dependencies lags the practice. To explain how parameterization and the number of layers affect a model's expressiveness, we study the properties of deep $\textit{linear}$ SSMs, i.e., linearly coupled stacks of linear time-invariant systems. We show that such systems share timescales across layers, and we provide novel analysis on the role of linear feedforward connections in regularizing these temporal dependencies. In practice, SSMs can struggle with an explosion of the hidden state variance when learning long-term dependencies. We expand our theoretical understanding of this problem for deep SSMs and provide new intuitions on how this problem may be resolved by increasing the number of layers. Finally, we confirm our theoretical results in a teacher-student framework and show the effects of model parameterization on learning convergence.
Rejected_Submission
/pdf/30bec6d5f6ada03d5951b57893c0b74592bfb6ea.pdf
ICLR.cc/2025/Conference
XFCKEgGhEK
Enhancing Cross-Lingual and Cross-Domain Adaptability in Large Language Models for Software Engineering
This paper presents a groundbreaking mathematical framework for unsupervised domain adaptation (UDA) in the context of cross-lingual and cross-domain code modeling. We introduce the Enhanced Dynamic Code Modeling (UDA-EDCM) system, which leverages advanced concepts from measure theory, differential geometry, and information geometry to address the challenges posed by the diversity of natural and programming languages. At the core of UDA-EDCM is a novel measure-theoretic formulation of domain adaptation, utilizing optimal transport theory to minimize the discrepancy between source and target domains. We develop a Riemannian manifold approach to feature space alignment, introducing a Geodesic Flow Kernel that captures the intrinsic geometry of the code representation space. The UDA-EDCM operator is analyzed through the lens of functional analysis, revealing its spectral properties and their implications for generalization. Our information-theoretic bound on domain adaptation provides insights into the fundamental limits of knowledge transfer in code modeling. We present a unified theorem that synthesizes these diverse mathematical perspectives, offering a comprehensive characterization of UDA-EDCM's performance in terms of Wasserstein distance, empirical Rademacher complexity, and Fisher information. This theoretical foundation is complemented by an innovative optimization framework based on the Fisher Information Metric, ensuring efficient convergence in the probabilistic manifold of model parameters. Extensive experiments demonstrate that UDA-EDCM significantly outperforms existing approaches in zero-shot and few-shot learning scenarios across a wide range of programming languages and coding tasks. Our work not only advances the state of the art in domain adaptation for code intelligence but also establishes a rigorous mathematical basis for future research in adaptive AI systems for software engineering.
Rejected_Submission
/pdf/c082f34b87eedc88c771db8fcf795a9b8c1a01e5.pdf
ICLR.cc/2024/Conference
q0IZQMojwv
Objectives Are All You Need: Solving Deceptive Problems Without Explicit Diversity Maintenance
Navigating deceptive domains has often been a challenge in machine learning due to search algorithms getting stuck at sub-optimal local optima. Many algorithms have been proposed to navigate these domains by explicitly maintaining diversity or, equivalently, promoting exploration, such as Novelty Search or other so-called Quality Diversity algorithms. In this paper, we present an approach with promise to solve deceptive domains without explicit diversity maintenance by optimizing a potentially large set of defined objectives. These objectives can be extracted directly from the environment by sub-aggregating the raw performance of individuals in a variety of ways. We use lexicase selection to optimize for these objectives, as it has been shown to implicitly maintain population diversity. We compare this technique with a varying number of objectives to a commonly used quality diversity algorithm, MAP-Elites, on a set of discrete optimization as well as reinforcement learning domains with varying degrees of deception. We find that decomposing objectives into many objectives and optimizing them outperforms MAP-Elites on the deceptive domains that we explore. Furthermore, we find that this technique results in competitive performance on the diversity-focused metrics of QD-Score and Coverage, without explicitly optimizing for them. Our ablation study shows that this technique is robust to different sub-aggregation techniques. However, when it comes to non-deceptive, or "illumination", domains, quality diversity techniques generally outperform our objective-based framework with respect to exploration (but not exploitation), hinting at potential directions for future work.
Rejected_Submission
/pdf/932fa6d622f650af8a97c4ccfe00341f69996a03.pdf
ICLR.cc/2024/Conference
9F0xInGNBF
VIDEOPROMPTER: AN ENSEMBLE OF FOUNDATIONAL MODELS FOR ZERO-SHOT VIDEO UNDERSTANDING
Vision-language models (VLMs) classify the query video by calculating a similarity score between the visual features and text-based class label representations. Recently, large language models (LLMs) have been used to enrich the text-based class labels by enhancing the descriptiveness of the class names. However, these improvements are restricted to the text-based classifier only, and the query visual features are not considered. In this paper, we propose a framework that combines pre-trained discriminative VLMs with pre-trained generative video-to-text and text-to-text models. We introduce two key modifications to the standard zero-shot setting. First, we propose language-guided visual feature enhancement and employ a video-to-text model to convert the query video to its descriptive form. The resulting descriptions contain vital visual cues of the query video, such as what objects are present and their spatio-temporal interactions. These descriptive cues provide additional semantic knowledge to VLMs to enhance their zero-shot performance. Second, we propose video-specific prompts to LLMs to generate more meaningful descriptions to enrich class label representations. Specifically, we introduce prompt techniques to create a Tree Hierarchy of Categories for class names, offering a higher-level action context for additional visual cues. We demonstrate the effectiveness of our approach in video understanding across three different zero-shot settings: 1) video action recognition, 2) video-to-text and text-to-video retrieval, and 3) time-sensitive video tasks. Consistent improvements across multiple benchmarks and with various VLMs demonstrate the effectiveness of our proposed framework. Our code will be made publicly available.
Rejected_Submission
/pdf/29043d747f21b29024fcff3dab8bee0aa87078b8.pdf
ICLR.cc/2025/Conference
Antib6Uovh
A Theoretical Analysis of Self-Supervised Learning for Vision Transformers
Self-supervised learning has become a cornerstone in computer vision, primarily divided into reconstruction-based methods like masked autoencoders (MAE) and discriminative methods such as contrastive learning (CL). Recent empirical observations reveal that MAE and CL capture different types of representations: CL tends to focus on global patterns, while MAE adeptly captures **both global and subtle local** information simultaneously. Despite a flurry of recent empirical investigations to shed light on this difference, theoretical understanding remains limited, especially on the dominant architecture **vision transformers** (ViTs). In this paper, to provide rigorous insights, we model the visual data distribution by considering two types of spatial features: dominant global features and comparatively minuscule local features, and study the impact of imbalance among these features. We analyze the training dynamics of one-layer softmax-based ViTs on both MAE and CL objectives using gradient descent. Our analysis shows that as the degree of feature imbalance varies, ViTs trained with the MAE objective effectively learn both global and local features to achieve near-optimal reconstruction, while the CL-trained ViTs favor predominantly global features, even under mild imbalance. These results provide a theoretical explanation for distinct behaviors of MAE and CL observed in empirical studies.
ICLR 2025 Poster
/pdf/56a9e389cfec1bfc52ddb24419786b8630bb8fad.pdf
ICLR.cc/2025/Conference
cuFnNExmdq
UniTST: Effectively Modeling Inter-Series and Intra-Series Dependencies for Multivariate Time Series Forecasting
Transformer-based models have emerged as powerful tools for multivariate time series forecasting (MTSF). However, existing Transformer models often fall short in capturing the intricate dependencies across both the variate and temporal dimensions of MTS data. Some recent models have been proposed to separately capture variate and temporal dependencies through either two sequential or two parallel attention mechanisms. However, these methods cannot directly and explicitly learn the intricate inter-series and intra-series dependencies. In this work, we first demonstrate that these dependencies are very important, as they usually exist in real-world data. To directly model these dependencies, we propose a Transformer-based model, UniTST, containing a unified attention mechanism on the flattened patch tokens. Additionally, we add a dispatcher module which reduces the complexity and makes the model feasible for a potentially large number of variates. Although our proposed model employs a simple architecture, it offers compelling performance, as shown in our extensive experiments on several datasets for time series forecasting.
Rejected_Submission
/pdf/0867bfa652445f232d2b4465a5b4317702647583.pdf
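The unified attention idea in the UniTST abstract above can be sketched by flattening patch tokens from all variates into a single sequence so that one attention layer sees inter-series and intra-series dependencies jointly. Patch length, dimensions, and head count below are assumptions; the paper's dispatcher module for reducing complexity is omitted.

```python
# Sketch: unified attention over flattened (variate x time-patch) tokens.
import torch
import torch.nn as nn

B, V, T, P, d = 8, 7, 96, 16, 64                 # batch, variates, time, patch len, dim
x = torch.randn(B, V, T)
patches = x.unfold(dimension=2, size=P, step=P)  # (B, V, T//P, P) non-overlapping patches
tokens = nn.Linear(P, d)(patches)                # embed each patch: (B, V, N, d)
tokens = tokens.flatten(1, 2)                    # (B, V*N, d): one flat token sequence
attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
out, _ = attn(tokens, tokens, tokens)            # joint inter/intra-series attention
print(out.shape)                                 # torch.Size([8, 42, 64])
```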
ICLR.cc/2025/Conference
fSbPwHjdDG
Llamas (mostly) think in English: On Causal Interventions in the Latent Language of Transformers
Previous research on the Llama-2 family of Large Language Models (LLMs) suggested a correlation indicating the use of English as an intermediary language within these models for tasks in non-English languages. We improve on this by demonstrating a causal relationship. By intervening on the intermediate layers during a forward pass, we show that projecting out the activations onto a subspace corresponding to the correct prediction in English impairs the model's ability to make correct predictions on non-English translation tasks. Projecting onto an unrelated English subspace, or a related subspace in a non-English language, has little effect, demonstrating that this family of models stores concepts in the residual stream that are highly similar to the corresponding concepts in English.
Rejected_Submission
/pdf/da7599858731fa10765257c5f0ad9fbe8a5cad57.pdf
ICLR.cc/2024/Conference
H3IUunLy8s
Increasing Model Capacity for Free: A Simple Strategy for Parameter Efficient Fine-tuning
Fine-tuning large pre-trained foundation models, such as the 175B GPT-3, has become the prevailing approach for downstream tasks. While parameter-efficient fine-tuning methods have been proposed and proven effective without retraining all model parameters, their performance is limited by the capacity of incremental modules, especially under constrained parameter budgets. To overcome this challenge, we propose CAPABOOST, a simple yet effective strategy that enhances model capacity by leveraging low-rank updates through parallel weight modules in target layers. By applying static random masks to the shared weight matrix, CAPABOOST constructs a diverse set of weight matrices, effectively increasing the rank of incremental weights without adding parameters. Notably, our approach can be seamlessly integrated into various existing parameter-efficient fine-tuning methods. We extensively validate the efficacy of CAPABOOST through experiments on diverse downstream tasks, including natural language understanding, question answering, and image classification. Our results demonstrate significant improvements over baselines, without incurring additional computation or storage costs. We will make our code and benchmark publicly available.
ICLR 2024 poster
/pdf/984e5e732b8d392e93cae7820d9c7b24c148a89e.pdf
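A minimal sketch of the CAPABOOST construction described above: parallel branches share one trainable low-rank pair (A, B) but apply distinct static random masks, so the summed update can exceed the rank of B @ A without adding parameters. The rank, branch count, and mask density are illustrative assumptions.

```python
# Sketch of masked parallel low-rank branches on top of a frozen linear layer.
import torch
import torch.nn as nn

class CapaBoostLinear(nn.Module):
    def __init__(self, base: nn.Linear, rank=8, n_branches=2, keep_prob=0.5):
        super().__init__()
        self.base = base                          # frozen pre-trained layer
        out_f, in_f = base.weight.shape
        self.A = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_f, rank))
        # Static masks: sampled once at init, never trained; weights are shared.
        self.register_buffer(
            "masks",
            torch.bernoulli(torch.full((n_branches, out_f, in_f), keep_prob)))

    def forward(self, x):
        # Sum of differently-masked copies of the same low-rank update B @ A.
        delta = (self.masks * (self.B @ self.A)).sum(dim=0)
        return self.base(x) + x @ delta.T

layer = CapaBoostLinear(nn.Linear(128, 128))
print(layer(torch.randn(4, 128)).shape)           # torch.Size([4, 128])
```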
ICLR.cc/2025/Conference
ArwsbHBoxA
Nearly Optimal Algorithms for Contextual Dueling Bandits from Adversarial Feedback
Learning from human feedback plays an important role in aligning generative models, such as large language models (LLMs). However, the effectiveness of this approach can be influenced by adversaries, who may intentionally provide misleading preferences to manipulate the output in an undesirable or harmful direction. To tackle this challenge, we study a specific model within this problem domain: contextual dueling bandits with adversarial feedback, where the true preference label can be flipped by an adversary. We propose an algorithm, robust contextual dueling bandits, based on uncertainty-weighted maximum likelihood estimation. Our algorithm achieves an $\tilde O(d\sqrt{T}+dC)$ regret bound, where $T$ is the number of rounds, $d$ is the dimension of the context, and $0 \le C \le T$ is the total amount of adversarial feedback. We also prove a lower bound to show that our regret bound is nearly optimal, both in scenarios with and without ($C=0$) adversarial feedback. To the best of our knowledge, our work is the first to achieve nearly minimax-optimal regret for dueling bandits in the presence of adversarial preference feedback. Additionally, we conduct experiments to evaluate our proposed algorithm against various types of adversarial feedback. Experimental results demonstrate its superiority over state-of-the-art dueling bandit algorithms in the presence of adversarial feedback.
Rejected_Submission
/pdf/8a65591acb0083f7f45e61f0d045d80dd7ce40cd.pdf
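The uncertainty-weighted maximum likelihood estimation at the core of the algorithm above can be sketched as a weighted logistic MLE over preference feedback. The specific weight rule below (down-weighting duels whose feature difference has high uncertainty) is a plausible form for illustration, not necessarily the paper's exact choice.

```python
# Sketch of uncertainty-weighted logistic MLE for dueling (preference) feedback.
import numpy as np
from scipy.optimize import minimize

def weighted_mle(diffs, labels, Sigma, alpha=1.0):
    # diffs: (n, d) feature differences phi(x, a) - phi(x, b); labels in {0, 1}.
    unc = np.einsum("nd,dk,nk->n", diffs, np.linalg.inv(Sigma), diffs)
    w = 1.0 / np.maximum(1.0, np.sqrt(unc) / alpha)  # high uncertainty -> lower weight

    def nll(theta):
        z = diffs @ theta
        # Weighted Bernoulli log-likelihood under the logistic preference model.
        return -np.sum(w * (labels * z - np.logaddexp(0.0, z)))

    return minimize(nll, np.zeros(diffs.shape[1]), method="L-BFGS-B").x

d = 4
diffs = np.random.randn(100, d)
labels = (np.random.rand(100) < 0.5).astype(float)
theta_hat = weighted_mle(diffs, labels, Sigma=np.eye(d) + diffs.T @ diffs)
```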
ICLR.cc/2025/Conference
hwnObmOTrV
Modeling Complex System Dynamics with Flow Matching Across Time and Conditions
Modeling the dynamics of complex real-world systems from temporal snapshot data is crucial for understanding phenomena such as gene regulation, climate change, and financial market fluctuations. Researchers have recently proposed a few methods based either on the Schroedinger Bridge or Flow Matching to tackle this problem, but these approaches remain limited in their ability to effectively combine data from multiple time points and different experimental settings. This integration is essential in real-world scenarios where observations from certain combinations of time points and experimental conditions are missing, either because of experimental costs or sensory failure. To address this challenge, we propose a novel method named Multi-Marginal Flow Matching (MMFM). MMFM first constructs a flow using smooth spline-based interpolation across time points and conditions and regresses it with a neural network using the classifier-free guided Flow Matching framework. This framework allows for the sharing of contextual information about the dynamics across multiple trajectories. We demonstrate the effectiveness of our method on both synthetic and real-world datasets, including a recent single-cell genomics data set with around a hundred chemical perturbations across time points. Our results show that MMFM significantly outperforms existing methods at imputing data at missing time points.
ICLR 2025 Spotlight
/pdf/b4f31bda4dbafe237f158f9eed6e210802a325a9.pdf
ICLR.cc/2025/Conference
Cn5Z0MUPZT
Process Supervision-Guided Policy Optimization for Code Generation
Reinforcement learning (RL) with unit test feedback has enhanced large language models’ (LLMs) code generation, but relies on sparse rewards provided only after complete code evaluation, limiting learning efficiency and incremental improvements. When generated code fails all unit tests, no learning signal is received, hindering progress on complex tasks. To address this, we propose a Process Reward Model (PRM) that delivers dense, line-level feedback on code correctness during generation, mimicking human code refinement and providing immediate guidance. We explore various strategies for training PRMs and integrating them into the RL framework, finding that using PRMs both as dense rewards and for value function initialization significantly boosts performance. Our approach increases our in-house LLM’s pass rate from 28.2\% to 29.8\% on LiveCodeBench and from 31.8\% to 35.8\% on our internal benchmark. Our experimental results highlight the effectiveness of PRMs in enhancing RL-driven code generation, especially for long-horizon scenarios.
Rejected_Submission
/pdf/b8fd1ffe3c82b638bcfd2721b970ac8c04c094b9.pdf
ICLR.cc/2025/Conference
BksqWM8737
ProteinBench: A Holistic Evaluation of Protein Foundation Models
Recent years have witnessed a surge in the development of protein foundation models, significantly improving performance in protein prediction and generative tasks ranging from 3D structure prediction and protein design to conformational dynamics. However, the capabilities and limitations associated with these models remain poorly understood due to the absence of a unified evaluation framework. To fill this gap, we introduce ProteinBench, a holistic evaluation framework designed to enhance the transparency of protein foundation models. Our approach consists of three key components: (i) A taxonomic classification of tasks that broadly encompass the main challenges in the protein domain, based on the relationships between different protein modalities; (ii) A multi-metric evaluation approach that assesses performance across four key dimensions: quality, novelty, diversity, and robustness; and (iii) In-depth analyses from various user objectives, providing a holistic view of model performance. Our comprehensive evaluation of protein foundation models reveals several key findings that shed light on their current capabilities and limitations. To promote transparency and facilitate further research, we release the evaluation dataset, code, and a public leaderboard publicly for further analysis and a general modular toolkit. We intend for ProteinBench to be a living benchmark for establishing a standardized, in-depth evaluation framework for protein foundation models, driving their development and application while fostering collaboration within the field.
ICLR 2025 Poster
/pdf/3544ce00631e6c89a866f5f9ff2ba3d59149c0d7.pdf
ICLR.cc/2018/Conference
HJGXzmspb
Training and Inference with Integers in Deep Neural Networks
Research on deep neural networks with discrete parameters and their deployment in embedded systems has been an active and promising topic. Although previous works have successfully reduced precision in inference, transferring both training and inference processes to low-bitwidth integers has not been demonstrated simultaneously. In this work, we develop a new method termed "WAGE" to discretize both training and inference, where weights (W), activations (A), gradients (G) and errors (E) among layers are shifted and linearly constrained to low-bitwidth integers. To perform a pure discrete dataflow for fixed-point devices, we further replace batch normalization by a constant scaling layer and simplify other components that are arduous for integer implementation. Improved accuracies can be obtained on multiple datasets, which indicates that WAGE somehow acts as a type of regularization. Empirically, we demonstrate the potential to deploy training in hardware systems such as integer-based deep learning accelerators and neuromorphic chips with comparable accuracy and higher energy efficiency, which is crucial to future AI applications in variable scenarios with transfer and continual learning demands.
Accept (Oral)
/pdf/516345e75eb2cf918642c571d05976a33898d715.pdf
ICLR.cc/2025/Conference
Rg2JxBZZ0g
GeneMamba: Early Parkinson’s Detection via Wearable Device and Genetic Data
Parkinson's disease (PD) is a progressive neurodegenerative disorder affecting millions worldwide, with its prevalence expected to rise as the global population ages. Early diagnosis is crucial for effective management and improved quality of life for patients. However, current accelerometer-based studies focus more on detecting the symptoms of PD, while less research has been conducted on early detection of PD. This study presents a novel multi-modal deep learning model named GeneMamba for early PD diagnosis, using state space modelling approaches to effectively analyze sequences and combining accelerometer data from wearable devices with genetic variants data. Our model predicts early PD occurrence up to 7 years before clinical onset, outperforming existing methods. Furthermore, through knowledge transfer, we enable accurate PD prediction using only wearable device data, enhancing our model's real-world applicability. Additionally, our interpretation methods uncover both established and previously unidentified genes associated with PD, advancing our understanding of the disease's genetic architecture and potentially highlighting new therapeutic targets. Our approach not only advances early PD diagnosis but also offers insights into the disease's etiology, paving the way for improved risk assessment and personalized interventions.
Rejected_Submission
/pdf/89a47e3c58410a87cc6a621eb9b5f62f2befb6a6.pdf
ICLR.cc/2025/Conference
HCJ7B6dhYK
Radon Implicit Field Transform (RIFT): Learning Scenes from Radar Signals
Data acquisition in array signal processing (ASP) is costly because achieving high angular and range resolutions necessitates large antenna apertures and wide frequency bandwidths, respectively. The data requirements for ASP problems grow multiplicatively with the number of viewpoints and frequencies, significantly increasing the burden of data collection, even for simulation. Implicit Neural Representations (INRs) — neural network-based models of 3D objects and scenes — offer compact and continuous representations with minimal radar data. They can interpolate to unseen viewpoints and potentially address the sampling cost in ASP problems. In this work, we select Synthetic Aperture Radar (SAR) as a case from ASP and propose the \textit{\textbf{R}adon \textbf{I}mplicit \textbf{F}ield \textbf{T}ransform} (RIFT). RIFT consists of two components: a classical forward model for radar (Generalized Radon Transform, GRT), and an INR-based scene representation learned from radar signals. This method can be extended to other ASP problems by replacing the GRT with appropriate algorithms corresponding to different data modalities. In our experiments, we first synthesize radar data using the GRT. We then train the INR model on this synthetic data by minimizing the reconstruction error of the radar signal. After training, we render the scene using the trained INR and evaluate our scene representation against the ground truth scene. Due to the lack of existing benchmarks, we introduce two main new error metrics: \textit{\textbf{p}hase-\textbf{R}oot \textbf{M}ean \textbf{S}quare \textbf{E}rror} (p-RMSE) for radar signal interpolation, and \textit{\textbf{m}agnitude-\textbf{S}tructural \textbf{S}imilarity \textbf{I}ndex \textbf{M}easure} (m-SSIM) for scene reconstruction. These metrics adapt traditional error measures to account for the complex nature of radar signals. Compared to traditional scene models in radar signal processing, with only a 10\% data footprint, our RIFT model achieves up to a 188\% improvement in scene reconstruction. Using the same amount of data, RIFT is up to $3\times$ better at reconstruction and shows a 10\% improvement when generalizing to unseen viewpoints.
Rejected_Submission
/pdf/b19c3b23049da2f3ebc7ae0ed67ec5004b7ac55d.pdf
ICLR.cc/2025/Conference
i7yL7VJx4H
Gradient dynamics of low-rank fine-tuning beyond kernels
LoRA has emerged as one of the \emph{de facto} methods for fine-tuning foundation models with low computational cost and memory footprint. The idea is to only train a low-rank perturbation to the weights of a pre-trained model, given supervised data for a downstream task. Despite its empirical success, from a mathematical perspective it remains poorly understood which learning mechanisms ensure that gradient descent converges to useful low-rank perturbations. In this work we initiate the study of low-rank fine-tuning in a student-teacher setting. We are given the weights of a two-layer \emph{base model} $f$, as well as i.i.d. samples $(x,f^*(x))$ where $x$ is Gaussian and $f^*$ is the \emph{teacher model} given by perturbing the weights of $f$ by a rank-1 matrix. This generalizes the setting of \emph{generalized linear model (GLM) regression}, where the weights of $f$ are zero. When the rank-1 perturbation is comparable in norm to the weight matrix of $f$, the training dynamics are nonlinear. Nevertheless, in this regime we prove under mild assumptions that a student model which is initialized at the base model and trained with online gradient descent will converge to the teacher in $dk^{O(1)}$ iterations, where $k$ is the number of neurons in $f$. Importantly, unlike in the GLM setting, the complexity does not depend on fine-grained properties of the activation's Hermite expansion. We also prove that in our setting, learning the teacher model ``from scratch'' can require significantly more iterations.
Rejected_Submission
/pdf/513c87dd54a6130b7cf220314c41b86680cced1a.pdf
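The student-teacher setup in the abstract above admits a small runnable illustration: a two-layer base model $f$, a teacher whose first-layer weights are $f$'s plus a rank-1 perturbation $uv^\top$, and a student that trains only a rank-1 correction with online SGD on fresh Gaussian inputs. All sizes, learning rates, and step counts are illustrative assumptions.

```python
# Toy instance of rank-1 fine-tuning in a student-teacher setting.
import torch

d, k = 16, 8
W = torch.randn(k, d) / d ** 0.5                  # frozen base first-layer weights
a = torch.randn(k) / k ** 0.5                     # frozen second-layer weights
u_t, v_t = torch.randn(k, 1), torch.randn(d, 1)   # teacher's rank-1 perturbation

def f(x, U, V):
    # Two-layer ReLU network with first-layer weights W + U V^T.
    return torch.relu(x @ (W + U @ V.T).T) @ a

u = torch.zeros(k, 1, requires_grad=True)         # student starts at the base model
v = torch.randn(d, 1, requires_grad=True)
opt = torch.optim.SGD([u, v], lr=0.05)

for _ in range(2000):                             # online: a fresh sample each step
    x = torch.randn(32, d)
    loss = ((f(x, u, v) - f(x, u_t, v_t)) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
print(loss.item())                                # should shrink toward zero
```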
ICLR.cc/2025/Conference
a7gfCUhwdV
MetaAgent: Automatically Building Multi-Agent System based on Finite State Machine
Large Language Models (LLMs) can solve various practical tasks via a multi-agent system. However, existing human-designed multi-agent systems can only adapt to a limited number of pre-defined scenarios. Current auto-designed methods also have several drawbacks, including no tool support, reliance on in-bag training, and an inflexible communication structure. Therefore, we propose \textbf{MetaAgent}, a novel framework to automatically generate a multi-agent system based on a finite state machine. Given a task description, MetaAgent designs a multi-agent system and polishes it through self-generated test queries. When the multi-agent system is deployed, the finite state machine, which supports traceback and is well-suited for tool use, controls the process to handle every case in the task domain. To evaluate our framework, we conduct experiments on both practical tasks and basic NLP tasks; the results indicate that the generated multi-agent system surpasses other auto-designed methods and achieves performance comparable to human-designed multi-agent systems polished for those specific tasks.
Rejected_Submission
/pdf/c3bf4661c30e5a24ae84a2e78d4e5f5823318243.pdf
ICLR.cc/2024/Conference
3J7foqnJkA
Understanding Parameter Saliency via Extreme Value Theory
Deep neural networks have been increasingly deployed throughout society in recent years, and identifying which parameters trigger misclassification is useful for diagnosing undesirable model behaviors. The concept of parameter saliency has been proposed and used to diagnose convolutional neural networks (CNNs) by ranking convolution filters that may have caused misclassification on the basis of parameter saliency. It has also been shown that fine-tuning the top-ranking salient filters efficiently corrects misidentification on ImageNet. However, there is still a knowledge gap in terms of understanding why parameter saliency ranking can find the filters inducing misidentification. In this work, we attempt to bridge the gap by analyzing parameter saliency ranking from a statistical viewpoint, namely, extreme value theory. We first show that the existing work implicitly assumes that the gradient norm computed for each filter follows a normal distribution. Then, we clarify the relationship between parameter saliency and the score based on the peaks-over-threshold (POT) method, which is often used to model extreme values. Finally, we reformulate parameter saliency in terms of the POT method, where this reformulation is regarded as statistical anomaly detection and does not require the implicit assumptions of the existing formulation of parameter saliency. Our experimental results demonstrate that our reformulation can detect malicious filters as well. Furthermore, we show that the existing parameter saliency method exhibits a bias against the depth of layers in deep neural networks. In particular, this bias has the potential to inhibit the discovery of filters that cause misidentification in situations where domain shift occurs. In contrast, parameter saliency based on POT shows less of this bias.
Rejected_Submission
/pdf/02b4f489644efb1ccc094dbddfeeed8f8b3b0f40.pdf
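The peaks-over-threshold reformulation described above can be sketched by fitting a generalized Pareto distribution to the exceedances of filter-wise gradient norms over a high threshold and scoring filters by their survival probability. The threshold quantile and the SciPy-based fit below are assumptions for illustration.

```python
# Sketch of POT-based anomaly scoring for filter-wise gradient norms.
import numpy as np
from scipy.stats import genpareto

def pot_saliency_scores(grad_norms, quantile=0.9):
    u = np.quantile(grad_norms, quantile)          # high threshold
    excess = grad_norms[grad_norms > u] - u
    c, _, scale = genpareto.fit(excess, floc=0.0)  # fit GPD to exceedances
    # Survival probability under the fitted GPD: smaller = more anomalous.
    return np.where(grad_norms > u,
                    genpareto.sf(grad_norms - u, c, loc=0.0, scale=scale),
                    1.0)

# Example: score 512 filters; the smallest scores are the most extreme filters.
norms = np.abs(np.random.randn(512)) ** 2
ranking = np.argsort(pot_saliency_scores(norms))   # most anomalous first
print(ranking[:10])
```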
ICLR.cc/2024/Conference
2wFXD2upSQ
A Demon at Work: Leveraging Neuron Death for Efficient Neural Network Pruning
When training deep neural networks, the phenomenon of "dying neurons" —units that become inactive and output zero throughout training—has traditionally been viewed as undesirable, linked with optimization challenges, and contributing to plasticity loss, particularly in continual learning scenarios. In this paper, we reassess this phenomenon through the lens of network sparsity and pruning. By systematically exploring the influence of various hyperparameter configurations on the occurrence of dying neurons, we unveil their potential to facilitate simple yet effective structured pruning algorithms. We introduce "Demon's Pruning" (DemP), a method that controls the proliferation of dead neurons, dynamically sparsifying neural networks as training progresses. Remarkably, our approach, characterized by its simplicity and broad applicability, outperforms existing structured pruning techniques, while achieving results comparable to prevalent unstructured pruning methods. These findings pave the way for leveraging dying neurons as a valuable resource for efficient model compression and optimization.
Rejected_Submission
/pdf/fa0bd78c2c4cf395f9c19ad9a6e442019db811d8.pdf
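A minimal sketch of the dead-neuron detection that structured pruning in the spirit of DemP relies on: a ReLU unit that never fires on a calibration batch is a candidate for removal. The zero-activation criterion and batch size below are assumptions, not the authors' exact rule.

```python
# Sketch: flag ReLU units that are dead (never activate) on a calibration batch.
import torch
import torch.nn as nn

@torch.no_grad()
def dead_unit_mask(layer: nn.Linear, act: nn.ReLU, calib_x: torch.Tensor):
    h = act(layer(calib_x))        # (batch, out_features)
    alive = (h > 0).any(dim=0)     # unit fired on at least one sample
    return ~alive                  # True = dead, candidate for structured pruning

layer = nn.Linear(64, 128)
mask = dead_unit_mask(layer, nn.ReLU(), torch.randn(256, 64))
print(f"{int(mask.sum())} of {mask.numel()} units are dead")
```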
ICLR.cc/2024/Conference
JlSyXwCEIQ
CodeIt: Abstract Reasoning with Iterative Policy-Guided Program Synthesis
Artificial intelligence systems are increasingly solving tasks that are commonly believed to require human-like reasoning ability. However, learned approaches still fare poorly on the Abstraction and Reasoning Corpus (ARC), a benchmark that measures skill-acquisition efficiency as a proxy for intelligence. Each ARC task requires an agent to reason about a transformation between input and output pairs. In this work, we solve these tasks by identifying the program that applies this transformation. We propose CodeIt, a program synthesis approach that leverages a higher level of abstraction through a domain-specific language. CodeIt iterates between sampling from the current large language model policy and learning that policy using supervised learning. The sampling stage augments newfound programs using hindsight relabeling and program mutation, requiring no expert search procedure. We demonstrate CodeIt’s effectiveness on the ARC benchmark, where we show that learning to write code in iterations leads to intertask generalization, which results in state-of-the-art performance.
Rejected_Submission
/pdf/a3876513e79fca09c31529eb2a98ec8cdbaaaf48.pdf
ICLR.cc/2025/Conference
o5wGjBEgH8
Novel View Acoustic Parameter Estimation
The task of Novel View Acoustic Synthesis (NVAS) -- generating Room Impulse Responses (RIRs) for unseen source and receiver positions in a scene -- has recently gained traction, especially given its relevance to Augmented Reality (AR) and Virtual Reality (VR) development. However, many of these efforts suffer from similar limitations: they infer RIRs in the time domain, which prove challenging to optimize; they focus on scenes with simple, single-room geometries; they infer only single-channel, directionally-independent acoustic characteristics; and they require inputs, such as 3D geometry meshes with material properties, that may be impractical to obtain for on-device applications. On the other hand, research suggests that sample-wise accuracy of RIRs is not required for perceptual plausibility in AR and VR. Standard acoustic parameters like Clarity Index (C50) or Reverberation Time (T60) have been shown to capably describe pertinent characteristics of the RIRs, especially late reverberation. To address these gaps, this paper introduces a new, intermediate task centered on estimating spatially distributed acoustic parameters, that can be then used to condition a simple reverberator to generate RIRs for arbitrary source and receiver positions. The approach is modeled as an image-to-image translation task, which translates 2D floormaps of a scene into 2D heatmaps of acoustic parameters. We introduce a new, large-scale dataset of 1000 scenes consisting of complex, multi-room apartment conditions, and show that our method outperforms statistical baselines significantly. Moreover, we show that the method also works for directionally-dependent (i.e. beamformed) parameter prediction. Finally, the proposed method operates on very limited information, requiring only a broad outline of the scene and a single RIR at inference time.
Rejected_Submission
/pdf/75e49ae72fa25c9d442221c3804eb066c9f5c6bc.pdf
ICLR.cc/2025/Conference
UcFjiiObbM
Harmonic Machine Learning Models are Robust
We introduce Harmonic Robustness, a powerful and intuitive method to test the robustness of any machine-learning model either during training or in black-box real-time inference monitoring without ground-truth labels. It is based on functional deviation from the harmonic mean-value property, indicating instability and lack of explainability. We show implementation examples in low-dimensional trees and feedforward NNs, where the method reliably identifies overfitting, as well as in more complex high-dimensional models such as ResNet-50 and Vision Transformer where it efficiently measures adversarial vulnerability across image classes.
Desk_Rejected_Submission
/pdf/d1412b8046cfb94a2723adaa3f54d91be09e576c.pdf
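The harmonic mean-value test described in the abstract above can be sketched directly: compare $f(x)$ with the average of $f$ over random points on a small sphere around $x$; a harmonic function would match exactly, so the gap serves as an instability score. The sphere radius, sample count, and toy model below are assumptions.

```python
# Sketch of a mean-value-property deviation score for a black-box model f.
import numpy as np

def harmonic_deviation(f, x, radius=0.05, n_samples=64, rng=None):
    rng = rng or np.random.default_rng(0)
    d = x.shape[0]
    dirs = rng.normal(size=(n_samples, d))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # points on the unit sphere
    sphere_avg = np.mean([f(x + radius * u) for u in dirs], axis=0)
    # Zero for a harmonic function; larger values suggest instability at x.
    return np.linalg.norm(f(x) - sphere_avg)

# Toy model: larger deviation suggests a less robust input point.
f = lambda z: np.tanh(z @ np.arange(1, 5.0))
print(harmonic_deviation(f, np.zeros(4)))
```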
ICLR.cc/2025/Conference
gLGp77MxFo
Who Should Join the Decision-Making Table? Targeted Expert Selection for Enhanced Human-AI Collaboration
Integrating AI and human expertise can significantly enhance decision-making across various scenarios. This paper introduces a novel approach that leverages the Product of Experts (PoE) model to optimize decision-making by strategically combining AI with human inputs. While human experts bring diverse perspectives, their decisions may be constrained by biases or knowledge gaps. To address these limitations, we propose an AI agent that provides probabilistic, rule-based insights, complementing and filling human experts' knowledge gaps. A key feature of our approach is the strategic selection of human experts based on how well their knowledge complements or enhances the AI’s recommendations. By dynamically adapting the expert selection process, we ensure that decisions benefit from the most impactful and complementary inputs. Our PoE model calibrates inputs from both AI and human experts, leveraging their combined strengths to improve decision outcomes. Furthermore, operating in an online setting, our framework can also continuously update the AI’s knowledge and refine expert selection criteria, ensuring adaptability to evolving environments. Experiments in simulation environments demonstrate that our model effectively integrates logic rule-informed AI with human expertise, enhancing collaborative decision-making.
Rejected_Submission
/pdf/af836773968f11ea0ebe28359bccc4dc4ef017a1.pdf
ICLR.cc/2024/Conference
xJEd8PkdNz
Impact of Computation in Integral Reinforcement Learning for Continuous-Time Control
Integral reinforcement learning (IntRL) demands the precise computation of the utility function's integral at its policy evaluation (PEV) stage. This is achieved through quadrature rules, which are weighted sums of the utility function evaluated at state samples obtained in discrete time. Our research reveals a critical yet underexplored phenomenon: the choice of the computational method -- in this case, the quadrature rule -- can significantly impact control performance. This impact is traced back to the fact that computational errors introduced in the PEV stage can affect the policy iteration's convergence behavior, which in turn affects the learned controller. To elucidate how computation impacts control, we draw a parallel between IntRL's policy iteration and Newton's method applied to the Hamilton-Jacobi-Bellman equation. In this light, the computational error in PEV manifests as an extra error term in each iteration of Newton's method, with its upper bound proportional to the computational error. Further, we demonstrate that when the utility function resides in a reproducing kernel Hilbert space (RKHS), the optimal quadrature is achieved by employing Bayesian quadrature with the RKHS-inducing kernel function. We prove the local convergence rates for IntRL using the trapezoidal rule and Bayesian quadrature with a Matérn kernel to be $O(N^{-2})$ and $O(N^{-b})$, respectively, where $N$ is the number of evenly-spaced samples and $b$ is the Matérn kernel's smoothness parameter. These theoretical findings are finally validated by two canonical control tasks.
ICLR 2024 spotlight
/pdf/9eca44e1414a070f87a6a21de74fc149bd37de96.pdf
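The two quadrature rules analyzed above, the trapezoidal rule with its $O(N^{-2})$ rate and Bayesian quadrature with a Matérn kernel, can be contrasted on the policy-evaluation integral of a utility over $[0, T]$. The Matérn-3/2 kernel, its length scale, and the fine-grid approximation of the kernel mean below are illustrative assumptions.

```python
# Sketch: trapezoidal rule vs. simple Bayesian quadrature for a utility integral.
import numpy as np

def trapezoid(u_vals, ts):
    return np.trapz(u_vals, ts)

def bayesian_quadrature(u_vals, ts, length=0.5):
    # Matern-3/2 kernel Gram matrix over the sample times.
    r = np.abs(ts[:, None] - ts[None, :]) / length
    K = (1 + np.sqrt(3) * r) * np.exp(-np.sqrt(3) * r)
    # Kernel mean z_i = int k(t_i, s) ds, approximated on a fine grid (assumption).
    grid = np.linspace(ts[0], ts[-1], 2000)
    rg = np.abs(ts[:, None] - grid[None, :]) / length
    z = (ts[-1] - ts[0]) * ((1 + np.sqrt(3) * rg) * np.exp(-np.sqrt(3) * rg)).mean(axis=1)
    w = np.linalg.solve(K + 1e-8 * np.eye(len(ts)), z)   # BQ weights
    return w @ u_vals

ts = np.linspace(0.0, 1.0, 20)
u = np.exp(-ts)                                          # toy utility samples
print(trapezoid(u, ts), bayesian_quadrature(u, ts))      # exact value: 1 - e^-1
```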
ICLR.cc/2024/Conference
FPpLTTvzR0
IDEA: Invariant Causal Defense for Graph Adversarial Robustness
Despite the success of graph neural networks (GNNs), their vulnerability to adversarial attacks poses tremendous challenges for practical applications. Existing defense methods suffer from severe performance decline under some unknown attacks, due to either limited observed adversarial examples (adversarial training) or pre-defined heuristics (graph purification or robust aggregation). To address these limitations, we analyze the causalities in graph adversarial attacks and conclude that causal features are desirable for achieving graph adversarial robustness, owing to their deterministic relationship with labels and their invariance across attacks. To learn these causal features, we propose an Invariant causal DEfense method against adversarial Attacks (IDEA). We derive node-based and structure-based invariance objectives from an information-theoretic perspective. IDEA is provably a causally invariant defense across various attacks. Extensive experiments demonstrate that IDEA significantly outperforms all baselines under both poisoning and evasion attacks on five benchmark datasets, highlighting its strong and invariant predictability. The implementation of IDEA is available at https://anonymous.4open.science/r/IDEA_repo-666B.
Rejected_Submission
/pdf/c0a3fbdaaa5e492707cdb2cc3fe91fdf0512ce98.pdf
ICLR.cc/2024/Conference
RN2lIjrtSR
ZeroI2V: Zero-Cost Adaptation of Pre-Trained Transformers from Image to Video
Adapting image models to the video domain has become an efficient paradigm for solving video recognition tasks. Due to the huge number of parameters and the effective transferability of image models, performing full fine-tuning is less efficient and even unnecessary. Thus, recent research is shifting its focus towards parameter-efficient image-to-video adaptation. However, these adaptation strategies inevitably introduce extra computational cost to deal with the domain gap and temporal modeling in videos. In this paper, our goal is to present a zero-cost adaptation paradigm (ZeroI2V) that transfers image transformers to video recognition tasks (i.e., introduces zero extra cost to the adapted models during inference). To achieve this goal, we present two core designs. First, to capture the dynamics in videos and reduce the difficulty of image-to-video adaptation, we exploit the flexibility of self-attention and introduce spatial-temporal dual-headed attention (STDHA), which efficiently endows image transformers with temporal modeling capability at zero extra parameters and computation. Second, to handle the domain gap between images and videos, we propose a linear adaptation strategy that utilizes lightweight, densely placed linear adapters to fully transfer frozen image models to video recognition. Due to its customized linear design, all newly added adapters can be easily merged with the original modules through structural reparameterization after training, thus achieving zero extra cost during inference. Extensive experiments on four widely-used video recognition benchmarks show that ZeroI2V can match or even outperform previous state-of-the-art methods while enjoying superior parameter and inference efficiency.
Rejected_Submission
/pdf/0b6fd0b9649a9ca417f6ea354aa9071fea7f1f84.pdf
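The structural reparameterization that gives ZeroI2V its zero inference cost can be sketched for the simplest case, a parallel linear adapter: after training, the adapter's weights fold into the base layer, so the merged model is a single linear layer. Shapes are illustrative; the paper's STDHA component is not shown.

```python
# Sketch: merging a parallel linear adapter into its base layer after training.
import torch
import torch.nn as nn

def merge_parallel_adapter(base: nn.Linear, adapter: nn.Linear) -> nn.Linear:
    # y = base(x) + adapter(x)  ==  (W_b + W_a) x + (b_b + b_a)
    merged = nn.Linear(base.in_features, base.out_features)
    with torch.no_grad():
        merged.weight.copy_(base.weight + adapter.weight)
        merged.bias.copy_(base.bias + adapter.bias)
    return merged

base, adapter = nn.Linear(768, 768), nn.Linear(768, 768)
x = torch.randn(2, 768)
merged = merge_parallel_adapter(base, adapter)
print(torch.allclose(base(x) + adapter(x), merged(x), atol=1e-6))  # True
```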
ICLR.cc/2025/Conference
70xsq3EO2M
Learning Ante-hoc Explanations for Molecular Graphs
Explaining the decisions made by machine learning models for high-stakes applications is critical for transparency. This is particularly true for models on graphs, where decisions depend on complex patterns combining structural and attribute data. We propose EAGER (Effective Ante-hoc Graph Explainer), a novel and flexible ante-hoc explainer designed to discover explanations for graph neural networks, with a focus on the chemical domain. As an ante-hoc model, EAGER inductively learns a graph predictive model and the associated explainer together. We employ a novel bilevel iterative training process based on optimizing the Information Bottleneck principle, effectively distilling the most useful substructures while discarding irrelevant details. As a result, EAGER can identify molecular substructures that contain the necessary and precise information needed for prediction. Our experiments on various molecular classification tasks show that EAGER's explanations are better than those of existing post-hoc and ante-hoc approaches.
Rejected_Submission
/pdf/89a595a1390d8cfc2b7eff9875b89fb257d22f2a.pdf
ICLR.cc/2025/Conference
Y2Dh8rWwlb
EditRoom: LLM-parameterized Graph Diffusion for Composable 3D Room Layout Editing
Given the steep learning curve of professional 3D software and the time-consuming process of managing large 3D assets, language-guided 3D scene editing has significant potential in fields such as virtual reality, augmented reality, and gaming. However, recent approaches to language-guided 3D scene editing either require manual interventions or focus only on appearance modifications without supporting comprehensive scene layout changes. In response, we propose EditRoom, a unified framework capable of executing a variety of layout edits through natural language commands, without requiring manual intervention. Specifically, EditRoom leverages Large Language Models (LLMs) for command planning and generates target scenes using a diffusion-based method, enabling six types of edits: rotate, translate, scale, replace, add, and remove. To address the lack of data for language-guided 3D scene editing, we have developed an automatic pipeline to augment existing 3D scene synthesis datasets and introduced EditRoom-DB, a large-scale dataset with 83k editing pairs, for training and evaluation. Our experiments demonstrate that our approach consistently outperforms other baselines across all metrics, indicating higher accuracy and coherence in language-guided scene layout editing.
ICLR 2025 Poster
/pdf/8e2efeace9bba33ff48aa5a59c438dccaf5165a8.pdf
ICLR.cc/2024/Conference
M6kpUtpQZQ
Can Synthetic Data Reduce Conservatism of Distributionally Robust Adversarial Training?
When the inputs of a machine learning model are subject to adversarial attacks, standard stationarity assumptions on the training and test sets are violated, typically making empirical risk minimization (ERM) ineffective. Adversarial training, which imitates the adversary during the training stage, has thus emerged as the *de facto* standard for hedging against adversarial attacks. Although adversarial training provides some robustness over ERM, it can still be subject to overfitting, which explains why recent work mixing the training set with synthetic data obtains improved out-of-sample performances. Inspired by these observations, we develop a Wasserstein distributionally robust (DR) counterpart of adversarial training for improved generalization and provide a recipe for further reducing the conservatism of this approach by adjusting its ambiguity set with respect to synthetic data. The underlying optimization problem, DR adversarial training with synthetic data, is nonconvex and comprises infinitely many constraints. To this end, by using results from robust optimization and convex analysis, we develop tractable relaxations. We focus our analyses on the logistic loss function and provide discussions for adapting this framework to several other loss functions. We demonstrate the superiority of this approach on artificial as well as standard benchmark problems.
Desk_Rejected_Submission
/pdf/b9bb824dfa931a7433323cb1b3e92cc2a33145b0.pdf
ICLR.cc/2025/Conference
9mBodivRIo
LocoVR: Multiuser Indoor Locomotion Dataset in Virtual Reality
Understanding human locomotion is crucial for AI agents such as robots, particularly in complex indoor home environments. Modeling human trajectories in these spaces requires insight into how individuals maneuver around physical obstacles and manage social navigation dynamics. These dynamics include subtle behaviors influenced by proxemics - the social use of space, such as stepping aside to allow others to pass or choosing longer routes to avoid collisions. Previous research has developed datasets of human motion in indoor scenes, but these are often limited in scale and lack the nuanced social navigation dynamics common in home environments. To address this, we present LocoVR, a dataset of 7000+ two-person trajectories captured in virtual reality from over 130 different indoor home environments. LocoVR provides accurate trajectory and precise spatial information, along with rich examples of socially-motivated movement behaviors. For example, the dataset captures instances of individuals navigating around each other in narrow spaces, adjusting paths to respect personal boundaries in living areas, and coordinating movements in high-traffic zones like entryways and kitchens. Our evaluation shows that LocoVR significantly enhances model performance in three practical indoor tasks utilizing human trajectories, and demonstrates predicting socially-aware navigation patterns in home environments.
ICLR 2025 Poster
/pdf/f09d255b34007863bcd74b6fde80b26639f4a03b.pdf
ICLR.cc/2025/Conference
ZZVOrId3yN
CrossModalNet: Multimodal Medical Segmentation with Guaranteed Cross-Modal Flow and Domain Adaptability
The fusion of multimodal data in medical image segmentation has emerged as a critical frontier in biomedical research, promising unprecedented diagnostic precision and insights. However, the intricate challenge of effectively integrating diverse data streams while preserving their unique characteristics has persistently eluded comprehensive solutions. This study introduces CrossModalNet, a groundbreaking architecture that revolutionizes multimodal medical image segmentation through advanced mathematical frameworks and innovative domain adaptation techniques. We present a rigorous mathematical analysis of CrossModalNet, proving its universal approximation capabilities and deriving tight generalization bounds. Furthermore, we introduce the Cross-Modal Information Flow (CMIF) metric, providing theoretical justification for the progressive integration of multimodal information through the network layers. Our Joint Adversarial Domain Adaptation (JADA) framework addresses the critical issue of domain shift, simultaneously aligning marginal and conditional distributions while preserving topological structures. Extensive experiments on the MM-WHS dataset demonstrate CrossModalNet's superior performance. This work not only advances the field of medical image segmentation but also provides a robust theoretical foundation for future research in multimodal learning and domain adaptation across various biomedical applications.
Rejected_Submission
/pdf/2563251a5d6a660fcdbff9c43364a3071bf2f0f9.pdf
ICLR.cc/2025/Conference
DRhKnUYNm9
A Decoupled Learning Framework for Neural Marked Temporal Point Process
The standard neural marked temporal point process employs the Embedding-Encoder-History vector-Decoder (EEHD) architecture, wherein the history vector encapsulates the cumulative effects of past events. However, due to the inherent imbalance in event categories in real-world scenarios, the history vector tends to favor more frequent events, inadvertently overlooking less common yet potentially significant ones, thereby compromising the model’s overall performance. To tackle this issue, we introduce a novel decoupled learning framework for the neural marked temporal point process, where each event type is modeled independently to capture its unique characteristics, allowing for a more nuanced and equitable treatment of all event types. Each event type boasts its own complete EEHD architecture, featuring scaled-down parameters due to the decoupling of temporal dynamics. This decoupled design enables asynchronous parallel training, and the embeddings can reflect the dependencies between event types. Our versatile framework, accommodating various encoder and decoder architectures, demonstrates state-of-the-art performance across diverse datasets, outperforming benchmarks by a significant margin and increasing training speed by up to 12 times. Additionally, it offers interpretability, revealing which event types have similar influences on a particular event type, fostering a deeper understanding of temporal dynamics.
Rejected_Submission
/pdf/48796bc757115412cdac463637ba0c1393495db0.pdf
ICLR.cc/2025/Conference
Nw7i9Gd1WU
Exploration in the Face of Strategic Responses: Provable Learning of Online Stackelberg Games
We study online leader-follower games where the leader interacts with a myopic follower using a quantal response policy. The leader's objective is to design an algorithm without prior knowledge of her reward function or the state transition dynamics. Crucially, the leader also lacks insight into the follower's reward function and realized rewards, posing a significant challenge. To address this, the leader must learn the follower's quantal response mapping solely through strategic interactions --- announcing policies and observing responses. We introduce a unified algorithm, Planning after Estimation, which updates the leader's policies in a two-step approach. In particular, we first jointly estimate the leader's value function and the follower's response mapping by maximizing a sum of the Bellman error of the value function, the likelihood of the quantal response model, and a regularization term that encourages exploration. The leader's policy is then updated through a greedy planning step based on these estimates. Our algorithm achieves a $\sqrt{T}$-regret in the context of general function approximation. Moreover, this algorithm avoids intractable optimistic planning and thus enhances implementation simplicity.
Rejected_Submission
/pdf/2520510ef74b14cffd5fc9d4f15e454bcff4b1b5.pdf
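The follower in the record above plays a quantal response policy. A common instantiation is a logit (softmax) response to the follower's utilities; the sketch below assumes that form, with a hypothetical rationality parameter eta, which may differ from the paper's exact model.

```python
import numpy as np

def quantal_response(follower_utils, eta=1.0):
    """Softmax (logit) quantal response over follower actions.

    follower_utils: utilities u(a) the follower gets for each action a,
    given the leader's announced policy. eta is the rationality level:
    eta -> 0 gives uniform play, eta -> inf recovers the best response.
    """
    z = eta * np.asarray(follower_utils, dtype=float)
    z -= z.max()                      # numerical stability
    p = np.exp(z)
    return p / p.sum()

# Hypothetical utilities for three follower actions.
print(quantal_response([1.0, 0.5, -0.2], eta=2.0))
```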
ICLR.cc/2024/Conference
TWVMVPx2wO
Learning Semantic Proxies from Visual Prompts for Parameter-Efficient Fine-Tuning in Deep Metric Learning
Deep Metric Learning (DML) has long attracted the attention of the machine learning community as a key objective. Existing solutions concentrate on fine-tuning the pre-trained models on conventional image datasets. As a result of the success of recent pre-trained models derived from larger-scale datasets, it is challenging to adapt the model to the DML tasks in the local data domain while retaining the previously gained knowledge. In this paper, we investigate parameter-efficient methods for fine-tuning the pre-trained model for DML tasks. In particular, we propose a novel and effective framework based on learning Visual Prompts (VPT) in the pre-trained Vision Transformers (ViT). Based on the conventional proxy-based DML paradigm, we augment the proxy by incorporating the semantic information from the input image and the ViT, in which we optimize the visual prompts for each class. We demonstrate that our new approximations with semantic information are superior to representative capabilities, thereby improving metric learning performance. We conduct extensive experiments to demonstrate that our proposed framework is superior and efficient by evaluating popular DML benchmarks. In particular, we demonstrate that our fine-tuning method achieves comparable or even better performance than recent state-of-the-art full fine-tuning works of DML while tuning only a small percentage of total parameters.
ICLR 2024 poster
/pdf/f018c189217e52be4cef2ab84c2395f38b3041b6.pdf
ICLR.cc/2025/Conference
fWRBheSJth
GReaTer: Gradients Over Reasoning Makes Smaller Language Models Strong Prompt Optimizers
The effectiveness of large language models (LLMs) is closely tied to the design of prompts, making prompt optimization essential for enhancing their performance across a wide range of tasks. Although recent advancements have focused on automating prompt engineering, many existing approaches rely exclusively on textual feedback, refining prompts based solely on inference errors identified by large, computationally expensive LLMs. Unfortunately, smaller models struggle to generate high-quality feedback, resulting in complete dependence on large LLM judgment. Moreover, these methods fail to leverage more direct and finer-grained information, such as gradients, due to operating purely in text space. To this end, we introduce *GReaTer*, a novel prompt optimization technique that directly incorporates *gradient information over task-specific reasoning*. By utilizing task loss gradients, *GReaTer* enables self-optimization of prompts for smaller, lightweight language models (LMs) without the need for costly closed-source LLMs, while maintaining reasonable prompt structures. This allows high-performance prompt optimization without dependence on massive LLMs, closing the gap between smaller models and the sophisticated reasoning often needed for prompt refinement. Extensive evaluations across diverse tasks demonstrate that *GReaTer* consistently outperforms previous methods, even those reliant on powerful LLMs. Additionally, *GReaTer*-optimized prompts frequently exhibit better transferability and, in some cases, boost task performance to levels comparable to or surpassing those achieved by larger language models, highlighting the effectiveness of *"gradient over reasoning"*-based prompt optimization. Code of *GReaTer* is available at: https://github.com/psunlpgroup/GreaTer
ICLR 2025 Poster
/pdf/f010915e77460523ea348edc97747955a679cf6e.pdf
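The key ingredient in *GReaTer* above is gradient information from a task loss rather than textual feedback. Below is a generic, hedged illustration of taking gradients of a loss with respect to prompt token embeddings in PyTorch; the tiny model and every name are stand-ins, not the paper's pipeline.

```python
import torch
import torch.nn.functional as F

# Minimal stand-in for "gradients over the prompt": a tiny embedding +
# linear "model". Real usage would backpropagate an LM's loss on task
# examples; all names and sizes here are hypothetical.
vocab, dim = 100, 16
emb = torch.nn.Embedding(vocab, dim)
head = torch.nn.Linear(dim, 2)

prompt_ids = torch.tensor([3, 17, 42])
prompt_vecs = emb(prompt_ids).detach().requires_grad_(True)

logits = head(prompt_vecs.mean(dim=0))      # toy forward pass
loss = F.cross_entropy(logits.unsqueeze(0), torch.tensor([1]))
loss.backward()

# Gradient signal per prompt position: large-norm rows mark tokens
# whose replacement would most change the task loss.
print(prompt_vecs.grad.norm(dim=1))
```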
ICLR.cc/2025/Conference
4mqt6QxSUO
A Unified Riemannian-Geometric Framework for SARS-CoV-2 Detection from CT Scans
We present a novel, theoretically grounded framework for automated SARS-CoV-2 detection from pulmonary Computed Tomography (CT) scans, integrating cutting-edge concepts from statistical learning theory, optimal transport, and information geometry. Our approach begins with a submodular optimization-based image selection protocol, utilizing a continuous greedy algorithm. The feature extraction process employs a Riemannian geometry-inspired attention mechanism, where feature integration is formulated as geodesic interpolation on a manifold induced by the Fisher Information Metric. We introduce a unified decision-making framework based on proper scoring rules and Bregman divergences, encompassing multiple voting schemes with proven consistency and asymptotic normality properties. To address domain shift, we develop an adversarial domain adaptation technique using the Wasserstein-Fisher-Rao distance, complemented by a graph-based regularization term derived from Gromov-Wasserstein theory. Theoretical analysis provides convergence guarantees for the adversarial training process and establishes generalization bounds in terms of optimal transport distances. Empirical evaluation demonstrates the superiority of our approach over existing methods, achieving state-of-the-art performance on benchmark datasets. This work not only advances the field of automated medical image analysis but also contributes fundamental theoretical insights to the broader domains of machine learning and optimal transport theory.
Rejected_Submission
/pdf/4c82682f7ed6d22da7c1ca718ca6f1f2bcf8a53d.pdf
ICLR.cc/2024/Conference
FjifPJV2Ol
SOLVING SCHRODINGER BRIDGE PROBLEM VIA STOCHASTIC ACTION MINIMIZATION
The Schrodinger bridge problem is a classical entropy-regularized optimal transport problem that seeks to find optimal diffusion trajectories that transform one probability distribution into another. Although the mathematical theory has reached a mature stage, research on algorithmic advancements remains a dynamic field, driven by recent innovations in diffusion models. We introduce the stochastic Lagrangian and stochastic action as a viable alternative for serving as a direct loss function. We demonstrate the feasibility of incorporating all the vital physical constraints necessary to solve the problem directly into the Lagrangian, providing an intuitive grasp of the loss function and streamlining the training process.
Rejected_Submission
/pdf/6019f43f81041feffd7345674192c5f6bb9c94aa.pdf
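For orientation, the standard stochastic-control (action-minimization) formulation of the Schrodinger bridge that the record above builds on is given below; this is the textbook form, not necessarily the paper's exact stochastic action.

```latex
% Dynamic formulation: steer a diffusion from \rho_0 to \rho_1
% with minimal expected control energy (the stochastic action).
\min_{u}\; \mathbb{E}\!\left[\int_0^1 \tfrac{1}{2}\,\lVert u_t\rVert^2\,dt\right]
\quad \text{s.t.} \quad
dX_t = u_t\,dt + \sqrt{\varepsilon}\,dW_t,\qquad
X_0 \sim \rho_0,\; X_1 \sim \rho_1 .
```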
ICLR.cc/2025/Conference
7GKbQ1WT1C
Prompting Fairness: Integrating Causality to Debias Large Language Models
Large language models (LLMs), despite their remarkable capabilities, are susceptible to generating biased and discriminatory responses. As LLMs increasingly influence high-stakes decision-making (e.g., hiring and healthcare), mitigating these biases becomes critical. In this work, we propose a causality-guided debiasing framework to tackle social biases, aiming to reduce the objectionable dependence between LLMs' decisions and the social information in the input. Our framework introduces a novel perspective to identify how social information can affect an LLM's decision through different causal pathways. Leveraging these causal insights, we outline principled prompting strategies that regulate these pathways through selection mechanisms. This framework not only unifies existing prompting-based debiasing techniques, but also opens up new directions for reducing bias by encouraging the model to prioritize fact-based reasoning over reliance on biased social cues. We validate our framework through extensive experiments on real-world datasets across multiple domains, demonstrating its effectiveness in debiasing LLM decisions, even with only black-box access to the model.
ICLR 2025 Poster
/pdf/955609b0758b8f16776f047af14ece57487eafd4.pdf
ICLR.cc/2024/Conference
DFQCJmHPoe
Adversarial latent representation for positive unlabeled learning
Novelty detection, a widely studied problem in machine learning, is the task of detecting a novel class of data that has not been previously observed. Deep networks have driven the state-of-the-art work on this application in recent years due to their successful applications on large and more complex datasets. The usual setting for novelty detection is unsupervised, whereby only examples of the normal class are available during training, but more recently there has been a surge of interest in semi-supervised methods. A common assumption about semi-supervised methods is their access to an additional set of labeled data that includes a few examples of anomalies. Transductive novelty detection, or positive-unlabeled (PU) learning, on the other hand assumes access to an additional unlabeled set that contains examples of anomalies. In this study, we focus on machine vision applications and propose TransductGAN, a transductive generative adversarial network (GAN) that attempts to learn how to generate image examples from the novel class by separating the latter from the negative class in a latent space using a mixture of two Gaussians. It achieves this by incorporating an adversarial autoencoder with a GAN network; the ability to generate examples of novel data points offers not only a visual representation of the new class, but also overcomes the hurdle faced by many inductive methods of how to tune the model hyperparameters at the decision-rule level. In addition, the introduction of a latent space enables enhanced discriminative learning. Our model has shown superior performance over state-of-the-art work on several benchmark datasets.
Rejected_Submission
/pdf/df0e4758da36e4ee306cf7eb5357a9d7aa384a4f.pdf
ICLR.cc/2024/Conference
dKPh4CLmYp
Fishnets: Information-Optimal, Scalable Aggregation for Sets and Graphs
Set-based learning is an essential component of modern deep learning and network science. Graph Neural Networks (GNNs) and their edge-free counterparts DeepSets (DS) have proven remarkably useful on ragged and topologically challenging datasets. The key to learning informative embeddings for set members is a specified aggregation function, usually a sum, max, or mean. We propose Fishnets, an aggregation strategy for learning information-optimal embeddings for sets of data for both Bayesian inference and graph aggregation. We demonstrate that i) Fishnets neural summaries can be scaled optimally to an arbitrary number of data objects, ii) Fishnets aggregations are robust to changes in data distribution, unlike standard DeepSets, iii) Fishnets saturate Bayesian information content and extend to regimes where MCMC techniques fail and iv) Fishnets can be used as a drop-in aggregation scheme within GNNs. We show that by adopting a Fishnets aggregation scheme for message passing, GNNs can achieve state-of-the-art performance versus architecture size on ogbn-protein data over existing benchmarks with a fraction of learnable parameters and faster training time.
Rejected_Submission
/pdf/2a3649d38d0710b28227b308b52cf53bfa04b700.pdf
ICLR.cc/2024/Conference
m5m3nugttY
UniVis: A Universal Framework for Computer Vision Tasks
We propose $\texttt{UniVis}$, a universal learning framework to tame a wide range of computer vision tasks, including visual understanding (e.g., semantic segmentation), low-level image processing (e.g., denoising), and conditional image generation (e.g., edge-to-image synthesis). Built on a large-scale pre-trained text-to-image diffusion model, $\texttt{UniVis}$ unifies various vision tasks through a general framework using instruction tuning, where its unifying ability comes from the generative and reasoning power of the pre-trained model. Specifically, $\texttt{UniVis}$ defines a general image completion task wherein the input consists of a pair of input-output images corresponding to the target task and a query image, and the aim is to generate the ''missing'' data paired to the query. The paired images play the role of image instruction defining the task, e.g., semantic segmentation is represented by an RGB image and its segmentation mask. Our rationale is that each computer vision task can be characterized by its unique input-output pair, which informs our $\texttt{UniVis}$ model about the expected output for the given query. Furthermore, a task-level or instance-level prompt can be optionally added to provide text instruction. By unifying various visual tasks, $\texttt{UniVis}$ has the advantage of minimizing the inductive bias inherent in designing models for individual tasks, and it also suggests that the understanding of different visual tasks can be achieved through a shared generative model. In experiments, $\texttt{UniVis}$ showcases impressive performance on a broad set of standard computer vision benchmarks comprising ten tasks in total. The source code will be made publicly available.
Rejected_Submission
/pdf/74a320a1f1b10be4b500a105dbfc35558d08451c.pdf
ICLR.cc/2025/Conference
mltelO89Ve
From Demonstrations to Rewards: Alignment Without Explicit Human Preferences
One of the challenges of aligning large models with human preferences lies in both the data requirements and the technical complexities of current approaches. Predominant methods, such as RLHF, involve multiple steps, each demanding distinct types of data, including demonstration data and preference data. In RLHF, human preferences are typically modeled through a reward model, which serves as a proxy to guide policy learning during the reinforcement learning stage, ultimately producing a policy aligned with human preferences. In this paper, however, we propose a fresh perspective on learning alignment based on inverse reinforcement learning principles, where the optimal policy is still derived from reward maximization; instead of relying on preference data, we directly learn the reward model from demonstration data. This new formulation offers the flexibility to be applied even when only demonstration data is available, a capability that current RLHF methods lack, and it also shows that demonstration data offers more utility than conventional wisdom suggests. Our extensive evaluation, based on public reward benchmarks and the HuggingFace Open LLM Leaderboard, demonstrates that our approach compares favorably to state-of-the-art methods that rely solely on demonstration data.
Rejected_Submission
/pdf/3a26981da493551e02b69714207199e80d86246e.pdf
ICLR.cc/2025/Conference
jWQf6jk55V
Has My System Prompt Been Used? Large Language Model Prompt Membership Inference
Prompt engineering has emerged as a powerful technique for optimizing large language models (LLMs) for specific applications, enabling faster prototyping and improved performance, and giving rise to the interest of the community in protecting proprietary system prompts. In this work, we explore a novel perspective on prompt privacy through the lens of membership inference. We develop Prompt Detective, a statistical method to reliably determine whether a given system prompt was used by a third-party language model. Our approach relies on a statistical test comparing the distributions of two groups of generations corresponding to different system prompts. Through extensive experiments with a variety of language models, we demonstrate the effectiveness of Prompt Detective in both standard and challenging scenarios, including black-box settings. Our work reveals that even minor changes in system prompts manifest in distinct response distributions, enabling us to verify prompt usage with statistical significance.
Rejected_Submission
/pdf/c852ee0e863fd42c8c64b7e5131e92d5c6233547.pdf
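Prompt Detective's core, per the record above, is a two-sample test between groups of generations produced under two system prompts. Below is a minimal sketch of that testing template, assuming the generations have already been reduced to scalar scores (e.g., embedding similarities to a reference); the paper's actual statistic may differ.

```python
import numpy as np

def permutation_pvalue(a, b, n_perm=10_000, seed=0):
    """Two-sample permutation test on the difference of means.

    a, b: 1-D arrays of per-generation summary scores. The statistic
    (difference of means) is only an illustrative choice.
    """
    rng = np.random.default_rng(seed)
    a, b = np.asarray(a, float), np.asarray(b, float)
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(pooled[:len(a)].mean() - pooled[len(a):].mean())
        count += diff >= observed
    return (count + 1) / (n_perm + 1)

# Hypothetical scores from generations under two candidate prompts.
rng = np.random.default_rng(1)
print(permutation_pvalue(rng.normal(0.0, 1, 50), rng.normal(0.4, 1, 50)))
```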
ICLR.cc/2024/Conference
cUSNs8nGaV
GlucoBench: Curated List of Continuous Glucose Monitoring Datasets with Prediction Benchmarks
The rising rates of diabetes necessitate innovative methods for its management. Continuous glucose monitors (CGM) are small medical devices that measure blood glucose levels at regular intervals providing insights into daily patterns of glucose variation. Forecasting of glucose trajectories based on CGM data holds the potential to substantially improve diabetes management, by both refining artificial pancreas systems and enabling individuals to make adjustments based on predictions to maintain optimal glycemic range. Despite numerous methods proposed for CGM-based glucose trajectory prediction, these methods are typically evaluated on small, private datasets, impeding reproducibility, further research, and practical adoption. The absence of standardized prediction tasks and systematic comparisons between methods has led to uncoordinated research efforts, obstructing the identification of optimal tools for tackling specific challenges. As a result, only a limited number of prediction methods have been implemented in clinical practice. To address these challenges, we present a comprehensive resource that provides (1) a consolidated repository of curated publicly available CGM datasets to foster reproducibility and accessibility; (2) a standardized task list to unify research objectives and facilitate coordinated efforts; (3) a set of benchmark models with established baseline performance, enabling the research community to objectively gauge new methods' efficacy; and (4) a detailed analysis of performance-influencing factors for model development. We anticipate these resources to propel collaborative research endeavors in the critical domain of CGM-based glucose predictions. Our code is available online at github.com/IrinaStatsLab/GlucoBench.
ICLR 2024 poster
/pdf/43f545b8172aa5500f599f53c7466c1b3897e5d0.pdf
ICLR.cc/2024/Conference
GYAvwLviup
Aligning brain functions boosts the decoding of videos in novel subjects
Deep learning is leading to major advances in the realm of brain decoding from functional Magnetic Resonance Imaging (fMRI). However, the large inter-subject variability in brain characteristics has limited most studies to train models on one subject at a time. Consequently, this approach hampers the training of deep learning models, which typically requires very large datasets. Here, we propose to boost brain decoding by aligning brain responses to videos across subjects. Compared to the anatomically-aligned baseline, our method improves out-of-subject decoding performance by up to 75%. Moreover, it also outperforms classical single-subject approaches when less than 100 minutes of data is available for the tested subject. Furthermore, we propose a new multi-subject alignment method, which obtains comparable results to that of classical single-subject approaches while easing out-of-subject generalization. Finally, we show that this method aligns neural representations in accordance with brain anatomy. Overall, this study lays foundations to leverage extensive neuroimaging datasets and enhance the decoding of individuals with a limited amount of brain recordings.
Rejected_Submission
/pdf/eecd01972894c1f499522160907e92b69c051237.pdf
ICLR.cc/2024/Conference
dxJKLozjQl
Data Distribution Valuation with Incentive Compatibility
Data valuation is a class of techniques for quantitatively assessing the value of data for applications like pricing in data marketplaces. Existing data valuation methods define a value for a dataset $D$. However, in many use cases, users are interested not only in the value of a dataset, but in the distribution from which the dataset was sampled. For example, consider a buyer trying to evaluate whether to purchase data from different vendors. The buyer may observe (and compare) only a small sample from each vendor prior to purchasing the data, to decide which vendor's data distribution is most useful to the buyer. The core question of this work is how should we compare the values of data distributions from their samples? Under a Huber model for statistical heterogeneity across vendors, we propose a maximum-mean discrepancy (MMD)-based valuation method which enables theoretically principled and actionable policies for comparing data distributions from samples. We show theoretically that our method achieves incentive-compatibility, thus incentivizing the data vendors to report their data truthfully. We demonstrate the efficacy of our proposed valuation method against several existing baselines, on multiple real-world datasets (e.g., network intrusion detection, credit card fraud detection) and downstream applications (classification, regression).
Rejected_Submission
/pdf/1ca3cc25ea309cf3db33d4fdcb3ddbce6a8e2930.pdf
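The valuation method in the record above is built on maximum mean discrepancy between a vendor's sample and a buyer's reference sample. A self-contained sketch of the core statistic (unbiased MMD^2 with an RBF kernel) follows; the kernel and bandwidth choices are illustrative assumptions, not the paper's full valuation policy.

```python
import numpy as np

def mmd2_unbiased(X, Y, gamma=1.0):
    """Unbiased estimate of squared MMD with an RBF kernel.

    X: sample from a vendor's distribution, Y: reference sample.
    A smaller MMD to the buyer's reference suggests a more useful vendor.
    """
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
    n, m = len(X), len(Y)
    np.fill_diagonal(Kxx, 0.0)   # drop diagonal terms for unbiasedness
    np.fill_diagonal(Kyy, 0.0)
    return (Kxx.sum() / (n * (n - 1)) + Kyy.sum() / (m * (m - 1))
            - 2.0 * Kxy.mean())

rng = np.random.default_rng(1)
print(mmd2_unbiased(rng.normal(0, 1, (100, 2)), rng.normal(0.5, 1, (100, 2))))
```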
ICLR.cc/2025/Conference
qi5dkmEE91
Uncovering BioLOGICAL Motifs and Syntax via Sufficient and Necessary Explanations
In recent years, deep neural networks (DNNs) have excelled at learning from high-throughput genome-profiling experiments to predict transcription factor (TF) binding. TF binding is driven by sequence motifs, and explaining how and why DNNs make accurate predictions could help identify these motifs, as well as their logical syntax. However, the black-box nature of DNNs makes interpretation difficult. Most post-hoc methods evaluate the importance of each base pair in isolation, often resulting in noise since they overlook the fact that motifs are contiguous regions. Additionally, these methods fail to capture the complex interactions between different motifs. To address these challenges, we propose Motif Explainer Models (MEMs), a novel explanation method that uses sufficiency and necessity to identify important motifs and their syntax. MEMs excel at identifying multiple disjoint motifs across DNA sequences, overcoming limitations of existing methods. Moreover, by accurately pinpointing sufficient and necessary motifs, MEMs can reveal the logical syntax that governs genomic regulation.
Rejected_Submission
/pdf/e9ef4f6a064fae917ce966f94807785d7321dd7d.pdf
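The record above rests on sufficiency and necessity of sequence motifs. The toy sketch below conveys the intuition with hypothetical masking-based scores and a made-up counting "model"; MEMs themselves are learned explanation models, not this heuristic.

```python
def sufficiency_necessity(model, seq, start, end, baseline="N"):
    """Generic sufficiency / necessity scores for a candidate motif.

    model: any callable mapping a DNA string to a positive scalar.
    Sufficiency: keep only the motif (mask the rest) -- does the output
    stay high? Necessity: mask only the motif -- does the output drop?
    """
    masked_in = baseline * start + seq[start:end] + baseline * (len(seq) - end)
    masked_out = seq[:start] + baseline * (end - start) + seq[end:]
    full = model(seq)
    return model(masked_in) / full, 1.0 - model(masked_out) / full

# Hypothetical toy "model" that just counts the motif GATA.
toy = lambda s: 1e-3 + s.count("GATA")
print(sufficiency_necessity(toy, "CCGATATTGGAC", start=2, end=6))
```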
ICLR.cc/2025/Conference
yu1vqQqKkx
LICO: Large Language Models for In-Context Molecular Optimization
Optimizing black-box functions is a fundamental problem in science and engineering. To solve this problem, many approaches learn a surrogate function that estimates the underlying objective from limited historical evaluations. Large Language Models (LLMs), with their strong pattern-matching capabilities via pretraining on vast amounts of data, stand out as a potential candidate for surrogate modeling. However, directly prompting a pretrained language model to produce predictions is not feasible in many scientific domains due to the scarcity of domain-specific data in the pretraining corpora and the challenges of articulating complex problems in natural language. In this work, we introduce LICO, a general-purpose model that extends arbitrary base LLMs for black-box optimization, with a particular application to the molecular domain. To achieve this, we equip the language model with a separate embedding layer and prediction layer, and train the model to perform in-context predictions on a diverse set of functions defined over the domain. Once trained, LICO can generalize to unseen molecule properties simply via in-context prompting. LICO performs competitively on PMO, a challenging molecular optimization benchmark comprising 23 objective functions, and achieves state-of-the-art performance on its low-budget version PMO-1K.
ICLR 2025 Poster
/pdf/fdaf26360c905ffcbe97280c3dbfde67197855e0.pdf
ICLR.cc/2024/Conference
AcSChDWL6V
Distinguished In Uniform: Self-Attention Vs. Virtual Nodes
Graph Transformers (GTs) such as SAN and GPS are graph processing models that combine Message-Passing GNNs (MPGNNs) with global Self-Attention. They were shown to be universal function approximators, with two reservations: 1. The initial node features must be augmented with certain positional encodings. 2. The approximation is non-uniform: Graphs of different sizes may require a different approximating network. We first clarify that this form of universality is not unique to GTs: Using the same positional encodings, also pure MPGNNs and even 2-layer MLPs are non-uniform universal approximators. We then consider uniform expressivity: The target function is to be approximated by a single network for graphs of all sizes. There, we compare GTs to the more efficient MPGNN + Virtual Node architecture. The essential difference between the two model definitions is in their global computation method: Self-Attention vs. Virtual Node. We prove that neither model is a uniform-universal approximator, before proving our main result: Neither model’s uniform expressivity subsumes the other’s. We demonstrate the theory with experiments on synthetic data. We further augment our study with real-world datasets, observing mixed results which indicate no clear ranking in practice as well.
ICLR 2024 poster
/pdf/e1ac40eba4c49945d38feedf14666ad5b8b2974c.pdf
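The global computation step that distinguishes the MPGNN + Virtual Node architecture in the record above from self-attention can be pictured in a few lines: the virtual node pools all node states, is transformed, and broadcasts back. This is a simplified illustration (mean pooling, one MLP), not the exact layer studied in the paper.

```python
import torch

def virtual_node_layer(h, mlp):
    """One MPGNN-style global update via a virtual node.

    h: (num_nodes, dim) node features. The virtual node aggregates all
    nodes, is transformed, and its state is broadcast back -- the cheap
    O(n) counterpart to all-pairs self-attention.
    """
    v = mlp(h.mean(dim=0))   # virtual-node state from global pooling
    return h + v             # broadcast the global signal to all nodes

h = torch.randn(5, 16)
mlp = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.ReLU())
print(virtual_node_layer(h, mlp).shape)
```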
ICLR.cc/2024/Conference
lvSMIsztka
Faster Approximation of Probabilistic and Distributional Values via Least Squares
The family of probabilistic values, axiomatically-grounded in cooperative game theory, has recently received much attention in data valuation. However, it is often computationally expensive to compute exactly (exponential w.r.t. the number of data to valuate denoted by $n$). The existing generic estimator costs $O(n^2\log n)$ utility evaluations to achieve an $(\epsilon,\delta)$-approximation under the 2-norm, while faster estimators have been developed recently for special cases (e.g., empirically for the Shapley value and theoretically for the Banzhaf value). In this work, starting from the discovered connection between probabilistic values and least square regressions, we propose a Generic Estimator based on Least Squares (GELS) along with its variants that cost $O(n\log n)$ utility evaluations for many probabilistic values, largely extending the scope of this currently best complexity bound. Moreover, we show that each distributional value, proposed by Ghorbani et al. (2020) to alleviate the inconsistency of probabilistic values induced by using distinct databases, can also be cast as optimizing a similar least square regression. This observation leads to a theoretically-grounded framework TrELS (Training Estimators based on Least Squares) that can train estimators towards the specified distributional values without requiring any supervised signals. Particularly, the trained estimators are capable of predicting the corresponding distributional values for unseen data, largely saving the budgets required for running Monte-Carlo methods otherwise. Our experiments verify the faster convergence of GELS, and demonstrate the effectiveness of TrELS in learning distributional values. Our code is available at https://github.com/watml/fastpvalue.
ICLR 2024 poster
/pdf/47fa947a6543c13bae85e65a2696fb130668bd6c.pdf
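The starting point of the record above is the connection between probabilistic values and least-squares regressions. Its best-known instance is the Shapley value, recoverable exactly as an equality-constrained weighted least-squares fit over coalitions (the KernelSHAP view). The brute-force sketch below makes that connection concrete for small n; the paper's GELS estimator samples coalitions instead of enumerating them.

```python
import numpy as np
from math import comb
from itertools import combinations

def shapley_via_least_squares(utility, n):
    """Shapley values as a weighted least-squares fit (exact, small n).

    Solves min_w (Aw - b)' K (Aw - b) s.t. sum(w) = u(N) - u({}),
    where rows of A are coalition indicator vectors, b their utility
    gains over u({}), and K the diagonal Shapley kernel.
    """
    base, full = utility(()), utility(tuple(range(n)))
    A, b, k = [], [], []
    for s in range(1, n):
        w = (n - 1) / (comb(n, s) * s * (n - s))   # Shapley kernel weight
        for S in combinations(range(n), s):
            z = np.zeros(n); z[list(S)] = 1.0
            A.append(z); b.append(utility(S) - base); k.append(w)
    A, b, K = np.array(A), np.array(b), np.diag(k)
    # KKT system for the equality-constrained weighted least squares.
    H = np.block([[2 * A.T @ K @ A, np.ones((n, 1))],
                  [np.ones((1, n)), np.zeros((1, 1))]])
    rhs = np.concatenate([2 * A.T @ K @ b, [full - base]])
    return np.linalg.solve(H, rhs)[:n]

# Hypothetical 3-player game u(S) = |S|^2; symmetry gives value 3 each.
print(shapley_via_least_squares(lambda S: len(S) ** 2, 3))
```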
ICLR.cc/2025/Conference
9ut3QBscB0
Beyond Standardization – Putting the Normality in Normalization
The normal distribution plays a central role in information theory – it is at the same time the best-case signal and worst-case noise distribution, has the greatest representational capacity of any distribution, and offers an equivalence between uncorrelatedness and independence for joint distributions. Accounting for the mean and variance of activations throughout the layers of deep neural networks has had a significant effect on facilitating their effective training, but seldom has a prescription for precisely what distribution these activations should take, and how this might be achieved, been offered. Motivated by the information-theoretic properties of the normal distribution, we address this question and concurrently present normality normalization: a novel normalization layer which encourages normality in the feature representations of neural networks using the power transform and employs additive Gaussian noise during training. Our experiments comprehensively demonstrate the effectiveness of normality normalization, in regards to its generalization performance on an array of widely used model and dataset combinations, its strong performance across various common factors of variation such as model width, depth, and training minibatch size, its suitability for usage wherever existing normalization layers are conventionally used, and as a means to improving model robustness to random perturbations.
Rejected_Submission
/pdf/5bb96663fe7c58c2aaa22039c239ea7e488c98dd.pdf
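The record above names two ingredients: a power transform that pushes activations toward normality and additive Gaussian noise during training. The numpy sketch below combines scipy's Yeo-Johnson transform with standardization and training-time noise as one plausible reading; a real layer would learn or track the transform parameters across batches, and nothing here is the paper's exact formulation.

```python
import numpy as np
from scipy.stats import yeojohnson

def normality_normalize(acts, noise_std=0.1, train=True, rng=None):
    """Per-feature power transform toward normality + Gaussian noise.

    acts: (batch, features) activations. Each feature column is passed
    through a Yeo-Johnson transform (lambda fit by maximum likelihood)
    and then standardized; noise is added only at training time.
    """
    out = np.empty(acts.shape, dtype=float)
    for j in range(acts.shape[1]):
        col, _ = yeojohnson(acts[:, j])
        out[:, j] = (col - col.mean()) / (col.std() + 1e-5)
    if train:
        rng = rng or np.random.default_rng()
        out = out + rng.normal(0.0, noise_std, size=out.shape)
    return out

x = np.random.default_rng(0).lognormal(size=(64, 4))   # skewed activations
print(normality_normalize(x, train=False).std(axis=0))
```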
ICLR.cc/2025/Conference
uuriavczkL
Counterfactual Realizability
It is commonly believed that, in a real-world environment, samples can only be drawn from observational and interventional distributions, corresponding to Layers 1 and 2 of the *Pearl Causal Hierarchy*. Layer 3, representing counterfactual distributions, is believed to be inaccessible by definition. However, Bareinboim, Forney, and Pearl (2015) introduced a procedure that allows an agent to sample directly from a counterfactual distribution, leaving open the question of what other counterfactual quantities can be estimated directly via physical experimentation. We resolve this by introducing a formal definition of realizability, the ability to draw samples from a distribution, and then developing a complete algorithm to determine whether an arbitrary counterfactual distribution is realizable given fundamental physical constraints, such as the inability to go back in time and subject the same unit to a different experimental condition. We illustrate the implications of this new framework for counterfactual data collection using motivating examples from causal fairness and causal reinforcement learning. While the baseline approach in these motivating settings typically follows an interventional or observational strategy, we show that a counterfactual strategy provably dominates both.
ICLR 2025 Spotlight
/pdf/e0bf58c6302a6d41688eeb07e3b9bf4597424941.pdf
ICLR.cc/2025/Conference
Ty7xx0pn0a
DEQ-MPC : Deep Equilibrium Model Predictive Control
Incorporating task-specific priors within a policy or network architecture is crucial for enhancing safety and improving representation and generalization in robotic control problems. Differentiable Model Predictive Control (MPC) layers have proven effective for embedding these priors, such as constraints and cost functions, directly within the architecture, enabling end-to-end training. However, current methods often treat the solver and the neural network as separate, independent entities, leading to suboptimal integration. In this work, we propose a novel approach that co-develops the solver and the architecture, unifying the optimization solver and network inference problems. Specifically, we formulate this as a joint fixed-point problem over the coupled network outputs and necessary conditions of the optimization problem. We solve this problem in an iterative manner, where we alternate between network forward passes and optimization iterations. Through extensive ablations in various robotic control tasks, we demonstrate that our approach results in richer representations and more stable training, while naturally accommodating warm starting, a key requirement for MPC.
Rejected_Submission
/pdf/933181d248285b6854cf8daa51a8b821f4f97b2e.pdf
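The alternation the record above describes — interleaving network forward passes with optimizer iterations until a joint fixed point — can be sketched generically as below. Both callables are hypothetical stand-ins; the paper formulates the coupling as a single fixed-point problem rather than this naive loop.

```python
import numpy as np

def deq_mpc_step(network, opt_update, x, z0, u0, tol=1e-6, max_iter=100):
    """Alternate network inference and solver iterations to a fixed point.

    network(x, u): updated network outputs z given input x and current
    solver iterate u. opt_update(z, u): one optimizer iteration on the
    MPC problem parameterized by z. Both are illustrative stand-ins.
    """
    z, u = z0, u0
    for _ in range(max_iter):
        z_new = network(x, u)          # network forward pass
        u_new = opt_update(z_new, u)   # one optimization iteration
        if np.linalg.norm(u_new - u) + np.linalg.norm(z_new - z) < tol:
            break
        z, u = z_new, u_new
    return z, u

# Toy contractions standing in for both maps, to show convergence.
net = lambda x, u: 0.5 * u + x
opt = lambda z, u: u + 0.5 * (z - u)
print(deq_mpc_step(net, opt, x=np.array([1.0]), z0=np.zeros(1), u0=np.zeros(1)))
```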
ICLR.cc/2024/Conference
yisfNWUEsD
SCALE: Synergized Collaboration of Asymmetric Language Translation Engines
In this paper, we introduce SCALE, a collaborative framework that connects compact Specialized Translation Models (STMs) and general-purpose Large Language Models (LLMs) as one unified translation engine. By introducing the STM's translation into triplet in-context demonstrations, SCALE unlocks the refinement and pivoting abilities of the LLM, thus mitigating the language bias of the LLM and the parallel-data bias of the STM, enhancing LLM speciality without sacrificing generality, and facilitating continual learning without expensive LLM fine-tuning. Our comprehensive experiments show that SCALE significantly outperforms both few-shot LLMs (GPT-4) and specialized models (NLLB) in challenging low-resource settings. Moreover, in Xhosa-to-English translation, SCALE improves consistently by 4 BLEURT points without tuning the LLM and surpasses few-shot GPT-4 by 2.5 COMET points and 3.8 BLEURT points when equipped with a compact model of merely 600M parameters. SCALE can also effectively exploit the existing language bias of LLMs by using an English-centric STM as a pivot for translation between any language pair, outperforming few-shot GPT-4 by an average of 6 COMET points across eight translation directions. Furthermore, we provide an in-depth analysis of SCALE's robustness, translation characteristics, and latency costs, providing a solid foundation for future studies exploring the potential synergy between LLMs and more specialized translation models.
Rejected_Submission
/pdf/31bcbf9e12c0573e859e49c29de3e093622a76f5.pdf
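The triplet in-context demonstrations in the record above pair a source sentence with an STM draft and a reference translation, then ask the LLM to refine a new draft. The sketch below only shows that triplet structure; the template text, field names, and example sentences are invented, not SCALE's actual prompt format.

```python
def build_scale_style_prompt(demos, src, stm_draft):
    """Assemble a refinement prompt from triplet demonstrations.

    demos: list of (source, stm_translation, reference) triplets.
    The query reuses the STM's draft so the LLM can refine rather
    than translate from scratch.
    """
    parts = []
    for s, draft, ref in demos:
        parts.append(f"Source: {s}\nDraft: {draft}\nTranslation: {ref}\n")
    parts.append(f"Source: {src}\nDraft: {stm_draft}\nTranslation:")
    return "\n".join(parts)

# Hypothetical Xhosa-to-English example.
demo = [("Molo, unjani?", "Hello, how are you.", "Hello, how are you?")]
print(build_scale_style_prompt(demo, "Enkosi kakhulu.", "Thank you much."))
```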
ICLR.cc/2024/Conference
mavWQw7DnC
Explaining recommendation systems through contrapositive perturbations
Recommender systems are widely used to help users discover new items online. A popular method for recommendations is factorization models, which predict a user's preference for an item based on latent factors derived from their interaction history. However, explaining why a particular item was recommended to a user is challenging, and current approaches such as counterfactual explanations can be computationally expensive. In this paper, we propose a new approach called contrapositive explanations that leverages a different logical structure from counterfactual explanations. We show how contrapositive explanations can be used to explain recommendation systems by finding the minimum change that would have resulted in a different recommendation. Specifically, we present a methodology that focuses on finding an explanation in the form of "Because the user interacted with item j, we recommend item i to the user," which is easier to compute and find compared to traditional counterfactual approaches, which aim at "Because the user $\textbf{did not}$ interact with item j, we $\textbf{did not}$ recommend item i to the user." We evaluate our approach on several real-world datasets and show that it provides effective and efficient explanations compared to other existing methods.
Rejected_Submission
/pdf/5c7958d9412e6abd7657c52b73ee5b50e613b202.pdf
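One way to picture the "because the user interacted with item j" explanation from the record above: score the recommended item against each history item and report the interaction whose removal would hurt the score most. The sketch below uses a simple similarity-based stand-in for a factorization model; all data and the scoring rule are illustrative assumptions.

```python
import numpy as np

def contrapositive_explanation(user_hist, item_factors, rec_item):
    """Pick the interacted item that most supports a recommendation.

    Scores the recommended item by its similarity with each of the
    user's interacted items and returns the history item whose removal
    would lower the total score the most.
    """
    sims = item_factors[user_hist] @ item_factors[rec_item]
    return user_hist[int(np.argmax(sims))], sims

rng = np.random.default_rng(0)
V = rng.normal(size=(20, 8))            # hypothetical item factors
hist = [1, 4, 7]
j, contrib = contrapositive_explanation(hist, V, rec_item=9)
print(f"Recommended item 9 because the user interacted with item {j}")
```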
ICLR.cc/2025/Conference
hbS1t37PGM
Training Universal Text Encoders with Pair Relevance Classification Loss
Finetuning large language models (LLMs) using contrastive learning objectives has become the dominant approach for representation learning in general-purpose text embedding tasks. Our work seeks to enable going beyond strictly positive (or negative) pairs of text, to more fine-grained annotations that can capture the nuances of complex language tasks. We propose training text encoders with a simple pair classification loss that utilizes binary cross-entropy on relevance labels. When compared to the standard softmax-based loss for multi-class classification against multiple text alternatives, we find that training with our proposed loss improves the average score across 56 English language tasks of the Massive Text Embedding Benchmark (MTEB), while finetuning the same Meta-Llama-3-8B-Instruct model on the same mix of open datasets. Furthermore, our models excel in the Pair Classification and the Semantic Textual Similarity benchmarks, outperforming many models that are trained on more extensive data. Finally, thorough experiments using graded relevance data from TREC-DL 2023 during training demonstrate that binary cross-entropy provides generalization improvements that the softmax-based loss fails to achieve.
Rejected_Submission
/pdf/1fe8122019f30ec9cc08a06366557f33b4023e6f.pdf
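The loss in the record above scores each (query, candidate) pair independently with binary cross-entropy on its relevance label, rather than normalizing over alternatives with a softmax. A hedged torch sketch of that pair loss follows; the similarity function and temperature are illustrative choices, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def pair_bce_loss(q, d, labels, tau=0.05):
    """Binary cross-entropy on (query, candidate) relevance labels.

    q, d: (batch, dim) embeddings of queries and candidates; labels in
    {0, 1}, or graded relevance in [0, 1]. Unlike a softmax contrastive
    loss, each pair is scored independently, so fine-grained relevance
    labels plug in directly as BCE targets.
    """
    scores = F.cosine_similarity(q, d, dim=-1) / tau
    return F.binary_cross_entropy_with_logits(scores, labels.float())

q = F.normalize(torch.randn(8, 32), dim=-1)
d = F.normalize(torch.randn(8, 32), dim=-1)
labels = torch.randint(0, 2, (8,))
print(pair_bce_loss(q, d, labels))
```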