Dataset columns:
id: string (10 chars)
number: int64 (1 to 25.6k)
forum: string (10 chars)
title: string (5 to 214 chars)
abstract: string (26 to 4.31k chars)
content_TLDR: string (1 to 250 chars)
content_keywords: string (6 to 1.02k chars)
content_pdf: string (49 chars)
content_primary_area: string (21 classes)
content_supplementary_material: string (56 chars)
signatures: string (47 to 51 chars)
7QaXJE5nfU
25,054
7QaXJE5nfU
SupCL-GSS: Supervised Contrastive Learning with Guided Sample Selection
We present Supervised Contrastive Learning with Guided Sample Selection (SupCL-GSS), which leverages data maps to construct "hard" positives and "hard" negatives for text classification with pre-trained language models. In our method, we first measure training dynamics to identify the learning difficulty of each training sample with respect to a model---whether samples are easy-to-learn or ambiguous. We then construct positive and negative sets for supervised contrastive learning that allow guided sample selection based on both samples' learning difficulty and their class labels. We empirically validate our proposed method on various NLP tasks including sentence-pair classification (e.g., natural language inference, paraphrase detection, commonsense reasoning) and single-sentence classification (e.g., sentiment analysis, opinion mining), in both in- and out-of-domain settings. Our method achieves better performance and yields lower expected calibration errors compared to competitive baselines.
SupCL-GSS guides supervised contrastive learning with data-map–based difficulty to form hard positives/negatives by label and difficulty, improving accuracy and calibration (lower ECE) across diverse in-/out-of-domain NLP tasks.
['Supervised Contrastive Learning', 'Hard Negatives', 'Model Calibration']
/pdf/9a86a465d5d34721067f18a5e7fc43126aa691bf.pdf
unsupervised, self-supervised, semi-supervised, and supervised representation learning
null
['ICLR.cc/2026/Conference/Submission25054/Authors']
3yKCsXUso2
25,053
3yKCsXUso2
StoRM: Stochastic Region Mixup
A number of data-augmentation strategies have been proposed to alleviate problems such as over-fitting, distribution shifts, and adversarial attacks in deep neural networks. A growing body of literature has investigated computationally expensive techniques like the inclusion of saliency cues, diffusion processes, or even fractal-like noise to improve robustness and clean accuracy. Although these methods may be intuitively compelling, there is limited theoretical justification for such techniques, especially given their computational inefficiencies and other issues. Thus, in this paper, we take a detour from them and propose Stochastic Region Mixup (StoRM). We simply focus on increasing the diversity of augmented samples. We show that this strategy can be extended to outperform saliency-based methods with lower computational overhead on several key metrics, and that the key bottleneck in mixup-based methods is the dimensionality of the vicinal risk space. StoRM—a stochastic extension of Region Mixup—stochastically combines multiple regions from a plurality of images, leading to more diverse augmentations. We present empirical studies and theoretical analysis demonstrating that this richer augmentation space yields improved generalization and robustness while preserving label integrity through careful area-based mixing. Across benchmarks, StoRM consistently outperforms state-of-the-art mixup methods. The code will be released publicly upon acceptance.
null
['Mixup', 'Data augmentation', 'Vicinal Risk Minimization']
/pdf/4913903037308bd4cdf1decc613094744db5f872.pdf
other topics in machine learning (i.e., none of the above)
/attachment/a551e75e2e38ddf57c75ec040725095715ba9f1a.pdf
['ICLR.cc/2026/Conference/Submission25053/Authors']
bEejbORUI5
25,052
bEejbORUI5
ExPO-HM: Learning to Explain-then-Detect for Hateful Meme Detection
Hateful memes have emerged as a particularly challenging form of online abuse, motivating the development of automated detection systems. Most prior approaches rely on direct detection, producing only binary predictions. Such models fail to provide the context and explanations that real-world moderation requires. Recent Explain-then-Detect approaches, using Chain-of-Thought prompting or LMM agents, perform worse than simple SFT baselines, and even advanced post-training methods such as GRPO fail to close the gap. Our analysis identifies two key issues with such systems: important policy-relevant cues such as targets and attack types are not hypothesized by the model as likely explanations; and the binary reward signal is insufficient to guide reasoning. To address these challenges, we propose ExPO-HM (Explain-then-Detect Policy Optimization for Hateful Memes), inspired by the training and evaluation process of human annotators. ExPO-HM combines SFT warmup, GRPO with curriculum learning, and Conditional Decision Entropy (CDE) as both metric and reward for reasoning quality. Across three hateful meme benchmarks, ExPO-HM achieves state-of-the-art performance on binary detection, fine-grained classification, and reasoning quality, with up to 15\% and 17\% F1 improvement over the GRPO and DPO baselines, respectively. By moving hateful meme detection from simple binary alarms to explanation-driven detection, ExPO-HM provides accurate, interpretable, and actionable moderation support.
null
['Hateful Meme Detection']
/pdf/90e3a793aea9fd2b6a906314e91e8de346b5806e.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission25052/Authors']
mbu8EEnp3a
25,050
mbu8EEnp3a
Do LLMs Signal When They’re Right? Evidence from Neuron Agreement
Large language models (LLMs) commonly boost reasoning via sample-evaluate-ensemble decoders (e.g., majority voting), achieving label-free gains without ground truth. However, prevailing strategies score candidates using only external outputs such as token probabilities, entropies, or self-evaluations, and these signals can be poorly calibrated after post-training. We instead analyze internal behavior based on neuron activations and uncover three findings: (1) external signals are low-dimensional projections of richer internal dynamics; (2) correct responses activate substantially fewer unique neurons than incorrect ones throughout generation; and (3) activations from correct responses exhibit stronger cross-sample agreement, whereas incorrect ones diverge. Motivated by these observations, we propose Neuron Agreement Decoding (NAD), an unsupervised best-of-N method that selects candidates using activation sparsity and cross-sample neuron agreement, operating solely on internal signals and without requiring comparable textual outputs. NAD enables early correctness prediction within the first 32 generated tokens and supports aggressive early stopping. Across math and science benchmarks with verifiable answers, NAD matches majority voting; on open-ended coding benchmarks where majority voting is inapplicable, NAD consistently outperforms Avg@64. By pruning unpromising trajectories early, NAD reduces token usage by 99\% with minimal loss in generation quality, showing that internal signals provide reliable, scalable, and efficient guidance for label-free ensemble decoding.
null
['Neuron-Agreement Decoding (NAD)', 'Neuron activation patterns', 'Unsupervised answer selection', 'Chain-of-thought ensembling', 'Token efficiency']
/pdf/b4188ed87d8238e926d1ecc665363ba8ac4330b3.pdf
interpretability and explainable AI
null
['ICLR.cc/2026/Conference/Submission25050/Authors']
kByN4v0M3e
25,049
kByN4v0M3e
Recurrent Action Transformer with Memory
Transformers have become increasingly popular in offline reinforcement learning (RL) due to their ability to treat agent trajectories as sequences, reframing policy learning as a sequence modeling task. However, in partially observable environments (POMDPs), effective decision-making depends on retaining information about past events - something that standard transformers struggle with due to the quadratic complexity of self-attention, which limits their context length. One solution to this problem is to extend transformers with memory mechanisms. We propose the Recurrent Action Transformer with Memory (RATE), a novel transformer-based architecture for offline RL that incorporates a recurrent memory mechanism designed to regulate information retention. We evaluate RATE across a diverse set of environments: memory-intensive tasks (ViZDoom-Two-Colors, T-Maze, Memory Maze, Minigrid-Memory, and POPGym), as well as standard Atari and MuJoCo benchmarks. Our comprehensive experiments demonstrate that RATE significantly improves performance in memory-dependent settings while remaining competitive on standard tasks across a broad range of baselines. These findings underscore the pivotal role of integrated memory mechanisms in offline RL and establish RATE as a unified, high-capacity architecture for effective decision-making over extended horizons.
The paper proposes the Recurrent Action Transformer with Memory, a transformer model with recurrent memory, and a procedure for training it in memory-intensive environments in an offline RL setting.
['RL', 'Offline RL', 'Memory', 'Transformers', 'POMDP']
/pdf/f12dedfe8e5f78bdbcafe02b45269bd395a16e7d.pdf
reinforcement learning
/attachment/73f84c75ae7e38b617ea2aa219acb4de58b82d54.zip
['ICLR.cc/2026/Conference/Submission25049/Authors']
M9DSMVEqrq
25,048
M9DSMVEqrq
Chemical Priors at Scale: Efficient Foundation Models without Big Corpora
We achieve competitive molecular property prediction using up to two orders of magnitude fewer pretraining molecules by replacing generic masked language modeling with chemically-informed, task-conditioned self-supervision. Our **C**hemically **I**nformed **L**anguage **T**ransformer (**CILT**) learns from 300+ programmatically-derived chemical tasks (functional groups, substructure counts, molecular properties) paired with natural language descriptions. During pretraining, the model alternates between predicting masked SMILES tokens conditioned on task descriptions and predicting property values conditioned on molecules, creating a unified architecture for generation, regression, and classification driven by text prompts. This approach yields three key advantages. First, despite using orders of magnitude less data, we match state-of-the-art performance on MoleculeNet benchmarks. Second, the learned representations exhibit chemical interpretability: embeddings cluster by functional groups without explicit supervision, while attention mechanisms route from task descriptions to chemically-relevant atoms. Third, the model demonstrates predictable zero-shot generalization—adaptation speed correlates with semantic similarity between task descriptions, enabling rapid few-shot learning on unseen substructures. Our results demonstrate that structured domain knowledge, encoded through natural language, can substitute for scale in scientific foundation models---establishing a blueprint for data-efficient pretraining in chemistry and beyond.
null
['Molecular language modeling', 'chemically-informed self-supervision', 'scientific foundation models']
/pdf/329c0833732c091fb8b1376c3a2ed726dcafec3f.pdf
applications to physical sciences (physics, chemistry, biology, etc.)
null
['ICLR.cc/2026/Conference/Submission25048/Authors']
v0QOVSVPtq
25,047
v0QOVSVPtq
Exploring Diverse Generation Paths via Inference-time Stiefel Activation Steering
Language models often default to a narrow set of high-probability outputs, leaving their generation paths homogeneous and prone to mode collapse. Sampling-based strategies inject randomness but still struggle to guarantee diversity across multiple concurrent generation runs. We address this limitation by introducing STAR (**St**iefel-based **A**ctivation Steering for Diverse **R**easoning), a training-free, inference-time intervention method that transforms activation steering into an exploration engine. At each token, STAR collects the hidden activations of concurrent generation runs and optimizes multiple additive steering directions jointly on the Stiefel manifold. STAR maximizes the geometric volume of the steered activations, while the Stiefel manifold induces orthogonality of the steering interventions. This formulation explicitly promotes divergent activation vectors of concurrent generation runs, and implicitly promotes divergent generation trajectories. This manifold optimization formulation can be solved using a Riemannian gradient descent algorithm with convergence guarantees, but this algorithm is too time-consuming for real-time inference. To guarantee low latency, we further design a lightweight one-step update with an aggressive, closed-form stepsize. For test case generation and scientific discovery benchmarks, STAR consistently outperforms standard sampling methods, achieving greater diversity without sacrificing qualitative performance.
null
['activation steering', 'generation diversity', 'manifold optimization']
/pdf/4eb957af1cb78f43d0cbbcd5abb382f237f5c400.pdf
optimization
null
['ICLR.cc/2026/Conference/Submission25047/Authors']
KN2RD4fpnH
25,046
KN2RD4fpnH
Geometry of Nash Mirror Dynamics: Adaptive $\beta$-Control for Stable and Bias-Robust Self-Improving LLM Agents
Self‑improving agents learn by playing competitive, often non-transitive language games (e.g., generator–solver, proposer–verifier) where training can oscillate or drift toward undesirable behaviours. We study this scenario through the lens of reverse‑KL regularised Nash learning, showing how the regularisation strength $\beta$ shapes both where agents converge and how they get there. We derive a continuous‑time view of Nash Mirror Descent (Nash‑MD), revealing a simple geometry: trajectories are spirals on the simplex whose damping grows with $\beta$, while $\beta$ simultaneously pulls equilibria toward the reference policy—amplifying any existing biases. We prove last‑iterate convergence to the $\beta$‑regularised Nash equilibrium, quantify its first‑order shift from the unregularised solution, and link convergence speed to the spectrum of the linearised dynamics. Building on this geometry, we introduce two adaptive $\beta$ controllers: (i) a Hessian‑based rule that targets a desired damping–rotation ratio to accelerate without overshoot, and (ii) a bias‑based rule that caps measurable bias (e.g., output length, calibration, hallucination proxies) while retaining speed. On toy games (e.g. Rock–Paper–Scissors) and small open‑model reasoning benchmarks, our controllers deliver faster, more stable convergence with bounded bias, outperforming baselines. The result is a practical recipe: tune $\beta$ as a control knob to make self‑improving LLM agents both faster and safer.
null
['Large Language Models', 'Learning in Games']
/pdf/e74141d9139731e72e9f048ce45bed1c0775760f.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission25046/Authors']
kHqt0ZSbKT
25,045
kHqt0ZSbKT
Random Controlled Differential Equations
We introduce a training-efficient framework for time-series learning that combines random features with controlled differential equations (CDEs). In this approach, large randomly parameterized CDEs act as continuous-time reservoirs, mapping input paths to rich representations. Only a linear readout layer is trained, resulting in fast, scalable models with strong inductive bias. Building on this foundation, we propose two variants: (i) Random Fourier CDEs (RF-CDEs): these lift the input signal using random Fourier features prior to the dynamics, providing a kernel-free approximation of RBF-enhanced sequence models; (ii) Random Rough DEs (R-RDEs): these operate directly on rough-path inputs via a log-ODE discretisation, using log-signatures to capture higher-order temporal interactions while remaining stable and efficient. We prove that in the infinite-width limit, these models induce the RBF-lifted signature kernel and the rough signature kernel, respectively, offering a unified perspective on random-feature reservoirs, continuous-time deep architectures, and path-signature theory. We evaluate both models across a range of time-series benchmarks, demonstrating competitive or state-of-the-art performance. These methods provide a practical alternative to explicit signature computations, retaining their inductive bias while benefiting from the efficiency of random features. Code is publicly available at: \url{https://anonymous.4open.science/r/RandomSigJax-C768/}
null
['random features', 'time-series', 'path signatures', 'CDEs', 'RDEs', 'reservoir computing', 'kernels']
/pdf/c6f10a55b7e848fe113a2498139d97f18f6b691f.pdf
learning on time series and dynamical systems
null
['ICLR.cc/2026/Conference/Submission25045/Authors']
wWkyL8D9xd
25,044
wWkyL8D9xd
FastFlow: Accelerating The Generative Flow Matching Models with Bandit Inference
Flow-matching models deliver state-of-the-art fidelity in image and video generation, but the inherent sequential denoising process renders them slow. Existing acceleration methods like distillation, trajectory truncation, and consistency approaches are static, require retraining, and often fail to generalize across tasks. We propose FastFlow, a plug-and-play adaptive inference framework that accelerates generation in flow matching models. FastFlow identifies denoising steps that produce only minor adjustments to the denoising path and approximates them without using the full neural network models used for velocity predictions. The approximation utilizes finite-difference velocity estimates from prior predictions to efficiently extrapolate future states, enabling faster advancement along the denoising path at zero compute cost. This enables skipping computation at intermediate steps. We model the decision of how many steps to safely skip before requiring a full model computation as a multi-armed bandit problem. The bandit learns the optimal skips to balance speed with performance. FastFlow integrates seamlessly with existing pipelines and generalizes across image generation, video generation, and editing tasks. Experiments demonstrate a speedup of over $2.6\times$ while maintaining high-quality outputs.
Adaptive inference method for accelerating flow-matching-based visual generation.
['generative modelling', 'faster inference.']
/pdf/7ba33f4a01b10cedfd0eb078d58d749ac2ca7924.pdf
generative models
null
['ICLR.cc/2026/Conference/Submission25044/Authors']
Q3t0QBVFSG
25,042
Q3t0QBVFSG
Not Just a Flash in Time: Interpreting Long Event Streams through Language
Event cameras operate asynchronously with microsecond-level temporal precision and generate sparse event streams, enabling low-latency visual perception under high dynamic range conditions. However, current multimodal large language models (MLLMs) remain suboptimal when handling such data: they either fail to effectively interpret event streams or are limited to very short temporal sequences. To address this problem, we propose a unified approach for long event-stream–text understanding. This method employs an adaptive compression mechanism that significantly reduces input volume while preserving key motion and structural cues, thereby supporting long-term cross-modal reasoning. The training pipeline adopts a two-stage optimization process: the model is first guided to develop representational capacity for streaming data, followed by cross-modal alignment to enhance semantic consistency between event and textual modalities. To handle the substantial temporal information inherent in long event streams, the model uses text-guided cross-modal queries to select salient features and combines hierarchical clustering with similarity scoring to extract representative event segments. During training, a large-scale event–text aligned dataset is curated and constructed, facilitating more effective embedding of event features within the semantic space of language models. In addition, we establish a comprehensive benchmark covering a diverse set of tasks including reasoning, captioning, classification, temporal localization, and moment retrieval. Experimental results demonstrate that the proposed approach outperforms existing state-of-the-art MLLMs in both descriptive accuracy and semantic understanding on long-duration event streams. All datasets, code, and models will be released publicly.
We use spatiotemporal compression and two-stage cross-modal optimization to condense long event streams, and we build a novel event–text dataset and multi-task benchmark, boosting descriptive accuracy and semantic understanding.
['event', 'multimodal learning', 'long sequence', 'language and vision']
/pdf/d4be2f033e08306ffcebd24eb4d017c3b5370a99.pdf
applications to computer vision, audio, language, and other modalities
/attachment/d970829147853035c5ffaf6b2840905d570003b5.zip
['ICLR.cc/2026/Conference/Submission25042/Authors']
JTnzojFUz7
25,040
JTnzojFUz7
Mask What Matters: Controllable Text-Guided Masking for Self-Supervised Medical Image Analysis
The scarcity of annotated data in specialized domains such as medical imaging presents significant challenges to training robust vision models. While self-supervised masked image modeling (MIM) offers a promising solution, existing approaches largely rely on random high-ratio masking, leading to inefficiency and poor semantic alignment. Moreover, region-aware variants typically depend on reconstruction heuristics or supervised signals, limiting their adaptability across tasks and modalities. We propose Mask What Matters, a controllable text-guided masking framework for self-supervised medical image analysis. By leveraging vision-language models for prompt-based region localization, our method flexibly applies differentiated masking to emphasize diagnostically relevant regions while reducing redundancy in background areas. This controllable design enables better semantic alignment, improved representation learning, and stronger cross-task generalizability. Comprehensive evaluation across multiple medical imaging modalities, including brain MRI, chest CT, and lung X-ray, shows that Mask What Matters consistently outperforms existing MIM methods (e.g., SparK), achieving gains of up to +3.1 percentage points in classification accuracy, +1.3 in box average precision (BoxAP), and +1.1 in mask average precision (MaskAP) for detection. Notably, it achieves these improvements with substantially lower overall masking ratios (e.g., 40% vs. 70%), highlighting its efficiency and flexibility. This work demonstrates that controllable, text-driven masking can enable semantically aligned and generalizable self-supervised learning, advancing the development of robust vision models for medical image analysis.
null
['Medical Image Analysis', 'Self-Supervised Learning', 'Vision-Language Models']
/pdf/64f4c681ae020807e699a88ea2cc85306159cb3b.pdf
unsupervised, self-supervised, semi-supervised, and supervised representation learning
/attachment/87144eebdebc650ed9a538b39faafe12d7c435ed.zip
['ICLR.cc/2026/Conference/Submission25040/Authors']
Eaf5emUUd6
25,039
Eaf5emUUd6
Towards Understanding Feature Learning in Parameter Transfer
Parameter transfer is a central paradigm in transfer learning, enabling knowledge reuse across tasks and domains by sharing model parameters between upstream and downstream models. However, when only a subset of parameters from the upstream model is transferred to the downstream model, there remains a lack of theoretical understanding of the conditions under which such partial parameter reuse is beneficial and of the factors that govern its effectiveness. To address this gap, we analyze a setting in which both the upstream and downstream models are ReLU convolutional neural networks (CNNs). Within this theoretical framework, we characterize how the inherited parameters act as carriers of universal knowledge and identify key factors that amplify their beneficial impact on the target task. Furthermore, our analysis provides insight into why, in certain cases, transferring parameters can lead to lower test accuracy on the target task than training a new model from scratch. Numerical experiments and real-world data experiments are conducted to empirically validate our theoretical findings.
null
['Parameter transfer', 'feature learning theory', 'transfer learning', 'negative transfer']
/pdf/bf06465af47d7a5fef3c7c4c8371dc94bc674103.pdf
transfer learning, meta learning, and lifelong learning
null
['ICLR.cc/2026/Conference/Submission25039/Authors']
kTMrlXV1my
25,038
kTMrlXV1my
Style Decomposition and Content Preservation for Artistic Style Transfer
Artistic style transfer is a crucial task that aims to transfer the artistic style of a style image to a content image, generating a new image with preserved content and a distinct style. With the advancement of image generation methods, significant progress has been made in artistic style transfer. However, existing methods face two key challenges: i) style ambiguity: an inadequate definition of style makes it difficult to transfer certain style attributes; ii) content nonrestraint: the lack of effective constraint information causes stylistic features of the content, such as color and texture, to seriously undermine content preservation. To address these challenges, improving the quality of style transfer while ensuring effective content preservation, we propose SDCP, Style Decomposition and Content Preservation for Artistic Style Transfer, which achieves effective style transfer through style decomposition and content preservation. First, in contrast to previous work, we propose a style decomposing module that effectively represents style based on three basic attributes (brushstrokes, color, and texture), enabling a clear style definition. Second, we design a content preserving module that employs line drawings as constraints to discard style elements while preserving content, utilizing cross-modal alignment to preserve semantics. Finally, all representations are injected into the denoising U-Net through a conditional injection mechanism. Quantitative and qualitative experiments demonstrate that SDCP outperforms current state-of-the-art models.
null
['Style Transfer', 'Decomposing', 'Diffusion']
/pdf/9542a3238556d8ef077a48564b583d9cfd954c77.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission25038/Authors']
e0qqNM7GtY
25,037
e0qqNM7GtY
A Theory of Training Parameter-Shared Quantum Neural Networks from a Bayesian Perspective
The objective function landscape of Quantum Neural Networks (QNNs) is both numerically and theoretically demonstrated to be highly non-convex, exhibiting numerous local optima. This raises an important question regarding the efficiency of training QNNs: can the optimization error systematically converge to a target threshold as the number of optimization iterations grows polynomially with the number of qubits $n$? In this work, we explore this question by proposing a theoretical framework from a Bayesian perspective. We focus on the trainability of Parameter-Shared QNNs (PS-QNNs), a widely used model for solving combinatorial optimization problems. Our first result shows that noise-free PS-QNNs with a depth of $\tilde{\mathcal{O}}\left(\sqrt{\log n}\right)$ can be trained efficiently. Furthermore, we demonstrate that if each quantum gate is influenced by a $q$-strength local Pauli channel, the noisy PS-QNN with a depth of $\mathcal{O}\left(\log n/\log(1/q)\right)$ can also be trained efficiently. These results provide valuable insights into the performance of QNNs, particularly in the context of the noisy intermediate-scale quantum era.
We rigorously provide the network depth at which parameter-shared quantum neural networks can be trained efficiently, resolving a long-standing open question.
['Quantum Neural Network', 'Trainability', 'Bayesian Optimization', 'Parameter-Shared', 'Random Matrix Theory']
/pdf/52193524732380f300a8482ab378d2ce3162132d.pdf
optimization
/attachment/648f51ff0a5baab0a4ecb1026f921438fb5d03b9.pdf
['ICLR.cc/2026/Conference/Submission25037/Authors']
B2Neq64sm6
25,036
B2Neq64sm6
Direct Advantage Estimation for Scalable and Sample-efficient Deep Reinforcement Learning
Direct Advantage Estimation (DAE) has been shown to improve the sample efficiency of deep reinforcement learning. However, its reliance on full environment observability limits applicability in realistic settings. In the present work, we (i) extend DAE to partially observable domains with minimal modifications, and (ii) reduce its computational overhead by introducing discrete latent dynamics models to approximate transition probabilities efficiently. We evaluate our approach on the Arcade Learning Environment and find that DAE scales with function approximator capacity while maintaining high sample efficiency.
null
['deep reinforcement learning', 'advantage estimation', 'arcade learning environment']
/pdf/1f0e495161e9cae6d759f304564595956a54dc5a.pdf
reinforcement learning
/attachment/928848115ee172d80d03b2b51635a16413f948ea.zip
['ICLR.cc/2026/Conference/Submission25036/Authors']
XQlcvkzMuv
25,035
XQlcvkzMuv
Split, Not Spilled: Practical Obfuscation-Based Privacy-Preserving Split Learning
Split Learning (SL) partitions a deep neural network between client and server, enabling collaborative training while reducing the client’s computational load. However, it has been shown that the intermediate activations (“smashed data”) of the client’s model, shared with the server, leak sensitive information. Existing defenses are limited: many assume only passive adversaries, degrade accuracy significantly, or have already been bypassed by recent reconstruction attacks. In this work, we propose SEAL, a client-side obfuscation framework for SL. By applying secret, client-specific periodic transforms, SEAL creates an exponentially large, unsearchable function space that prevents reconstruction of smashed data. We rigorously characterize the class of periodic functions that yield orthogonal, reversible, and numerically stable transforms, ensuring both security and utility preservation. Extensive experiments on image and text benchmarks show that SEAL withstands state-of-the-art reconstruction attacks while maintaining high accuracy.
null
['Split Learning', 'Discrete Periodic Transform', 'Collaborative Framework']
/pdf/ab2d19fef5a7a9b775c93f536b35421fb6223cd9.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission25035/Authors']
YkfhTzq3hL
25,034
YkfhTzq3hL
Hallucination Benchmark for Speech Foundation Models
Hallucinations in automatic speech recognition (ASR) systems refer to fluent and coherent transcriptions produced by neural ASR models that are completely unrelated to the underlying acoustic input (i.e., the speech signal). While similar to conventional decoding errors in potentially compromising the usability of transcriptions for downstream applications, hallucinations can be more detrimental due to their preservation of syntactically and semantically plausible structure. This apparent coherence can mislead subsequent processing stages and introduce serious risks, particularly in critical domains such as healthcare and law. Conventional evaluation metrics are primarily centered on error-based metrics and fail to distinguish between phonetic inaccuracies and hallucinations. Consequently, there is a critical need for new evaluation frameworks that can effectively identify and assess models with a heightened propensity for generating hallucinated content. To this end, we introduce SHALLOW, the first benchmark framework that systematically categorizes and quantifies hallucination phenomena in ASR along four complementary axes: lexical, phonetic, morphological, and semantic. We define targeted metrics within each category to produce interpretable profiles of model behavior. Through evaluation across various architectures and speech domains, we have found that SHALLOW metrics correlate strongly with word error rate (WER) when recognition quality is high (i.e., low WER). Still, this correlation weakens substantially as WER increases. SHALLOW, therefore, captures fine-grained error patterns that WER fails to distinguish under degraded and challenging conditions. Our framework supports specific diagnosis of model weaknesses and provides feedback for model improvement beyond what aggregate error rates can offer.
This paper introduces a framework that categorizes ASR hallucinations into 4 categories, namely lexical, phonetic, morphological, and semantic hallucinations, to provide more detailed error analysis beyond standard WER.
['Hallucination', 'Automatic Speech Recognition', 'SpeechLLM', 'Speech Foundation Model', 'Benchmark']
/pdf/eff6b9578c8a7d76e0306e9b6ccfca58dc401dc3.pdf
applications to computer vision, audio, language, and other modalities
/attachment/be05564f42cfec91179eb82a0f1a144ea003a266.zip
['ICLR.cc/2026/Conference/Submission25034/Authors']
zu18YgtWfK
25,033
zu18YgtWfK
Efficient Formulation and Quantum Optimization of Combinatorial Problems through Parametrized Hamiltonians
Combinatorial optimization problems (COPs) represent a promising application domain for quantum computing, yet current quantum optimization approaches treat each problem instance independently, requiring expensive re-optimization for every configuration. In this paper we propose a different paradigm inspired by quantum many-body physics, where parameterized Hamiltonians naturally encode system variations under changing global conditions. First, our parametrized COP formulation, where a global parameter changes the problem configuration, allows us to model parameterized problems and opens access to problem classes that were previously difficult and inefficient to formulate. Second, we provide a concrete algorithmic framework, using implicit differentiation to solve these parameterized COP classes efficiently. Drawing from techniques used in quantum susceptibility calculations, our method propagates optimal circuit parameters across different Hamiltonian configurations without expensive re-optimization. We demonstrate this approach by finding globally optimal configurations in Max-Cut problems, where the Hamiltonian parameter controls edge weight distributions. Our implementation systematically generates parameterized problem families from Max-Cut, Knapsack, and Portfolio Optimization domains and translates them into quantum formulations suitable for variational algorithms. Experiments on simulated quantum hardware demonstrate substantial computational speedups compared to independent optimization approaches.
We introduce parameterized Hamiltonians as a framework for combinatorial optimization, enabling new problem types and efficient global optimization via QAOA with implicit differentiation.
['Quantum optimization', 'Parameterized Hamiltonians', 'Combinatorial optimization', 'Implicit differentiation', 'Quantum Approximate Optimization Algorithm (QAOA)', 'Optimization', 'Physics-inspired machine learning', 'Software frameworks for quantum optimization']
/pdf/368b55da76837aed33085015f0c7257887ac4373.pdf
optimization
null
['ICLR.cc/2026/Conference/Submission25033/Authors']
kFHg8YIi2M
25,031
kFHg8YIi2M
Certifying Graph Neural Networks Against Label and Structure Poisoning
Robust machine learning for graph-structured data has made significant progress against test-time attacks, yet certified robustness to poisoning – where adversaries manipulate the training data – remains largely underexplored. For image data, state-of-the-art poisoning certificates rely on partitioning-and-aggregation schemes. However, we show that these methods fail when applied in the graph domain due to the inherent label and structure sparsity found in common graph datasets, making effective graph-partitioning difficult. To address this challenge, we propose a novel semi-supervised learning framework called deep Self-Training Graph Partition Aggregation (ST-GPA), which enriches each graph partition with informative pseudo-labels and synthetic edges, enabling effective certification against node-label and graph-structure poisoning under sparse conditions. Our method is architecture-agnostic, scales to large numbers of partitions, and consistently and significantly improves robustness guarantees against both label and structure poisoning across multiple benchmarks, while maintaining strong clean accuracy. Overall, our results establish a promising direction for certifiably robust learning on graph-structured data against poisoning under sparse conditions.
We make certifying robustness in graph learning against node-label and structure poisoning work.
['graph learning', 'robustness', 'robustness certification', 'graph machine learning', 'poisoning', 'provable robustness', 'self-training', 'semi-supervised learning', 'graph neural networks']
/pdf/70e778bf19ac4c5e01c81dc3fed70d8503fa2862.pdf
learning on graphs and other geometries & topologies
null
['ICLR.cc/2026/Conference/Submission25031/Authors']
9cLPurIZMj
25,030
9cLPurIZMj
Memory, Benchmark & Robots: A Benchmark for Solving Complex Tasks with Reinforcement Learning
Memory is crucial for enabling agents to tackle complex tasks with temporal and spatial dependencies. While many reinforcement learning (RL) algorithms incorporate memory, the field lacks a universal benchmark to assess an agent's memory capabilities across diverse scenarios. This gap is particularly evident in tabletop robotic manipulation, where memory is essential for solving tasks with partial observability and ensuring robust performance, yet no standardized benchmarks exist. To address this, we introduce MIKASA (Memory-Intensive Skills Assessment Suite for Agents), a comprehensive benchmark for memory RL, with three key contributions: (1) we propose a comprehensive classification framework for memory-intensive RL tasks, (2) we collect MIKASA-Base -- a unified benchmark that enables systematic evaluation of memory-enhanced agents across diverse scenarios, and (3) we develop MIKASA-Robo -- a novel benchmark of 32 carefully designed memory-intensive tasks that assess memory capabilities in tabletop robotic manipulation. Our work introduces a unified framework to advance memory RL research, enabling more robust systems for real-world use.
A benchmark of 32 memory tasks for tabletop robotic manipulation, a benchmark for testing the memory of RL agents, and a classification of memory tasks in RL by type of memory usage.
['Memory', 'Benchmark', 'Robots', 'POMDP', 'RL']
/pdf/56108186c8f3b0b6cfd9080baf5c9db77e5f287c.pdf
applications to robotics, autonomy, planning
/attachment/6d7a5c8d8e2608d8e613e3f3ed5b74c8f6c58e29.zip
['ICLR.cc/2026/Conference/Submission25030/Authors']
JrZMFC6Jgo
25,027
JrZMFC6Jgo
THE BLACK–WHITE-BOX OPTIMIZATION NETWORK
We introduce a \textit{Black--White-Box Optimization Network} and its first instance, \textit{Tensor-Train Creator (TTC)}, which couples Ising-style solves, a factorization-machine surrogate, and tensor-train (PROTES) search. Typed couplings, lattice realignment, and warm starts cut oracle calls and time-to-target. On black-box benchmarks and Max-Cut, TTC attains better values under the same evaluation budgets.
In this paper, we introduce TTC, a derivative-free optimization framework that couples HOFM surrogates, Ising solvers, and Tensor-Train search.
['Derivative-free optimization', 'Combinatorial optimization', 'Higher-order energy', 'HUBO', 'QUBO', 'Higher-Order Factorization Machines (HOFM)', 'Ising seeding', 'Tensor-Train (TT)']
/pdf/519521e3b849c26ffef02618bec007f1207bbee6.pdf
optimization
null
['ICLR.cc/2026/Conference/Submission25027/Authors']
HfiNG4QCFs
25,026
HfiNG4QCFs
Asymmetric Effects of Self-Corrective Learning on Chain-of-Thought Reasoning for Efficient Policy Adaptation
Recent advances in language model (LM)-powered agents have demonstrated the potential to tackle complex embodied tasks by grounding the models’ commonsense world knowledge in the interactive physical environments in which the agents operate. However, these LM-based agents' adaptation to a stream of diverse tasks over time remains challenging, particularly under limited supervision and resource constraints. In this paper, we present BiCL, an embodied task adaptation framework that addresses the problem of continual LM finetuning across diverse tasks and adaptation stages using only a small dataset per task and a small LM (i.e., with 0.5B parameters). We devise bidirectional CoT learning, which jointly optimizes chain-of-thought (CoT) reasoning and reflexive reasoning through per-task bidirectional supervision: few-shot CoT guidance and rationale-wise correction. The latter enables the model to revise its prior rationale trajectories for new tasks, while the former strengthens multi-step task-specific reasoning through minimal demonstrations. This dual optimization allows the agent to adapt more efficiently through forward knowledge transfer over time, ultimately yielding asymmetric effects by fostering robust CoT reasoning at inference without requiring explicit reflection. Furthermore, we implement rationale-wise test-time scaling, a mechanism that dynamically adjusts the depth of CoT reasoning based on the model’s confidence in actions inferred from its own rationales. Through extensive experiments on VirtualHome and ALFWorld, we demonstrate performance superiority over other LM-based planning and continual task adaptation approaches, while achieving strong efficiency in computation, data usage and model parameters.
null
['embodied agent', 'task adaptation']
/pdf/a14d24a6b85dad429d9b69e79458fe35273f3e86.pdf
applications to robotics, autonomy, planning
/attachment/a6ed70894f881c08db1a963b2bea73e44e40fb78.zip
['ICLR.cc/2026/Conference/Submission25026/Authors']
ebbVFo9r4B
25,025
ebbVFo9r4B
Object-level self-distillation with bounding-box weak supervision improves vision pretraining
Self-distillation has become a central paradigm for pretraining vision transformers (ViTs). Existing approaches typically operate at the image level and assume that different augmentations of the same image preserve semantic content to be distilled. This premise breaks down in complex scenes with multiple objects with randomly sampled data augmentations. To tackle this, we introduce ODIS (Object-level Self-Distillation), a new framework that refines the self-distillation objective to the level of individual objects using bounding boxes that encapsulate objects. ODIS leverages object-aware cropping to ensure that teacher and student views depict the same object, and employs masked attention to focus the learning signal on objects. Applied to ImageNet-1K, ODIS outperforms image-level distillation methods such as iBOT across both image-level and patch-level benchmarks, and its features transfer better to downstream classification and retrieval tasks. Moreover, ODIS is robust to bounding box noise: using two different off-the-shelf extractors, it consistently improves over SOTA baselines. Our results highlight the importance of object-centric supervision in scalable representation learning and demonstrate how pretrained tools can be integrated into distillation pipelines to enhance generalization.
We propose a weakly-supervised pretraining approach for vision foundation models that shifts the self-distillation granularity from whole images to individual objects.
['object-centric learning', 'vision pretraining', 'weakly-supervised learning']
/pdf/7d05fa7d57229263ed8f47758c1d1886ffe36648.pdf
unsupervised, self-supervised, semi-supervised, and supervised representation learning
null
['ICLR.cc/2026/Conference/Submission25025/Authors']
ijhhFHvWS6
25,024
ijhhFHvWS6
Lost in Real-World Scenarios: Concretization Disrupts LLM Logical Reasoning
Although large reasoning models have attracted significant attention, recent studies reveal that even minor variations in input formulation can lead to substantial inconsistencies in reasoning outcomes, underscoring their fragility in real-world scenarios. To systematically investigate this issue, we propose a concretization framework that automatically translates clean reasoning logic into concrete contexts with challenging formulations. In this framework, two translators are trained via a dual-learning approach. The first converts formal language templates into natural language puzzles, guided by a difficulty-aware reward that promotes the exploration of harder formulations. The second translates puzzles back into templates, with isomorphism verification ensuring the consistency of underlying reasoning logic. Applying this framework, we construct extensive paired datasets of formal language templates and natural language puzzles. Through evaluation, we observe a sharp decline in LLM reasoning performance when shifting from formal templates to natural language puzzles. To uncover the underlying causes, we conduct an in-depth analysis of how tokens derived from formal templates and natural language puzzles influence the final answers. This analysis reveals two primary sources of degradation: dispersed reasoning attention across non-essential tokens and conflicts introduced by alternative formulations. To address these issues, we propose a prompt-based approach that instructs LLMs to abstract reasoning logic from concrete contexts before attempting direct solutions, and a training-based approach that further strengthens LLMs’ abstraction ability. Experimental results show that our methods improve LLM performance on natural language puzzles by up to 56.2\%, nearly eliminating the performance loss induced by concretization.
null
['Large Language Models', 'Reasoning Robustness', 'Input Formulation', 'Logical Reasoning']
/pdf/46f20600bdcefda4791216bfdb747705c8a6ce5f.pdf
foundation or frontier models, including LLMs
/attachment/71262cf908b5af022ca61e59e3dfebaf0796cca8.zip
['ICLR.cc/2026/Conference/Submission25024/Authors']
n9m13pabbk
25,021
n9m13pabbk
FraIR: Fourier Recomposition Adapter for Image Restoration
Restoring high-quality images from degraded inputs is a core challenge in computer vision, especially under diverse or compound distortions. While large-scale all-in-one models offer strong performance, they are computationally expensive and generalize poorly to unseen degradations. Parameter-Efficient Transfer Learning (PETL) provides a scalable alternative, but most methods operate in the spatial domain and struggle to adapt to frequency-sensitive artifacts like blur, noise, or compression. We propose \textbf{FraIR}, a Fourier-based Recomposition Adapter for image restoration that enables efficient and expressive adaptation in the spectral domain. FraIR applies a 1D Fourier Transform to decompose token features into frequency components, performs low-rank adaptation via spectral projections with learnable reweighting, and reconstructs the adapted signal using an inverse transform gated by task-specific modulation. Integrated as plug-and-play modules within Transformer layers, FraIR is reparameterizable for zero-latency inference and requires less than 0.5% additional parameters. Extensive experiments across denoising, deraining, super-resolution, and hybrid-degradation benchmarks show that FraIR outperforms prior PETL methods and matches or exceeds fully fine-tuned baselines, demonstrating strong generalization with minimal cost. Unlike prior Fourier-based approaches that focus on generative modeling or static modulation, FraIR offers dynamic, degradation-aware recomposition in frequency space for efficient restoration.
FraIR introduces a Fourier-domain, degradation-aware adapter for efficient transfer learning in image restoration, achieving state-of-the-art performance with minimal parameter overhead and zero inference cost.
['fourier adapter', 'parameter-efficient transfer learning', 'image restoration', 'degradation-aware gating', 'spectral modulation']
/pdf/f79d3229bcf905db3112c768e4bee65b3d2ec773.pdf
transfer learning, meta learning, and lifelong learning
null
['ICLR.cc/2026/Conference/Submission25021/Authors']
TqG3g75Ni6
25,019
TqG3g75Ni6
Prototypical Knowledge Transfer for Multi-Scenario Recommendation with Optimal Transport
Modern apps often need to provide personalized recommendations across various scenarios, such as the homepage, local page, and live stream on TikTok. User behaviors in these scenarios differ, resulting in diverse data distributions. To effectively handle varying distributions and enhance performance, current Multi-Scenario Recommendation (MSR) methods usually employ parameter sharing for knowledge transfer across scenarios. However, when there are significant differences in scenario distributions, direct parameter sharing fails to achieve effective knowledge transfer and feature alignment. In this paper, we propose a Prototypical Knowledge Transfer (PKT) method with Optimal Transport for Multi-Scenario Recommendation. Specifically, PKT first employs the Multi-gate Mixture of Experts (MMoE) as a shared feature extractor across all scenarios. We then introduce a prototype layer as an intermediary distribution to serve as a bridge for knowledge transfer between different scenario distributions. Furthermore, we leverage Optimal Transport to facilitate efficient knowledge transfer from scenario distributions to prototypes. To extract scenario-specific features, each scenario is equipped with its own expert network and utilizes the LHUC architecture to enhance scenario-specific features. We conducted offline experiments on two datasets and deployed the method on a video platform, followed by online A/B testing.
null
['Multi-Scenario Recommendation', 'Optimal Transport']
/pdf/00a89dacb3c68220f55f2fdea1c80e062e154b51.pdf
unsupervised, self-supervised, semi-supervised, and supervised representation learning
null
['ICLR.cc/2026/Conference/Submission25019/Authors']
ubAlIOmDoy
25,017
ubAlIOmDoy
Finding the Thread: Context-Driven Incremental Compression for Multi-Turn Dialogue
Modern conversational agents condition on an ever-growing dialogue history at each turn, incurring redundant re-encoding and attention costs that grow with conversation length. Naive truncation or summarization improves efficiency but degrades fidelity, and existing context compressors lack mechanisms for cross-turn memory sharing or revision, causing information loss and compounding errors over long dialogues. We revisit context compression under conversational dynamics and empirically demonstrate its fragility. To address both the efficiency and robustness problems, we introduce Context-Driven Incremental Compression (C-DIC), which treats a conversation as interleaved contextual threads and stores revisable per-thread compression states in a single, compact dialogue memory. At each turn, a lightweight retrieve → revise → write-back loop shares information across turns and corrects stale memories, stabilizing behavior over long-term dialogue. A lightweight, \emph{gradient-free} policy is proposed to dynamically manage this memory, adapting on-the-fly as conversational contexts evolve without test-time optimization. In addition, we adapt truncated backpropagation-through-time (TBPTT) to our multi-turn setting, learning cross-turn contextual dependencies without full-history backpropagation. Extensive experiments on long-form dialogue benchmarks demonstrate superior performance and efficiency of C-DIC, supporting a scalable path to high-quality dialogue modeling.
null
['multi-turn dialogue', 'context compression']
/pdf/21051c71061a2f7b76250673f52b2a917289cc5b.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission25017/Authors']
lJKdOYFF5W
25,014
lJKdOYFF5W
Unraveling the Complexity of Memory in RL Agents: an Approach for Classification and Evaluation
The incorporation of memory into agents is essential for numerous tasks within the domain of Reinforcement Learning (RL). In particular, memory is paramount for tasks that require the use of past information, adaptation to novel environments, and improved sample efficiency. However, the term ``memory'' encompasses a wide range of concepts, which, coupled with the lack of a unified methodology for validating an agent's memory, leads to erroneous judgments about agents' memory capabilities and prevents objective comparison with other memory-enhanced agents. This paper aims to streamline the concept of memory in RL by providing practical, precise definitions of agent memory types, such as long-term vs. short-term memory and declarative vs. procedural memory, inspired by cognitive science. Using these definitions, we categorize different classes of agent memory, propose a robust experimental methodology for evaluating the memory capabilities of RL agents, and standardize evaluations. Furthermore, we empirically demonstrate the importance of adhering to the proposed methodology when evaluating different types of agent memory by conducting experiments with different RL agents and showing what violating the methodology leads to.
A formal description of the memory types of RL agents and a methodology for conducting an experiment to test the memory.
['RL', 'POMDP', 'Memory', 'Classification']
/pdf/58c2952410f906b51ed5d2733c5c7a0efe11d332.pdf
reinforcement learning
/attachment/ce7aee9440596a38fc87925eb4e933b0fca3a5d6.zip
['ICLR.cc/2026/Conference/Submission25014/Authors']
PNPF7W6s8n
25,013
PNPF7W6s8n
Active Learning for Molecular Conformation Optimization with a Domain-Agnostic Neural Surrogate Oracle
Molecular conformation optimization is crucial to computer-aided drug discovery and materials design, yet conventional force-based minimization with physics oracles (e.g., DFT) is prohibitively expensive. Neural network potentials (NNPs) are capable of accelerating this process but typically require large quantum chemical datasets for training. To reduce data requirements, active learning (AL) approaches have been designed for this task. The state-of-the-art approach, GOLF, relies on the surrogate oracle to sample new data. However, the surrogate oracle utilizes empirical molecular force fields, which necessitates careful domain-specific tuning and limits generality. We introduce a new AL method for efficient conformation optimization that removes the dependency on empirical force fields. Our approach maintains two NNPs: an online NNP that performs conformation optimization and a target NNP that serves as a trainable surrogate oracle. The target network is an exponential moving average of the online network. During active sampling, the target NNP supplies potential energy estimates that guide data acquisition, while periodic queries to the physics oracle provide ground-truth corrections. Unlike other AL approaches, our method does not require architectural changes to the NNP and adds minimal computational overhead compared to single-model AL pipelines. Across two challenging conformation-optimization benchmarks spanning different DFT levels, our method consistently outperforms a baseline NNP trained without AL, achieving substantial improvements with only ~1,000 additional conformations.
We propose a data-efficient active learning framework for conformational energy minimization with neural network potentials and a domain-agnostic trainable neural surrogate oracle.
['energy minimization', 'conformational optimization', 'geometry optimization', 'graph neural networks', 'neural network potentials', 'active learning']
/pdf/22432587a7ab04611af71a639b659f90f1ef320b.pdf
applications to physical sciences (physics, chemistry, biology, etc.)
null
['ICLR.cc/2026/Conference/Submission25013/Authors']
Zyy2wbKd8h
25,012
Zyy2wbKd8h
VIPO-R1: Cultivating Video Reasoning in MLLMs via Verifier-Guided Iterative Policy Optimization
Applying Reinforcement Learning (RL) to Multimodal Large Language Models (MLLMs) shows significant promise for complex video reasoning. However, popular Reinforcement Fine-Tuning (RFT) methods, such as outcome-based Group Relative Policy Optimization (GRPO), are limited by data preparation bottlenecks (e.g., noise or high cost) and exhibit unstable improvements in the quality of long chain-of-thoughts (CoTs) and downstream performance. To address these limitations, we propose **VIPO-R1**, a **V**erifier-guided **I**terative **P**olicy **O**ptimization method designed to gradually enhance MLLMs' ability to generate long-term reasoning chains for challenging VideoQA. The core component is the Rollout-Aware Verifier, positioned between the GRPO and Direct Preference Optimization (DPO) training phases to form the GRPO-Verifier-DPO training loop. This verifier leverages small LLMs as a judge to assess the reasoning logic of rollouts, enabling the construction of high-quality contrastive data, including reflective and contextually consistent CoTs. These curated preference samples drive the efficient DPO stage (7x faster than GRPO), leading to marked improvements in reasoning chain quality, especially in terms of length and contextual consistency. This training loop benefits from GRPO's expansive search and DPO's targeted optimization. Experimental results demonstrate: 1) Faster and more effective optimization compared to standard GRPO variants, yielding superior performance; 2) Our trained models exceed the direct inference of large-scale instruction-tuned Video-LLMs, producing long and contextually consistent CoTs on diverse video reasoning tasks; and 3) Our model with one iteration outperforms powerful MLLMs (e.g., Kimi-VL) and thinking models (e.g., Video-R1), highlighting its effectiveness and stability.
null
['video understanding', 'video question answering']
/pdf/6430a59edddcb9b8d53ba3e7b55fa2694a5f8e01.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission25012/Authors']
zNT60EJgO9
25,011
zNT60EJgO9
ERS*: A Bounded, Attribution-Agnostic Metric for Explainable Robustness in Image Recognition
Deep vision models can remain accurate under perturbations while shifting their internal reasoning, which is risky for safety-critical use. We introduce ERS*, a bounded metric (in [0,1]) for explainable robustness that jointly scores (i) normalized performance degradation and (ii) explanation stability between clean and perturbed inputs. Stability is computed across multiple attribution families (Grad-CAM/EigenCAM, attention rollout, LRP, RISE), and we define an ensemble-level attribution via probability-weighted fusion to evaluate ensembles directly. We study ViT-B/16, Swin-T, ResNet-50, and their soft-voting ensemble on a traffic-sign benchmark with ten calibrated physical perturbation suites (fading, dirt splatter, scratches, peeling/rust, etc.), and further demonstrate generality on natural corruption benchmarks beyond traffic signs (CIFAR-C, ImageNet-C). ERS* reveals cases where accuracy stays high but explanations become unstable, with ensembles sometimes achieving strong accuracy yet lower explanation stability than expected. Sensitivity analyses show ERS* rankings are stable across weight choices and attribution methods, and localization metrics plus a small human study indicate that higher ERS* aligns with perceived explanation quality. ERS* complements accuracy and standard robustness metrics (e.g., robust accuracy under corruption) by diagnosing explanation stability, providing a practical post-hoc tool for evaluating reliability and explainability in image recognition.
ERS* is a bounded, attribution-agnostic metric that combines performance degradation and explanation stability to expose when vision models and ensembles stay accurate but reason inconsistently under real-world perturbations.
['Explainable Robustness Score', 'attribution stability', 'saliency maps', 'Grad-CAM', 'EigenCAM', 'attention rollout', 'LRP', 'RISE', 'ensemble attribution', 'Vision Transformer', 'Swin Transformer', 'ResNet-50', 'traffic sign recognition', 'physical perturbations', 'natural corruptions', 'CIFAR-C', 'ImageNet-C', 'autonomous driving', 'post-hoc evaluation', 'bounded metric']
/pdf/a0349cf5eadf4f470ad01d86b9b1a74610045a38.pdf
interpretability and explainable AI
null
['ICLR.cc/2026/Conference/Submission25011/Authors']
N8L7NEARq2
25,010
N8L7NEARq2
ContinualCropBank: Object-Level Replay for Semi-Supervised Online Continual Object Detection
Deep learning has achieved remarkable progress in object detection, but most advances rely on static, fully labeled datasets, an unrealistic assumption in dynamic, real-world environments. Continual Learning (CL) aims to overcome this limitation by enabling models to acquire new knowledge without forgetting prior tasks; however, many approaches assume known task boundaries and require multiple passes over the data. Online Continual Learning (OCL) offers a more practical alternative by processing data in a single pass, yet it remains limited by its dependence on costly annotations. To address this limitation, Label-Efficient Online Continual Object Detection (LEOCOD) extends OCL with a semi-supervised formulation, enabling detectors to leverage unlabeled data alongside limited labeled samples. In this paper, we propose ContinualCropBank, an object-level replay module for LEOCOD that stores object patches cropped from bounding box regions and pastes them into stream images during training. This solution enables fine-grained replay, mitigating catastrophic forgetting while addressing foreground–background imbalance and the scarcity of small objects. Experiments on two benchmark datasets demonstrate that incorporating ContinualCropBank improves detection accuracy and resilience to forgetting, achieving gains of up to $9.57$ percentage points in average accuracy and reducing degradation from forgetting by up to $2.32$ points.
We address the problem of label-efficient online continual object detection by introducing ContinualCropBank, an object-level replay module that mitigates catastrophic forgetting while improving detection performance under limited supervision.
['Continual Learning', 'Semi-Supervised Learning', 'Object Detection', 'Online Continual Learning']
/pdf/70f04f3a5c972abf5d5db35a3bcf9b1345933320.pdf
transfer learning, meta learning, and lifelong learning
/attachment/fdbe81d76a56d78f36e5d05105d0c1d01d871774.zip
['ICLR.cc/2026/Conference/Submission25010/Authors']
5WecBhuCyF
25,006
5WecBhuCyF
Curing the Transitivity Curse: Shortcut Logical Reasoning via A Priori Knowledge Compilation
While large language models (LLMs) have shown remarkable reasoning abilities, they often fail at multi-hop logical reasoning tasks that require chaining inferences, struggling to deduce transitive relations like $(P \to R)$ from $(P \to Q) \land (Q \to R)$. This fundamental limitation, which we term the \textbf{``Transitivity Curse''}, leads to brittle reasoning chains and significant error propagation. Existing reasoning frameworks, often based on Chain-of-Thought, attempt to traverse these long paths sequentially, a process that is both inefficient and prone to failure as complexity increases. To cure this curse, we introduce a novel mechanism designed to be integrated into existing logical reasoners. Our mechanism shifts the paradigm from passively traversing reasoning chains to proactively compiling them through a process we call \textbf{A Priori Knowledge Compilation (APKC)}. This process unfolds in two critical phases. First, it employs a goal-oriented backward analysis to identify a focused, relevant subgraph of the knowledge base. Subsequently, within this constrained boundary, our mechanism performs a systematic forward-chaining process to synthesize new knowledge in the form of both foundational \textbf{derived facts} and powerful \textbf{composite rules}. This compiled knowledge collapses multi-step inferences into fewer, more robust steps. By allowing a host framework to leverage this compiled knowledge, our mechanism enables a more direct form of \textbf{Shortcut Reasoning}, drastically reducing the required depth of runtime inference. Experiments show that when integrated into state-of-the-art reasoning frameworks, our mechanism consistently and significantly boosts their performance on several logical reasoning benchmarks. Our findings demonstrate that APKC, as a plug-in mechanism, is a critical component for making existing LLM-based reasoners more robust, efficient, and trustworthy.
Our work introduces a mechanism that performs A Priori Knowledge Compilation—proactively deriving foundational facts and composing powerful new rules—to enable robust Shortcut Reasoning and cure the Transitivity Curse in LLMs.
['Logical Reasoning', 'Large Language Models', 'Knowledge Compilation']
/pdf/8e3f771644e66caffb1439f4806c9d9c2f8d2f98.pdf
neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)
null
['ICLR.cc/2026/Conference/Submission25006/Authors']
USEpVtH8qV
25,005
USEpVtH8qV
JIONE: an Approach for Merging Large Language Models via Teacher–Student Prediction Refinement
Large Language Models (LLMs) have demonstrated remarkable capabilities across reasoning, problem-solving, and natural language understanding tasks such as text classification and multiple-choice question answering. However, relying on a single LLM faces limitations, as models are typically specialized to particular domains or objectives. For example, code-oriented models (e.g., Phi-3-mini) excel on programming benchmarks such as Mostly Basic Programming Problems (MBPP), while conversational models (e.g., Qwen1.5) perform better on factual Q\&A tasks like TruthfulQA yet underperform on mathematical reasoning benchmarks such as Grade School Math 8K (GSM8K). This specialization highlights the need for \textbf{merging multiple LLMs}, leveraging their complementary strengths while mitigating individual weaknesses. Existing approaches to model merging, such as SLERP and Task Arithmetic, primarily assume that the models share the same architecture. When different architectures are involved (e.g., FuseChat, ProDistill), existing approaches rely on training-heavy steps that incur computational and data costs. Consequently, an efficient and general method for merging heterogeneous LLMs remains an open challenge. In this paper, we introduce \textbf{JIONE}, a teacher-student prediction refinement approach designed to merge LLMs regardless of architecture, without additional training or fine-tuning. It operates directly at the output level, where a teacher-student mechanism refines predictions and resolves inconsistencies before producing a merged answer. JIONE was evaluated on four benchmark datasets: TruthfulQA, GSM8K, MBPP, and SST-2 using the Phi-3-mini-128k-instruct, Phi-3-mini-4k-instruct, Qwen1.5-1.8B-Chat, and Distilbert-base-uncased-finetuned-sst-2-english models. Evaluation across Accuracy, ROUGE-N, and Exact Match Accuracy (EMA) shows that JIONE consistently outperforms SLERP and Task Arithmetic, achieving up to \textbf{+5.99\% improvement} for models of the same architecture and up to \textbf{+3.2\% improvement} when merging models of different architectures. These results demonstrate that JIONE enables effective and scalable merging of diverse LLMs, unlocking a path toward more general and versatile model integration. Experiments show that the teacher-student refinement process incurs additional computational cost compared to baselines; however, the observed gains in performance and generalization justify this cost, particularly in applications such as medical diagnostics where prediction quality and robustness are critical. The code used in this work is released at \url{https://gitlab.com/tsotsa/jione}.
This paper proposes an unconstrained model merging approach that accommodates multiple LLMs with both homogeneous and heterogeneous architectures. It is a teacher-student approach in which each query is processed first by the student and then refined by the teacher.
['Large Language Model', 'Large Language Model Merging', 'Teacher-Student Approach', 'Prompt Engineering']
/pdf/80f2354c7b591b128224dea126366ced537d5a5f.pdf
transfer learning, meta learning, and lifelong learning
null
['ICLR.cc/2026/Conference/Submission25005/Authors']
rK7yOLa15z
25,003
rK7yOLa15z
SPECTRA: Spectral Target-Aware Graph Augmentation for Imbalanced Molecular Property Regression
Imbalanced regression is pervasive in molecular property prediction, where the most valuable compounds (e.g., high potency) occupy sparse regions of the label space. Standard Graph Neural Networks (GNNs) optimize average error and underperform on these rare but critical cases, while existing oversampling methods often distort molecular topology. We introduce SPECTRA, a Spectral Target-Aware graph augmentation framework that generates realistic molecular graphs in the spectral domain. SPECTRA (i) reconstructs multi-attribute molecular graphs from SMILES; (ii) aligns molecule pairs via (Fused) Gromov–Wasserstein couplings to obtain node correspondences; (iii) interpolates Laplacian eigenvalues/eigenvectors and node features in a stable shared basis; and (iv) reconstructs edges to synthesize physically plausible intermediates with interpolated targets. A rarity-aware budgeting scheme, derived from a kernel density estimation of labels, concentrates augmentation where data are scarce. Coupled with a spectral GNN using edge-aware Chebyshev convolutions, SPECTRA densifies underrepresented regions without degrading global structure. On benchmarks, SPECTRA consistently reduces error in rare target ranges while maintaining competitive overall MAE, and yields interpretable synthetic molecules whose structure reflects the underlying spectral geometry. Our results demonstrate that spectral, geometry-aware augmentation is an effective and efficient strategy for imbalanced molecular property regression.
null
['Imbalanced Learning', 'Imbalanced Regression', 'Graph-based Learning', 'Graph Representation Learning', 'Molecular Property Prediction']
/pdf/f3fe71f8c61f0afda5f85d7065f36ef2efab211a.pdf
learning on graphs and other geometries & topologies
null
['ICLR.cc/2026/Conference/Submission25003/Authors']
bm3rbtEMFj
25,001
bm3rbtEMFj
ELMUR: External Layer Memory with Update/Rewrite for Long-Horizon RL
Real-world robotic agents must act under partial observability and long horizons, where key cues may appear long before they affect decision making. However, most modern approaches rely solely on instantaneous information, without incorporating insights from the past. Standard recurrent or transformer models struggle with retaining and leveraging long-term dependencies: context windows truncate history, while naive memory extensions fail under scale and sparsity. We propose $\textbf{ELMUR}$ ($\textbf{E}$xternal $\textbf{L}$ayer $\textbf{M}$emory with $\textbf{U}$pdate/$\textbf{R}$ewrite), a transformer architecture with structured external memory. Each layer maintains memory embeddings, interacts with them via bidirectional cross-attention, and updates them through an $\textbf{L}$east $\textbf{R}$ecently $\textbf{U}$sed $\textbf{(LRU)}$ memory module using replacement or convex blending. ELMUR extends effective horizons up to 100,000 times beyond the attention window and achieves a 100% success rate on a synthetic T-Maze task with corridors up to one million steps. In POPGym, it outperforms baselines on more than half of the tasks. On MIKASA-Robo sparse-reward manipulation tasks with visual observations, it nearly doubles the performance of strong baselines. These results demonstrate that structured, layer-local external memory offers a simple and scalable approach to decision making under partial observability.
ELMUR is a transformer model with layer-local external memory and LRU-based memory updates for long-horizon reasoning in POMDPs
['RL', 'POMDP', 'Memory', 'Transformer', 'Robotics']
/pdf/cfd2fdf517449e0e46ee727f55b775b0e2846745.pdf
reinforcement learning
/attachment/42c0b9b6dddf2f4dcff1141421581f361f5f5da6.zip
['ICLR.cc/2026/Conference/Submission25001/Authors']
IJAPVmxQYU
25,000
IJAPVmxQYU
Improving Extreme Wind Prediction with Frequency-Informed Learning
Accurate prediction of extreme wind velocities has substantial significance in industry, particularly for the operation management of wind power plants. Although the state-of-the-art data-driven models perform well for general meteorological forecasting, they may exhibit large errors for extreme weather—for example, systematically underestimating the magnitudes and short-term variation of extreme winds. To address this issue, we conduct a theoretical analysis of how the data frequency spectrum influences errors in extreme wind prediction. Based on these insights, we propose a novel loss function that incorporates a gradient penalty to mitigate the magnitude shrinkage of extreme weather. To capture more precise short-term wind velocity variations, we design a novel structure of physics-embedded machine learning models with frequency reweighting. Experiments demonstrate that, compared to the baseline models, our approach achieves significant improvements in predicting extreme wind velocities while maintaining robust overall performance.
null
['Extreme Weather Forecasting', 'Meteorological Analysis', 'AI for Science']
/pdf/5412d4187049f4f647d2d332b2764ec331c54941.pdf
applications to physical sciences (physics, chemistry, biology, etc.)
null
['ICLR.cc/2026/Conference/Submission25000/Authors']
tzS9roOTdj
24,998
tzS9roOTdj
Reinforcement Learning Fine-Tuning Enhances Activation Intensity and Diversity in the Internal Circuitry of LLMs
Large language models (LLMs) acquire extensive prior knowledge through large-scale pretraining and can be further enhanced via supervised fine-tuning (SFT) or reinforcement learning (RL)–based post-training. A growing body of evidence has shown that RL fine-tuning improves the capability of LLMs beyond what SFT alone achieves. However, the underlying mechanisms by which RL fine-tuning enhances the capability of various LLMs with distinct intrinsic characteristics remain underexplored. In this study, we draw inspiration from prior work on edge attribution patching (EAP) to investigate the internal differences of LLMs before and after RL fine-tuning. Our analysis across multiple model families shows two robust effects of online RL post-training: (i) an overall increase in activation intensity, indicating that more internal pathways are engaged and their signals become stronger, and (ii) greater diversity in activation patterns, reflected by higher entropy and less concentrated edge distributions. These changes suggest that RL reshapes information flow to be both more redundant and more flexible, which may explain its advantage in generalization. Notably, models fine-tuned with Direct Preference Optimization (DPO) deviate from these trends, exhibiting substantially weaker or inconsistent internal changes compared to PPO- and GRPO-based training. Together, our findings provide a unified view of how RL fine-tuning systematically alters the internal circuitry of LLMs and highlight the methodological distinctions between online RL and preference-based approaches. Our code is open source at https://anonymous.4open.science/r/llm_rl_probing_analysis-F673.
This work utilizes edge attribution patching (EAP) to investigate the internal differences of LLMs before and after RL fine-tuning, and uncovers that RL enhances activation intensity and diversity in the internal circuitry of LLMs.
['Large Language Models', 'Reinforcement Learning Fine-Tuning', 'Edge Attribution Patching']
/pdf/9c081f056bc96764ba5c55afbb00b09fadb6739f.pdf
interpretability and explainable AI
null
['ICLR.cc/2026/Conference/Submission24998/Authors']
XFnrBCAmAQ
24,996
XFnrBCAmAQ
Differential Privacy of Hybrid Quantum-Classical Algorithms
Differential privacy has been successfully used to safeguard the privacy of classical algorithms and has more recently been extended to protect the privacy of quantum algorithms. However, in the present era of Noisy Intermediate-Scale Quantum (NISQ) computing, practical applications are limited to hybrid quantum-classical algorithms (e.g., quantum machine learning and variational quantum algorithms) to tackle computational tasks due to inherent quantum noise. Unfortunately, the issue of privacy in such algorithms has been largely disregarded. This paper addresses this gap by defining the differential privacy of quantum measurements as a means to protect the overall privacy of hybrid quantum-classical algorithms. The core concept involves the use of differentially private quantum measurements to ensure privacy, since hybrid quantum-classical algorithms heavily rely on quantum measurements for the interaction between quantum and classical computing. To this end, we explore post-processing and composition theorems to establish the efficiency and feasibility of differentially private quantum measurements. By introducing quantum depolarizing noise or a unique classical noise (measurement-based exponential mechanisms) into quantum measurements, we bolster the security of algorithms against privacy violations. Given the hybrid nature of differentially private quantum measurements, our framework offers both classical and quantum differential privacy. To validate these theoretical results, we carry out various numerical experiments demonstrating the effectiveness and practicality of our framework using differentially private quantum measurements to protect the privacy of hybrid quantum-classical algorithms.
null
['Quantum differential privacy', 'hybrid quantum-classical algorithms', 'noise mechanism']
/pdf/26688fc61aa244ba69c94feec27a7a68aeb0e10b.pdf
alignment, fairness, safety, privacy, and societal considerations
/attachment/8cd414a22ed185601fc5fb1b79b6521644c02982.zip
['ICLR.cc/2026/Conference/Submission24996/Authors']
IyV1QEc95F
24,994
IyV1QEc95F
Model-Aware Tokenizer Transfer
Large Language Models (LLMs) are trained to support an increasing number of languages, yet their predefined tokenizers remain a bottleneck for adapting models to lower-resource or distinct-script languages. Existing tokenizer transfer methods typically rely on semantic heuristics to initialize new embeddings, ignoring higher-layer model dynamics and limiting transfer quality. We propose Model-Aware Tokenizer Transfer (MATT), a method that incorporates model internals into the tokenizer transfer process. MATT introduces an Attention Influence Modeling (AIM) objective that distills inter-token communication patterns from a source model into a target model with a new tokenizer, providing an efficient warm-up before standard language modeling. Unlike approaches that focus solely on embedding similarity, MATT leverages attention behavior to guide embedding initialization and adaptation. Experiments across diverse linguistic settings show that MATT recovers a large fraction of the original model’s performance within a few GPU hours, outperforming heuristic baselines. These results demonstrate that incorporating model-level signals offers a practical and effective path toward robust tokenizer transfer in multilingual LLMs.
This paper introduces Model-Aware Tokenizer Transfer, a method that leverages inter-token communication patterns in attention layers to efficiently adapt pretrained language models to new tokenizers and recover performance across diverse languages.
['Large Language Models', 'Tokenizer transfer', 'Embedding initialization', 'Attention distillation', 'Model-aware adaptation', 'Multilingual NLP', 'Vocabulary adaptation', 'Low-resource languages', 'Mid-resource languages', 'Model-Aware Tokenizer Transfer', 'Attention Influence Modeling', 'Cross-Tokenizer Distillation']
/pdf/b47c846b9e3cb140ff026eb6dfcffe0937a423e9.pdf
unsupervised, self-supervised, semi-supervised, and supervised representation learning
/attachment/a50063a233ac8f00fb4b70a0e760a29affed9cda.zip
['ICLR.cc/2026/Conference/Submission24994/Authors']
jVKhAfg0LS
24,993
jVKhAfg0LS
Adversarial Attacks on Medical Hyperspectral Imaging Exploiting Spectral-Spatial Dependencies and Multiscale Features
Medical hyperspectral imaging (HSI) represents a transformative innovation in diagnosing diseases and planning treatments by capturing detailed spectral and spatial features of tissues. However, the integration of deep learning into medical HSI classification has unveiled critical vulnerabilities to adversarial attacks. These attacks compromise the reliability of clinical applications, potentially leading to diagnostic inaccuracies and jeopardizing patient outcomes. This study identifies two fundamental reasons for the susceptibility of medical HSI models to adversarial manipulation: their reliance on local pixel dependencies, which are essential for preserving tissue structures, and their dependence on multiscale spectral-spatial features, which encode hierarchical tissue information. To address these vulnerabilities, we propose a novel adversarial attack framework specifically tailored to medical HSI. Our approach introduces the Local Pixel Dependency Attack, which exploits spatial relationships between neighboring pixels, and the MultiScale Information Attack, which perturbs spectral and spatial features across hierarchical scales. Experiments on the Brain and MDC datasets reveal that our method significantly reduces classification accuracy, particularly for critical tumor regions, while maintaining imperceptible perturbations. Compared to existing methods, the proposed framework highlights the unique fragility of medical HSI models and underscores the urgent need for robust defenses. This work highlights critical vulnerabilities in medical HSI models and demonstrates how leveraging local pixel dependencies and multiscale spectral-spatial features can guide the development of targeted defenses to enhance model robustness and clinical reliability.
null
['medical hyperspectral', 'adversarial attack', 'spectral-spatial dependencies', 'multiscale features']
/pdf/1bcd63b2d0180bd22985f592f2fae55e79f04754.pdf
alignment, fairness, safety, privacy, and societal considerations
/attachment/2e2a81b0195e7b50f576bb6e744f113dcd14babb.zip
['ICLR.cc/2026/Conference/Submission24993/Authors']
JxmjzC6syB
24,989
JxmjzC6syB
Benchmarking Stochastic Approximation Algorithms for Fairness-Constrained Training of Deep Neural Networks
The ability to train Deep Neural Networks (DNNs) with constraints is instrumental in improving the fairness of modern machine-learning models. Many algorithms have been analysed in recent years, and yet there is no standard, widely accepted method for the constrained training of DNNs. In this paper, we provide a challenging benchmark of real-world large-scale fairness-constrained learning tasks, built on top of the US Census (Folktables, Ding et al., 2021). We point out the theoretical challenges of such tasks and review the main approaches in stochastic approximation algorithms. Finally, we demonstrate the use of the benchmark by implementing and comparing three recently proposed, but as-of-yet unimplemented, algorithms both in terms of optimization performance and fairness improvement. We will release the code of the benchmark as a Python package after peer review.
We provide a benchmark for comparing stochastic approximation algorithms, based on real-world fairness-constrained learning problems.
['Fair Machine Learning', 'stochastic approximation', 'Augmented Lagrangian', 'Sequential Quadratic Programming', 'benchmarking']
/pdf/eebea101b0a319bf45b9eecb82342596c42a9d06.pdf
datasets and benchmarks
/attachment/393f20d1e0652c87966d7409b40825a424783f54.zip
['ICLR.cc/2026/Conference/Submission24989/Authors']
UESTP6dR1K
24,986
UESTP6dR1K
Automated Stateful Specialization for Adaptive Agent Systems
Current automated agent design frameworks produce either static workflows that lack adaptability or per-query optimizers that prevent the accumulation of deep, agent-level task expertise. We propose a new direction that reconciles these paradigms: creating stateful teams of specialist agents that accumulate knowledge over time and can be reconfigured for novel tasks entirely without human intervention. To this end, we introduce \textsc{ASpec}, a framework that manages this full agent lifecycle by first autonomously $\textbf{discovering}$ specialist archetypes via evolutionary search and then $\textbf{cultivating}$ their expertise through experience, mirroring how human experts learn through practice and reflection. We further introduce a lightweight hierarchical control policy, "retain-then-escalate," which governs when to leverage the established agent system versus when to adapt its structure. Through comprehensive experiments, we demonstrate that this approach leads to significant performance gains on expert-level scientific benchmarks like GPQA while matching the state-of-the-art on broader domain tasks, demonstrating a promising path toward agent systems that are simultaneously expert, adaptive, and efficient.
We introduce a framework that creates persistent, specialist agent teams through an offline lifecycle of discovery and cultivation, and deploys them with an online policy that efficiently adapts the team's structure for novel tasks.
['LLMs', 'Autonomous Agents', 'Agent Specialization']
/pdf/e80582ce468d83036273dd5b4ebdc6bd3decc715.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission24986/Authors']
yuRO2wZ8su
24,983
yuRO2wZ8su
Cross-Lingual Data Scaling for Large Language Models
Large language models (LLMs) achieve consistent performance gains through data scaling, yet low-resource languages remain limited by small and stagnant dataset sizes. To address this limitation, we introduce cross-lingual data scaling, where performance in low-resource languages scales with the dataset size of high-resource languages. We systematically investigate two potential approaches: (i) transforming high-resource language data into synthetic data for low-resource languages via translation or code-switching, and (ii) transferring the learned knowledge from high-resource languages to low-resource languages by adjusting language order and proportion during pretraining. Experiments on English and Chinese show that data transformation fails to sustain cross-lingual data scaling, whereas knowledge transfer enables low-resource language performance to scale with the growth of high-resource language data. Building on these findings, we propose ScaleX, a two-stage pretraining framework designed for effective cross-lingual data scaling. In the first stage, LLMs are pretrained on high-resource language data under a constant learning rate schedule; in the second stage, training continues on a mixture of high- and low-resource languages under a cosine learning rate schedule. ScaleX outperforms existing approaches with progressively larger margins as high-resource data scales up, and further generalizes to both multilingual and large-scale bilingual pretraining. Our analysis also reveals that learning rate scheduling and shared tokens across languages are critical to sustaining performance scaling in low-resource languages.
Scaling low-resource language performance with high-resource language data
['cross-lingual pretraining', 'data scaling', 'low-resource languages']
/pdf/0c80be481b36da3c5efe475ebf2abee6ae30f04c.pdf
transfer learning, meta learning, and lifelong learning
null
['ICLR.cc/2026/Conference/Submission24983/Authors']
Uh0F0079Lh
24,982
Uh0F0079Lh
A Concept Level Energy-Based Framework for Interpreting Black-Box Large Language Model Responses
The widespread adoption of proprietary Large Language Models (LLMs) accessed strictly through closed-access APIs has created a critical challenge for their reliable deployment: a fundamental lack of interpretability. In this work, we propose a model-agnostic, post-hoc interpretation framework to address this. Our approach defines an energy model that quantifies the conceptual consistency between prompts and the corresponding LLM-generated responses. We use this energy to guide the training of an interpreter network for a set of target sentences. Once trained, our interpreter operates as an efficient, standalone tool, providing sentence-level importance scores without requiring further queries to the original LLM API or energy model. These scores quantify how much each prompt sentence influences the generation of specific target sentences. A key advantage is that our framework globally trains a local interpreter, which helps mitigate common biases in LLMs. Our experiments demonstrate that the energy network accurately captures the target LLM's generation patterns. Furthermore, we show that our interpreter effectively identifies the most influential prompt sentences for any given output.
We propose a framework for training a model-agnostic interpreter that identifies influential prompt components for black-box LLM responses by leveraging a global energy-based training objective.
['Black-box large language models', 'Post-hoc interpretation', 'Energy based models', 'Model-agnostic feature attribution']
/pdf/8fe1cd0809cb3fc900af2d9f036c3bb1c6025418.pdf
interpretability and explainable AI
/attachment/0c4d9fac4a52f639267b5d3384ffeafa76fafed6.zip
['ICLR.cc/2026/Conference/Submission24982/Authors']
PSW2bVPkVf
24,981
PSW2bVPkVf
Probing Memes in LLMs: A Paradigm for the Entangled Evaluation World
Current evaluations of large language models (LLMs) often treat datasets and models in isolation, obscuring phenomena that only emerge from their collective interaction. Items in datasets are reduced to labeled entries, disregarding the multidimensional properties they reveal when examined across model populations. Models, in turn, are summarized by overall scores such as accuracy, neglecting performance patterns that can only be captured through diverse data item interactions. To address this gap, this paper conceptualizes LLMs as composed of invisible memes, understood as cultural genes in the sense of Dawkins that function as replicating units of knowledge and behavior. Building on this perspective, the Probing Memes paradigm reconceptualizes evaluation as an entangled world of models and data. At its core lies the perception matrix, which captures interaction patterns and enables two complementary abstractions: probe properties, extending dataset characterization beyond labels, and phemotypes, revealing fine-grained capability structures of models. Applied to 9 datasets and 4,507 LLMs, Probing Memes reveals hidden capability structures and uncovers phenomena invisible under traditional paradigms (e.g., elite models failing on problems that most models answer easily). This paradigm not only supports more informative, extensible, and fair benchmarks but also lays the foundation for population-based evaluation of LLMs.
null
['Meme', 'Large Language Model', 'Evaluation', 'Probe', 'Paradigm']
/pdf/1aa51d3d5b553be1334da2c4a510cb2bd37169c1.pdf
datasets and benchmarks
/attachment/d0f4fc32a792ea8781b6f00a1938c800bbc7b9e2.zip
['ICLR.cc/2026/Conference/Submission24981/Authors']
Ue6QMEDTRV
24,980
Ue6QMEDTRV
ExaGPT: Example-Based Machine-Generated Text Detection for Human Interpretability
Errors in detecting texts generated by Large Language Models (LLMs) can cause grave mistakes, such as undermining students' academic dignity. LLM text detection thus needs to ensure the interpretability of the decision, which can help users judge how reliably correct its prediction is. When humans verify whether a text is human-written or LLM-generated, they intuitively investigate with which of them it shares more similar spans. However, existing interpretable detectors are not aligned with the human decision-making process and fail to offer evidence that users easily understand. To bridge this gap, we introduce ExaGPT, an interpretable detection approach grounded in the human decision-making process for verifying the origin of a text. ExaGPT identifies a text by checking whether it shares more similar spans with human-written or with LLM-generated texts from a datastore. This approach can provide similar span examples that contribute to the decision for each span in the text as evidence. Our human evaluation demonstrates that providing similar span examples contributes more effectively to judging the correctness of the decision than existing interpretable methods. Moreover, extensive experiments across four domains and three generators show that ExaGPT massively outperforms prior powerful detectors by up to +37.0 points of accuracy at a false positive rate of 1%. We will release our code after acceptance.
null
['Machine-generated Text Detection', 'Human Interpretability']
/pdf/b046bb70057c47e1abe44be54b0ef3eb6149f760.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission24980/Authors']
hSpA4DAoMk
24,978
hSpA4DAoMk
Adaptive Methods Are Preferable in High Privacy Settings: An SDE Perspective
Differential Privacy (DP) is becoming central to large-scale training as privacy regulations tighten. We revisit how DP noise interacts with *adaptivity* in optimization through the lens of *stochastic differential equations*, providing the first SDE-based analysis of private optimizers. Focusing on DP-SGD and DP-SignSGD under per-example clipping, we show a sharp contrast under fixed hyperparameters: DP-SGD converges at a privacy-utility trade-off $O(1/\varepsilon^2)$ with speed independent of $\varepsilon$, while DP-SignSGD converges at a speed *linear* in $\varepsilon$ with a $O(1/\varepsilon)$ trade-off, dominating in high-privacy or high-noise regimes. Under optimal learning rates, both methods reach comparable theoretical asymptotic performance; however, the optimal learning rate of DP-SGD scales linearly with $\varepsilon$, while that of DP-SignSGD is essentially $\varepsilon$-independent. This makes adaptive methods far more practical, as their hyperparameters transfer across privacy levels with little or no re-tuning. Empirical results confirm our theory across training and test metrics, and extend from DP-SignSGD to DP-Adam.
With SDEs, we show that while DP-SignSGD is better under tight privacy or noisy batches, DP-SGD is better otherwise, and adaptivity needs far less hyperparameter tuning across privacy levels.
['Stochastic Differential Equations', 'Differential Privacy']
/pdf/401371433d65ebc4f06fe84432b73962d4e25d36.pdf
optimization
null
['ICLR.cc/2026/Conference/Submission24978/Authors']
Q8HRE2E5wp
24,977
Q8HRE2E5wp
Can Recommender Systems Teach Themselves? A Recursive Self-Improving Framework with Fidelity Control
The scarcity of high-quality training data presents a fundamental bottleneck to scaling machine learning models. This challenge is particularly acute in recommendation systems, where extreme sparsity in user interactions leads to rugged optimization landscapes and poor generalization. We propose the Recursive Self-Improving Recommendation (RSIR) framework, a paradigm in which a model bootstraps its own performance without reliance on external data or teacher models. RSIR operates in a closed loop: the current model generates plausible user interaction sequences, a fidelity-based quality control mechanism filters them for consistency with true user preferences, and a successor model is retrained on the enriched dataset. Our theoretical analysis shows that RSIR acts as a data-driven implicit regularizer, smoothing the optimization landscape and guiding models toward more robust solutions. Empirically, RSIR yields consistent, cumulative gains across multiple benchmarks and architectures. Notably, even smaller models benefit, and weak models can generate effective training curricula for stronger ones. These results demonstrate that recursive self-improvement is a general, model-agnostic approach to overcoming data sparsity, suggesting a scalable path forward for recommender systems and beyond. Our anonymized code is available at https://anonymous.4open.science/status/RSIR-7C5B.
null
['Self-improving', 'Recommendation System', 'Data Generation']
/pdf/3cf9423a98df87df8aeb6580e53c3fa60427fc47.pdf
generative models
null
['ICLR.cc/2026/Conference/Submission24977/Authors']
J8lWv7WOZ5
24,975
J8lWv7WOZ5
Dyna-ViT: Parameter-Free Dynamic Token Pruning for Efficient Vision Transformers
Vision Transformers (ViTs) achieve state-of-the-art results, yet their quadratic self-attention is inefficient, largely due to redundant processing of low-information background patches. We introduce Dyna-ViT, a simple, parameter-free framework for dynamic token pruning that ranks patches with an unsupervised saliency proxy and retains only the top-K before the encoder. The backbone remains an unmodified ViT; no extra modules or learnable parameters are added. Across three benchmarks, Dyna-ViT preserves accuracy while reducing compute. On PASCAL VOC, keeping 70% of patches is 25% faster per epoch and improves validation accuracy (97.1%) over the full-token baseline (96.8%). On CIFAR-100, Dyna-ViT attains 91.3% test accuracy versus 92.0% for the baseline with a 28% speed-up. On Tiny-ImageNet, it reaches 81.4% validation accuracy with 20–25% faster training. A simple analytic FLOPs model that scales with sequence length closely matches external estimates (e.g., K=60%, S=119: 10.48 vs. 10.23 GFLOPs), aligning with measured throughput gains. Ablations over K and alternative scoring functions (Sobel, Entropy) confirm robustness, and LIME visualizations show retained tokens align with semantically relevant regions. Under matched token budgets and backbones, Dyna-ViT is competitive with, and sometimes exceeds, learned sparsification (DynamicViT) and in-encoder token merging (ToMe), while introducing no additional parameters. These results indicate that parameter-free patch selection can substantially improve ViT efficiency, often acting as a beneficial regularizer with minimal or positive impact on accuracy.
Dyna-ViT prunes tokens before the encoder using a parameter-free saliency score (top-K patches), keeping a standard ViT backbone while delivering ~20–28% faster training with matched or better accuracy on VOC, CIFAR-100, and Tiny-ImageNet.
['Vision Transformers (ViT)', 'dynamic token pruning', 'parameter-free saliency', 'sparse token selection', 'efficient attention', 'analytic FLOPs', 'PASCAL VOC', 'CIFAR-100', 'Tiny-ImageNet', 'LIME explainability', 'DynamicViT', 'ToMe']
/pdf/5c464d97cbb834bdc44d406dd0bb0393277ef424.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission24975/Authors']
IezZyvgdO3
24,972
IezZyvgdO3
SAGE: Fast, Generalizable and Photorealistic 3D Human Reconstruction from a Single Image
In this paper, we present SAGE, a Large Human Reconstruction Model that can produce a photorealistic 3D reconstruction of a human from a single image in less than 1 second. To support scalable model training, we first design an effective data generation pipeline to alleviate the shortage of available photorealistic 3D human data. In this pipeline, we follow two strategies. The first is to leverage existing rigged assets and animate them with extensive poses from daily life. The second is to utilize existing multi-camera captures of humans and employ fitting to generate more diverse views for training. These two strategies enable us to scale up to 100k assets, significantly enhancing both the quantity and the diversity of data for robust model training. In terms of architecture, our framework is inspired by Large Reconstruction Models (LRMs) and extracts tokenized features from the input image and the estimated simplified human mesh (SMPL) without detailed geometry or appearance. A mapping network takes this tokenized information as conditioning and employs a cross-attention mechanism to iteratively enhance an initial feature representation. Ultimately, the output is a triplane representation that depicts the 3D human, while novel views are rendered using a standard ray marching method given a camera viewpoint. Extensive experiments on three benchmarks demonstrate the superiority of our approach, both quantitatively and qualitatively, as well as its robustness under diverse input image conditions.
We propose a Large Human Reconstruction Model, which can produce a photorealistic 3D reconstruction of a human from a single image in less than 1 second.
['3D Human Reconstruction', 'Single Image', 'Large Human Reconstruction Model']
/pdf/94a136a9310edac19416728a1d617ca33d65e5a0.pdf
applications to computer vision, audio, language, and other modalities
/attachment/86c2101bf226053f7e7ae7427adfdfcdee21c0da.zip
['ICLR.cc/2026/Conference/Submission24972/Authors']
XFY7kvIFSw
24,971
XFY7kvIFSw
MediX-R1: Open Ended Medical Reinforcement Learning
We introduce MediX-R1, an open-ended reinforcement learning (RL) framework for medical multimodal large language models (MLLMs) that enables clinically grounded, free-form answers beyond multiple-choice formats. MediX-R1 fine-tunes a baseline vision–language backbone with Group Relative Policy Optimization (GRPO) and a composite reward tailored for medical reasoning: an LLM-based accuracy reward that judges semantic correctness with a strict YES/NO decision, a medical embedding–based semantic reward to capture paraphrases and terminology variants, and lightweight format and modality rewards that enforce interpretable reasoning and modality recognition. This multi-signal design provides stable, informative feedback for open-ended outputs where traditional verifiable or MCQ-only rewards fall short. To measure progress, we propose a unified evaluation framework for both text-only and image+text tasks that uses an LLM-as-judge in place of brittle string-overlap metrics, capturing semantic correctness, reasoning, and contextual alignment. Despite using only $\sim 50$K instruction examples, MediX-R1 achieves excellent results across standard medical LLM and VLM benchmarks, outperforming strong open-source baselines and delivering particularly large gains on open-ended clinical tasks (e.g., radiology summarization and report generation). Our results demonstrate that open-ended RL with comprehensive reward signals and LLM-based evaluation is a practical path toward reliable medical reasoning in multimodal models. Our trained models, curated datasets and source code will be publicly released.
We introduce MediX-R1, an open-ended RL framework that equips medical multimodal LLMs with clinically grounded reasoning and evaluation for reliable free-form answers beyond multiple-choice tasks.
['Medical MLLMs', 'Reinforcement Learning', 'GRPO', 'Open-ended Reward design', 'Semantic evaluation', 'Open-ended medical reasoning']
/pdf/5d2be061940e195a32ca4927b344b221f03b15fe.pdf
applications to physical sciences (physics, chemistry, biology, etc.)
null
['ICLR.cc/2026/Conference/Submission24971/Authors']
59OJOgKLzN
24,970
59OJOgKLzN
Rethinking the High-Throughput LLM Inference: An Opportunity for Speculative Decoding
Speculative decoding is a widely adopted method for accelerating autoregressive generation by drafting multiple candidate tokens and verifying them jointly with the target model. While effective in small-batch settings, it has been considered impractical under large-batch inference due to the belief that such regimes are compute-bound. Motivated by recent system-level findings that memory bandwidth, not compute, remains the dominant bottleneck in large-batch inference, we revisit the feasibility of speculative decoding under high-throughput conditions. We introduce \emph{$\gamma$-tolerance}, a latency-based criterion that characterizes when speculative decoding provides tangible speedup, and empirically validate that acceleration remains attainable across practical batch sizes and system configurations. Building on this insight, we derive a revised success condition for speculative decoding and demonstrate that most existing drafter architectures violate it due to poor trade-offs between accuracy and efficiency. To address this, we identify Multi-Token Prediction with Gated LoRA as a promising approach and develop a high-performance implementation. Our system achieves up to $2.37{\times}$ speedup at batch size 256 without requiring long-context prompts or architectural changes to the target model, demonstrating that speculative decoding can be both feasible and effective in large-batch inference.
null
['Speculative Decoding', 'Large Language Models', 'High-Throughput Inference']
/pdf/cc38068eb4474fd3795bdd899efb124f7a5c204c.pdf
foundation or frontier models, including LLMs
/attachment/5dc38b8a0bdb66d17c28b0312572276ad583868c.zip
['ICLR.cc/2026/Conference/Submission24970/Authors']
y6je0oiwEg
24,969
y6je0oiwEg
Datatype tagging and prompt alignment: a recipe for boosting LLMs on algorithmic tasks
This paper contributes toward strengthening the bridge between LLMs as programmers and classical ideas in programming languages (PL). Specifically, we show that aligning prompts with *typed programs* enables even small models to reliably emit one-line Python code. We present a simple yet effective recipe consisting of three key ingredients: (i) inline datatype tagging for prompt and code; (ii) a fine-tuned dual-head GPT-2-small with an auxiliary span probe over the prompt; and (iii) a fixed decoder that enforces a finite-state grammar, validates AST shape, and repairs outputs deterministically. On a stratified GPT-4o based dataset that covers primitives such as $\texttt{add}$, $\texttt{subtract}$, $\texttt{max}$, $\texttt{min}$, and $\texttt{sort}$, the decoder alone raises execution accuracy by over 40\% (from $0.58$ to $0.82$)! For counting and repeated addition, prompts map deterministically to single expressions (for example, $\texttt{s.count('r')}$ and $\texttt{sum([1]*100)}$), yielding near-zero errors within coverage. Our approach runs on a single GPU, and presents a proof-of-concept on how "datatype-aware tokenization'' and "grammar-first decoding,'' among other ideas inspired by PL, improve reliability, coverage, and quality at low cost.
We describe a compact recipe that aligns prompts with a typed program space and reliably emits a single legal Python expression. This helps LLMs align with algorithmic intents more easily and provides a quickfix boost to their algorithmic abilities
['tokenizers', 'datatype tagging', 'algorithmic alignment', 'LLMs and coding', 'LLMs and arithmetic', 'algebra']
/pdf/e602014cb454e6dbbbe1e10286eff0c34b3d6e28.pdf
foundation or frontier models, including LLMs
/attachment/629b909f2a7011fc619b8e430ada97509e8df6b5.zip
['ICLR.cc/2026/Conference/Submission24969/Authors']
FVmWGMIoES
24,966
FVmWGMIoES
Restoring Trust in Medical LLMs: GNN-Powered Knowledge Graph Reconstruction for Robust Defense
Medical large language models (LLMs) have demonstrated remarkable capabilities in clinical decision support and biomedical question-answering, yet they remain highly vulnerable to adversarial threats such as prompt injection, data poisoning, and parameter tampering. As reported in Nature Medicine (2025), existing defense mechanisms based on static triple-form knowledge graphs (KGs) lack structural adaptability, making them ineffective against multi-hop reasoning attacks or semantic perturbations. To address this challenge, we propose a structure-aware KG reconstruction framework powered by graph neural networks (GNNs), which dynamically reweights relational edges, filters adversarial connections, and stabilizes semantic propagation while preserving triple compatibility. By incorporating relation-aware weighted triples, our method exhibits stronger adversarial robustness compared to conventional equal-weight KGs. The results show that our method improves accuracy and other metrics by an average of 3\% on QA benchmarks compared to existing defense methods. On drug recommendation ranking tasks, our method balances accuracy and completeness. Our approach outperforms vanilla LLMs and existing defense methods, effectively restoring pre-attack performance and enabling trustworthy, robust medical LLM applications.
null
['Medical large language models', 'Robust Defense', 'Data-poisoning Attacks']
/pdf/25d93b0a279f6b2a757b0b279013792e18d027c9.pdf
alignment, fairness, safety, privacy, and societal considerations
/attachment/73ff035edb9d8783bb56759739fee8f44a7971b9.zip
['ICLR.cc/2026/Conference/Submission24966/Authors']
pW7ORPqwzG
24,965
pW7ORPqwzG
Epidemiology of Large Language Models: A Benchmark for Observational Distribution Knowledge
Artificial intelligence (AI) systems hold great promise for advancing various scientific disciplines, and are increasingly used in real-world applications. Despite their remarkable progress, further capabilities are expected in order to achieve more general types of intelligence. A critical distinction in this context is between factual knowledge, which can be evaluated against true or false answers (e.g., "what is the capital of England?"), and probabilistic knowledge, which reflects probabilistic properties of the real world (e.g., "what is the sex of a computer science graduate in the US?"). Much of previous work on evaluating large language models (LLMs) focuses on factual knowledge, while in this paper, our goal is to build a benchmark for understanding the capabilities of LLMs in terms of knowledge of probability distributions describing the real world. Given that LLMs are trained on vast amounts of text, it may be plausible that they internalize aspects of these distributions. Indeed, this idea has gained traction, with LLMs being touted as powerful and universal approximators of real-world distributions. At the same time, classical results in statistics, known under the term curse of dimensionality, highlight fundamental challenges in learning distributions in high dimensions, challenging the notion of universal distributional learning. In this work, we develop the first benchmark to directly test this hypothesis, evaluating whether LLMs have access to empirical distributions describing real-world populations across domains such as economics, health, education, and social behavior. Our results demonstrate that LLMs perform poorly overall, and do not seem to internalize real-world statistics naturally. This finding also has important implications that can be interpreted in the context of Pearl’s Causal Hierarchy (PCH). Our benchmark demonstrates that language models do not contain knowledge on observational distributions (Layer 1 of the PCH), and thus the Causal Hierarchy Theorem implies that interventional (Layer 2) and counterfactual (Layer 3) knowledge of these models is also limited.
null
['Large Language Models', 'Probabilistic Reasoning']
/pdf/b9e2734cf7e8c22a9a6363754d8f3b89d84c88bb.pdf
datasets and benchmarks
null
['ICLR.cc/2026/Conference/Submission24965/Authors']
O4rR59WKHL
24,962
O4rR59WKHL
Synthesizing Feature Extractors: An Agentic Approach for Algorithm Selection
Feature engineering remains a critical bottleneck in machine learning, often requiring significant manual effort and domain expertise. While end-to-end deep learning models can automate this process by learning latent representations, they do so at the cost of interpretability. We propose a gray-box paradigm for automated feature engineering that leverages Large Language Models for program synthesis. Our framework treats the LLM as a meta-learner that, given a high-level problem description for constraint optimization, generates executable Python scripts that function as interpretable feature extractors. These scripts construct symbolic graph representations and calculate structural properties, combining the generative power of LLMs with the transparency of classical features. We validate our approach on algorithm selection across 227 combinatorial problem classes. Our synthesized feature extractors achieve 58.8\% accuracy, significantly outperforming the 48.6\% achieved by human-engineered extractors, establishing program synthesis as an effective approach to automating the ML pipeline.
null
['constraint solving', 'algorithm selection', 'LLM', 'combinatorial optimization', 'feature extraction']
/pdf/f7870dd10c490650b3c9fe1dfaf26dabcf9b36ec.pdf
optimization
/attachment/856a135e1ac07553f9025d18b5185342d8ddd0cf.zip
['ICLR.cc/2026/Conference/Submission24962/Authors']
VqnBaeu43F
24,961
VqnBaeu43F
From Parameters to Behaviors: Unsupervised Compression of the Policy Space
Despite its recent successes, Deep Reinforcement Learning (DRL) is notoriously sample-inefficient. We argue that this inefficiency stems from the standard practice of optimizing policies directly in the high-dimensional and highly redundant parameter space $\Theta$. This challenge is greatly compounded in multi-task settings. In this work, we develop a novel, unsupervised approach that compresses the policy parameter space $\Theta$ into a low-dimensional latent space $\mathcal Z$. We train a generative model $g:\mathcal Z\to\Theta$ by optimizing a behavioral reconstruction loss, which ensures that the latent space is organized by functional similarity rather than proximity in parameterization. We conjecture that the inherent dimensionality of this manifold is a function of the environment's complexity, rather than the size of the policy network. We validate our approach in continuous control domains, showing that the parameterization of standard policy networks can be compressed up to five orders of magnitude while retaining most of its expressivity. As a byproduct, we show that the learned manifold enables task-specific adaptation via Policy Gradient operating in the latent space $\mathcal{Z}$.
null
['reinforcement learning', 'unsupervised reinforcement learning', 'unsupervised representation learning']
/pdf/3870585c897ac82ab25224802a2d18e5449c09f4.pdf
reinforcement learning
/attachment/12481342ecb228e3021fba6a496cff2e5b535643.zip
['ICLR.cc/2026/Conference/Submission24961/Authors']
JTK6nljnag
24,960
JTK6nljnag
Scent of Health (S-O-H): Olfactory Multivariate Time-Series Dataset for Non-Invasive Disease Screening
Exhaled breath analysis has become an advantageous alternative to traditional medical diagnostic methods. Electronic nose (eNose) sensors can enable low-cost, non-invasive disease screening from exhaled breath. Still, progress is limited by small, site-specific datasets and sensor-specific temporal artifacts (e.g., baseline drift). In this paper, we introduce Scent of Health, the largest printed-metal-oxide eNose clinical dataset with curated temporal splits. We also introduce breath diagnosis as a realistic multivariate time-series task with temporally stratified splits that mimic deployment. We provide a reproducible benchmark, including classical algorithms with handcrafted features, convolutional neural networks with data augmentation, and specialized time series classification methods, and show that, while these methods offer useful inductive biases, substantial gaps remain in robustness and generalization under drift and limited labels. Our findings demonstrate that machine learning for data from eNose can achieve clinically relevant performance in detecting malignant lung neoplasms and differentiating respiratory diseases. The substantial sample size of this dataset addresses a critical gap in research and provides a valuable resource for developing and validating disease classification models and olfactory data representation.
A multivariate time-series dataset from an eNose sensor for non-invasive disease screening, with data from over 1000 unique patients.
['enose', 'dataset', 'medicine', 'olfactory']
/pdf/ddf85d4b3d5fec94c5d1951b426d6f775aa9a102.pdf
datasets and benchmarks
null
['ICLR.cc/2026/Conference/Submission24960/Authors']
hnItP9g9Bf
24,959
hnItP9g9Bf
A Strategy-Agnostic Framework for Partial Participation in Federated Learning
Partial participation (PP) is a fundamental paradigm in federated learning, where only a fraction of clients can be involved in each communication round. In recent years, a wide range of mechanisms for partial participation have been proposed. However, the effectiveness of a particular technique strongly depends on problem-specific characteristics, e.g. local data distributions. Consequently, achieving better performance requires a comprehensive search across a number of strategies. This observation highlights the necessity of a unified framework. In this paper, we address this challenge by introducing a general scheme that can be combined with almost any client selection strategy. We provide a unified theoretical analysis of our approach without relying on properties specific to individual heuristics. Furthermore, we extend it to settings with unstable client-server connections, thereby covering real-world scenarios in federated learning. We present empirical validation of our framework across a range of PP strategies on image classification tasks, employing modern architectures, such as FasterViT.
null
['Partial participation', 'Stochastic optimization', 'Convex optimization', 'Non-convex optimization']
/pdf/86de3e045752aa2ec3e3565da8f1c1ed1d36b766.pdf
optimization
/attachment/f97f2fbe86053a5466704630ceaef032a8b2f918.zip
['ICLR.cc/2026/Conference/Submission24959/Authors']
HDZ2GBwrWo
24,957
HDZ2GBwrWo
MoEsturizer: Resource-Efficient MoE Upcycling for Small Language Models
Large language models (LLMs) are typically scaled through billions of parameters and trillions of tokens, making progress largely restricted to organizations with substantial resources. Recent work on Mixture-of-Experts (MoE) upcycling shows that dense pretrained models can be transformed into sparse MoE variants, but prior studies have focused on large models and required extensive additional training. In this work, we demonstrate that MoE upcycling is also effective for small language models (sub-billion parameters) using only a few hundred thousand samples of supervised fine-tuning. Remarkably, upcycled models consistently outperform their dense base models and remain competitive with dense counterparts of equivalent total size, despite activating fewer parameters at inference. Our study highlights MoE upcycling as a lightweight and practical scaling strategy, while providing empirical insights into its efficiency and limitations. These results establish MoE upcycling as a reproducible pathway for enhancing small models under realistic resource budgets, broadening access to language model improvement.
150k samples, one 96GB GPU: upcycling small LMs to sparse MoEs (Experts-Top K: 4-2/8-2) beats dense bases on 9 benchmarks and rivals larger tiers at far lower active parameters; depth scaling or higher top-k adds little.
['Mixture-of-Experts (MoE)', 'Model upcycling', 'Small language models (SLMs)', 'Resource-constrained training']
/pdf/f9e30f4befe28c3d17ab2a198c88d423e4928494.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission24957/Authors']
5v0p0Bmp6S
24,956
5v0p0Bmp6S
Learning from Examples and Self-Exploration: A New Paradigm for Dynamic Fusion
Alignment of Large Language Models with human preferences is dominated by two paradigms: Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL), exemplified by methods like Group Relative Policy Optimization. Yet, they face a trade-off challenge: SFT excels at incorporating external knowledge but often fails to foster deep comprehension, whereas RL can internalize knowledge but struggles to expand the model's knowledge frontier. To resolve this, we propose **LESE** (**L**earning from **E**xamples and **S**elf-**E**xploration), a framework that dynamically interpolates between SFT and RL. LESE introduces an instance-adaptive mechanism that assesses a model's real-time task proficiency and exploration diversity, thereby allocating a dynamic weight between SFT and RL for each training instance. This adaptive methodology addresses the limitations of static strategies by adjusting the balance between SFT and RL at the instance level. Empirically, it improves performance on mathematical benchmarks and enhances training stability, while maintaining consistency with human-preferred outputs.
null
['Supervised Fine-Tuning', 'Large Language Models', 'Reinforcement Learning', 'Mathematical Reasoning', 'Dynamic Fusion']
/pdf/397a5f0e7886a8a057053df12ac9fddc52eaeb89.pdf
reinforcement learning
null
['ICLR.cc/2026/Conference/Submission24956/Authors']
YeWsA0VFZ5
24,955
YeWsA0VFZ5
LoCoT2V-Bench: A Benchmark for Long-Form and Complex Text-to-Video Generation
Recently, text-to-video generation has made impressive progress in producing short, high-quality clips, but evaluating long-form outputs remains a major challenge, especially when processing complex prompts. Existing benchmarks mostly rely on simplified prompts and focus on low-level metrics, overlooking fine-grained alignment with prompts and abstract dimensions such as narrative coherence and thematic expression. To address these gaps, we propose LoCoT2V-Bench, a benchmark specifically designed for long video generation (LVG) under complex input conditions. Based on various real-world videos, LoCoT2V-Bench introduces a suite of realistic and complex prompts incorporating elements like scene transitions and event dynamics. Moreover, it constructs a multi-dimensional evaluation framework that includes our newly proposed metrics such as event-level alignment, fine-grained temporal consistency, content clarity, and the Human Expectation Realization Degree (HERD), which focuses on more abstract attributes like narrative flow, emotional response, and character development. Using this framework, we conduct a comprehensive evaluation of nine representative LVG models, finding that while current methods perform well on basic visual and temporal aspects, they struggle with inter-event consistency, fine-grained alignment, and high-level thematic adherence. Overall, LoCoT2V-Bench provides a comprehensive and reliable platform for evaluating long-form complex text-to-video generation and highlights critical directions for future method improvement.
LoCoT2V-Bench is a new benchmark for long-form text-to-video generation that uses complex prompts and multi-dimensional metrics.
['Video Generation Benchmark', 'Text-to-Video Generation', 'Long-Form Video Evaluation', 'Multi-Dimensional Assessment']
/pdf/5ed408d1f09b5545f3a1a31519d18c7d134cfc20.pdf
datasets and benchmarks
/attachment/d4c5f787a3635c38c0e9337d5b475c0441f266a8.zip
['ICLR.cc/2026/Conference/Submission24955/Authors']
XRf2Uscsa4
24,954
XRf2Uscsa4
Dual-Path Inertial Odometry with Temporal Attention
We present a dual-path inertial odometry framework that processes the IMU stream through two parallel branches. One branch works directly on raw measurements to preserve high-frequency transients, while the other applies a Savitzky–Golay filter to enforce smoother, Newton-consistent motion and reduce drift. The outputs are fused online by a compact temporal-attention mechanism that adjusts their relative weights according to the motion dynamics. On the RONIN dataset, our method reduces final position error by about 10% compared with the previous state of the art, and this advantage persists across four smartphone models and three sampling rates. Integrating the dual-path block into other backbones yields similar gains — for example, roughly a 10% error reduction for a ResNet-based odometry network — and produces consistent improvements for both TCN and LSTM baselines, suggesting the approach generalizes across architectures.
Dual-path IMU odometry fusing raw and SG-filtered signals via temporal attention cuts RONIN error by 10%, improving robustness to devices, sampling rates and backbones (ResNet, TCN, LSTM) with minimal overhead.
['Dual-path IMU odometry', 'Temporal attention fusion', 'Cross-backbone improvement']
/pdf/efe321146770e37e5bfa3bb54ec88d8b62010613.pdf
learning on time series and dynamical systems
null
['ICLR.cc/2026/Conference/Submission24954/Authors']
3qiCnLf3jf
24,953
3qiCnLf3jf
Best-of-Infinity: Asymptotic Performance of Test-Time Compute
We study best-of-$N$ for large language models (LLMs) where the selection is based on majority voting. In particular, we analyze the limit $N \to \infty$, which we denote as best-of-$\infty$. While this approach achieves impressive performance in the limit, it requires an infinite test-time budget. To address this, we propose an adaptive generation scheme that selects $N$ based on answer agreement, thereby efficiently allocating inference-time computation. Beyond adaptivity, we extend the framework to weighted ensembles of multiple LLMs, showing that such mixtures can outperform any individual model. The optimal ensemble weighting is formulated and efficiently computed as a mixed-integer linear program. Extensive experiments demonstrate the effectiveness of our approach.
null
['LLM', 'test-time compute', 'majority voting', 'LLM ensemble']
/pdf/16ef754fa1d42d4c12ee2c055332802c24e9e87b.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission24953/Authors']
pWjehHNGg3
24,951
pWjehHNGg3
Machine Text Detectors are Membership Inference Attacks
Although membership inference attacks (MIAs) and machine-generated text detection target different goals (identifying training samples and detecting synthetic texts, respectively), their methods often exploit similar signals based on a language model’s probability distribution. Despite this shared methodological foundation, the two tasks have been studied independently, which may lead to conclusions that overlook stronger methods and valuable insights developed in the other task. In this work, we theoretically and empirically investigate the transferability, i.e., how well a method originally developed for one task performs on the other, between MIAs and machine text detection. For our theoretical contribution, we prove that the metric that achieves the asymptotically highest performance on both tasks is the same. We unify a large proportion of the existing literature in the context of this optimal metric and hypothesize that the accuracy with which a given method approximates this metric is directly correlated with its transferability. Our large-scale empirical experiments, including 7 state-of-the-art MIA methods and 5 state-of-the-art machine text detectors across 13 domains and 10 generators, demonstrate very strong rank correlation (ρ > 0.6) in cross-task performance. We notably find that Binoculars, originally designed for machine text detection, achieves state-of-the-art performance on MIA benchmarks as well, demonstrating the practical impact of the transferability. Our findings highlight the need for greater cross-task awareness and collaboration between the two research communities. To facilitate cross-task developments and fair evaluations, we introduce Mint, a unified evaluation suite for MIAs and machine-generated text detection, with implementations of 15 recent methods from both tasks.
null
['Membership Inference Attack', 'Machine-generated Text Detection']
/pdf/a6864c7b78666b0e6091d5f0ad378efe7a16116d.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission24951/Authors']
3q3LnQ63Az
24,950
3q3LnQ63Az
Towards Fine-grained Audio Captioning with Multimodal Contextual Fusion
High-quality, large-scale audio captioning is crucial for advancing audio understanding, yet current automated methods often generate captions that lack fine-grained detail and contextual accuracy, primarily due to their reliance on limited unimodal or superficial multimodal information. Drawing inspiration from human auditory perception, which adeptly integrates cross-modal cues and performs sophisticated auditory scene analysis, we introduce a novel two-stage automated pipeline. This pipeline first employs specialized pretrained models to extract diverse contextual cues (e.g., speech, music, general sounds, and visual information from associated video). A large language model (LLM) then synthesizes these rich, multimodal inputs to generate detailed and context-aware audio captions. Key contributions of this work include: (1) the proposed scalable method for fine-grained audio caption generation; (2) FusionAudio, a new large-scale dataset comprising 1.2 million such detailed captions, combined with 6 million QA pairs; and (3) enhanced audio models developed using FusionAudio, specifically a CLAP-based audio encoder with superior audio-text alignment and instruction following. This paper paves the way for more nuanced and accurate automated understanding of complex audio environments. Code and data can be found in
null
['Fine-grained Audio Caption Dataset', 'Large Audio Language Models']
/pdf/c07a0b06e9c11859cb6dec4a8c43cf44ea2d7603.pdf
datasets and benchmarks
/attachment/f7575694c9eb18c24adc3273e37551d9f0a8c69b.zip
['ICLR.cc/2026/Conference/Submission24950/Authors']
23AHaRy1QO
24,949
23AHaRy1QO
Efficient Fine-tuning with Decomposed Foundation Model
Fine-tuning billion-scale large language models (LLMs) is challenging due to the extremely large model size, particularly in memory-constrained scenarios, even with parameter-efficient fine-tuning (PEFT) and quantization. To address this challenge, we propose a novel method based on the decomposition then fine-tuning (DeFT) paradigm, which effectively decomposes the foundation model and reduces the number of model parameters during fine-tuning, while retaining model quality. DeFT introduces a highly efficient layer importance aware search algorithm for fine-grained model decomposition and successfully repurposes model decomposition for fine-tuning. Additionally, DeFT can seamlessly integrate with PEFT and quantization methods to enhance fine-tuning efficiency further. Extensive experiments on various LLM backbones demonstrate that DeFT achieves comparable or even better performance than the baseline PEFT and quantization methods, while improving both memory efficiency and computation efficiency for fine-tuning. Remarkably, DeFT enables fine-tuning of a 65B model on a consumer GPU with just 24GB of memory, all without relying on offloading strategies, saving significant expenses for purchasing or renting high-end GPUs.
null
['Large Language Model Fine-tuning', 'Foundation Model Decomposition']
/pdf/312d561d8dcd358407bac5a34453bdc16904d950.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission24949/Authors']
B7r8ZkBk4F
24,948
B7r8ZkBk4F
Two-Way Garment Transfer: Unified Diffusion Framework for Dressing and Undressing Synthesis
While recent advances in virtual try-on (VTON) have achieved realistic garment transfer to human subjects, its inverse task, virtual try-off (VTOFF), which aims to reconstruct canonical garment templates from dressed humans, remains critically underexplored and lacks systematic investigation. Existing works predominantly treat them as isolated tasks: VTON focuses on garment dressing while VTOFF addresses garment extraction, thereby neglecting their complementary symmetry. To bridge this fundamental gap, we propose the Two-Way Garment Transfer Model (TWGTM), to the best of our knowledge, the first unified framework for joint clothing-centric image synthesis that simultaneously resolves both mask-guided VTON and mask-free VTOFF through bidirectional feature disentanglement. Specifically, our framework employs dual-conditioned guidance from both latent and pixel spaces of reference images to seamlessly bridge the dual tasks. On the other hand, to resolve the inherent mask dependency asymmetry between mask-guided VTON and mask-free VTOFF, we devise a phased training paradigm that progressively bridges this modality gap. Extensive qualitative and quantitative experiments conducted across the DressCode and VITON-HD datasets validate the efficacy and competitive edge of our proposed approach.
This work proposes the first unified framework for joint clothing-centric image synthesis that simultaneously resolves both mask-guided virtual try-on and mask-free virtual try-off. Extensive experiments validate the effectiveness of the model.
['Diffusion Models', 'Image Generation', 'Virtual Try-On', 'Virtual Try-off']
/pdf/df1eb507510f6946be38c7286ac71af9d4d1f215.pdf
applications to computer vision, audio, language, and other modalities
/attachment/d40b21d45766621f8b00d8d68526c70eddedc04e.zip
['ICLR.cc/2026/Conference/Submission24948/Authors']
CvICPoKwRf
24,946
CvICPoKwRf
GENATATORs: ab initio Gene Annotation With DNA Language Models
Inference of gene structure and location from genome sequences - known as de novo gene annotation - is a fundamental task in biological research. However, sequence grammar encoding gene structure is complex and poorly understood, often requiring costly transcriptomic data for accurate gene annotation. In this work, we revisit standard evaluation protocols, showing that commonly used per-token and per-sequence metrics fail to capture the challenges of real-world gene annotation. We introduce and theoretically justify new biologically grounded interval level metrics, along with benchmarking datasets that better capture annotation quality. We show that pretrained DNA language model (DNA LM) embeddings do not capture the features necessary for precise gene segmentation, and that task specific fine-tuning remains essential. We comprehensively evaluate the impact of model architecture, training strategy, receptive field size, dataset composition, and data augmentations on gene segmentation performance. We show that fine-tuned DNA LMs outperform existing annotation tools, generalizing across species separated by hundreds of millions of years from those seen during training, and providing segmentation of previously intractable non-coding transcripts and untranslated regions of protein-coding genes. Our results thus provide a foundation for new biological applications centered on accurate and scalable gene annotation.
null
['DNA language models', 'genome annotation', 'ab initio', 'long sequence processing', 'recurrent models', 'state space models', 'computational genomics']
/pdf/70bdbcef1093218113ae466bf00a28ea9c4b973a.pdf
applications to physical sciences (physics, chemistry, biology, etc.)
/attachment/e78e0be3d055f09774390622f54cac488e259607.zip
['ICLR.cc/2026/Conference/Submission24946/Authors']
4YBRDJ5TN3
24,945
4YBRDJ5TN3
Exploring Redundancy and Shared Representations for Transformer Models Optimization
Large Language Models (LLMs) deliver state-of-the-art performance but at the cost of extreme computational and energy demands, raising the question of how much of their capacity is truly necessary. This paper explores structural and weight redundancies in Transformer-based architectures, aiming to identify inefficiencies and leverage them through targeted compression techniques. A central focus is assessing whether different modules perform overlapping functions. Although some degree of similarity is observed in the analyzed cases, redundancy proves to be lower than expected, challenging the assumption that weight matrices can be interchanged across layers without compromising performance. Additionally, an analysis of model matrices examines whether they exhibit an inherently low-rank structure. To further explore these aspects, three novel compression methods are introduced: MASS, which enforces weight aggregation and sharing, along with two factorization-based techniques, GlobaL Fact and ABACO. Experimental results show that while these approaches achieve model compression, their ability to maintain performance is limited, reducing their practical viability. The findings highlight the complexity of extracting redundancy from Transformer architectures, raising questions about its extent across layers and blocks. By addressing these challenges, this paper aims to contribute to ongoing efforts to improve the efficiency of LLMs.
null
['Large Language Models', 'Redundancy', 'Weight sharing', 'Model compression', 'Low-rank approximation']
/pdf/76e4ac384af79d8da464c98e5690cceb91ffd45a.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission24945/Authors']
UWGe5PDwjk
24,944
UWGe5PDwjk
Decomposing Visual Classification: Assessing Tree-Based Reasoning in VLMs
Vision language models (VLMs) excel at zero-shot visual classification, but their performance on fine-grained tasks and large hierarchical label spaces is understudied. This paper investigates whether structured, tree-based reasoning can enhance VLM performance. We introduce a framework that decomposes classification into interpretable decisions using decision trees and evaluates it on fine-grained (GTSRB) and coarse-grained (CIFAR-10) datasets. Although the model achieves 98.2% accuracy in understanding the tree knowledge, tree-based reasoning consistently underperforms standard zero-shot prompting. We also explore enhancing the tree prompts with LLM-generated classes and image descriptions to improve alignment. The added descriptions enhance the performance of both the tree-based and zero-shot methods. Our findings highlight limitations of structured reasoning in visual classification and offer insights for designing more interpretable VLM systems.
More structure ≠ better performance. Sometimes the simplest approach (zero-shot prompting) is genuinely superior, and added complexity just creates more opportunities for failure.
['Vision-Language Models', 'Hierarchical Classification', 'Sensitivity Analysis']
/pdf/960754ea2115e25ef5d921b6c6bb4e257779255e.pdf
datasets and benchmarks
null
['ICLR.cc/2026/Conference/Submission24944/Authors']
xFdT63wm5e
24,943
xFdT63wm5e
Unified Continuous Generative Models for Denoising-based Diffusion
Recent advances in continuous generative models, encompassing multi-step processes such as diffusion and flow matching (typically requiring $8$-$1000$ steps) and few-step methods such as consistency models (typically $1$-$8$ steps), have yielded impressive generative performance. However, existing work often treats these approaches as distinct paradigms, leading to disparate training and sampling methodologies. We propose a unified framework for the training, sampling, and analysis of diffusion, flow matching, and consistency models. Within this framework, we derive a surrogate unified objective that, for the first time, theoretically shows that the few-step objective can be viewed as the multi-step objective plus a regularization term. Building on this framework, we introduce the **U**nified **C**ontinuous **G**enerative **M**odels **T**rainer and **S**ampler (**UCGM**), which enables efficient and stable training of both multi-step and few-step models. Empirically, our framework achieves state-of-the-art results. On ImageNet $256\times256$ with a $675\text{M}$ diffusion transformer, UCGM-T trains a multi-step model achieving $1.30$ FID in $20$ steps, and a few-step model achieving $1.42$ FID in only $2$ steps. Moreover, applying UCGM-S to REPA-E improves its FID from $1.26$ (at $250$ steps) to $1.06$ in only $40$ steps, without additional cost.
null
['generative modeling', 'denoising diffusion', 'consistency model', 'image generation']
/pdf/be5df6cd8475f363a14a0f34a1f6d89629985e6d.pdf
generative models
/attachment/9e42aa3c5ead91dd0b69435d56f490167dc541b7.zip
['ICLR.cc/2026/Conference/Submission24943/Authors']
4H8xZA4zuj
24,942
4H8xZA4zuj
On the Interaction of Batch Noise, Adaptivity, and Compression, under $(L_0,L_1)$-Smoothness: An SDE Approach
Understanding the dynamics of distributed stochastic optimization requires accounting for several major factors that affect convergence, such as gradient noise, communication compression, and the use of adaptive update rules. While each factor has been studied in isolation, their joint effect under realistic assumptions remains poorly understood. In this work, we develop a unified theoretical framework for Distributed Compressed SGD (DCSGD) and its sign variant Distributed SignSGD (DSignSGD) under the recently introduced $(L_0, L_1)$-smoothness condition. Our analysis leverages stochastic differential equations (SDEs), and we show that while standard first-order SDEs might lead to misleading conclusions, including higher-order terms helps capture the fine-grained interaction between learning rates, gradient noise, compression, and the geometry of the loss landscape. These tools allow us to inspect the dynamics under general gradient noise assumptions, including heavy-tailed and affine-variance regimes, which extend beyond the classical bounded-variance setting. Our results show that normalizing the updates of DCSGD emerges as a natural condition for stability, with the degree of normalization precisely determined by the gradient noise structure, the landscape’s regularity, and the compression rate. In contrast, our model predicts that DSignSGD converges even under heavy-tailed noise with standard learning rate schedules, a finding which we empirically verify. Together, these findings offer both new theoretical insights and practical guidance for designing stable and robust distributed learning algorithms.
We develop an SDE-based framework for DCSGD and DSignSGD, showing DCSGD needs noise- and compression-dependent normalization for stability, while DSignSGD remains robust and convergent even under heavy-tailed noise.
['Stochastic Differential Equations', '$(L_0, L_1)$-Smoothness', 'Distributed Learning', 'Adaptivity']
/pdf/7097654e3053818b66e4550423e0ffb1af0d5d1d.pdf
optimization
null
['ICLR.cc/2026/Conference/Submission24942/Authors']
pnw3FGpqzF
24,941
pnw3FGpqzF
Knowledge-Enhanced Tabular Data Generation
Tabular data generation methods aim to synthesize artificial samples by learning the distribution of training data. However, most existing tabular data generation methods are purely data-driven. They perform poorly when the training samples are insufficient or when there exists a distribution shift between training and true data. In many real-world scenarios, data owners are often able to provide additional knowledge beyond the raw data, such as domain-specific description or dependencies among features. Motivated by this, we categorize the types of knowledge that can effectively support tabular data generation, and incorporate selected knowledge as auxiliary information to guide the generation process. To this end, we propose KTGen, a $\textbf{K}$nowledge-enhanced $\textbf{T}$abular data $\textbf{Gen}$eration framework. KTGen leverages auxiliary information by training a correction network in the latent space produced by a VAE, aligning the generated data with the auxiliary information. Our experiments demonstrate that, when training on limited, biased data, incorporating auxiliary information makes the distribution of synthetic samples closer to the true data distribution, and also improves the performance of downstream models trained on the synthetic samples.
null
['Tabular data generation']
/pdf/9daa4802f466c09f5273a48aac19d663258053bf.pdf
generative models
null
['ICLR.cc/2026/Conference/Submission24941/Authors']
QnHENtIAKL
24,939
QnHENtIAKL
Adaptive kernel selection for Stein Variational Gradient Descent
A central challenge in Bayesian inference is efficiently approximating posterior distributions. Stein Variational Gradient Descent (SVGD) is a popular variational inference method which transports a set of particles to approximate a target distribution. The SVGD dynamics are governed by a reproducing kernel Hilbert space (RKHS) and are highly sensitive to the choice of the kernel function, which directly influences both convergence and approximation quality. The commonly used median heuristic offers a simple approach for setting kernel bandwidths but lacks flexibility and often performs poorly, particularly in high-dimensional settings. In this work, we propose an alternative strategy for adaptively choosing kernel parameters over an abstract family of kernels. Recent convergence analyses based on the kernelized Stein discrepancy (KSD) suggest that optimizing the kernel parameters by maximizing the KSD can improve performance. Building on this insight, we introduce Adaptive SVGD (Ad-SVGD), a method that alternates between updating the particles via SVGD and adaptively tuning kernel bandwidths through gradient ascent on the KSD. We provide a simplified theoretical analysis that extends existing results on minimizing the KSD for fixed kernels to our adaptive setting, showing convergence properties for the maximal KSD over our kernel class. Our empirical results further support this intuition: Ad-SVGD consistently outperforms standard heuristics in a variety of tasks.
null
['adaptive kernel selection', 'Stein Variational Gradient Descent', 'kernelized Stein discrepancy']
/pdf/031d520ccbd83369b192af1bde8cfec036b49b05.pdf
probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
/attachment/78873d6fec904d88f3bb7e01852288740d924873.zip
['ICLR.cc/2026/Conference/Submission24939/Authors']
crKJJ4Ej60
24,938
crKJJ4Ej60
Copy-Paste to Mitigate Large Language Model Hallucinations
While Retrieval-Augmented Generation (RAG) enables large language models (LLMs) to generate contextually grounded responses, contextual faithfulness remains challenging as LLMs may not consistently trust provided context, leading to hallucinations that undermine reliability. We observe an inverse correlation between response copying degree and context-unfaithful hallucinations on RAGTruth, suggesting higher copying degrees reduce hallucinations by fostering genuine contextual belief. We propose \textbf{CopyPasteLLM}, obtained through two-stage high-copying response preference training. We design three prompting methods to enhance copying degree, demonstrating that high-copying responses achieve superior contextual faithfulness and hallucination control. These approaches enable a fully automated pipeline that transforms generated responses into high-copying preference data for training CopyPasteLLM. On FaithEval, ConFiQA and PubMedQA, CopyPasteLLM achieves best performance in both counterfactual and original contexts, remarkably with 12.2\% to 24.5\% accuracy improvements on FaithEval over the best baseline, while requiring only 365 training samples—\textit{1/50th} of baseline data. To elucidate CopyPasteLLM's effectiveness, we propose the \textit{Context-Parameter Copying Capturing} algorithm. Interestingly, this reveals that CopyPasteLLM recalibrates reliance on internal parametric knowledge rather than external knowledge during generation.
We propose CopyPasteLLM, which trains models to simply copy from context, achieving 12.2-24.5% accuracy improvements with only 365 training samples (1/50th of baseline) and revealing how copy-paste recalibrates parametric knowledge.
['RAG Hallucination', 'Contextual Faithfulness', 'Model Interpretability', 'Large Language Model', 'Knowledge Conflict']
/pdf/942c621d35e45ff23d09823ec1f1015dc180dcef.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission24938/Authors']
1qLZsyJN2t
24,937
1qLZsyJN2t
The Information Game: Active Inference as Bilevel Optimization and a Game-Theoretic Benchmark for LLM Inquiry
Large language models (LLMs) increasingly operate in settings where they must gather information rather than simply recall facts. We model this task as a multi-street game of incomplete information, casting each round of information gathering as a bilevel optimization: an inner variational Bayesian step that updates beliefs over a hidden target object, and an outer query-selection step that minimizes expected free energy, which is equivalent to maximizing expected information gain. This game-theoretic formulation motivates \emph{Optimal Question Asking} (OQA), a benchmark designed as a tractable "toy game" that measures an agent's inquiry strategy by how quickly the agent reduces uncertainty about the target. By solving this game for its game-theory optimal (GTO) policy, we create a perfect oracle against which we measure the planning gap—the expected number of suboptimal queries. On 25-object tasks, models like GPT-4o and Claude 3.5 Haiku exhibit a planning gap of 1-2 queries. On 100-object tasks, flagship models like GPT-o3 and Gemini 2.5 Pro, while closer to optimal, still show significant strategic leaks. Our synthetic datasets, which remove linguistic priors, reveal deeper deficits. OQA exposes inefficiencies invisible to answer-centric metrics, offering a controlled testbed for forging agents that play the information game not just exploitatively, but optimally.
We frame question answering as bilevel optimization and use that to benchmark frontier LLMs on their efficiency at reducing uncertainty through question asking; we find these LLMs still lag an information-theoretic oracle
['active inference', 'bilevel optimization', 'question asking', 'query optimality', 'inference', 'LLMs']
/pdf/d906aedf92345936f636241eb400a1db1efdf48d.pdf
foundation or frontier models, including LLMs
/attachment/eb639218e64dadd32da00fed52d513b28b430fd5.zip
['ICLR.cc/2026/Conference/Submission24937/Authors']
S8bmkHXqgT
24,934
S8bmkHXqgT
Interpretable Preference Elicitation: Aligning User Intent with Controllable Long-tailed Learning
Long-tailed recognition remains a significant challenge, where models often struggle with tail class performance and adaptability to diverse user preferences. While recent controllable paradigms leveraging hypernetworks allow numerical specification of head-tail trade-offs, defining these multi-dimensional preference vectors can be unintuitive for users. This paper introduces a novel framework that bridges this gap by enabling users to articulate their preferences through natural language. We propose a two-stage approach: first, optimal numerical preference vectors are identified for canonical distribution scenarios, and a rich corpus of corresponding textual descriptions is generated. Subsequently, a lightweight neural network learns to map sentence embeddings of these textual descriptions to the underlying 3D preference vectors controlling the expert ensemble. Our method significantly enhances the usability and interpretability of controllable long-tailed learning systems without compromising, and even slightly improving, their performance on benchmark datasets. This work facilitates more accessible and practical adaptation of long-tailed models to specific real-world requirements.
null
['Long-tail learning']
/pdf/e6a44a2afe03f21badbbada67bd8967aa184aa9c.pdf
unsupervised, self-supervised, semi-supervised, and supervised representation learning
/attachment/14af7c0b5b46143aa2f31aa87f65fa65d9f7cb5a.zip
['ICLR.cc/2026/Conference/Submission24934/Authors']
CPajDOuA3h
24,933
CPajDOuA3h
RL-Obfuscation: Can Language Models Learn to Evade Latent-Space Monitors?
Latent-space monitors aim to detect undesirable behaviours in Large Language Models by leveraging their internal representations rather than relying solely on black-box outputs. These methods have shown promise in identifying behaviours such as deception and unsafe completions. However, these monitors may themselves become training signals, for example, by using problematic samples found in deployment to retrain models. This raises an important question: can models learn to evade such monitors? To evaluate this capability, we introduce RL-Obfuscation, in which LLMs are finetuned via reinforcement learning to evade latent-space monitors while maintaining their black-box behaviour. We apply RL-Obfuscation to Language Models ranging from 7B to 14B parameters and evaluate their Evasion Success Rate against a suite of latent-space monitors. We find that token-level monitors are highly vulnerable to this attack while more holistic monitors, such as max-pooling or attention-based probes, remain robust. Moreover, for these vulnerable monitors, models trained to evade a single static monitor can generalise to evade other unseen monitors. We also find that the models can be trained to conditionally bypass latent-space monitors on only certain inputs. Finally, we study how the models bypass these monitors and find that the model can learn to repurpose tokens to have different internal representations.
null
['Probes', 'AI Safety', 'model internals', 'capability evaluation', 'interpretability', 'whitebox control']
/pdf/42c58c75076c57e37df3dd81b619eca7e58d42c9.pdf
alignment, fairness, safety, privacy, and societal considerations
null
['ICLR.cc/2026/Conference/Submission24933/Authors']
3FOfBcEEy1
24,931
3FOfBcEEy1
Action-Conditioned Transformers for Decentralized Multi-Agent World Models
Multi-agent reinforcement learning (MARL) has achieved strong results on large-scale decision making, yet most methods are model-free, limiting sample efficiency and stability under non-stationary teammates. Model-based reinforcement learning (MBRL) can reduce data usage, but planning and search scale poorly with joint action spaces. We adopt a world model approach to long-horizon coordination while avoiding expensive planning. We introduce MACT, a decentralized transformer world model with linear complexity in the number of agents. Each agent processes discretized observation–action tokens with a shared transformer, while a single cross-agent Perceiver step provides global context under centralized training and decentralized execution. MACT achieves long-horizon coordination by coupling the Perceiver-derived global context with an action-conditioned contrastive objective that predicts future latent states several steps ahead given the planned joint action window, thereby binding team actions to their multi-step dynamics. It produces consistent long-horizon rollouts and stronger team-level coordination. Experiments on the StarCraft Multi-Agent Challenge (SMAC) show that MACT surpasses strong model-free baselines and prior world model variants on most tested maps, with pronounced gains on coordination-heavy scenarios.
A decentralized transformer world model for multi-agent RL that couples Perceiver global context with action-conditioned contrastive prediction, yielding coherent long-horizon rollouts and stronger teammate coordination.
['Multi-Agent Reinforcement Learning', 'Reinforcement Learning', 'Contrastive Learning', 'World Model']
/pdf/0bc3851bf4aa3d658dcd44a2ea51ca0fe4fbe038.pdf
reinforcement learning
/attachment/82710fce9d3fc1704abc1dc8c6904f765bd3a0ab.zip
['ICLR.cc/2026/Conference/Submission24931/Authors']
bqaClExo4A
24,929
bqaClExo4A
From Guanyin, UFOs to Paradise: Capturing Cultural Variation in Dream Interpretation
Humans have long sought to uncover the mystery of dreams, from divine signs predicting fortune and the future to psychology framing them as reflections of the subconscious. This curiosity extends to large language models (LLMs), where commercial LLMs, e.g., OpenAI and DeepSeek, exhibit preliminary dream interpretation abilities. However, open-source research remains limited to monolingual, Western-centric datasets, with evaluations largely confined to classification tasks. We address these gaps by introducing a bilingual dataset of 31,877 unique dream–interpretation pairs spanning three cultural contexts: China, the Middle East and the West, in English and Arabic. Analysis shows $<$18\% of dream symbols overlap across cultures. Chinese symbols emphasize scenario-based activities and figures like *Guanyin*, Arabic symbols reference religion and concepts such as *paradise* and *fasting*, while English symbols draw on technology like *UFOs* and fictional creatures. We evaluated 17 models and found that new state-of-the-art models integrating general-purpose and reasoning modes into one model perform best in reasoning mode, whereas earlier models separating chat and reasoning favor chat settings. While language is not a bottleneck for SOTA models, capturing cultural nuances of under-represented regions, e.g., the Middle East, remains challenging. Further fine-tuning of six LLMs shows that LoRA benefits larger models, while full-parameter fine-tuning is better for smaller ones. Although SFT equips models with cultural knowledge, knowledge acquired in post-training is less stable than that from pre-training, exhibiting sensitivity to training settings. Data and code are available at `http://URL.withheld.for.review`.
null
['bilingual dream interpretation', 'cross-cultural alignment']
/pdf/c7da198d015236c4649293205eb5621d486b2c9a.pdf
datasets and benchmarks
null
['ICLR.cc/2026/Conference/Submission24929/Authors']
gCkRzjVT7m
24,927
gCkRzjVT7m
Parameter-Efficient Fine-Tuning of LLMs with Mixture of Space Experts
Large language models (LLMs) have achieved remarkable progress, with Parameter-Efficient Fine-Tuning (PEFT) emerging as a key technique for downstream task adaptation. However, existing PEFT methods mainly operate in Euclidean space, fundamentally limiting their capacity to capture complex geometric structures inherent in data. While alternative geometric spaces, such as hyperbolic geometries for hierarchical data and spherical manifolds for circular patterns, offer theoretical advantages, constraining representations to single manifold types fundamentally limits expressiveness, even with learnable curvature parameters. To address this, we propose \textbf{MoS} (Mixture of Space), a unified framework that leverages multiple geometric spaces simultaneously to learn richer, curvature-aware representations. Building on this scheme, we develop \textbf{MoSELoRA}, which extends Low-Rank Adaptation (LoRA) with heterogeneous geometric experts, enabling models to adaptively select or combine appropriate geometric spaces based on the input. In addition, to address the computational overhead of frequent manifold mapping, we develop a lightweight projection mechanism. Moreover, we provide empirical insights into how curvature optimization impacts training stability and model performance. Our experiments across diverse benchmarks demonstrate that MoSELoRA consistently outperforms strong baselines.
null
['Large Language Models', 'Non-Euclidean Space', 'Parameter-Efficient Fine-tuning', 'Mixture of Experts']
/pdf/645bceaf9c34fcd107fdb894814bf44cffb4e344.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission24927/Authors']
yTzFfHOyyU
24,926
yTzFfHOyyU
Noise-Guided Transport for Imitation Learning
We consider imitation learning in the low-data regime, where only a limited number of expert demonstrations are available. In this setting, methods that rely on large-scale pretraining or high-capacity architectures can be difficult to apply, and efficiency with respect to demonstration data becomes critical. We introduce Noise-Guided Transport (NGT), a lightweight off-policy method that casts imitation as an optimal transport problem solved via adversarial training. NGT requires no pretraining or specialized architectures, incorporates uncertainty estimation by design, and is easy to implement and tune. Despite its simplicity, NGT achieves strong performance on challenging continuous control tasks, including high-dimensional Humanoid tasks, under ultra-low data regimes with as few as 20 transitions.
Noise-Guided Transport (NGT) is a lightweight off-policy imitation learning method for low-data settings that frames imitation as an optimal transport problem solved adversarially.
['Reinforcement Learning', 'Imitation Learning', 'Optimal Transport']
/pdf/49e8dec3c62f05697607971675455aa24aa3f266.pdf
reinforcement learning
/attachment/28b67c54fe2e1fa1f69bd34b3ecd7932a59907d0.zip
['ICLR.cc/2026/Conference/Submission24926/Authors']
MtdNbFQp5O
24,924
MtdNbFQp5O
Single LLM Debate, MoLaCE: Mixture of Latent Concept Experts Against Confirmation Bias
Large language models (LLMs) are highly vulnerable to input confirmation bias. When a prompt implies a preferred answer, models often reinforce that bias rather than explore alternatives. This phenomenon remains underexplored, yet it is already harmful in base models and poses an even greater risk in multi-agent debate, where echo chambers reinforce bias rather than correct it. We introduce \emph{\textbf{M}ixture \textbf{o}f \textbf{La}tent \textbf{C}oncept \textbf{E}xperts (\textbf{MoLaCE})}, a framework that directly addresses confirmation bias through a mixture of hidden experts. Our method identifies a latent direction in the model's internal representations that reflects confirmation bias, instantiates experts as different activation strengths along this direction, and employs a gating mechanism to adaptively mix their predictions. This design enables a single LLM to emulate the benefits of debate internally while remaining lightweight and scalable. It can also be integrated into multi-agent debate frameworks to diversify perspectives and reduce correlated errors. We empirically show that it consistently reduces confirmation bias, improves robustness, and matches or surpasses multi-agent debate while requiring only a fraction of the computation.
null
['LLM', 'Question Answering', 'Bias']
/pdf/ee5a1f085efa5995fe9dbcd8328b9c0452bc2514.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission24924/Authors']
2qS5fes4RL
24,923
2qS5fes4RL
EchoRAG: A Cognitive Memory-Inspired Framework for RAG with Semantic Gist
Retrieval-Augmented Generation (RAG), a pivotal technology connecting external knowledge with large language models, has been widely applied in various knowledge-intensive tasks. However, because current mainstream RAG systems rely on inherently discrete representations of textual information and discrete retrieval paradigms, they frequently lack semantic integrity, which leads to deviations in semantic retrieval. Therefore, we propose the concept of semantic gist and design EchoRAG, a novel RAG framework that simulates human cognitive memory. Specifically, inspired by the human episodic memory mechanism, the framework first builds an understanding of semantic gist through reasoning and uses it to construct a multi-dimensional knowledge graph. During retrieval, a thought diffusion module performs global reasoning over the knowledge graph, building a more comprehensive semantic landscape. Additionally, we propose the CogniRank algorithm, which combines the structural relevance of entity nodes with entity frequency metrics to measure node importance, thereby enabling efficient ranking of candidate knowledge. To verify the effectiveness of EchoRAG, we conduct experiments on 5 public datasets covering question answering (QA) and multi-hop reasoning tasks. The results show that, compared with current mainstream RAG methods, EchoRAG significantly improves answer accuracy and recall while also improving speed.
null
['Retrieval-augmented generation', 'Large language models']
/pdf/4e8f209159adc29afb710e418e26221cf7b92b33.pdf
applications to computer vision, audio, language, and other modalities
/attachment/1fd57277eddfbef2220db500445f8fd7ec046af3.zip
['ICLR.cc/2026/Conference/Submission24923/Authors']
LRpJ5sYgcy
24,922
LRpJ5sYgcy
BayesShift: Evolving Domain Generalization via Hamiltonian Monte Carlo
Evolving Domain Generalization (EDG) addresses learning scenarios where the data distribution evolves over time, a setting crucial for real-world applications under varying environmental conditions. Recently, structure-aware variational models have shown promise by disentangling static and variant information, but their reliance on point estimates for model parameters neglects parameter uncertainty, limiting both adaptability and reliability. We propose BayesShift, a full Bayesian framework that parameterizes a latent structure-aware autoencoder to capture static features, distribution drift, and categorical shifts. Unlike standard variational inference, our method leverages Hamiltonian Monte Carlo (HMC) to approximate the posterior over latent variables, enabling principled quantification of uncertainty, which not only improves robustness to evolving distributions but also provides confidence estimates for predictions, a critical property in safety-sensitive domains. Experiments on benchmark datasets demonstrate that BayesShift achieves higher robustness to evolving distributions, outperforming state-of-the-art baselines in both predictive accuracy and adaptability. These results highlight the effectiveness of Bayesian inference for evolving domain generalization.
We propose a full Bayesian framework that parameterizes a latent structure-aware autoencoder to capture static features, distribution drift, and categorical shifts, leveraging Hamiltonian Monte Carlo to approximate the posterior over latent variables
['Evolving Domain Generalization', 'Hamiltonian Monte Carlo', 'Variational Autoencoder']
/pdf/be5487dd2334950f8cdef6b65e19ea865dc87974.pdf
transfer learning, meta learning, and lifelong learning
/attachment/94f28a1615084008707a86064cca2689ee50e413.zip
['ICLR.cc/2026/Conference/Submission24922/Authors']
oKmnyMNLGT
24,920
oKmnyMNLGT
Scaling Language Model Reliability via Determinantal Point Process Prompt Sampling
Language models achieve stronger performance when given multiple opportunities to solve a task, as in best-of-$N$ inference. However, naive approaches to scaling at test time—such as high-temperature sampling or random prompt ensembling—suffer from correlated failures, where many attempts repeat the same mistakes. We argue that improving pass@$k$ performance requires selecting prompts that are individually strong at eliciting correct answers while also nudging the model toward semantically distinct reasoning paths. To this goal, we introduce a lightweight, query-conditioned framework for prompt selection based on Determinantal Point Processes (DPPs). We build an accuracy–diversity target kernel by combining accuracy labels with hidden-activation similarities, and train a small encoder to approximate this target kernel. The encoder is optimized via a Kullback-Leibler divergence objective, which admits an unbiased gradient estimator. Given the compute budget of $k$ generations at inference, the encoder alone is used to generate the test-time DPP and sample a diverse subset of $k$ prompts that maximize coverage of complementary paths. Experiments on multiple benchmarks demonstrate that our approach outperforms competitive baselines.
null
['Language Model Reliability', 'Prompt Sampling', 'Determinantal Point Process']
/pdf/7c428664b0f8a53f0dd998e39c78551093cc7d72.pdf
unsupervised, self-supervised, semi-supervised, and supervised representation learning
null
['ICLR.cc/2026/Conference/Submission24920/Authors']
y1OWj26FCo
24,916
y1OWj26FCo
Programming by Backprop: Learning Behaviour from Symbolic Descriptions
Large language models (LLMs) are typically trained to acquire behaviours from demonstrations or experience, yet much of their training data consists of symbolic descriptions: instructions, rules, and strategies that specify procedures without examples. We investigate whether LLMs can learn to execute such behaviours directly from their abstract description, a process we term *Programming by Backprop* (PBB). We study this phenomenon in two domains: first, using source code as a canonical form of procedural description by comparing models finetuned on algorithms versus execution examples; and second, extending beyond code to abstract grammar rules, testing whether models learn to generate compliant text. Our findings show that PBB can be elicited through targeted finetuning, demonstrating that LLMs can acquire new behaviours from symbolic descriptions, albeit not yet with full reliability. Once elicited, PBB enables models to internalise reusable procedural abstractions - generalising across inputs, executing procedures implicitly in a forward pass, and benefiting further from chain-of-thought reasoning. These results position PBB as a distinct pathway through which LLMs acquire behavioural skills from symbolic descriptions, with implications for both more efficient capability acquisition and aligning models through formal specifications rather than demonstrations alone.
LLMs can learn to execute procedures that are described symbolically in their training data, but only with specific finetuning curricula.
['Large Language Models', 'Abstraction', 'Procedural Knowledge']
/pdf/0dfdbde6c5f44af2f328e183bda747836949283f.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission24916/Authors']
t3FSGlOcsG
24,915
t3FSGlOcsG
FedHyMoe: Hypernetwork-Driven Mixture-of-Experts for Federated Domain Generalization
Federated Learning (FL) enables collaborative model training without sharing raw data, but most existing solutions implicitly assume that each client’s data originate from a single homogeneous domain. In practice, domain shift is pervasive: clients gather data from diverse sources, domains are heterogeneously distributed across clients, and only a subset of clients participate in each round. These factors cause substantial degradation on unseen target domains. Prior Federated Domain Generalization (FedDG) methods often assume complete single-domain datasets per client and sometimes rely on sharing domain-level information, raising privacy concerns and limiting applicability in real-world federations. In this paper, we introduce FedHyMoe, a Hypernetwork-Driven Mixture-of-Experts framework that addresses these challenges by shifting from parameter-space fusion to embedding-space parameter synthesis. Each client is represented by a compact domain embedding, and a shared hypernetwork generates its Mixture-of-Experts (MoE) adapter parameters. At test time, unseen domains are handled by attending over source client embeddings to form a test-domain embedding, which the hypernetwork uses to synthesize a specialized adapter. This enables non-linear interpolation and extrapolation beyond convex averages of stored parameters, while reducing communication and storage overhead and mitigating privacy risks by exchanging only low-dimensional embeddings. FedHyMoe consistently achieves higher generalization accuracy and improved calibration compared to baselines under domain heterogeneity and partial participation, highlighting embedding-driven hypernetwork synthesis as a powerful inductive bias for robust, efficient, and privacy-conscious Federated Domain Generalization.
FedHyMoe uses hypernetworks with client embeddings to synthesize Mixture-of-Experts adapters, enabling robust, efficient, and privacy-preserving domain generalization in federated learning under heterogeneity and partial participation.
['Federated Learning', 'Domain Generalization', 'Hypernetworks', 'Mixture-of-Experts', 'Privacy-Preserving Learning', 'Cross-Domain Adaptation']
/pdf/f2243f77eacb3fb6ff575638a8a73e87426f75ba.pdf
other topics in machine learning (i.e., none of the above)
null
['ICLR.cc/2026/Conference/Submission24915/Authors']
dYaIotpCiK
24,913
dYaIotpCiK
Self-Guided Plan Extraction for Instruction-Following Tasks with Goal-Conditional Reinforcement Learning
We introduce a framework for instruction-following tasks. Unlike prior methods that rely on predefined subtasks, our approach enables a language model to generate and refine high-level plans through a self-learning mechanism, reducing the need for manual dataset annotation. The method involves iterative co-training: an RL agent is trained to follow the generated plans, while the language model adapts and modifies these plans based on RL feedback and preferences. This creates a feedback loop where both the agent and the planner improve jointly. We validate the framework in environments with rich dynamics and stochasticity. Results show that our agents adhere to instructions more strictly than baseline methods, while also demonstrating strong generalization to previously unseen instructions.
A self-improving framework couples language-model plan generation with reinforcement learning feedback to achieve robust, generalizable instruction following without predefined subtasks.
['Instruction Following', 'Reinforcement Learning', 'Multimodal RL']
/pdf/d88db8cd1ca1c6f2aa39e74cc65237ced4cde352.pdf
applications to robotics, autonomy, planning
null
['ICLR.cc/2026/Conference/Submission24913/Authors']
f43lpq1Q8i
24,912
f43lpq1Q8i
When Validity Isn't Enough: Reliability Gaps in Molecular Generation and KRAS Case Study
Molecule generation remains a core challenge in computational chemistry. Practical use of generative models is complicated by strict chemical, structural, and biological constraints: candidate compounds must satisfy physicochemical bounds, avoid reactive or toxic substructures, be synthesizable, and plausibly bind a target. We are the first to perform such a comprehensive analysis of modern molecule generators via the Five-Stage Filtering Pipeline, a target-agnostic, practice-oriented benchmark for evaluating de novo generators using the following stages: (i) physicochemical descriptors; (ii) structural alerts; (iii) synthesis feasibility; (iv) docking and binding affinity estimation; and (v) blind medicinal chemist review. We compare 18 generators across three families (unconditional, ligand-based, and protein-based), and to make it practically relevant, apply the pipeline to the KRAS G12D switch-II pocket as a conditional design case study. Less than 1% of molecules pass all stages, exposing a gap between high scores on standard generative metrics and practical medicinal chemistry usage. We release our benchmark and code to enable reproducible evaluation and to focus future model development on practically useful chemical space.
null
['Molecule Generation', 'Generative Models', 'KRAS', 'Benchmark']
/pdf/98e09c2d0c1aaf8716b2aa0002ed353623b602ff.pdf
datasets and benchmarks
/attachment/836d7f40afd35369c91e74860a15bbb416e13469.zip
['ICLR.cc/2026/Conference/Submission24912/Authors']
XKLPlnfZzM
24,911
XKLPlnfZzM
Learning to Deaggregate: Large-scale Trajectory Generation with Spatial Priors
Generating realistic large-scale trajectories is essential for applications in urban mobility and transportation, yet current generative models either do not offer any controllability or rely on strong sample-specific conditioning. We introduce the Temporal Deaggregation Diffusion Model (TDDM), a hierarchical framework that first represents mobility using spatial priors, which are marginal distributions over geographical occupancy, and then deaggregates them into trajectories. This separation enables generation without sample-specific conditions, supporting transfer to new regions. To support evaluation, we build a benchmark across three cities spanning different continents (Beijing, Porto, San Francisco), with standardized metrics for fidelity and distributional coverage. Across all datasets, TDDM improves trajectory fidelity and coverage over leading baselines, and demonstrates stable performance when applied to unseen cities. By explicitly decoupling spatial allocation from temporal realization, our work highlights the role of spatial occupancy priors in enabling scalable and generalizable trajectory generation.
null
['trajectory generation', 'deaggregation', 'spatial priors', 'urban mobility', 'diffusion models']
/pdf/45dfd52680bd33c4ae1547c0110e64cbc9ce177f.pdf
generative models
null
['ICLR.cc/2026/Conference/Submission24911/Authors']
CPxZClPMiy
24,910
CPxZClPMiy
Aria: an Agent for Retrieval and Iterative Auto-Formalization via Dependency Graph
Accurate auto-formalization of theorem statements is essential for advancing automated discovery and verification of research-level mathematics, yet remains a major bottleneck for LLMs due to hallucinations, semantic mismatches, and their inability to synthesize new definitions. To tackle these issues, we present Aria (**A**gent for **R**etrieval and **I**terative **A**utoformalization), a system for conjecture-level formalization in Lean that emulates human expert reasoning via a two-phase Graph-of-Thought process: recursively decomposing statements into a dependency graph and then constructing formalizations from grounded concepts. To ensure semantic correctness, we introduce **AriaScorer**, a checker that retrieves definitions from Mathlib for term-level grounding, enabling rigorous and reliable verification. We evaluate Aria on diverse benchmarks. On ProofNet, it achieves a 91.6\% compilation success rate and 68.5\% final accuracy, surpassing previous methods. On FATE-X, a suite of challenging algebra problems from research literature, it outperforms the best baseline with 44.0\% vs. 24.0\% final accuracy. On a dataset of homological conjectures, Aria reaches 42.9\% final accuracy while all other models score 0\%.
null
['Lean 4', 'Autoformalization', 'LLM', 'Graph-of-Thought', 'Retrieval Augmented Generation']
/pdf/e212662185b551e06441435260d5f375f2bc6aec.pdf
foundation or frontier models, including LLMs
null
['ICLR.cc/2026/Conference/Submission24910/Authors']
Tf29oMgErW
24,909
Tf29oMgErW
ReynoldsFlow: Spatiotemporal Flow Representations for Video Learning
Representation learning for videos has largely relied on spatiotemporal modules embedded in deep architectures, which, while effective, often require heavy computation and heuristic design. Existing approaches, such as 3D convolutional modules or optical flow networks, may also overlook changes in illumination, scale variations, and structural deformations in video sequences. To address these challenges, we propose ReynoldsFlow, a physics-inspired flow representation that leverages the Helmholtz decomposition and the Reynolds transport theorem to derive principled spatiotemporal features directly from video data. Unlike classical optical flow, ReynoldsFlow captures both divergence-free and curl-free components under more general assumptions, enabling robustness to photometric variation while preserving intrinsic structure. Beyond its theoretical grounding, ReynoldsFlow remains lightweight and adaptable, combining frame intensity with flow magnitude to yield texture-preserving and dynamics-aware representations that substantially enhance tiny object detection. Experiments on benchmarks with various target scales demonstrate that ReynoldsFlow is consistently comparable to or outperforms existing flow-based features, while also improving interpretability and efficiency. These results position ReynoldsFlow as a compelling representation for video understanding and a strong foundation for downstream model learning. The code will be made publicly available.
We propose ReynoldsFlow, a physics-inspired spatiotemporal flow representation that is lightweight, interpretable, and robust to photometric and structural variations for efficient video representation learning.
['Video Representation Learning', 'Spatiotemporal Modeling', 'Physics-Inspired Flow', 'Helmholtz Decomposition', 'Reynolds Transport Theorem']
/pdf/f684429d37a7e3c2489e07985f6eda3f13e9c2d7.pdf
unsupervised, self-supervised, semi-supervised, and supervised representation learning
null
['ICLR.cc/2026/Conference/Submission24909/Authors']
YsVQBe0HNA
24,907
YsVQBe0HNA
BatonVoice: An Operationalist Framework for Enhancing Controllable Speech Synthesis with Linguistic Intelligence from LLMs
The rise of Large Language Models (LLMs) is reshaping multimodal models, with speech synthesis being a prominent application. However, existing approaches often underutilize the linguistic intelligence of these models, typically failing to leverage their powerful instruction-following capabilities. This limitation hinders the models' ability to follow text instructions for controllable Text-to-Speech (TTS). To address this, we propose a new paradigm inspired by operationalism that decouples instruction understanding from speech generation. We introduce BatonVoice, a framework where an LLM acts as a conductor, understanding user instructions and generating a textual plan of explicit vocal features (e.g., pitch, energy). A separate TTS model, the orchestra, then generates the speech from these features. To realize this component, we develop BatonTTS, a TTS model trained specifically for this task. Our experiments demonstrate that BatonVoice achieves strong performance in controllable and emotional speech synthesis, outperforming strong open- and closed-source baselines. Notably, our approach enables remarkable zero-shot cross-lingual generalization, accurately applying feature control abilities to languages unseen during post-training. This demonstrates that objectifying speech into textual vocal features can more effectively unlock the linguistic intelligence of LLMs.
null
['LLM', 'TTS']
/pdf/3da98520a275f06fcd211b524f04bea1fe84f9e6.pdf
foundation or frontier models, including LLMs
/attachment/b7975f0643fdb45924b946567981e470a3238593.zip
['ICLR.cc/2026/Conference/Submission24907/Authors']
DVnK3ZgG9D
24,905
DVnK3ZgG9D
Empowering Channel-of-Mobile-Experts with Informative Hybrid-Capabilities Reasoning
Mobile Agents can autonomously execute user instructions, which requires hybrid-capabilities reasoning, including screen summary, subtask planning, action decision, and action function. However, existing agents struggle to achieve both decoupled enhancement and balanced integration of these capabilities. While Mixture-of-Experts (MoE) supports capability decoupling, its input-oriented activation prevents selecting the expert aligned with the current reasoning stage. To address these challenges, we propose Channel-of-Mobile-Experts (CoME), a novel agent architecture consisting of four distinct experts, each aligned with a specific reasoning stage. CoME activates the corresponding expert to generate output tokens in each reasoning stage via output-oriented activation. To empower CoME with hybrid-capabilities reasoning, we introduce a progressive training strategy: Expert-FT enables decoupling and enhancement of each expert's capability; Router-FT aligns expert activation with the corresponding reasoning stages; CoT-FT facilitates seamless collaboration and balanced optimization across multiple capabilities. To mitigate error propagation in hybrid-capabilities reasoning, we propose InfoGain-Driven DPO (Info-DPO), which uses information gain to evaluate the contribution of each intermediate step, thereby guiding CoME toward more informative reasoning. Comprehensive experiments show that CoME outperforms dense mobile agents and MoE methods on both the AITZ and AMEX datasets.
We propose Channel-of-Mobile-Experts (CoME) to enhance hybrid-capabilities reasoning for mobile task automation via information-gain-driven DPO
['Mobile Agent', 'Hybrid-Capabilities Reasoning']
/pdf/3c5e3666d13bfcfb91f58f4e4fd6ec14e0487d5b.pdf
applications to computer vision, audio, language, and other modalities
null
['ICLR.cc/2026/Conference/Submission24905/Authors']
nTWZCXrnvs
24,902
nTWZCXrnvs
FedDefuse: Mitigating Strategic Model Poisoning for Federated Learning via Divide-and-Compute Driven Composite Behavioral Analysis
Federated Learning (FL) enables collaborative model training across distributed clients without sharing local data, but it is highly vulnerable to strategic model poisoning, where adversaries dominate participation rounds and may selectively launch arbitrary attacks under non-i.i.d. data. Existing defenses, often relying on single-perspective behavioral heuristics, fail to reliably distinguish and suppress malicious behaviors due to the erased distinctions between benign and malicious updates. In this paper, we propose FedDefuse, a principled defense framework built upon a novel composite behavioral pattern that judiciously fuses two complementary indicators, intra-client recoverability and inter-client similarity, in a divide-and-compute manner. FedDefuse first divides the uploaded model updates into two candidate clusters based on their recoverability, which quantifies how faithfully each update can be reproduced through simulated local training. It then identifies benign updates as those exhibiting higher similarity scores to provisional benign clusters in the frequency domain. This design allows FedDefuse to effectively suppress adversarial contributions in the global aggregation without sacrificing benign ones. Extensive experiments demonstrate that FedDefuse significantly outperforms state-of-the-art defenses under strategic model poisoning scenarios, achieving considerable improvements in terms of both detection and model accuracy across diverse settings.
null
['Federated learning', 'model poisoning attack', 'non-i.i.d. data', 'composite behavioral pattern']
/pdf/171c619e6dfb35a965c3d0bef10b80c735a17bb5.pdf
other topics in machine learning (i.e., none of the above)
null
['ICLR.cc/2026/Conference/Submission24902/Authors']
Lx4MhiIzhH
24,901
Lx4MhiIzhH
Automated Overrefusal Prompt Generation and Repair with Delta Debugging
While safety alignment and guardrails help large language models (LLMs) avoid harmful outputs, they also introduce the risk of overrefusal—unwarranted rejection of benign queries that only appear risky. We introduce DDOR (Delta Debugging for OverRefusal), a fully automated, causally grounded framework that generates interpretable test items with explicit refusal triggers. Unlike prior benchmarks that operate at a coarse prompt level or rely heavily on manual design, DDOR produces one thousand high-quality prompts per model and consistently increases measured overrefusal rates relative to seed sets, demonstrating strong diagnostic capability. Moreover, our mRTF-based repair method substantially lowers overrefusal rates without compromising safety on genuinely harmful inputs. By combining precise trigger isolation with scalable generation and principled filtering, DDOR provides a practical framework to both evaluate and mitigate overrefusal, thereby improving LLM usability while maintaining safety.
We introduce DDOR, an automated, causally grounded framework that detects and reduces LLM overrefusal by extracting interpretable refusal triggers.
['overrefusal', 'llm', 'delta debugging', 'safety alignment', 'prompt repair']
/pdf/0fc3a586c5a02de384274c9181b65457b35666c2.pdf
alignment, fairness, safety, privacy, and societal considerations
/attachment/690429d5f9bdc24de861fb00aeb1deaeeaa7f638.zip
['ICLR.cc/2026/Conference/Submission24901/Authors']