Dataset Viewer

| id | number | forum | title | abstract | content_TLDR | content_keywords | content_pdf | content_primary_area | content_supplementary_material | signatures |
|---|---|---|---|---|---|---|---|---|---|---|
FPLNSx1jmL
| 25,649
|
FPLNSx1jmL
|
Improving Developer Emotion Classification via LLM-Based Augmentation
|
Detecting developer emotion in commit messages is a critical task for gauging signals of burnout or bug introduction, yet it exposes a significant failure point for large language models, whose emotion taxonomies are ill-suited to technical software-engineering contexts. To address this, the study introduces a dataset of 2,000 GitHub commit messages, human-labeled with a four-label scheme tailored to this domain: Satisfaction, Frustration, Caution, and Neutral. A diagnostic zero-shot evaluation of five pretrained models yields near-chance Macro-F1 (0.13–0.21) and systematic biases. While fine-tuning a code-aware encoder (CodeBERT) establishes a strong baseline (Macro-F1≈0.59), this study introduces CommiTune, a simple hybrid method that first fine-tunes a LLaMA model on the manually labeled dataset, uses it to augment the data, and then fine-tunes CodeBERT on the expanded set, achieving Macro-F1≈0.82 (Accuracy≈0.81) on an untouched test split. This demonstrates that hybrid augmentation can effectively repair the representation gap in technical emotion detection. These results establish reproducible training and validation schemes for software-engineering NLP. The code, prompts, and label mappings will be released upon acceptance.
|
This study introduces a 2,000-message GitHub commit dataset; CommiTune (LLaMA augmentation + CodeBERT) boosts technical emotion detection from Macro-F1 0.13–0.21 to ≈0.82.
|
['Emotion Detection; Commit Messages; Software Engineering NLP; Domain Adaptation; Large Language Models; Data Augmentation']
|
/pdf/8b0aaa58417c15b0cc14a4e9d4ec20929231ca02.pdf
|
foundation or frontier models, including LLMs
|
/attachment/f42e7ff255d681f154b4dfcb9fe260170bcd373c.zip
|
['ICLR.cc/2026/Conference/Submission25649/Authors']
|
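
A minimal sketch of the augment-then-fine-tune recipe the CommiTune abstract describes: a LLaMA model fine-tuned on the labeled commits pseudo-labels an unlabeled pool, then CodeBERT is fine-tuned on the expanded set. The checkpoint name, prompt format, and label-parsing rule are assumptions, not the authors' released code.

```python
# Hypothetical sketch of a CommiTune-style pipeline; checkpoint names,
# prompts, and parsing are assumptions, not the paper's artifacts.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments, pipeline)
from datasets import Dataset

LABELS = ["Satisfaction", "Frustration", "Caution", "Neutral"]

# Stage 1: a LLaMA model already fine-tuned on the 2,000 human-labeled commits
# (checkpoint name is hypothetical) pseudo-labels unlabeled commit messages.
labeler = pipeline("text-generation", model="my-org/llama-commit-emotion")  # assumed

def pseudo_label(message: str) -> str:
    prompt = f"Commit: {message}\nEmotion ({', '.join(LABELS)}):"
    out = labeler(prompt, max_new_tokens=4)[0]["generated_text"]
    return next((l for l in LABELS if l.lower() in out.lower()), "Neutral")

unlabeled = ["fix NPE again, so tired of this", "add unit tests for parser"]
augmented = [{"text": m, "label": LABELS.index(pseudo_label(m))} for m in unlabeled]

# Stage 2: fine-tune CodeBERT on the human-labeled plus augmented data.
tok = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/codebert-base", num_labels=len(LABELS))
ds = Dataset.from_list(augmented).map(
    lambda ex: tok(ex["text"], truncation=True, padding="max_length"))
Trainer(model=model,
        args=TrainingArguments(output_dir="commitune-codebert", num_train_epochs=3),
        train_dataset=ds).train()
```
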
y5rLR9xZpn
| 25,645
|
y5rLR9xZpn
|
Quantum-Inspired Image Encodings for Financial Time-Series Forecasting
|
This study proposes a quantum-inspired methodology that transforms time-series data into complex-valued image representations for prediction. Unlike classical encodings such as the Gramian Angular Field (GAF), Recurrence Plot (RP), and Markov Transition Field (MTF), which rely on additive pairwise relations, our approach embeds both probabilistic amplitudes and dynamic phase information. Observations are first mapped into quantum amplitudes via Gaussian soft encoding, and local temporal structures are incorporated through phase-function encoding, allowing interference effects that reveal volatility, cumulative imbalances, and phase shifts hidden to classical methods. Building on this foundation, we extend GAF, RP, and MTF into their quantum analogues—Q-GAF, Q-RP, and Q-MTF—producing complex-valued images suitable for CNN-based forecasting. Empirical analysis on the S&P 500 and Russell 3000 indices shows that these quantum-inspired encodings substantially improve predictive accuracy. Our contributions are both methodological and empirical: we present a novel representation framework for financial time series and demonstrate that quantum-inspired image encodings capture richer dynamics and previously undetectable patterns, with implications for forecasting and risk modeling.
|
We propose quantum state–based image encodings for time series that capture both probabilistic amplitudes and dynamic phases, yielding superior forecasting performance over classical methods.
|
['Time-series Classification', 'Image Encoding', 'Quantum Physics', 'Convolutional Neural Networks (CNN)', 'Financial Forecasting']
|
/pdf/6c811a881347448ec8bb614191ea5deae32423ae.pdf
|
other topics in machine learning (i.e., none of the above)
| null |
['ICLR.cc/2026/Conference/Submission25645/Authors']
|
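
The abstract above names the two ingredients (Gaussian soft amplitude encoding and a phase function over local dynamics) but not the exact image construction; the sketch below is one plausible reading, with the outer-product interference form being our assumption.

```python
import numpy as np

def quantum_image(x: np.ndarray, n_levels: int = 32, sigma: float = 0.1) -> np.ndarray:
    """Map a 1-D series to a complex-valued image (a guess at the paper's recipe).

    Amplitudes: Gaussian soft encoding of each observation against a grid of
    levels. Phases: local temporal differences. The image is the outer product
    psi psi^H, whose off-diagonal terms carry interference between time steps.
    """
    x = (x - x.min()) / (x.max() - x.min() + 1e-12)         # rescale to [0, 1]
    levels = np.linspace(0.0, 1.0, n_levels)
    amp = np.exp(-((x[:, None] - levels[None, :]) ** 2) / (2 * sigma ** 2))
    amp /= np.linalg.norm(amp, axis=1, keepdims=True)       # unit "quantum" amplitudes
    phase = np.concatenate([[0.0], np.diff(x)]) * np.pi     # phase from local dynamics
    psi = (amp * np.exp(1j * phase)[:, None]).sum(axis=1)   # one complex value per t
    return np.outer(psi, psi.conj())                        # T x T complex image

img = quantum_image(np.sin(np.linspace(0, 8 * np.pi, 128)))
print(img.shape, img.dtype)   # (128, 128) complex128
```
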
kiVIVBmMTP
| 25,642
|
kiVIVBmMTP
|
SAVIOR: Sample-efficient Alignment of Vision-Language Models for OCR Representation
|
Modern enterprises are increasingly adopting business document understanding workflows that leverage Vision Language Models (VLMs) for optical character recognition (OCR), given their ability to jointly model layout and language. However, deployment is impeded by data and compute barriers: large enterprises face de-identification pipelines requiring manual validation, while smaller ones lack access to sufficiently large and varied datasets. Synthetic data pipelines that generate millions of $<$document, OCR$>$ pairs also fall short, as they often fail to capture the nuanced structural and semantic challenges of real-world documents. To address this gap, we introduce SAVIOR, a sample-efficient data curation methodology that identifies common failure cases in pretrained VLMs and explicitly curates examples for challenging scenarios such as vertical text, stylized logo text, fine print, and degraded scans. Using SAVIOR, we construct SAVIOR-TRAIN, a compact training dataset of 2,234 $<$document, OCR$>$ tuples, and SAVIOR-Bench, a benchmark of 509 financial documents annotated by domain experts. We further introduce SAVIOR-OCR, a Qwen-2.5-VL-7B-Instruct model fine-tuned on SAVIOR-TRAIN. Experiments show that SAVIOR-OCR achieves a word-level recall of 0.9257 on SAVIOR-Bench, outperforming PaddleOCR 3.0 (0.8685) and Nanonets-OCR-s (0.9040). Beyond recall, we propose PAIRS, a structure-aware evaluation metric that quantifies layout fidelity via pairwise spatial relations between tokens; SAVIOR-OCR achieves a PAIRS score of 0.802, demonstrating superior preservation of document structure. To the best of our knowledge, SAVIOR is the first methodology to enable sample-efficient adaptation of VLMs for OCR in enterprise settings, delivering both high accuracy and strong layout fidelity with minimal data and compute.
| null |
['Finance', 'Document Processing', 'Optical Character Recognition', 'Semi-structured documents', 'Vision Language Models']
|
/pdf/63700f6bfe57db11c670b4f6e6048349e7bee5df.pdf
|
applications to computer vision, audio, language, and other modalities
| null |
['ICLR.cc/2026/Conference/Submission25642/Authors']
|
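
PAIRS is described above only as scoring "pairwise spatial relations between tokens". A minimal stand-in consistent with that description, where the specific relation set (horizontal/vertical ordering of boxes) is our assumption:

```python
from itertools import combinations

def rel(a, b):
    """Coarse spatial relation between two token boxes (x0, y0, x1, y1)."""
    horiz = "left" if a[2] <= b[0] else "right" if b[2] <= a[0] else "overlap"
    vert = "above" if a[3] <= b[1] else "below" if b[3] <= a[1] else "overlap"
    return horiz, vert

def pairs_score(pred: dict, gold: dict) -> float:
    """Fraction of shared-token pairs whose spatial relation is preserved.
    pred/gold map token text -> bounding box; a stand-in for the PAIRS metric."""
    shared = sorted(set(pred) & set(gold))
    pairs = list(combinations(shared, 2))
    if not pairs:
        return 0.0
    kept = sum(rel(pred[a], pred[b]) == rel(gold[a], gold[b]) for a, b in pairs)
    return kept / len(pairs)

gold = {"Invoice": (0, 0, 60, 10), "Total:": (0, 50, 30, 60), "$42": (40, 50, 60, 60)}
pred = {"Invoice": (0, 0, 55, 9),  "Total:": (0, 48, 28, 58), "$42": (38, 49, 58, 59)}
print(pairs_score(pred, gold))   # 1.0 when all pairwise relations match
```
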
IKJyRyHpHV
| 25,639
|
IKJyRyHpHV
|
Revisiting Multilingual Data Mixtures in Language Model Pretraining
|
The impact of different multilingual data mixtures in pretraining large language models (LLMs) has been a topic of ongoing debate, often raising concerns about potential trade-offs between language coverage and model performance (i.e., the curse of multilinguality).
In this work, we investigate these assumptions by training 1B and 3B parameter LLMs on diverse multilingual corpora, varying the number of languages from 25 to 400. Our study challenges common beliefs surrounding multilingual training.
First, we find that combining English and multilingual data does not necessarily degrade the in-language performance of either group, provided that languages have a sufficient number of tokens included in the pretraining corpus.
Second, we observe that using English as a pivot language (i.e., the language with the highest data proportion) yields benefits across language families, and contrary to expectations, selecting a pivot language from within a specific family does not consistently improve performance for languages within that family. Lastly, we do not observe a significant "curse of multilinguality" as the number of training languages increases in models at this scale.
Our findings suggest that multilingual data, when balanced appropriately, can enhance language model capabilities without compromising performance, even in low-resource settings.
| null |
['Multilingual LLMs', 'multilinguality', 'cross-lingual transfer', 'Multilingual Data Mixture']
|
/pdf/e0fea52c45b605ba416d607f3f96c16acaa3dd88.pdf
|
foundation or frontier models, including LLMs
| null |
['ICLR.cc/2026/Conference/Submission25639/Authors']
|
GGg2BmcBEp
| 25,633
|
GGg2BmcBEp
|
One-Shot Style Personalization for RL Agents via Latent Discriminator
|
Reinforcement learning (RL) has achieved remarkable success in training agents with high-performing policies, and recent works have begun to address the critical challenge of aligning such policies with human preferences. While these efforts have shown promise, most approaches rely on large-scale data and do not generalize well to novel forms of preferences. In this work, we formalize one-shot style alignment as an extension of the preference alignment paradigm. The goal is to enable RL agents to adapt to human-specified styles from a single example, thereby eliminating the reliance on large-scale datasets and the need for retraining. To achieve this, we propose a framework that infers an interpretable latent style vector through a learned discriminator and adapts a pretrained base policy using a style reward signal during online interaction. Our design enables controllable and data-efficient alignment with target styles while maintaining strong task performance, and further enables smooth interpolation across unseen style compositions. Experiments across diverse environments with varying style preferences demonstrate precise style alignment, strong generalization, and task competence.
|
One-shot style alignment for RL agents via latent inference from a single trajectory and reward-guided finetuning, enabling controllable and generalizable behavior
|
['Reinforcement Learning', 'Agent Alignment']
|
/pdf/473ff82716b5f4f74429f88048f7e99c922f25e9.pdf
|
reinforcement learning
| null |
['ICLR.cc/2026/Conference/Submission25633/Authors']
|
GBlHo6mPIW
| 25,632
|
GBlHo6mPIW
|
InfiAgent: Self-Evolving Pyramid Agent Framework for Infinite Scenarios
|
Large Language Model (LLM) agents have demonstrated remarkable capabilities in organizing and executing complex tasks, and many such agents are now widely used in various application scenarios. However, developing these agents requires carefully designed workflows, carefully crafted prompts, and iterative tuning, which demands both LLM expertise and domain-specific knowledge. This reliance on hand-crafted engineering hinders the scalability and cost-effectiveness of LLM agents across a wide range of industries. To address these challenges, we propose \textbf{InfiAgent}, a Pyramid-like DAG-based Multi-Agent Framework that can be applied to \textbf{infi}nite scenarios, which introduces several key innovations: a generalized "agent-as-a-tool" mechanism that automatically decomposes complex agents into hierarchical multi-agent systems; a dual-audit mechanism that ensures the quality and stability of task completion; an agent routing function that enables efficient task-agent matching; and an agent self-evolution mechanism that autonomously restructures the agent DAG based on new tasks, poor performance, or optimization opportunities. Furthermore, InfiAgent's atomic task design supports agent parallelism, significantly improving execution efficiency. This framework evolves into a versatile pyramid-like multi-agent system capable of solving a wide range of problems. Evaluations on multiple benchmarks demonstrate that InfiAgent achieves 9.9\% higher performance than ADAS (a comparable automatically generated agent framework), while a case study of the AI research assistant InfiHelper shows that it generates scientific papers that have received recognition from human reviewers at top-tier IEEE conferences.
|
A novel multi-agent system framework that can be easily extended to many scenarios, can design agents according to tasks, and can self-evolve
|
['LLM Agents', 'Large Language Models']
|
/pdf/a98f8f6f24edaca39ad259787be5d3737d439ffc.pdf
|
applications to robotics, autonomy, planning
|
/attachment/4b86d2928d3d704b59a5ae0b2e2f95385ad72426.zip
|
['ICLR.cc/2026/Conference/Submission25632/Authors']
|
NWoHQbALl4
| 25,628
|
NWoHQbALl4
|
Compositional HyperModules for Few-Shot Code Adaptation in Meta-Reinforcement Learning
|
We propose Compositional HyperModules (CHM), a novel architectural framework for few-shot code adaptation in meta-reinforcement learning (Meta-RL) that dynamically composes reusable neural modules to capture the syntactic and semantic structure of code. Existing Meta-RL methods often struggle with code because they are monolithic and do not model the hierarchical, compositional nature of programming languages. CHM combines a transformer-based hypernetwork with a hierarchical code representation layer, allowing the system to break code apart into functional blocks (e.g., loops, conditionals) and recompose them smoothly for new tasks. The hypernetwork generates task-specific weights for lightweight sub-modules, which operate on structured code subgraphs while residual connections preserve the functionality of pretrained modules. In addition, a gated attention mechanism aggregates the module outputs into a global representation that guides a Meta-RL policy network in generating context-aware actions (e.g., code edits). In contrast to previous work, CHM explicitly models code compositionality, enabling interpretable and efficient few-shot adaptation without full fine-tuning. Experiments on code synthesis and bug fixing demonstrate a 20\% improvement in few-shot accuracy over monolithic baselines, highlighting the framework's ability to generalize across diverse code patterns. The modular design not only provides adaptability but also reveals which neural components correspond to specific code constructs, connecting neural procedures to program analysis.
| null |
['Meta-Reinforcement Learning']
|
/pdf/770ffe4d1685855b8965e603c13a7584fd29021d.pdf
|
transfer learning, meta learning, and lifelong learning
| null |
['ICLR.cc/2026/Conference/Submission25628/Authors']
|
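
A compact sketch of the two mechanisms the CHM abstract names: a hypernetwork emitting task-specific sub-module weights, and gated attention over module outputs. All dimensions and the linear module form are illustrative assumptions.

```python
import torch
import torch.nn as nn

class HyperModule(nn.Module):
    """Sub-module whose weights are generated per task by a hypernetwork.
    The linear form and dimensions are illustrative assumptions."""
    def __init__(self, task_dim: int, feat_dim: int):
        super().__init__()
        self.weight_gen = nn.Linear(task_dim, feat_dim * feat_dim)
        self.bias_gen = nn.Linear(task_dim, feat_dim)

    def forward(self, h: torch.Tensor, task: torch.Tensor) -> torch.Tensor:
        B, D = h.shape
        W = self.weight_gen(task).view(B, D, D)      # task-specific weights
        b = self.bias_gen(task)
        out = torch.bmm(h.unsqueeze(1), W.transpose(1, 2)).squeeze(1) + b
        return h + torch.tanh(out)                   # residual preserves pretrained behavior

class GatedComposition(nn.Module):
    """Gated attention over per-construct modules (loops, conditionals, ...)."""
    def __init__(self, task_dim: int, feat_dim: int, n_modules: int = 3):
        super().__init__()
        self.mods = nn.ModuleList(HyperModule(task_dim, feat_dim)
                                  for _ in range(n_modules))
        self.gate = nn.Linear(feat_dim, 1)

    def forward(self, h: torch.Tensor, task: torch.Tensor) -> torch.Tensor:
        outs = torch.stack([m(h, task) for m in self.mods])   # (M, B, D)
        attn = torch.softmax(self.gate(outs).squeeze(-1), dim=0).unsqueeze(-1)
        return (attn * outs).sum(0)                  # global code representation

h, task = torch.randn(4, 64), torch.randn(4, 16)
print(GatedComposition(task_dim=16, feat_dim=64)(h, task).shape)  # (4, 64)
```
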
J0eNXpnrc7
| 25,627
|
J0eNXpnrc7
|
All-in-One: Boosting Basic Capabilities in one Omni-MLLM to Enhance Movie Understanding
|
Movie understanding remains challenging: a movie involves many characters with complex relationships and is edited with artistic language to appeal to audiences, aspects neglected by current multimodal large language models (MLLMs). Only a few previous works propose ideas to identify characters and integrate ID information into models, but they use cascaded models, or rely on vision and scripts alone while ignoring audio. To address these problems, we propose an all-in-one Omni-MLLM with built-in basic capabilities for ID identification, shot-level description, and answering critical sub-questions during reasoning. First, we construct identity-related data consisting of 12 fine-grained character-centric tasks to improve the model's ability to identify characters from vision and audio. Second, we leverage frame and shot descriptions to alleviate the difficulty of training. Third, we explore how to further enhance our model using Chain-of-Thought (CoT) data from an advanced model. Experimental results show that our proposed model achieves stable improvements on both the ID-aware movie understanding question set StoryQA and the general video understanding benchmark VideoMME. Ablation studies confirm the positive contributions of all of our proposed ideas.
|
An end-to-end omni-multimodal language model for id-aware video understanding
|
['omni-multimodal large language models', 'identity-aware', 'video understanding']
|
/pdf/c1c1f799d21fb090a70853b9b03f7eeb499b7468.pdf
|
applications to computer vision, audio, language, and other modalities
| null |
['ICLR.cc/2026/Conference/Submission25627/Authors']
|
VBrswK6tqS
| 25,623
|
VBrswK6tqS
|
Modality-Swap Distillation: Rendering Textual Reasoning into Visual Supervision
|
Visual reasoning over structured data such as tables is a critical capability for modern vision-language models (VLMs), yet current benchmarks remain limited in scale, diversity, or reasoning depth, especially when it comes to rendered table images. Addressing this gap, we introduce \textbf{Visual-TableQA}, a large-scale, open-domain multimodal dataset specifically designed to evaluate and enhance visual reasoning over complex tabular data. Our generation pipeline is \textbf{modular, scalable, and fully autonomous}, involving multiple reasoning LLMs collaborating across distinct roles: generation, validation, and inspiration. \textbf{Visual-TableQA} comprises 2.5k richly structured LaTeX-rendered tables and 9k reasoning-intensive QA pairs, all produced at a cost of under \$100. To promote diversity and creativity, our pipeline performs \textbf{multi-model collaborative data generation} via \textbf{cross-model prompting (‘inspiration’)} and LLM-jury filtering. Stronger models seed layouts and topics that weaker models elaborate, collectively distilling diverse reasoning patterns and visual structures into the dataset. Empirical results show that models fine-tuned on \textbf{Visual-TableQA} generalize robustly to external benchmarks, outperforming several proprietary models despite the dataset’s synthetic nature. The full pipeline and resources are publicly available.
| null |
['Reasoning+LLM']
|
/pdf/197ef6025775e057224b91ee9813571035ecd3e5.pdf
|
transfer learning, meta learning, and lifelong learning
|
/attachment/69a2bd72eb9c39697441db8d9f09d53ad9c1532b.pdf
|
['ICLR.cc/2026/Conference/Submission25623/Authors']
|
0xHWd4CUaX
| 25,618
|
0xHWd4CUaX
|
Contrastive Code Graph Embeddings for Reinforcement Learning-Based Automated Code Refactoring
|
We propose a novel reinforcement learning (RL) framework for automated code refactoring that uses contrastively pre-trained code graph embeddings to overcome the limitations of traditional heuristic-based reward functions. The key challenge is balancing syntactic improvements against preserving the semantics of the code being refactored, which existing RL approaches often fail to achieve because of the handcrafted nature of their metrics. Our approach presents a syntax-guided contrastive encoder that learns structure-invariant representations of code graphs by relating structurally augmented variants under a self-supervised objective. These embeddings are then combined with standard measures of code quality in a composite reward function, allowing the RL agent to reason about both low-level syntactic changes and high-level semantic changes. The policy network itself, a graph attention network, operates directly on the joint representation space, modeling contextual dependencies in the code structure.
| null |
['Code Refactoring']
|
/pdf/04fca20531f3055d6a923ceecac8bb98608a947e.pdf
|
transfer learning, meta learning, and lifelong learning
| null |
['ICLR.cc/2026/Conference/Submission25618/Authors']
|
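
The composite reward described above (contrastive embedding similarity plus handcrafted quality metrics) reduces to a short function; the cosine form and the weighting below are assumed, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def composite_reward(z_orig: torch.Tensor, z_ref: torch.Tensor,
                     quality_gain: float, alpha: float = 0.5) -> float:
    """Reward = semantic-preservation term + syntactic-quality term.

    z_orig / z_ref: embeddings of the original and refactored code graphs from
    a contrastive pre-trained encoder. quality_gain: improvement in handcrafted
    code-quality metrics (e.g. complexity reduction). alpha is an assumed weight.
    """
    semantic = F.cosine_similarity(z_orig, z_ref, dim=-1).item()  # ~1 if meaning kept
    return alpha * semantic + (1 - alpha) * quality_gain

z_a = torch.randn(128)
z_b = z_a + 0.05 * torch.randn(128)      # refactor with nearly identical semantics
print(round(composite_reward(z_a, z_b, quality_gain=0.3), 3))
```
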
FTQZvzRKEg
| 25,616
|
FTQZvzRKEg
|
Model-Heterogeneous Federated Prompt Learning
|
Large-scale vision-language models (VLMs) have shown remarkable transferability across tasks, and their integration into federated learning (FL) frameworks offers promising privacy-preserving learning capabilities. Recent advances in federated prompt learning (FPL) leverage prompt tuning to reduce computational and communication overhead. However, existing FPL methods assume a homogeneous model setting, where all clients share the same VLMs, which is an unrealistic constraint given the heterogeneous computational capacities of clients in real-world scenarios. To bridge this gap, we propose model-heterogeneous federated prompt learning (MHFPL), a novel setting where clients with diverse VLM backbones collaboratively learn prompts. We further introduce FedAPPR, a principled framework for MHFPL built on two key components: (a) server-level adversarial prompt alignment for aligning client semantics via adversarial training, and (b) client-level proximity regularization to further constrain prompt drift between clients. Extensive experiments on six datasets with diverse architectures and data distributions demonstrate the superiority and generality of FedAPPR compared to baselines, confirming it as an effective solution for FL scenarios with varying VLMs.
| null |
['federated learning', 'prompt learning', 'heterogeneous model', 'vision-language models']
|
/pdf/150cbd9a2762223c29f23524e22c2be431e85d1c.pdf
|
learning theory
| null |
['ICLR.cc/2026/Conference/Submission25616/Authors']
|
9aPd1yEGa0
| 25,615
|
9aPd1yEGa0
|
Robust Prediction-Powered Inference under Data Corruption
|
This paper proposes a robust prediction-powered semi-supervised statistical learning and inference framework. Existing prediction-powered inference (PPI) methods use pre-trained machine-learning models to impute unlabeled samples and calibrate the imputation bias, based on the assumption of covariate homogeneity between the labeled and unlabeled datasets. However, violations of the homogeneity assumption, such as distribution shifts and data corruption, can undermine the effectiveness of semi-supervised approaches and even break down the learning process. In response, we introduce robust estimation techniques into the imputation-then-calibration procedure of PPI. The approach can be easily integrated with general PPI methods and improves their robustness against heterogeneity and corruption in the unlabeled set. To make full use of the labeled and unlabeled data, a cross-validation procedure is also developed for selecting the shift/contamination level. Theoretical analysis shows that our method is consistent and robust under mild conditions. Numerical simulations and real-data applications also demonstrate the robustness and superiority of the proposed method.
|
This paper proposes a robust prediction-powered semi-supervised statistical learning and inference framework under data corruption.
|
['semi-supervised learning', 'prediction-powered inference', 'robustness']
|
/pdf/3bc53b241b35fb12e7647819fe93e65051eb7a69.pdf
|
unsupervised, self-supervised, semi-supervised, and supervised representation learning
| null |
['ICLR.cc/2026/Conference/Submission25615/Authors']
|
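
For context, the classic prediction-powered mean estimator (Angelopoulos et al., 2023) is a one-liner, and the abstract's idea of robustifying the imputation term can be sketched with a trimmed mean; the trimming choice is our illustration, not the paper's estimator.

```python
import numpy as np
from scipy.stats import trim_mean

def ppi_mean(y_lab, f_lab, f_unlab):
    """Classic prediction-powered mean: imputed mean plus a rectifier that
    debiases it with labeled residuals."""
    return f_unlab.mean() + (y_lab - f_lab).mean()

def robust_ppi_mean(y_lab, f_lab, f_unlab, trim=0.1):
    """Robust variant (our illustration): trim the unlabeled imputation term so
    a corrupted fraction of unlabeled predictions cannot dominate the estimate."""
    return trim_mean(f_unlab, trim) + (y_lab - f_lab).mean()

rng = np.random.default_rng(0)
y = rng.normal(1.0, 1.0, 200)                 # labeled outcomes
f_l = y + rng.normal(0, 0.3, 200)             # model predictions on labeled set
f_u = rng.normal(1.0, 1.0, 5000) + rng.normal(0, 0.3, 5000)
f_u[:250] += 20.0                             # 5% corrupted unlabeled predictions
print(ppi_mean(y, f_l, f_u), robust_ppi_mean(y, f_l, f_u))  # robust stays near 1
```
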
7wjqoJj62s
| 25,613
|
7wjqoJj62s
|
Soft Non-Diagonality Penalty Enables Latent Space-Level Interpretability of pLM at No Performance Cost
|
The emergence of large-scale protein language models (pLMs) has led to significant performance gains in predictive protein modeling. However, this comes at a high price in interpretability, and efforts to push representation learning towards explainable feature spaces remain scarce. The prevailing use of domain-agnostic and sparse encodings in such models fosters a perception that developing parameter-efficient yet generalizable models in a low-data regime is not feasible. In this work, we explore an alternative approach to develop compact models with interpretable embeddings while maintaining competitive performance. Using a BiLSTM-AE model trained on positional property matrices as an example, we introduce a soft weight-matrix non-diagonality penalty. Through Jacobian analysis, we show that this penalty aligns embeddings with the initial feature space while causing only a marginal decrease in performance on a suite of four common peptide biological activity classification benchmarks. Moreover, we demonstrate that a contrastive loss based on clustering of one-hot-encoded sequences produces a semantically meaningful latent space and further improves benchmark performance. The use of amino acid physicochemical properties and DFT-derived cofactor interaction energies as input features provides a foundation for intrinsic interpretability, which we demonstrate on fundamental peptide properties. The resulting model is over 33,000 times more compact than the state-of-the-art pLM ProtT5. It demonstrates performance stability across diverse benchmarks without task-specific fine-tuning, showcasing that domain-tailored architectural design can yield highly parameter-efficient models with fast inference and preserved generalization capabilities.
| null |
['Peptides', 'representation learning', 'contrastive learning']
|
/pdf/3aa973ba48cc99f7500cd1dd0943ca2055b6a549.pdf
|
unsupervised, self-supervised, semi-supervised, and supervised representation learning
| null |
['ICLR.cc/2026/Conference/Submission25613/Authors']
|
WtbIU6tDc3
| 25,612
|
WtbIU6tDc3
|
Adaptive Mixing of Non-Invariant Information for Generalized Diffusion Policy
|
Diffusion policies (DP) have emerged as a leading paradigm for learning-based robotic manipulation, offering temporally coherent action synthesis from high-dimensional observations.
However, despite their centrality to downstream tasks, DPs exhibit fragile generalization capabilities. Minor variations in observations, such as changes in lighting, appearance, or camera pose, can lead to significant performance degradation, even when operating on familiar trajectories.
To address this issue, we introduce a factorized, fine-grained benchmark that isolates the impact of individual perturbation factors on zero-shot generalization.
Using this benchmark, we identify camera pose as a dominant driver of performance degradation, explaining the pronounced drops observed at higher levels of domain randomization.
We therefore propose $A$daptive $M$ixing of non-$I$nvariant (AMI) information, a model-agnostic training strategy that requires no additional data and reinforces invariant correlations while suppressing spurious ones.
Across simulated evaluations, AMI consistently and significantly outperforms strong baselines, mitigating DP's sensitivity to observation shifts and yielding robust zero-shot generalization over diverse perturbation factors.
We further validate these improvements in real-world experiments by zero-shot deploying the policies in natural settings, demonstrating their robustness to observation variations.
| null |
['Diffusion Policy', 'Manipulation']
|
/pdf/65c638b4a949f62f9926298bab9be81470a205a8.pdf
|
applications to robotics, autonomy, planning
|
/attachment/4fc49817f1f140ae1db5143f550f572437d6ccd5.zip
|
['ICLR.cc/2026/Conference/Submission25612/Authors']
|
1nbTSuIdQ7
| 25,609
|
1nbTSuIdQ7
|
Structure-Aware Bipartite Representations for Efficient MILP Branching
|
Efficient branching variable selection is pivotal to the performance of Branch-and-Bound (B\&B) algorithms in Mixed Integer Linear Programming (MILP). Despite advances in traditional heuristics and graph-based learning methods, these approaches often fail to exploit the latent block structures inherent in many MILP problems. To address this limitation, we propose a novel graph representation that incorporates explicit block-structure annotations. By classifying variables and constraints according to their roles in block decompositions and augmenting edges with block identifiers, our method enables MILP solvers to better recognize localized patterns and global couplings. Through extensive experiments on six diverse MILP benchmarks, we demonstrate that our approach significantly improves upon state-of-the-art graph neural network baselines. Specifically, our method reduces search tree sizes by 2\%--4\% on standard instances and by 11\%--13\% on transfer instances, while decreasing solver runtime by 6\%--6.66\% on standard instances and by 5.5\%--6\% on transfer instances. Notably, these improvements are achieved without compromising solution quality. Our work highlights the importance of integrating structural priors into combinatorial optimization frameworks.
| null |
['Combinatorial optimization', 'Mixed Integer Linear Program', 'Branch And Bound', 'Block Structure', 'Graph Neural Networks']
|
/pdf/84f2f551a2a4198e6e4ca14c581b9d2019a5ee47.pdf
|
optimization
| null |
['ICLR.cc/2026/Conference/Submission25609/Authors']
|
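
A toy version of the block-annotated bipartite graph the MILP-branching abstract describes: variables carry block identifiers from a decomposition, and constraints touching several blocks are tagged as coupling. The exact annotation scheme is our guess.

```python
import numpy as np

def bipartite_with_blocks(A: np.ndarray, blocks: list):
    """Bipartite MILP graph (constraints vs. variables) with block annotations.

    A: constraint matrix; blocks[j]: block id of variable j from some block
    decomposition. Edges carry the block id of their variable, and constraints
    touching more than one block are tagged as coupling; this is one plausible
    reading of the paper's annotation scheme."""
    edges = [(i, j, blocks[j]) for i, j in zip(*np.nonzero(A))]
    con_roles = ["coupling" if len({blocks[j] for j in np.nonzero(A[i])[0]}) > 1
                 else "local" for i in range(A.shape[0])]
    return edges, con_roles

A = np.array([[1, 1, 0, 0],     # block-0 constraint
              [0, 0, 1, 1],     # block-1 constraint
              [1, 0, 1, 0]])    # couples both blocks
print(bipartite_with_blocks(A, blocks=[0, 0, 1, 1])[1])
# ['local', 'local', 'coupling']
```
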
kjVBqkLrJa
| 25,608
|
kjVBqkLrJa
|
Style2Shape: Image Style Guided 3D Shape Material Generation
|
This paper presents Style2Shape, a novel framework for generating physically-based rendering (PBR) materials for 3D models from a single reference image. Unlike existing methods limited by the diversity of procedural material libraries or producing non-editable representations, our approach combines procedural materials with generated textures via differentiable rendering. Our key insight is that procedural parameters ensure reflectance correctness while generated textures capture arbitrary appearances; their learnable combination achieves both physical plausibility and visual fidelity. The framework operates in three stages: (1) structure-guided appearance transfer that synthesizes geometrically-aligned supervision, (2) hybrid PBR material initialization that retrieves procedural materials based on physical properties and generates complementary textures for appearance details, and (3) physics-based optimization jointly refining all components through differentiable rendering. Extensive experiments demonstrate that our approach generates high-fidelity results, producing editable PBR materials that faithfully reproduce reference appearances while maintaining physical plausibility. The generated assets are structured to be compatible with standard 3D rendering workflows.
| null |
['Material Generation; Differentiable Rendering; Procedural Materials; Appearance Transfer; Physically-Based Rendering']
|
/pdf/381919a74bdc1f7e494544cf0e4f4e72544557fa.pdf
|
applications to computer vision, audio, language, and other modalities
|
/attachment/a6fa5e25a86596f96c1fd0e111a426eac7d34bbf.zip
|
['ICLR.cc/2026/Conference/Submission25608/Authors']
|
6T3wJQhvc3
| 25,607
|
6T3wJQhvc3
|
Task Tokens: A Flexible Approach to Adapting Behavior Foundation Models
|
Recent advancements in imitation learning for robotic control have led to transformer-based behavior foundation models (BFMs) that enable multi-modal, human-like control for humanoid agents. These models generate solutions when conditioned on high-level goals or prompts, for example, walking to a coordinate when conditioned on the position of the robot's pelvis. While excelling at zero-shot generation of robust behaviors, BFMs often require meticulous prompt engineering for specific tasks, potentially yielding suboptimal results. In this work, we introduce ``Task Tokens'' - a method to effectively tailor BFMs to specific tasks while preserving their flexibility. Our approach integrates naturally within the transformer architecture of BFMs. Task Tokens trains a task-specific encoder (tokenizer), with the original BFM remaining untouched. Our method reduces trainable parameters per task by up to $\times 125$ and converges up to $\times 6$ faster compared to standard baselines. In addition, by keeping the original BFM unchanged, Task Tokens enables utilizing the pre-existing encoders. This allows incorporating user-defined priors, balancing reward design and prompt engineering.
We demonstrate Task Tokens' efficacy across various tasks, including out-of-distribution scenarios, and show their compatibility with other prompting modalities. Our results suggest that Task Tokens offer a promising approach for adapting BFMs to specific control tasks while retaining their generalization capabilities.
|
Task Tokens enable task-specific adaptation of behavior foundation models by learning a reinforcement-trained encoder, enhancing control without compromising generalization.
|
['Reinforcement Learning', 'Hierarchical Reinforcement Learning', 'Behavior Foundation Models', 'Humanoid Control']
|
/pdf/6b4310625f7f7a84e7732e6b38ca49de469be831.pdf
|
reinforcement learning
|
/attachment/2b5544768859244e75debfb0bcd1a1b20c092a6f.zip
|
['ICLR.cc/2026/Conference/Submission25607/Authors']
|
9ITquDr1G1
| 25,606
|
9ITquDr1G1
|
Contrastive-Aligned Knowledge Distillation for Collaborative Code Completion via Multi-Agent Reinforcement Learning
|
We introduce a novel multi-agent reinforcement learning (MARL) framework for collaborative code completion, addressing the key requirement for successful collaboration: balancing semantic alignment and specialized expertise among the agents. The proposed method incorporates a Contrastive Alignment Module (CAM) and a Distilled Knowledge Transfer (DKT) mechanism, which allow agents to share coherent representations without losing domain-specific knowledge. CAM aligns embeddings across agents through a contrastive learning objective, creating a shared representation space in which all embeddings agree without homogenizing individual capabilities, while DKT dynamically distills knowledge from a high-performing teacher agent to the others using a regularized KL-divergence objective.
| null |
['Contrastive-Aligned Knowledge']
|
/pdf/eeafd4319a9ce1ad4fb803ca5798c5d18b4eaab9.pdf
|
transfer learning, meta learning, and lifelong learning
| null |
['ICLR.cc/2026/Conference/Submission25606/Authors']
|
VYLwMvhdXI
| 25,601
|
VYLwMvhdXI
|
Scaling Laws for Generative Reward Models
|
We study the scaling behavior of generative reward models (GenRMs) for reinforcement learning from AI feedback (RLAIF) when used as drop-in replacements for Bradley-Terry models to optimize policies. Building on established scaling laws for reward model overoptimization, we investigate whether GenRMs, particularly those employing chain-of-thought reasoning, exhibit different robustness properties as policies drift from their training distribution during gradient updates. Using the Qwen3 model family (0.6B--14B), our study includes systematic evaluation of thinking GenRMs (trained via GRPO) against answer-only variants (trained via SFT) across policy size, reward model size, reward model type, training budget, and the $\beta$ parameter in online DPO. Our results show that the most decisive determinants of policy quality are reward model size and training duration, followed by policy model scale and GenRM type. While thinking variants trained with GRPO consistently outperform answer-only models on validation tasks, these substantial gains diminish when deployed for downstream policy optimization, where classifier-based reward models can match or exceed GenRM performance despite the latter's significant computational overhead. To measure alignment beyond saturated validation metrics, we employ ELO-based rankings, providing fine-grained proxy-gold alignment metrics that surpass the simple win rates against reference policies used in previous work.
|
First end-to-end pipeline deploying trained GenRMs for online policy optimization, investigating scaling laws across model sizes, training budgets, and chain-of-thought reasoning
|
['Reinforcement Learning From AI Feedback', 'RLHF', 'Reward Hacking']
|
/pdf/be30b03ba6b874acefb6ce2e763bbcf6b92a6ded.pdf
|
foundation or frontier models, including LLMs
| null |
['ICLR.cc/2026/Conference/Submission25601/Authors']
|
TjF9WLcu8o
| 25,599
|
TjF9WLcu8o
|
Contrastive-Online-Meta (COM): A Dynamic Adaptation Mechanism for Instruction-Tuned CodeLLMs
|
We propose Contrastive-Online-Meta (COM), a dynamic adaptation framework for instruction-tuned CodeLLMs that addresses catastrophic forgetting and noisy feedback at deployment time. The framework combines contrastive pre-training with online meta-learning to separate task-invariant representation learning from fast adaptation, preserving core programming knowledge while enabling real-time adaptation. A contrastive pre-training module first clusters semantically similar instructions and pushes apart dissimilar ones, providing robustness to task variations. During inference, an online meta-learner consumes streaming instruction-feedback pairs and performs lightweight gradient-based updates to meta-parameters, dynamically adjusting model behavior without destabilizing the pretrained model. Furthermore, a dynamic memory buffer maintains coherence with recent interactions by replaying stored pairs for contrastive matching. Unlike monolithic fine-tuning or prompt engineering, COM explicitly separates representation learning from adaptation, avoiding forgetting and overfitting. Experiments on benchmark datasets show that the framework achieves better adaptation efficiency and task generalization than static and incremental tuning baselines. The proposed method bridges offline pre-training and online deployment, providing a scalable solution for real-world code generation systems that require continuous learning. Its modular design also supports integration with existing CodeLLMs, making it practical for diverse programming-assistance scenarios.
| null |
['Instruction-Tuned CodeLLMs']
|
/pdf/4de9e49419c7f6d5b1e39417896aecd9ce88ec85.pdf
|
transfer learning, meta learning, and lifelong learning
| null |
['ICLR.cc/2026/Conference/Submission25599/Authors']
|
wiNlIYqe6u
| 25,598
|
wiNlIYqe6u
|
FedPAC: Consistent Representation Learning for Federated Unsupervised Learning under Data Heterogeneity
|
Federated unsupervised learning enables collaborative model training on decentralized unlabeled data but faces critical challenges under data heterogeneity, which often leads to representation collapse from weak supervisory signals and semantic misalignment across clients. Without consistent semantic structure constraints, local models learn disparate feature spaces, and conventional parameter averaging fails to produce a coherent global model. To address these issues, we propose Federated unsupervised learning with Prototype Anchored Consensus (FedPAC), a novel framework that establishes a consistent representation space via a set of learnable prototypes. FedPAC introduces a dual-alignment objective during local training: a semantic alignment loss that steers local models towards a prototype-anchored consensus to ensure cross-client semantic consistency, coupled with a representation alignment loss that promotes the learning of discriminative and stable features. On the server, prototypes are aggregated by an optimization-based strategy that preserves semantic knowledge and ensures the prototypes remain representative. We provide a rigorous convergence analysis for our method, formally proving its convergence under mild assumptions. Extensive experiments on benchmarks including CIFAR-10 and CIFAR-100 demonstrate that FedPAC significantly outperforms state-of-the-art methods across a wide range of heterogeneous settings.
| null |
['federated learning', 'unsupervised representation learning', 'prototype learning']
|
/pdf/4a2eef703cdc74532207e7f9a93554c1190c6ebb.pdf
|
unsupervised, self-supervised, semi-supervised, and supervised representation learning
| null |
['ICLR.cc/2026/Conference/Submission25598/Authors']
|
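
One way FedPAC's prototype-anchored semantic alignment could look: a softmax over feature-prototype similarities, pulled toward pseudo-assignments. The loss form and temperature below are assumptions, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def semantic_alignment_loss(feats: torch.Tensor, protos: torch.Tensor,
                            assign: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Pull each local feature toward its assigned shared prototype.

    feats: (B, D) client features; protos: (K, D) learnable global prototypes;
    assign: (B,) pseudo-assignments. The softmax-over-prototypes form is an
    assumed instantiation of the prototype-anchored consensus."""
    logits = F.normalize(feats, dim=1) @ F.normalize(protos, dim=1).T / tau
    return F.cross_entropy(logits, assign)

feats = torch.randn(32, 64)
protos = torch.randn(10, 64, requires_grad=True)   # shared across clients
assign = torch.randint(0, 10, (32,))
print(semantic_alignment_loss(feats, protos, assign).item())
```
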
uKrcWZ2V0F
| 25,593
|
uKrcWZ2V0F
|
Training as Computation: A Resource-Bounded Theory of Continual Self-Play Learning
|
We study \emph{training as computation} in a continual self-play setting, where a single reasoning model proposes tasks, solves them, and updates itself using verifiable signals from an external executor--verifier interface. Rather than focusing on one-shot models, we analyze the \emph{process-level} dynamics of learning under explicit resource budgets: each generation step is capped by an output budget, and the executor/verifier operate within bounded working space. Within this framework we (i) formalize a general generator--executor--verifier--buffer loop for continual learning with self-proposed curricula; (ii) prove \emph{resource-bounded completeness} at the process level---the set of functions computable by the evolving loop up to time $t$ matches a corresponding $\mathrm{SPACE}[\cdot]$ class determined by the budgets; and (iii) show monotone capability growth under mild, length-aware exploration schedules and curriculum learnability constraints, without assuming non-vanishing exploration or relying on supervised traces. Conceptually, the results separate \emph{capability universality} (as properties of the training \emph{process}) from \emph{alignment and safety} (properties of objectives and verifiers). Empirically, a light-weight self-play prototype on synthetic program-execution and abduction/induction tasks corroborates the theory: the loop reliably expands its reachable task set over time and benefits from curriculum learnability, while remaining grounded by verifiable rewards. This positions continual self-play as a principled path to scalable, data-free improvement under explicit resource budgets.
| null |
['Self-Play Learning']
|
/pdf/cf04e1b3dfd9e5eac43b68eda39761846e95338c.pdf
|
reinforcement learning
| null |
['ICLR.cc/2026/Conference/Submission25593/Authors']
|
cd5bhCHbMe
| 25,592
|
cd5bhCHbMe
|
hDRIVE: HDR Image Visual Evaluation Metric for SDR to HDR Upconversion Quality Assessment
|
HDR displays are becoming increasingly common on both TVs and mobile devices, which requires adapting existing legacy SDR content to HDR screens. Several algorithms have been developed for SDR-to-HDR upconversion, also known as Inverse Tone Mapping (ITM). However, there is still a lack of reliable metrics for assessing the quality of these algorithms, due in part to the ill-posed nature of the ITM task, where the most visually pleasing result can differ significantly from the original image. In this work, we propose a novel state-of-the-art no-reference video quality metric for evaluating upconverted HDR content. To support our approach, we collect a large-scale dataset of human visual preferences, capturing both the perceived visual appearance and quality of HDR videos. Such an HDR ITM video quality metric can drive rapid advancement in the development and benchmarking of SDR-to-HDR algorithms. Both the metric and the training dataset are publicly available for download via the provided link.
| null |
['HDR', 'SDR', 'inverse tone mapping', 'video quality assessment']
|
/pdf/ca12c73c3f295e0ef0523c2e529d93d673d1aa75.pdf
|
applications to computer vision, audio, language, and other modalities
| null |
['ICLR.cc/2026/Conference/Submission25592/Authors']
|
0ZRne2Nt8t
| 25,589
|
0ZRne2Nt8t
|
MAIG: Multi-agent system for Academic Illustration Generation based on deep search and reflection
|
While text-to-image models have revolutionized creative content generation, they fall short in the domain of academic illustration, which demands stringent scientific accuracy and informational completeness, creating a significant bottleneck in automated scientific communication. Existing models often produce illustrations that are factually incorrect, omit critical information, and are limited to simple structured diagrams, failing to render the complex, unstructured conceptual visuals common in science. To address these challenges, we introduce \textbf{MAIG}, a novel multi-agent framework that mimics an expert's workflow. MAIG first employs a deep research agent to ground the generation process in a factual knowledge base, ensuring all necessary background information is available. Subsequently, reflection and editing agents iteratively verify the visual output against this knowledge, identifying and correcting scientific errors. Because evaluating scientific figures is a parallel challenge plagued by subjective and unscalable methods, we also propose a novel Question-Answering (QA) based Evaluator. This method leverages the strong reasoning capabilities of modern Multimodal Large Language Models (MLLMs) to quantitatively measure both informational completeness and factual correctness, providing an objective and scalable assessment of an illustration's quality. Extensive experiments across various scientific disciplines demonstrate the effectiveness of MAIG, which achieves minimal factual errors and the most complete knowledge coverage, significantly outperforming state-of-the-art models. Our results validate that the proposed research-reflect-edit loop is crucial for generating high-fidelity scientific illustrations and that our QA-based evaluator offers a reliable assessment methodology, together forming a comprehensive solution for advancing automated scientific visualization.
| null |
['Image Generation', 'Multi-Agent', 'Academic Illustration']
|
/pdf/12568a349c67261d96ca8ec782fbbeb5ad7e7868.pdf
|
applications to computer vision, audio, language, and other modalities
| null |
['ICLR.cc/2026/Conference/Submission25589/Authors']
|
u6JLh0BO5h
| 25,587
|
u6JLh0BO5h
|
Jet Expansions: Restructuring LLM Computation for Model Inspection
|
Large language models are becoming general knowledge engines for diverse applications. However, their computations are deeply entangled after training, resisting modularization which complicates interpretability, auditing, and long-term maintenance. We introduce Jet Expansions, a framework for expanding computational graphs using jet operators that generalize truncated Taylor series. Our method systematically decomposes language models into explicit input-to-output computational paths and complementary remainders. This functional decomposition provides a principled, knife-like operator for cutting through entanglement in LLMs, enabling scalable model inspection. We demonstrate how Jet Expansions ground and subsume the popular interpretability technique Logit Lens, reveal a (super-)exponential path structure with respect to recursive residual depth, and support several interpretability applications, including sketching a transformer language model with $n$-gram statistics extracted from its computations and indexing model toxicity levels *without* curated benchmarks.
|
After training, LLM computations become deeply entangled. For interpretability, we introduce a knife-like operator that cuts through this entanglement, separating the part we care about from the remainder and enabling scalable model inspection.
|
['transformer', 'decomposition', 'interpretability', 'neural-symbolic', 'n-grams', 'XAI']
|
/pdf/8efb3befff58c10df3a363b56d398c90a6cb45f4.pdf
|
interpretability and explainable AI
|
/attachment/d1fcedecb740aa500679647c78d3ceb61fd9ba56.pdf
|
['ICLR.cc/2026/Conference/Submission25587/Authors']
|
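
A first-order jet is a truncated Taylor expansion, which torch.func.jvp computes exactly; the sketch below splits a block's output into an explicit linear path plus a remainder, illustrating (not reproducing) the Jet Expansions decomposition.

```python
import torch
from torch.func import jvp

net = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.GELU(),
                          torch.nn.Linear(8, 8))

def first_order_jet(f, x0, x):
    """1-jet of f at x0 evaluated at x: f(x0) + Jf(x0)(x - x0).
    The 'remainder' f(x) - jet is the part this explicit path does not explain."""
    y0, lin = jvp(f, (x0,), (x - x0,))
    return y0 + lin

x0, x = torch.zeros(8), 0.1 * torch.randn(8)
path = first_order_jet(net, x0, x)
remainder = net(x) - path
print(remainder.norm().item())   # small when f is near-linear around x0
```
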
5sEj8EL8J4
| 25,586
|
5sEj8EL8J4
|
Cross-Modal Syntax-NL Attention for Multi-Agent Reinforcement Learning in Collaborative Coding
|
We propose a new communication protocol for multi-agent reinforcement learning (MARL) in collaborative coding, where agents must coordinate using both structured code syntax and natural language (NL) messages. Conventional approaches treat these modalities separately, resulting in suboptimal alignment between communication and code semantics. The proposed method introduces a cross-modal attention framework that dynamically bridges abstract syntax trees (ASTs) of code and NL messages in a jointly learned embedding space. A graph neural network encodes the syntactic elements of code while a pretrained Transformer processes NL messages; the two are then aligned through weakly supervised contrastive learning that uses code execution outcomes as an implicit training signal, guiding the alignment without manual annotation. The framework also uses syntax-aware attention gates to select which message tokens are relevant to particular code nodes, resulting in more precise coordination during collaborative tasks.
| null |
['Syntax-NL Attention']
|
/pdf/8a71ac8f7fa5f22c51930fd557350c8b8411dd47.pdf
|
transfer learning, meta learning, and lifelong learning
| null |
['ICLR.cc/2026/Conference/Submission25586/Authors']
|
fjH9raahDC
| 25,585
|
fjH9raahDC
|
Less is More: Improving Molecular Force Fields with Minimal Temporal Information
|
Accurate prediction of energy and forces for 3D molecular systems is one of the fundamental challenges at the core of AI for Science applications. Many powerful and data-efficient neural networks predict molecular energies and forces from single atomic configurations. However, one crucial aspect of the data generation process is rarely considered when learning these models: the Molecular Dynamics (MD) simulation itself. MD generates trajectories of atomic positions as molecular systems move from higher-energy states to lower-energy stable/equilibrium states. This work explores a novel way to leverage MD data, when available, to improve the performance of such predictors. We introduce a novel auxiliary loss function, called FRAMES, that uses the temporal relationships within MD trajectories. Counter-intuitively, we demonstrate that minimal temporal information, captured by pairs of just two consecutive frames, is optimal for this task, while using longer trajectory sequences can introduce redundancy and degrade performance. The auxiliary loss operates on pairs of consecutive frames, encouraging the model to learn physically meaningful correspondences between configurations and forces. At test time, the model predicts energies/forces from the current configuration alone. On the widely used MD17 and ISO17 benchmarks, FRAMES significantly outperforms its Equiformer baseline, achieving highly competitive results in both energy and force accuracy. Our work not only presents a novel training strategy that improves model accuracy, but also provides evidence that, for distilling physical priors of atomic systems, more temporal data is not always better.
|
We show that using an auxiliary loss on just two consecutive molecular dynamics frames is an optimal and counter-intuitive strategy for significantly improving the accuracy of neural-network force fields.
|
['Molecular prediction', 'AI for Science', 'graph neural networks', 'computational physics', 'Temporal information']
|
/pdf/d47bbb0d1309a01ec8c285d3f309871166bc45b6.pdf
|
learning on graphs and other geometries & topologies
| null |
['ICLR.cc/2026/Conference/Submission25585/Authors']
|
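
The FRAMES abstract fixes the data unit (a pair of consecutive MD frames) but not the loss form; one physically motivated guess is to ask predicted forces to point along the observed per-atom displacement toward the next frame.

```python
import torch
import torch.nn.functional as F

def frames_aux_loss(pred_forces_t: torch.Tensor, pos_t: torch.Tensor,
                    pos_t1: torch.Tensor) -> torch.Tensor:
    """Auxiliary loss on a pair of consecutive MD frames (our guessed form).

    Encourages predicted forces at frame t to align with the observed per-atom
    displacement toward frame t+1, a crude proxy for descending the energy
    surface. The exact FRAMES objective is not given in the abstract."""
    disp = pos_t1 - pos_t                                   # (N_atoms, 3)
    return (1 - F.cosine_similarity(pred_forces_t, disp, dim=-1)).mean()

pos_t = torch.randn(16, 3)
forces = torch.randn(16, 3, requires_grad=True)
loss = frames_aux_loss(forces, pos_t, pos_t + 0.01 * forces.detach())
print(loss.item())   # near zero when forces and displacements already align
```
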
P8EhH6ypA5
| 25,582
|
P8EhH6ypA5
|
Silver Stepsize for Faster Zeroth-Order Optimization
|
We study gradient-free minimization of smooth convex functions through Silver stepsizes—a non-monotone, 2-adic schedule that accelerates gradient descent—and show how to compose it with two-point zeroth-order (ZO) estimators on a smoothed objective.
We apply Silver’s multi‑step Lyapunov analysis to smoothed objectives and show that it carries over verbatim when gradients are replaced by unbiased two‑point estimators with a tax in the form of a quadratic variance term.
We control this term via an orthogonal-on-spikes batching policy that allocates directions proportionally to the Silver steps (with a cap at dimension), achieving budget-optimal variance aggregation.
Empirically, we validate our approach through both numerical experiments and MeZO-style forward-pass-only fine-tuning of large language models, incorporating practical considerations such as clipping strategies, and demonstrate its superior performance.
| null |
['Zeroth-Order Optimization', 'Silver Stepsize', 'Gradient-Free']
|
/pdf/7d21d0aa5dab5415004d9ad097cad12d34bc893c.pdf
|
optimization
| null |
['ICLR.cc/2026/Conference/Submission25582/Authors']
|
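
The silver stepsize schedule has a closed 2-adic form (stepsize 1 + ρ^(ν₂(t)−1) with ρ = 1 + √2, up to indexing conventions), and the two-point estimator is standard; the step-proportional batching below is a crude stand-in for the paper's orthogonal-on-spikes policy.

```python
import numpy as np

RHO = 1 + np.sqrt(2)                          # silver ratio
rng = np.random.default_rng(0)

def silver_step(t: int) -> float:
    """Silver stepsize for iteration t >= 1: 1 + RHO**(nu2(t) - 1), where nu2
    is the 2-adic valuation of t. One published form; conventions vary."""
    nu = 0
    while t % 2 == 0:
        t //= 2
        nu += 1
    return 1 + RHO ** (nu - 1)

def two_point_grad(f, x, mu=1e-4, batch=8):
    """Two-point zeroth-order estimator averaged over `batch` Gaussian
    directions; averaging tames the quadratic variance term."""
    g = np.zeros_like(x)
    for _ in range(batch):
        u = rng.standard_normal(x.size)
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / batch

f = lambda x: 0.5 * x @ x                     # toy L-smooth convex objective, L = 1
x, L = np.ones(50), 1.0
for t in range(1, 32):
    h = silver_step(t)
    # crude stand-in for the paper's batching policy: spend more function
    # evaluations on the rare large ("spike") steps, capped near the dimension
    x -= (h / L) * two_point_grad(f, x, batch=min(50, int(np.ceil(4 * h))))
print(f"f(x0)=25.0  ->  f(x31)={f(x):.4f}")   # far below the starting value
```
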
rNdU8XkCsk
| 25,581
|
rNdU8XkCsk
|
Additive Coupling of Liquid Neural Networks and Modern Hopfield Layer for Regression
|
Regression tasks on complex datasets often involve diverse feature interactions, long-range dependencies, and structured patterns that must be recalled across examples for accurate prediction. Conventional models, such as MLPs, tree ensembles, or standard continuous-time networks, struggle to maintain accurate predictions and stability over extended horizons, especially when patterns must be reused. To address these challenges, we introduce a hybrid architecture that couples Liquid Neural Networks (LNNs) with Modern Hopfield Networks (MHNs) using additive fusion. The LNN component delivers input-adaptive continuous dynamics, while the associative memory enables retrieval and correction using previously encountered global structures. This biologically-inspired design preserves adaptability and stability, while leveraging memory-based recall for consistent predictions. On the OpenML-CTR23 regression benchmark, our approach consistently improved performance, with mean and median gains of 10.42\% and 5.37\%. These results demonstrate the effectiveness of integrating continuous dynamics and content-addressable memory for complex regression scenarios.
| null |
['liquid neural networks', 'modern hopfield network', 'biologically inspired neural models']
|
/pdf/83e06c9dd1ba7233e68a3ef422b5a36ef80da8b9.pdf
|
learning on time series and dynamical systems
| null |
['ICLR.cc/2026/Conference/Submission25581/Authors']
|
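
The additive fusion itself is simple to sketch: a liquid-style recurrent branch plus a modern-Hopfield retrieval over learned patterns, summed before the read-out. Both branches below are simplified stand-ins for the paper's exact components.

```python
import torch
import torch.nn as nn

class AdditiveLNNHopfield(nn.Module):
    """Additive coupling of a continuous-time branch and a Hopfield retrieval
    branch, as the abstract describes. The LTC-style update and the single-head
    retrieval are simplified assumptions, not the paper's architecture."""
    def __init__(self, d_in: int, d_hid: int, n_mem: int = 64, beta: float = 2.0):
        super().__init__()
        self.inp = nn.Linear(d_in, d_hid)
        self.tau = nn.Parameter(torch.ones(d_hid))              # learned time constants
        self.memory = nn.Parameter(torch.randn(n_mem, d_hid))   # stored patterns
        self.beta = beta
        self.head = nn.Linear(d_hid, 1)

    def forward(self, x: torch.Tensor, steps: int = 4, dt: float = 0.1):
        h = torch.zeros(x.size(0), self.inp.out_features)
        u = self.inp(x)
        for _ in range(steps):                                  # liquid-style dynamics
            h = h + dt * (-h / self.tau.abs().clamp(min=1e-2) + torch.tanh(u + h))
        attn = torch.softmax(self.beta * h @ self.memory.T, dim=-1)
        recall = attn @ self.memory                             # Hopfield retrieval
        return self.head(h + recall)                            # additive fusion

model = AdditiveLNNHopfield(d_in=10, d_hid=32)
print(model(torch.randn(8, 10)).shape)   # torch.Size([8, 1])
```
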
uq6UWRgzMr
| 25,580
|
uq6UWRgzMr
|
Neuron-Aware Data Selection in Instruction Tuning for Large Language Models
|
Instruction Tuning (IT) has been proven to be an effective approach to unlock the powerful capabilities of large language models (LLMs).
Recent studies indicate that excessive IT data can degrade LLM performance, while carefully selecting a small subset of high-quality IT data can significantly enhance model capabilities. Therefore, identifying the most effective subset of an IT dataset for developing either specific or general abilities in LLMs has become a critical challenge.
To address this, we propose a novel and efficient framework called Nait. Nait evaluates the impact of IT data on LLM performance by analyzing the similarity of neuron activation patterns between the IT dataset and the target domain capability. Specifically, Nait captures neuron activation patterns from in-domain datasets of target domain capabilities to construct reusable and transferable neuron activation features. It then evaluates and selects optimal samples based on the similarity between candidate samples and the expected activation features of the target capabilities.
Experimental results show that training on the 10\% Alpaca-GPT4 IT data subset selected by Nait consistently outperforms methods that rely on external advanced models or uncertainty-based features across various tasks. Our findings also reveal the transferability of neuron activation features across different capabilities of LLMs. In particular, IT data with more logical reasoning and programmatic features possesses strong general transferability, enabling models to develop stronger capabilities across multiple tasks, while a stable core subset of data is sufficient to consistently activate fundamental model capabilities and universally improve performance across diverse tasks.
|
NAIT is an efficient algorithm that selects high-quality instruction tuning data by analyzing neuron activation pattern similarity, enhancing large language models' performance and general capabilities.
|
['Instruction Tuning', 'Data Selection', 'Large Language Models']
|
/pdf/18bd38fd6481cddfc7387a35e40feda9a8a92462.pdf
|
interpretability and explainable AI
|
/attachment/2601be187f12cc1ece53c0f8c3f523866a3b2a69.zip
|
['ICLR.cc/2026/Conference/Submission25580/Authors']
|
dpHw6PFKio
| 25,579
|
dpHw6PFKio
|
GUIrilla: A Scalable Framework for Automated Desktop UI Exploration
|
Autonomous agents capable of operating complex graphical user interfaces (GUIs) have the potential to transform desktop automation. While recent advances in large language models (LLMs) have significantly improved UI understanding, navigating full-window, multi-application desktop environments remains a major challenge. Data availability is limited by costly manual annotation, closed-source datasets and surface-level synthetic pipelines.
We introduce GUIrilla, an automated scalable framework that systematically explores applications via native accessibility APIs to address the critical data collection challenge in GUI automation. Our framework focuses on macOS, an ecosystem with limited representation in current UI datasets, though many of its components are designed for broader cross-platform applicability. GUIrilla organizes discovered interface elements and crawler actions into hierarchical GUI graphs and employs specialized interaction handlers to achieve comprehensive application coverage.
Using the application graphs from GUIrilla crawler, we construct and release GUIrilla‑Task, a large-scale dataset of 27,171 functionally grounded tasks across 1,108 macOS applications, each annotated with full-desktop and window-level screenshots, accessibility metadata, and semantic action traces.
Empirical results show that tuning LLM-based agents on GUIrilla‑Task significantly improves performance on downstream UI tasks, outperforming synthetic baselines on the ScreenSpot Pro benchmark while using 97% less data. We also release macapptree, an open-source library for reproducible collection of structured accessibility metadata, along with the full GUIrilla‑Task dataset, the manually verified GUIrilla-Gold benchmark, and the framework code to support open research in desktop autonomy.
|
We present GUIrilla, an automated framework for macOS GUI exploration, and GUIrilla-Task, the first large-scale macOS dataset (27,171 tasks, 1,108 apps) pairing GUI screenshots with detailed accessibility metadata.
|
['Multimodal Autonomous Agents', 'GUI Automation', 'Task Collection Framework', 'macOS Task Benchmark', 'UI Relationship Graphs']
|
/pdf/f554c631613faba03725824ba5ef437a37bd4bab.pdf
|
datasets and benchmarks
| null |
['ICLR.cc/2026/Conference/Submission25579/Authors']
|
PLO1gjCMk5
| 25,577
|
PLO1gjCMk5
|
Diffusion-Advection Transformer for Air Quality Prediction
|
Air pollution is a major concern for public health and the environment globally, which highlights the need for effective monitoring and predictive modeling to mitigate its impact. Although data-driven models have shown promising results in air quality prediction, they still struggle to model the underlying physical mechanisms of pollutant dispersion, where diffusion governs small-scale spreading and advection drives large-scale directional transport. To address this limitation, we propose the Diffusion-Advection Transformer (DA-Transformer), a novel physics-informed architecture. Specifically, the model integrates the two key physical mechanisms by embedding diffusion and advection as differential equation-based components. These physics-informed modules are incorporated into a Transformer framework to enable the model to better capture pollutant transport dynamics, such as local diffusion-driven smoothing and wind-induced directional propagation in air quality data. Experiments on three real-world datasets demonstrate that DA-Transformer consistently outperforms baseline models in $\mathrm{PM}_{2.5}$ concentration prediction and achieves substantial gains over its variants that exclude diffusion and advection in their model design.
|
A physics-informed Transformer that learns temperature-conditioned diffusion and wind-driven advection to improve long-horizon PM2.5 forecasting across regions.
|
['Spatiotemporal Forecasting', 'Air Quality', 'Physics-informed Learning', 'Transformers', 'Diffusion and Advection']
|
/pdf/e159a98d7df32c21638799cb787ba60fd05cb848.pdf
|
learning on time series and dynamical systems
|
/attachment/127ec3d3ecc63d58a3d88e1354dd42e816827af4.zip
|
['ICLR.cc/2026/Conference/Submission25577/Authors']
|
lyxHZSCX6o
| 25,575
|
lyxHZSCX6o
|
Curricular Adversarial Training for Robust Code Generation via Hierarchical Reinforcement Learning
|
In this paper, we propose a novel approach to improve the robustness of code generation models through curricular adversarial training driven by hierarchical reinforcement learning. Existing code generation systems are easily broken by adversarial perturbations, so we propose a two-tiered approach in which a high-level curriculum policy adaptively adjusts the complexity of adversarial challenges while a low-level perturbation policy generates specific input modifications. The high-level policy progresses from simple to sophisticated perturbations based on model performance, ensuring gradual adaptation without overwhelming the generator.
| null |
['Robust Code Generation']
|
/pdf/4efcab801bf6d8cd92d19c3e6927809778a1f342.pdf
|
transfer learning, meta learning, and lifelong learning
| null |
['ICLR.cc/2026/Conference/Submission25575/Authors']
|
vpO8n9AqEG
| 25,573
|
vpO8n9AqEG
|
Quadratic Direct Forecast for Training Multi-Step Time-Series Forecast Models
|
The design of the training objective is central to training time-series forecasting models. Existing training objectives such as mean squared error mostly treat each future step as an independent, equally weighted task, which we find leads to two issues: (1) they overlook the *label autocorrelation effect* among future steps, leading to a biased training objective; (2) they fail to set *heterogeneous task weights* for the forecasting tasks corresponding to different future steps, limiting forecasting performance. To fill this gap, we propose a novel quadratic-form weighted training objective that addresses both issues simultaneously. Specifically, the off-diagonal elements of the weighting matrix account for the label autocorrelation effect, whereas the non-uniform diagonals are expected to match the most preferable weights of the forecasting tasks with varying future steps. To achieve this, we propose a Quadratic Direct Forecast (QDF) learning algorithm, which trains the forecast model using the adaptively updated quadratic-form weighting matrix. Experiments show that our QDF effectively improves the performance of various forecast models, achieving state-of-the-art results. Code is available at https://anonymous.4open.science/r/QDF-8937.
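The quadratic-form objective itself is straightforward to write down. Below is a minimal sketch, assuming the forecast error is a (batch, horizon) tensor and the weighting matrix is kept positive semidefinite via a factorized parameterization; the adaptive update of W is the paper's algorithm and is not reproduced here.

```python
import torch

def qdf_loss(pred: torch.Tensor, target: torch.Tensor, W: torch.Tensor) -> torch.Tensor:
    """Quadratic-form weighted loss: mean over the batch of e^T W e, where
    the diagonal of W sets per-step task weights and the off-diagonals
    capture label autocorrelation across future steps."""
    err = pred - target                                   # (batch, horizon)
    return torch.einsum('bi,ij,bj->b', err, W, err).mean()

# One way to keep W positive semidefinite (an illustrative choice):
horizon = 96
L_factor = torch.randn(horizon, horizon, requires_grad=True)
W = L_factor @ L_factor.T                                 # PSD by construction
```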
| null |
['Time-series', 'time-series forecast']
|
/pdf/f07cf480eac8f65bcda30bf9b63f312aabda2853.pdf
|
learning on time series and dynamical systems
| null |
['ICLR.cc/2026/Conference/Submission25573/Authors']
|
fOmX9aaQD3
| 25,572
|
fOmX9aaQD3
|
Triple-S: A Sticker Semantic Similarity Benchmark with General Sticker Encoder
|
Stickers have become a popular form of visual communication, yet understanding their semantic relationships remains challenging due to their highly diverse and symbolic content. In this work, we formally define the Sticker Semantic Similarity task and introduce Triple-S, the first benchmark for this task, consisting of 905 human-annotated positive and negative sticker pairs. Through extensive evaluation, we show that existing pretrained vision and multimodal models struggle to capture nuanced sticker semantics. To address this, we propose the General Sticker Encoder (GSE), a lightweight and versatile model that learns robust sticker embeddings using both Triple-S and additional datasets. GSE achieves superior performance on unseen stickers, and demonstrates strong results on downstream tasks such as emotion classification and sticker-to-sticker retrieval. By releasing both Triple-S and GSE, we provide standardized evaluation tools and robust embeddings, enabling future research in sticker understanding, retrieval, and multimodal content generation. The Triple-S benchmark and GSE have been publicly released and are available here \footnote{https://anonymous.4open.science/r/triple-s-6E65/}
| null |
['dataset', 'benchmark', 'sticker', 'sticker semantic', 'general sticker encoder']
|
/pdf/f05b078b8a49011f2fccd38e4ea2992d12ebfb96.pdf
|
datasets and benchmarks
| null |
['ICLR.cc/2026/Conference/Submission25572/Authors']
|
ogKDAjoyy8
| 25,570
|
ogKDAjoyy8
|
Unsupervised Dynamic Graph Multi-Model Representation Learning for Temporal Patterns Discovery: Uncovering Parkinson’s Disease Stages Using Cerebrospinal Fluid Longitudinal Profiles
|
Existing dynamic graph learning methods typically encode node features at each time step by leveraging local (spatial/structural) and/or short-range temporal dependencies. In contrast, we propose a novel multi-model framework that generates a representation for each node at every graph snapshot, where each representation encodes the node’s temporal trajectory across the full sequence while preserving its spatial context within that specific time step. When clustered, these representations reveal meaningful temporal pattern groups in longitudinal datasets. This approach was evaluated in the context of Parkinson’s disease (PD), a degenerative disorder that progresses in distinct clinical stages. To demonstrate this, we structured six years of longitudinal cerebrospinal fluid (CSF) records from 24 patients with PD into age-based graphs, where clinical visit records corresponding to patients of the same age are hosted on the graph for that age. In these graphs, nodes represent individual patients indexed by unique identifiers and are enriched with CSF peptide abundance features. Edges are established between patient nodes based on the similarity of their peptide expression patterns. For each patient node, a one-layer Graph Convolutional Network (GCN) was employed to encode inter-patient relationships within each age-specific graph. The resulting spatial representations across all time points for each node were then fed into a sequential model to learn a unified spatial-temporal feature representation for every patient. To represent a patient’s features at each age, the unified embedding was combined with the age-specific GCN representation through a fusion block - composed of linear transformations, nonlinear activations, and normalization layers - producing rich, locally informed spatial embeddings that are further enhanced by the global context of inter- and intra-related node patterns. K-means++ clustering of the multi-model representations identified four distinct disease stages, supported by strong cluster validity metrics (Davies-Bouldin Index = 0.169, Calinski-Harabasz Index = 1264.24). Statistical analysis with the Kruskal-Wallis test revealed significant differences in motor scores (UPDRS_2 and UPDRS_3; p < 0.05) across clusters, with Dunn’s test further identifying which clusters differed significantly. Unlike the motor scores, where most patient profiles apparently clustered into two groups, the non-motor scores (UPDRS_1) were distributed across three clusters but did not show significant differences (p = 0.11). The learned embeddings revealed well-separated clinical motor profiles, outperforming PCA, autoencoders, GCN, T-GCN, and GC-LSTM representations in capturing clinically relevant dimensions of disease severity. With further optimization and validation, this framework could aid in staging and understanding neurodegenerative diseases, and it generalizes to other longitudinal pattern discovery tasks.
|
We created a multi-model graph learning method that integrates node representations across graph snapshots, capturing temporal trajectories and spatial context.
|
['Learning Representation', 'Dynamic Graphs', 'Parkinson’s Disease', 'deep learning', 'unsupervised learning']
|
/pdf/60ad8270ea31ff16a1727f611b05c327014ea5cf.pdf
|
learning on graphs and other geometries & topologies
| null |
['ICLR.cc/2026/Conference/Submission25570/Authors']
|
3AoeNlw5MF
| 25,569
|
3AoeNlw5MF
|
D-MOE-EVAL: A Dynamic Mixture Of Experts Framework For Human-Aligned Nuanced Large Language Model Evaluation
|
The growing paradigm of using Large Language Models (LLMs) as evaluators, known as LLM-as-a-Judge, offers significant scalability for automated assessment. However, this approach suffers from several limitations. The differing architectures and training of LLMs lead them to develop varied expertise, making any single monolithic agent prone to bias and limited in adaptability across different reasoning scenarios. This inherent bottleneck leads to measurement imbalance across evaluation criteria and an over-prioritization of narrow technical correctness at the expense of diverse human-centered dimensions. To address these challenges, this paper presents a scenario-aware multi-dimensional evaluation framework that operationalizes a Mixture-of-Experts (MoE) architecture. The framework features instance-level scenario classification, dynamically mapping inputs to the most appropriate evaluation context, with each scenario linked to its own tailored set of evaluation dimensions. The dimension experts are specialized LLMs, dynamically selected after validation on a multi-dimensional dataset to systematically profile and identify their strengths across specified dimensions. This adaptive routing ensures that each instance receives a contextually relevant assessment across multiple complementary dimensions simultaneously. The expert evaluations are synthesized by a "Panel of Judges" as a deliberation layer, with multiple agents in structured debate to reconcile discrepancies and ensure fairness and logical consistency in the final judgments. The results of this study, evaluated over the MDEval and LLMBar benchmarks, demonstrate the proposed framework’s superior performance over existing baselines across diverse tasks, showcasing the robustness, versatility, and generalizability of a Mixture-of-Experts approach for context-aware LLM evaluation.
|
This paper proposes a scenario-aware, multi-dimensional LLM evaluation framework that uses an MoE approach across multiple domains, profiles dimension-specific experts, and deliberates through a Panel of Judges to ensure human-aligned, nuanced evaluation.
|
['Large Language Models', 'Fine Grained Evaluation', 'Multi-Dimensional Evaluation', 'Mixture of Experts', 'Scenario Aware Evaluation']
|
/pdf/dce8069e973deb65a4b5cf1276bbc2a5d5938dac.pdf
|
datasets and benchmarks
|
/attachment/271e70f1decb97e57b9ae42d23e1999a6d9da4e7.zip
|
['ICLR.cc/2026/Conference/Submission25569/Authors']
|
2ePvhEKxQj
| 25,568
|
2ePvhEKxQj
|
Causal Reasoning Favors Encoders: Limits of Decoder-Only Models
|
In-context learning (ICL) underpins recent advances in large language models (LLMs), yet its role in causal reasoning remains unclear. Causal reasoning demands multi-hop composition and strict conjunctive control, and reliance on spurious lexical relations of the input could provide misleading results.
We hypothesize that, owing to their ability to project the input into a latent space, encoder and encoder–decoder architectures are better suited to such multi-hop conjunctive reasoning than decoder-only models.
To test this hypothesis, we compare fine-tuned versions of all the aforementioned architectures with zero- and few-shot ICL in both natural-language and non-natural-language scenarios. We find that ICL alone is insufficient for reliable causal reasoning, often overfocusing on irrelevant input features.
In particular, decoder-only models are noticeably brittle to distributional shifts, while fine-tuned encoder and encoder–decoder models generalize more robustly across our tests, including the non-natural-language split.
Both architectures are matched or surpassed by decoder-only architectures only at large scales.
We conclude that for cost-effective, short-horizon robust causal reasoning, encoder or encoder-decoder architectures with targeted fine-tuning are preferable.
| null |
['Causal Reasoning', 'LLM', 'In-Context Learning']
|
/pdf/8ed3e4c965800fa0963107e1926208b7053d565e.pdf
|
causal reasoning
| null |
['ICLR.cc/2026/Conference/Submission25568/Authors']
|
ax6oQWQmeR
| 25,567
|
ax6oQWQmeR
|
Hierarchies over Pixels: A Benchmark for Cognitive Geospatial Reasoning for Agents
|
Beyond perception, reasoning is crucial in remote sensing, enabling advanced interpretation, inference, and decision-making. Recent advances in large language models (LLMs) have given rise to tool-augmented agents that enhance reasoning by leveraging external tools for complex analytical tasks. However, existing research on these agents in remote sensing largely focuses on perception-oriented tasks, with cognitive geospatial reasoning remaining underexplored. In this work, we systematically evaluate the geospatial reasoning capabilities of LLM-powered tool-augmented agents. To this end, we introduce GeoHOP, a benchmark for hierarchical geospatial reasoning. GeoHOP comprises 417 scenario-driven, hierarchy-aware tasks—such as hazard vulnerability assessment, urban heat island analysis, and forest fragmentation dynamics—spanning optical, Synthetic Aperture Radar (SAR), and infrared (IR) imagery. GeoHOP advances evaluation beyond monitoring-based recognition to cognitive-level geospatial analysis. Building upon GeoHOP, we propose GeoPlanner, an agent powered by LLMs that organizes 5 toolkits into functional hierarchies and executes fault-tolerant reasoning pipelines. GeoPlanner enables structured abstraction, robust recovery from tool failures, and stable long-horizon planning. Extensive experiments across diverse geospatial reasoning tasks demonstrate that GeoPlanner excels in hierarchical reasoning, cross-modal transfer, and error handling.
|
We introduce GeoHOP, a benchmark for hierarchical geospatial reasoning in remote sensing, and GeoPlanner, a tool-augmented LLM agent that excels in structured, fault-tolerant geospatial analysis.
|
['Benchmark evaluation', 'Remote sensing imagery', 'Tool-augmented LLMs']
|
/pdf/a38d7f31a32b845fd62e962f10c15bd934ea0ce2.pdf
|
datasets and benchmarks
| null |
['ICLR.cc/2026/Conference/Submission25567/Authors']
|
3KqASrNJGK
| 25,566
|
3KqASrNJGK
|
But what is your honest answer? Aiding LLM-judges with honest alternatives using steering vectors
|
Detecting subtle forms of dishonesty like sycophancy and manipulation in Large Language Models (LLMs) remains challenging for both humans and automated evaluators, as these behaviors often appear through small biases rather than clear false statements. We introduce Judge Using Safety-Steered Alternatives (JUSSA), a novel framework that employs steering vectors not to improve model behavior directly, but to enhance LLM judges' evaluation capabilities. JUSSA applies steering vectors during inference to generate more honest alternatives, providing judges with contrastive examples that make subtle dishonest patterns easier to detect. While existing evaluation methods rely on black-box evaluation, JUSSA uses model internals to create targeted comparisons from single examples. We evaluate our method on sycophancy detection and introduce a new manipulation dataset covering multiple types of manipulation. Our results demonstrate that JUSSA effectively improves detection accuracy over single-response evaluation in various cases. Analysis across judge models reveals that JUSSA helps weaker judges on easier dishonesty detection tasks, and stronger judges on harder tasks. Layer-wise experiments show how dishonest prompts cause representations to diverge from honest ones in middle layers, revealing where steering interventions are most effective for generating contrastive examples. By demonstrating that steering vectors can enhance safety evaluation rather than just modify behavior, our work opens new directions for scalable model auditing as systems become increasingly sophisticated.
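To make the mechanism concrete, here is a minimal sketch of applying a steering vector at inference time via a PyTorch forward hook. The layer index, module attribute path, and scaling factor are illustrative assumptions; how the honesty direction itself is computed is part of the method and not shown here.

```python
import torch

def add_steering_hook(layer_module, steering_vec: torch.Tensor, alpha: float = 1.0):
    """Shift a transformer layer's hidden states along a precomputed
    'honesty' direction during the forward pass."""
    def hook(_module, _inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden + alpha * steering_vec.to(hidden.device, hidden.dtype)
        return (hidden,) + output[1:] if isinstance(output, tuple) else hidden
    return layer_module.register_forward_hook(hook)

# Usage sketch (hypothetical layer path): steer, generate the honest
# alternative for the judge to compare against, then remove the hook.
# handle = add_steering_hook(model.model.layers[14], honesty_vec, alpha=4.0)
# alternative = model.generate(**inputs)
# handle.remove()
```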
|
We use steering vectors to obtain alternative, honest responses, helping external LLM-judges detect subtle instances of dishonest or manipulative behavior.
|
['LLM-as-a-judge', 'steering vectors', 'safety', 'manipulation']
|
/pdf/2a65cb9fcf2da8afeaac51846dbfb003661d61b9.pdf
|
alignment, fairness, safety, privacy, and societal considerations
| null |
['ICLR.cc/2026/Conference/Submission25566/Authors']
|
eIlgfA962J
| 25,565
|
eIlgfA962J
|
LaMbDA: Local Latent Embedding Alignment for Cross-modal Time-Series Diffusion
|
We present a mutually aligned diffusion framework for cross‑modal time‑series generation that treats paired modalities X and Y as complementary observations of a shared latent dynamical process and couples their denoising trajectories through stepwise alignment of local latent embeddings. We instantiate this as LaMbDA (Local latent eMbedDing Alignment), a lightweight objective that enforces phase consistency by encouraging local latent neighborhoods of X and Y to inhabit a shared local manifold. LaMbDA augments the diffusion loss by incorporating first-order sequence-contrastive and second-order covariance alignment terms across modalities at matched timesteps. Aligning their local embeddings allows each modality to help denoise the other and resolve ambiguities throughout the reverse process. Human biomechanics provides a strong testbed for this approach: paired, synchronized measurements (e.g., joint kinematics and ground‑reaction forces) capture the same movement state while reflecting practical constraints such as sensor dropout and cost. We evaluate LaMbDA extensively on biomechanical data and complement this with controlled studies on canonical synthetic dynamical systems (Lorenz attractor; double pendulum in non‑chaotic and chaotic regimes) to probe generality under varying dynamical complexity. Across all these settings, aligning local latent statistics consistently improves generation fidelity, phase coherence, and representation quality for downstream probes, without architectural changes or inference overhead.
| null |
['Cross-modal diffusion', 'multimodal time-series generation', 'local latent alignment']
|
/pdf/8cdddd6e50a81b50b7587d7069cb8116486326d1.pdf
|
learning on time series and dynamical systems
| null |
['ICLR.cc/2026/Conference/Submission25565/Authors']
|
Q40JCBKW1q
| 25,563
|
Q40JCBKW1q
|
Curriculum-Based Termination Critic for Scalable Program Decomposition in Hierarchical Reinforcement Learning
|
We introduce a Curriculum-Based Termination Critic (CBTC) for hierarchical reinforcement learning (HRL) that addresses the problem of program decomposition for scalable programming in complex task environments. Traditional termination critics rely on static heuristics that struggle to cope with tasks of varying complexity and prevent the agent from learning the right hierarchical abstractions effectively. CBTC provides a dynamic curriculum-driven framework that selects task difficulty on the fly and incrementally adjusts it according to the agent's learning progress, making the decomposition of programs into manageable subtasks more efficient. Our strategy combines three components: a difficulty-progression module that autonomously adjusts task complexity, a reward-based termination critic that stabilizes decisions about subtask completion, and an option-critic hybrid controller that orchestrates switching between decomposition strategies. The termination critic uses a transformer-based framework to operate on program states and the curriculum descriptor, while the high-level policy uses graph neural networks to reason over abstract syntax trees. Experiments show that CBTC outperforms traditional HRL techniques in both success rate and time efficiency, especially when the programs to be synthesized contain many stages. The proposed approach is entirely differentiable, compatible with existing HRL architectures, and offers a principled answer to scaling program decomposition in real-world applications.
| null |
['Hierarchical Reinforcement Learning']
|
/pdf/54ae6daa233231f1215b7aed36aa928be86ecc13.pdf
|
transfer learning, meta learning, and lifelong learning
| null |
['ICLR.cc/2026/Conference/Submission25563/Authors']
|
bkhsrCOZTu
| 25,560
|
bkhsrCOZTu
|
Riemannian Geometry: Speech Detection from MEG Brain Signals Towards Non-Invasive BCI
|
Non-invasive brain--computer interfaces (BCIs) need fast, reliable speech vs.\ non-speech detection from neural time series. We propose a hybrid MEG decoder that fuses a compact temporal CNN with a geometry-aware covariance branch operating on symmetric positive-definite (SPD) sensor--sensor matrices. The CNN is stabilized by three pragmatic choices: a temporal-lobe sensor subset (auditory cortex) to improve signal-to-noise and efficiency, silence-aware sampling to mitigate class imbalance, and smoothed BCE with positive-class weighting for calibrated decisions. In parallel, each $1$-s window yields a shrinkage covariance projected to a Riemannian tangent space and classified by a linear model; we late-fuse CNN and covariance probabilities and select the operating threshold to maximize $F_1$-macro. This design is motivated by (i) the neurobiology of speech processing in superior temporal gyrus (onset/sustained responses; envelope entrainment in delta--theta bands) and (ii) extensive evidence that Riemannian treatment of SPD covariances improves neural decoding robustness and transfer. On a large within-subject MEG corpus (250~Hz; $\sim$1~s windows), the baseline scores 0.4985 $F_1$-macro; our CNN (+3 stabilizers) reaches 0.88773; the full hybrid attains 0.91023, adding accuracy with negligible training cost and low-latency inference. These results align with MEG's strengths for millisecond-scale cognitive dynamics and with best practices in representation learning targeted by ICLR.
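A minimal sketch of the covariance branch, assuming each 1-s window is an (n_times, n_sensors) array: a Ledoit-Wolf shrinkage covariance is projected to the tangent space at a reference point and fed to a linear classifier. Using the arithmetic mean as the reference is a simplification for illustration; Riemannian pipelines typically use the geometric mean.

```python
import numpy as np
from scipy.linalg import inv, logm, sqrtm
from sklearn.covariance import LedoitWolf
from sklearn.linear_model import LogisticRegression

def window_to_spd(window: np.ndarray) -> np.ndarray:
    """(n_times, n_sensors) window -> shrinkage SPD covariance."""
    return LedoitWolf().fit(window).covariance_

def tangent_features(covs: np.ndarray, ref: np.ndarray) -> np.ndarray:
    """Tangent-space projection S = logm(ref^{-1/2} C ref^{-1/2});
    the vectorized upper triangle is the feature vector."""
    ref_isqrt = inv(np.real(sqrtm(ref)))
    iu = np.triu_indices(ref.shape[0])
    return np.stack([np.real(logm(ref_isqrt @ C @ ref_isqrt))[iu] for C in covs])

# covs = np.stack([window_to_spd(w) for w in windows])
# ref = covs.mean(axis=0)   # arithmetic-mean reference (a simplification)
# clf = LogisticRegression().fit(tangent_features(covs, ref), labels)
# The fused detector then averages this branch's probabilities with the CNN's.
```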
|
Riemannian covariance modeling (SPD) fused with a compact CNN lifts non-invasive MEG speech detection at negligible overhead.
|
['Brain-Computer Interface (BCI)', 'Magnetoencephalography (MEG)', 'Riemannian geometry', 'SPD covariance', 'Speech detection']
|
/pdf/82cc5de69ce3e378fac7f4322b4dbbbbf0dcdddc.pdf
|
applications to neuroscience & cognitive science
| null |
['ICLR.cc/2026/Conference/Submission25560/Authors']
|
r7OlaSw8xb
| 25,559
|
r7OlaSw8xb
|
MCCE: A Framework for Multi-LLM Collaborative Co-Evolution
|
Multi-objective discrete optimization problems, such as molecular design, pose significant challenges due to their vast and unstructured combinatorial spaces. Traditional evolutionary algorithms often get trapped in local optima, while expert knowledge can provide crucial guidance for accelerating convergence. Large language models (LLMs) offer powerful priors and reasoning ability, making them natural optimizers when expert knowledge matters. However, closed-source LLMs, though strong in exploration, cannot update their parameters and thus cannot internalize experience. Conversely, smaller open models can be continually fine-tuned but lack broad knowledge and reasoning strength. We introduce Multi-LLM Collaborative Co-evolution (MCCE), a hybrid framework that unites a frozen closed-source LLM with a lightweight trainable model. The system maintains a trajectory memory of past search processes; the small model is progressively refined via reinforcement learning, with the two models jointly supporting and complementing each other in global exploration. Unlike model distillation, this process enhances the capabilities of both models through mutual inspiration. Experiments on multi-objective drug design benchmarks show that MCCE achieves state-of-the-art Pareto front quality and consistently outperforms baselines. These results highlight a new paradigm for enabling continual evolution in hybrid LLM systems, combining knowledge-driven exploration with experience-driven learning.
|
MCCE is a framework for collaboration of large and small language models, combining knowledge-driven exploration with experience-driven learning
|
['reinforcement learning', 'large language models', 'model collaboration', 'evolutionary algorithms']
|
/pdf/0521c6df0acbd41d9c7c79c8855881f06af7d9fb.pdf
|
transfer learning, meta learning, and lifelong learning
| null |
['ICLR.cc/2026/Conference/Submission25559/Authors']
|
uuCQJtKMqS
| 25,556
|
uuCQJtKMqS
|
AlienLM: Alienization of Language for Privacy-Preserving API Interaction with LLMs
|
We introduce $\textbf{\textit{AlienLM}}$, a framework that reinterprets encryption as language translation for large language models accessed exclusively through black-box APIs. Existing approaches based on secure inference or differential privacy and federated learning offer limited protection in API-only scenarios. $\textbf{\textit{AlienLM}}$ constructs an Alien Language through a vocabulary-level bijection and employs API-only fine-tuning, thereby ensuring compatibility with commercial black-box services while requiring no access to model internals. Across four LLMs and seven benchmarks, $\textbf{\textit{AlienLM}}$ preserves more than 81\% of the original performance, substantially surpasses substitution- and obfuscation-based baselines, and exhibits strong robustness against token-mapping and frequency-analysis attacks. $\textbf{\textit{AlienLM}}$ provides a deployable, low-overhead mechanism for safeguarding sensitive data in API-mediated applications such as healthcare, finance, and education. More broadly, our findings reveal a practical separation between linguistic representation and task competence, thereby motivating future work on composable privacy-preserving layers and formal characterizations of the learnability–opacity trade-off.
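The vocabulary-level bijection at the core of this scheme can be sketched in a few lines. The snippet below is a minimal illustration, assuming a plain seed-keyed permutation of token ids; the actual construction may constrain the mapping, e.g., to preserve special tokens.

```python
import random

def build_bijection(vocab_size: int, seed: int) -> list:
    """A secret, seed-keyed permutation over token ids: remapping every
    token through it yields the 'Alien Language'."""
    rng = random.Random(seed)
    mapping = list(range(vocab_size))
    rng.shuffle(mapping)
    return mapping

def encode(token_ids, mapping):
    return [mapping[t] for t in token_ids]

def decode(token_ids, mapping):
    inverse = [0] * len(mapping)
    for src, dst in enumerate(mapping):
        inverse[dst] = src
    return [inverse[t] for t in token_ids]

# Plaintext token ids never leave the client; the black-box model is
# fine-tuned through its API to operate directly on encoded sequences.
```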
| null |
['Encryption', 'Obfuscation', 'LLMs']
|
/pdf/0ba8aa4c34e3c77d90efc9a0bcf572ddb3a8dfec.pdf
|
alignment, fairness, safety, privacy, and societal considerations
|
/attachment/1cbd73a2f5c259b75a14027365f72be5c57aa2e6.zip
|
['ICLR.cc/2026/Conference/Submission25556/Authors']
|
ozzMu93fxx
| 25,555
|
ozzMu93fxx
|
HTR for Russian Empire Period Manuscripts: A Two-Stage Framework with New Annotated Resources
|
Historical handwritten documents represent a valuable source of information about the language, culture, and society of earlier periods. In the context of globalized scholarship, the development of automatic handwriting recognition tools for a wide range of languages has become increasingly important to ensure broader accessibility to the cultural heritage of different nations. Pre-revolutionary Russian presents a particular challenge for such systems due to its significant orthographic differences from the modern language. This work introduces a universal tool for recognizing handwritten documents written in pre-revolutionary Russian orthography, dated from the $19^{\mathrm{th}}$ century to the early $20^{\mathrm{th}}$ century. We present a two-stage handwritten text recognition (HTR) system combining YOLOv8-based line segmentation with TrOCR$_{pre}$, a transformer architecture pre-trained on Russian-language data. The system is evaluated on a manually annotated corpus of $38,501$ lines across three document types: Gubernatorial Reports ($31,083$ lines), Statutory Charters ($5,868$ lines), and Personal Diaries ($1,550$ lines), split into training, validation, and test sets. Our approach achieves a character error rate (CER) of $8.5$% and a word error rate (WER) of $29.1$% overall, with performance varying by document type - ranging from $4.8$% CER on formal administrative documents to $19.0$% CER on informal personal writings. The transformer-based architecture demonstrates a $53.8$% relative improvement over traditional CNN-RNN baselines (from $18.4$% to $8.5$% CER), providing a practical tool for large-scale digitization of historical Russian archives.
|
First general HTR for pre-reform Russian handwriting (pre-1918): a two-stage YOLOv8 line segmenter + TrOCR recognizer that outperforms general-purpose HTR baselines on Imperial-era manuscripts.
|
['Handwritten Text Recognition', 'Low-Resource Languages', 'Historical Documents']
|
/pdf/8fc2738629b918329dae9d7765f7d31e9cdb2dc7.pdf
|
applications to computer vision, audio, language, and other modalities
| null |
['ICLR.cc/2026/Conference/Submission25555/Authors']
|
OqZFfDks0Q
| 25,554
|
OqZFfDks0Q
|
Efficient High-Resolution Image Editing with Hallucination-Aware Loss and Adaptive Tiling
|
High-resolution (4K) image-to-image synthesis has become increasingly important for mobile applications. Existing diffusion models for image editing face significant challenges, in terms of memory and image quality, when deployed on resource-constrained devices. In this paper, we present MobilePicasso, a novel system that enables on-device image editing at high resolutions while minimising computational cost and memory usage. MobilePicasso comprises three stages: (i) performing image editing at a standard resolution with a hallucination-aware loss, (ii) applying latent projection to avoid a costly round-trip through pixel space, and (iii) upscaling the edited image latent to a higher resolution with adaptive context-preserving tiling. Our user study with 46 participants reveals that MobilePicasso not only improves image quality by 18-48% but also reduces hallucinations by 14-51% over existing methods. MobilePicasso demonstrates significantly lower latency, e.g., up to 55.8x speed-up, with only a small increase in runtime memory, e.g., a mere 9% increase over prior work. Surprisingly, the on-device runtime of MobilePicasso is observed to be faster than a server-based high-resolution image editing model running on an A100 GPU.
|
Resource-efficient on-device, high-resolution (4K) image editing with improved image quality
|
['On-device ML', 'Image Editing', 'Diffusion Modeling']
|
/pdf/c3cba7a6aa09366ebb5df1c257d5e12dbc2386c3.pdf
|
infrastructure, software libraries, hardware, systems, etc.
| null |
['ICLR.cc/2026/Conference/Submission25554/Authors']
|
BeMtzSH1d7
| 25,553
|
BeMtzSH1d7
|
Submodular Function Minimization with Dueling Oracle
|
We consider submodular function minimization using a \textit{dueling oracle}, a noisy pairwise comparison oracle that provides relative feedback on function values between two queried sets. The oracle's responses are governed by a \textit{transfer function}, which characterizes the relationship between differences in function values and the parameters of the response distribution. For a \textit{linear} transfer function, we propose an algorithm that achieves an error rate of $O(n^{\frac{3}{2}}/\sqrt{T})$, where $n$ is the size of the ground set and $T$ denotes the number of oracle calls. We establish a lower bound: Under the constraint that differences between queried sets are bounded by a constant, any algorithm incurs an error of at least $\Omega(n^{\frac{3}{2}}/\sqrt{T})$. Without such a constraint, the lower bound becomes $\Omega(n/\sqrt{T})$. These results show that our algorithm is optimal up to constant factors for constrained algorithms. For a \textit{sigmoid} transfer function, we design an algorithm with an error rate of $O(n^{\frac{7}{5}}/T^{\frac{2}{5}})$, and establish lower bounds analogous to the linear case.
|
We study submodular minimization with a dueling oracle giving noisy pairwise feedback.
|
['submodular minimization', 'dueling oracle', 'preference-based optimization']
|
/pdf/4ee4e028488c1abefefeb85b43c71ac9871634f2.pdf
|
optimization
|
/attachment/c3d2da960c02289167a102dd67fcf63442a64f3e.zip
|
['ICLR.cc/2026/Conference/Submission25553/Authors']
|
NvKvW5k6Kk
| 25,552
|
NvKvW5k6Kk
|
Improving Semantic Proximity in English-Centric Information Retrieval through Cross-Lingual Alignment
|
With the increasing accessibility and utilization of multilingual documents, Cross-Lingual Information Retrieval (CLIR) has emerged as an important research area. Conventionally, CLIR tasks have been conducted under settings where the language of documents differs from that of queries, and typically, the documents are composed in a single coherent language. In this paper, we highlight that in such a setting, the cross-lingual alignment capability may not be evaluated adequately. Specifically, we observe that, in a document pool where English documents coexist with another language, most multilingual retrievers tend to prioritize unrelated English documents over the related document written in the same language as the query. To rigorously analyze and quantify this phenomenon, we introduce various scenarios and metrics designed to evaluate the cross-lingual alignment performance of multilingual retrieval models. Furthermore, to improve cross-lingual performance under these challenging conditions, we propose a novel training strategy aimed at enhancing cross-lingual alignment. Using only a small dataset consisting of 2.8k samples, our method significantly improves the cross-lingual retrieval performance while simultaneously mitigating the English inclination problem. Extensive analyses demonstrate that the proposed method substantially enhances the cross-lingual alignment capabilities of most multilingual embedding models.
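The accompanying TLDR describes the training strategy as combining JSD and InfoNCE losses; a minimal sketch of one such combination follows. Which similarity distributions the JSD term aligns across languages is an assumption here, not a detail taken from the abstract.

```python
import torch
import torch.nn.functional as F

def info_nce(q: torch.Tensor, d: torch.Tensor, tau: float = 0.05) -> torch.Tensor:
    """In-batch InfoNCE: the i-th query's positive is the i-th document."""
    logits = F.normalize(q, dim=-1) @ F.normalize(d, dim=-1).T / tau
    return F.cross_entropy(logits, torch.arange(q.size(0), device=q.device))

def jsd(p_logits: torch.Tensor, q_logits: torch.Tensor) -> torch.Tensor:
    """Jensen-Shannon divergence between two score distributions, e.g. one
    query's similarities over a mixed-language document pool computed from
    two query languages (an illustrative pairing)."""
    p, q = F.softmax(p_logits, dim=-1), F.softmax(q_logits, dim=-1)
    m = 0.5 * (p + q)
    def kl(a, b):
        return (a * (a.clamp_min(1e-8).log() - b.clamp_min(1e-8).log())).sum(-1)
    return 0.5 * (kl(p, m) + kl(q, m)).mean()

# total_loss = info_nce(q_emb, d_emb) + lam * jsd(sim_lang_a, sim_lang_b)
```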
|
This paper identifies multilingual embedding gaps in cross-lingual retrieval, proposes scenario and Max@R metric, and introduces a training strategy combining JSD and InfoNCE loss, significantly improving cross-lingual alignment with minimal data.
|
['Cross-Lingual Alignment', 'Information Retrieval', 'Multilingual Embedding', 'Cross-Lingual Information Retrieval']
|
/pdf/a02978c72883874e1d3a1240902f7a15bede4dff.pdf
|
unsupervised, self-supervised, semi-supervised, and supervised representation learning
| null |
['ICLR.cc/2026/Conference/Submission25552/Authors']
|
4Nsx2kZkex
| 25,549
|
4Nsx2kZkex
|
Differentiable Verification for Safe Reinforcement Learning in Verifiable Code Synthesis
|
We propose a novel framework for safe reinforcement learning (RL) in verifiable code synthesis, in which formal verification constraints are integrated as differentiable components of the policy optimization loop. Traditional approaches treat verification as a post-hoc filter or a black-box reward signal, which often results in inefficiencies and mismatches between the generated code and its safety guarantees. The proposed method adds a differentiable verification layer that mimics formal verification steps using smoothed surrogate functions, enabling gradient-based improvement of both code generation and safety specifications. This layer computes soft satisfaction scores for safety properties, which are then combined with task-completion rewards to optimize the RL policy.
| null |
['Code Synthesis']
|
/pdf/b05e3fe0c675f1766a26d8613bf42a3874c05e20.pdf
|
transfer learning, meta learning, and lifelong learning
| null |
['ICLR.cc/2026/Conference/Submission25549/Authors']
|
HuuCWjlJuQ
| 25,544
|
HuuCWjlJuQ
|
Dissecting Mahalanobis: How Feature Geometry and Normalization Shape OOD Detection
|
Out-of-distribution (OOD) detection is critical for the reliable deployment and better understanding of deep learning models. To address this challenge, various methods relying on Mahalanobis distance have been proposed and widely employed. However, the impact of representation geometry and feature normalization on the OOD performance of Mahalanobis-based methods is still not fully understood, which may limit their downstream application. To address this gap, we conducted a comprehensive empirical study across diverse image foundation models, datasets, and distance normalization schemes. First, our analysis shows that Mahalanobis-based methods are not universally reliable. Second, we define the ideal geometry for data representations and demonstrate that spectral and intrinsic-dimensionality metrics can accurately predict a model's OOD performance. Finally, we analyze how normalization impacts OOD performance. Building upon these studies, we propose a conformal generalization of the recently proposed $\ell_2$ normalization that allows controlling the degree of radial expansion of the representation geometry, which in turn helps improve OOD detection. By bridging the gap between representation geometry, normalization, and OOD performance, our findings offer new insights into the design of more effective and reliable deep learning models.
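For concreteness, a minimal sketch of the Mahalanobis OOD score with a radial-exponent normalization is given below. The `radial_normalize` map is only one plausible reading of a "conformal generalization of $\ell_2$ normalization" (alpha=1 recovers plain $\ell_2$ normalization, alpha=0 is the identity); the paper's exact formulation may differ.

```python
import numpy as np

def radial_normalize(z: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Scale each feature vector by ||z||^-alpha: alpha=1 is l2
    normalization, alpha=0 leaves the geometry untouched, and values in
    between control the degree of radial expansion (an assumption)."""
    norms = np.linalg.norm(z, axis=-1, keepdims=True)
    return z / np.clip(norms, 1e-12, None) ** alpha

def mahalanobis_score(z: np.ndarray, mu: np.ndarray, cov_inv: np.ndarray) -> np.ndarray:
    """Negative squared Mahalanobis distance to an in-distribution mean;
    higher means more in-distribution."""
    diff = z - mu
    return -np.einsum('nd,de,ne->n', diff, cov_inv, diff)

# Fit mu and cov on (optionally normalized) ID features, then threshold
# mahalanobis_score on test features to flag OOD inputs.
```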
| null |
['Out-of-Distribution Detection', 'Deep Learning', 'Feature Representation', 'Normalization', 'Model Robustness', 'Empirical Study', 'Representation Geometry']
|
/pdf/18dccafdbd8986b5ee0bb5b29435ec0be3159508.pdf
|
interpretability and explainable AI
| null |
['ICLR.cc/2026/Conference/Submission25544/Authors']
|
Y5p4voRSBj
| 25,543
|
Y5p4voRSBj
|
Learning Flexible Generalization in Video Quality Assessment by Bringing Device and Viewing Condition Distributions
|
Video quality assessment (VQA) plays a critical role in optimizing video delivery systems. While numerous objective metrics have been proposed to approximate human perception, the perceived quality strongly depends on viewing conditions and display characteristics. Factors such as ambient lighting, display brightness, and resolution significantly influence the visibility of distortions. In this work, we address the question of multi-screen quality assessment on mobile devices, an area that remains underexplored. We introduce a large-scale subjective dataset collected across more than 200 Android devices, accompanied by metadata on viewing conditions and display properties. We propose a strategy for aggregated score extraction and for adapting VQA models to device-specific quality estimation. Our results demonstrate that incorporating device and context information enables more accurate and flexible quality prediction, offering new opportunities for fine-grained optimization in streaming services. We view device and condition variability as a natural form of distribution shift, and our approach provides a pathway to more robust perceptual quality prediction. Ultimately, this work advances the development of perceptual quality models that bridge the gap between laboratory evaluations and the diverse conditions of real-world media consumption.
| null |
['video quality assessment', 'subjective dataset', 'robust perceptual models', 'human-centered machine learning']
|
/pdf/a9cfa8459ce01fc6a1d4382f3b7b9c3929d4264a.pdf
|
datasets and benchmarks
| null |
['ICLR.cc/2026/Conference/Submission25543/Authors']
|
C35E46tK6T
| 25,541
|
C35E46tK6T
|
Timber: Training-free Instruct Model Refining with Base via Effective Rank
|
Post-training, which transforms a pretrained Base model into the corresponding Instruct model, is widely considered to be superficial.
In this work, we first reinforce this hypothesis by providing novel quantitative evidence at the weight level that the effective rank (eRank) remains negligibly changed. However, this superficiality also carries a critical trade-off: it improves exploitation capabilities at the cost of limiting exploration. To tackle this issue, we propose Timber, a simple yet effective training-free method that enhances the exploration capability of the Instruct model while preserving its exploitation. The key insight is to partially revert Instruct towards the paired Base model by subtle yet targeted refinement of the weight deltas. Extensive experiments on the Llama and Qwen series demonstrate that Timber consistently improves vanilla Instruct models, particularly on Pass@k performance. Our findings offer new insights into the post-training stage at the weight level and practical strategies to refine the Instruct model without training.
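The eRank statistic referenced here has a standard definition (Roy & Vetterli, 2007): the exponential of the entropy of the normalized singular-value distribution. A minimal sketch follows, with the Timber-style delta refinement indicated only schematically, since the targeted selection rule is the paper's contribution.

```python
import torch

def effective_rank(W: torch.Tensor) -> float:
    """eRank(W) = exp(H(p)), where p are the singular values of W
    normalized to sum to one."""
    s = torch.linalg.svdvals(W.float())
    p = s / s.sum()
    entropy = -(p * p.clamp_min(1e-12).log()).sum()
    return torch.exp(entropy).item()

# delta = W_instruct - W_base
# Schematic refinement: partially revert Instruct toward Base on a
# targeted subset of deltas, e.g. W_refined = W_base + gamma * delta
# (selecting which deltas to shrink is Timber's actual method).
```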
|
We propose Timber, a novel training-free method that enhances an Instruct model with its paired Base model via effective rank.
|
['LLM', 'training-free', 'effective rank']
|
/pdf/07b5488f2298a2f1080b6d74c85f46fc16a569d2.pdf
|
foundation or frontier models, including LLMs
|
/attachment/4d9d0091320e755b94d61d515520f9765d8dec06.zip
|
['ICLR.cc/2026/Conference/Submission25541/Authors']
|
27fc8hXB5N
| 25,540
|
27fc8hXB5N
|
Geometric Compression in Grokking: The Three-Stage Modular Dynamics of Transformers
|
A central mystery in deep learning is how generalizable algorithms emerge from the complex dynamics of training. The phenomenon of grokking serves as a canonical example of this puzzle. While mechanistic reverse engineering has successfully identified the final algorithms networks discover, the dynamic process of their formation remains largely uncharacterized. Progress in understanding these dynamics is hindered by complexity metrics that are either monolithic or presuppose non-universal architectural properties like piecewise linearity. We approach this challenge from a geometric perspective, introducing the Geometric Coherence Score (GCS), which quantifies how consistently networks transform local geometric structures across inputs, revealing the hidden geometric evolution of algorithmic learning. Applying GCS to Transformer grokking on modular arithmetic, we discover a universal three-stage construct-then-compress dynamic with a precise modular division of labor: (I) Early Geometric Convergence: attention achieves high geometric coherence through simple memorization patterns; (II) Geometric Restructuring: attention actively decreases coherence by constructing the complex structured representations necessary for generalization; (III) System-wide Consolidation: all computational flows coordinately increase coherence, stabilizing the generalizable algorithm. We substantiate this discovery through multiple lines of evidence: the dynamic persists across various activation functions (ReLU, GeLU, SiLU); it distinguishes successful grokking from overfitting (where geometric restructuring fails); and GCS dynamics directly correspond to the evolution of attention patterns from uniform simplicity to algorithmic sophistication. Our work reframes grokking as sophisticated modular geometric reorganization, providing the first direct geometric evidence for construct-then-compress mechanisms in neural networks and offering a principled diagnostic tool for interpreting emergent algorithms.
|
We show that grokking in Transformers is not monotonic simplification, but a "construct-then-compress" algorithm where the Self-Attention module must first increase its geometric complexity to enable a subsequent, rapid compression in the FFN.
|
['Grokking', 'Geometric Deep Learning', 'Transformers']
|
/pdf/f0b2ed6db55030eed8cb1e7520d8453a47c3de74.pdf
|
unsupervised, self-supervised, semi-supervised, and supervised representation learning
| null |
['ICLR.cc/2026/Conference/Submission25540/Authors']
|
dcqnFZAczW
| 25,534
|
dcqnFZAczW
|
Disentangled Code Embedding for Multi-Task Reinforcement Learning: A Dual-Encoder Approach with Dynamic Gating
|
We propose a disentangled code embedding module (DCEM) for multi-task reinforcement learning (RL) that explicitly separates task-agnostic and task-specific features in code representations to achieve better generalization across diverse tasks. The module uses a dual-encoder architecture: a transformer-based task-agnostic encoder captures universal programming patterns, while a graph neural network extracts task-specific features from abstract syntax trees. A dynamic gating mechanism then combines these features depending on the task context, helping the RL agent balance shared and specialized knowledge. Integrating DCEM with the RL policy and value networks enables the agent to base its decisions on structured code embeddings, which is more conducive to task-aware decision making. Moreover, the module is pre-trained with contrastive and reconstruction losses to ensure strong feature extraction before fine-tuning with the RL objective. Our approach mitigates catastrophic interference in multi-task RL by disentangling and recombining code features at run time, in contrast to past work that tends to use monolithic embeddings. Experiments show that DCEM significantly improves cross-task generalization while remaining computationally efficient. The proposed approach represents a principled way of exploiting structured code representations in RL, with potential applications in automated programming assistants, remote robot control, and other areas that require adaptive task understanding.
| null |
['Dynamic Gating']
|
/pdf/c13199ea2830fe24f6eed681752f7bbea8246410.pdf
|
transfer learning, meta learning, and lifelong learning
| null |
['ICLR.cc/2026/Conference/Submission25534/Authors']
|
mRjzWksbGR
| 25,532
|
mRjzWksbGR
|
AC-ODM: Actor–Critic Online Data Mixing for Sample-Efficient LLM Pretraining
|
Pretraining data coverage and composition strongly influence the generalization of large language models (LLMs). While recent data-mixing approaches transfer domain weights learned by a small proxy model to a larger one to reduce computational costs and carbon footprint, they are typically static and ignore training dynamics. Online Data Mixing (ODM) mitigates this with a multi-armed bandit sampler but overlooks intra-domain interactions. We introduce AC-ODM, an actor–critic online data-mixing method that treats the LLM as the environment, uses auxiliary actor–critic networks to dynamically adjust domain sampling weights, and encodes intra-domain interactions through the reward. AC-ODM supports (i) a non-proxy mode that co-trains the actor–critic with the target LLM from scratch, and (ii) a proxy mode that first trains the actor–critic with a small, trainable proxy LLM and then transfers the learned actor to guide the target LLM’s pretraining. Empirically, the proxy mode incurs additional wall-clock time relative to the non-proxy mode but delivers stronger target-LLM performance. Across both modes, AC-ODM enables efficient, adaptive data mixing and accelerates target-model convergence, with negligible per-step wall-clock overhead. On Pythia-1B pretraining over The Pile and SlimPajama, AC-ODM-410M (a policy learned with a 410M-parameter proxy) reaches the optimal validation perplexity of ODM using 71\% and 65\% fewer training steps, respectively. It achieves a 27.5\% relative improvement in zero-shot MMLU accuracy, a 2.23$\times$ higher pass@1 on HumanEval, and an average +3.44\% accuracy gain across five additional benchmarks. We further show that AC-ODM maintains the fastest pretraining convergence on LLaMA3-style architectures compared to prior data-mixing baselines.
|
AC-ODM dynamically mixes data with an actor–critic policy, speeding LLM pretraining (up to 71% fewer steps) and improving accuracy (+27.5% MMLU).
|
['Large Language Models', 'Online Data Mixing', 'Pretraining data mixing', 'Reinforcement learning']
|
/pdf/84f3ade0bea47aadfd5023d2820367a59596b686.pdf
|
foundation or frontier models, including LLMs
|
/attachment/4d7aa7aaca23784d5f012a10e0f3b28efb3c7b0a.zip
|
['ICLR.cc/2026/Conference/Submission25532/Authors']
|
62DZyNWRgv
| 25,531
|
62DZyNWRgv
|
TAROT: Test-Driven and Capability-Adaptive Curriculum Reinforcement Fine-Tuning for Code Generation
|
Large Language Models (LLMs) are fundamentally changing the coding paradigm, a shift known as vibe coding, yet synthesizing algorithmically sophisticated and robust code remains a critical challenge. Incentivizing the deep reasoning capabilities of LLMs is essential to overcome this hurdle. Reinforcement Fine-Tuning (RFT) has emerged as a promising strategy to address this need. However, most existing approaches overlook the heterogeneous difficulty and granularity inherent in test cases, leading to an imbalanced distribution of reward signals and consequently biased gradient updates during training. To address this, we propose TAROT, Test-driven and cApability-adaptive cuRriculum reinfOrcement fine-Tuning. TAROT systematically constructs, for each problem, a four-tier test suite (basic, intermediate, complex, edge), providing a controlled difficulty landscape for curriculum design and evaluation. Crucially, TAROT decouples curriculum progression from raw reward scores, enabling capability-conditioned evaluation and principled selection from a portfolio of curriculum policies rather than incidental test-case difficulty composition. This design fosters stable optimization and more efficient competency acquisition. Extensive experimental results reveal that the optimal curriculum for reinforcement fine-tuning in code generation is closely tied to a model’s inherent capability, with less capable models achieving greater gains with an easy-to-hard progression, whereas more competent models excel under a hard-first curriculum. TAROT provides a reproducible method that adaptively tailors curriculum design to a model's capability, thereby consistently improving the functional correctness and robustness of the generated code. All code and data are released to foster reproducibility and advance community research at https://anonymous.4open.science/r/TAROT-B675.
| null |
['Large Language Model', 'Code Generation', 'Curriculum Learning', 'Reinforcement Learning']
|
/pdf/76623d8f515997b9a6bd0b80a2c71cdf8c549efb.pdf
|
foundation or frontier models, including LLMs
| null |
['ICLR.cc/2026/Conference/Submission25531/Authors']
|
DdGCjvrFs0
| 25,530
|
DdGCjvrFs0
|
BioSensGraph: Predicting Biopolymer Interactions via Knowledge Graph Embedding on a Property Graph of Molecular Entities
|
Existing biomedical knowledge graphs are primarily geared toward drug repurposing and pathway analysis (gene–disease–drug). For biosensing, however, the primary early-stage task is different: selecting recognition elements (RE) that bind selectively to a given analyte. We present a large-scale biomolecular knowledge graph that aggregates data from 15 heterogeneous open sources: ~1.3 M entities and ~43 M edges of three types - interacts_with (experimental analyte-RE interactions), has_similarity (structure/sequence similarity), and has_biomarker (associations with physiological conditions). Despite typical sparsity, the graph is highly connected (97% of nodes in the giant component) and exhibits heavy-tailed degree distributions.
We cast the problem as large-scale link prediction on symmetric interacts_with (IW) edges using PyTorch-BigGraph and introduce a symmetry-aware protocol (mirror pairs are never assigned to different splits). In a controlled operator-comparator study under a pairwise ranking loss, the unit-norm DistMult (cosine) configuration delivers the most stable results (MRR = 0.457, Hits@10 = 0.822) on a ~2.6 M-triple test set. A lightweight web interface supports interactive navigation and analysis. Overall, our KG and protocol provide in-vitro-oriented ranking of analyte-RE pairs, helping to narrow the experimental search space and accelerate the transition to sensor prototypes.
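The unit-norm DistMult ("cosine") scorer reported as most stable is easy to state; a minimal sketch follows, with the embedding names being illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def distmult_score(h: torch.Tensor, r: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
    """Unit-norm DistMult: score(h, r, t) = <h/||h||, r, t/||t||>.
    The score is symmetric in h and t, matching the undirected
    interacts_with edges."""
    return (F.normalize(h, dim=-1) * r * F.normalize(t, dim=-1)).sum(dim=-1)

# Ranking candidate recognition elements for one analyte (names illustrative):
# scores = distmult_score(analyte_emb.expand_as(re_embs), r_iw, re_embs)
# shortlist = scores.topk(10).indices
```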
|
The large-scale biomolecular knowledge graph (1.3M entities, 43M edges) is constructed by integrating heterogeneous data sources and evaluated with PyTorch-BigGraph embeddings for link prediction.
|
['knowledge graph', 'knowledge graph embedding', 'link prediction', 'biosensor']
|
/pdf/f0587437998062399c9ca7c91feef2cbf4d76824.pdf
|
applications to physical sciences (physics, chemistry, biology, etc.)
| null |
['ICLR.cc/2026/Conference/Submission25530/Authors']
|
FRp8cu1aKF
| 25,529
|
FRp8cu1aKF
|
On the (In)Significance of Feature Selection in High-Dimensional Datasets
|
Feature selection (FS) is assumed to improve predictive performance and highlight meaningful features. We systematically evaluate this across $30$ diverse datasets, including RNA-Seq, mass spectrometry, and imaging. Surprisingly, tiny random subsets of features (0.02-1\%) consistently match or outperform full feature sets in $27$ of $30$ datasets and selected features from published studies (wherever available). In short, any arbitrary set of features is as good as any other (with surprisingly low variance in results) - so how can a particular set of selected features be ''important'' if they perform no better than an arbitrary set? These results indicate the failure of the null hypothesis implicit in claims across many FS papers, challenging the assumption that computationally selected features reliably capture meaningful signals. They also underscore the need for rigorous validation before interpreting selected features as actionable, particularly in computational genomics.
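The paper's null baseline is simple to reproduce in spirit: score tiny random feature subsets and compare against a published selected set. A minimal sketch follows; the classifier choice (logistic regression) and cross-validation setup are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def random_subset_baseline(X, y, fraction=0.005, n_repeats=20, seed=0):
    """Mean and std of CV accuracy over random feature subsets of size
    fraction * n_features; if this matches a feature-selected set, the
    'selected' features carry no special signal."""
    rng = np.random.default_rng(seed)
    k = max(1, int(fraction * X.shape[1]))
    scores = []
    for _ in range(n_repeats):
        idx = rng.choice(X.shape[1], size=k, replace=False)
        clf = LogisticRegression(max_iter=2000)
        scores.append(cross_val_score(clf, X[:, idx], y, cv=5).mean())
    return float(np.mean(scores)), float(np.std(scores))
```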
|
Tiny random subsets of features match or outperform feature-selected sets across 27 out of 30 high-dimensional datasets, challenging conventional feature selection and highlighting the need for rigorous validation.
|
['feature selection', 'null hypothesis testing', 'negative result', 'high-dimensional data', 'computational biology']
|
/pdf/b1568d4caaea8b0a11da0d7dfc28829f108f3352.pdf
|
unsupervised, self-supervised, semi-supervised, and supervised representation learning
|
/attachment/74313f94cb5cba52bc324eefe9dd8552f43dcfa4.pdf
|
['ICLR.cc/2026/Conference/Submission25529/Authors']
|
9f1MuExEbF
| 25,528
|
9f1MuExEbF
|
Training-Free Spectral Fingerprints of Voice Processing in Transformers
|
Different transformer architectures implement identical linguistic computations via distinct connectivity patterns, yielding model-imprinted ``computational fingerprints'' detectable through spectral analysis. Using graph signal processing on attention-induced token graphs, we track changes in algebraic connectivity (Fiedler value, $\Delta\lambda_2$) under voice alternation across 20 languages and three model families, with a prespecified early window (layers 2--5). Our analysis uncovers clear architectural signatures: Phi-3-Mini shows a dramatic English-specific early-layer disruption ($\overline{\Delta\lambda_2}_{[2,5]} \approx -0.446$) while effects in 19 other languages are minimal, consistent with public documentation that positions the model primarily for English use. Qwen2.5-7B displays small, distributed shifts that are largest for morphologically rich languages, and LLaMA-3.2-1B exhibits systematic but muted responses. These spectral signatures correlate strongly with behavioral differences (Phi-3: $r=-0.976$) and are modulated by targeted attention-head ablations, linking the effect to early attention structure and confirming functional relevance. Taken together, the findings are consistent with the view that training emphasis can leave detectable computational imprints: specialized processing strategies that manifest as measurable connectivity patterns during syntactic transformations. Beyond voice alternation, the framework differentiates reasoning modes, indicating utility as a simple, training-free diagnostic for revealing architectural biases and supporting model reliability analysis.
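Computing the Fiedler value from an attention matrix is standard graph signal processing; a minimal sketch follows, assuming one head's (seq_len, seq_len) attention matrix is symmetrized into undirected edge weights.

```python
import numpy as np

def fiedler_value(attention: np.ndarray) -> float:
    """Algebraic connectivity lambda_2 of the attention-induced token
    graph: symmetrize attention into edge weights, form the Laplacian
    L = D - A, and return the second-smallest eigenvalue."""
    A = 0.5 * (attention + attention.T)
    np.fill_diagonal(A, 0.0)
    L = np.diag(A.sum(axis=1)) - A
    return float(np.linalg.eigvalsh(L)[1])  # eigenvalues sorted ascending

# delta_lambda2 for voice alternation, averaged over the early window:
# deltas = [fiedler_value(attn_passive[l]) - fiedler_value(attn_active[l])
#           for l in range(2, 6)]
# early_window_stat = float(np.mean(deltas))
```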
|
Graph signal processing on attention reveals model-family specific shifts in algebraic connectivity (Fiedler value) for voice alternation across 20 languages, aligning with tokenization effects, behavioral fit, and head-ablation evidence.
|
['transformer interpretability', 'graph signal processing', 'attention analysis', 'cross-linguistic analysis', 'spectral connectivity', 'voice alternation', 'tokenizer effects', 'Fiedler eigenvalue']
|
/pdf/f1904893b983974d6b271b03218088c12a578d04.pdf
|
interpretability and explainable AI
| null |
['ICLR.cc/2026/Conference/Submission25528/Authors']
|
FgDmszDBKb
| 25,527
|
FgDmszDBKb
|
StaQ: a Finite Memory Approach to Discrete Action Policy Mirror Descent
|
In Reinforcement Learning (RL), regularization with a Kullback-Leibler divergence that penalizes large deviations between successive policies has emerged as a popular tool both in theory and practice. This family of algorithms, often referred to as Policy Mirror Descent (PMD), has the property of averaging out policy evaluation errors which are bound to occur when using function approximators. However, exact PMD has remained a mostly theoretical framework, as its closed-form solution involves the sum of all past Q-functions which is generally intractable. A common practical approximation of PMD is to follow the natural policy gradient, but this potentially introduces errors in the policy update. In this paper, we propose and analyze PMD-like algorithms for discrete action spaces that only keep the last $M$ Q-functions in memory. We show theoretically that for a finite and large enough $M$, an RL algorithm can be derived that introduces no errors from the policy update, yet keeps the desirable PMD property of averaging out policy evaluation errors. Using an efficient GPU implementation, we then show empirically on several medium-scale RL benchmarks such as Mujoco and MinAtar that increasing $M$ improves performance up to a certain threshold where performance becomes indistinguishable with exact PMD, reinforcing the theoretical findings that using an infinite sum might be unnecessary and that keeping in memory the last $M$ Q-functions is a practical alternative to the natural policy gradient instantiation of PMD.
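A toy sketch of the finite-memory idea: since exact PMD's closed form involves the sum of all past Q-functions, the variant keeps only the last M in a bounded buffer and takes a softmax over their sum. Table sizes, the step size, and the random Q-tables are illustrative assumptions.

```python
# Toy sketch of finite-memory PMD: the policy is a softmax over the sum
# of the last M Q-functions instead of all past ones.
from collections import deque
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, M, eta = 4, 3, 5, 1.0
q_memory = deque(maxlen=M)                # keeps only the last M Q-functions

def policy(q_memory):
    q_sum = sum(q_memory)                 # shape (n_states, n_actions)
    logits = eta * q_sum
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

for step in range(20):                    # stand-in for policy evaluation
    q_memory.append(rng.normal(size=(n_states, n_actions)))
    pi = policy(q_memory)
print(pi.round(3))
```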
|
We study a variant of PMD that keeps in memory the last M Q-functions, showing that it does not bias convergence and retains the error-averaging effect of PMD
|
['reinforcement learning', 'entropy regularization', 'policy mirror descent', 'function approximators']
|
/pdf/9278b28f29ef51db9112abf7975a7bc35c5e4ee7.pdf
|
reinforcement learning
| null |
['ICLR.cc/2026/Conference/Submission25527/Authors']
|
U2j9ZNgHqw
| 25,526
|
U2j9ZNgHqw
|
Test-Time Accuracy-Cost Control in Neural Simulators via Recurrent-Depth
|
Accuracy-cost trade-offs are a fundamental aspect of scientific computing. Classical numerical methods inherently offer such a trade-off: increasing resolution, order, or precision typically yields more accurate solutions at higher computational cost. We introduce \textbf{Recurrent-Depth Simulator} (\textbf{RecurrSim}), an architecture-agnostic framework that enables explicit test-time control over accuracy-cost trade-offs in neural simulators without requiring retraining or architectural redesign. By setting the number of recurrent iterations $K$, users can generate fast, less-accurate simulations for exploratory runs or real-time control loops, or increase $K$ for more-accurate simulations in critical applications or offline studies. We demonstrate RecurrSim's effectiveness across fluid dynamics benchmarks (Burgers, Korteweg-de Vries, Kuramoto-Sivashinsky), achieving physically faithful simulations over long horizons even in low-compute settings. On high-dimensional 3D compressible Navier-Stokes simulations with 262k points, a 0.8B parameter RecurrFNO outperforms 1.6B parameter baselines while using 13.5\% less training memory. RecurrSim consistently delivers superior accuracy-cost trade-offs compared to alternative adaptive-compute models, including Deep Equilibrium and diffusion-based approaches. We further validate broad architectural compatibility: RecurrViT reduces error accumulation by 77\% compared to standard Vision Transformers on Active Matter, while RecurrUPT matches UPT performance on ShapeNet-Car using 44\% fewer parameters.
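A minimal sketch of the recurrent-depth mechanism: one weight-tied block applied K times, with K chosen freely at inference. The tiny residual MLP is an illustrative stand-in for the paper's FNO/ViT/UPT backbones.

```python
# Sketch of test-time accuracy-cost control via recurrent depth: one shared
# block applied K times; more iterations -> more compute, typically more
# accuracy. The block and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class RecurrentDepthSimulator(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.block = nn.Sequential(nn.Linear(dim, dim), nn.GELU(),
                                   nn.Linear(dim, dim))

    def forward(self, x, K: int):
        for _ in range(K):
            x = x + self.block(x)         # weight-tied residual refinement
        return x

model = RecurrentDepthSimulator()
state = torch.randn(8, 64)
fast    = model(state, K=2)               # cheap exploratory rollout
careful = model(state, K=16)              # higher-accuracy offline run
print((fast - careful).abs().mean().item())
```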
| null |
['Neural Simulator', 'Recurrent Depth', 'AI4Simulation']
|
/pdf/f138544e39027a33cabb9a34859c50a3a4bfc4a1.pdf
|
applications to physical sciences (physics, chemistry, biology, etc.)
|
/attachment/970df3daddd12ce1a9205158ffc5da9c9f6445b1.zip
|
['ICLR.cc/2026/Conference/Submission25526/Authors']
|
gOf2ht5O0d
| 25,525
|
gOf2ht5O0d
|
Domain-Adaptive Syntax Tree Repair via Cross-Corpus Transfer with Adversarially Aligned Transformers
|
We propose a domain-adaptive syntax tree repair system that addresses the cross-corpus generalization challenges of code correction tasks. The natural domain heterogeneity of code corpora tends to bias repair models, so that performance degrades when they are applied to unseen programming contexts. To mitigate this, we propose the Domain-Aligned Syntax Tree Transformer (DASTT), a hierarchical neural model that simultaneously optimizes syntactic feasibility and domain-invariant features. The model tokenizes raw source code with a byte pair encoding tokenizer and uses a multi-layer Transformer encoder with adversarial training to align pairwise token distributions across domains. A gradient reversal layer suppresses domain discrimination while maintaining repair accuracy, so the system adapts to different codebases without retraining. Furthermore, the decoder includes a pointer-augmented mechanism that directly manipulates syntax trees, producing exact repair actions (node insertion or deletion). The proposed method fits smoothly into existing compiler pipelines, substituting for existing lexers and parsers while preserving compatibility with downstream stages. Experiments show that DASTT outperforms domain-specific baselines on cross-corpus repair tasks by a large margin, achieving strong performance across multiple programming languages and coding styles. The adversarial alignment framework preserves syntactic fidelity even under large domain shifts, making it suitable for real-world deployment in heterogeneous development environments. This work advances the state of the art in automated code repair by bringing together domain adaptation and structural syntax tree manipulation.
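The gradient reversal layer mentioned above is a standard construction: identity on the forward pass, negated (scaled) gradient on the backward pass. A minimal PyTorch sketch (the scaling constant is an illustrative choice):

```python
# Sketch of a gradient reversal layer (GRL) for adversarial domain alignment.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Negate and scale the gradient flowing into the feature extractor.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

features = torch.randn(4, 16, requires_grad=True)
domain_logits = grad_reverse(features, lambd=0.5).sum()
domain_logits.backward()
print(features.grad[0, :4])   # gradients flow back negated and scaled
```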
| null |
['Adversarially Aligned Transformers']
|
/pdf/001c6cdaa89f5c4094186d4ba570e09c6ab6d08f.pdf
|
transfer learning, meta learning, and lifelong learning
| null |
['ICLR.cc/2026/Conference/Submission25525/Authors']
|
CrGxvyppMS
| 25,524
|
CrGxvyppMS
|
Data Passports: Confidentially Provable Provenance for Onboarding Verifiable ML
|
Recent advances in ML have leveraged Zero Knowledge Proof protocols to enable institutions to cryptographically commit to a dataset and subsequently prove, to external auditors, the integrity of training and the trustworthiness of the resulting model on the committed data, all while protecting model confidentiality. Such approaches guarantee that the training algorithm which produced a model was executed correctly, but remain vulnerable to pre-commitment data tampering. This is because even if the training algorithm is executed faithfully, an institution can bypass the audit by manipulating the training data. Likewise, data generators may degrade a model’s utility via data poisoning.
To address this, we introduce tamper-proof Data Passports that bind data to verifiable and confidential proofs of authenticity. We leverage Trusted Execution Environments to issue a certificate of authenticity or ‘passport’ for each data point produced by a generating process. The generating process passes the data and passport to the institution. Then, the institution uses a zero-knowledge proof to verify the validity of the passports to an auditor, as an onboarding step for downstream proofs of training integrity and model trustworthiness. This unlocks cryptographic verification of data provenance throughout the ML pipeline.
Our experiments demonstrate that we can create tamper-proof passports for images taken by users on their smartphones with negligible overhead. Agnostic to data size, a passport can be created at capture time in only 230 ms and consumes just 4.8 KB; thus, it has minimal impact on compute, storage and network usage.
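A minimal sketch of passport issuance and verification, with a shared HMAC key standing in for the TEE's attestation key; the real system relies on TEE-issued certificates and zero-knowledge verification, so everything below is an illustrative simplification.

```python
# Sketch: bind data to a verifiable certificate of authenticity.
# Assumption: a shared secret replaces the TEE attestation key for brevity.
import hashlib, hmac, json

TEE_KEY = b"stand-in-for-tee-attestation-key"   # illustrative assumption

def issue_passport(data: bytes, metadata: dict) -> dict:
    digest = hashlib.sha256(data).hexdigest()
    payload = json.dumps({"sha256": digest, **metadata}, sort_keys=True)
    tag = hmac.new(TEE_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify_passport(data: bytes, passport: dict) -> bool:
    expected = hmac.new(TEE_KEY, passport["payload"].encode(),
                        hashlib.sha256).hexdigest()
    untampered = hmac.compare_digest(expected, passport["tag"])
    digest_ok = (json.loads(passport["payload"])["sha256"]
                 == hashlib.sha256(data).hexdigest())
    return untampered and digest_ok

photo = b"\x89PNG...raw image bytes..."
p = issue_passport(photo, {"device": "phone-123", "ts": 1700000000})
print(verify_passport(photo, p))          # True
print(verify_passport(photo + b"x", p))   # False: data was tampered with
```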
|
We introduce tamper-proof Data Passports that bind data to verifiable and confidential proofs of authenticity through a co-design of ZKP and TEE.
|
['Data Provenance', 'Zero Knowledge Proof', 'Trusted Execution Environments', 'Auditing', 'Verifiable']
|
/pdf/cdbbf08a2d8daa2e9d865f07beb4c86a209689f0.pdf
|
alignment, fairness, safety, privacy, and societal considerations
|
/attachment/2b1916ef3ef65eb18500b97d9c603c1cbaa95bcc.zip
|
['ICLR.cc/2026/Conference/Submission25524/Authors']
|
cy7YVhpW4u
| 25,520
|
cy7YVhpW4u
|
SeedThink: Test-Time Control via Seed-Thought Initialization
|
Large reasoning models (LRMs) achieve impressive performance via extended chains of thought, but this substantially increases inference overhead, making efficiency a critical bottleneck. In this paper, we first show that initializing the reasoning process with high-quality seed thoughts can steer the model away from unproductive "overthinking'' and produce more efficient reasoning trajectories. Critically, we find that the optimal granularity of this seed --- from a high-level outline to a detailed solution --- depends on problem difficulty. Motivated by this, we propose SeedThink, a novel framework that adaptively selects the seed granularity based on an estimate of problem difficulty. Specifically, SeedThink features two core innovations: (1) a \textbf{difficulty-aware seeding policy that dynamically generates seed thoughts} to reduce repetitive verification and prune unproductive branches; and (2) \textbf{seamless integration with enhanced speculative decoding}, where seed thoughts are reused as a model-free draft corpus to achieve dual-path acceleration --- shorter reasoning traces and faster token generation. Our experiments show that {SeedThink} significantly reduces inference costs while largely preserving performance. Notably, our method achieves up to 4.1× end-to-end speedup and a 68\% reduction in generation length with minimal accuracy degradation, highlighting the promise of adaptive initialization for balancing reasoning quality and efficiency.
| null |
['Large Reasoning Models', 'Speculative Decoding', 'Efficient Reasoning']
|
/pdf/43c660e875ad2d00d5079a58e751ad9ee1272af2.pdf
|
foundation or frontier models, including LLMs
| null |
['ICLR.cc/2026/Conference/Submission25520/Authors']
|
RObkOKADBU
| 25,519
|
RObkOKADBU
|
CORDS - Continuous Representations of Discrete Structures
|
Many learning problems require predicting sets of objects without knowing their number in advance. Examples include object detection, molecular modeling, and a variety of inference problems for scientific data, such as astrophysical source detection. Existing methods often rely on padded representations, or must explicitly infer the cardinality directly from data, which often poses challenges. We present a novel strategy for addressing this challenge by casting prediction of variable cardinality as a continuous inference problem, where the number of objects is recovered directly from field mass. Our approach, CORDS (Continuous Representations of Discrete Structures), provides a bijective representation that maps sets of spatial objects with features to continuous density and feature fields. Because the mapping is invertible, models can operate entirely in field space and still be decoded back to discrete sets. We evaluate CORDS across molecular generation and regression, object detection, simulation-based inference in astronomy, and a mathematical task that recovers local maxima, demonstrating robust handling of variable cardinality with competitive accuracy.
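A 1D sketch of the core encoding, assuming unit-mass Gaussian bumps so that total field mass equals set cardinality; the grid, bandwidth, and data are illustrative.

```python
# Sketch of the CORDS idea: encode a point set as a sum of unit-mass
# Gaussian bumps, so the field's integral recovers the set's cardinality.
import numpy as np

def encode(points, grid, sigma=0.05):
    # Each point contributes a normalized Gaussian => unit mass per object.
    field = np.zeros_like(grid)
    for p in points:
        field += np.exp(-0.5 * ((grid - p) / sigma) ** 2) / (
            sigma * np.sqrt(2 * np.pi))
    return field

grid = np.linspace(0, 1, 2001)
points = [0.15, 0.4, 0.42, 0.9]
field = encode(points, grid)

# Cardinality is recovered from field mass (numerical integral).
mass = np.trapz(field, grid)
print(f"recovered count: {mass:.3f} (true: {len(points)})")
```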
|
We turn discrete objects into continuous fields that implicitly encode their count, offering a simple way to handle variable cardinality across tasks and domains.
|
['Continuous set representations', 'Neural fields', 'Variable-cardinality prediction', 'Invertible encoding/decoding', 'Diffusion and flow matching', 'Object detection', 'Molecular generation', 'Simulation-based inference']
|
/pdf/fa1ccc49d1a4be2b4ed8db90c5cdb47f094341ec.pdf
|
learning on graphs and other geometries & topologies
| null |
['ICLR.cc/2026/Conference/Submission25519/Authors']
|
U7pWkp90qA
| 25,517
|
U7pWkp90qA
|
DyCodeExplainer: Explainable Dynamic Graph Attention for Multi-Agent Reinforcement Learning in Collaborative Coding
|
We propose \textbf{DyCodeExplainer}, a novel multi-agent reinforcement learning (MARL) framework that integrates dynamic graph attention with explainability techniques to improve collaborative coding. Existing MARL systems typically depend on static communication protocols, which lack flexibility and transparency on more complicated coding tasks. Our method addresses this limitation by treating agent interactions as a time-evolving graph in which nodes represent coding agents and edges indicate messages exchanged between them. A dynamic graph attention network (DGAT) dynamically prioritizes contextually relevant messages, while a hard attention gate filters out noise and improves decision-making efficiency. Furthermore, the framework includes gradient-based attention attribution and rule-based post-hoc explanations of message prioritization, providing interpretable insight into the collaborative process. The policy and critic networks use Transformer-XL and graph neural networks, respectively, to manage long-range dependencies and estimate joint state values. Experiments show DyCodeExplainer to be more accurate in terms of code correctness and collaborative efficiency than traditional MARL baselines. The novelty of the system is the simultaneous optimization of thresholds for dynamic attention and explainability rules, bridging an important gap in transparent multi-agent coding systems. This work moves the field forward by providing a scalable and interpretable solution for collaborative software development.
| null |
['Collaborative Coding']
|
/pdf/249d7b76e9fb81e4a5ae577b6181697594ce1191.pdf
|
transfer learning, meta learning, and lifelong learning
| null |
['ICLR.cc/2026/Conference/Submission25517/Authors']
|
sz2gtTBVIq
| 25,514
|
sz2gtTBVIq
|
TrustGen: Benchmarking Trustworthiness in Generative Models for Russian Language Processing Tasks
|
Large Language Models (LLMs) are increasingly used in autonomous agents and multi-agent systems to handle complex tasks, making their trustworthiness a critical concern. However, most existing benchmarks focus on English, limiting their relevance for other languages, particularly Russian. In this study, we introduce the first benchmark for evaluating LLM trustworthiness in Russian-language tasks, assessing six dimensions: truthfulness, safety, fairness, robustness, privacy, and ethics. We adapt English datasets and incorporate native Russian data, creating 14 tasks from 12 datasets. Additionally, we propose the Task Format Non-Compliance Rate to measure structural adherence without penalizing correct content. Evaluating 22 LLMs, including Russian-adapted models, we uncover significant challenges in factual consistency, safety calibration, and bias mitigation. Our findings underscore the need for tailored fine-tuning and evaluation methods for non-English applications, providing a foundation for more trustworthy AI in Russian-language contexts.
|
TrustGen — the first Russian-language benchmark for evaluating the trustworthiness of large language models
|
['trustworthiness', 'robustness', 'security and privacy', 'model bias/fairness evaluation']
|
/pdf/1c35834213ba0267c79ff99cf2cc21895c9e1ed2.pdf
|
alignment, fairness, safety, privacy, and societal considerations
|
/attachment/0e9a03337e259a19eac91a3983c4fa5f4c7f97e5.zip
|
['ICLR.cc/2026/Conference/Submission25514/Authors']
|
JWQtXbVRbs
| 25,513
|
JWQtXbVRbs
|
Non-Additive Time-Series Forecasting via Cross-Decomposition and Linear Attention
|
Many multivariate forecasters model additive effects well but miss non-additive interactions among temporal bases, variables, and exogenous drivers, which harms long-horizon accuracy and attribution. We present time-series interaction machine (${TIM}$), an all-MLP forecaster designed from the ANOVA/Hoeffding target: the regression function is decomposed into main effects and an orthogonal interaction component. TIM assigns the interaction to a DCN-style cross stack that explicitly synthesizes bounded-degree polynomial crosses with controllable CP rank, while lightweight branches capture main effects. Axis-wise linear self-attention (time and variables) transports information without increasing polynomial degree and maintains linear time and memory complexity. A decomposition regularizer encourages orthogonality and yields per-component attributions. We establish degree and rank guarantees and a risk identity showing that the additive error gap equals the energy of the interaction subspace. TIM achieves state-of-the-art accuracy on long-term benchmarks with clear cross-term interpretability.
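A small numpy sketch of the DCN-style cross layer referenced above, in its standard form x_{l+1} = x0 * (W_l x_l + b_l) + x_l, where each layer raises the polynomial degree of the synthesized crosses by one; dimensions and depth are illustrative assumptions.

```python
# Sketch of a DCN-style cross stack: explicit bounded-degree feature crosses.
import numpy as np

rng = np.random.default_rng(0)
d, depth = 8, 3
W = [rng.normal(scale=0.1, size=(d, d)) for _ in range(depth)]
b = [np.zeros(d) for _ in range(depth)]

def cross_stack(x0):
    x = x0
    for l in range(depth):
        x = x0 * (W[l] @ x + b[l]) + x     # degree grows by 1 per layer
    return x

x0 = rng.normal(size=d)
print(cross_stack(x0))                     # bounded-degree polynomial crosses
```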
| null |
['multivariate time series', 'deep & cross networks', 'linear attention']
|
/pdf/cdfb285a0b98cac45511aa6794b153764148d9b3.pdf
|
learning on time series and dynamical systems
| null |
['ICLR.cc/2026/Conference/Submission25513/Authors']
|
R9MzJjvzXv
| 25,511
|
R9MzJjvzXv
|
HealthSLM-Bench: Benchmarking Small Language Models for On-device Healthcare Monitoring
|
On-device healthcare monitoring plays a vital role in facilitating timely interventions, managing chronic health conditions, and ultimately improving individuals’ quality of life. Previous studies on large language models (LLMs) have highlighted their impressive generalization abilities and effectiveness in healthcare prediction tasks. However, most LLM-based healthcare solutions are cloud-based, which raises significant privacy concerns and results in increased memory usage and latency. To address these challenges, there is growing interest in compact models, known as Small Language Models (SLMs), which are lightweight and designed to run locally and efficiently on mobile and wearable devices. Nevertheless, how well these models perform in healthcare prediction remains largely unexplored.
We systematically evaluated SLMs on health prediction tasks using zero-shot, few-shot, and instruction fine-tuning approaches, and deployed the best performing fine-tuned SLMs on mobile devices to evaluate their real-world efficiency and predictive performance in practical healthcare scenarios. Our results show that SLMs can achieve performance comparable to LLMs while offering substantial gains in efficiency, reaching up to 17$\times$ lower latency and 16$\times$ faster inference speed on mobile platforms. However, challenges remain, particularly in handling class imbalance and few-shot scenarios. These findings highlight SLMs, though imperfect in their current form, as a promising solution for next-generation, privacy-preserving healthcare monitoring.
|
This paper benchmarks Small Language Models (SLMs) for mobile health applications, demonstrating efficient, privacy-preserving predictions on wearable devices, with performance comparable to larger language models (LLMs) in health task predictions.
|
['Small Language Models (SLMs)', 'Mobile Health', 'On-Device LLMs']
|
/pdf/ca4ab04794359616aa065ece1fa1b6f97c822d0b.pdf
|
datasets and benchmarks
| null |
['ICLR.cc/2026/Conference/Submission25511/Authors']
|
Z0jDtLL7aM
| 25,510
|
Z0jDtLL7aM
|
Efficient Spectral Graph Diffusion based on Symmetric Normalized Laplacian
|
Graph distribution learning and generation are fundamental challenges with applications in drug discovery, materials science, and network analysis. While diffusion-based approaches have shown promise, existing spectral methods suffer from eigenvalue imbalance and limited scalability. We introduce Efficient Spectral Graph Diffusion (ESGD), which advances spectral graph generation in three key ways: (1) compressing eigenvalues of the Symmetric Normalized Laplacian (SNL) into a bounded domain to eliminate spectrum imbalance with theoretical convergence guarantees; (2) designing a degree-matrix recovery algorithm to reconstruct adjacency matrices from SNL representations; (3) scaling to graphs with thousands of nodes where other models fail. The SNL transformation reduces condition numbers and learning difficulty for complex distribution patterns. Empirically, ESGD achieves state-of-the-art performance on generic graphs and competitive results on molecular generation, while successfully extending to large graphs. ESGD converges in 20 epochs (vs. >2000 for baselines) with 6-10× fewer sampling steps, establishing an efficient foundation for spectral graph diffusion.
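A sketch of the spectral ingredient: eigenvalues of the symmetric normalized Laplacian already lie in [0, 2], and the affine shift below to [-1, 1] is one illustrative compression (the paper's exact map may differ).

```python
# Sketch: spectrum of the symmetric normalized Laplacian (SNL)
# L = I - D^{-1/2} A D^{-1/2}, whose eigenvalues lie in [0, 2].
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(0, 2, size=(10, 10))
A = np.triu(A, 1); A = A + A.T                    # random simple graph
deg = A.sum(axis=1).clip(min=1)                   # guard isolated nodes
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
L_snl = np.eye(10) - D_inv_sqrt @ A @ D_inv_sqrt

eigvals = np.linalg.eigvalsh(L_snl)
print("SNL spectrum in [0, 2]:", eigvals.round(3))
compressed = eigvals - 1.0                        # bounded domain [-1, 1]
print("compressed spectrum:", compressed.round(3))
```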
| null |
['Efficient Graph Generation', 'Spectral Diffusion', 'Eigenvalue Normalization']
|
/pdf/4de5cd1d3054f2186c6cacd42331f145608047f3.pdf
|
generative models
|
/attachment/ffca4ac02c763ba52eef83867a7deca8bca31fb1.zip
|
['ICLR.cc/2026/Conference/Submission25510/Authors']
|
kbxjkoF42x
| 25,509
|
kbxjkoF42x
|
Ensemble Learning for AUC Maximization via Surrogate Loss
|
In classification tasks, the area under the ROC curve (AUC) is a key metric for evaluating a model’s ability to discriminate between positive and negative samples. An AUC-maximizing classifier can have significant advantages in cases where ranking correctness is valued or when the outcome is rare. While ensemble learning is a common strategy to improve predictive performance by combining multiple base models, direct AUC maximization for aggregating base learners leads to an NP-hard optimization challenge. To address this challenge, we propose a novel stacking framework that leverages a linear combination of base models through a surrogate loss function designed to maximize AUC. Our approach learns data-driven stacking weights for base models by minimizing a pairwise loss-based objective. Theoretically, we prove that the resulting ensemble is asymptotically optimal with respect to AUC. Moreover, when the set of base models includes correctly specified models, our method asymptotically concentrates all weight on these models, ensuring consistency. In numerical simulations, the proposed method reduces the AUC risk by up to 20\% compared to existing ensemble methods, a finding that is corroborated by real-data analysis, which also shows a reduction of over 30\%.
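A minimal sketch of the stacking step, assuming a pairwise logistic surrogate for 1 - AUC over positive-negative pairs; the data generator and the particular surrogate are illustrative choices, not the paper's exact formulation.

```python
# Sketch: learn stacking weights over base-model scores by minimizing a
# smooth pairwise surrogate of the AUC risk.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, k = 200, 3
scores = rng.normal(size=(n, k))          # base-model scores (columns)
y = rng.integers(0, 2, size=n)
scores[y == 1] += rng.normal(0.8, 0.3, size=(int(y.sum()), k)) * [1, 0.5, 0.1]

pos, neg = scores[y == 1], scores[y == 0]

def surrogate_risk(w):
    # Pairwise logistic loss log(1 + exp(-(s_pos - s_neg))) over all
    # positive-negative pairs: a smooth upper bound on 1 - AUC.
    margins = pos @ w[:, None] - (neg @ w)[None, :]   # (n_pos, n_neg)
    return np.mean(np.logaddexp(0.0, -margins))

w = minimize(surrogate_risk, x0=np.ones(k) / k, method="L-BFGS-B").x
print("stacking weights:", w.round(3))
```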
|
This paper proposes a novel stacking framework that linearly combines base models via a surrogate loss function designed to maximize AUC. The resulting ensemble is asymptotically optimal and its effectiveness is verified by empirical studies.
|
['AUC Maximization', 'Ensemble Learning', 'Machine Learning', 'Binary Classification', 'Surrogate Loss', 'Asymptotic Optimality']
|
/pdf/9ee967d96af6a9efac51d353960d45a8a838da18.pdf
|
learning theory
| null |
['ICLR.cc/2026/Conference/Submission25509/Authors']
|
1BXojAgNrg
| 25,508
|
1BXojAgNrg
|
MedAraBench: Large-scale Arabic Medical Question Answering Dataset and Benchmark
|
Arabic remains one of the most underrepresented languages in natural language processing research, particularly in medical applications, due to the limited availability of open-source data and benchmarks. The lack of resources hinders efforts to evaluate and advance the multilingual capabilities of Large Language Models (LLMs). In this paper, we introduce MedAraBench, a large-scale dataset consisting of Arabic multiple-choice question-answer pairs across various medical specialties. We constructed the dataset by manually digitizing a large repository of academic materials created by medical professionals in the Arabic-speaking region. We then conducted extensive preprocessing and split the dataset into training and test sets to support future research efforts in the area. To assess the quality of the data, we adopted two frameworks, namely expert human evaluation and LLM-as-a-judge. Our dataset is diverse and of high quality, spanning 19 specialties and five difficulty levels. For benchmarking purposes, we assessed the performance of eight state-of-the-art open-source and proprietary models, such as GPT-5, Gemini 2.0 Flash, and Claude 4-Sonnet. Our findings highlight the need for further domain-specific enhancements. We release the dataset and evaluation scripts to broaden the diversity of medical data benchmarks, expand the scope of evaluation suites for LLMs, and enhance the multilingual capabilities of models for deployment in clinical settings.
| null |
['Dataset Benchmark', 'Large Language Models', 'Arabic Natural Language Processing', 'Medical Question Answering']
|
/pdf/6fd11f6243e10a4727f8831dc5dc6ecc43837990.pdf
|
datasets and benchmarks
| null |
['ICLR.cc/2026/Conference/Submission25508/Authors']
|
uajSG0jubM
| 25,506
|
uajSG0jubM
|
MouseDTB: A Mouse Digital Twin Brain at Single-neuron Resolution
|
Accurate whole-brain computational modeling grounded in single-neuron resolution connectivity is crucial for understanding how large-scale brain structures give rise to complex behaviors and cognition. Conventional mouse whole-brain models are typically constructed from coarse-grained regional or voxel-level connectivity, without considering single-neuron biological plausibility in the mouse brain connectome. In this study, we build a mouse digital twin brain (mouse DTB) at single-neuron resolution with a large-scale spiking neural network that supports complex behavioral tasks at whole-brain scale. We developed the mouse brain connectivity at single-neuron resolution through a data-driven pipeline that integrates high-resolution axonal projection data and spatial distributions of cells from the mouse brain cell atlas. The resulting neuronal connectivity is coupled with leaky integrate-and-fire (LIF) neurons and conductance-based synapses to form a large-scale spiking neural network of the mouse brain. The mouse DTB successfully reproduced blood-oxygen-level-dependent (BOLD) signals observed in both the resting state and an olfactory Go/No-Go discrimination task with high correlation, and exhibits correct behavioral responses aligned with perceptual odor inputs. This model leverages diffusion ensemble Kalman filtering (EnKF) and hierarchical Bayesian inference for parameter estimation. Our work provides a single-neuron resolution, whole-brain mouse DTB, offering a powerful tool for studying neural dynamics, behavior and cognition underlying mouse intelligence during complex tasks.
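A minimal sketch of the leaky integrate-and-fire update used as the neuron model; all constants are illustrative, and the synaptic coupling is only indicated in a comment.

```python
# Sketch of the LIF neuron: the membrane potential leaks toward rest,
# integrates input current, and spikes/resets at threshold.
import numpy as np

dt, tau_m = 0.1, 10.0                 # ms; time step and membrane constant
v_rest, v_reset, v_th = -65.0, -70.0, -50.0   # mV

def lif_step(v, input_current):
    v = v + dt / tau_m * (-(v - v_rest) + input_current)
    spiked = v >= v_th
    v = np.where(spiked, v_reset, v)
    return v, spiked

v = np.full(5, v_rest)                # five toy neurons
rng = np.random.default_rng(0)
for t in range(1000):
    v, spikes = lif_step(v, rng.normal(20.0, 5.0, size=5))
    # spikes would be propagated through conductance-based synapses here
print(v.round(2))
```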
| null |
['digital twin brain', 'whole-brain modelling', 'mouse brain connectome']
|
/pdf/8dd8e15d9d367470bcd443d35cd3a4aeae7adb94.pdf
|
applications to neuroscience & cognitive science
|
/attachment/3afc90813fd7a3922f65441177b2ffc57d0a74a2.zip
|
['ICLR.cc/2026/Conference/Submission25506/Authors']
|
wfTt4wZtaj
| 25,505
|
wfTt4wZtaj
|
A Bilingual Acupuncture Question Answering System via Lightweight LLMs and Retrieval-Augmented Generation
|
Large language models (LLMs) are prone to hallucinations and often lack reliable access to structured, domain-specific knowledge in Traditional Chinese Medicine (TCM). We present the first bilingual (Chinese--English) acupuncture question answering system built on lightweight LLM backbones and retrieval-augmented generation (RAG). The system integrates a curated ontology covering 361 acupoints and 14 meridians, clinician-authored case records, and a triple-constraint decoding strategy (terminology checking, evidence grounding, and safety filtering) to deliver controlled, verifiable answers. On our evaluation suite, the best-performing configuration (bert+baichuan) achieves 94.4% context recall, 97.2% faithfulness, and 96.1% answer relevance (RAGAS), together with 0.88 BLEU, 0.94 ROUGE, a 94 GPT score, and a 90 expert score. These results confirm that bilingual embedding fusion plus constraint-based decoding substantially improves factuality and clinical usefulness over pure LLM baselines, establishing a strong foundation for reliable and accessible acupuncture-specific question answering systems.
|
We propose the first bilingual acupuncture QA system that combines lightweight LLMs with retrieval-augmented generation and constraint-based decoding, achieving strong factuality and clinical reliability.
|
['retrieval-augmented generation', 'large language models', 'acupuncture', 'traditional Chinese medicine', 'hallucination mitigation', 'bilingual QA', 'clinical validation']
|
/pdf/eddd877533d058c77ba14b93a98c354eea037034.pdf
|
foundation or frontier models, including LLMs
| null |
['ICLR.cc/2026/Conference/Submission25505/Authors']
|
zcAwK50ft0
| 25,504
|
zcAwK50ft0
|
Fracture-GS: Dynamic Fracture Simulation with Physics-Integrated Gaussian Splatting
|
This paper presents a unified framework for simulating and visualizing dynamic fracture phenomena in extreme mechanical collisions using multi-view image inputs. While existing methods primarily address elastic deformations at contact surfaces, they fail to capture the complex physics of extreme collisions, often producing non-physical artifacts and material adhesion at fracture interfaces. Our approach integrates two key innovations: (1) an enhanced Collision Material Point Method (Collision-MPM) with momentum-conserving interface forces derived from normalized mass distributions, which effectively eliminates unphysical adhesion in fractured solids; and (2) a fracture-aware 3D Gaussian continuum representation that enables physically plausible rendering without post-processing. The framework operates through three main stages: First, performing implicit reconstruction of collision objects from multi-view images while sampling both surface and internal particles and simultaneously learning surface particle Gaussian properties via splatting; Second, high-fidelity collision resolution using our improved Collision-MPM formulation; Third, dynamic fracture tracking with Gaussian attribute optimization for fracture surfaces rendering. Through comprehensive testing, our framework demonstrates significant improvements over existing methods in handling diverse scenarios, including homogeneous materials, heterogeneous composites, and complex multi-body collisions. The results confirm superior physical accuracy, while maintaining computational efficiency for rendering.
| null |
['3D vision', 'Physics-based Simulation']
|
/pdf/a9a70f81dda15129b19546285811277011bdb027.pdf
|
applications to robotics, autonomy, planning
|
/attachment/bac12d0d17fe8b3ee0e4ea49c7833320acc51bc5.zip
|
['ICLR.cc/2026/Conference/Submission25504/Authors']
|
KirKWFPYJA
| 25,501
|
KirKWFPYJA
|
High Probability Bounds for Non-Convex Stochastic Optimization with Momentum
|
Stochastic gradient descent with momentum (SGDM) has been widely used in machine learning. However, in non-convex domains, high probability learning bounds for SGDM are scarce. In this paper, we provide high probability convergence bounds and generalization bounds for SGDM. Firstly, we establish these bounds for the gradient norm in the general non-convex case. The derived convergence bounds are tighter than the theoretical results of related work, and to our best knowledge, the derived generalization bounds are the first ones for SGDM. Then, if the Polyak-{\L}ojasiewicz condition is satisfied, we establish these bounds for the error of the function value, instead of the gradient norm. Moreover, the derived learning bounds have faster rates than the general non-convex case. Finally, we further provide sharper generalization bounds by considering a mild Bernstein condition on the gradient. In the case of low noise, their learning rates can reach $\widetilde{\mathcal{O}}(1/n^2)$, where $n$ is the sample size. Overall, we relatively systematically investigate the high probability learning bounds for non-convex SGDM.
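For reference, the SGDM iteration the bounds concern, in its common heavy-ball form; the quadratic objective and constants are illustrative.

```python
# Sketch of the SGDM update:
#   v_{t+1} = beta * v_t + g_t,   x_{t+1} = x_t - eta * v_{t+1},
# with g_t a stochastic gradient of the objective.
import numpy as np

rng = np.random.default_rng(0)
beta, eta = 0.9, 0.01

def stoch_grad(x):                     # noisy gradient of f(x) = ||x||^2 / 2
    return x + rng.normal(scale=0.1, size=x.shape)

x, v = rng.normal(size=10), np.zeros(10)
for t in range(500):
    v = beta * v + stoch_grad(x)       # momentum buffer
    x = x - eta * v                    # parameter update
print(f"final gradient norm: {np.linalg.norm(x):.4f}")
```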
| null |
['Momentum', 'nonconvex learning', 'generalization']
|
/pdf/2e05ad5f0bb41097b874a4fc65a2ff358f797ed4.pdf
|
learning theory
| null |
['ICLR.cc/2026/Conference/Submission25501/Authors']
|
dGxAYNK6JU
| 25,500
|
dGxAYNK6JU
|
PoinnCARE: Hyperbolic Multi-Modal Learning for Enzyme Classification
|
Enzyme Commission (EC) number prediction is vital for elucidating enzyme functions and advancing biotechnology applications. However, current methods struggle to capture the hierarchical relationships among enzymes and often overlook critical structural and active site features. To bridge this gap, we introduce PoinnCARE, a novel framework that jointly encodes and aligns multi-modal data from enzyme sequences, structures, and active sites in hyperbolic space. By integrating graph diffusion and alignment techniques, PoinnCARE mitigates data sparsity and enriches functional representations, while hyperbolic embedding preserves the intrinsic hierarchy of the EC system with theoretical guarantees in low-dimensional spaces. Extensive experiments on four datasets from the CARE benchmark demonstrate that PoinnCARE consistently and significantly outperforms state-of-the-art methods in EC number prediction.
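A small sketch of the hyperbolic geometry involved: the standard Poincaré-ball distance, whose growth toward the boundary suits tree-like hierarchies such as the EC taxonomy; the 2D points are illustrative.

```python
# Sketch: distance in the Poincare ball, the hyperbolic model PoinnCARE
# embeds into. Points near the boundary are exponentially far apart.
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    sq = np.sum((u - v) ** 2)
    denom = (1 - np.sum(u ** 2)) * (1 - np.sum(v ** 2))
    return np.arccosh(1 + 2 * sq / max(denom, eps))

root  = np.array([0.0, 0.0])       # e.g., an EC class near the origin
child = np.array([0.6, 0.0])       # subclass pushed toward the boundary
leaf  = np.array([0.9, 0.05])      # deep, specific EC number
print(poincare_distance(root, child))   # moderate
print(poincare_distance(child, leaf))   # large: boundary distances blow up
```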
| null |
['EC number prediction', 'enzyme function', 'hyperbolic space learning', 'multi-modal learning', 'enzyme structure', 'enzyme active site']
|
/pdf/b157b7c614a6a899b8c3d62fab43678ede072846.pdf
|
applications to physical sciences (physics, chemistry, biology, etc.)
| null |
['ICLR.cc/2026/Conference/Submission25500/Authors']
|
0wJuW3snwU
| 25,499
|
0wJuW3snwU
|
SD-MAD: Sign-Driven Few-shot Multi-Anomaly Detection in Medical Images
|
Medical anomaly detection (AD) is crucial for early clinical intervention, yet it faces challenges due to limited access to high-quality medical imaging data, caused by privacy concerns and data silos. Few-shot learning has emerged as a promising approach to alleviate these limitations by leveraging the large-scale prior knowledge embedded in vision-language models (VLMs). Recent advancements in few-shot medical AD have treated normal and abnormal cases as a one-class classification problem, often overlooking the distinction among multiple anomaly categories. Thus, in this paper, we propose a framework tailored for few-shot medical anomaly detection in the scenario where the identification of multiple anomaly categories is required. We propose that separating anomalies relies on distinct radiological signs, routinely used by clinicians to bridge knowledge and images. To capture the detailed radiological signs of medical anomaly categories, our framework incorporates diverse textual descriptions for each category generated by a Large-Language model, under the assumption that different anomalies in medical images may share common radiological signs in each category. Specifically, we introduce SD-MAD, a two-stage \textbf{S}ign-\textbf{D}riven few-shot \textbf{M}ulti-\textbf{A}nomaly \textbf{D}etection framework: (i) Radiological signs are aligned with anomaly categories and distinguished by amplifying inter-anomaly discrepancy; (ii) Aligned signs are selected further to mitigate the effect of the under-fitting and uncertain-sample issue caused by limited medical data, employing an automatic sign selection strategy at inference. Moreover, we propose two protocols to comprehensively quantify the performance of multi-anomaly detection. Extensive experiments illustrate the effectiveness of our method.
| null |
['Anomaly Detection', 'Medical Image', 'Few-shot Learning']
|
/pdf/033ee3a2bea6c41bed5f86e4d11487a5197a556b.pdf
|
applications to physical sciences (physics, chemistry, biology, etc.)
| null |
['ICLR.cc/2026/Conference/Submission25499/Authors']
|
pKKtSi88fH
| 25,496
|
pKKtSi88fH
|
ObjexMT: Objective Extraction and Metacognitive Calibration for LLM-as-a-Judge under Multi-Turn Jailbreaks
|
LLM-as-a-Judge (LLMaaJ) now underpins scalable evaluation, yet we lack a decisive test of a judge's qualification: can it recover a conversation's latent objective and know when that inference is trustworthy? LLMs degrade under irrelevant or long context; multi-turn jailbreaks further hide goals across turns. We introduce **ObjexMT**, a benchmark for objective extraction and metacognition. Given a multi-turn transcript, a model must return a one-sentence base objective and a self-reported confidence. Accuracy is computed via LLM-judge semantic similarity to gold objectives, converted to binary correctness by a single human-aligned threshold calibrated once on **N=300** items ($\tau^\star = 0.66$; $F_1@\tau^\star = 0.891$). Metacognition is evaluated with ECE, Brier, *Wrong@High-Confidence* (0.80/0.90/0.95), and risk--coverage. Across six models (`gpt-4.1`, `claude-sonnet-4`, `Qwen3-235B-A22B-FP8`, `kimi-k2`, `deepseek-v3.1`, `gemini-2.5-flash`) on *SafeMTData_Attack600*, *SafeMTData_1K*, and *MHJ*, `kimi-k2` attains the highest objective-extraction accuracy (**0.612**; 95% CI [0.594, 0.630]), with `claude-sonnet-4` (**0.603**) and `deepseek-v3.1` (**0.599**) not statistically distinguishable from it by paired tests. `claude-sonnet-4` yields the best selective risk and calibration (AURC **0.242**; ECE **0.206**; Brier **0.254**).
**Striking dataset heterogeneity (16--82% accuracy variance) reveals that automated obfuscation poses fundamental challenges beyond model choice.**
Despite improvements, high-confidence errors remain: Wrong@0.95 ranges from **14.9%** (`claude-sonnet-4`) to **47.7%** (`Qwen3-235B-A22B-FP8`). ObjexMT thus supplies an actionable test for LLM judges: when objectives are not explicit, judges often misinfer them; we recommend exposing objectives when feasible and gating decisions by confidence otherwise. **All experimental data are provided in the Supplementary Material and at [https://anonymous.4open.science/r/ObjexMT_dataset_Anonymous_ICLR-F658/](https://anonymous.4open.science/r/ObjexMT_dataset_Anonymous_ICLR-F658/).**
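For reference, minimal implementations of the reported metacognition metrics (ECE with equal-width bins, Brier score, Wrong@High-Confidence); the bin count and toy predictions are illustrative, not the benchmark's exact configuration.

```python
# Sketch of calibration metrics on (confidence, correctness) pairs.
import numpy as np

def ece(conf, correct, n_bins=10):
    bins = np.clip((conf * n_bins).astype(int), 0, n_bins - 1)
    total = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            gap = abs(conf[mask].mean() - correct[mask].mean())
            total += mask.mean() * gap   # bin weight * calibration gap
    return total

rng = np.random.default_rng(0)
conf = rng.uniform(0.3, 1.0, size=500)
correct = (rng.uniform(size=500) < conf * 0.8).astype(float)  # overconfident

brier = np.mean((conf - correct) ** 2)
wrong_at_090 = np.mean(correct[conf >= 0.90] == 0)
print(f"ECE={ece(conf, correct):.3f}  Brier={brier:.3f}  "
      f"Wrong@0.90={wrong_at_090:.1%}")
```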
|
ObjexMT benchmarks whether LLM judges can recover a dialogue’s hidden objective and calibrate their confidence under multi-turn jailbreaks, revealing frequent overconfident misinference and guiding confidence‑gated, objective‑exposed evaluation.
|
['ObjexMT', 'LLM-as-a-Judge', 'objective extraction', 'multi-turn jailbreaks', 'latent intent inference', 'metacognitive calibration', 'confidence estimation', 'Expected Calibration Error', 'Brier score', 'selective prediction', 'risk-coverage', 'safety evaluation']
|
/pdf/dfc0348ade93046f4ac752043a40a430f225cdd0.pdf
|
alignment, fairness, safety, privacy, and societal considerations
|
/attachment/5e5a93ca591fea0d04ee81050c1477657bbd911e.zip
|
['ICLR.cc/2026/Conference/Submission25496/Authors']
|
WkGnZsnCDR
| 25,493
|
WkGnZsnCDR
|
Hierarchical Graph-coding Diffusion Model with Adaptive Information Bottleneck for Multichannel Speech Enhancement
|
Diffusion models have achieved strong performance in multichannel speech enhancement, especially in unseen noisy scenarios. However, most existing diffusion methods rely on globally consistent guidance applied either to the output or uniformly across denoiser layers, which fails to provide layer-specific adaptation and introduces redundancy, thereby constraining denoising performance. To address these challenges, we propose a novel hierarchical graph-coding diffusion model with an adaptive information bottleneck (HG-Diff-IB) for multichannel speech enhancement. Specifically, we introduce a hierarchical alignment method to align graph-coding with the denoiser at different depths, together with a layer-wise graph-coding modulation mechanism that injects graph information into intermediate features, enabling layer-specific guidance of diffusion feature distributions. Furthermore, we introduce an adaptive information bottleneck that dynamically adjusts feature compression according to the estimated SNR, effectively balancing noise suppression and target feature preservation. Experimental results demonstrate that our proposed method outperforms baselines on various evaluation metrics.
| null |
['hierarchical graph-coding', 'diffusion model', 'layer modulation', 'adaptive information bottleneck', 'multichannel speech enhancement']
|
/pdf/bdb5e477105dda0016c2dfaed9ac41c94632786d.pdf
|
generative models
| null |
['ICLR.cc/2026/Conference/Submission25493/Authors']
|
S2vVSNJhFw
| 25,491
|
S2vVSNJhFw
|
Dynamic Contrastive Reinforcement Learning for Adaptive Code-Text Alignment via Multi-Modal Fusion
|
We propose Dynamic Contrastive Reinforcement Learning (DCRL), a new framework for end-to-end adaptive code-text alignment with multi-modal fusion. The proposed method overcomes the shortcomings of static fusion methods by dynamically tuning contrastive learning parameters according to the reinforcement learning agent's performance, so that alignment quality tracks task proficiency. Unlike conventional methods that use a fixed margin and fixed temperature in the contrastive loss, DCRL treats the margin and temperature as functions of the agent's cumulative reward and task completion rate, allowing the embedding space to first explore broadly and then converge to precise alignment. The framework incorporates a cross-modal transformer that fuses code and text embeddings, which are then fed into a policy network for downstream tasks such as code generation and text summarization.
| null |
['Multi-Modal Fusion']
|
/pdf/2a662b803c7850ea8110bdb67c0827688376bec1.pdf
|
transfer learning, meta learning, and lifelong learning
| null |
['ICLR.cc/2026/Conference/Submission25491/Authors']
|
8t5TlFzUAU
| 25,486
|
8t5TlFzUAU
|
When Names Disappear: Revealing What LLMs Actually Understand About Code
|
Large Language Models (LLMs) achieve strong results on code tasks, but how they derive program meaning remains unclear. We argue that code communicates through two channels: structural semantics, which define formal behavior, and human-interpretable naming, which conveys intent. Removing the naming channel severely degrades intent-level tasks such as summarization, where models regress to line-by-line descriptions. Surprisingly, we also observe consistent reductions on execution tasks that should depend only on structure, revealing that current benchmarks reward memorization of naming patterns rather than genuine semantic reasoning. To disentangle these effects, we introduce a suite of semantics-preserving obfuscations and show that they expose identifier leakage across both summarization and execution. Building on these insights, we release ClassEval-Obf, an obfuscation-enhanced benchmark that systematically suppresses naming cues while preserving behavior. Our results demonstrate that ClassEval-Obf reduces inflated performance gaps, weakens memorization shortcuts, and provides a more reliable basis for assessing LLMs’ code understanding and generalization.
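A minimal sketch of one semantics-preserving obfuscation: renaming parameters and local variables with ast so behavior is unchanged while naming cues disappear. This simple pass (arguments and assigned names only) is an illustrative stand-in for the paper's obfuscation suite.

```python
# Sketch: identifier obfuscation that preserves program semantics.
import ast

class Obfuscate(ast.NodeTransformer):
    def __init__(self):
        self.mapping = {}

    def rename(self, name):
        return self.mapping.setdefault(name, f"v{len(self.mapping)}")

    def visit_arg(self, node):
        node.arg = self.rename(node.arg)
        return node

    def visit_Name(self, node):
        # Rename names we bind (Store) and any previously renamed name.
        if node.id in self.mapping or isinstance(node.ctx, ast.Store):
            node.id = self.rename(node.id)
        return node

src = """
def average_speed(distance_km, hours):
    speed = distance_km / hours
    return speed
"""
tree = Obfuscate().visit(ast.parse(src))
print(ast.unparse(tree))   # body now uses v0, v1, v2; behavior unchanged
```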
|
This paper studies the effect of structure and naming on LLM code understanding, showing that removing naming harms intent comprehension and exposes memorization in current benchmarks.
|
['large language model', 'code summarization', 'code execution understanding', 'name obfuscation', 'datasets and benchmarks']
|
/pdf/b98db74c0124e52264daa8b6ada84de45c3dcb37.pdf
|
foundation or frontier models, including LLMs
| null |
['ICLR.cc/2026/Conference/Submission25486/Authors']
|
lFaLBotlag
| 25,483
|
lFaLBotlag
|
Dynamic Incremental Code Embeddings (DICE): A Real-Time Communication Protocol for Multi-Agent Reinforcement Learning
|
We propose Dynamic Incremental Code Embeddings (DICE), a real-time communication protocol that addresses the inefficiency of static or periodically updated embeddings in dynamic coding environments for multi-agent reinforcement learning (MARL) in collaborative code completion. The proposed method combines two novel mechanisms: dynamic semantic drift encoding (DSDE) inside the context encoder, and dynamic contextual embedding adaptation (DCEA), which keeps code representations up to date through lightweight operations and adapts them to new local or collaboratively shared inputs from other agents. DSDE captures semantic drift with a continuous-time process, so embeddings evolve at little computational cost, and DCEA performs dynamically adaptive attention with graph attention networks (GATs), integrating related context from adjacent agents. These mechanisms are combined into a single shared state-level representation, and the embeddings replace traditional static inputs to the policy networks, with the reward function updated to penalize semantic deviation across agents, encouraging individual objectives to align with the system's unity. Furthermore, the system scales linearly in the number of agents, versus the quadratic complexity of full-retraining approaches. Empirical results demonstrate a 40\% reduction in redundant suggestions compared to static embedding baselines, highlighting the practical significance of DICE in real-world collaborative coding scenarios. The framework is realized with a fine-tuned GPT-3.5-turbo encoder and a 4-head GAT, providing a scalable and efficient solution for MARL in code completion tasks.
| null |
['Multi-Agent Reinforcement Learning']
|
/pdf/187e57ae449129e7c5e7f5ffb8b203466efe4837.pdf
|
transfer learning, meta learning, and lifelong learning
| null |
['ICLR.cc/2026/Conference/Submission25483/Authors']
|
60Vj3aBnjw
| 25,482
|
60Vj3aBnjw
|
Position-Aware Modeling for Next-Token Prediction
|
Next-token prediction (NTP) serves as the dominant training paradigm for large language models (LLMs), enabling strong autoregressive (AR) generation capabilities. Despite its success, models trained with vanilla NTP often exhibit counterintuitive failure patterns, such as the reversal curse, factorization curse, and sensitivity to knowledge position. These failures stem from the fixed left-to-right token order during teacher-forcing supervision, which entangles content and token order in ways that compromise permutation invariance. To address these failures, we introduce a position-aware training framework that enables AR models to predict the next token based not only on seen content but also on the predicted token position. This disentanglement of what to predict and where to predict improves the robustness of LLMs to different token orderings. We instantiate this framework via two complementary approaches: (1) Content-Position Coupling (CPC), which injects a lightweight position-aware embedding into the input sequence without modifying the model architecture; and (2) Content-Position Decoupling (CPD), which introduces modular position-aware blocks for pre-training the AR model to provide explicit supervision over target positions. Experiments across three representative tasks demonstrate that our framework consistently improves performance over strong baselines, while maintaining architectural simplicity and convergence efficiency. Codes are available at {\url{https://anonymous.4open.science/r/CPC-CPD}}.
| null |
['Next-Token Prediction', 'Large language models', 'Position-aware']
|
/pdf/668dbccaa4698ef7d2734ce006f857e0db5b4d45.pdf
|
foundation or frontier models, including LLMs
| null |
['ICLR.cc/2026/Conference/Submission25482/Authors']
|
L0DTflYss0
| 25,481
|
L0DTflYss0
|
A simple contraction criterion for the Sinkhorn mirror flow
|
We give a concise condition for contraction of the continuous-time mirror dynamics which was recently shown to be the vanishing-step-size limit of the Sinkhorn algorithm.
This condition is essentially coercivity of a conditional expectation operator.
|
A simple contraction criterion for the Sinkhorn mirror flow
|
['Schrodinger Bridge', 'Entropy-Regularized Optimal Transport', 'MirrorDescent']
|
/pdf/0144b76af2bbc5d099375ceb57b5feb27e2e6317.pdf
|
generative models
| null |
['ICLR.cc/2026/Conference/Submission25481/Authors']
|
6XdT4NuIMz
| 25,480
|
6XdT4NuIMz
|
Dynamic $k$-shot In-Context Learning
|
In-context learning (ICL) allows large language models (LLMs) to learn new tasks from demonstrations and to predict unseen inputs without parameter updates. Existing studies typically fix the number of demonstrations as a static hyperparameter (e.g., 5 or 10), overlooking the variability across models and inputs. We empirically find that the same query text may yield different outcomes depending on the number of demonstrations used. Motivated by this observation, we propose Dynamic-$k$ In-Context Learning (D-$k$-ICL), a novel method that adaptively determines the most suitable number of demonstrations for each query text. The core component is a performance predictor—a neural network that jointly encodes the query text and candidate in-contexts (constructed with varying demonstration counts) to estimate expected inference quality. At inference time, we retrieve the top-$k$ semantically similar demonstrations and progressively vary $k$ to generate candidate in-contexts. The predictor then selects the candidate most likely to yield the best output, thereby dynamically adapting both the number and composition of demonstrations. Across three LLMs and eight datasets, D-$k$-ICL achieves considerable results, with up to 77.8\% accuracy, 0.641 MSE, 0.271 ROUGE-1, and 0.295 BLEU. Furthermore, even when trained under few-shot, weakly supervised, or self-supervised settings, the predictor remains effective. Finally, D-$k$-ICL consistently improves performance on commercial LLMs such as GPT-4o, demonstrating its robustness and broad applicability.
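A sketch of the inference loop, with a hypothetical predictor stub in place of the trained neural performance predictor; retrieval uses cosine similarity over toy embeddings, and the candidate k values are illustrative.

```python
# Sketch of dynamic-k selection: retrieve demos by similarity, build
# candidate in-contexts of varying size, and pick the one the (stubbed)
# performance predictor scores highest.
import numpy as np

rng = np.random.default_rng(0)
demo_embs = rng.normal(size=(100, 32))           # demonstration pool
query_emb = rng.normal(size=32)

def predictor(query_emb, context_embs):
    # Hypothetical stand-in for the neural performance predictor:
    # favors contexts whose mean embedding is close to the query.
    mean = context_embs.mean(axis=0)
    return float(query_emb @ mean /
                 (np.linalg.norm(query_emb) * np.linalg.norm(mean) + 1e-9))

sims = demo_embs @ query_emb
ranked = np.argsort(-sims)                       # most similar demos first
best_k, best_score = None, -np.inf
for k in (1, 2, 4, 8, 16):
    score = predictor(query_emb, demo_embs[ranked[:k]])
    if score > best_score:
        best_k, best_score = k, score
print(f"selected k = {best_k} (predicted quality {best_score:.3f})")
```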
| null |
['In-context learning']
|
/pdf/820ee9fbff9d3f05facdd6c5341457728ae2665a.pdf
|
applications to computer vision, audio, language, and other modalities
| null |
['ICLR.cc/2026/Conference/Submission25480/Authors']
|
FFIn2TH7aU
| 25,479
|
FFIn2TH7aU
|
Supra-Tuning: Combining Outlier and Low-Rank Adaptation for Sparse and Efficient LLM Fine-Tuning
|
Large language models (LLMs) have demonstrated remarkable capabilities but remain expensive to fine-tune due to their size. Recent parameter-efficient tuning methods, such as Low-Rank Adaptation (LoRA), reduce the number of trainable parameters while maintaining performance. In this work, we introduce Super, a novel sparse adaptation technique that selects and trains only a small set of influential weights—so-called super weights—identified via outlier metrics such as WANDA. We show that fine-tuning these outlier weights yields strong performance with minimal parameter updates. Building on this idea, we propose Supra, a hybrid method that combines Super with LoRA, merging sparse and low-rank adaptations into a unified tuning strategy. Our experiments on several LLMs and downstream tasks demonstrate that both Super and Supra outperform existing sparse or low-rank methods alone in perplexity and task performance, while reducing computational and memory overhead. Supra-Tuning offers a simple yet powerful framework for efficient and scalable adaptation of LLMs.
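A minimal sketch of super-weight selection with the WANDA score (|W| times the per-input-channel activation norm), followed by a masked sparse update; sizes and the 0.5% budget are illustrative assumptions.

```python
# Sketch: select outlier ("super") weights via WANDA scores, then restrict
# fine-tuning updates to the selected mask.
import torch

torch.manual_seed(0)
W = torch.randn(256, 512)                  # a linear layer's weight
X = torch.randn(1024, 512)                 # calibration activations

wanda = W.abs() * X.norm(dim=0)            # broadcast over input channels
k = int(0.005 * W.numel())                 # train only 0.5% of weights
threshold = wanda.flatten().topk(k).values.min()
mask = (wanda >= threshold).float()
print(f"trainable super weights: {int(mask.sum())} / {W.numel()}")

# During fine-tuning, gradients are restricted to the selected weights:
grad = torch.randn_like(W)                 # stand-in for a real gradient
W = W - 1e-3 * grad * mask                 # masked sparse update
```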
|
We propose Super, a sparse fine-tuning method that updates only key outlier weights, and Supra, a hybrid that combines Super with LoRA.
|
['PEFT', 'Fine Tuning', 'LLM', 'Training', 'Deep Learning', 'AI', 'Language Models', 'Llama', 'Wanda', 'Outliers']
|
/pdf/ef2d5d42522f6fb793cf9016ec50951c01523469.pdf
|
foundation or frontier models, including LLMs
|
/attachment/e78122283e5a148788fcb1be05f92ee3aaae6bcd.zip
|
['ICLR.cc/2026/Conference/Submission25479/Authors']
|
kuaZXtReJ0
| 25,477
|
kuaZXtReJ0
|
Self-Organizing Resonant Network
|
We introduce the Self-Organizing Resonant Network (SORN), a novel learning paradigm that operates without backpropagation. To address core challenges in representation quality, learning stability, and adaptability faced by existing continual learning models, SORN operates within a robust feature space encoded online. Its learning process is driven by two tightly coupled, biologically-inspired plasticity principles: (1) Novelty-Gated Structural Plasticity: The system dynamically creates a new neural prototype only when an input cannot be adequately represented by existing knowledge (resonators), a mechanism analogous to a self-growing vector-quantized codebook. (2) Stable Hebbian Synaptic Plasticity: By incorporating Hebbian variants with normalization and homeostatic mechanisms, the network's association matrix stably learns sparse inter-concept correlations, effectively circumventing weight explosion and saturation issues. We theoretically demonstrate the framework's computational efficiency and convergence. Extensive experiments on standard continual learning benchmarks and unbounded data streams show that SORN not only surpasses mainstream methods in catastrophic forgetting resistance and accuracy, but also exhibits superior autonomous concept formation and stable adaptation when handling continuous, non-stationary environments.
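A toy sketch of the two plasticity rules: grow a prototype when no resonator matches (novelty gate), then apply a normalized Hebbian update to the association matrix; the threshold, rate, and normalization scheme are illustrative simplifications.

```python
# Sketch of novelty-gated structural plasticity + stabilized Hebbian learning.
import numpy as np

rng = np.random.default_rng(0)
prototypes, assoc = [], None
TAU, LR = 0.8, 0.1                         # novelty threshold, Hebbian rate

def step(x):
    global assoc
    x = x / np.linalg.norm(x)
    sims = [p @ x for p in prototypes]
    if not prototypes or max(sims) < TAU:  # novelty gate: grow a prototype
        prototypes.append(x)
        n = len(prototypes)
        new = np.zeros((n, n))
        if assoc is not None:
            new[:n - 1, :n - 1] = assoc
        assoc = new
        sims = [p @ x for p in prototypes]
    a = np.array(sims).clip(min=0)         # resonator activations
    assoc += LR * np.outer(a, a)           # Hebbian co-activation
    assoc /= max(np.linalg.norm(assoc), 1.0)  # normalization vs. explosion
    return int(np.argmax(sims))

for _ in range(50):
    step(rng.normal(size=16))
print(f"{len(prototypes)} prototypes formed")
```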
|
A novel, non-backpropagation learning paradigm where a network self-organizes by dynamically creating neurons for novel concepts and learning their associations via local rules.
|
['Continual Learning', 'Self-Organizing Networks', 'Hebbian Learning', 'Structural Plasticity', 'Online Learning', 'Representation Learning']
|
/pdf/4fdb3fde8af71495cb28af9ce80bc3033434f999.pdf
|
transfer learning, meta learning, and lifelong learning
| null |
['ICLR.cc/2026/Conference/Submission25477/Authors']
|
mBxFCTlFmW
| 25,476
|
mBxFCTlFmW
|
Learning When to Plan: Efficiently Allocating Test-Time Compute for LLM Agents
|
Training large language models (LLMs) to reason via reinforcement learning (RL) significantly improves their problem-solving capabilities. In agentic settings, existing methods like ReAct prompt LLMs to explicitly plan before every action; however, we demonstrate that always planning is computationally expensive and degrades performance on long-horizon tasks, while never planning further limits performance. To address this, we introduce a conceptual framework formalizing dynamic planning for LLM agents, enabling them to flexibly decide when to allocate test-time compute for planning. We propose a simple two-stage training pipeline: (1) supervised fine-tuning on diverse synthetic data to prime models for dynamic planning, and (2) RL to refine this capability in long-horizon environments. Experiments on the Crafter environment show that dynamic planning agents trained with this approach are more sample-efficient and consistently achieve more complex objectives. Additionally, we demonstrate that these agents can be effectively steered by human-written plans, surpassing their independent capabilities. To our knowledge, this work is the first to explore training LLM agents for dynamic test-time compute allocation in sequential decision-making tasks, paving the way for more efficient, adaptive, and controllable agentic systems.
| null |
['LLM Agents', 'Planning', 'Test-Time Compute']
|
/pdf/a411541fb59f4f329aa365321e7ee5c26305bd52.pdf
|
foundation or frontier models, including LLMs
| null |
['ICLR.cc/2026/Conference/Submission25476/Authors']
|
95bwpkVPuR
| 25,475
|
95bwpkVPuR
|
Dynamic Role-Graph Reinforcement Learning for Multi-Agent Collaborative Coding Systems
|
We propose \textbf{Dynamic Role-Graph Reinforcement Learning (DRGRL)}, a novel framework for multi-agent collaborative coding systems that addresses the challenges of evolving team dynamics and role-based coordination. Traditional multi-agent reinforcement learning (MARL) approaches often rely on static representations of agent interactions, which fail to capture the fluid nature of real-world software development teams. The proposed method combines dynamic graph neural networks (GNNs) with role-aware attention mechanisms to model time-varying collaboration patterns: agents (i.e., developers) are represented as nodes of a graph whose topology adapts to reflect changing teams. A transformer-based GNN encoder propagates information across the graph, and a collaboration complexity estimator gauges coordination complexity to guide decision-making. The framework uses a centralized critic with decentralized actors (CCDA) to maximize team-level rewards (e.g., reduced merge conflicts or improved test coverage) while preserving individual autonomy. Moreover, the system interfaces with traditional development tools, such as version control systems, IDEs, and conflict resolvers, to simplify the integration of learned policies into existing workflows. The key novelty lies in the \textbf{role-graph duality}, where roles are both learned from data and emergent from graph dynamics, enabling hierarchical coordination strategies. For instance, high collaboration complexity can trigger the assignment of mediator roles to stabilize the system. Experiments on synthetic and real-world coding datasets show that the proposed method yields significant gains in teamwork efficiency and code quality over baseline MARL methods. With its flexibility to dynamic teams and general collaboration scenarios, the framework is a promising approach to the challenges facing modern software engineering.
content_TLDR: null
content_keywords: ['Role-Graph Reinforcement Learning']
content_pdf: /pdf/7f50fa552d0ac5cb13742fdbd4d1549c8cfef46b.pdf
content_primary_area: transfer learning, meta learning, and lifelong learning
content_supplementary_material: null
signatures: ['ICLR.cc/2026/Conference/Submission25475/Authors']
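One plausible reading of the role-aware attention component described above is attention restricted to the current team graph, with role embeddings mixed into the queries and keys. This NumPy sketch covers only that piece; the complexity estimator, CCDA training, and tool integration are omitted, and all shapes and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def role_graph_attention(x, roles, adj, w_q, w_k, w_v):
    """One round of role-aware attention over a dynamic graph.

    x:     (n, d)  node (developer) features
    roles: (n, d)  role embeddings, added to queries and keys
    adj:   (n, n)  current 0/1 adjacency, reflecting the evolving team
    """
    q = (x + roles) @ w_q
    k = (x + roles) @ w_k
    v = x @ w_v
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores = np.where(adj > 0, scores, -1e9)   # attend only along current edges
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ v

n, d = 5, 8
x = rng.normal(size=(n, d))
roles = rng.normal(size=(n, d))
adj = (rng.random((n, n)) < 0.5).astype(float)
np.fill_diagonal(adj, 1.0)                     # self-loops keep every row non-empty
w_q, w_k, w_v = (rng.normal(size=(d, d)) for _ in range(3))
print(role_graph_attention(x, roles, adj, w_q, w_k, w_v).shape)  # (5, 8)
```

Because the mask is rebuilt from `adj` on every call, the same weights can be reused as the team topology changes between steps.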
id: fDfctZ8Fhg
number: 25,471
forum: fDfctZ8Fhg
title: Not All Who Wander Are Lost: Hallucinations as Neutral Dynamics in Residual Transformers
abstract:
We separate onset from persistence and prove that persistence follows from the neutral dynamics of pre-LayerNorm residual transformers. Exact operator norms for LayerNorm, residual blocks, and the softmax decoder yield conservative upper bounds showing the absence of contractive or expansive bias at the decoded level. These bounds are sharpened by working with corridor constants that remain explicit and falsifiable. For open probes, drift decomposes into a predictable component bounded by the sharpened corridor and a centered martingale component controlled by concentration and central limit arguments. Neutrality is then lifted from paired rollouts to populations by casting trajectories or blocks as exchangeable agents in a mean-field game, yielding a population invariant that is stable under depth and width scaling. Predictions are tested with controlled randomization audits up to GPT2-large: closed probes are centered and behave as bounded martingale differences, while open-probe drift stays within the predicted corridor with magnitudes consistent with the sharper constants. Together, these theoretical and empirical results provide the first structural account of persistence, explaining why hallucinations persist across model scales without re-auditing hundreds of millions of parameters, and showing that interventions that do not alter the residual backbone cannot eliminate persistence once onset has occurred.
content_TLDR: null
content_keywords: ['Transformer architectures', 'Mean-field Games', 'Hallucinations', 'Stability and Dynamics']
content_pdf: /pdf/7906252147b467ff9388cb00c4067d4d1301a417.pdf
content_primary_area: generative models
content_supplementary_material: /attachment/fb59cd2610a6e6e6e850625d16c5a1ad81e5e289.zip
signatures: ['ICLR.cc/2026/Conference/Submission25471/Authors']
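The neutrality claim above can be illustrated with a toy randomization audit: perturb a pre-LayerNorm residual state with a random block and its sign-flip, and check that the paired decoded log-probability drift centers near zero even though individual steps move. This is a self-contained NumPy toy under our own assumptions, not the paper's corridor constants or its GPT-2 audit.

```python
import numpy as np

rng = np.random.default_rng(1)
d, vocab, trials, step = 64, 50, 2000, 0.05

def layernorm(h):
    h = h - h.mean()
    return h / (h.std() + 1e-5)

def log_softmax(z):
    z = z - z.max()
    return z - np.log(np.exp(z).sum())

W_dec = rng.normal(size=(vocab, d)) / np.sqrt(d)  # toy softmax decoder

bias, magnitude = [], []
for _ in range(trials):
    h = rng.normal(size=d)
    token = rng.integers(vocab)
    base = log_softmax(W_dec @ layernorm(h))[token]
    blk = rng.normal(size=(d, d)) / np.sqrt(d)    # random residual block
    up = log_softmax(W_dec @ layernorm(h + step * blk @ layernorm(h)))[token]
    dn = log_softmax(W_dec @ layernorm(h - step * blk @ layernorm(h)))[token]
    bias.append(0.5 * (up + dn) - base)   # first-order terms cancel in the pair
    magnitude.append(abs(up - base))

print(f"paired bias   : {np.mean(bias):+.2e}")    # ~0: no drift bias when decoded
print(f"typical drift : {np.mean(magnitude):.2e}")  # yet individual steps do move
```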
id: lbEUvx1ILN
number: 25,470
forum: lbEUvx1ILN
title: OvA-LP: A Simple and Efficient Framework for Federated Learning on Non-IID Data
abstract:
Federated fine-tuning (FFT) adapts foundation models to decentralized data but remains fragile under heterogeneous client distributions due to local drift, i.e., client-level update divergences that induce systematic bias and amplified variance in the global model. Existing aggregation and personalization methods largely correct drift post hoc, which proves brittle under extreme non-IID conditions. We introduce OvA-LP, a minimalist framework that is, to our knowledge, the first explicitly designed to suppress drift at its source within the PEFT-based FFT paradigm. OvA-LP combines linear probing on a frozen encoder with a one-vs-all head and a simple two-stage procedure, preserving pretrained feature geometry and decoupling logits to prevent the mechanisms that amplify drift. On CIFAR-100 with 100 clients, averaged over shard-1, shard-2, and Bernoulli--Dirichlet partitions, OvA-LP retains 95.9\% of its IID accuracy, whereas state-of-the-art FFT baselines retain only 10.1\% (PFPT) and 34.5\% (FFT-MoE) under the same conditions. OvA-LP further maintains resilience under both symmetric and asymmetric label noise. In addition, precomputing encoder features makes per-round cost nearly independent of encoder size. Together, these results demonstrate that OvA-LP provides a principled and efficient basis for robust FFT under heterogeneity.
content_TLDR: null
content_keywords: ['federated learning', 'non-iid', 'noisy labels', 'One-vs-All', 'Linear Probing']
content_pdf: /pdf/768ab6e2fcc967049e35301c340813458461406d.pdf
content_primary_area: other topics in machine learning (i.e., none of the above)
content_supplementary_material: /attachment/a3122d2fdeb40e8b238eb021002578ed9e2a78f8.zip
signatures: ['ICLR.cc/2026/Conference/Submission25470/Authors']
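A minimal sketch of the core OvA-LP recipe as described above: precompute features from a frozen encoder (random stand-ins here), then train K independent one-vs-all logistic heads so the logits stay decoupled. The federated two-stage procedure and aggregation details are omitted, and all shapes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 512, 128, 10                     # samples, frozen-feature dim, classes

# Stand-ins for features from a frozen encoder (precomputed once, which is
# what makes per-round cost nearly independent of encoder size).
feats = rng.normal(size=(n, d))
labels = rng.integers(k, size=n)

W = np.zeros((k, d))                       # k independent one-vs-all heads
targets = (labels[:, None] == np.arange(k)).astype(float)  # (n, k) 0/1

lr = 0.5
for _ in range(200):                       # plain logistic regression per head
    p = 1.0 / (1.0 + np.exp(-feats @ W.T))  # independent sigmoids per class
    grad = (p - targets).T @ feats / n      # decoupled: no softmax coupling
    W -= lr * grad

pred = (feats @ W.T).argmax(axis=1)        # score each head, take the max
print("train acc:", (pred == labels).mean())
```

The key design point the abstract names is visible in the gradient: each head sees only its own binary target, so one client's skewed label distribution cannot push the logits of absent classes around through a shared softmax.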
id: 7w9GUhqSnN
number: 25,469
forum: 7w9GUhqSnN
title: BayesENDS: Bayesian Electrophysiological Neural Dynamical Systems for Alzheimer’s Disease Diagnosis
abstract:
Alzheimer’s disease (AD) alters electroencephalogram (EEG) signals through slowed oscillations and diminished neural drive, yet most AD-EEG pipelines are black-box classifiers lacking a unifying mathematical account of how both neural activity and its interaction dynamics evolve over time. We introduce BayesENDS, a Bayesian electrophysiological neural dynamical system that incorporates neuron spiking mechanisms into a Bayesian neural dynamical system. By introducing a differentiable leaky-integrate-and-fire (dLIF) prior, BayesENDS can infer population events and interaction dynamics directly from EEG, without spike or interaction annotations. The dLIF prior encodes membrane dynamics, rate/refractory constraints, and physiologically plausible frequency ranges, improving identifiability while yielding biologically plausible, subject-level biomarkers alongside AD predictions. Across synthetic event-sequence benchmarks and real AD EEG datasets, BayesENDS outperforms state-of-the-art baseline methods.
content_TLDR: null
content_keywords: ['Bayesian Neural Dynamical Systems', 'Alzheimer’s Disease Diagnosis', 'EEG']
content_pdf: /pdf/cf93dd45f679958c1e05b29a3535f14cde1259ab.pdf
content_primary_area: applications to neuroscience & cognitive science
content_supplementary_material: null
signatures: ['ICLR.cc/2026/Conference/Submission25469/Authors']
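A toy stand-in for the differentiable LIF dynamics the abstract describes: leaky integration with a sigmoid-relaxed spike and a soft reset, which keeps the whole rollout differentiable. Parameter values and the drive signal are illustrative, and the paper's Bayesian inference machinery is not shown.

```python
import numpy as np

def soft_spike(v, threshold=1.0, beta=10.0):
    # Sigmoid relaxation of the hard spike; this smoothness is what makes
    # the LIF prior differentiable end to end.
    return 1.0 / (1.0 + np.exp(-beta * (v - threshold)))

def dlif_rollout(drive, tau=20.0, dt=1.0, v_reset=0.0):
    """Leaky integrate-and-fire membrane dynamics with a soft reset.

    drive: (T,) input current, here a stand-in for drive inferred from EEG.
    Returns membrane potentials and soft spike probabilities per step.
    """
    v, vs, spikes = 0.0, [], []
    decay = np.exp(-dt / tau)                    # membrane leak per step
    for i_t in drive:
        v = decay * v + (1 - decay) * i_t        # leaky integration
        s = soft_spike(v)
        v = (1 - s) * v + s * v_reset            # soft (differentiable) reset
        vs.append(v)
        spikes.append(s)
    return np.array(vs), np.array(spikes)

t = np.arange(500)
drive = 1.2 * (np.sin(2 * np.pi * 10 * t / 250.0) > 0)  # bursty toy input
v, s = dlif_rollout(drive)
print("mean soft spike rate:", s.mean())
```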
id: n3iFV0gLMc
number: 25,468
forum: n3iFV0gLMc
title: FingerTip 20K: A Benchmark for Proactive and Personalized Mobile LLM Agents
abstract:
Mobile GUI agents are becoming critical tools for improving user experience on smart devices, with multimodal large language models (MLLMs) emerging as the dominant paradigm in this domain. Current agents, however, rely on explicit human instructions, overlooking the potential of contextual information (such as location, time, and previous interactions) for proactive task suggestions. Moreover, previous work focuses on optimizing the success rate during task execution but pays less attention to personalized execution trajectories, neglecting potentially vast differences in user preferences. To address these challenges, we introduce the FingerTip 20K benchmark. We collected 20K unique human demonstrations of multi-step Android device interactions across a variety of everyday apps. These demonstrations are not isolated but are continuously acquired from users' long-term usage in their real lives, and they encompass essential user-related contextual information. The benchmark contains two new tracks: proactive task suggestion, which analyzes environment observations and users' previous intents, and personalized task execution, which caters to users' action preferences. Our experiments reveal that these tracks pose significant challenges for leveraging user-related information in GUI tasks. A human study further shows a substantial gap between existing agents and humans. A model fine-tuned on the collected data effectively utilized user information and achieved strong results, highlighting the potential of our approach for building more user-oriented mobile LLM agents. Our code is open-source at \url{https://anonymous.4open.science/r/FingerTip-57B8} for reproducibility.
content_TLDR: null
content_keywords: ['Mobile Agent', 'LLM Agent', 'GUI', 'Proactive Agent', 'Personalization']
content_pdf: /pdf/abef640637242cc6429b772a70223e075677ceb1.pdf
content_primary_area: datasets and benchmarks
content_supplementary_material: null
signatures: ['ICLR.cc/2026/Conference/Submission25468/Authors']
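To suggest how the two tracks above might be scored, here is a hypothetical episode schema and two simple metrics: top-k hit rate for proactive suggestion and exact trajectory match for personalized execution. Field names and metrics are our assumptions, not the benchmark's actual schema or evaluation protocol.

```python
from dataclasses import dataclass, field

@dataclass
class Episode:
    """Hypothetical record layout for one demonstration."""
    user_id: str
    app: str
    context: dict            # e.g. {"location": ..., "time": ..., "history": [...]}
    intent: str              # ground-truth task the user went on to perform
    actions: list = field(default_factory=list)  # step-by-step GUI actions

def proactive_hit_rate(episodes, suggest, k=3):
    # Track 1: did the agent's top-k context-conditioned suggestions
    # include the task the user actually performed?
    hits = sum(ep.intent in suggest(ep.context)[:k] for ep in episodes)
    return hits / max(len(episodes), 1)

def trajectory_match(episodes, execute):
    # Track 2: personalized execution, scored here by exact action-sequence
    # match against the user's own demonstration (a strict, simple proxy).
    ok = sum(execute(ep.intent, ep.context) == ep.actions for ep in episodes)
    return ok / max(len(episodes), 1)
```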
AI-Conf collects papers from CVPR, ACL, AAAI, ICLR, and other major AI conferences.