Abstract
INSPO is an Instruction-Policy co-evolution framework that dynamically optimizes instructions within the reinforcement learning loop, improving performance on multi-turn retrieval and reasoning tasks over methods that rely on static instructions.
Reinforcement Learning with Verifiable Rewards (RLVR) has advanced the reasoning capability of large language models (LLMs), enabling autonomous agents that can conduct effective multi-turn and tool-integrated reasoning. While instructions serve as the primary protocol for defining agents, RLVR typically relies on static, manually designed instructions. However, such instructions may be suboptimal for the base model, and the optimal instruction may shift as the agent's policy improves and it explores interactions with the environment. To bridge this gap, we introduce INSPO, a novel Instruction-Policy co-evolution framework that integrates instruction optimization as a dynamic component of the reinforcement learning (RL) loop. INSPO maintains a dynamic population of instruction candidates that are sampled alongside questions; reward signals from the RL loop are automatically attributed to each instruction, and low performers are periodically pruned. New instructions are generated and verified through an on-policy reflection mechanism, in which an LLM-based optimizer analyzes past experience from a replay buffer and evolves more effective strategies for the current policy. We conduct extensive experiments on multi-turn retrieval and reasoning tasks, demonstrating that INSPO substantially outperforms strong baselines that rely on static instructions. INSPO discovers innovative instructions that guide the agent toward more strategic reasoning paths, achieving substantial performance gains with only a marginal increase in computational overhead.
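To make the mechanism concrete, below is a minimal Python sketch of how such an instruction-policy co-evolution loop could be organized. This is an illustration under stated assumptions, not the paper's implementation: `policy.rollout`, `policy.update`, and `optimizer_llm.reflect` are hypothetical interfaces, and the uniform sampling, running-mean reward attribution, and fixed pruning fraction are simplifying choices.

```python
import random
from collections import deque

class InstructionPopulation:
    """Pool of candidate instructions with per-instruction reward attribution."""

    def __init__(self, seed_instructions, prune_fraction=0.25):
        # Running mean reward and sample count per candidate instruction.
        self.stats = {ins: {"reward": 0.0, "count": 0} for ins in seed_instructions}
        self.prune_fraction = prune_fraction

    def sample(self):
        # Uniform sampling for simplicity; a performance-weighted scheme
        # is an equally plausible design.
        return random.choice(list(self.stats))

    def attribute(self, instruction, reward):
        # Credit the rollout's verifiable reward to the instruction it used.
        s = self.stats[instruction]
        s["count"] += 1
        s["reward"] += (reward - s["reward"]) / s["count"]  # incremental mean

    def prune(self):
        # Periodically drop the lowest-performing fraction of candidates.
        ranked = sorted(self.stats, key=lambda i: self.stats[i]["reward"])
        for ins in ranked[: int(len(ranked) * self.prune_fraction)]:
            del self.stats[ins]


def co_evolve(policy, optimizer_llm, questions, seed_instructions,
              steps=1000, refresh_every=100):
    """Interleave RL policy updates with instruction pruning and
    reflection-based instruction generation (hypothetical interfaces)."""
    population = InstructionPopulation(seed_instructions)
    replay_buffer = deque(maxlen=512)  # recent experience for reflection

    for step in range(steps):
        instruction = population.sample()
        question = random.choice(questions)

        # Multi-turn rollout under the current policy; the reward is
        # assumed to come from a verifiable checker (RLVR).
        trajectory, reward = policy.rollout(instruction, question)
        policy.update(trajectory, reward)  # e.g., one policy-gradient step
        population.attribute(instruction, reward)
        replay_buffer.append((instruction, question, trajectory, reward))

        if (step + 1) % refresh_every == 0:
            population.prune()
            # On-policy reflection: the optimizer LLM reads recent experience
            # and proposes instructions suited to the current policy.
            for new_ins in optimizer_llm.reflect(list(replay_buffer)):
                population.stats.setdefault(new_ins, {"reward": 0.0, "count": 0})

    return population
```

The structural point the sketch captures is that instruction quality is always evaluated under the current policy, so the candidate pool co-evolves with the agent rather than being fixed up front.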
Community
INSPO is a novel Instruction-Policy co-evolution framework that integrates instruction optimization as a dynamic component of the reinforcement learning (RL) loop for training large language models.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- In-the-Flow Agentic System Optimization for Effective Planning and Tool Use (2025)
- DSPO: Stable and Efficient Policy Optimization for Agentic Search and Reasoning (2025)
- InfoFlow: Reinforcing Search Agent Via Reward Density Optimization (2025)
- Tool-Augmented Policy Optimization: Synergizing Reasoning and Adaptive Tool Use with Reinforcement Learning (2025)
- EvolveR: Self-Evolving LLM Agents through an Experience-Driven Lifecycle (2025)
- Scaling Agent Learning via Experience Synthesis (2025)
- WebSeer: Training Deeper Search Agents through Reinforcement Learning with Self-Reflection (2025)