Abstract
Context-Folding enables LLM agents to manage context effectively by branching into subtasks and folding them on completion; trained end-to-end with reinforcement learning (FoldGRPO), it matches or outperforms baselines on long-horizon tasks with a much smaller active context.
Large language model (LLM) agents are fundamentally constrained by context length on long-horizon tasks. We introduce Context-Folding, a framework that empowers agents to actively manage their working context. An agent can procedurally branch into a sub-trajectory to handle a subtask and then fold it upon completion, collapsing the intermediate steps while retaining a concise summary of the outcome. To make this behavior learnable, we develop an end-to-end reinforcement learning framework, FoldGRPO, with specific process rewards that encourage effective task decomposition and context management. On complex long-horizon tasks (Deep Research and SWE), our folding agent matches or outperforms ReAct baselines while using an active context 10× smaller, and it significantly outperforms models that rely on summarization-based context management.
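The branch-and-fold mechanism described above can be illustrated with a minimal sketch. All class and method names below are hypothetical illustrations of the pattern, not the paper's actual interface, and the FoldGRPO training loop is not shown.

```python
class FoldingContext:
    """Working context that can branch into a sub-trajectory and fold it back."""

    def __init__(self):
        self.main = []          # active (visible) context entries
        self.branch_stack = []  # saved main contexts while inside branches

    def add(self, step):
        self.main.append(step)

    def branch(self, subtask):
        # Start a sub-trajectory: set aside the main context, begin fresh.
        self.branch_stack.append(self.main)
        self.main = [f"[branch] {subtask}"]

    def fold(self, summary):
        # Collapse the sub-trajectory, keeping only a concise summary.
        self.main = self.branch_stack.pop()
        self.main.append(f"[folded] {summary}")

    def active_size(self):
        return len(self.main)


ctx = FoldingContext()
ctx.add("user: fix the failing test in the repo")
ctx.branch("locate the failing test")
for i in range(50):                      # many intermediate tool calls
    ctx.add(f"tool call {i}")
ctx.fold("failing test identified in the I/O module")
# Only the summary survives; the 50 intermediate steps are discarded
# from the active context.
```

The key property is that the active context after folding contains only the original task and the summary, regardless of how long the sub-trajectory was.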
Community
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Scaling LLM Multi-turn RL with End-to-end Summarization-based Context Management (2025)
- ReSum: Unlocking Long-Horizon Search Intelligence via Context Summarization (2025)
- SFR-DeepResearch: Towards Effective Reinforcement Learning for Autonomously Reasoning Single Agents (2025)
- DeepDive: Advancing Deep Search Agents with Knowledge Graphs and Multi-Turn RL (2025)
- In-the-Flow Agentic System Optimization for Effective Planning and Tool Use (2025)
- Multi-Agent Tool-Integrated Policy Optimization (2025)
- Memory-R1: Enhancing Large Language Model Agents to Manage and Utilize Memories via Reinforcement Learning (2025)