EvoSyn: Generalizable Evolutionary Data Synthesis for Verifiable Learning
Abstract
An evolutionary framework synthesizes verifiable data for language models, improving reinforcement learning and distillation across various tasks.
Reliable, verifiable data has become a key driver of capability gains in modern language models, enabling stable reinforcement learning with verifiable rewards (RLVR) and effective distillation that transfers competence across math, coding, and agentic tasks. Yet constructing generalizable synthetic verifiable data remains difficult: generation is prone to hallucination, and verification artifacts are often weak or trivial, failing to separate strong from weak solutions. Existing approaches often rely on task-specific heuristics or post-hoc filters that do not transfer across domains and lack a principled, universal evaluator of verifiability. In this work, we introduce an evolutionary, task-agnostic, strategy-guided, executably checkable data synthesis framework that, starting from minimal seed supervision, jointly synthesizes problems, diverse candidate solutions, and verification artifacts, and iteratively discovers strategies via a consistency-based evaluator that enforces agreement between human-annotated and strategy-induced checks. This pipeline upgrades filtering into principled synthesis: it reliably assembles coherent, verifiable training instances and generalizes without domain-specific rules. Our experiments demonstrate the effectiveness of the proposed approach under both the RLVR and model-distillation training paradigms. The results show that training with our synthesized data yields significant improvements on both LiveCodeBench and AgentBench-OS, highlighting the robust generalization of our framework.
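To make the consistency-based evaluation concrete, here is a minimal sketch in Python under our own assumptions: we treat verification artifacts as executable checker functions, and the names `human_check`, `strategy_check`, `consistency_score`, and the 0.9 agreement threshold are hypothetical illustrations, not the paper's actual interface.

```python
from typing import Callable, List

# A checker takes (problem, candidate_solution) and returns pass/fail.
# Hypothetical interface: the paper does not specify its checker signature.
Checker = Callable[[str, str], bool]

def consistency_score(problem: str,
                      solutions: List[str],
                      human_check: Checker,
                      strategy_check: Checker) -> float:
    """Fraction of diverse candidate solutions on which the
    human-annotated check and the strategy-induced check agree."""
    if not solutions:
        return 0.0
    agree = sum(
        human_check(problem, s) == strategy_check(problem, s)
        for s in solutions
    )
    return agree / len(solutions)

def keep_instance(problem: str,
                  solutions: List[str],
                  human_check: Checker,
                  strategy_check: Checker,
                  threshold: float = 0.9) -> bool:
    """Retain a synthesized (problem, checker) instance only when the
    strategy-induced check tracks the seed (human-annotated) check.
    The threshold is an assumed hyperparameter, not from the paper."""
    return consistency_score(problem, solutions,
                             human_check, strategy_check) >= threshold
```

In this reading, strategies whose induced checks disagree with the seed checks across diverse candidate solutions would be discarded, while surviving strategies could be reused to synthesize new verifiable instances without further human labels.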
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Loong: Synthesize Long Chain-of-Thoughts at Scale through Verifiers (2025)
- Hybrid Reinforcement: When Reward Is Sparse, It's Better to Be Dense (2025)
- Verification Limits Code LLM Training (2025)
- Socratic-Zero: Bootstrapping Reasoning via Data-Free Agent Co-evolution (2025)
- Critique to Verify: Accurate and Honest Test-Time Scaling with RL-Trained Verifiers (2025)
- QueST: Incentivizing LLMs to Generate Difficult Problems (2025)
- Scaling Code-Assisted Chain-of-Thoughts and Instructions for Model Reasoning (2025)