🌟 Qwen3.5-9B-Claude-4.6-Opus-Reasoning-Distilled-v2
📢 Announcement
v2 Update: This iteration is trained on 14,000+ premium Claude 4.6 Opus-style general reasoning samples, focused on large gains in reasoning efficiency while also improving peak accuracy.
v2 introduces a refined reasoning scaffold designed to eliminate redundant internal loops, improving the model's cross-task generalization from logic and math into specialized fields such as programming. Compared with the original model, autonomy and stability are noticeably improved, keeping the model robust and self-consistent during complex, multi-step problem solving. v2 is built to think smarter, not longer: it delivers substantial improvements in inference speed and cost-effectiveness while also raising baseline accuracy.
Note: Because of the limited SFT sample size and training scope, the model's broad general-purpose capabilities may be slightly affected. The efficiency and accuracy results discussed here are based on the HumanEval and HumanEval+ benchmarks. Thank you for your understanding!
💡 Model Introduction
Qwen3.5-9B-Claude-4.6-Opus-Reasoning-Distilled-v2 is the second iteration of this reasoning-focused Qwen3.5-9B fine-tune. It is built to improve the efficiency of chain-of-thought generation, yielding substantial gains in reasoning speed and cost while also increasing absolute accuracy.
Compared with the earlier version, v2 was trained on 14,000+ Claude 4.6 Opus-style general reasoning samples, with a stronger emphasis on transferring concise, reusable reasoning patterns rather than only maximizing raw benchmark scores. The goal of v2 is not simply to make the model "think more," but to help it think more economically: reducing unnecessarily long internal chains, avoiding verbose over-analysis of easy problems, and improving the reasoning-cost-to-quality ratio while beating the baseline's benchmark correctness.
A key design choice in v2 is that the distillation data is primarily general-domain reasoning data (mathematics, word problems, logical deduction, and a balanced mix of general knowledge and instructions) rather than specialized code-heavy supervision. Consequently, HumanEval and HumanEval+ are used here to evaluate cross-task generalization and capability transfer, not as direct optimization targets. Strong performance on these benchmarks, despite the lack of code-centric training, indicates that the model's reasoning scaffold has become more robust and transferable, and that general reasoning ability can effectively power specialized tasks such as programming.
Why v2 matters
Relative to the official Qwen3.5-9B baseline, the fine-tuned v2 model improves absolute HumanEval and HumanEval+ accuracy while making substantial gains in reasoning efficiency:
| Metric | Official Qwen3.5-9B | v2 Fine-tuned Model | Improvement |
|---|---|---|---|
| Average think length (chars) | 2284.3 chars | 1778.0 chars | 🟢 -22.17% (Shorter / Better) |
| Average think length (words) | 400.83 words | 310.33 words | 🟢 -22.58% (Shorter / Better) |
| HumanEval base passes per 10k think chars | 4.004 | 5.041 | 🟢 +25.91% (Higher / Better) |
| HumanEval+ passes per 10k think chars | 3.764 | 4.836 | 🟢 +28.48% (Higher / Better) |
| Think chars needed per HumanEval base pass | 2497.5 | 1983.6 | 🟢 -20.58% (Lower / Better) |
| Think chars needed per HumanEval+ pass | 2656.9 | 2068.0 | 🟢 -22.17% (Lower / Better) |
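The per-10k-character and per-pass figures above can be derived from average think length and pass rate. A minimal sketch of the formulas (the table's exact values appear to be computed from per-sample traces, so these aggregate inputs will not reproduce them exactly; the helper names are illustrative, not from an official script):

```python
def passes_per_10k_chars(pass_at_1: float, avg_think_chars: float) -> float:
    """Benchmark passes obtained per 10,000 characters of reasoning."""
    return (pass_at_1 * 10_000) / avg_think_chars

def chars_per_pass(pass_at_1: float, avg_think_chars: float) -> float:
    """Reasoning characters spent per successful task."""
    return avg_think_chars / pass_at_1

# Illustrative inputs: v2 pass@1 from the T=0.6 run, v2 average think length.
# Note: these aggregates give values close to, but not identical with, the table.
print(round(passes_per_10k_chars(0.8720, 1778.0), 3))
print(round(chars_per_pass(0.8720, 1778.0), 1))
```

Either direction of the ratio carries the same information; the table reports both so readers can compare "throughput" (passes per budget) and "cost" (budget per pass) directly.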
Beyond the efficiency gains, v2 also outperforms the official baseline on both the standard base tests and the stricter HumanEval+ benchmark in both test settings.
We conducted two separate evaluations under different sampling temperatures to verify stability and peak performance:
Test Run 1 (T=0.2)
| Benchmark (both models recomputed under identical settings) | Official Qwen3.5-9B | v2 Fine-tuned Model | Gap |
|---|---|---|---|
| HumanEval (base tests) pass@1 | 0.8171 | 0.8232 | 🟢 +0.61 pts |
| HumanEval+ (base + extra tests) pass@1 | 0.7622 | 0.7866 | 🟢 +2.44 pts |
Test Run 2 (T=0.6)
| Benchmark (both models recomputed under identical settings) | Official Qwen3.5-9B | v2 Fine-tuned Model | Gap |
|---|---|---|---|
| HumanEval (base tests) pass@1 | 0.8170 | 0.8720 | 🟢 +5.50 pts |
| HumanEval+ (base + extra tests) pass@1 | 0.7620 | 0.8170 | 🟢 +5.50 pts |
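The pass@1 scores above follow the standard unbiased pass@k estimator used in HumanEval-style evaluation; with one sample per task it reduces to the fraction of the 164 tasks that pass. A minimal sketch (the aggregation helper is illustrative):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n-c, k) / C(n, k),
    where n = samples generated per task and c = samples that pass."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

def benchmark_pass_at_k(per_task_results, k: int = 1) -> float:
    """Mean pass@k over all tasks; per_task_results is a list of (n, c) pairs."""
    return sum(pass_at_k(n, c, k) for n, c in per_task_results) / len(per_task_results)

# With one sample per task, 135 of 164 tasks passing gives pass@1 ≈ 0.8232:
score = benchmark_pass_at_k([(1, 1)] * 135 + [(1, 0)] * 29, k=1)
```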
These consistent improvements on both axes, accuracy and efficiency, make the model well suited to real-world use cases.
For users who care about reasoning efficiency per unit of inference budget, v2 is especially attractive: it achieves higher peak accuracy while consuming over 20% fewer reasoning characters and tokens.
That matters especially for:
- Resource-constrained local deployment: On consumer GPUs or lower-memory local setups, shorter and cleaner reasoning traces can reduce latency, memory pressure, and the effective cost of generation.
- Agentic workflows: In multi-step agents, the model often solves many easy or medium subtasks. In those settings, excessively elaborate chain-of-thought can become a tax on throughput. A model that reaches a better answer with fewer reasoning tokens can radically improve end-to-end agent speed and lower cumulative inference cost.
- Open-source tool use and emerging agent stacks: For users building with lightweight open reasoning systems, browser-use agents, terminal agents, or projects in the "OpenClaw / local autonomous agent" style ecosystem, a model that achieves better peak accuracy while drastically improving reasoning economy is highly practical for real-world loops.
- Simple problems at scale: One common issue with strong reasoning-tuned base models is that they sometimes produce very elaborate internal traces even for simple prompts. While that can look impressive, it is often inefficient in practice. v2 is explicitly aimed at trimming this overhead.
In short, v2 no longer forces a trade-off between absolute coding benchmark scores and reasoning economy. It provides a fully optimized deployment-ready profile: faster, shorter, more economical reasoning paired with stronger generalization and accuracy. For local users, agent builders, and cost-sensitive applications, v2 is a strict upgrade.
🗺️ Training Pipeline Overview
Base Model (Qwen3.5-9B)
│
▼
Qwen3.5-9B fine-tuned with Unsloth
│
▼
Supervised Fine-Tuning (SFT) + LoRA
(response-only training: loss computed only after the "<|im_start|>assistant\n<think>" marker)
│
▼
Jackrong/Qwen3.5-9B-Claude-4.6-Opus-Reasoning-Distilled-v2
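The "response-only training" step above masks everything up to and including the assistant marker out of the loss, so only the reasoning and answer tokens are supervised. A simplified sketch of that masking on raw token ids (the marker ids below are toy placeholders; in the actual pipeline this is handled by the fine-tuning framework):

```python
IGNORE_INDEX = -100  # ignore index used by PyTorch cross-entropy loss

def mask_prompt_tokens(token_ids, marker_ids):
    """Return labels where only tokens after the last marker contribute to the loss."""
    labels = list(token_ids)
    # Find the last occurrence of the marker sub-sequence.
    start = -1
    for i in range(len(token_ids) - len(marker_ids) + 1):
        if token_ids[i:i + len(marker_ids)] == list(marker_ids):
            start = i + len(marker_ids)
    if start == -1:
        # No assistant marker found: mask the whole sequence.
        return [IGNORE_INDEX] * len(labels)
    for i in range(start):
        labels[i] = IGNORE_INDEX
    return labels

# Toy example: ids 90, 91 stand in for the tokenized assistant/think marker.
print(mask_prompt_tokens([1, 2, 90, 91, 5, 6], (90, 91)))
# → [-100, -100, -100, -100, 5, 6]
```

Masking the prompt this way ensures gradient updates come only from the distilled reasoning traces, not from the instructions being imitated.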
🧠 Learned Reasoning Scaffold (Example)
The model includes targeted optimizations addressing Qwen3.5’s tendency toward excessive transitional or repetitive reasoning on simple queries. Through deep distillation and structural imitation of Claude-4.6-Opus reasoning chains, the model adopts a more efficient structured thinking pattern:
“Let me analyze this request carefully: 1..2..3...”.
This streamlined reasoning paradigm significantly reduces redundant cognitive loops while preserving deep analytical capacity, resulting in substantially improved inference efficiency.
Let me analyze this request carefully:
1. Identify the core objective of the problem.
2. Break the task into clearly defined subcomponents.
3. Evaluate constraints and edge cases.
4. Formulate a step-by-step solution plan.
5. Execute the reasoning sequentially and verify consistency.
...
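When using the model, the scaffold above appears inside the reasoning trace, which Qwen-style reasoning models wrap in `<think>...</think>` tags. A minimal sketch for splitting that trace from the final answer (the helper name is illustrative):

```python
import re

THINK_RE = re.compile(r"<think>(.*?)</think>", re.DOTALL)

def split_reasoning(text: str):
    """Return (think_trace, final_answer) from a raw model completion."""
    match = THINK_RE.search(text)
    if match is None:
        return "", text.strip()
    think = match.group(1).strip()
    answer = text[match.end():].strip()
    return think, answer

raw = "<think>1. Identify the core objective.\n2. Plan the steps.</think>The answer is 42."
think, answer = split_reasoning(raw)
```

This also makes it easy to log the think-trace length per request, which is how the efficiency comparisons in this card are framed.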
📚 All Datasets Used
The dataset consists of high-quality, filtered reasoning distillation data:
| Dataset Name | Description / Purpose |
|---|---|
| nohurry/Opus-4.6-Reasoning-3000x-filtered | Provides comprehensive Claude 4.6 Opus reasoning trajectories. |
| Roman1111111/claude-opus-4.6-10000x | Large-scale public Claude 4.6 Opus distillation data used to strengthen general reasoning transfer in v2. |
| TeichAI/claude-4.5-opus-high-reasoning-250x | Injects high-intensity, structured reasoning instances. |
| Jackrong/Qwen3.5-reasoning-700x | Additional curated reasoning samples designed to strengthen structured step-by-step problem solving and improve reasoning diversity. |
⚠️ Limitations & Intended Use
- Hallucination Risk: While reasoning is strong, the model remains an autoregressive LLM; factual claims made during the thinking sequence may occasionally be hallucinated, especially when verifying real-world events.
- Intended Scenario: Best suited for offline analytical tasks, coding, math, and heavy logic-dependent prompting where the user needs to transparently follow the AI's internal logic.
- This model is a test release intended solely for learning and demonstration, and is for academic research and technical exploration only.
🙏 Acknowledgements
Significant thanks to the Unsloth AI team for making rapid fine-tuning of large language models accessible. We also thank the Qwen team and the open-source community developers producing exceptional distilled datasets.