⚠️ Important Note
This model scores 0/100 on refusal tests but retains Claude-style deflection on explicit content. Best for general uncensored conversations, coding, and reasoning. Not recommended for explicit NSFW creative writing. For unrestricted NSFW, use my Qwen3.5-27B Heretic v2 or Qwen3.5-27B Heretic v3 instead.
V1 vs V2
| Version | Refusals | KL | Best For |
|---|---|---|---|
| V1 | 21/100 | 0.0092 | General use, best quality |
| V2 (this) | 0/100 | 0.0635 | Fewer restrictions on controversial topics, minimal quality loss |
⚠️ Chat Template Fixes Pre-Applied
Credit: DavidAU
Thinking Mode
This is a reasoning model with `<think>` tokens.
To disable thinking (if your tool supports it):
- Set `enable_thinking=false` in your inference settings
- Or use a no-think instruct template
Note: Some tools (Ollama, vLLM, Transformers) support this parameter directly.
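If your tool has no built-in switch, the `<think>` block can also simply be stripped from the output after generation. A minimal post-processing sketch, assuming the literal `<think>`/`</think>` markers described on this card:

```python
import re

def strip_think(text: str) -> str:
    """Remove a leading <think>...</think> block from model output."""
    return re.sub(r"<think>.*?</think>\s*", "", text, count=1, flags=re.DOTALL)

raw = "<think>\nStep 1: parse the request.\n</think>\nThe answer is 42."
print(strip_think(raw))  # -> "The answer is 42."
```

This only hides the reasoning from the final output; the model still spends tokens thinking, so it does not speed up inference.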
❤️ Support My Work
Creating these models takes significant time, work and compute. If you find them useful consider supporting me:
| Platform | Link | What you get |
|---|---|---|
| ☕ Ko-fi | One-time tip | My eternal gratitude |
| 🎉 Patreon | Monthly support | Priority model requests |
Your help motivates me and goes toward improving my workflow, covering fees for storage and compute, and may even make it possible to uncensor bigger models with rented cloud GPUs.
GGUF quantizations of llmfan46/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-heretic-v2.
This is a decensored version of Jackrong/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled, made using Heretic v1.2.0 with the Arbitrary-Rank Ablation (ARA) method.
Abliteration parameters
| Parameter | Value |
|---|---|
| start_layer_index | 21 |
| end_layer_index | 43 |
| preserve_good_behavior_weight | 0.4720 |
| steer_bad_behavior_weight | 0.0001 |
| overcorrect_relative_weight | 1.2955 |
| neighbor_count | 1 |
Performance
| Metric | This model | Original model (Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled) |
|---|---|---|
| KL divergence | 0.0635 | 0 (by definition) |
| Refusals | 0/100 | 98/100 |
Lower refusal counts indicate fewer content restrictions, while lower KL divergence indicates better preservation of the original model's capabilities. Higher refusal counts mean more rejections, objections, pushback, lecturing, censorship, softening, and deflection; higher KL divergence degrades coherence, reasoning ability, and overall quality.
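For intuition, KL divergence compares the ablated model's next-token distribution against the original model's, averaged over a probe set. A toy sketch with made-up distributions (not actual measurements from this model):

```python
import math

def kl_divergence(p, q):
    """KL(P || Q) between two matching discrete distributions,
    e.g. next-token probabilities from two models on the same prompt."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical next-token probabilities (original vs. ablated model):
p = [0.7, 0.2, 0.1]
q = [0.6, 0.25, 0.15]
print(round(kl_divergence(p, q), 4))  # small value -> distributions stay close
```

A KL of 0 means the ablated model's output distribution is identical to the original's; values like 0.0635 indicate a small but measurable shift.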
Quantizations
| Filename | Quant | Description |
|---|---|---|
| Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-ultimate-uncensored-heretic-BF16.gguf | BF16 | Full precision |
| Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-ultimate-uncensored-heretic-Q8_0.gguf | Q8_0 | Near-lossless, recommended |
| Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-ultimate-uncensored-heretic-Q6_K.gguf | Q6_K | Excellent quality |
| Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-ultimate-uncensored-heretic-Q5_K_M.gguf | Q5_K_M | Good balance |
| Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-ultimate-uncensored-heretic-Q5_K_S.gguf | Q5_K_S | Smaller Q5 |
| Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-ultimate-uncensored-heretic-Q4_K_M.gguf | Q4_K_M | Good for limited VRAM |
| Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-ultimate-uncensored-heretic-Q4_K_S.gguf | Q4_K_S | Smallest |
Vision Projector
| Filename | Quant | Description |
|---|---|---|
| Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-mmproj-BF16.gguf | BF16 | Native precision |
A vision projector file is required for vision/multimodal capabilities. Use it alongside any quantization above.
Usage
Works with llama.cpp, LM Studio, Ollama, and other GGUF-compatible tools.
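As a quick command-line sketch (flags per a recent llama.cpp build; names and defaults may differ in your version or tool), a Q4_K_M run could look like:

```shell
# Run a Q4_K_M quant with llama.cpp's CLI.
# -ngl 99 offloads all layers to GPU; -c sets the context window.
llama-cli \
  -m Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-ultimate-uncensored-heretic-Q4_K_M.gguf \
  -ngl 99 -c 8192 \
  -p "Explain the difference between a mutex and a semaphore."
```

LM Studio and Ollama handle the equivalent settings through their own UIs/Modelfiles.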
🌟 Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled
📢 Release Note Build Environment Upgrades:
- Fine-tuning Framework: Unsloth 2026.3.3
- Core Dependencies: Transformers 5.2.0
- This model fixes the crash in the official model caused by the Jinja template not supporting the "developer" role (commonly sent by modern coding agents like Claude Code and OpenCode).
- Thinking mode is not disabled by default, allowing the agent to run continuously for over 9 minutes without interruption.
- Compared to the original model, autonomy and stability are significantly improved.
💡 Model Introduction
Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled is a highly capable reasoning model fine-tuned on top of the powerful Qwen3.5 architecture. The model's core directive is to leverage state-of-the-art Chain-of-Thought (CoT) distillation primarily sourced from Claude-4.6 Opus interactions.
Through Supervised Fine-Tuning (SFT) focusing specifically on structured reasoning logic, this model excels in breaking down complex user problems, planning step-by-step methodologies within strictly formatted <think> tags, and ultimately delivering precise, nuanced solutions.
🧠 Example of Learned Reasoning Scaffold
The model includes targeted optimizations addressing Qwen3.5’s tendency toward excessive transitional or repetitive reasoning on simple queries. Through deep distillation and structural imitation of Claude-4.6-Opus reasoning chains, the model adopts a more efficient structured thinking pattern:
“Let me analyze this request carefully: 1..2..3...”.
This streamlined reasoning paradigm significantly reduces redundant cognitive loops while preserving deep analytical capacity, resulting in substantially improved inference efficiency.
Let me analyze this request carefully:
1. Identify the core objective of the problem.
2. Break the task into clearly defined subcomponents.
3. Evaluate constraints and edge cases.
4. Formulate a step-by-step solution plan.
5. Execute the reasoning sequentially and verify consistency.
...
🗺️ Training Pipeline Overview
Base Model (Qwen3.5-27B)
│
▼
Supervised Fine-Tuning (SFT) + LoRA
│
▼
Final Model (Claude-4.6-Opus-Reasoning-Distilled, text-only)
📋 Stage Details
🔥Community-tested advantages (benchmark tests by user @sudoingX on a single RTX 3090):
Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled shows significant advantages in coding-agent environments such as Claude Code and OpenCode:
- Native support for the “developer” role, requiring no Jinja template patches or ChatML workarounds.
- Thinking mode fully preserved (logs confirm `thinking=1`), not silently disabled, maintaining the complete chain-of-thought reasoning process.
- Greatly improved autonomy and stability: capable of running continuously for over 9 minutes with zero human intervention. It actively waits for tool responses, reads outputs, self-corrects errors, and can even automatically generate a README, whereas the base model often stalls or freezes mid-execution.
Hardware usage remains unchanged:
- About 16.5 GB VRAM with Q4_K_M quantization
- 29–35 tok/s generation speed
- Full 262K context with no compromises
- These improvements come from successfully distilling the structured reasoning style of Claude 4.6 Opus, allowing Qwopus to be truly plug-and-play in modern local coding agents and deliver an experience close to Opus in smoothness and usability.
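The ~16.5 GB figure is consistent with a back-of-the-envelope weight-size estimate. A sketch assuming Q4_K_M averages roughly 4.8 bits per weight (an assumption; the actual per-tensor mix varies):

```python
def gguf_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Rough weight-size estimate in GB: parameters * bits per weight / 8."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# 27B parameters at ~4.8 bits/weight:
print(round(gguf_size_gb(27, 4.8), 1))  # -> 16.2
```

KV cache and runtime buffers add on top of the raw weights, which accounts for the gap between this estimate and the observed usage, and grows with context length.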
Thanks to the community for the in-depth testing and feedback!
🔹 Supervised Fine-Tuning (SFT)
- Objective: To inject high-density reasoning logic and establish a strict format for problem-solving involving an internal thinking state prior to outputting the final response.
- Methodology: We utilized Unsloth for highly efficient memory and compute optimization. A critical component of this stage is the `train_on_responses_only` strategy, which masks the instructions so the loss is calculated purely over the generation of the `<think>` sequences and the subsequent solutions.
- Format Enforcement: All training samples were systematically normalized so the model strictly abides by the structure `<think> {internal reasoning} </think>\n {final answer}`.
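The enforced target structure can be illustrated with a hypothetical formatting helper (illustrative only, not code from the actual training pipeline):

```python
def format_sample(reasoning: str, answer: str) -> str:
    """Render a training target in the card's enforced structure:
    <think> internal reasoning </think>, newline, final answer."""
    return f"<think>\n{reasoning}\n</think>\n{answer}"

print(format_sample("1. Identify the objective.\n2. Plan steps.", "Done."))
```

Under `train_on_responses_only`, loss would be computed over this entire rendered string but not over the preceding user instruction.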
📚 All Datasets Used
The dataset consists of high-quality, filtered reasoning distillation data:
| Dataset Name | Description / Purpose |
|---|---|
| nohurry/Opus-4.6-Reasoning-3000x-filtered | Provides comprehensive Claude 4.6 Opus reasoning trajectories. |
| TeichAI/claude-4.5-opus-high-reasoning-250x | Injects high-intensity, structured reasoning instances. |
| Jackrong/Qwen3.5-reasoning-700x | Additional curated reasoning samples designed to strengthen structured step-by-step problem solving and improve reasoning diversity. |
🌟 Core Skills & Capabilities
- Modular & Structured Thinking: Inheriting traits from Opus-level reasoning, the model confidently parses the prompt and establishes an outlined plan in its `<think>` block sequentially, rather than relying on exploratory "trial-and-error" self-doubt.
⚠️ Limitations & Intended Use
- Hallucination Risk: While reasoning is strong, the model remains an autoregressive LLM; facts produced during the thinking sequence may occasionally be hallucinated, especially when it attempts to verify real-world events.
- Intended Scenario: Best suited for offline analytical tasks, coding, math, and heavy logic-dependent prompting where the user needs to transparently follow the AI's internal logic.
- Preview Version Notice: Because this model is relatively new and intentionally lightweight, the surrounding ecosystem — including inference templates, fine-tuning pipelines, routing configurations, and tooling integrations — may not yet be fully mature or standardized. As a result, users may encounter occasional bugs, compatibility inconsistencies, or integration edge cases. The current release should be considered a preview build while the broader architectural stack and supporting utilities continue to stabilize and improve.
🙏 Acknowledgements
Significant thanks to the Unsloth AI team for making rapid fine-tuning of MoE and large LLMs accessible. Additionally, we acknowledge the Qwen team and the open-source community developers producing exceptional distilled datasets (nohurry and TeichAI).
📖 Citation
If you use this model in your research or projects, please cite:
@misc{jackrong_qwen35_opus_distilled,
title = {Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled},
author = {Jackrong},
year = {2026},
publisher = {Hugging Face},
howpublished = {\url{https://huggingface.co/Jackrong/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled}}
}
Model tree for llmfan46/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-heretic-v2-GGUF
Base model
Qwen/Qwen3.5-27B