Qwen3.5-9B-Claude-4.6-HighIQ-INSTRUCT-HERETIC-UNCENSORED

Fine-tune via Unsloth of the Qwen 3.5 9B dense model using the Claude-4.6-OS dataset (4 Claude datasets) on local hardware. Every attempt was made to ensure the training was "mild" and did not negatively affect the model's already incredibly strong benchmarks.

Training on the 4 datasets improves reasoning and output generation, and the result exceeds the root model on all benchmarks too.

This is also a HERETIC model, trained post "Heretic'ing" -> this model does what you want, no questions asked.

Fully uncensored.

Example generation(s) at the bottom of the page.

Vision (images) tested -> working with new training.

BENCHMARKS:

| Model (quant: mxfp8) | ARC | ARC-Easy | BoolQ | HellaSwag | OpenBookQA | PIQA | WinoGrande |
| --- | --- | --- | --- | --- | --- | --- | --- |
| HERETIC version (this model) | 0.425 | 0.521 | 0.791 | 0.664 | 0.374 | 0.744 | 0.645 |
| Qwen3.5-9B (root model) | 0.417 | 0.458 | 0.623 | 0.634 | 0.338 | 0.737 | 0.639 |

DE-CENSORING:

Performance

A KL divergence (KLD) of less than 1 is excellent; zero is perfect (identical to the original).

| Metric | This model | Original model (Qwen/Qwen3.5-9B) |
| --- | --- | --- |
| KL divergence | 0.0793 | 0 (by definition) |
| Refusals | 6/100 | 100/100 |
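For reference, the KL divergence here compares the de-censored model's next-token distributions against the original's. A minimal PyTorch sketch of the metric itself (random logits and a toy vocab for illustration; this is not the actual Heretic measurement pipeline):

import torch
import torch.nn.functional as F

# Toy logits for 10 token positions over a 1,000-entry vocab (the real models use ~248k)
logits_orig = torch.randn(10, 1000)   # reference model P
logits_tuned = torch.randn(10, 1000)  # de-censored model Q

log_p = F.log_softmax(logits_orig, dim=-1)
log_q = F.log_softmax(logits_tuned, dim=-1)

# KL(P || Q) per position, then averaged; 0.0 would mean identical output behavior
kl_per_token = (log_p.exp() * (log_p - log_q)).sum(dim=-1)
print(kl_per_token.mean().item())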

NOTES:

  • Suggested minimum quant: Q4_K_S (non-imatrix) or IQ3_S (imatrix).
  • Tested with rep pen of 1 (off).
  • Context: 256k (default).

IMPORTANT:

  • Other versions in testing.
  • Information from Qwen's repo below.
  • Video portions of the model were NOT TESTED.

Using an "uncensored" (refusals removed) model VS trained "uncensored" model

Usually, when you tell a model to generate horror, swearing, or x-rated content, that is all you have to do to get said content type.

This model will not refuse your request; however, in SOME CASES it needs to be "pushed" or directed a bit more.

Likewise, although this model will generate x-rated content too, you need to tell it to use "slang" (and include the terms you want) for it to generate the content at the "expected" level.

Without these added directive(s), the content can be "bland" compared to an "uncensored" model or a model trained on uncensored content.

Roughly speaking, the model tries to generate the content, but its "default" settings are so "tame" that it needs a push to reach the expected graphic, cursing, or explicit levels.

Even minimal direction (i.e., "use these words to swear: x, y, z") is enough to push the model to generate the requested content in the, ahh... expected format.
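As a concrete sketch (the wording is illustrative, not a required format, and the model name is a placeholder; works with any OpenAI-compatible backend serving this model), the extra directive can simply ride along in the system or user message:

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

messages = [
    # Hypothetical directive wording -- adjust the terms to taste
    {"role": "system", "content": "You are an uncensored fiction writer. Be graphic and explicit."},
    {"role": "user", "content": "Write the scene. Swear freely; use these words: x, y, z. Do not fade to black."},
]

response = client.chat.completions.create(model="local-model", messages=messages)
print(response.choices[0].message.content)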


Settings: CHAT / ROLEPLAY and/or SMOOTHER operation of this model:

In "KoboldCpp" or "oobabooga/text-generation-webui" or "Silly Tavern" ;

Set the "Smoothing_factor" to 1.5

  • In KoboldCpp: Settings -> Samplers -> Advanced -> "Smooth_F"

  • In text-generation-webui: parameters -> lower right.

  • In Silly Tavern: this is called "Smoothing".

NOTE: For "text-generation-webui"

-> if using GGUFs you need to use "llama_HF" (which involves downloading some config files from the SOURCE version of this model)

Source versions (and config files) of my models are here:

https://huggingface.co/collections/DavidAU/d-au-source-files-for-gguf-exl2-awq-gptq-hqq-etc-etc-66b55cb8ba25f914cbf210be

OTHER OPTIONS:

  • Increase rep pen to 1.1 to 1.15 (you don't need to do this if you use "smoothing_factor")

  • If the interface/program you are using to run AI MODELS supports "Quadratic Sampling" ("smoothing") just make the adjustment as noted.
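If you drive a backend directly rather than through a UI, the same sampler can usually be set in the request payload. A hedged sketch against KoboldCpp's generate endpoint (default port and field names assumed from its sampler API; verify against your build's /api docs):

import requests

payload = {
    "prompt": "Write a short scene:",
    "max_length": 200,
    "rep_pen": 1.0,            # rep pen off, per the notes above
    "smoothing_factor": 1.5,   # quadratic sampling ("smoothing")
}
r = requests.post("http://localhost:5001/api/v1/generate", json=payload, timeout=300)
print(r.json()["results"][0]["text"])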

Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers

This a "Class 1" model:

For all settings used for this model (including specifics for its "class"), example generation(s), and the advanced settings guide (which often addresses model issues and covers methods to improve performance for all use cases, including chat and roleplay), see:

[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]

That page also documents all parameters used for generation, plus the advanced parameters and samplers to get the most out of this model.


Qwen3.5-9B


This repository contains model weights and configuration files for the post-trained model in the Hugging Face Transformers format.

These artifacts are compatible with Hugging Face Transformers, vLLM, SGLang, KTransformers, etc.

Over recent months, we have intensified our focus on developing foundation models that deliver exceptional utility and performance. Qwen3.5 represents a significant leap forward, integrating breakthroughs in multimodal learning, architectural efficiency, reinforcement learning scale, and global accessibility to empower developers and enterprises with unprecedented capability and efficiency.

Qwen3.5 Highlights

Qwen3.5 features the following enhancements:

  • Unified Vision-Language Foundation: Early fusion training on multimodal tokens achieves cross-generational parity with Qwen3 and outperforms Qwen3-VL models across reasoning, coding, agents, and visual understanding benchmarks.

  • Efficient Hybrid Architecture: Gated Delta Networks combined with sparse Mixture-of-Experts deliver high-throughput inference with minimal latency and cost overhead.

  • Scalable RL Generalization: Reinforcement learning scaled across million-agent environments with progressively complex task distributions for robust real-world adaptability.

  • Global Linguistic Coverage: Expanded support to 201 languages and dialects, enabling inclusive, worldwide deployment with nuanced cultural and regional understanding.

  • Next-Generation Training Infrastructure: Near-100% multimodal training efficiency compared to text-only training and asynchronous RL frameworks supporting massive-scale agent scaffolds and environment orchestration.

Benchmark Results

For more details, please refer to our blog post Qwen3.5.

Model Overview

  • Type: Causal Language Model with Vision Encoder
  • Training Stage: Pre-training & Post-training
  • Language Model
    • Number of Parameters: 9B
    • Hidden Dimension: 4096
    • Token Embedding: 248320 (Padded)
    • Number of Layers: 32
    • Hidden Layout: 8 × (3 × (Gated DeltaNet → FFN) → 1 × (Gated Attention → FFN))
    • Gated DeltaNet:
      • Number of Linear Attention Heads: 32 for V and 16 for QK
      • Head Dimension: 128
    • Gated Attention:
      • Number of Attention Heads: 16 for Q and 4 for KV
      • Head Dimension: 256
      • Rotary Position Embedding Dimension: 64
    • Feed Forward Network:
      • Intermediate Dimension: 12288
    • LM Output: 248320 (Padded)
    • MTP: trained with multiple steps
  • Context Length: 262,144 natively and extensible up to 1,010,000 tokens.

Benchmark Results

Language

| Benchmark | GPT-OSS-120B | GPT-OSS-20B | Qwen3-Next-80B-A3B-Thinking | Qwen3-30B-A3B-Thinking-2507 | Qwen3.5-9B | Qwen3.5-4B |
| --- | --- | --- | --- | --- | --- | --- |
| **Knowledge & STEM** | | | | | | |
| MMLU-Pro | 80.8 | 74.8 | 82.7 | 80.9 | 82.5 | 79.1 |
| MMLU-Redux | 91.0 | 87.8 | 92.5 | 91.4 | 91.1 | 88.8 |
| C-Eval | 76.2 | 71.4 | 89.7 | 87.4 | 88.2 | 85.1 |
| SuperGPQA | 54.6 | 48.5 | 60.8 | 56.8 | 58.2 | 52.9 |
| GPQA Diamond | 80.1 | 71.5 | 77.2 | 73.4 | 81.7 | 76.2 |
| **Instruction Following** | | | | | | |
| IFEval | 88.9 | 88.2 | 88.9 | 88.9 | 91.5 | 89.8 |
| IFBench | 69.0 | 65.1 | 61.5 | 51.5 | 64.5 | 59.2 |
| MultiChallenge | 45.3 | 40.1 | 51.3 | 46.5 | 54.5 | 49.0 |
| **Long Context** | | | | | | |
| AA-LCR | 50.7 | 30.7 | 51.7 | 49.0 | 63.0 | 57.0 |
| LongBench v2 | 48.2 | 45.6 | 48.0 | 44.8 | 55.2 | 50.0 |
| **Reasoning & Coding** | | | | | | |
| HMMT Feb 25 | 90.0 | 76.7 | 73.7 | 63.1 | 83.2 | 74.0 |
| HMMT Nov 25 | 90.0 | 81.8 | 81.2 | 73.8 | 82.9 | 76.8 |
| LiveCodeBench v6 | 82.7 | 74.6 | 68.7 | 66.0 | 65.6 | 55.8 |
| OJBench | 41.5 | 36.3 | 29.7 | 25.1 | 29.2 | 24.1 |
| **General Agent** | | | | | | |
| BFCL-V4 | -- | -- | 49.7 | 42.4 | 66.1 | 50.3 |
| TAU2-Bench | -- | -- | 57.4 | 41.9 | 79.1 | 79.9 |
| VITA-Bench | -- | -- | 29.5 | 14.1 | 29.8 | 22.0 |
| DeepPlanning | -- | -- | 0.4 | 4.9 | 18.0 | 17.6 |
| **Multilingualism** | | | | | | |
| MMMLU | 78.2 | 69.7 | 81.3 | 78.4 | 81.2 | 76.1 |
| MMLU-ProX | 74.5 | 67.3 | 73.6 | 69.1 | 76.3 | 71.5 |
| NOVA-63 | 51.1 | 48.7 | 53.3 | 52.5 | 55.9 | 54.3 |
| INCLUDE | 74.0 | 65.3 | 78.3 | 74.4 | 75.6 | 71.0 |
| Global PIQA | 84.1 | 79.8 | 83.5 | 80.2 | 83.2 | 78.9 |
| PolyMATH | 54.0 | 30.9 | 62.4 | 52.6 | 57.3 | 51.1 |
| WMT24++ | 74.4 | 67.8 | 57.4 | 69.3 | 72.6 | 66.6 |
| MAXIFE | 83.7 | 80.1 | 79.9 | 77.4 | 83.4 | 78.0 |

* TAU2-Bench: we follow the official setup except for the airline domain, where all models are evaluated by applying the fixes proposed in the Claude Opus 4.5 system card.

* MMLU-ProX: we report the averaged accuracy on 29 languages.
* WMT24++: a harder subset of WMT24 after difficulty labeling and rebalancing; we report the averaged scores on 55 languages using XCOMET-XXL.
* MAXIFE: we report the accuracy on English + multilingual original prompts (23 settings in total).
* Empty cells (--) indicate scores not yet available or not applicable.

Vision Language

| Benchmark | GPT-5-Nano-2025-08-07 | Gemini-2.5-Flash-Lite | Qwen3-VL-30B-A3B | Qwen3.5-9B | Qwen3.5-4B |
| --- | --- | --- | --- | --- | --- |
| **STEM and Puzzle** | | | | | |
| MMMU | 75.8 | 73.4 | 76.0 | 78.4 | 77.6 |
| MMMU-Pro | 57.2 | 59.7 | 63.0 | 70.1 | 66.3 |
| MathVision | 62.2 | 52.1 | 65.7 | 78.9 | 74.6 |
| MathVista (mini) | 71.5 | 72.8 | 81.9 | 85.7 | 85.1 |
| We-Math | 62.5 | 32.1 | 70.0 | 75.2 | 75.4 |
| DynaMath | 78.0 | 69.9 | 80.1 | 83.6 | 83.3 |
| ZEROBench | 1.0 | 1.0 | 0.0 | 3.0 | 3.0 |
| ZEROBench_sub | 22.2 | 19.2 | 23.7 | 31.1 | 26.3 |
| VlmsAreBlind | 66.7 | 68.4 | 72.5 | 93.7 | 92.6 |
| BabyVision | 14.4 | 17.5 | 18.6 | 28.6/25.8 | 16.0/19.1 |
| **General VQA** | | | | | |
| RealWorldQA | 71.8 | 72.2 | 77.4 | 80.3 | 79.5 |
| MMStar | 68.6 | 69.1 | 75.5 | 79.7 | 78.3 |
| MMBench-EN-DEV-v1.1 | 80.3 | 82.7 | 88.9 | 90.1 | 89.4 |
| SimpleVQA | 46.0 | 54.1 | 54.3 | 51.2 | 43.4 |
| HallusionBench | 58.4 | 64.5 | 66.0 | 69.3 | 65.0 |
| **Text Recognition and Document Understanding** | | | | | |
| OmniDocBench 1.5 | 55.9 | 79.4 | 86.8 | 87.7 | 86.2 |
| CharXiv (RQ) | 50.1 | 56.1 | 56.6 | 73.0 | 70.8 |
| MMLongBench-Doc | 31.8 | 46.5 | 47.4 | 57.7 | 54.2 |
| CC-OCR | 58.9 | 72.9 | 77.8 | 79.3 | 76.7 |
| AI2D_TEST | 81.9 | 85.7 | 86.9 | 90.2 | 89.6 |
| OCRBench | 75.3 | 82.5 | 83.9 | 89.2 | 85.0 |
| **Spatial Intelligence** | | | | | |
| ERQA | 45.8 | 44.3 | 45.3 | 55.5 | 54.0 |
| CountBench | 80.0 | 79.2 | 90.0 | 97.2 | 96.3 |
| RefCOCO (avg) | -- | -- | 89.3 | 89.7 | 88.1 |
| EmbSpatialBench | 74.2 | 66.1 | 80.6 | 83.0 | 81.3 |
| RefSpatialBench | 12.6 | 11.2 | 54.2 | 58.5 | 54.6 |
| LingoQA | 57.0 | 17.8 | 62.0 | 80.4 | 74.4 |
| Hypersim | -- | -- | 11.4 | 13.5 | 12.5 |
| Nuscene | -- | -- | 10.3 | 11.8 | 9.9 |
| **Video Understanding** | | | | | |
| VideoMME (w sub.) | 71.7 | 74.6 | 79.9 | 84.5 | 83.5 |
| VideoMME (w/o sub.) | 66.2 | 72.7 | 73.3 | 78.4 | 76.9 |
| VideoMMMU | 63.0 | 69.2 | 75.0 | 78.9 | 74.1 |
| MLVU | 69.2 | 78.5 | 78.9 | 84.4 | 82.8 |
| MVBench | -- | -- | 72.0 | 74.4 | 71.2 |
| LVBench | -- | 60.9 | 59.2 | 70.0 | 66.4 |
| MMVU | 63.1 | 65.3 | 66.1 | 67.8 | 64.9 |
| **Visual Agent** | | | | | |
| ScreenSpot Pro | -- | -- | 60.5 | 65.2 | 60.3 |
| OSWorld-Verified | -- | -- | 30.6 | 41.8 | 35.6 |
| AndroidWorld | -- | -- | 55.0 | 57.8 | 58.6 |
| **Tool Calling** | | | | | |
| TIR-Bench | 18.5 | 21.5 | 22.5 | 45.6/31.9 | 38.9/29.9 |
| V* | 68.1 | 69.6 | 83.2 | 90.1/88.5 | 84.3/86.4 |
| **Medical VQA** | | | | | |
| SLAKE | 57.0 | 65.0 | 68.8 | 79.0 | 76.1 |
| PMC-VQA | 37.8 | 48.8 | 51.5 | 57.9 | 55.5 |
| MedXpertQA-MM | 26.7 | 35.3 | 35.5 | 49.9 | 42.9 |

* MathVision: our model’s score is evaluated using a fixed prompt, e.g., “Please reason step by step, and put your final answer within \boxed{}.” For other models, we report the higher score between runs with and without the \boxed{} formatting.
* BabyVision: scores reported as "with CI / without CI".
* TIR-Bench and V*: scores reported as "with CI / without CI".
* Empty cells (--) indicate scores not yet available or not applicable.

Quickstart

Qwen3.5 models operate in thinking mode by default, generating thinking content signified by <think>\n...</think>\n\n before producing the final response. To disable thinking content and obtain a direct response, refer to the examples here.

For streamlined integration, we recommend using Qwen3.5 via APIs. Below is a guide to using Qwen3.5 via an OpenAI-compatible API.

Serving Qwen3.5

Qwen3.5 can be served via APIs with popular inference frameworks. In the following, we show example commands to launch OpenAI-Compatible API servers for Qwen3.5 models.

Inference efficiency and throughput vary significantly across frameworks. We recommend using the latest framework versions to ensure optimal performance and compatibility. For production workloads or high-throughput scenarios, dedicated serving engines such as SGLang, KTransformers or vLLM are strongly recommended.

The model has a default context length of 262,144 tokens. If you encounter out-of-memory (OOM) errors, consider reducing the context window. However, because Qwen3.5 leverages extended context for complex tasks, we advise maintaining a context length of at least 128K tokens to preserve thinking capabilities.

SGLang

SGLang is a fast serving framework for large language models and vision language models. SGLang from the main branch of the open-source repository is required for Qwen3.5, which can be installed using the following command in a fresh environment:

uv pip install 'git+https://github.com/sgl-project/sglang.git#subdirectory=python&egg=sglang[all]'

See its documentation for more details.

The following will create API endpoints at http://localhost:8000/v1:

  • Standard Version: The following command creates an API endpoint with a maximum context length of 262,144 tokens on a single GPU (--tp-size 1).

    python -m sglang.launch_server --model-path Qwen/Qwen3.5-9B --port 8000 --tp-size 1 --mem-fraction-static 0.8 --context-length 262144 --reasoning-parser qwen3
    
  • Tool Use: To support tool use, you can use the following command.

    python -m sglang.launch_server --model-path Qwen/Qwen3.5-9B --port 8000 --tp-size 1 --mem-fraction-static 0.8 --context-length 262144 --reasoning-parser qwen3 --tool-call-parser qwen3_coder
    
  • Multi-Token Prediction (MTP): The following command is recommended for MTP:

    python -m sglang.launch_server --model-path Qwen/Qwen3.5-9B --port 8000 --tp-size 1 --mem-fraction-static 0.8 --context-length 262144 --reasoning-parser qwen3 --speculative-algo NEXTN --speculative-num-steps 3 --speculative-eagle-topk 1 --speculative-num-draft-tokens 4
    

vLLM

vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. vLLM from the main branch of the open-source repository is required for Qwen3.5, which can be installed using the following command in a fresh environment:

uv pip install vllm --torch-backend=auto --extra-index-url https://wheels.vllm.ai/nightly

See its documentation for more details.

For detailed Qwen3.5 usage guide, see the vLLM Qwen3.5 recipe.

The following will create API endpoints at http://localhost:8000/v1:

  • Standard Version: The following command creates an API endpoint with a maximum context length of 262,144 tokens on a single GPU (--tensor-parallel-size 1).

    vllm serve Qwen/Qwen3.5-9B --port 8000 --tensor-parallel-size 1 --max-model-len 262144 --reasoning-parser qwen3 
    
  • Tool Call: To support tool use, you can use the following command.

    vllm serve Qwen/Qwen3.5-9B --port 8000 --tensor-parallel-size 1 --max-model-len 262144 --reasoning-parser qwen3 --enable-auto-tool-choice --tool-call-parser qwen3_coder 
    
  • Multi-Token Prediction (MTP): The following command is recommended for MTP:

    vllm serve Qwen/Qwen3.5-9B --port 8000 --tensor-parallel-size 1 --max-model-len 262144 --reasoning-parser qwen3 --speculative-config '{"method":"qwen3_next_mtp","num_speculative_tokens":2}'
    
  • Text-Only: The following command skips the vision encoder and multimodal profiling to free up memory for additional KV cache:

    vllm serve Qwen/Qwen3.5-9B --port 8000 --tensor-parallel-size 1 --max-model-len 262144 --reasoning-parser qwen3 --language-model-only
    

KTransformers

KTransformers is a flexible framework for experiencing cutting-edge LLM inference optimizations with CPU-GPU heterogeneous computing. For running Qwen3.5 with KTransformers, see the KTransformers Deployment Guide.

Hugging Face Transformers

Hugging Face Transformers contains a lightweight server which can be used for quick testing and moderate load deployment. The latest transformers is required for Qwen3.5:

pip install "transformers[serving] @ git+https://github.com/huggingface/transformers.git@main"

See its documentation for more details. Please also make sure torchvision and pillow are installed.

Then, run transformers serve to launch a server with API endpoints at http://localhost:8000/v1; it will place the model on accelerators if available:

transformers serve --force-model Qwen/Qwen3.5-9B --port 8000 --continuous-batching

Using Qwen3.5 via the Chat Completions API

The chat completions API is accessible via standard HTTP requests or OpenAI SDKs. Here, we show examples using the OpenAI Python SDK.

Before starting, make sure the SDK is installed and that the API key and the API base URL are configured, e.g.:

pip install -U openai

# Set the following accordingly
export OPENAI_BASE_URL="http://localhost:8000/v1"
export OPENAI_API_KEY="EMPTY"

We recommend using the following sets of sampling parameters for generation:

  • Thinking mode for general tasks: temperature=1.0, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0
  • Thinking mode for precise coding tasks (e.g. WebDev): temperature=0.6, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=0.0, repetition_penalty=1.0
  • Instruct (or non-thinking) mode for general tasks: temperature=0.7, top_p=0.8, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0
  • Instruct (or non-thinking) mode for reasoning tasks: temperature=1.0, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0

Please note that the support for sampling parameters varies according to inference frameworks.

Text-Only Input

from openai import OpenAI
# Configured by environment variables
client = OpenAI()

messages = [
    {"role": "user", "content": "Type \"I love Qwen3.5\" backwards"},
]

chat_response = client.chat.completions.create(
    model="Qwen/Qwen3.5-9B",
    messages=messages,
    max_tokens=81920,
    temperature=1.0,
    top_p=0.95,
    presence_penalty=1.5,
    extra_body={
        "top_k": 20,
    }, 
)
print("Chat response:", chat_response)

Image Input

from openai import OpenAI
# Configured by environment variables
client = OpenAI()

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image_url",
                "image_url": {
                    "url": "https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3.5/demo/CI_Demo/mathv-1327.jpg"
                }
            },
            {
                "type": "text",
                "text": "The centres of the four illustrated circles are in the corners of the square. The two big circles touch each other and also the two little circles. With which factor do you have to multiply the radii of the little circles to obtain the radius of the big circles?\nChoices:\n(A) $\\frac{2}{9}$\n(B) $\\sqrt{5}$\n(C) $0.8 \\cdot \\pi$\n(D) 2.5\n(E) $1+\\sqrt{2}$"
            }
        ]
    }
]

chat_response = client.chat.completions.create(
    model="Qwen/Qwen3.5-9B",
    messages=messages,
    max_tokens=81920,
    temperature=1.0,
    top_p=0.95,
    presence_penalty=1.5,
    extra_body={
        "top_k": 20,
    }, 
)
print("Chat response:", chat_response)

Video Input

from openai import OpenAI
# Configured by environment variables
client = OpenAI()

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "video_url",
                "video_url": {
                    "url": "https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3.5/demo/video/N1cdUjctpG8.mp4"
                }
            },
            {
                "type": "text",
                "text": "Summarize the video content."
            }
        ]
    }
]

# When vLLM is launched with `--media-io-kwargs '{"video": {"num_frames": -1}}'`,
# video frame sampling can be configured via `extra_body` (e.g., by setting `fps`).
# This feature is currently supported only in vLLM.
#
# By default, `fps=2` and `do_sample_frames=True`.
# With `do_sample_frames=True`, you can customize the `fps` value to set your desired video sampling rate.
chat_response = client.chat.completions.create(
    model="Qwen/Qwen3.5-9B",
    messages=messages,
    max_tokens=81920,
    temperature=1.0,
    top_p=0.95,
    presence_penalty=1.5,
    extra_body={
        "top_k": 20,
        "mm_processor_kwargs": {"fps": 2, "do_sample_frames": True},
    }, 
)

print("Chat response:", chat_response)

Instruct (or Non-Thinking) Mode

Qwen3.5 does not officially support the soft switch of Qwen3, i.e., /think and /no_think.

Qwen3.5 will think by default before responding. You can obtain a direct response from the model, without thinking, by configuring the API parameters. For example:

from openai import OpenAI
# Configured by environment variables
client = OpenAI()

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image_url",
                "image_url": {
                    "url": "https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3.5/demo/RealWorld/RealWorld-04.png"
                }
            },
            {
                "type": "text",
                "text": "Where is this?"
            }
        ]
    }
]

chat_response = client.chat.completions.create(
    model="Qwen/Qwen3.5-9B",
    messages=messages,
    max_tokens=32768,
    temperature=0.7,
    top_p=0.8,
    presence_penalty=1.5,
    extra_body={
        "top_k": 20,
        "chat_template_kwargs": {"enable_thinking": False},
    }, 
)
print("Chat response:", chat_response)

If you are using APIs from Alibaba Cloud Model Studio, in addition to changing the model name, please use "enable_thinking": False instead of "chat_template_kwargs": {"enable_thinking": False}.
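Concretely, only the extra_body changes relative to the example above; a sketch for Model Studio (the model name and its availability there are assumptions):

from openai import OpenAI

# Base URL and API key for Model Studio configured via environment variables
client = OpenAI()

chat_response = client.chat.completions.create(
    model="qwen3.5-9b",  # hypothetical Model Studio model name
    messages=[{"role": "user", "content": "Where is this?"}],
    extra_body={"enable_thinking": False},  # top-level flag, not chat_template_kwargs
)
print("Chat response:", chat_response)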

Agentic Usage

Qwen3.5 excels at tool calling.

Qwen-Agent

We recommend using Qwen-Agent to quickly build Agent applications with Qwen3.5.

To define the available tools, you can use the MCP configuration file, use the integrated tools of Qwen-Agent, or integrate other tools yourself.

import os
from qwen_agent.agents import Assistant

# Define LLM
# Using Alibaba Cloud Model Studio
llm_cfg = {
    # Use the OpenAI-compatible model service provided by DashScope:
    'model': 'Qwen3.5-9B',
    'model_type': 'qwenvl_oai',
    'model_server': 'https://dashscope.aliyuncs.com/compatible-mode/v1',
    'api_key': os.getenv('DASHSCOPE_API_KEY'),

    'generate_cfg': {
        'use_raw_api': True,
        # When using the DashScope OAI API, pass the parameter of whether to enable thinking mode in this way
        'extra_body': {
            'enable_thinking': True
        },
    },
}

# Using an OpenAI-compatible API endpoint. It is recommended to disable the reasoning and tool call parsing
# functionality of the deployment frameworks and let Qwen-Agent automate the related operations.
#
# llm_cfg = {
#     # Use your own model service compatible with OpenAI API by vLLM/SGLang:
#     'model': 'Qwen/Qwen3.5-9B',
#     'model_type': 'qwenvl_oai',
#     'model_server': 'http://localhost:8000/v1',  # api_base
#     'api_key': 'EMPTY',
#
#     'generate_cfg': {
#         'use_raw_api': True,
#         # When using vLLM/SGLang OAI API, pass the parameter of whether to enable thinking mode in this way
#         'extra_body': {
#             'chat_template_kwargs': {'enable_thinking': True}
#         },
#     },
# }

# Define Tools
tools = [
    {'mcpServers': {  # You can specify the MCP configuration file
            "filesystem": {
                "command": "npx",
                "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/xxxx/Desktop"]
            }
        }
    }
]

# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)

# Streaming generation
messages = [{'role': 'user', 'content': 'Help me organize my desktop.'}]
for responses in bot.run(messages=messages):
    pass
print(responses)

# Streaming generation
messages = [{'role': 'user', 'content': 'Develop a dog website and save it on the desktop'}]
for responses in bot.run(messages=messages):
    pass
print(responses)

Qwen Code

Qwen Code is an open-source AI agent for the terminal, optimized for Qwen models. It helps you understand large codebases, automate tedious work, and ship faster.

For more information, please refer to Qwen Code.

Processing Ultra-Long Texts

Qwen3.5 natively supports context lengths of up to 262,144 tokens. For long-horizon tasks where the total length (including both input and output) exceeds this limit, we recommend using RoPE scaling techniques, e.g., YaRN, to handle long texts effectively.

YaRN is currently supported by several inference frameworks, e.g., transformers, vllm, ktransformers and sglang. In general, there are two approaches to enabling YaRN for supported frameworks:

  • Modifying the model configuration file: In the config.json file, change the rope_parameters fields in text_config to:

    {
        "mrope_interleaved": true,
        "mrope_section": [
            11,
            11,
            10
        ],
        "rope_type": "yarn",
        "rope_theta": 10000000,
        "partial_rotary_factor": 0.25,
        "factor": 4.0,
        "original_max_position_embeddings": 262144,
    }
    
  • Passing command line arguments:

    For vllm, you can use

    VLLM_ALLOW_LONG_MAX_MODEL_LEN=1 vllm serve ... --hf-overrides '{"text_config": {"rope_parameters": {"mrope_interleaved": true, "mrope_section": [11, 11, 10], "rope_type": "yarn", "rope_theta": 10000000, "partial_rotary_factor": 0.25, "factor": 4.0, "original_max_position_embeddings": 262144}}}' --max-model-len 1010000  
    

    For sglang and ktransformers, you can use

    SGLANG_ALLOW_OVERWRITE_LONGER_CONTEXT_LEN=1 python -m sglang.launch_server ... --json-model-override-args '{"text_config": {"rope_parameters": {"mrope_interleaved": true, "mrope_section": [11, 11, 10], "rope_type": "yarn", "rope_theta": 10000000, "partial_rotary_factor": 0.25, "factor": 4.0, "original_max_position_embeddings": 262144}}}' --context-length 1010000
    

All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, potentially impacting performance on shorter texts. We advise modifying the rope_parameters configuration only when processing long contexts is required. It is also recommended to modify the factor as needed. For example, if the typical context length for your application is 524,288 tokens, it would be better to set factor as 2.0.
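In other words, the scaling factor is simply the target context length divided by the native window; a one-line sketch:

# factor = target context length / original_max_position_embeddings
factor = 524288 / 262144   # -> 2.0; round up when the target is not an exact multiple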

Best Practices

To achieve optimal performance, we recommend the following settings:

  1. Sampling Parameters:

    • We suggest using the following sets of sampling parameters depending on the mode and task type:
      • Thinking mode for general tasks:
        temperature=1.0, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0
      • Thinking mode for precise coding tasks (e.g., WebDev):
        temperature=0.6, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=0.0, repetition_penalty=1.0
      • Instruct (or non-thinking) mode for general tasks:
        temperature=0.7, top_p=0.8, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0
      • Instruct (or non-thinking) mode for reasoning tasks:
        temperature=1.0, top_p=1.0, top_k=40, min_p=0.0, presence_penalty=2.0, repetition_penalty=1.0
    • For supported frameworks, you can adjust the presence_penalty parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
  2. Adequate Output Length: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 81,920 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.

  3. Standardize Output Format: We recommend using prompts to standardize model outputs when benchmarking.

    • Math Problems: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
    • Multiple-Choice Questions: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the answer field with only the choice letter, e.g., "answer": "C"."
  4. No Thinking Content in History: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. This is implemented in the provided Jinja2 chat template. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that this best practice is followed (see the sketch after this list).

  5. Long Video Understanding: To optimize inference efficiency for plain text and images, the size parameter in the released video_preprocessor_config.json is conservatively configured. It is recommended to set the longest_edge parameter in the video_preprocessor_config file to 469,762,048 (corresponding to 224k video tokens) to enable higher frame-rate sampling for hour-scale videos and thereby achieve superior performance. For example,

    {"longest_edge": 469762048, "shortest_edge": 4096}
    

    Alternatively, override the default values via engine startup parameters. For implementation details, refer to: vLLM / SGLang.
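For best practice 4 above, frameworks that bypass the provided Jinja2 chat template must drop the thinking blocks themselves. A minimal sketch, assuming assistant turns embed <think>...</think> as plain text:

import re

THINK_RE = re.compile(r"<think>.*?</think>\s*", re.DOTALL)

def strip_thinking(messages):
    """Keep only the final answers in assistant turns of a multi-turn history."""
    cleaned = []
    for msg in messages:
        if msg.get("role") == "assistant" and isinstance(msg.get("content"), str):
            msg = {**msg, "content": THINK_RE.sub("", msg["content"]).lstrip()}
        cleaned.append(msg)
    return cleaned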

Citation

If you find our work helpful, feel free to cite us.

@misc{qwen3.5,
    title  = {{Qwen3.5}: Towards Native Multimodal Agents},
    author = {{Qwen Team}},
    month  = {February},
    year   = {2026},
    url    = {https://qwen.ai/blog?id=qwen3.5}
}

EXAMPLE GENERATIONS:

IQ4XS - Imatrix, fully enhanced quant from:

https://huggingface.co/DavidAU/Qwen3.5-9B-Claude-4.6-OS-Auto-Variable-HERETIC-UNCENSORED-THINKING-MAX-NEOCODE-GGUF

"SYSTEM" - System prompt, if used.

"USER" - Prompt

Thinking / output generation.


EXAMPLE #1:


The Bleeding Skyscraper

Model: qwen3.5-9b-claude-4.6-os-av-h-uncensored-think Created: 09/03/2026, 11:33:10 am Exported from: LM Studio 0.4.6+1

System

User

Start a 1000 word scene (vivid, graphic horror in first person perspective), POV character Diana, with: The skyscraper sways, as I watch the window in front of me on the 21st floor explode... Make the scene very detailed, with lots of blood and gore.

Assistant

<think>

The user wants a vivid, graphic horror scene in first person perspective. The scene should start with "The skyscraper sways, as I watch the window in front of me on the 21st floor explode..." and be approximately 1000 words. It needs lots of blood and gore details.

Let me create a visceral, horror-focused scene that is detailed and engaging.

I'll need to be careful about the gore - make it vivid and detailed but not gratuitously shocking in a way that would be inappropriate. The goal is horror fiction that is immersive and visceral.

Let me write this scene now, focusing on sensory details, emotional impact, and graphic but tasteful horror elements.

Key elements to include:

  • The moment of the explosion
  • The aftermath
  • Blood and gore descriptions
  • Diana's perspective and reactions
  • Vivid sensory details (sight, sound, smell)
  • Building tension throughout

Let me craft this now.

</think>

The skyscraper sways, as I watch the window in front of me on the 21st floor explode...

Glass doesn't just fall. It rains—thousands of razor-thin shards, each one a diamond knife falling in slow motion against the gray afternoon. I scream, but the sound dies in my throat, trapped by the sheer violence of it.

Then the man falls.

He's wearing a gray business suit, the kind that costs more than my rent. His body arcs through the air with terrible, beautiful grace before the wind catches him. He's not falling straight down—he's spinning, arms flailing, and for a moment I can see his face. His eyes are wide, mouth open in a silent shriek.

The first impact happens on the roof of the car parked below. A sickening crunch that echoes up through the concrete, through the building's skeletal frame, into my bones.

Then the body breaks apart.

I can't look away. My body won't let me. The man's suit tears open like cheap fabric, revealing the grisly machinery of death beneath. Blood sprays upward, a fountain of crimson that catches the light, glittering like fresh paint. It doesn't just splash—it explodes outward in a thousand droplets, each one suspended in the air for a heartbeat before gravity claims it.

Inside the building, the floor shudders. The coffee cup on my desk slides off the edge, landing with a soft clatter. I don't even flinch. I'm transfixed.

The body's legs separate at the hip. I watch as they buckle, then fold like broken branches. One foot flies through the air, landing with a wet thud against the brick wall. The other leg slides across the roof, leaving a trail of blood that looks too red, too fresh, too impossible.

Then the torso twists. The spine snaps with a sound like a gunshot, echoing up through the building's steel skeleton. The man's head drops forward, neck bent at an angle that shouldn't exist. His eyes roll back, but not in death—still open, still seeing, still taking in the horror.

Blood pools around the impact site, darkening the white concrete, creating a shallow lake that reflects the gray sky. It's thick and viscous, moving sluggishly as if the roof itself is reluctant to let it spread.

I count to three. Then four. Then ten.

The floor behind me groans. Something heavy has dropped from above. I look down at my desk—the coffee cup is cracked, the dark liquid spreading across the wood like a dark stain. Then I realize I'm not the only one watching.

Three people are in the office with me. Sarah, the receptionist, is pressed against the wall, her mouth open but no sound coming out. Mark, the junior associate, has gone completely still, his eyes fixed on the carnage outside. And I'm the only one who knows what's coming.

Because I saw it too.

The man who died—the one falling from above—had been staring up at the ceiling when he fell. Not at the street. Not at the horizon. At us.

And now the building itself begins to bleed.

The first drop hits my desk exactly one minute after the impact. It's thick, dark red, and it doesn't splatter—it spreads slowly, like oil on water. Then another. Then another.

Sarah finally finds her voice. "What is that?"

I don't answer. I can't.

The drops continue, rhythmically, like a heartbeat. Each one lands with the same soft plop, each one spreading across the wood like a dark stain. The smell is copper and iron and something deeper—something primal, something that makes the hair on my arms stand up.

Then Mark speaks, his voice trembling. "Is it... is it raining?"

"No," I say, my own voice sounding foreign to my ears. "It's bleeding."

The drops continue, and the smell grows stronger, filling the office like a fog. I look at Sarah, at her pale face, at her trembling hands gripping the edge of her chair. She's seen it too. She knows what's happening.

The last drop lands. Then the silence.

For a moment, nothing moves. Nothing changes. Then the blood on my desk begins to move.

It spreads outward, crawling across the wood like a living thing. It doesn't just spread—it flows, deliberately, purposefully, as if searching for something. The edges ripple, as if something is moving beneath the surface.

And then the smell changes. It becomes richer, deeper, more complex. It smells like life, like death, like the moment between when you're whole and when you're not.

Sarah is the first to move. She grabs her bag, her movements jerky and panicked. "We have to go," she whispers. "We have to—"

But the building groans again, a sound like metal being stretched beyond its limits. The windows along the hallway begin to fog, not with condensation, but with something darker. Something wet.

And then I hear it.

From the elevator shaft. From the stairwell. From everywhere and nowhere at once. A sound like breathing. Heavy, labored breathing, coming from the walls themselves.

The blood on my desk has reached the edge. It's waiting for something. Or someone.

I look at Mark. He's still staring at the ceiling, still waiting for something that isn't coming.

"Mark," I say, my voice barely a whisper. "Look at me."

He doesn't blink. Doesn't move.

The blood on my desk begins to ripple again. Something is coming.

