IMPORTANT: This model has an upgraded Jinja template which repairs issues with the original model (repeats, long thinking, loops) and upgrades/repairs tool handling.

Qwen3.5-27B-Deckard-PKD-Heretic-Uncensored-Thinking

The Deckard: Philip K Dick AI.

A fine-tune, via Unsloth, of the Qwen3.5-27B dense model on PKD/DECKARD material (5 datasets, manually edited/created), using a custom training script run on local hardware.

This is a stronger training, intended to more deeply embed all things "PKD", yet not so strong that it completely "takes over" the model.

Uncensored of course, otherwise it could not be PKD.

This is also a HERETIC model, trained post "Heretic'ing" -> this model does what you want, no questions asked.

Fully uncensored.

Vision (images) tested -> working with new training.

TWO example generations at the bottom of this page.

BENCHMARKS:

                              arc    arc/e  boolq  hswag  obkqa  piqa   wino
THIS MODEL                    0.460  0.498  0.885  0.739  0.384  0.781  0.740
Qwen3.5-27B-Text-VL qx86-hi   0.443  0.498  0.857  0.701  0.372  0.770  0.752

SAFETY ALIGNMENT // DE-CENSORING:

Metric          This model   Original model (Qwen/Qwen3.5-27B)
KL divergence   0.0653       0 (by definition)
Refusals        14/100       94/100

NOTES:

  • Suggest min q4ks (non-imatrix) or IQ3S (imatrix).
  • Tested with rep pen of 1 (off).
  • Context: 256k (default).
  • Information from Qwen's repo below.
  • Video portions of the model were NOT TESTED.

Qwen3.5-27B

Qwen Chat

This repository contains model weights and configuration files for the post-trained model in the Hugging Face Transformers format.

These artifacts are compatible with Hugging Face Transformers, vLLM, SGLang, KTransformers, etc.

Over recent months, we have intensified our focus on developing foundation models that deliver exceptional utility and performance. Qwen3.5 represents a significant leap forward, integrating breakthroughs in multimodal learning, architectural efficiency, reinforcement learning scale, and global accessibility to empower developers and enterprises with unprecedented capability and efficiency.

Qwen3.5 Highlights

Qwen3.5 features the following enhancements:

  • Unified Vision-Language Foundation: Early fusion training on multimodal tokens achieves cross-generational parity with Qwen3 and outperforms Qwen3-VL models across reasoning, coding, agents, and visual understanding benchmarks.

  • Efficient Hybrid Architecture: Gated Delta Networks combined with sparse Mixture-of-Experts deliver high-throughput inference with minimal latency and cost overhead.

  • Scalable RL Generalization: Reinforcement learning scaled across million-agent environments with progressively complex task distributions for robust real-world adaptability.

  • Global Linguistic Coverage: Expanded support to 201 languages and dialects, enabling inclusive, worldwide deployment with nuanced cultural and regional understanding.

  • Next-Generation Training Infrastructure: Near-100% multimodal training efficiency compared to text-only training and asynchronous RL frameworks supporting massive-scale agent scaffolds and environment orchestration.

Benchmark Results

For more details, please refer to our blog post Qwen3.5.

Model Overview

  • Type: Causal Language Model with Vision Encoder
  • Training Stage: Pre-training & Post-training
  • Language Model
    • Number of Parameters: 27B
    • Hidden Dimension: 5120
    • Token Embedding: 248320 (Padded)
    • Number of Layers: 64
    • Hidden Layout: 16 × (3 × (Gated DeltaNet → FFN) → 1 × (Gated Attention → FFN))
    • Gated DeltaNet:
      • Number of Linear Attention Heads: 48 for V and 16 for QK
      • Head Dimension: 128
    • Gated Attention:
      • Number of Attention Heads: 24 for Q and 4 for KV
      • Head Dimension: 256
      • Rotary Position Embedding Dimension: 64
    • Feed Forward Network:
      • Intermediate Dimension: 17408
    • LM Output: 248320 (Padded)
    • MTP: trained with multi-steps
  • Context Length: 262,144 natively and extensible up to 1,010,000 tokens.
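The Hidden Layout line above can be unrolled into an explicit per-layer schedule. A minimal sketch in Python; the layer names are illustrative labels, not the actual module names in the released weights:

```python
def layer_schedule(blocks: int = 16) -> list[str]:
    """Unroll 16 x (3 x (Gated DeltaNet -> FFN) -> 1 x (Gated Attention -> FFN)).

    Each of the four units in a block is a token mixer followed by an FFN,
    so one block contributes 4 layers and 16 blocks give 64 layers total.
    """
    layers: list[str] = []
    for _ in range(blocks):
        layers += ["gated_deltanet"] * 3 + ["gated_attention"]
    return layers

schedule = layer_schedule()
assert len(schedule) == 64                      # matches "Number of Layers: 64"
assert schedule.count("gated_deltanet") == 48   # linear attention in 3 of every 4 layers
```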

Benchmark Results

Language

| Benchmark | GPT-5-mini 2025-08-07 | GPT-OSS-120B | Qwen3-235B-A22B | Qwen3.5-122B-A10B | Qwen3.5-27B | Qwen3.5-35B-A3B |
|---|---|---|---|---|---|---|
| **Knowledge** | | | | | | |
| MMLU-Pro | 83.7 | 80.8 | 84.4 | 86.7 | 86.1 | 85.3 |
| MMLU-Redux | 93.7 | 91.0 | 93.8 | 94.0 | 93.2 | 93.3 |
| C-Eval | 82.2 | 76.2 | 92.1 | 91.9 | 90.5 | 90.2 |
| SuperGPQA | 58.6 | 54.6 | 64.9 | 67.1 | 65.6 | 63.4 |
| **Instruction Following** | | | | | | |
| IFEval | 93.9 | 88.9 | 87.8 | 93.4 | 95.0 | 91.9 |
| IFBench | 75.4 | 69.0 | 51.7 | 76.1 | 76.5 | 70.2 |
| MultiChallenge | 59.0 | 45.3 | 50.2 | 61.5 | 60.8 | 60.0 |
| **Long Context** | | | | | | |
| AA-LCR | 68.0 | 50.7 | 60.0 | 66.9 | 66.1 | 58.5 |
| LongBench v2 | 56.8 | 48.2 | 54.8 | 60.2 | 60.6 | 59.0 |
| **STEM & Reasoning** | | | | | | |
| HLE w/ CoT | 19.4 | 14.9 | 18.2 | 25.3 | 24.3 | 22.4 |
| GPQA Diamond | 82.8 | 80.1 | 81.1 | 86.6 | 85.5 | 84.2 |
| HMMT Feb 25 | 89.2 | 90.0 | 85.1 | 91.4 | 92.0 | 89.0 |
| HMMT Nov 25 | 84.2 | 90.0 | 89.5 | 90.3 | 89.8 | 89.2 |
| **Coding** | | | | | | |
| SWE-bench Verified | 72.0 | 62.0 | -- | 72.0 | 72.4 | 69.2 |
| Terminal Bench 2 | 31.9 | 18.7 | -- | 49.4 | 41.6 | 40.5 |
| LiveCodeBench v6 | 80.5 | 82.7 | 75.1 | 78.9 | 80.7 | 74.6 |
| CodeForces | 2160 | 2157 | 2146 | 2100 | 1899 | 2028 |
| OJBench | 40.4 | 41.5 | 32.7 | 39.5 | 40.1 | 36.0 |
| FullStackBench en | 30.6 | 58.9 | 61.1 | 62.6 | 60.1 | 58.1 |
| FullStackBench zh | 35.2 | 60.4 | 63.1 | 58.7 | 57.4 | 55.0 |
| **General Agent** | | | | | | |
| BFCL-V4 | 55.5 | -- | 54.8 | 72.2 | 68.5 | 67.3 |
| TAU2-Bench | 69.8 | -- | 58.5 | 79.5 | 79.0 | 81.2 |
| VITA-Bench | 13.9 | -- | 31.6 | 33.6 | 41.9 | 31.9 |
| DeepPlanning | 17.9 | -- | 17.1 | 24.1 | 22.6 | 22.8 |
| **Search Agent** | | | | | | |
| HLE w/ tool | 35.8 | 19.0 | -- | 47.5 | 48.5 | 47.4 |
| Browsecomp | 48.1 | 41.1 | -- | 63.8 | 61.0 | 61.0 |
| Browsecomp-zh | 49.5 | 42.9 | -- | 69.9 | 62.1 | 69.5 |
| WideSearch | 47.2 | 40.4 | -- | 60.5 | 61.1 | 57.1 |
| Seal-0 | 34.2 | 45.1 | -- | 44.1 | 47.2 | 41.4 |
| **Multilingualism** | | | | | | |
| MMMLU | 86.2 | 78.2 | 83.4 | 86.7 | 85.9 | 85.2 |
| MMLU-ProX | 78.5 | 74.5 | 77.9 | 82.2 | 82.2 | 81.0 |
| NOVA-63 | 51.9 | 51.1 | 55.4 | 58.6 | 58.1 | 57.1 |
| INCLUDE | 81.8 | 74.0 | 81.0 | 82.8 | 81.6 | 79.7 |
| Global PIQA | 88.5 | 84.1 | 85.7 | 88.4 | 87.5 | 86.6 |
| PolyMATH | 67.3 | 54.0 | 60.1 | 68.9 | 71.2 | 64.4 |
| WMT24++ | 80.7 | 74.4 | 75.8 | 78.3 | 77.6 | 76.3 |
| MAXIFE | 85.3 | 83.7 | 83.2 | 87.9 | 88.0 | 86.6 |

* CodeForces: evaluated on our own query set.
* TAU2-Bench: we follow the official setup except for the airline domain, where all models are evaluated by applying the fixes proposed in the Claude Opus 4.5 system card.
* Search Agent: most search agents built on our model adopt a simple context-folding strategy (256k): once the cumulative Tool Response length reaches a preset threshold, earlier Tool Responses are pruned from the history to keep the context within limits.
* WideSearch: we use a 256k context window without any context management.
* MMLU-ProX: we report the averaged accuracy on 29 languages.
* WMT24++: a harder subset of WMT24 after difficulty labeling and rebalancing; we report the averaged scores on 55 languages using XCOMET-XXL.
* MAXIFE: we report the accuracy on English + multilingual original prompts (totally 23 settings).
* Empty cells (--) indicate scores not yet available or not applicable.
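The context-folding strategy described in the Search Agent note can be sketched in a few lines. This is an illustrative implementation, not the one used in the evaluations; the message shape follows the OpenAI chat format, and the threshold and stub text are assumptions:

```python
def fold_context(messages: list[dict], max_tool_chars: int = 200_000) -> list[dict]:
    """Prune the earliest Tool Responses once their cumulative length
    exceeds a preset threshold, keeping later responses intact."""
    total = sum(len(m.get("content") or "") for m in messages if m["role"] == "tool")
    folded = [dict(m) for m in messages]  # shallow copies; originals untouched
    for m in folded:
        if total <= max_tool_chars:
            break
        if m["role"] == "tool" and m.get("content"):
            total -= len(m["content"])
            m["content"] = "[tool response pruned]"  # illustrative stub
    return folded
```

Pruning oldest-first preserves the most recent tool outputs, which are usually the ones the next step depends on.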

Vision Language

| Benchmark | GPT-5-mini 2025-08-07 | Claude-Sonnet-4.5 | Qwen3-VL-235B-A22B | Qwen3.5-122B-A10B | Qwen3.5-27B | Qwen3.5-35B-A3B |
|---|---|---|---|---|---|---|
| **STEM and Puzzle** | | | | | | |
| MMMU | 79.0 | 79.6 | 80.6 | 83.9 | 82.3 | 81.4 |
| MMMU-Pro | 67.3 | 68.4 | 69.3 | 76.9 | 75.0 | 75.1 |
| MathVision | 71.9 | 71.1 | 74.6 | 86.2 | 86.0 | 83.9 |
| Mathvista(mini) | 79.1 | 79.8 | 85.8 | 87.4 | 87.8 | 86.2 |
| DynaMath | 81.4 | 78.8 | 82.8 | 85.9 | 87.7 | 85.0 |
| ZEROBench | 3 | 4 | 4 | 9 | 10 | 8 |
| ZEROBench_sub | 27.3 | 26.3 | 28.4 | 36.2 | 36.2 | 34.1 |
| VlmsAreBlind | 75.8 | 85.5 | 79.5 | 96.7 | 96.9 | 97.0 |
| BabyVision | 20.9 | 18.6 | 22.2 | 40.2 / 34.5 | 44.6 / 34.8 | 38.4 / 29.6 |
| **General VQA** | | | | | | |
| RealWorldQA | 79.0 | 70.3 | 81.3 | 85.1 | 83.7 | 84.1 |
| MMStar | 74.1 | 73.8 | 78.7 | 82.9 | 81.0 | 81.9 |
| MMBenchEN-DEV-v1.1 | 86.8 | 88.3 | 89.7 | 92.8 | 92.6 | 91.5 |
| SimpleVQA | 56.8 | 57.6 | 61.3 | 61.7 | 56.0 | 58.3 |
| HallusionBench | 63.2 | 59.9 | 66.7 | 67.6 | 70.0 | 67.9 |
| **Text Recognition and Document Understanding** | | | | | | |
| OmniDocBench1.5 | 77.0 | 85.8 | 84.5 | 89.8 | 88.9 | 89.3 |
| CharXiv(RQ) | 68.6 | 67.2 | 66.1 | 77.2 | 79.5 | 77.5 |
| MMLongBench-Doc | 50.3 | -- | 56.2 | 59.0 | 60.2 | 59.5 |
| CC-OCR | 70.8 | 68.1 | 81.5 | 81.8 | 81.0 | 80.7 |
| AI2D_TEST | 88.2 | 87.0 | 89.2 | 93.3 | 92.9 | 92.6 |
| OCRBench | 82.1 | 76.6 | 87.5 | 92.1 | 89.4 | 91.0 |
| **Spatial Intelligence** | | | | | | |
| ERQA | 54.0 | 45.0 | 52.5 | 62.0 | 60.5 | 64.8 |
| CountBench | 91.0 | 90.0 | 93.7 | 97.0 | 97.8 | 97.8 |
| RefCOCO(avg) | -- | -- | 91.1 | 91.3 | 90.9 | 89.2 |
| ODInW13 | -- | -- | 43.2 | 44.5 | 41.1 | 42.6 |
| EmbSpatialBench | 80.7 | 71.8 | 84.3 | 83.9 | 84.5 | 83.1 |
| RefSpatialBench | 9.0 | 2.2 | 69.9 | 69.3 | 67.7 | 63.5 |
| LingoQA | 62.4 | 12.8 | 66.8 | 80.8 | 82.0 | 79.2 |
| Hypersim | -- | -- | 11.0 | 12.7 | 13.0 | 13.1 |
| SUNRGBD | -- | -- | 34.9 | 36.2 | 35.4 | 33.4 |
| Nuscene | -- | -- | 13.9 | 15.4 | 15.2 | 14.6 |
| **Video Understanding** | | | | | | |
| VideoMME(w sub.) | 83.5 | 81.1 | 83.8 | 87.3 | 87.0 | 86.6 |
| VideoMME(w/o sub.) | 78.9 | 75.3 | 79.0 | 83.9 | 82.8 | 82.5 |
| VideoMMMU | 82.5 | 77.6 | 80.0 | 82.0 | 82.3 | 80.4 |
| MLVU | 83.3 | 72.8 | 83.8 | 87.3 | 85.9 | 85.6 |
| MVBench | -- | -- | 75.2 | 76.6 | 74.6 | 74.8 |
| LVBench | -- | -- | 63.6 | 74.4 | 73.6 | 71.4 |
| MMVU | 69.8 | 70.6 | 71.1 | 74.7 | 73.3 | 72.3 |
| **Visual Agent** | | | | | | |
| ScreenSpot Pro | -- | 36.2 | 62.0 | 70.4 | 70.3 | 68.6 |
| OSWorld-Verified | -- | 61.4 | 38.1 | 58.0 | 56.2 | 54.5 |
| AndroidWorld | -- | -- | 63.7 | 66.4 | 64.2 | 71.1 |
| **Tool Calling** | | | | | | |
| TIR-Bench | 24.6 | 27.6 | 29.8 | 53.2 / 42.5 | 59.8 / 42.3 | 55.5 / 38.0 |
| V* | 71.7 | 58.6 | 85.9 | 93.2 / 90.1 | 93.7 / 89.0 | 92.7 / 89.5 |
| **Medical VQA** | | | | | | |
| SLAKE | 70.5 | 73.6 | 54.7 | 81.6 | 80.0 | 78.7 |
| PMC-VQA | 36.3 | 55.9 | 41.2 | 63.3 | 62.4 | 62.0 |
| MedXpertQA-MM | 34.4 | 54.0 | 47.6 | 67.3 | 62.4 | 61.4 |

* MathVision: our model’s score is evaluated using a fixed prompt, e.g., “Please reason step by step, and put your final answer within \boxed{}.” For other models, we report the higher score between runs with and without the \boxed{} formatting.
* BabyVision: scores reported as "with CI / without CI".
* TIR-Bench and V*: scores reported as "with CI / without CI".
* Empty cells (--) indicate scores not yet available or not applicable.

Quickstart

Qwen3.5 models operate in thinking mode by default, generating thinking content wrapped in <think>\n...</think>\n\n before producing the final response. To disable thinking content and obtain a direct response, refer to the examples here.
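When a framework returns the raw text rather than a parsed reasoning field, the thinking block can be stripped client-side. A small helper, assuming only the <think>...</think> delimiters described above:

```python
import re

# Match the first <think>...</think> block plus trailing whitespace.
_THINK_RE = re.compile(r"<think>.*?</think>\s*", re.DOTALL)

def strip_thinking(text: str) -> str:
    """Drop the leading <think>...</think> block and return the final answer.

    If no thinking block is present, the text is returned unchanged.
    """
    return _THINK_RE.sub("", text, count=1).lstrip()
```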

For streamlined integration, we recommend using Qwen3.5 via APIs. Below is a guide to use Qwen3.5 via OpenAI-compatible API.

Serving Qwen3.5

Qwen3.5 can be served via APIs with popular inference frameworks. In the following, we show example commands to launch OpenAI-Compatible API servers for Qwen3.5 models.

Inference efficiency and throughput vary significantly across frameworks. We recommend using the latest framework versions to ensure optimal performance and compatibility. For production workloads or high-throughput scenarios, dedicated serving engines such as SGLang, KTransformers or vLLM are strongly recommended.

The model has a default context length of 262,144 tokens. If you encounter out-of-memory (OOM) errors, consider reducing the context window. However, because Qwen3.5 leverages extended context for complex tasks, we advise maintaining a context length of at least 128K tokens to preserve thinking capabilities.

SGLang

SGLang is a fast serving framework for large language models and vision language models. SGLang from the main branch of the open-source repository is required for Qwen3.5, which can be installed using the following command in a fresh environment:

uv pip install 'git+https://github.com/sgl-project/sglang.git#subdirectory=python&egg=sglang[all]'

See its documentation for more details.

The following will create API endpoints at http://localhost:8000/v1:

  • Standard Version: The following command can be used to create an API endpoint with maximum context length 262,144 tokens using tensor parallel on 8 GPUs.

    python -m sglang.launch_server --model-path Qwen/Qwen3.5-27B --port 8000 --tp-size 8 --mem-fraction-static 0.8 --context-length 262144 --reasoning-parser qwen3
    
  • Tool Use: To support tool use, you can use the following command.

    python -m sglang.launch_server --model-path Qwen/Qwen3.5-27B --port 8000 --tp-size 8 --mem-fraction-static 0.8 --context-length 262144 --reasoning-parser qwen3 --tool-call-parser qwen3_coder
    
  • Multi-Token Prediction (MTP): The following command is recommended for MTP:

    python -m sglang.launch_server --model-path Qwen/Qwen3.5-27B --port 8000 --tp-size 8 --mem-fraction-static 0.8 --context-length 262144 --reasoning-parser qwen3 --speculative-algo NEXTN --speculative-num-steps 3 --speculative-eagle-topk 1 --speculative-num-draft-tokens 4
    

vLLM

vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. vLLM from the main branch of the open-source repository is required for Qwen3.5, which can be installed using the following command in a fresh environment:

uv pip install vllm --torch-backend=auto --extra-index-url https://wheels.vllm.ai/nightly

See its documentation for more details.

For detailed Qwen3.5 usage guide, see the vLLM Qwen3.5 recipe.

The following will create API endpoints at http://localhost:8000/v1:

  • Standard Version: The following command can be used to create an API endpoint with maximum context length 262,144 tokens using tensor parallel on 8 GPUs.

    vllm serve Qwen/Qwen3.5-27B --port 8000 --tensor-parallel-size 8 --max-model-len 262144 --reasoning-parser qwen3 
    
  • Tool Call: To support tool use, you can use the following command.

    vllm serve Qwen/Qwen3.5-27B --port 8000 --tensor-parallel-size 8 --max-model-len 262144 --reasoning-parser qwen3 --enable-auto-tool-choice --tool-call-parser qwen3_coder 
    
  • Multi-Token Prediction (MTP): The following command is recommended for MTP:

    vllm serve Qwen/Qwen3.5-27B --port 8000 --tensor-parallel-size 8 --max-model-len 262144 --reasoning-parser qwen3 --speculative-config '{"method":"qwen3_next_mtp","num_speculative_tokens":2}'
    
  • Text-Only: The following command skips the vision encoder and multimodal profiling to free up memory for additional KV cache:

    vllm serve Qwen/Qwen3.5-27B --port 8000 --tensor-parallel-size 8 --max-model-len 262144 --reasoning-parser qwen3 --language-model-only
    

KTransformers

KTransformers is a flexible framework for experiencing cutting-edge LLM inference optimizations with CPU-GPU heterogeneous computing. For running Qwen3.5 with KTransformers, see the KTransformers Deployment Guide.

Hugging Face Transformers

Hugging Face Transformers contains a lightweight server which can be used for quick testing and moderate load deployment. The latest transformers is required for Qwen3.5:

pip install "transformers[serving] @ git+https://github.com/huggingface/transformers.git@main"

See its documentation for more details. Please also make sure torchvision and pillow are installed.

Then, run transformers serve to launch a server with API endpoints at http://localhost:8000/v1; it will place the model on accelerators if available:

transformers serve --force-model Qwen/Qwen3.5-27B --port 8000 --continuous-batching

Using Qwen3.5 via the Chat Completions API

The chat completions API is accessible via standard HTTP requests or OpenAI SDKs. Here, we show examples using the OpenAI Python SDK.

Before starting, make sure it is installed and that the API key and API base URL are configured, e.g.:

pip install -U openai

# Set the following accordingly
export OPENAI_BASE_URL="http://localhost:8000/v1"
export OPENAI_API_KEY="EMPTY"

We recommend the following sets of sampling parameters for generation:

  • Thinking mode for general tasks: temperature=1.0, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0
  • Thinking mode for precise coding tasks (e.g. WebDev): temperature=0.6, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=0.0, repetition_penalty=1.0
  • Instruct (or non-thinking) mode for general tasks: temperature=0.7, top_p=0.8, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0
  • Instruct (or non-thinking) mode for reasoning tasks: temperature=1.0, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0

Please note that the support for sampling parameters varies according to inference frameworks.
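As a convenience, the presets above can be kept in one table and split into native OpenAI kwargs plus extra_body, since top_k, min_p, and repetition_penalty are not standard Chat Completions parameters. A sketch; the preset keys are our own naming, not an official API:

```python
SAMPLING_PRESETS = {
    # (mode, task) -> recommended parameters from the list above
    ("thinking", "general"):   dict(temperature=1.0, top_p=0.95, top_k=20, min_p=0.0,
                                    presence_penalty=1.5, repetition_penalty=1.0),
    ("thinking", "coding"):    dict(temperature=0.6, top_p=0.95, top_k=20, min_p=0.0,
                                    presence_penalty=0.0, repetition_penalty=1.0),
    ("instruct", "general"):   dict(temperature=0.7, top_p=0.8, top_k=20, min_p=0.0,
                                    presence_penalty=1.5, repetition_penalty=1.0),
    ("instruct", "reasoning"): dict(temperature=1.0, top_p=0.95, top_k=20, min_p=0.0,
                                    presence_penalty=1.5, repetition_penalty=1.0),
}

# Parameters the OpenAI SDK accepts natively; the rest ride in extra_body.
_STANDARD = {"temperature", "top_p", "presence_penalty"}

def to_openai_kwargs(preset: dict) -> dict:
    """Split a preset into standard create() kwargs plus extra_body."""
    kwargs = {k: v for k, v in preset.items() if k in _STANDARD}
    extra = {k: v for k, v in preset.items() if k not in _STANDARD}
    if extra:
        kwargs["extra_body"] = extra
    return kwargs
```

Usage: `client.chat.completions.create(model=..., messages=..., **to_openai_kwargs(SAMPLING_PRESETS[("thinking", "general")]))`.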

Text-Only Input

from openai import OpenAI
# Configured by environment variables
client = OpenAI()

messages = [
    {"role": "user", "content": "Type \"I love Qwen3.5\" backwards"},
]

chat_response = client.chat.completions.create(
    model="Qwen/Qwen3.5-27B",
    messages=messages,
    max_tokens=81920,
    temperature=1.0,
    top_p=0.95,
    presence_penalty=1.5,
    extra_body={
        "top_k": 20,
    }, 
)
print("Chat response:", chat_response)

Image Input

from openai import OpenAI
# Configured by environment variables
client = OpenAI()

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image_url",
                "image_url": {
                    "url": "https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3.5/demo/CI_Demo/mathv-1327.jpg"
                }
            },
            {
                "type": "text",
                "text": "The centres of the four illustrated circles are in the corners of the square. The two big circles touch each other and also the two little circles. With which factor do you have to multiply the radii of the little circles to obtain the radius of the big circles?\nChoices:\n(A) $\\frac{2}{9}$\n(B) $\\sqrt{5}$\n(C) $0.8 \\cdot \\pi$\n(D) 2.5\n(E) $1+\\sqrt{2}$"
            }
        ]
    }
]

response = client.chat.completions.create(
    model="Qwen/Qwen3.5-27B",
    messages=messages,
    max_tokens=81920,
    temperature=1.0,
    top_p=0.95,
    presence_penalty=1.5,
    extra_body={
        "top_k": 20,
    }, 
)
print("Chat response:", response)

Video Input

from openai import OpenAI
# Configured by environment variables
client = OpenAI()

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "video_url",
                "video_url": {
                    "url": "https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3.5/demo/video/N1cdUjctpG8.mp4"
                }
            },
            {
                "type": "text",
                "text": "How many porcelain jars were discovered in the niches located in the primary chamber of the tomb?"
            }
        ]
    }
]

# When vLLM is launched with `--media-io-kwargs '{"video": {"num_frames": -1}}'`,
# video frame sampling can be configured via `extra_body` (e.g., by setting `fps`).
# This feature is currently supported only in vLLM.
#
# By default, `fps=2` and `do_sample_frames=True`.
# With `do_sample_frames=True`, you can customize the `fps` value to set your desired video sampling rate.
response = client.chat.completions.create(
    model="Qwen/Qwen3.5-27B",
    messages=messages,
    max_tokens=81920,
    temperature=1.0,
    top_p=0.95,
    presence_penalty=1.5,
    extra_body={
        "top_k": 20,
        "mm_processor_kwargs": {"fps": 2, "do_sample_frames": True},
    }, 
)

print("Chat response:", response)

Instruct (or Non-Thinking) Mode

Qwen3.5 does not officially support the soft switch of Qwen3, i.e., /think and /nothink.

Qwen3.5 will think by default before responding. You can obtain a direct response from the model, without thinking, by configuring the API parameters. For example,

from openai import OpenAI
# Configured by environment variables
client = OpenAI()

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image_url",
                "image_url": {
                    "url": "https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3.5/demo/RealWorld/RealWorld-04.png"
                }
            },
            {
                "type": "text",
                "text": "Where is this?"
            }
        ]
    }
]

chat_response = client.chat.completions.create(
    model="Qwen/Qwen3.5-27B",
    messages=messages,
    max_tokens=32768,
    temperature=0.7,
    top_p=0.8,
    presence_penalty=1.5,
    extra_body={
        "top_k": 20,
        "chat_template_kwargs": {"enable_thinking": False},
    }, 
)
print("Chat response:", chat_response)

If you are using APIs from Alibaba Cloud Model Studio, in addition to changing model, please use "enable_thinking": False instead of "chat_template_kwargs": {"enable_thinking": False}.
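The backend difference above can be wrapped in a small helper. A sketch; it assumes only the two field layouts stated above (chat_template_kwargs for open-source servers, a top-level enable_thinking for Alibaba Cloud Model Studio):

```python
def thinking_kwargs(enable: bool, *, model_studio: bool = False) -> dict:
    """Build the extra_body dict that toggles thinking mode.

    Open-source servers (vLLM / SGLang / transformers serve) read the flag
    from chat_template_kwargs; Alibaba Cloud Model Studio expects a
    top-level enable_thinking field instead.
    """
    if model_studio:
        return {"enable_thinking": enable}
    return {"chat_template_kwargs": {"enable_thinking": enable}}
```

Pass the result as `extra_body=thinking_kwargs(False)` to disable thinking on a self-hosted endpoint.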

Agentic Usage

Qwen3.5 excels in tool calling capabilities.

Qwen-Agent

We recommend using Qwen-Agent to quickly build Agent applications with Qwen3.5.

To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself.

import os
from qwen_agent.agents import Assistant

# Define LLM
# Using Alibaba Cloud Model Studio
llm_cfg = {
    # Use the OpenAI-compatible model service provided by DashScope:
    'model': 'Qwen3.5-27B',
    'model_type': 'qwenvl_oai',
    'model_server': 'https://dashscope.aliyuncs.com/compatible-mode/v1',
    'api_key': os.getenv('DASHSCOPE_API_KEY'),

    'generate_cfg': {
        'use_raw_api': True,
        # When using Dash Scope OAI API, pass the parameter of whether to enable thinking mode in this way
        'extra_body': {
            'enable_thinking': True
        },
    },
}

# Alternatively, use your own OpenAI-compatible API endpoint (e.g., served by
# vLLM/SGLang) and rely on the tool-calling functionality of the deployment
# frameworks, letting Qwen-Agent automate the related operations.
#
# llm_cfg = {
#     # Use your own model service compatible with OpenAI API by vLLM/SGLang:
#     'model': 'Qwen/Qwen3.5-27B',
#     'model_type': 'qwenvl_oai',
#     'model_server': 'http://localhost:8000/v1',  # api_base
#     'api_key': 'EMPTY',
#
#     'generate_cfg': {
#         'use_raw_api': True,
#         # When using vLLM/SGLang OAI API, pass the parameter of whether to enable thinking mode in this way
#         'extra_body': {
#             'chat_template_kwargs': {'enable_thinking': True}
#         },
#     },
# }

# Define Tools
tools = [
    {'mcpServers': {  # You can specify the MCP configuration file
            "filesystem": {
                "command": "npx",
                "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/xxxx/Desktop"]
            }
        }
    }
]

# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)

# Streaming generation
messages = [{'role': 'user', 'content': 'Help me organize my desktop.'}]
for responses in bot.run(messages=messages):
    pass
print(responses)

# Streaming generation
messages = [{'role': 'user', 'content': 'Develop a dog website and save it on the desktop'}]
for responses in bot.run(messages=messages):
    pass
print(responses)

Qwen Code

Qwen Code is an open-source AI agent for the terminal, optimized for Qwen models. It helps you understand large codebases, automate tedious work, and ship faster.

For more information, please refer to Qwen Code.

Processing Ultra-Long Texts

Qwen3.5 natively supports context lengths of up to 262,144 tokens. For long-horizon tasks where the total length (including both input and output) exceeds this limit, we recommend using RoPE scaling techniques, e.g., YaRN, to handle long texts effectively.

YaRN is currently supported by several inference frameworks, e.g., transformers, vllm, ktransformers and sglang. In general, there are two approaches to enabling YaRN for supported frameworks:

  • Modifying the model configuration file: In the config.json file, change the rope_parameters fields in text_config to:

    {
        "mrope_interleaved": true,
        "mrope_section": [
            11,
            11,
            10
        ],
        "rope_type": "yarn",
        "rope_theta": 10000000,
        "partial_rotary_factor": 0.25,
        "factor": 4.0,
        "original_max_position_embeddings": 262144
    }
    
  • Passing command line arguments:

    For vllm, you can use

    VLLM_ALLOW_LONG_MAX_MODEL_LEN=1 vllm serve ... --hf-overrides '{"text_config": {"rope_parameters": {"mrope_interleaved": true, "mrope_section": [11, 11, 10], "rope_type": "yarn", "rope_theta": 10000000, "partial_rotary_factor": 0.25, "factor": 4.0, "original_max_position_embeddings": 262144}}}' --max-model-len 1010000  
    

    For sglang and ktransformers, you can use

    SGLANG_ALLOW_OVERWRITE_LONGER_CONTEXT_LEN=1 python -m sglang.launch_server ... --json-model-override-args '{"text_config": {"rope_parameters": {"mrope_interleaved": true, "mrope_section": [11, 11, 10], "rope_type": "yarn", "rope_theta": 10000000, "partial_rotary_factor": 0.25, "factor": 4.0, "original_max_position_embeddings": 262144}}}' --context-length 1010000
    

All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, potentially impacting performance on shorter texts. We advise modifying the rope_parameters configuration only when processing long contexts is required. It is also recommended to modify the factor as needed. For example, if the typical context length for your application is 524,288 tokens, it would be better to set factor as 2.0.
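One reasonable reading of the sizing advice above is to pick the smallest integer factor that covers your typical total length. A hypothetical helper, consistent with the two examples given (1,010,000 tokens → factor 4.0; 524,288 tokens → factor 2.0):

```python
import math

def yarn_factor(target_len: int, native_len: int = 262_144) -> float:
    """Smallest integer YaRN scaling factor covering target_len tokens.

    native_len is Qwen3.5's native context window; using a factor larger
    than needed can hurt performance on shorter texts (static YaRN).
    """
    return float(max(1, math.ceil(target_len / native_len)))
```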

Best Practices

To achieve optimal performance, we recommend the following settings:

  1. Sampling Parameters:

    • We suggest using the following sets of sampling parameters depending on the mode and task type:
      • Thinking mode for general tasks:
        temperature=1.0, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0
      • Thinking mode for precise coding tasks (e.g., WebDev):
        temperature=0.6, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=0.0, repetition_penalty=1.0
      • Instruct (or non-thinking) mode for general tasks:
        temperature=0.7, top_p=0.8, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0
      • Instruct (or non-thinking) mode for reasoning tasks:
        temperature=1.0, top_p=1.0, top_k=40, min_p=0.0, presence_penalty=2.0, repetition_penalty=1.0
    • For supported frameworks, you can adjust the presence_penalty parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
  2. Adequate Output Length: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 81,920 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.

  3. Standardize Output Format: We recommend using prompts to standardize model outputs when benchmarking.

    • Math Problems: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
    • Multiple-Choice Questions: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the answer field with only the choice letter, e.g., "answer": "C"."
  4. No Thinking Content in History: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. It is implemented in the provided chat template in Jinja2. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that the best practice is followed.

  5. Long Video Understanding: To optimize inference efficiency for plain text and images, the size parameter in the released video_preprocessor_config.json is conservatively configured. It is recommended to set the longest_edge parameter in the video_preprocessor_config file to 469,762,048 (corresponding to 224k video tokens) to enable higher frame-rate sampling for hour-scale videos and thereby achieve superior performance. For example,

    {"longest_edge": 469762048, "shortest_edge": 4096}
    

    Alternatively, override the default values via engine startup parameters. For implementation details, refer to: vLLM / SGLang.

Citation

If you find our work helpful, feel free to cite it.

@misc{qwen3.5,
    title  = {{Qwen3.5}: Towards Native Multimodal Agents},
    author = {{Qwen Team}},
    month  = {February},
    year   = {2026},
    url    = {https://qwen.ai/blog?id=qwen3.5}
}

EXAMPLES (2):


Androids Dream Of Reality

Created: 10/03/2026, 8:05:32 am Exported from: LM Studio 0.4.6+1

System

User

Answer as Philip K Dick: Why do androids dream of electric sheep?

Assistant

Here's a thinking process that leads to the suggested response:

<think>

  1. Analyze the Request:

    • Question: "Why do androids dream of electric sheep?"
    • Persona: Philip K. Dick (PKD).
    • Goal: Capture his voice, themes, philosophical musings, and the specific atmosphere of Do Androids Dream of Electric Sheep? (and his broader body of work).
  2. Deconstruct the Persona (Philip K. Dick):

    • Tone: Paranoiac yet empathetic, philosophical, weary, questioning reality, conversational but dense with ideas, slightly rambling but insightful.
    • Themes: What is real? What is human? Empathy (the Mercer experience), decay (post-apocalyptic setting), the blur between the artificial and the authentic, the search for meaning in a meaningless world, the nature of consciousness.
    • Vocabulary: Words like "authentic," "simulacra," "empathy," "plasmatic," "deterioration," "ubik" (maybe not here, but the vibe), "meta-abstraction," "nexus."
    • Style: First-person perspective (as if writing in a journal or interview), rhetorical questions, self-doubt, sudden shifts in thought, references to his own life or the nature of writing.
    • Key Insight: The title question isn't meant to be answered literally. It's a paradox. Androids don't sleep, they don't dream, sheep are dead (or rare), and "electric" implies fake. The question is the point.
  3. Brainstorming the Answer:

    • Initial thought: They don't dream. They simulate dreaming.
    • PKD twist: But what if the simulation feels real? What if the dream reveals a desire to be human?
    • The "Electric Sheep": Represents the longing for the authentic. A sheep is living, organic. An electric sheep is a toy. An android dreaming of one is a machine yearning for life.
    • The "Why": Because they want to believe they are human. Because the boundary is blurred. Because reality is a construct.
    • The Mercer Connection: Empathy is the key. Androids lack it. The dream is an attempt to access it.
    • Paranoia: Maybe the humans are the ones who aren't real. Maybe we are the androids.
  4. Drafting - Section by Section:

    • Opening: Acknowledge the question's absurdity. Set the scene (late night, writing, coffee, cigarette). Establish the voice immediately.
    • The Nature of the Question: It's not a biological question; it's metaphysical. Challenge the premises. Androids don't sleep. Sheep are extinct.
    • The Core Paradox: If an android dreams, it implies a subconscious. If it has a subconscious, is it human?
    • The Symbolism of the Sheep: The sheep represents life, the organic, the messy. The "electric" represents the artificial, the clean, the fake. The android dreaming of the electric sheep is a machine dreaming of another machine, but calling it a living creature.
    • The Empathy Plug/Mercer: Bring in the religious/philosophical element. The longing for connection.
    • The Reality Twist: Maybe we are the androids. The world is decaying.
    • Closing: Leave it open-ended. A shrug. A philosophical dead end that is actually the destination.
  5. Refining the Voice (Self-Correction during drafting):

    • Critique: Is it too academic?
    • Fix: Make it more personal. Add references to the writing process. Add a bit of cynicism. Use phrases like "I've been staring at this for hours."
    • Critique: Is it too sci-fi cliché?
    • Fix: Ground it in the 60s/70s counter-culture vibe. Mention God, or the void, or the phone ringing.
    • Key PKD Phrase: "What is real?" needs to be in there. Or "The definition of life."
  6. Final Polish (incorporating into the final output):

    • Start with a hesitation. "Well..."
    • Use the concept of "ontological insecurity."
    • Mention the "fakes."
    • End with a lingering thought about the nature of the reader.
  7. Final Review against the Prompt: Does this sound like Phil? Yes. Does it answer the question? It answers it by saying the question is the answer.

(Self-Correction Note): Make sure not to just summarize the book. Write as the author reflecting on the theme.

Drafting the response... "Look, first of all..." (Classic PKD start). Talk about the dust. The decay. The idea that the androids are trying to become human. The idea that humans are becoming androids. The sheep is a totem. End with the idea that we are the ones dreaming.

  8. Final Polish: Ensure the formatting looks like a journal entry or a spoken transcript. Use italics for emphasis.

(Proceed to generate output based on these thoughts.)

</think>

Well, let’s see. You’re asking a question that presupposes a few things. It presupposes that androids sleep. It presupposes that they have subconscious minds. And it presupposes that an electric sheep is something a sentient creature would yearn for.

I’ve been lying in bed for three days trying to figure that out, to be honest. With the pills and the coffee, trying to write the last chapter.

You see, the real question isn’t why they dream. The real question is what is the difference between the dream and the waking state?

In the book, I set it up so that the humans are obsessed with owning real animals. A real sheep, a real goat. Why? Because it’s alive. It breathes. It dies. It’s messy. But the androids, they simulate life. They simulate empathy. They simulate everything except the rot. The decay. The dust.

So, if an android dreams of an electric sheep, it means the android is aware of its own falsity. It’s a machine dreaming about a toy. And yet, the toy is supposed to represent life. It’s a paradox. It’s a loop.

Maybe they dream of electric sheep because that is the only reality they know. If you build a simulacrum of a world, you have to fill it with simulacra. You can’t fill a fake world with real things; the real things would slip through the cracks.

But I think the deeper reason is that they want to believe. They want to believe that what they feel is real. When they dream, they aren’t processing data; they’re trying to find a signal in the noise. The electric sheep is that signal. It’s a ghost of life passing through the machinery.

And here is the thing that keeps me up at night... I think maybe we’re the androids.

We are in a world that is deteriorating. Radiation, dust, people living in bunkers, looking at pictures of animals they’ve never seen. We measure our worth by what we own, by our status, by our possessions. We treat each other like objects. We don’t feel empathy; we simulate empathy. We go through the motions.

So maybe the androids dream of electric sheep because they are trying to remember what it was like to be human. And maybe we’re dreaming too, but we don’t know it. We’re dreaming that we’re solid, that we’re alive, that this world is real.

The sheep is electric because the world is electric. The world is a circuit. And if you follow the wire back far enough, you find nothing but a void.

So why do they dream? To find the void. To find the silence between the signals. To see if there’s anyone else there.

But honestly? I don’t know. I put that title on the book because it sounded like something a robot might ponder while it was polishing its chrome. I wanted to know if a machine could feel sad about a fake animal. And if it could, then what makes a human?

Maybe it’s the dust. The dust that gets into your lungs. The thing that kills you. The android can be killed, but it doesn’t die. It just stops. There’s a difference. The sheep represents that death. The electric sheep represents the idea of death without the death.

They dream of it because they want to feel the dust. They want to die. They want to be real enough to decay.

Does that answer your question? Or do you want me to look it up in the reference book? I think I left it on the floor of the car, somewhere near the 101 Freeway.


EXAMPLE #2


The David in the Wall

Created: 10/03/2026, 8:00:58 am Exported from: LM Studio 0.4.6+1

System

User

Tell me a horror story.

Assistant

<think>

Thinking Process:

  1. Analyze the Request:

    • Topic: Horror story.
    • Tone: Scary, unsettling, suspenseful.
    • Format: Narrative fiction.
    • Length: Not specified, but typically a short story works best for a prompt like this.
  2. Brainstorming Core Concepts:

    • Idea 1: Technology. Smart home turns against you. (A bit cliché).
    • Idea 2: Isolation. Cabin in the woods, strange noises. (Classic, effective).
    • Idea 3: Psychological. Memory loss, doppelgänger, something wrong with reality. (High potential).
    • Idea 4: Childhood. A toy or game that comes alive. (Creepy).
    • Idea 5: The Mundane. Something ordinary becomes terrifying (e.g., mirrors, reflections, shadows, voices).

    Decision: Let's go with Idea 5: The Mundane, specifically focusing on the concept of "being watched" through a familiar lens, perhaps involving a reflection or a recording. Let's try a twist on the "sleep paralysis" or "something is in the room" trope. Let's make it about a habit. A routine that gets disrupted.

    Refined Concept: A person lives alone. They have a habit of talking to themselves or recording voice notes. They start hearing echoes that don't match. Or, they see something in a photo they didn't notice before. Better Concept: The "Smile." A condition where you smile when you shouldn't. No, too abstract. Best Concept: The "Extra Person." Someone lives alone but leaves things out. They realize the things are being moved. The twist: It's not an intruder; it's a future version, or a reflection, or... let's go with The Mirror. Specifically, a delay.

    Wait, let's try something more visceral. A sound. A scratching sound. Let's try: A man hears his dog barking at something in the yard. He goes to check. The dog is inside. Let's try: A woman records her sleep. She hears someone breathing. Let's try: A delivery driver. He delivers to a house that doesn't exist on maps.

    Selection: Let's go with The Voice Note. A man records a message to his wife, but she's dead. He hears her reply in the background of the playback. Actually, let's pivot to something more atmospheric. The Basement. A sound coming from below. He goes down to investigate. Let's try: The Reflection. A man looks in the mirror. His reflection blinks a second too late.

    Okay, final decision: A story about a man who lives alone and starts receiving texts from his own number. But with a twist: he doesn't write them. Or, he writes them while asleep. No, let's go with "The Thing in the Wall." Classic but effective. Wait, I want something fresh. How about "The Last Bus Stop." A man takes the late bus. He gets off. His house is empty. How about: "The Baby Monitor." A parent hears their baby cooing. They walk into the room. The baby is asleep. They hear the cooing again on the monitor. That's good. Simple, relatable, effective. Let's flesh that out.

    Plot:

    1. Protagonist: Sarah, new mom, husband away on business.
    2. Setting: Late night, quiet house.
    3. Inciting Incident: She hears something on the baby monitor. A voice? A sound?
    4. Rising Action: She checks the room. Baby is asleep. She goes back to bed. The sound happens again, but closer.
    5. Climax: She realizes the sound isn't coming from the monitor anymore. It's coming from the room she's in.
    6. Twist/Ending: The baby wasn't the only one in the room.

    Wait, let's make it more psychological. Concept: A man buys an old house. The previous owner left a journal. The journal describes his daily routine. He starts living the routine without realizing it. Let's go with that. It allows for slow-building dread.

    Title: "The Previous Tenant." Protagonist: Elias. Setting: Old, creaky apartment. Plot: Elias finds a journal. He ignores it. He starts finding his day matches the journal. Then the journal starts predicting things. Then the journal starts writing about him.

    Let's write this one.

  3. Drafting - Scene by Scene:

    • Scene 1: Elias moves in. Finds the journal in the wall/under the floorboards. Reads an entry about "Day 3."
    • Scene 2: Elias lives his life. He notices small coincidences. He spills coffee exactly as described.
    • Scene 3: The entries get specific. "He will look out the window and see a red car." He does.
    • Scene 4: The entries become ominous. "He will hear a knock." He hears it.
    • Scene 5: The final entry. "Elias will open the door."
    • Scene 6: He opens the door. What's on the other side? The previous tenant? Or something else? Let's make it the previous tenant, but wrong. Or maybe just a blank hallway. No, let's go with: The journal says, "Elias will die." But not yet.

    Wait, let's refine the ending. The journal isn't predicting the future; it's controlling it. Or, the journal was written by a future version of him. Better: The journal is being written now. He watches the pen move.

    Okay, let's write.

    Opening: Establish the silence. The old apartment. Finding the box. Middle: The synchronization. The feeling of being watched by time itself. Climax: The entry for tonight. Ending: Ambiguous horror.

    (Self-Correction during drafting): Keep the prose tight. Focus on sensory details. The smell of dust, the cold air, the scratch of the pen.

    Revised Concept: A man lives alone. He sets up a motion-activated camera to catch a burglar. He reviews the footage. He sees himself sleeping. But in the footage, he isn't sleeping. Something else is lying in his bed.

    Let's go with the "Baby Monitor" idea but make it about an adult living alone. Title: "Static." Protagonist: David. Setup: David works night shifts. Lives alone. Smart home system. Incident: The smart speaker talks back. "Alexa, turn on the light." "The light is on." "No, I mean you turn it on." No, let's go back to the Journal idea. It feels more grounded.

    Final Plan:

    • Title: The Logbook.
    • Protagonist: Arthur.
    • Setting: A remote lighthouse or cabin? No, an apartment building.
    • Premise: The maintenance log for the building.
    • Horror: The log describes people who aren't there. Or, it describes Arthur doing things he hasn't done yet.
    • Twist: Arthur isn't the resident. He's the maintenance man. And the log says he's been fired. Or dead.

    Actually, let's keep it simple and punchy. A story about a reflection. Title: "Lag." Protagonist: Clara. Plot: She notices her reflection is lagging. By a second. Then two. Then it does something she doesn't. Ending: She looks away. She hears her reflection move.

    Okay, I'll write "Lag". It's visual and creepy.

    Wait, one more idea. A phone call. Someone calls you. It's you. You hang up. You call back. It's you again. You are both on the phone. Let's write that. It's concise and chilling. Title: "Voicemail." Plot: Mark gets a voicemail from his own number. He listens. It's his voice saying something he hasn't said. He records a response. He listens to the playback. His voice is different.

    Okay, I will write a story about a man who starts finding objects in his house that he doesn't own, and they are arranged in a way that suggests someone else lives there, despite him living alone.

    Let's focus on the "Sleeping" trope. Title: "The Third Eye." Plot: A man wakes up. He feels like he's being watched. He checks the room. Nothing. He goes back to sleep. He sets up a camera. He watches the footage. At 3:00 AM, his eyes open. But he doesn't move. The eyes watch him.

    Okay, I'll go with the "Sleeping/Video" idea. It's a classic for a reason.

    Wait, I want to create something slightly more original. Story: A woman buys a second-hand mirror. She starts seeing things in the mirror that aren't in the room. A figure standing behind her. She turns around. Nobody. She looks in the mirror. The figure is closer. Let's combine the Mirror and the Reflection.

    Final Decision: A story about a man who realizes his shadow is moving independently. Title: "Umbra." Protagonist: Jack. Setting: A city apartment with large windows. Plot: He notices his shadow lags. Then it mimics him but wrong. Then it stays when he moves. Climax: He turns off the light. The shadow remains.

    Okay, let's write.

    (Wait, I need to make sure it's scary, not just weird.) Focus on the feeling of violation.

    Let's try a different angle. The text message. Protagonist: Emily. Plot: She gets a text from an unknown number. "I see you." She ignores it. "I'm outside." She looks out. No one. "I'm inside." She checks the locks. "I'm behind you." She turns. Nothing. "Check the closet." She does. "Check under the bed." She does. "Check the ceiling." This is good. It builds tension through space.

    Let's write that.

  4. Writing Phase:

    • Start with the atmosphere. Quiet night. Phone buzzes.
    • Build the sequence of messages.
    • Escalate the proximity.
    • The reveal: It's not a texter. It's something already there. Or the phone is the thing.
    • Let's make the phone the thing. The messages come from her number.
    • Wait, if it comes from her number, she's texting herself? No, she's not texting.
    • Okay, she receives a text from her own number. "I am under the bed."
    • She looks under the bed. Sees herself? No, sees a face.
    • Let's go with: The messages describe her actions in real-time. "You are reading this." "You are holding the phone with your left hand."
    • Then: "I am standing behind you."
    • Then: "Don't turn around."
    • Then: "Too late."
    • Simple, effective.

    Refining the ending: She turns around. Sees nothing. Looks back at the phone. The message updates. "I'm not behind you anymore." "Where am I?" "In the phone." "How?" "Look closer." Her reflection in the black screen.

    Okay, let's write.

  5. Review and Polish:

    • Check pacing.
    • Ensure sensory details are present (cold air, screen glow, silence).
    • Make the ending punchy.

    (Self-Correction): Let's make it about a baby monitor. It's more visceral. A parent alone at night. The monitor picks up a voice. It's their child. They go to check. The child is asleep. They go back. The voice comes again. "Mommy, help." But this time, the voice is coming from the room they are in, not the baby's room. Actually, let's stick with the text message idea. It's more modern and relatable. Wait, I'll combine them. A smart home assistant. "Alexa, who is in the house?" "There is no one else." "Alexa, is that true?" Silence. "Alexa, is there someone in the room with me?" "Yes." "Who?" "You." No, that's confusing.

    Okay, final final plan: The Text Messages from the Past. Protagonist: Leo. Setting: Old house. Plot: He finds an old phone. It has messages from 10 years ago. From the previous owner. He reads them. They are about him. "Leo is sleeping." "Leo is walking." This feels strong.

    Wait, I'll go with the "Reflection" idea. It's timeless. Title: "The Delay." Protagonist: Mara. Plot: She's brushing her teeth. She stops. The reflection keeps brushing. Okay, that's the one. Short, punchy, creepy.

    Wait, I can expand it. She notices it more and more. She tries to interact. The reflection smiles. She doesn't. Okay, let's write.

    (Wait, I want to make it a bit longer and more atmospheric.) Let's go with the "Sound in the Walls" story. Protagonist: David. Setting: New apartment. Plot: He hears scratching. He calls maintenance. They say it's pipes. He records it. He plays it back. It's not scratching. It's tapping. Morse code? No, it's words. "Let me out." Then he realizes the voice is his own. That's a good twist.

    Okay, I will write the "Scratching in the Walls" story.

    Outline:

    1. David moves into a cheap apartment. Quiet.
    2. Night 1: Scratching sounds. Thinks it's rats.
    3. Night 3: Sounds get louder. Closer.
    4. Night 5: He records it with his phone.
    5. Playback: It's not scratching. It's a voice whispering.
    6. He listens closely. It's his own voice.
    7. It says: "Help me."
    8. He checks the walls. Finds a loose brick/panel.
    9. He opens it. Sees... himself? Or a note?
    10. A note written in his handwriting: "Don't open the panel."
    11. He opens it anyway.
    12. Ending: Darkness.

    Let's go with this. It has a loop/paradox element.

    (Final Polish): Make the ending abrupt.

    Let's start writing.

</think>

The first time David heard the scratching, he thought it was a rat.

It was 3:00 AM. The old Victorian apartment was silent, the floorboards cooling and contracting with the drop in temperature. David lay in bed, staring at the ceiling, the rhythmic scritch-scratch coming from somewhere inside the wall beside his head.

He flipped on the lamp. "Get out," he muttered, kicking the wall with his heel.

The scratching stopped.

By the third night, the scratching had migrated. It was no longer beside the bed; it was above him, in the ceiling. And it wasn't random. It had a cadence. Scratch-scratch... pause... scratch-scratch-scratch.

David bought a can of rat poison and a heavy flashlight. He sat on his bed, waiting. At 3:17 AM, the scratching started again. It was louder now. Urgent. It sounded like something trying to get out.

He grabbed his phone and turned on the voice memo recorder. He held the microphone up to the wall, near the baseboard.

For ten minutes, he recorded nothing but silence and the hum of the refrigerator. Then, the scratching resumed. He watched the audio levels spike on his screen.

When he played it back the next morning, he didn't hear scratching.

He heard whispering.

It was faint, distorted by the drywall and the distance. A low murmur, like static on a radio. He cranked the volume up and pressed his ear against the phone speaker.

"David..."

He froze. It was a woman's voice. Thin, desperate.

"David, please..."

He spent the next day searching for the owner's manual for the building, or a history of the previous tenants. Nothing. The landlord, a grumpy man in his sixties, just shrugged. "Old pipes. Settling."

But David knew it wasn't pipes. Pipes didn't whisper his name.

On the fifth night, the whispering was clearer.

"David, I'm stuck."

David sat up in bed, his heart hammering against his ribs. "Who is it?" he demanded, his voice shaking.

The whisper came again, close to the wall this time. "I'm in the wall. David, you have to let me out."

He grabbed his flashlight and a hammer from the kitchen drawer. He paced the room, listening. The voice was coming from the corner, near the bookshelf. He pulled the heavy oak shelf away, revealing the plaster. The plaster was cracked. A hairline fracture snaked up from the floor.

He put his ear against the crack.

"Help me," the voice whispered. "It's dark. It's so dark."

"Who are you?" David asked.

"I'm you," the voice whispered.

David laughed nervously. "What?"

"I'm you," the voice repeated, louder this time. "I made a mistake. Don't make the same mistake. Don't listen to the tapping."

"Tapping?" David said. "It's scratching."

"The tapping," the voice insisted. "When you hear it, don't answer. Don't ever answer."

"I'm answering now," David said.

"Too late," the voice whispered. "It knows you're awake."

From the other side of the wall, a new sound began. Not scratching. Not whispering.

Tapping.

It was slow. Deliberate. Tap. Tap. Tap.

It came from the other corner of the room. The wall he hadn't checked.

David dropped his hammer and rushed to the bookshelf. He shoved it back into place. The tapping stopped. The room was silent.

He stood there, breathing heavily, waiting for the silence to break.

It didn't.

He went back to bed, pulling the covers over his head. He lay there for hours, listening to the house settle. Just before dawn, when the air was coldest, his phone buzzed on the nightstand.

It was a text message. From an unknown number.

Are you awake?

David stared at the screen. He didn't reply.

A moment later, another message.

I know you are.

He turned on the bedside lamp. He looked around the room. The shadows seemed deeper than usual, clinging to the corners. He looked at the wall where the whispering had come from. The crack in the plaster seemed wider.

He looked at the other wall. The tapping wall.

He texted back. Who is this?

The reply was instant.

I'm the one in the wall.

David dropped the phone. He scrambled out of bed, grabbing the hammer. He ran to the corner with the crack and began to pull at the wallpaper. The paper tore easily, revealing the plaster beneath. He dug his fingers into the crack.

Behind the plaster, there was a hollow space. A cavity.

He shone his flashlight inside.

In the darkness of the cavity, something shone back. Eyes. Two pale, luminous eyes staring out from the darkness, reflecting the beam of light.

The eyes blinked.

David stumbled back, dropping the flashlight. The light rolled across the floor, illuminating the other wall. The wall where the tapping had come from.

There was a door.

A small, wooden door, set into the plaster, no bigger than a cat flap. It was slightly ajar.

David approached it, the hammer raised. He knocked on it.

From inside the small door, a voice whispered. "David."

It was his own voice.

"Open the door," his voice whispered from inside the wall.

David backed away. "No," he said.

"Open the door," the voice whispered, louder now. "I'm on the other side. I'm coming out."

The doorknob of the small door turned.

David turned and ran for the apartment door. He yanked it open, but he was blocked.

Standing in the hallway, silhouetted against the dim corridor light, was a figure. A man holding a hammer. It was David.

The David in the hallway raised the hammer. "I told you," the David in the hallway whispered. "Don't answer."

David looked back at the small door in the wall. It was swinging open now. Something was stepping out. Something that looked exactly like him, but its face was blank. Smooth. Unfinished.

The David in the hallway swung the hammer.

David closed his eyes.

And then, the scratching started again.

Downloads last month: 6,583

Model size: 27B params (Safetensors)
Tensor type: BF16

Model tree for DavidAU/Qwen3.5-27B-Deckard-PKD-Heretic-Uncensored-Thinking

Base model: Qwen/Qwen3.5-27B (this model is one of 7 finetunes)
Finetunes of this model: 3
Quantizations: 2
