---
license: apache-2.0
base_model: Jackrong/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled
language:
- en
- zh
tags:
- qwen3.5
- gptq
- int4
- quantized
- reasoning
- multimodal
- vision
library_name: transformers
pipeline_tag: image-text-to-text
---

# Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-GPTQ-int4

This is a **GPTQ INT4 quantized** version of [Jackrong/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled](https://huggingface.co/Jackrong/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled). Please refer to the original model card for details on the model architecture, training data, and capabilities.

> **Note**: While the original fine-tuning focused on text-only reasoning tasks, this model inherits multimodal capabilities from the base Qwen3.5-27B. The vision encoder is preserved and functional for image understanding tasks.

## Quantization Details

- **Method**: GPTQ (4-bit INT4, W4A16)
- **Group Size**: 128
- **Calibration**: 1024 samples from the C4 dataset
- **Vision Encoder**: Preserved (not quantized)
- **MTP Module**: Preserved (not quantized)

## Usage with vLLM

### Text-only

```python
from vllm import LLM, SamplingParams

llm = LLM(
    model="codgician/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-GPTQ-int4",
    trust_remote_code=True,
    max_model_len=4096,
    gpu_memory_utilization=0.9,
)

sampling_params = SamplingParams(temperature=0.7, max_tokens=2048)

prompt = "Explain the difference between TCP and UDP protocols."
outputs = llm.generate([prompt], sampling_params)
print(outputs[0].outputs[0].text)
```

### With Image (Multimodal)

```python
from vllm import LLM, SamplingParams

llm = LLM(
    model="codgician/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-GPTQ-int4",
    trust_remote_code=True,
    max_model_len=4096,
    gpu_memory_utilization=0.9,
)

sampling_params = SamplingParams(temperature=0.7, max_tokens=256)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "https://example.com/image.jpg"}},
            {"type": "text", "text": "What is in this image?"},
        ],
    }
]

outputs = llm.chat(messages, sampling_params)
print(outputs[0].outputs[0].text)
```

## Hardware Requirements

| Precision | VRAM (Approx.) |
|-----------|----------------|
| INT4 GPTQ | ~18 GB |

## Acknowledgements

- Original model by [Jackrong](https://huggingface.co/Jackrong)
- Base model: [Qwen/Qwen3.5-27B](https://huggingface.co/Qwen/Qwen3.5-27B)
- Quantization performed using [GPTQModel](https://github.com/ModelCloud/GPTQModel)

## License

Apache 2.0 (inherited from original model)
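The ~18 GB figure in the Hardware Requirements table can be sanity-checked with back-of-the-envelope arithmetic. The sketch below is an estimate only: the vision-encoder and KV-cache sizes are assumptions for illustration, not measured values, and real usage also depends on vLLM's `gpu_memory_utilization` setting.

```python
# Back-of-the-envelope VRAM estimate for a 27B model quantized to INT4
# with group size 128. Component sizes marked "assumed" are illustrative.

GB = 1e9

params = 27e9            # language-model parameters (27B)
bits_per_weight = 4      # GPTQ INT4
scale_bits = 16 / 128    # one fp16 scale per group of 128 weights

weights_gb = params * (bits_per_weight + scale_bits) / 8 / GB

vision_gb = 2.0          # assumed: unquantized fp16 vision encoder
kv_cache_gb = 2.0        # assumed: KV cache at max_model_len=4096

total_gb = weights_gb + vision_gb + kv_cache_gb
print(f"quantized weights: ~{weights_gb:.1f} GB")
print(f"estimated total:   ~{total_gb:.1f} GB")
```

The quantized weights alone come to roughly 14 GB; the unquantized vision encoder and KV cache plausibly account for the remainder of the ~18 GB figure.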