---
license: apache-2.0
library_name: transformers
pipeline_tag: image-text-to-text
base_model: Qwen/Qwen3.5-27B
base_model_relation: quantized
tags:
- transformers
- safetensors
- qwen3_5
- quantized
- nvfp4
- fp4
- 4-bit
- vllm
- llm-compressor
- image-text-to-text
- conversational
datasets:
- neuralmagic/calibration
---

# Qwen3.5-27B-NVFP4

This is a quantized version of [Qwen/Qwen3.5-27B](https://huggingface.co/Qwen/Qwen3.5-27B). The model accepts text and images as inputs and generates text as output.

The weights and activations were quantized to FP4 using [llm-compressor](https://github.com/vllm-project/llm-compressor) with 512 calibration samples from [neuralmagic/calibration](https://huggingface.co/datasets/neuralmagic/calibration), reducing the model size from 51.8 GB to 18.4 GB (~2.8x reduction) while maintaining 99.1% average accuracy recovery.

---

## Inference

As of 2/27/2026, this model is supported in vLLM nightly. To serve the model:

```bash
vllm serve Kbenkhaled/Qwen3.5-27B-NVFP4 \
  --reasoning-parser qwen3 \
  --enable-prefix-caching
```

---

## Evaluation

Evaluated with [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness), 0-shot, thinking mode ON.

| Benchmark | Qwen3.5-27B | Qwen3.5-27B-NVFP4 (this model) | Recovery |
|---|---|---|---|
| GPQA Diamond | 80.30% | 79.29% | 98.7% |
| IFEval | 95.08% | 93.88% | 98.7% |
| MMLU-Redux | 93.90% | 94.32% | 100.4% |
| **Average** | **89.76%** | **89.16%** | **99.1%** |
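
---

## Example request

Once serving, vLLM exposes an OpenAI-compatible chat API. The sketch below builds a multimodal (text + image) request payload for that API; it assumes the server is running locally on vLLM's default port 8000, and the image URL is a placeholder to replace with your own. Sending the request (commented out) requires a running server.

```python
import json

# Assumed local endpoint: vLLM's OpenAI-compatible server listens on
# port 8000 by default and serves /v1/chat/completions.
API_URL = "http://localhost:8000/v1/chat/completions"

def build_request(prompt: str, image_url: str) -> dict:
    """Build an OpenAI-style chat payload mixing text and one image."""
    return {
        "model": "Kbenkhaled/Qwen3.5-27B-NVFP4",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
        "max_tokens": 512,
    }

# Placeholder image URL -- substitute a real one.
payload = build_request("Describe this image.", "https://example.com/cat.png")
body = json.dumps(payload).encode("utf-8")

# To actually send the request (server above must be running):
# import urllib.request
# req = urllib.request.Request(
#     API_URL, data=body, headers={"Content-Type": "application/json"}
# )
# print(urllib.request.urlopen(req).read().decode())
print(json.dumps(payload, indent=2))
```

Any OpenAI-compatible client (e.g. the `openai` Python package pointed at `http://localhost:8000/v1`) can send the same payload.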