NVIDIA-Nemotron-Nano-9B-v2-quantized.w4a16

Model Overview

  • Model Architecture: NemotronHForCausalLM
    • Input: Text
    • Output: Text
  • Model Optimizations:
    • Weight quantization: INT4
  • Release Date: 10/22/2025
  • Version: 1.0
  • Model Developers: Red Hat (Neural Magic)

Model Optimizations

This model was obtained by quantizing the weights of NVIDIA-Nemotron-Nano-9B-v2 to INT4 data type. This optimization reduces the number of bits per parameter from 16 to 4, reducing the disk size and GPU memory requirements by approximately 75%.

Only the weights of the linear operators within transformer blocks are quantized. Weights are quantized using a symmetric per-group scheme, with group size 64. The GPTQ algorithm is applied for quantization, as implemented in the llm-compressor library.
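
For intuition, the scheme stores one scale per group of 64 consecutive weights along the input dimension and rounds each weight to a 4-bit integer. The sketch below is a minimal round-to-nearest illustration of that storage scheme; it is not the method used to produce this checkpoint (GPTQ additionally corrects rounding error, and the exact integer range and packing are handled by compressed-tensors):

import torch

def quantize_w4_per_group(weight: torch.Tensor, group_size: int = 64):
    # weight: [out_features, in_features]; in_features assumed divisible by group_size
    out_f, in_f = weight.shape
    w = weight.reshape(out_f, in_f // group_size, group_size)
    # Symmetric scheme: one scale per group, zero-point fixed at 0
    scales = w.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8) / 7.0
    q = torch.clamp(torch.round(w / scales), -8, 7)       # signed 4-bit range (assumed [-8, 7])
    w_dq = (q * scales).reshape(out_f, in_f)               # dequantized approximation of the weight
    return q.to(torch.int8), scales.squeeze(-1), w_dq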

Deployment

This model can be deployed efficiently using the vLLM backend, as shown in the example below.

from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "RedHatAI/NVIDIA-Nemotron-Nano-9B-v2-quantized.w4a16"
number_gpus = 1
sampling_params = SamplingParams(temperature=0.6, top_p=0.95, top_k=20, min_p=0, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [{"role": "user", "content": "Give me a short introduction to large language model."}]

prompts = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompts, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)

vLLM also supports OpenAI-compatible serving. See the documentation for more details.
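
For example, after starting a server with `vllm serve RedHatAI/NVIDIA-Nemotron-Nano-9B-v2-quantized.w4a16 --trust-remote-code --mamba_ssm_cache_dtype float32` (the same flags used for evaluation below), any OpenAI client can query it. A minimal sketch, assuming the default port 8000 and a placeholder API key:

from openai import OpenAI

# Point the OpenAI client at the local vLLM server
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="RedHatAI/NVIDIA-Nemotron-Nano-9B-v2-quantized.w4a16",
    messages=[{"role": "user", "content": "Give me a short introduction to large language model."}],
    temperature=0.6,
    top_p=0.95,
    max_tokens=256,
)
print(response.choices[0].message.content)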

Creation

This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below.

from compressed_tensors.quantization import QuantizationScheme, QuantizationArgs, QuantizationType, QuantizationStrategy
from datasets import load_dataset
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import oneshot
from transformers import AutoModelForCausalLM, AutoTokenizer
  
# Load model
model_stub = "nvidia/NVIDIA-Nemotron-Nano-9B-v2"
model_name = model_stub.split("/")[-1]

num_samples = 1024
max_seq_len = 8192

model = AutoModelForCausalLM.from_pretrained(model_stub)

tokenizer = AutoTokenizer.from_pretrained(model_stub)

def preprocess_fn(example):
    return {"text": tokenizer.apply_chat_template(example["messages"], add_generation_prompt=False, tokenize=False)}

# Load and preprocess the calibration dataset
ds = load_dataset("neuralmagic/LLM_compression_calibration", split="train")
ds = ds.map(preprocess_fn)

# Configure the quantization algorithm and scheme
quant_scheme = QuantizationScheme(
    targets=["Linear"],
    weights=QuantizationArgs(
        num_bits=4,
        type=QuantizationType.INT,
        symmetric=True,
        group_size=64,
        strategy=QuantizationStrategy.GROUP,
        observer="mse",
        actorder="weight"
    ),
    input_activations=None,
    output_activations=None,
)

recipe = [
    GPTQModifier(
        ignore=["lm_head", "NemotronHMamba2Mixer"],
        dampening_frac=0.07,
        config_groups={"group_0": quant_scheme},
    )
]

# Apply quantization
oneshot(
    model=model,
    dataset=ds, 
    recipe=recipe,
    max_seq_length=max_seq_len,
    num_calibration_samples=num_samples,
)

# Save to disk in compressed-tensors format
save_path = model_name + "-quantized.w4a16"
model.save_pretrained(save_path)
tokenizer.save_pretrained(save_path)
print(f"Model and tokenizer saved to: {save_path}")

Evaluation

The model was evaluated on a set of popular reasoning tasks (AIME25, Math-500, and GPQA-Diamond) using lighteval v0.11.1.dev0. vLLM v0.11.1rc2.dev191+g80e945298.precompiled was used as the inference engine for all evaluations.

Evaluation details

lighteval

lighteval_model_arguments.yaml

model_parameters:
  model_name: "hosted_vllm/RedHatAI/NVIDIA-Nemotron-Nano-9B-v2-quantized.w4a16"
  base_url: "http://0.0.0.0:8000/v1"
  generation_parameters:
    temperature: 0.6
    min_p: 0.0
    max_new_tokens: 65536
    top_p: 0.95
    seed: 0

# Start the vLLM server
vllm serve RedHatAI/NVIDIA-Nemotron-Nano-9B-v2-quantized.w4a16 \
  --trust-remote-code \
  --mamba_ssm_cache_dtype float32 \
  -tp 1 \
  --port 8000 \
  --gpu-memory-utilization 0.9

# Run the benchmarks
lighteval endpoint litellm lighteval_model_arguments.yaml \
    "lighteval|aime25|0,lighteval|math_500|0,lighteval|gpqa:diamond|0" \
    --output-dir $OUTPUT_DIR \
    --save-details

Accuracy

| Category | Benchmark | NVIDIA-Nemotron-Nano-9B-v2 | NVIDIA-Nemotron-Nano-9B-v2-quantized.w4a16 (this model) | Recovery |
|---|---|---|---|---|
| Reasoning (generation) | AIME 2025 | 61.33 | 58.00 | 94.6% |
| | GPQA diamond | 56.26 | 56.16 | 99.8% |
| | Math-lvl-5 | 96.08 | 96.16 | 100.0% |
| | Average Score | 71.22 | 70.11 | 98.44% |
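
Recovery here is read as the quantized score expressed as a percentage of the baseline score (an assumed definition; small differences from the table are due to rounding). A quick check:

baseline  = {"AIME 2025": 61.33, "GPQA diamond": 56.26, "Math-lvl-5": 96.08}
quantized = {"AIME 2025": 58.00, "GPQA diamond": 56.16, "Math-lvl-5": 96.16}

# Per-benchmark recovery
for task in baseline:
    print(f"{task}: {100 * quantized[task] / baseline[task]:.1f}% recovery")

# Average scores and average recovery
avg_base = sum(baseline.values()) / len(baseline)
avg_quant = sum(quantized.values()) / len(quantized)
print(f"Average: {avg_quant:.2f} vs {avg_base:.2f} -> {100 * avg_quant / avg_base:.2f}% recovery")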