# NVIDIA Nemotron-3-Nano-30B-A3B — IQ4_XS GGUF (RTX 2080 Ti Optimized)
Production-validated IQ4_XS quantization of NVIDIA Nemotron-3-Nano-30B-A3B, benchmarked and tuned for 11GB VRAM GPUs (RTX 2080 Ti / RTX 3060 / etc).
GGUF sourced from bartowski/nvidia_Nemotron-3-Nano-30B-A3B-GGUF. This repo adds real-world benchmarks, production configs, and deployment guides for constrained-VRAM setups.
## Why This Repo?
bartowski provides the quants — we provide the deployment playbook for running this model on a single consumer GPU with real benchmark data, not projections.
- ✅ Validated tool calling (function calls parse correctly)
- ✅ Validated reasoning (chain-of-thought with `<think>` tags)
- ✅ Production Ollama Modelfile included
- ✅ llama-server launch configs with measured VRAM headroom
- ✅ Tested across 168 messages in a multi-agent war room
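The `<think>` reasoning blocks can be separated from the final answer client-side. A minimal sketch (the `split_think` helper name is ours, not part of any library):

```python
import re

def split_think(text: str) -> tuple[str, str]:
    """Split a model response into (reasoning, answer).

    Assumes chain-of-thought is wrapped in <think>...</think> tags,
    as this model emits.
    """
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if not match:
        return "", text.strip()
    reasoning = match.group(1).strip()
    answer = text[match.end():].strip()
    return reasoning, answer

r, a = split_think("<think>15% of 28.50 is 4.275</think>The tip is $4.28.")
print(a)  # The tip is $4.28.
```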
## Model Details
| Property | Value |
|---|---|
| Base Model | NVIDIA Nemotron-3-Nano-30B-A3B |
| Architecture | MoE Hybrid: 23 Mamba-2 + 6 Attention + 128 experts + 1 shared |
| Total Parameters | 31.6B |
| Active Parameters | ~3.2–3.6B (Mixture of Experts) |
| Quantization | IQ4_XS (imatrix) |
| File Size | 16.8 GB |
| Format | GGUF |
| Prompt Format | ChatML (`<\|im_start\|>`) |
| Context Window | Up to 5120 tested stable on 11GB (hardware dependent) |
| License | NVIDIA Open Model License |
## Benchmarks (RTX 2080 Ti, 11GB VRAM)

All benchmarks on: Ryzen 7 3800X (8c/16t) · 32GB DDR4-3600 · RTX 2080 Ti 11GB · Ubuntu 24.04

### Throughput
| Backend | GPU Layers | KV Cache | Context | Flash Attn | tok/s (single) | tok/s (sustained) |
|---|---|---|---|---|---|---|
| Ollama (Q4_K_M) | 28/52 (est.) | default | 4096 | ❌ | 11.04 | — |
| Ollama (IQ4_XS) | 28/52 | default | 4096 | ❌ | 14.49 | — |
| llama-server (IQ4_XS) | 28 | q8_0 | 4096 | ❌ | 22.85 | 19.45–19.69 |
| llama-server (IQ4_XS) | 28 | q8_0 | 4096 | ✅ | 26.25 | 26.21–26.70 |
### VRAM Usage
| Config | VRAM Used | Free | Headroom |
|---|---|---|---|
| Ollama Q4_K_M | 10,449 MiB | 815 MiB | ⚠️ Tight |
| Ollama IQ4_XS | 10,079 MiB | 1,185 MiB | ✅ Good |
| llama-server ctx 2048 (×1 slot) | 10,631 MiB | 633 MiB | ✅ Safe |
| llama-server ctx 3072 (×1 slot) | 10,627 MiB | 637 MiB | ✅ Safe |
| llama-server ctx 4096 (×1 slot) | 10,639 MiB | 625 MiB | ✅ Production |
| llama-server ctx 5120 (×1 slot) | 10,641 MiB | 623 MiB | ✅ Safe |
| llama-server ctx 6144 (×1 slot) | 10,947 MiB | 317 MiB | ⚠️ Tight |
| llama-server ctx 8192 (×1 slot) | 11,085 MiB | 179 MiB | ❌ OOM risk |
| llama-server ctx 4096 (×4 slots) | ~10,953 MiB | 311 MiB | ⚠️ Dangerous |
**Why does context barely affect VRAM?** Nemotron's hybrid Mamba-2 architecture uses fixed-size recurrent state for 46 of 52 layers. Only the 6 attention layers need a KV cache, so VRAM stays nearly flat with context until ~6K, where compute buffers jump.
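A back-of-envelope check makes the flat scaling plausible. The head counts below are illustrative assumptions, not values from the model config; the point is that with only 6 attention layers, the quantized KV cache stays in the tens of MiB:

```python
def kv_cache_mib(ctx: int, n_attn_layers: int = 6, n_kv_heads: int = 8,
                 head_dim: int = 128, bytes_per_elem: float = 1.0) -> float:
    """Rough KV-cache size for the attention layers only.

    K and V each store ctx * n_kv_heads * head_dim elements per layer;
    bytes_per_elem ~1.0 approximates a q8_0 cache. Head counts here are
    assumed for illustration, not read from the GGUF metadata.
    """
    elems = 2 * n_attn_layers * ctx * n_kv_heads * head_dim  # 2 = K + V
    return elems * bytes_per_elem / (1024 ** 2)

for ctx in (2048, 4096, 8192):
    print(ctx, round(kv_cache_mib(ctx)))  # 24, 48, 96 MiB: linear but tiny
```

Even at 8K context the attention KV cache is on the order of 100 MiB, so the observed VRAM cliff comes from compute-buffer resizing, not the cache itself.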
### IQ4_XS vs Q4_K_M (Same Model, Ollama)
| Metric | Q4_K_M | IQ4_XS | Delta |
|---|---|---|---|
| Throughput | 11.04 tok/s | 14.49 tok/s | +31% |
| VRAM | 10,449 MiB | 10,079 MiB | −370 MiB |
| Cold Start | 42.7s | 23.7s | −44% |
| Disk Size | 24 GB | 16.8 GB | −7.2 GB |
| Tool Calling | ✅ | ✅ | — |
| Reasoning | ✅ | ✅ | — |
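The deltas in the table are straightforward to recompute from the measured values:

```python
# Recompute the comparison deltas from the benchmark numbers above.
q4km_tps, iq4xs_tps = 11.04, 14.49
print(f"throughput: {100 * (iq4xs_tps / q4km_tps - 1):+.0f}%")  # +31%

cold_q4km, cold_iq4 = 42.7, 23.7
print(f"cold start: {100 * (cold_iq4 / cold_q4km - 1):+.0f}%")  # -44%

print(f"disk: {16.8 - 24:+.1f} GB")  # -7.2 GB
```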
## Quality Validation
- Math reasoning: "What's 15% tip on $28.50?" → Correct ($4.275, rounds to $4.28)
- Tool calling: OpenAI-format function calls parse correctly
- Multi-turn: Sustained coherence over 168-message war room session
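The tip check above can be reproduced exactly with decimal arithmetic (note that Python's default banker's rounding would give 4.27 for the 4.275 midpoint, so `ROUND_HALF_UP` is what matches the model's answer):

```python
from decimal import Decimal, ROUND_HALF_UP

bill = Decimal("28.50")
tip = bill * Decimal("0.15")  # exact: 4.2750
rounded = tip.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
print(tip, rounded)  # 4.2750 4.28
```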
## Recommended Configurations

### Ollama (Easy, Recommended for Most Users)

Create a Modelfile:
```
FROM ./nvidia_Nemotron-3-Nano-30B-A3B-IQ4_XS.gguf
PARAMETER temperature 0.3
PARAMETER top_k 40
PARAMETER top_p 0.85
PARAMETER num_ctx 4096
PARAMETER num_predict 512
PARAMETER repeat_penalty 1.1
```

Then build and run:

```bash
ollama create nemotron-prod -f Modelfile
ollama run nemotron-prod
```
### llama-server (Maximum Performance, +84% over Ollama)
```bash
# Production config (recommended for 11GB VRAM)
llama-server \
  --model nvidia_Nemotron-3-Nano-30B-A3B-IQ4_XS.gguf \
  --n-gpu-layers 28 \
  --cache-type-k q8_0 \
  --ctx-size 4096 \
  --parallel 1 \
  --flash-attn on \
  --port 8081
```
**Flash Attention:** Build llama.cpp with `GGML_CUDA_FA=ON` and use `--flash-attn on` (not `auto`; auto may fail to enable on split GPU/CPU models). FA gives +35% sustained throughput on this model.
**28 GPU layers is the stable ceiling for 11GB VRAM.** 29 layers load but OOM during generation; 30+ fails at load.
Context window scaling (measured on RTX 2080 Ti, single slot):

- `--ctx-size 2048` → 633 MiB free ✅
- `--ctx-size 3072` → 637 MiB free ✅
- `--ctx-size 4096` → 625 MiB free ✅ ← Production sweet spot
- `--ctx-size 5120` → 623 MiB free ✅ (max safe)
- `--ctx-size 6144` → 317 MiB free ⚠️ (too tight)
- `--ctx-size 8192` → 179 MiB free ❌ (OOM risk)
**Note:** The Mamba-2 hybrid architecture means context barely affects VRAM up to ~5K; the cliff happens around 6K where compute buffers resize.
### systemd Service (Auto-start on Boot)
```ini
[Unit]
Description=Nemotron IQ4_XS llama-server (CUDA)
After=network.target

[Service]
Type=simple
User=your-user
ExecStart=/opt/llama-server/llama-server \
  --model /path/to/nvidia_Nemotron-3-Nano-30B-A3B-IQ4_XS.gguf \
  --n-gpu-layers 28 \
  --cache-type-k q8_0 \
  --ctx-size 4096 \
  --parallel 1 \
  --flash-attn on \
  --port 8081 \
  --host 127.0.0.1
Restart=on-failure
RestartSec=5
Environment=CUDA_VISIBLE_DEVICES=0

[Install]
WantedBy=multi-user.target
```
## OpenAI-Compatible API

Both Ollama and llama-server expose `/v1/chat/completions`, drop-in compatible with any OpenAI SDK client:
```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8081/v1", api_key="none")
response = client.chat.completions.create(
    model="nemotron",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```
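Tool calling uses the standard OpenAI function schema. A sketch of a request payload (`get_weather` is a made-up example tool, and the server must be running to actually send it):

```python
import json

# Hypothetical example tool; any OpenAI-format function schema works.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

payload = {
    "model": "nemotron",
    "messages": [{"role": "user", "content": "Weather in Oslo?"}],
    "tools": tools,
}
print(json.dumps(payload, indent=2))
# POST this to http://localhost:8081/v1/chat/completions, or pass
# tools=tools to client.chat.completions.create() with the OpenAI SDK.
```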
## GPU Layer Guide
| VRAM | GPU Layers | Est. tok/s | Notes |
|---|---|---|---|
| 8 GB | 20–22 | ~12 | Tight, reduce context |
| 11 GB | 28 | 22–27 | Sweet spot (this config) |
| 16 GB | 35–40 | ~28–32 | Comfortable headroom |
| 22 GB | 52 (all) | ~35–40 | Full GPU offload 🚀 |
| 24 GB | 52 (all) | ~35–40 | Full offload + large context |
**22GB upgrade note:** With 22GB VRAM, all 52 layers fit on GPU — zero CPU offload, maximum throughput. Expected ~1.5x improvement over the 28-layer config, per the estimates above.
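The table can be collapsed into a simple lookup. This is a rough heuristic derived from the measurements above, not a guarantee; actual fit also depends on context size and KV-cache type:

```python
# (min VRAM in GB, suggested --n-gpu-layers), taken from the guide table.
LAYER_GUIDE = [(8, 21), (11, 28), (16, 38), (22, 52), (24, 52)]

def suggest_gpu_layers(vram_gb: float) -> int:
    """Return a conservative --n-gpu-layers starting point for a VRAM size."""
    best = 0
    for min_vram, layers in LAYER_GUIDE:
        if vram_gb >= min_vram:
            best = layers
    return best

print(suggest_gpu_layers(11))  # 28
```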
## File Listing

| File | Size | Description |
|---|---|---|
| `nvidia_Nemotron-3-Nano-30B-A3B-IQ4_XS.gguf` | 16.8 GB | IQ4_XS quantized model (imatrix) |
| `Modelfile` | <1 KB | Production Ollama Modelfile |
| `README.md` | — | This file |
## Quantization Source
GGUF quantized by bartowski using llama.cpp release b7423 with imatrix calibration data. See bartowski/nvidia_Nemotron-3-Nano-30B-A3B-GGUF for all available quant sizes.
## Credits
- Model: NVIDIA — Nemotron-3-Nano-30B-A3B
- Quantization: bartowski — IQ4_XS imatrix GGUF
- Benchmarking & Deployment: Tinker-Stack — Production validation on RTX 2080 Ti with Disclaw multi-agent war room
Base model: nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16