Cosmos-Reason2-2B-GGUF

GGUF quantizations of nvidia/Cosmos-Reason2-2B for use with llama.cpp and compatible tools.

About the Model

NVIDIA Cosmos Reason 2 is an open, 2B-parameter reasoning vision-language model (VLM) for physical AI and robotics. It is post-trained from Qwen3-VL-2B-Instruct and understands space, time, and fundamental physics.

Key capabilities:

  • Physical AI reasoning with spatio-temporal understanding
  • Object detection with 2D/3D point localization and bounding boxes
  • Long-context understanding up to 256K input tokens
  • Video analytics, data curation, and robot planning

For full details, see the original model card.

Quantization Details

File                                 Quant    Size
Cosmos-Reason2-2B-F16.gguf           F16      3.8 GB
Cosmos-Reason2-2B-Q8_0.gguf          Q8_0     2.1 GB
Cosmos-Reason2-2B-Q4_K_M.gguf        Q4_K_M   1.2 GB
mmproj-Cosmos-Reason2-2B-F16.gguf    F16      782 MB

Note: The vision encoder (mmproj) is kept at F16 precision.
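For multimodal inference, the mmproj file must be loaded alongside the chosen quantization. A minimal manual setup, assuming the filenames from the table above and a hypothetical local directory and port:

```shell
# Download one quantization plus the F16 vision encoder (mmproj),
# then serve them together with llama.cpp.
huggingface-cli download Kbenkhaled/Cosmos-Reason2-2B-GGUF \
  Cosmos-Reason2-2B-Q4_K_M.gguf mmproj-Cosmos-Reason2-2B-F16.gguf \
  --local-dir ./models

llama-server \
  -m ./models/Cosmos-Reason2-2B-Q4_K_M.gguf \
  --mmproj ./models/mmproj-Cosmos-Reason2-2B-F16.gguf \
  --port 8080
```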

How to Use

Serve the model with llama.cpp's llama-server; the -hf flag downloads the selected quantization directly from this repository:

llama-server -hf Kbenkhaled/Cosmos-Reason2-2B-GGUF:Q8_0
llama-server -hf Kbenkhaled/Cosmos-Reason2-2B-GGUF:F16
llama-server -hf Kbenkhaled/Cosmos-Reason2-2B-GGUF:Q4_K_M
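Once llama-server is running, it exposes an OpenAI-compatible /v1/chat/completions endpoint. A sketch of building a vision request with an inline base64 image; the endpoint path and message schema follow the OpenAI chat format, while the port and model name here are assumptions:

```python
import base64

def build_vision_request(prompt: str, image_bytes: bytes,
                         model: str = "Cosmos-Reason2-2B") -> dict:
    """Build an OpenAI-style chat payload with an inline base64 PNG image."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{b64}"}},
                ],
            }
        ],
    }

# Example payload; POST it as JSON to the running server, e.g.
#   requests.post("http://localhost:8080/v1/chat/completions", json=payload)
payload = build_vision_request(
    "Describe the physical interactions in this scene.",
    b"\x89PNG placeholder bytes",
)
```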