FAPO-32B-nvfp4

Format: NVFP4 — weights & activations quantized to FP4 with dual scaling.
Base model: dyyyyyyyy/FAPO-32B
How it was made: One-shot calibration with LLM Compressor (NVFP4 recipe), using long-sequence calibration samples from HuggingFaceH4/ultrachat_200k.
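To make the dual-scaling idea above concrete, here is an illustrative plain-Python sketch (not NVIDIA's exact algorithm): FP4 (E2M1) has only eight magnitudes, so each block of 16 values carries its own scale (stored as FP8 E4M3 in the real format, capped at 448) on top of one per-tensor FP32 scale. Block size, rounding, and scale handling here are simplifying assumptions.

```python
# Illustrative sketch of NVFP4-style dual scaling, not NVIDIA's exact kernel.
# Real NVFP4 stores the per-block scale in FP8 E4M3; this sketch keeps it as
# a Python float for clarity.

FP4_GRID = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]  # E2M1 magnitudes

def quantize_block(block, tensor_scale):
    # Per-block scale maps the block's max magnitude onto the FP4 max (6.0).
    amax = max(abs(x) for x in block)
    block_scale = (amax / 6.0) / tensor_scale if amax else 1.0
    q = []
    for x in block:
        t = abs(x) / (block_scale * tensor_scale)
        nearest = min(FP4_GRID, key=lambda g: abs(g - t))  # round to FP4 grid
        q.append(nearest if x >= 0 else -nearest)
    return q, block_scale

def dequantize_block(q, block_scale, tensor_scale):
    return [v * block_scale * tensor_scale for v in q]

# One 16-value block of toy "weights".
weights = [0.1 * i for i in range(16)]
# Per-tensor scale chosen so block scales fit in FP8 E4M3 range (max 448).
tensor_scale = max(abs(w) for w in weights) / (6.0 * 448.0)
q, bs = quantize_block(weights, tensor_scale)
deq = dequantize_block(q, bs, tensor_scale)
err = max(abs(a - b) for a, b in zip(weights, deq))
```

The per-block scale absorbs local dynamic range, so only the relative shape within each 16-value block has to survive the 8-level FP4 grid; the largest value in each block round-trips almost exactly.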

Notes: Keep lm_head in high precision; calibrate on long, domain-relevant sequences.
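The exact calibration script for this checkpoint was not published, so the following is only a sketch of what a one-shot NVFP4 run with LLM Compressor typically looks like; the sequence length, sample count, dataset split, and output path are assumptions.

```python
# Sketch only: recipe/API names follow llmcompressor's documented interface;
# max_seq_length, num_calibration_samples, and the split below are assumed.
from datasets import load_dataset
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL = "dyyyyyyyy/FAPO-32B"

# NVFP4 scheme: FP4 weights/activations with dual scaling; keep lm_head in
# high precision, per the note above.
recipe = QuantizationModifier(
    targets="Linear",
    scheme="NVFP4",
    ignore=["lm_head"],
)

# Long, domain-relevant calibration sequences from ultrachat_200k.
ds = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft[:512]")

oneshot(
    model=MODEL,
    dataset=ds,
    recipe=recipe,
    max_seq_length=8192,
    num_calibration_samples=512,
    output_dir="FAPO-32B-nvfp4",
)
```

Calibrating on sequences near the target deployment length matters because activation ranges (and hence the calibrated scales) shift with context length.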

See the original model card for details about this model. It appears to be a fine-tune of Qwen2 that is more likely to admit when it doesn't know something.

If there are other models you'd like to see quantized to NVFP4 for use on the DGX Spark or other modern Blackwell (or newer) cards, let me know. I'm trying to make more NVFP4 models available so more people can try them out.

Model size: 19B params (Safetensors)
Tensor types: BF16 · F32 · F8_E4M3 · U8