Active filters: FP4
| Model | Task | Params | Downloads | Likes |
|---|---|---|---|---|
| NVFP4/Qwen3-30B-A3B-Thinking-2507-FP4 | Text Generation | 16B | 2.39k | 4 |
| (name not captured) | Text Generation | 0.4B | 1.06k | |
| nvidia/Phi-4-multimodal-instruct-NVFP4 | | 4B | 3.94k | 7 |
| nvidia/Phi-4-reasoning-plus-NVFP4 | | 8B | 1.25k | 6 |
| nvidia/Llama-3.1-8B-Instruct-NVFP4 | | 5B | 75.9k | 7 |
| (name not captured) | Text Generation | 5B | 30k | 15 |
| (name not captured) | Text Generation | 8B | 278k | 5 |
| nvidia/Qwen2.5-VL-7B-Instruct-NVFP4 | Text Generation | 5B | 28.7k | 13 |
| nvidia/Llama-3.1-Nemotron-Nano-VL-8B-V1-FP4-QAD | Image-Text-to-Text | | 405 | 14 |
| nvidia/NVIDIA-Nemotron-Nano-12B-v2-VL-NVFP4-QAD | Image-Text-to-Text | | 4.36k | 22 |
| Daemontatox/Qwen3-L-NVFP4 | Text Generation | 133B | 1 | |
| (name not captured) | Text Generation | 5B | 25 | |
| surogate/Qwen3-30B-A3B-NVFP4 | Text Generation | 16B | 1 | |
| (name not captured) | Text Generation | 17B | 2 | |
| (name not captured) | Text Generation | 8B | 2 | |
| Cirrascale/Kimi-K2.5-NVFP4 | Text Generation | | 51 | |
| Cirrascale/Qwen3-Coder-480B-A35B-Instruct-NVFP4 | Text Generation | 241B | 4 | |
| Cirrascale/Qwen3.5-397B-A17B-NVFP4 | Text Generation | | 40 | |
| (name not captured) | Text Generation | 17B | 7.17k | 1 |
| fsgfn/Qwen3.5-122B-A10B-NVFP4 | Text Generation | 64B | 83 | |
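The checkpoints listed above are all NVFP4 (4-bit floating-point) quantized models. As a rough illustration of what the format does, below is a minimal, hypothetical sketch of quantizing a block of weights onto the E2M1 value grid that FP4 uses, with one shared scale per block. The block size of 16 and the plain-Python nearest-value rounding are illustrative assumptions; real NVFP4 additionally stores each block scale in FP8 with a per-tensor FP32 scale, which is omitted here.

```python
# Hypothetical sketch of NVFP4-style block quantization.
# Assumptions: E2M1 value grid, block size 16, one float scale per
# block (real NVFP4 stores the block scale in FP8 plus a per-tensor
# FP32 scale, omitted for brevity).

# The 8 non-negative magnitudes representable in E2M1 (sign is separate).
E2M1_GRID = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def quantize_block(block):
    """Quantize one block to signed E2M1 values plus a shared float scale."""
    amax = max(abs(v) for v in block)
    # Choose the scale so the largest magnitude lands on the top code (6.0).
    scale = amax / 6.0 if amax > 0 else 1.0
    codes = []
    for v in block:
        mag = abs(v) / scale
        # Round to the nearest representable E2M1 magnitude.
        code = min(E2M1_GRID, key=lambda g: abs(g - mag))
        codes.append(code if v >= 0 else -code)
    return codes, scale

def dequantize_block(codes, scale):
    """Recover approximate values from codes and the block scale."""
    return [c * scale for c in codes]

if __name__ == "__main__":
    block = [0.11, -0.52, 0.98, 0.03, -1.5, 0.75, 0.0, 0.33,
             -0.9, 0.6, 1.2, -0.05, 0.4, -0.25, 0.8, 1.5]
    codes, scale = quantize_block(block)
    deq = dequantize_block(codes, scale)
    worst = max(abs(a - b) for a, b in zip(block, deq))
    print(f"scale={scale}, worst abs error={worst}")
```

Each value costs 4 bits plus the amortized per-block scale, which is where the large memory savings over FP16/BF16 come from; the coarse 8-point grid is why these checkpoints are typically produced with calibration or quantization-aware training (note the `QAD` suffix on some of the entries above).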