Inference Providers
Active filters: modelopt
LilaRest/gemma-4-31B-it-NVFP4-turbo • Text Generation • 33B params • 132k downloads • 254 likes
nvidia/Gemma-4-31B-IT-NVFP4 • Text Generation • 21B params • 1.27M downloads • 411 likes
lukealonso/MiniMax-M2.7-NVFP4 • 130B params • 28.3k downloads • 38 likes
CISCai/gemma-4-31B-it-NVFP4-turbo-GGUF • Text Generation • 31B params • 2.56k downloads • 8 likes
nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-NVFP4 • Text Generation • 67B params • 1.34M downloads • 273 likes
nvidia/MiniMax-M2.5-NVFP4 • Text Generation • 116B params • 49.3k downloads • 32 likes
nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-FP8 • Text Generation • 124B params • 597k downloads • 234 likes
mmangkad/Qwen3.6-35B-A3B-NVFP4 • Text Generation • 5.52k downloads • 5 likes
nvidia/Qwen3.5-397B-A17B-NVFP4 • Text Generation • 558k downloads • 92 likes
AxionML/Qwen3.5-27B-NVFP4 • Image-Text-to-Text • 17B params • 11.6k downloads • 9 likes
demon-zombie/MiniMax-M2.7-NVFP4 • 116B params • 843 downloads • 3 likes
DJLougen/Ornstein3.6-35B-A3B-NVFP4 • Text Generation • 34B params • 327 downloads • 3 likes
nvidia/Qwen3-30B-A3B-NVFP4 • Text Generation • 16B params • 316k downloads • 30 likes
cosmicproc/Qwen3.5-4B-NVFP4 • Image-Text-to-Text • 3B params • 1.14k downloads • 2 likes
YuYu1015/Huihui-Gemma-4-E4B-it-abliterated-NVFP4 • Text Generation • 6B params • 240 downloads • 2 likes
AEON-7/supergemma4-26b-abliterated-multimodal-nvfp4 • Text Generation • 25B params • 1.27k downloads • 2 likes
NinjaBoffin/MiniMax-M2.7-NVFP4 • Text Generation • 116B params • 272 downloads • 2 likes
nvidia/Llama-4-Maverick-17B-128E-Instruct-FP8 • 402B params • 763 downloads • 14 likes
nvidia/Llama-4-Scout-17B-16E-Instruct-FP8 • 109B params • 72.4k downloads • 13 likes
nvidia/Llama-4-Maverick-17B-128E-Eagle3 • 244 downloads • 10 likes
nvidia/Phi-4-multimodal-instruct-NVFP4 • 4B params • 1.69k downloads • 11 likes
nvidia/Phi-4-multimodal-instruct-FP8 • 6B params • 1.33k downloads • 7 likes
nvidia/Phi-4-reasoning-plus-FP8 • 15B params • 560 downloads • 6 likes
nvidia/Phi-4-reasoning-plus-NVFP4 • 8B params • 1.38k downloads • 9 likes
nvidia/Llama-3.1-8B-Instruct-NVFP4 • 5B params • 118k downloads • 9 likes
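The download counts in the listing above use abbreviated notation (e.g. 132k, 1.27M, 843). A minimal helper to turn those strings into plain integers, assuming the usual Hub display convention (k = thousand, M = million):

```python
def parse_count(text: str) -> int:
    """Convert an abbreviated count like '1.27M' or '28.3k' to an int.

    Suffix scales are an assumption based on the typical Hugging Face
    Hub display convention; unsuffixed values are taken literally.
    """
    scales = {"k": 1_000, "M": 1_000_000, "B": 1_000_000_000}
    text = text.strip()
    suffix = text[-1]
    if suffix in scales:
        return int(float(text[:-1]) * scales[suffix])
    return int(text)

print(parse_count("132k"))   # 132000
print(parse_count("1.27M"))  # 1270000
print(parse_count("843"))    # 843
```

Note that the abbreviated figures are already rounded on the page, so the parsed values are approximate, not exact download totals.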