MedGemmaImpact/medgemma-1.5-4b-it-fp8

An FP8-quantized checkpoint of google/medgemma-1.5-4b-it, produced with llmcompressor.oneshot.
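
A minimal loading sketch, assuming a recent transformers release with compressed-tensors support and the AutoModelForImageTextToText class (the class choice and arguments are illustrative, not verified against this exact repo):

```python
# Sketch: load the FP8 checkpoint with transformers.
# Assumes `transformers` (with compressed-tensors support) and
# `accelerate` are installed; untested against this specific repo.
from transformers import AutoModelForImageTextToText, AutoProcessor

model_id = "MedGemmaImpact/medgemma-1.5-4b-it-fp8"

# The FP8 weights are stored in compressed-tensors format;
# torch_dtype="auto" lets transformers pick the stored dtypes.
model = AutoModelForImageTextToText.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)
```

The checkpoint can also be served directly with inference engines that understand the compressed-tensors format, such as vLLM.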

Calibration

  • dataset: medqa
  • num_samples: 512
  • max_seq_length: 1024

Notes

  • Vision modules are excluded from quantization to avoid breaking the vision encoder/projector.
  • This repo contains the save_pretrained output produced by llmcompressor (plus the processor/config files).
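
The calibration settings and vision-module exclusion above can be sketched as a one-shot llmcompressor run. The dataset alias, regex patterns, and scheme name below are assumptions based on typical llmcompressor usage, not a verbatim copy of the original script:

```python
# Sketch: FP8 one-shot quantization with llmcompressor (assumed settings).
# In older llmcompressor versions the import is
# `from llmcompressor.transformers import oneshot`.
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

recipe = QuantizationModifier(
    targets="Linear",
    scheme="FP8",
    # Skip the LM head and the vision encoder/projector so the
    # vision path keeps its original weights (module names assumed).
    ignore=["lm_head", "re:.*vision_tower.*", "re:.*multi_modal_projector.*"],
)

oneshot(
    model="google/medgemma-1.5-4b-it",
    dataset="medqa",              # calibration dataset (assumed alias)
    recipe=recipe,
    num_calibration_samples=512,  # matches the Calibration section
    max_seq_length=1024,
)
```

The quantized model is then written out with save_pretrained, which produces the contents of this repo.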