MedGemmaImpact/medgemma-1.5-4b-it-fp8
Quantized checkpoint of google/medgemma-1.5-4b-it using FP8 via llmcompressor.oneshot.
Calibration
- dataset: medqa
- num_samples: 512
- max_seq_length: 1024
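
For reference, a minimal sketch of how such a run could look with llmcompressor's oneshot workflow. The model-loading class, the ignore patterns for the vision modules, and the "medqa" dataset identifier are assumptions for illustration, not the exact script used to produce this checkpoint.

```python
# Sketch of the FP8 quantization run (assumed details marked in comments).
from transformers import AutoModelForImageTextToText, AutoProcessor
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "google/medgemma-1.5-4b-it"

model = AutoModelForImageTextToText.from_pretrained(MODEL_ID, torch_dtype="auto")
processor = AutoProcessor.from_pretrained(MODEL_ID)

# Quantize only the language-model Linear layers to FP8; the vision encoder,
# projector, and lm_head stay in their original precision (module name
# patterns below are assumptions).
recipe = QuantizationModifier(
    targets="Linear",
    scheme="FP8",
    ignore=["re:.*vision_tower.*", "re:.*multi_modal_projector.*", "lm_head"],
)

# Calibrate on medqa-style data with the settings listed above.
oneshot(
    model=model,
    dataset="medqa",  # assumed dataset identifier
    recipe=recipe,
    max_seq_length=1024,
    num_calibration_samples=512,
)

# Write out the compressed checkpoint plus processor/config files.
model.save_pretrained("medgemma-1.5-4b-it-fp8", save_compressed=True)
processor.save_pretrained("medgemma-1.5-4b-it-fp8")
```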
Notes
- Vision modules are excluded from quantization to avoid breaking the vision encoder/projector.
- This repo contains the `save_pretrained` output produced by llmcompressor (plus the processor/config files).
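
Checkpoints saved in llmcompressor's compressed-tensors format can usually be served with vLLM. Below is a minimal text-only loading sketch, assuming a vLLM build that supports this model architecture; the prompt and sampling settings are illustrative only.

```python
# Minimal loading/inference sketch; engine choice and settings are assumptions.
from vllm import LLM, SamplingParams

llm = LLM(model="MedGemmaImpact/medgemma-1.5-4b-it-fp8")
params = SamplingParams(temperature=0.0, max_tokens=128)

outputs = llm.generate(
    ["Question: What is the first-line treatment for uncomplicated hypertension?"],
    params,
)
print(outputs[0].outputs[0].text)
```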