Description
This repo contains specialized MoE quants for Qwen3.6-35B-A3B. The idea is that, because the FFN tensors dominate the model's size relative to all other tensors, it should be possible to achieve better quality at a smaller overall size than a comparably sized naive quantization. To that end, the default quantization type is kept at high quality while the FFN UP and FFN GATE tensors are quantized more aggressively, along with the FFN DOWN tensors.
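As a rough illustration of the approach, a mixture like the Q5_K_M row below could be produced with per-tensor overrides in llama.cpp's `llama-quantize`. This is a hedged sketch, not the author's exact command: the `--tensor-type` flag and the `ffn_up`/`ffn_gate`/`ffn_down` pattern names are assumptions about your llama.cpp build, so check `llama-quantize --help` before running.

```shell
# Hedged sketch (assumed flags/patterns): keep the default type at Q8_0 and
# quantize only the FFN tensors down, as the description above outlines.
./llama-quantize \
    --tensor-type ffn_up=q5_k \
    --tensor-type ffn_gate=q5_k \
    --tensor-type ffn_down=q6_k \
    Qwen3.6-35B-A3B-F16.gguf Qwen3.6-35B-A3B-Q5_K_M.gguf Q8_0
```

The trailing `Q8_0` sets the default type for every tensor not matched by an override, which is what keeps the attention and embedding tensors at high quality.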
| Quant | Size | Mixture (default / FFN up / FFN gate / FFN down) | PPL | ΔPPL = PPL(Q)/PPL(Q8_0) − 1 | KLD |
|---|---|---|---|---|---|
| Q8_0 | 34.36 GiB (8.52 BPW) | Q8_0 | 6.719733 ± 0.043673 | +0.0000% | 0.005914 ± 0.000097 |
| Q6_K | 27.10 GiB (6.72 BPW) | Q8_0 / Q6_K / Q6_K / Q6_K | 6.720708 ± 0.043671 | +0.0145% | 0.006655 ± 0.000103 |
| Q5_K_M | 24.44 GiB (6.06 BPW) | Q8_0 / Q5_K / Q5_K / Q6_K | 6.728925 ± 0.043742 | +0.1368% | 0.008198 ± 0.000112 |
| Q4_K_M | 20.61 GiB (5.11 BPW) | Q8_0 / Q4_K / Q4_K / Q5_K | 6.741414 ± 0.043822 | +0.3227% | 0.013899 ± 0.000169 |
| IQ4_XS | 16.40 GiB (4.06 BPW) | Q8_0 / IQ3_S / IQ3_S / IQ4_XS | 6.888604 ± 0.044992 | +2.5131% | 0.033477 ± 0.000265 |
| IQ3_S | 12.65 GiB (3.13 BPW) | Q6_K / IQ2_S / IQ2_S / IQ3_S | 7.177309 ± 0.047398 | +6.8095% | 0.084848 ± 0.000588 |
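The ΔPPL column appears to be the relative perplexity increase over the Q8_0 row (which is why Q8_0 reads +0.0000%). A quick sketch reproducing it from the table's values; the last digit may occasionally differ by one since the printed PPLs are themselves rounded:

```python
# Recompute the ΔPPL column from the measured perplexities in the table.
base = 6.719733  # Q8_0 row, used as the reference PPL

ppl = {
    "Q6_K": 6.720708,
    "Q5_K_M": 6.728925,
    "Q4_K_M": 6.741414,
    "IQ4_XS": 6.888604,
    "IQ3_S": 7.177309,
}

for name, p in ppl.items():
    delta_pct = (p / base - 1) * 100  # percent increase over the baseline
    print(f"{name}: +{delta_pct:.4f}%")
```

This makes the trend in the table concrete: quality degrades gently down to Q4_K_M (well under half a percent) and only starts to fall off meaningfully at the IQ4_XS and IQ3_S mixtures.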
Model tree for AesSedai/Qwen3.6-35B-A3B-GGUF — base model: Qwen/Qwen3.6-35B-A3B
