Qwen3.6-35B-A3B
This model was converted to FP16 from the BF16 z-lab/Qwen3.6-35B-A3B-DFlash.
DFlash is a novel speculative decoding method that uses a lightweight block diffusion model for drafting, enabling efficient, high-quality parallel drafting that pushes the limits of inference speed.
FP16 is an optimization for M1/M2 Apple Silicon only, and it delivers a very noticeable prompt-processing speedup there. See "Metal FP32 Vs BF16 Vs FP16 benchmark" and jundot/omlx/pull/880 for details.
If you are on M3 or newer Apple Silicon, use the original model instead.
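One caveat worth knowing about BF16-to-FP16 conversion in general (not specific to this checkpoint): FP16 trades dynamic range for precision. BF16 keeps float32's 8 exponent bits (range up to ~3.4e38) with only 7 mantissa bits, while FP16 has 5 exponent bits (max finite value 65504) but 10 mantissa bits. Outlier weights or activations above that range overflow to infinity under FP16. A minimal NumPy sketch of the trade-off:

```python
import numpy as np

# FP16's largest finite value is 65504; anything above it overflows to inf.
# The same value is fine in BF16, which shares float32's exponent range.
big = np.float32(70000.0)
print(np.float16(big))  # inf -> overflow when casting down to FP16

# In exchange, FP16 has 10 mantissa bits (vs BF16's 7), so it resolves
# finer steps near 1.0: 1 + 2**-9 is exactly representable in FP16.
fine = np.float32(1.0 + 2**-9)
print(np.float16(fine) == fine)  # True
```

In practice this is why FP16 conversions of BF16 checkpoints usually work, but models with large-magnitude outliers may need the original BF16 weights.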
Base model
z-lab/Qwen3.6-35B-A3B-DFlash