NOTICE

This model has been superseded by the higher-quality Q3.5-INF version, available here

INFORMATION

See Kimi-K2.6 MLX in action - demonstration video

Tested on an M3 Ultra with 512 GB RAM using the Inferencer app (a loading sketch follows the figures below):

  • Single inference: ~24.04 tokens/s @ 1000 tokens (debug build)
  • Vision inference: ~20.55 tokens/s (available from v1.11.0)
  • Memory usage: ~437 GiB
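
The figures above come from the Inferencer app; for orientation, here is a minimal sketch of loading the same checkpoint with the open-source mlx-lm package. Compatibility with stock mlx-lm is an assumption (the weights were produced with a modified MLX), and throughput will differ from the numbers above:

```python
# Minimal sketch: loading and sampling this checkpoint with mlx-lm.
# Assumption: the repo loads with stock mlx-lm; the speeds quoted above
# were measured with the Inferencer app, not this code path.
from mlx_lm import load, generate

model, tokenizer = load("inferencerlabs/Kimi-K2.6-MLX-3.6bit")

prompt = "Write a Python function that checks whether a number is prime."
text = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
```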

Q3.6 typically achieves usable accuracy in our coding tests and fits within a 512 GB memory budget

| Quantization (bpw) | Perplexity | Token Accuracy | Missed Divergence | Size |
|---|---|---|---|---|
| Q3.5 | 1.1328125 | 94.92% | 42.71% | 450.19 GB |
| Q3.5-INF | 1.078125 | 96.67% | 22.04% | 455.68 GB |
| Q3.6 | 1.1484375 | 94.72% | 48.72% | 470.99 GB |
| Base | Untested | 100% | 0.000% | 658.59 GB |
  • Perplexity: measures the quantized model's confidence when predicting the base model's tokens (lower is better)
  • Token Accuracy: the percentage of base tokens the quantized model reproduces exactly
  • Missed Divergence: measures the severity of misses, i.e. how far off the quantized model was whenever it failed to match the base token
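
To make these metrics concrete, here is a hypothetical sketch of how such a token-replay comparison could be computed. The exact formulas used for this card (particularly Missed Divergence) are not published, so every definition below is an assumption:

```python
# Hypothetical sketch of the three quality metrics, computed by replaying
# the base model's tokens through the quantized model. All formulas are
# assumptions; the card does not publish the exact definitions.
import math

def evaluate(step_logprobs_list, base_tokens):
    """step_logprobs_list: one dict per step mapping token -> log-probability
    under the quantized model; base_tokens: the tokens the base model chose."""
    nll = 0.0
    correct = 0
    miss_gaps = []
    for logprobs, tok in zip(step_logprobs_list, base_tokens):
        lp = logprobs[tok]
        nll -= lp  # negative log-likelihood of the base token
        top = max(logprobs, key=logprobs.get)
        if top == tok:
            correct += 1  # quantized model reproduces the base token exactly
        else:
            # size of the miss: how far the base token's probability
            # falls below the quantized model's top choice
            miss_gaps.append(1.0 - math.exp(lp - logprobs[top]))
    n = len(base_tokens)
    return {
        "perplexity": math.exp(nll / n),
        "token_accuracy": correct / n,
        "missed_divergence": sum(miss_gaps) / len(miss_gaps) if miss_gaps else 0.0,
    }
```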
Quantized with a modified version of MLX.
For more details, see the demonstration video or visit Kimi-K2.6.
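
For orientation, a uniform low-bit conversion with stock mlx-lm looks roughly like the sketch below; the source repo path and settings are illustrative, not the authors' recipe, and the fractional 3.6 bpw average here presumably comes from the modified MLX mentioned above:

```python
# Rough sketch of a low-bit conversion with stock mlx-lm.
# The 3.6 bpw checkpoint on this page was made with a modified MLX,
# so this uniform 3-bit example is only indicative of the workflow.
from mlx_lm import convert

convert(
    hf_path="moonshotai/Kimi-K2-Instruct",  # illustrative source repo, not confirmed
    mlx_path="Kimi-K2-MLX-3bit",            # local output directory
    quantize=True,
    q_bits=3,         # uniform 3-bit; 3.6 bpw implies mixed bit-widths per layer
    q_group_size=64,  # mlx-lm's default quantization group size
)
```

Recent mlx-lm releases also expose mixed-precision quantization recipes via a quant predicate, which may be closer to how a fractional bits-per-weight average is reached.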

Disclaimer

We are not the creator, originator, or owner of any model listed. Each model is created and provided by third parties. Models may not always be accurate or contextually appropriate. You are responsible for verifying the information before making important decisions. We are not liable for any damages, losses, or issues arising from its use, including data loss or inaccuracies in AI-generated content.

