Bonsai

Prism ML Website  |  White Paper  |  Demo & Examples  |  Discord

Ternary-Bonsai-4B-mlx-2bit

Ternary (1.58-bit) language model for Apple Silicon

7.1x smaller than FP16 | 4.8x faster on M4 Pro | 50 tok/s on iPhone | runs on Mac, iPhone, iPad

Highlights

  • 1.05 GiB (1.13 GB) packed 2-bit size (down from 8.04 GB FP16) — runs on any Mac or iPhone
  • Ternary weights {-1, 0, +1} across embeddings, attention projections, MLP projections, and LM head
  • 70.7 avg benchmark score across 6 categories — competitive with full-precision 4B models
  • MLX-native format with group size 128 and FP16 scaling

Pareto Frontier

Resources

  • White Paper
  • Demo repo — examples for serving, benchmarking, and integrating Bonsai
  • Discord — community support and updates
  • Kernels: MLX (Apple Silicon) · mlx-swift (iOS/macOS) — 2-bit format is supported out of the box

Model Overview

Item               Specification
Base model         Qwen3-4B
Parameters         4.0B (~3.6B non-embedding)
Architecture       GQA (32 query / 8 KV heads), SwiGLU MLP, RoPE, RMSNorm
Layers             36 Transformer decoder blocks
Context length     32,768 tokens
Vocab size         151,936
Weight format      Ternary g128: {-1, 0, +1} with FP16 group-wise scaling
Packed 2-bit size  1.05 GiB (1.13 GB)
Ternary coverage   Embeddings, attention projections, MLP projections, LM head
License            Apache 2.0

Quantization Format: Ternary g128

Each weight takes a value from {-1, 0, +1}, with one shared FP16 scale per group of 128 weights:

w_i = scale_g * t_i,    t_i in {-1, 0, +1}

The information-theoretic cost is log2(3) ≈ 1.585 bits per weight, plus FP16 group scales (16 bits per 128 weights), for a theoretical minimum of ~1.71 bits/weight. This release uses the MLX 2-bit format, which stores each ternary value in 2 bits plus group scales, for an effective ~2.125 bits/weight.
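The format can be sketched in NumPy. This is an illustrative round-to-nearest quantizer, not the model's actual training-time recipe or MLX's internal packing layout; all function names here are hypothetical:

```python
import numpy as np

GROUP = 128  # one FP16 scale per group of 128 weights

def quantize_g128(w: np.ndarray):
    """Quantize a 1-D FP32 weight vector to ternary {-1, 0, +1}
    with one FP16 scale per group (simple round-to-nearest)."""
    w = w.reshape(-1, GROUP)
    scale = np.abs(w).mean(axis=1, keepdims=True).astype(np.float16)
    t = np.clip(np.round(w / np.maximum(scale, 1e-8)), -1, 1).astype(np.int8)
    return t, scale

def pack_2bit(t: np.ndarray) -> np.ndarray:
    """Store each ternary value in 2 bits (map -1,0,+1 -> 0,1,2), 4 per byte."""
    u = (t + 1).astype(np.uint8).reshape(-1, 4)
    return u[:, 0] | (u[:, 1] << 2) | (u[:, 2] << 4) | (u[:, 3] << 6)

def dequantize(t: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """w_i = scale_g * t_i"""
    return (t * scale.astype(np.float32)).reshape(-1)

w = np.random.randn(1024).astype(np.float32)
t, scale = quantize_g128(w)
packed = pack_2bit(t)
bits_per_weight = (packed.nbytes * 8 + scale.size * 16) / w.size
print(bits_per_weight)  # 2 bits/value + 16/128 bits of scale = 2.125
```

Counting both the packed values and the group scales reproduces the ~2.125 effective bits/weight quoted above.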

Memory

Format          Size                 Reduction  Ratio
FP16            8.04 GB              --         1.0x
MLX 2-bit g128  1.05 GiB (1.13 GB)   85.9%      7.1x
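A quick back-of-envelope check of these numbers, assuming all ~4.0B weights are packed at 2.125 bits/weight. The real checkpoint keeps a few tensors at higher precision, so the on-disk size (1.13 GB) and the reported 7.1x ratio differ slightly from this idealized estimate:

```python
# Idealized sizes, assuming every one of ~4.0B weights is packed.
params = 4.0e9
fp16_gb = params * 16 / 8 / 1e9        # 16 bits/weight -> 8.0 GB
packed_gb = params * 2.125 / 8 / 1e9   # 2 bits + FP16 scale per 128 weights
print(fp16_gb, packed_gb, fp16_gb / packed_gb)
```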

Quickstart

MLX (Python)

pip install mlx-lm

from mlx_lm import load, generate

model, tokenizer = load("prism-ml/Ternary-Bonsai-4B-mlx-2bit")

response = generate(
    model,
    tokenizer,
    prompt="Explain quantum computing in simple terms.",
    max_tokens=256,
)
print(response)

Throughput (MLX / Apple Silicon)

Platform      Backend       PP512 (tok/s)  TG128 (tok/s)  FP16 TG (tok/s)  Speedup
M4 Pro 48 GB  MLX (Python)  817            133            28               4.8x

iPhone 17 Pro Max (MLX Swift)

Platform           Backend    PP512 (tok/s)  TG128 (tok/s)  4-bit TG (tok/s)  Speedup
iPhone 17 Pro Max  MLX Swift  659            50             27                1.8x

Benchmarks

Evaluated with EvalScope v1.4.2 + vLLM 0.15.1 on NVIDIA H100. The full suite comprises 10 benchmarks grouped into the 6 categories shown below; Avg is the mean of the 6 category scores:

Model                    Size     Avg   MMLU-R  MuSR  IFEval  GSM8K  HE+   BFCLv3
Ternary Bonsai 4B        0.86 GB  70.7  69.7    45.1  72.1    90.5   78.7  67.8
1-bit Bonsai 4B (prior)  0.57 GB  62.7  58.7    41.4  69.6    87.3   71.3  48.0
Qwen 3 4B                8.04 GB  77.1  79.8    57.4  80.0    92.1   74.4  78.9
Ministral3 3B            6.86 GB  73.2  77.5    56.5  73.1    91.4   69.5  71.3
Gemma 3 4B               7.76 GB  67.9  66.0    46.3  73.0    89.8   67.1  65.1
Llama 3.2 3B             6.43 GB  64.4  65.5    48.9  78.3    80.1   52.4  60.9

Intelligence Density

density = -ln(1 - score/100) / size_GB

Model                    Size     Intelligence Density (1/GB)
Ternary Bonsai 4B        0.86 GB  1.426
1-bit Bonsai 4B (prior)  0.57 GB  1.744
Ministral3 3B            6.86 GB  0.192
Qwen 3 4B                8.04 GB  0.183
Llama 3.2 3B             6.43 GB  0.161
Gemma 3 4B               7.76 GB  0.146
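The density formula can be applied directly to the score/size pairs from the benchmark table (two models shown here as examples; small rounding differences against the table are expected):

```python
import math

def intelligence_density(score: float, size_gb: float) -> float:
    """density = -ln(1 - score/100) / size_GB"""
    return -math.log(1 - score / 100) / size_gb

# (avg score, size in GB) from the benchmark table above
models = {
    "Ternary Bonsai 4B": (70.7, 0.86),
    "Qwen 3 4B": (77.1, 8.04),
}
for name, (score, size) in models.items():
    print(f"{name}: {intelligence_density(score, size):.3f}")
```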

Citation

@techreport{ternarybonsai,
    title   = {Ternary Bonsai: 1.58-bit Language Models at 8B, 4B, and 1.7B Scale},
    author  = {Prism ML},
    year    = {2026},
    month   = {April},
    url     = {https://prismml.com}
}

Contact

For questions, feedback, or collaboration inquiries: contact@prismml.com
