Quark‑135M‑Instruct is a 135M‑parameter conversational AI assistant, trained from scratch and then fine‑tuned to be helpful, respectful, and honest, and to maintain a clear identity.
The model uses a Llama‑style decoder‑only transformer architecture (similar to SmolLM) with the following components:
| Component | Value |
|---|---|
| Vocab size | 49 152 |
| Hidden size (d_model) | 576 |
| Number of layers | 30 |
| Attention heads | 9 |
| KV heads (GQA) | 3 |
| Head dim | 64 |
| FFN dimension | 1 536 |
| Activation | SwiGLU |
| Normalization | RMSNorm |
| Positional encoding | Rotary Embeddings (RoPE, θ=10 000) |
| Max sequence length | 2 048 |
| Weight tying | Embedding / LM head |
Total trainable parameters: ~135 M
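For reference, the configuration above maps cleanly onto Hugging Face's `LlamaConfig` (SwiGLU and RMSNorm are the Llama defaults). The sketch below is an illustration rather than the released training code; it shows how the table's values translate into standard `transformers` arguments and that they reproduce the ~135M parameter count.

```python
from transformers import LlamaConfig, LlamaForCausalLM

# Illustrative config built from the table above (not the official training code).
config = LlamaConfig(
    vocab_size=49152,
    hidden_size=576,               # d_model; head dim = 576 / 9 = 64
    num_hidden_layers=30,
    num_attention_heads=9,
    num_key_value_heads=3,         # grouped-query attention (GQA)
    intermediate_size=1536,        # SwiGLU FFN dimension
    max_position_embeddings=2048,
    rope_theta=10000.0,
    tie_word_embeddings=True,      # embedding / LM head weight tying
)

model = LlamaForCausalLM(config)
print(f"{sum(p.numel() for p in model.parameters()):,}")  # ~134.5M parameters
```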
The table below reports zero‑shot performance on several common benchmarks, evaluated using lm‑eval‑harness with apply_chat_template=True. All scores are shown as percentages.
| Benchmark | Metric | Score |
|---|---|---|
| HellaSwag | acc_norm | 31.37% |
| ARC-Easy | acc_norm | 41.46% |
| ARC-Challenge | acc_norm | 25.09% |
| PIQA | acc_norm | 61.26% |
| MMLU (avg) | acc | 23.17% |
| MMLU Humanities | acc | 24.23% |
| MMLU Social Sciences | acc | 22.59% |
| MMLU STEM | acc | 22.04% |
| MMLU Other | acc | 23.27% |
| CommonsenseQA | acc | 20.56% |
| OpenBookQA | acc_norm | 27.20% |
| Winogrande | acc | 50.20% |
| TriviaQA | exact_match | 0.07% |
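These numbers should be reproducible through the harness's Python API, roughly as follows. This is a sketch, not a verified script: it assumes `lm-eval` ≥ 0.4.3 (where the `apply_chat_template` option was added) and standard harness task names.

```python
# pip install lm-eval
import lm_eval

# Zero-shot evaluation with the chat template applied, mirroring the table above.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=OvercastLab/Quark-135m-Instruct,dtype=auto",
    tasks=["hellaswag", "arc_easy", "arc_challenge", "piqa", "mmlu",
           "commonsense_qa", "openbookqa", "winogrande", "triviaqa"],
    num_fewshot=0,
    apply_chat_template=True,
)
print(results["results"])
```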
Key takeaways:
- Quark‑135M‑Instruct is a small conversational assistant; its strengths are lightweight chat and maintaining the consistent identity it was fine‑tuned for.
- It is not suitable for tasks requiring factual accuracy, deep reasoning, or reliable knowledge retrieval, as the near‑zero TriviaQA score illustrates.
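Example usage with 🤗 Transformers: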
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OvercastLab/Quark-135m-Instruct"  # (replace with actual HF repo)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are Quark, a helpful, respectful and honest AI assistant created by OvercastLab and ThingsAI together with Mich. Always answer as helpfully and accurately as possible."},
    {"role": "user", "content": "Hi, what's your name?"},
]

# Render the conversation with the model's chat template and tokenize it.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output_ids = model.generate(
    **inputs,
    max_new_tokens=150,
    do_sample=True,
    temperature=0.2,
    top_k=50,
    top_p=0.95,
    repetition_penalty=1.3,
    # Also stop if the model starts a new <|user|> or <|system|> turn.
    eos_token_id=tokenizer.convert_tokens_to_ids(["<|user|>", "<|system|>"]) + [tokenizer.eos_token_id],
)

# Decode only the newly generated tokens (skip the prompt).
response = tokenizer.decode(output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(response)
```
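Two details are worth noting. The prompt is rendered through the tokenizer's chat template, matching how the model was fine‑tuned and evaluated. And beyond the regular EOS token, the role markers `<|user|>` and `<|system|>` are passed as extra `eos_token_id` values, so generation stops early if the model starts writing the next conversational turn itself.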