tencent/Hy3-preview is out: an open-weights MoE reasoning model.
• 295B total / 21B active / 256K context
• Fused fast-and-slow thinking in a single model
• First model trained on Hunyuan's rebuilt pretraining + RL infra (Feb–Apr)
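For context on the 295B-total / 21B-active split: with a sparse MoE, only the routed experts run per token, so forward cost tracks active parameters rather than total size. A back-of-the-envelope sketch — the parameter counts are from this post, but the ~2 FLOPs per active parameter per token approximation is my assumption, not something Hunyuan has published:

```python
# Rough illustration of MoE inference cost vs. a dense model of equal size.
# Assumption (not from the post): forward FLOPs/token ~ 2 * active params.
TOTAL_PARAMS = 295e9   # total parameters, as stated in the post
ACTIVE_PARAMS = 21e9   # parameters activated per token (routed experts)

flops_moe = 2 * ACTIVE_PARAMS            # per-token forward FLOPs, MoE
flops_dense = 2 * TOTAL_PARAMS           # hypothetical equal-size dense model

active_fraction = ACTIVE_PARAMS / TOTAL_PARAMS
speedup = flops_dense / flops_moe

print(f"~{active_fraction:.1%} of weights active per token")
print(f"~{speedup:.1f}x fewer forward FLOPs than an equal-size dense model")
```

So each token touches roughly 7% of the weights, giving on the order of a 14x reduction in per-token compute relative to a dense 295B model (memory footprint, of course, still reflects the full 295B).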
Benchmarks:
• SWE-Bench Verified, Terminal-Bench 2.0, BrowseComp, WideSearch – competitive results, particularly strong on agentic tool use
• Top score on Tsinghua's 2026 Spring math PhD qualifying exam
• Strong context-learning and instruction-following on Tencent's CL-bench / CL-bench-Life
More details can be found in my article: https://huggingface.co/blog/imnotkitty/hy3-preview