# How to use from Hermes Agent
## Start the llama.cpp server
```bash
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server (a quantization tag can follow
# the trailing colon, e.g. :Q4_K_M for a 4-bit build):
llama-server -hf Orion-zhen/Qwen3-30B-A3B-Instruct-2507-1M-GGUF:
```
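Once the server is up, you can sanity-check the endpoint before wiring up Hermes. A minimal sketch with curl, assuming the default port 8080 used below; llama-server only serves the model it loaded, so the `model` field here is informational:

```bash
# Request a short completion to confirm the server is responding:
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Orion-zhen/Qwen3-30B-A3B-Instruct-2507-1M-GGUF",
    "messages": [{"role": "user", "content": "Say hello in one sentence."}]
  }'
```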
## Configure Hermes
```bash
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default Orion-zhen/Qwen3-30B-A3B-Instruct-2507-1M-GGUF:
```
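To check that `model.base_url` points at the running server, you can list the models it exposes; `/v1/models` is part of the OpenAI-compatible API that llama-server implements:

```bash
# The id returned here is the model the server actually loaded:
curl http://127.0.0.1:8080/v1/models
```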
## Run Hermes
```bash
hermes
```
# Qwen3-30B-A3B-Instruct-2507-1M-GGUF

Scales the context window up from 262,144 to 1,048,576 tokens (4x) using YaRN.
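If you serve the model with llama.cpp and want the extended context, YaRN has to be enabled explicitly. A sketch, assuming the 4x scaling above and llama.cpp's rope-scaling flags; a 1M-token context also needs a very large KV cache, so lower `-c` to what your hardware can hold:

```bash
# Enable YaRN rope scaling for the extended context
# (factor and original context follow the 262144 -> 1048576 scaling above):
llama-server -hf Orion-zhen/Qwen3-30B-A3B-Instruct-2507-1M-GGUF: \
  --rope-scaling yarn \
  --rope-scale 4 \
  --yarn-orig-ctx 262144 \
  -c 1048576
```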

Due to my limited network bandwidth, I had to pick out some quantizations to upload rather than all of them. BTW, Qwen is really good at scaling model names lol.

- Format: GGUF
- Model size: 31B params
- Architecture: qwen3moe
- Available quantizations: 1-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
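To pull a specific quantization, llama.cpp's `-hf` syntax takes a quant tag after the colon. The exact tag names depend on which files were uploaded, so the one below is an assumption; check the repo's file list:

```bash
# Fetch and serve a 4-bit build (tag name assumed):
llama-server -hf Orion-zhen/Qwen3-30B-A3B-Instruct-2507-1M-GGUF:Q4_K_M
```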
