Qwen3.6-35B-A3B-UD-IQ4_XS fixed for GitHub Copilot

This model package is fixed to work with GitHub Copilot: it republishes the original GGUF unchanged, together with the Copilot-compatible Ollama configuration from this repository.

Field                Value
Published repo       johnml1135/qwen36-35b-a3b-ud-iq4xs-128k-github-copilot
Original model       https://huggingface.co/unsloth/Qwen3.6-35B-A3B-GGUF
GitHub Copilot fix   Included Modelfile with Copilot-safe Ollama tool calling
Weights              Unchanged GGUF weights from the original upstream release
GGUF file            Qwen3.6-35B-A3B-UD-IQ4_XS.gguf
Model name           qwen36-35b-a3b-ud-iq4xs-128k
Architecture         qwen36
Quantization         UD-IQ4_XS
Context length       131072 tokens

What is included

  • Qwen3.6-35B-A3B-UD-IQ4_XS.gguf
  • Modelfile
  • README.md with release notes and provenance

Use with Ollama

Download the repository contents locally and run:

ollama create my-copilot-model -f Modelfile

The included Modelfile expects ./Qwen3.6-35B-A3B-UD-IQ4_XS.gguf in the same directory.
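
For orientation, a minimal Modelfile along these lines would be enough to load the weights; this is an illustrative sketch only, and the Modelfile shipped in this repository sets its own template and parameters for Copilot compatibility:

# Illustrative sketch; the shipped Modelfile may differ
FROM ./Qwen3.6-35B-A3B-UD-IQ4_XS.gguf
PARAMETER num_ctx 131072

Once the model is created, a quick smoke test from the command line is:

ollama run my-copilot-model "Reply with a single short sentence."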

Provenance and release notes

This package keeps the original Unsloth GGUF weights unchanged and adds the Ollama Modelfile needed to make the model work cleanly with GitHub Copilot's local-model agent flows.

Validated in this repo against:

  • clean no-think chat output by default
  • structured tool calls for Copilot agent turns
  • optional think=true reasoning streams on harder prompts
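
The last point can be exercised directly against the Ollama API. The sketch below assumes a recent Ollama release that accepts the think field on /api/chat, and that the model was created as my-copilot-model as shown above:

curl http://localhost:11434/api/chat -d '{
  "model": "my-copilot-model",
  "think": true,
  "messages": [
    {"role": "user", "content": "Plan the refactor step by step."}
  ]
}'

With think enabled, Ollama returns the model's reasoning separately from the final message content; leaving it unset keeps the default clean no-think chat output described above.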

The Copilot-specific fix is in the included Modelfile and runtime settings, not in altered model weights.
