# Qwen3.6-35B-A3B-UD-IQ4_XS fixed for GitHub Copilot
This model package is fixed to work with GitHub Copilot.
It republishes the original GGUF together with the Copilot-compatible Ollama configuration from this repository.
| Field | Value |
|---|---|
| Published repo | johnml1135/qwen36-35b-a3b-ud-iq4xs-128k-github-copilot |
| Original model | https://huggingface.co/unsloth/Qwen3.6-35B-A3B-GGUF |
| GitHub Copilot fix | Included Modelfile with Copilot-safe Ollama tool calling |
| Weights | Unchanged GGUF weights from the original upstream release |
| GGUF file | Qwen3.6-35B-A3B-UD-IQ4_XS.gguf |
| Model Name | qwen36-35b-a3b-ud-iq4xs-128k |
| Architecture | qwen36 |
| Quantization | UD-IQ4_XS |
| Context Length | 131072 |
## What is included

- Qwen3.6-35B-A3B-UD-IQ4_XS.gguf
- Modelfile
- README.md with release notes and provenance
## Use with Ollama
Download the repository contents locally and run:
```shell
ollama create my-copilot-model -f Modelfile
```
The included Modelfile expects `./Qwen3.6-35B-A3B-UD-IQ4_XS.gguf` in the same directory.
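The exact Modelfile ships with this package; as orientation, a minimal sketch of what a Modelfile of this shape typically contains, assuming Ollama's standard `FROM` and `PARAMETER` directives (the real file also carries the Copilot-specific template and runtime settings):

```
FROM ./Qwen3.6-35B-A3B-UD-IQ4_XS.gguf

# Match the model's advertised 128K context window
PARAMETER num_ctx 131072
```

Use the included Modelfile rather than this sketch when creating the model, since the Copilot tool-calling fix lives in its template and settings.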
## Provenance
- Generated with `ollama-copilot-fixer`.
## Release Notes
This package keeps the original Unsloth GGUF weights and adds the Ollama Modelfile
needed to make the model work cleanly with GitHub Copilot local model agent flows.
Validated in this repo against:
- clean no-think chat output by default
- structured tool calls for Copilot agent turns
- optional `think=true` reasoning streams on harder prompts
The Copilot-specific fix is in the included Modelfile and runtime settings, not in altered model weights.
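Copilot's agent turns exercise the model through Ollama's tool-calling chat interface. As an illustration of the kind of request those turns produce, here is a sketch that builds a request body for Ollama's `/api/chat` endpoint; the `get_weather` tool and the endpoint usage are illustrative assumptions, not part of this package:

```python
import json

# Hypothetical tool definition in the OpenAI-style function schema
# that Ollama's chat API accepts
tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Return the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

# Request body for POST http://localhost:11434/api/chat (built, not sent here)
payload = {
    "model": "my-copilot-model",  # name given to `ollama create` above
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [tool],
    "stream": False,
}

print(json.dumps(payload, indent=2))
```

A model that handles Copilot agent flows cleanly should answer such a request with a structured `tool_calls` entry rather than free-text JSON, which is what the Modelfile fix targets.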