Qwen3.5-27B.Q8_0.gguf does not work!

#13
by beginor - opened

I have tried several GGUF files in this repo and found that the Q8_0 quant does not work with llama.cpp; maybe the file is corrupted. The results are:

  • Qwen3.5-27B.Q4_K_M.gguf works;
  • Qwen3.5-27B.Q5_K_M.gguf not tested yet; it may work;
  • Qwen3.5-27B.Q6_K.gguf works;
  • Qwen3.5-27B.Q8_0.gguf does not work.
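Since a corrupted download is one possible cause, a quick sanity check before blaming the quant is to verify the GGUF header: every valid GGUF file starts with the ASCII magic bytes `GGUF` followed by a little-endian uint32 format version. A minimal sketch (the file path is illustrative; this only checks the header, not the full file, so also compare the SHA-256 against the value on the repo's file page if one is listed):

```python
import struct

def check_gguf_header(path):
    """Return (magic_ok, version) for a file claiming to be GGUF.

    A valid GGUF file begins with the 4 ASCII bytes b"GGUF",
    followed by a little-endian uint32 format version.
    """
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            return False, None
        (version,) = struct.unpack("<I", f.read(4))
        return True, version

# Hypothetical usage against the problematic file:
# ok, ver = check_gguf_header("Qwen3.5-27B.Q8_0.gguf")
# print(ok, ver)
```

If the magic check fails, the download itself is broken and re-downloading should fix it; if it passes, the problem is more likely in the quantization or in llama.cpp's handling of it.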

Please refer to Qwen3.5-27B.Q8_0.gguf got dummy output with llama.cpp for details.