Use with the llama-cpp-python library
# !pip install llama-cpp-python

from llama_cpp import Llama

llm = Llama.from_pretrained(
	repo_id="dalatexcoder/Rice-Cracker-Qwen3.5-0.8B-Abliterated-GGUF",
	filename="",  # set this to one of the .gguf quant files listed in the repo
)
llm.create_chat_completion(
	messages=[
		{"role": "user", "content": "Hello! What can you do?"},
	]
)
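If you prefer to download a specific quant first and then point llama-cpp-python at the local file, here is a minimal sketch using huggingface_hub; the filename below is a hypothetical example, so substitute one of the .gguf files actually listed in this repo.

from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant file from the repo into the local Hugging Face cache.
# The exact filename is an assumption; pick a real .gguf from the repo's file list.
model_path = hf_hub_download(
	repo_id="dalatexcoder/Rice-Cracker-Qwen3.5-0.8B-Abliterated-GGUF",
	filename="rice-cracker-qwen3.5-0.8b-abliterated.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=4096)
out = llm.create_chat_completion(
	messages=[{"role": "user", "content": "Say hello in one sentence."}]
)
print(out["choices"][0]["message"]["content"])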

Notice

@lyraaaa has created a working version, which you can find here

Broken. Don't use. These quants of dalatexcoder/Rice-Cracker-Qwen3.5-0.8B-Abliterated-Base appear to be broken (wrong tokenizer, I assume). Since I am not mradermacher or bartowski, my limited resources and time keep me from creating other GGUFs for now, though I may eventually create a whole quant family.
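If you want to check the suspected tokenizer mismatch yourself, one possible sketch is to load only the GGUF metadata and print the tokenizer-related keys. This assumes a recent llama-cpp-python build where Llama exposes the GGUF key/value pairs as the metadata dict and accepts a vocab_only flag; both are assumptions on my part, not something confirmed by this card.

from llama_cpp import Llama

# Load only the vocab/metadata, not the weights (vocab_only maps to a llama.cpp model flag).
llm = Llama(model_path="path/to/quant.gguf", vocab_only=True)

# Inspect the architecture and tokenizer keys stored in the GGUF header.
for key in ("general.architecture", "tokenizer.ggml.model", "tokenizer.ggml.pre"):
	print(key, "=>", llm.metadata.get(key))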

Downloads last month: 380
Format: GGUF
Model size: 0.8B params
Architecture: qwen35

Quantizations available: 3-bit, 4-bit, 6-bit, 8-bit, 16-bit
