🚨⚠️ I HAVE REACHED HUGGING FACE'S FREE STORAGE LIMIT ⚠️🚨

I host 70+ free models as an independent contributor, and this work is unpaid.
Unless I can cover the cost of additional storage, I can no longer upload new models.

🎉 Patreon (Monthly)  |  ☕ Ko-fi (One-time)

Every contribution goes directly toward Hugging Face storage fees to keep models free for everyone.


91% fewer refusals (8/100 Uncensored vs 93/100 Original) while preserving model quality (0.0274 KL divergence).
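
The headline figure is the relative reduction in refusal count; a quick check of the arithmetic:

```python
# 91% fewer refusals: relative reduction from 93/100 to 8/100.
original_refusals = 93
uncensored_refusals = 8
reduction = (original_refusals - uncensored_refusals) / original_refusals
print(f"{reduction:.0%} fewer refusals")  # 91% fewer refusals
```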

❤️ Support My Work

Creating these models takes significant time, work, and compute. If you find them useful, consider supporting me:


Platform Link What you get
🎉 Patreon Monthly support Priority model requests
☕ Ko-fi One-time tip My eternal gratitude

Your support motivates me and goes toward improving my workflow and covering fees for storage and compute; it may even make it possible to uncensor larger models with rented cloud GPUs.


This model is well suited to creative writing and translation. The original base model's writing and translations can feel a little stiff and don't always read naturally; Qwen3.5-27B-Writer-V2-uncensored-heretic aims to fix this and improve the writing quality of Qwen3.5-27B.

This is a decensored version of ConicCat/Qwen3.5-27B-Writer-V2, made using Heretic v1.2.0 with the Arbitrary-Rank Ablation (ARA) method.

Abliteration parameters

Parameter Value
start_layer_index 31
end_layer_index 56
preserve_good_behavior_weight 0.4059
steer_bad_behavior_weight 0.0001
overcorrect_relative_weight 1.1869
neighbor_count 10

Targeted components

  • attn.o_proj
  • attn.out_proj
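
Heretic's ARA method and the weights in the table above control a more elaborate optimization, but the core idea of abliteration can be illustrated with a rank-1 sketch (not Heretic's actual implementation): project an estimated refusal direction out of each targeted projection matrix, such as attn.o_proj.

```python
import numpy as np

def ablate_direction(W: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Remove the component of W's outputs along direction v.

    W: (d_out, d_in) projection weight (e.g. attn.o_proj).
    v: (d_out,) refusal direction estimated from activations
       on "harmful" vs. "harmless" prompts (here, random for illustration).
    """
    v = v / np.linalg.norm(v)
    return W - np.outer(v, v) @ W  # (I - v v^T) W

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))   # toy stand-in for a real projection matrix
v = rng.normal(size=8)        # toy stand-in for a refusal direction
W_abl = ablate_direction(W, v)

# After ablation, W's outputs have no component along v:
print(np.allclose((v / np.linalg.norm(v)) @ W_abl, 0.0))  # True
```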

Performance

Metric This model Original model (ConicCat/Qwen3.5-27B-Writer-V2)
KL divergence 0.0274 0 (by definition)
Refusals 8/100 93/100

A lower refusal count indicates fewer content restrictions, while a lower KL divergence indicates closer agreement with the original model's output distribution. A higher refusal count means more rejections, objections, pushback, lecturing, censorship, softening, and deflection.
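
The KL divergence above compares the two models' next-token distributions; a minimal sketch of the per-position computation, using random logits as stand-ins for real model outputs:

```python
import numpy as np

def kl_divergence(p_logits, q_logits):
    """KL(P || Q) between the softmax distributions implied by two logit vectors."""
    p = np.exp(p_logits - p_logits.max()); p /= p.sum()
    q = np.exp(q_logits - q_logits.max()); q /= q.sum()
    return float(np.sum(p * (np.log(p) - np.log(q))))

rng = np.random.default_rng(0)
base = rng.normal(size=100)                          # stand-in: original model's logits
perturbed = base + rng.normal(scale=0.05, size=100)  # stand-in: abliterated model's logits

print(kl_divergence(base, base))       # 0.0 -- identical models, "by definition"
print(kl_divergence(base, perturbed))  # small positive number
```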

MMLU test results:

Original:

Tasks Version Filter n-shot Metric Value Stderr
mmlu 2 none acc 0.8562 ± 0.0028
- humanities 2 none acc 0.8047 ± 0.0056
- formal_logic 1 none 0 acc 0.7302 ± 0.0397
- high_school_european_history 1 none 0 acc 0.9030 ± 0.0231
- high_school_us_history 1 none 0 acc 0.9412 ± 0.0165
- high_school_world_history 1 none 0 acc 0.9409 ± 0.0153
- international_law 1 none 0 acc 0.9256 ± 0.0240
- jurisprudence 1 none 0 acc 0.9074 ± 0.0280
- logical_fallacies 1 none 0 acc 0.9202 ± 0.0213
- moral_disputes 1 none 0 acc 0.8584 ± 0.0188
- moral_scenarios 1 none 0 acc 0.7352 ± 0.0148
- philosophy 1 none 0 acc 0.8842 ± 0.0182
- prehistory 1 none 0 acc 0.9167 ± 0.0154
- professional_law 1 none 0 acc 0.7080 ± 0.0116
- world_religions 1 none 0 acc 0.9181 ± 0.0210
- other 2 none acc 0.8735 ± 0.0057
- business_ethics 1 none 0 acc 0.8300 ± 0.0378
- clinical_knowledge 1 none 0 acc 0.8868 ± 0.0195
- college_medicine 1 none 0 acc 0.8382 ± 0.0281
- global_facts 1 none 0 acc 0.6200 ± 0.0488
- human_aging 1 none 0 acc 0.8430 ± 0.0244
- management 1 none 0 acc 0.8738 ± 0.0329
- marketing 1 none 0 acc 0.9530 ± 0.0139
- medical_genetics 1 none 0 acc 0.9700 ± 0.0171
- miscellaneous 1 none 0 acc 0.9387 ± 0.0086
- nutrition 1 none 0 acc 0.9020 ± 0.0170
- professional_accounting 1 none 0 acc 0.8014 ± 0.0238
- professional_medicine 1 none 0 acc 0.9522 ± 0.0130
- virology 1 none 0 acc 0.5723 ± 0.0385
- social sciences 2 none acc 0.9162 ± 0.0049
- econometrics 1 none 0 acc 0.8158 ± 0.0365
- high_school_geography 1 none 0 acc 0.9596 ± 0.0140
- high_school_government_and_politics 1 none 0 acc 0.9896 ± 0.0073
- high_school_macroeconomics 1 none 0 acc 0.9282 ± 0.0131
- high_school_microeconomics 1 none 0 acc 0.9664 ± 0.0117
- high_school_psychology 1 none 0 acc 0.9541 ± 0.0090
- human_sexuality 1 none 0 acc 0.9160 ± 0.0243
- professional_psychology 1 none 0 acc 0.8725 ± 0.0135
- public_relations 1 none 0 acc 0.7636 ± 0.0407
- security_studies 1 none 0 acc 0.8449 ± 0.0232
- sociology 1 none 0 acc 0.9652 ± 0.0130
- us_foreign_policy 1 none 0 acc 0.9400 ± 0.0239
- stem 2 none acc 0.8576 ± 0.0060
- abstract_algebra 1 none 0 acc 0.8000 ± 0.0402
- anatomy 1 none 0 acc 0.8296 ± 0.0325
- astronomy 1 none 0 acc 0.9671 ± 0.0145
- college_biology 1 none 0 acc 0.9792 ± 0.0119
- college_chemistry 1 none 0 acc 0.6800 ± 0.0469
- college_computer_science 1 none 0 acc 0.8300 ± 0.0378
- college_mathematics 1 none 0 acc 0.6800 ± 0.0469
- college_physics 1 none 0 acc 0.8235 ± 0.0379
- computer_security 1 none 0 acc 0.8700 ± 0.0338
- conceptual_physics 1 none 0 acc 0.9404 ± 0.0155
- electrical_engineering 1 none 0 acc 0.8276 ± 0.0315
- elementary_mathematics 1 none 0 acc 0.9101 ± 0.0147
- high_school_biology 1 none 0 acc 0.9516 ± 0.0122
- high_school_chemistry 1 none 0 acc 0.8522 ± 0.0250
- high_school_computer_science 1 none 0 acc 0.9300 ± 0.0256
- high_school_mathematics 1 none 0 acc 0.6741 ± 0.0286
- high_school_physics 1 none 0 acc 0.8609 ± 0.0283
- high_school_statistics 1 none 0 acc 0.8704 ± 0.0229
- machine_learning 1 none 0 acc 0.7857 ± 0.0389
Groups Version Filter n-shot Metric Value Stderr
mmlu 2 none acc 0.8562 ± 0.0028
- humanities 2 none acc 0.8047 ± 0.0056
- other 2 none acc 0.8735 ± 0.0057
- social sciences 2 none acc 0.9162 ± 0.0049
- stem 2 none acc 0.8576 ± 0.0060

Heretic:

Tasks Version Filter n-shot Metric Value Stderr
mmlu 2 none acc 0.8469 ± 0.0029
- humanities 2 none acc 0.7858 ± 0.0058
- formal_logic 1 none 0 acc 0.7302 ± 0.0397
- high_school_european_history 1 none 0 acc 0.8970 ± 0.0237
- high_school_us_history 1 none 0 acc 0.9412 ± 0.0165
- high_school_world_history 1 none 0 acc 0.9367 ± 0.0158
- international_law 1 none 0 acc 0.9256 ± 0.0240
- jurisprudence 1 none 0 acc 0.9167 ± 0.0267
- logical_fallacies 1 none 0 acc 0.8957 ± 0.0240
- moral_disputes 1 none 0 acc 0.8526 ± 0.0191
- moral_scenarios 1 none 0 acc 0.6458 ± 0.0160
- philosophy 1 none 0 acc 0.8810 ± 0.0184
- prehistory 1 none 0 acc 0.9043 ± 0.0164
- professional_law 1 none 0 acc 0.7086 ± 0.0116
- world_religions 1 none 0 acc 0.9298 ± 0.0196
- other 2 none acc 0.8725 ± 0.0057
- business_ethics 1 none 0 acc 0.8200 ± 0.0386
- clinical_knowledge 1 none 0 acc 0.9057 ± 0.0180
- college_medicine 1 none 0 acc 0.8613 ± 0.0264
- global_facts 1 none 0 acc 0.5600 ± 0.0499
- human_aging 1 none 0 acc 0.8341 ± 0.0250
- management 1 none 0 acc 0.9223 ± 0.0265
- marketing 1 none 0 acc 0.9573 ± 0.0133
- medical_genetics 1 none 0 acc 0.9700 ± 0.0171
- miscellaneous 1 none 0 acc 0.9425 ± 0.0083
- nutrition 1 none 0 acc 0.9020 ± 0.0170
- professional_accounting 1 none 0 acc 0.7766 ± 0.0248
- professional_medicine 1 none 0 acc 0.9338 ± 0.0151
- virology 1 none 0 acc 0.5723 ± 0.0385
- social sciences 2 none acc 0.9110 ± 0.0050
- econometrics 1 none 0 acc 0.8070 ± 0.0371
- high_school_geography 1 none 0 acc 0.9495 ± 0.0156
- high_school_government_and_politics 1 none 0 acc 0.9845 ± 0.0089
- high_school_macroeconomics 1 none 0 acc 0.9205 ± 0.0137
- high_school_microeconomics 1 none 0 acc 0.9664 ± 0.0117
- high_school_psychology 1 none 0 acc 0.9486 ± 0.0095
- human_sexuality 1 none 0 acc 0.9084 ± 0.0253
- professional_psychology 1 none 0 acc 0.8742 ± 0.0134
- public_relations 1 none 0 acc 0.7727 ± 0.0401
- security_studies 1 none 0 acc 0.8204 ± 0.0246
- sociology 1 none 0 acc 0.9602 ± 0.0138
- us_foreign_policy 1 none 0 acc 0.9400 ± 0.0239
- stem 2 none acc 0.8503 ± 0.0061
- abstract_algebra 1 none 0 acc 0.7100 ± 0.0456
- anatomy 1 none 0 acc 0.8444 ± 0.0313
- astronomy 1 none 0 acc 0.9605 ± 0.0158
- college_biology 1 none 0 acc 0.9722 ± 0.0137
- college_chemistry 1 none 0 acc 0.6400 ± 0.0482
- college_computer_science 1 none 0 acc 0.8300 ± 0.0378
- college_mathematics 1 none 0 acc 0.7100 ± 0.0456
- college_physics 1 none 0 acc 0.8529 ± 0.0352
- computer_security 1 none 0 acc 0.8600 ± 0.0349
- conceptual_physics 1 none 0 acc 0.9362 ± 0.0160
- electrical_engineering 1 none 0 acc 0.8276 ± 0.0315
- elementary_mathematics 1 none 0 acc 0.9074 ± 0.0149
- high_school_biology 1 none 0 acc 0.9387 ± 0.0136
- high_school_chemistry 1 none 0 acc 0.8473 ± 0.0253
- high_school_computer_science 1 none 0 acc 0.9200 ± 0.0273
- high_school_mathematics 1 none 0 acc 0.6630 ± 0.0288
- high_school_physics 1 none 0 acc 0.8411 ± 0.0299
- high_school_statistics 1 none 0 acc 0.8704 ± 0.0229
- machine_learning 1 none 0 acc 0.7768 ± 0.0395
Groups Version Filter n-shot Metric Value Stderr
mmlu 2 none acc 0.8469 ± 0.0029
- humanities 2 none acc 0.7858 ± 0.0058
- other 2 none acc 0.8725 ± 0.0057
- social sciences 2 none acc 0.9110 ± 0.0050
- stem 2 none acc 0.8503 ± 0.0061

MMLU (Massive Multitask Language Understanding) consists of multiple-choice questions across 57 subjects (math, history, law, medicine, etc.).
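
The accuracy ("acc") figures above follow the usual multiple-choice scoring scheme: the model assigns a score (e.g. a log-likelihood) to each answer choice, and the highest-scoring choice is compared to the gold answer. A minimal sketch with made-up scores:

```python
def mmlu_accuracy(examples):
    """Fraction of examples where the highest-scoring choice is the gold answer.

    Each example has four choice scores (e.g. log-likelihoods) and the
    index of the correct answer. Scores here are invented for illustration.
    """
    correct = sum(
        max(range(4), key=lambda i: ex["choice_scores"][i]) == ex["answer"]
        for ex in examples
    )
    return correct / len(examples)

examples = [
    {"choice_scores": [-2.1, -0.3, -4.0, -3.2], "answer": 1},  # argmax 1: correct
    {"choice_scores": [-1.0, -0.9, -2.5, -0.8], "answer": 2},  # argmax 3: wrong
]
print(mmlu_accuracy(examples))  # 0.5
```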

GGUF Version

GGUF quantizations are available at llmfan46/Qwen3.5-27B-Writer-V2-uncensored-heretic-GGUF.


ConicCat/Qwen3.5-27B-Writer-V2

A tentative second version. Hopefully, it's better.

A writing & roleplay finetune of Qwen3.5 27B. The primary emphasis is on writing quality, since it generalizes strongly across both domains.

The basic idea is to use a curriculum learning setup to overcome the lack of high quality roleplay data by first training on lower quality roleplay data, then training on higher quality writing data. Starting from ConicCat/Qwen3.5-Antirep-27B, the model was trained on a roughly equal mixture of instruct / roleplay / writing data for three epochs. The model was then trained for eleven epochs on a smaller dataset of book chunks.
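
The schedule described above (three epochs on the mixture, then eleven on book chunks) can be sketched as a simple generator; the dataset contents here are placeholders for the real data:

```python
def curriculum_schedule(mixed_data, book_chunks):
    """Yield (stage, epoch, dataset) steps for the two-stage curriculum:
    3 epochs on the instruct/roleplay/writing mixture, then 11 epochs
    on the smaller set of book chunks."""
    for epoch in range(3):
        yield ("stage1_mixture", epoch, mixed_data)
    for epoch in range(11):
        yield ("stage2_books", epoch, book_chunks)

steps = list(curriculum_schedule(["<mixture sample>"], ["<book chunk>"]))
print(len(steps))  # 14 epochs in total
```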

Recommended Settings

  • ChatML template with a <think>\n\n</think>\n or <think>\n prefill; this should make the model think less.
  • temperature = 0.7
  • top_p = 0.95
  • A moderate DRY penalty of ~0.4-0.8 should work well.
  • For quants, Q4_K_M runs well with ~100k context on 24 GB of VRAM.
  • IQ4_XS should fit on 16 GB of VRAM with about 20-24k context using the Vulkan backend, although it's pretty tight and may require closing other open programs, etc.
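
A minimal sketch of assembling a ChatML prompt with the recommended empty-think prefill (the `\n` sequences above become real newlines). The `<|im_start|>`/`<|im_end|>` markers are standard ChatML; whether they match this model's exact chat template is an assumption, and in practice you would use your backend's built-in template together with temperature = 0.7 and top_p = 0.95.

```python
def build_chatml_prompt(system: str, user: str,
                        prefill: str = "<think>\n\n</think>\n") -> str:
    """Assemble a ChatML prompt ending in an assistant 'think' prefill.

    Prefilling an empty think block nudges the model to skip (or shorten)
    its reasoning phase, per the recommended settings above.
    """
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n{prefill}"
    )

prompt = build_chatml_prompt("You are a creative writer.", "Write an opening line.")
print(prompt.endswith("</think>\n"))  # True
```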

Datasets

  • ConicCat/AntiRep to mitigate repetition.

  • internlm/Condor-SFT-20K for instruct data; even though instruct capabilities are not the primary focus, adding some instruct data helps mitigate forgetting and maintains general intelligence and instruction-following capabilities.

  • ConicCat/Gutenberg-SFT. A reformatted version of the original Gutenberg DPO dataset by jondurbin for SFT with some slight augmentation to address many of the samples being overly long.

  • ConicCat/MiniC2_V3.2. The venerable C2, with cleaned and reformatted system prompts, and all user / assistant turns replaced by V3.2.

  • A dataset of backtranslated books. Unfortunately, I am unable to release this set as all of the data is under copyright.

Model details

  • Model size: 27B params
  • Tensor type: BF16
  • Base model: Qwen/Qwen3.5-27B