---
license: gemma
library_name: transformers
pipeline_tag: text-generation
base_model: google/gemma-3-12b-it
base_model_relation: quantized
language:
- en
tags:
- abliteration
- heretic
- uncensored
- gemma
- ltx-2
- comfyui
- video-generation
- text-encoder
---

Gemma 3 12B IT - Heretic (Abliterated)

An abliterated version of Google's Gemma 3 12B IT created using Heretic. This model has reduced refusals while maintaining model quality, making it suitable as an uncensored text encoder for video generation models like LTX-2.

Model Details

  • Base Model: google/gemma-3-12b-it
  • Abliteration Method: Heretic v1.1.0
  • Trial Selected: Trial 99
  • Refusals: 7/100 (vs 100/100 original)
  • KL Divergence: 0.0826 (minimal model damage)
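
Heretic's KL metric compares the first-token probability distributions of the original and abliterated models; lower values mean the abliterated model's behavior stays closer to the original. A minimal sketch of that comparison (the function name and logit values are illustrative, not Heretic's actual code):

```python
import numpy as np

def first_token_kl(p_logits, q_logits):
    """KL(P || Q) between two first-token distributions, given raw logits."""
    p = np.exp(p_logits - p_logits.max()); p /= p.sum()  # softmax, original model
    q = np.exp(q_logits - q_logits.max()); q /= q.sum()  # softmax, abliterated model
    return float(np.sum(p * (np.log(p) - np.log(q))))

# Identical logits give KL = 0; the value grows as the distributions drift apart
```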

Files

HuggingFace Format (for transformers, llama.cpp conversion)

model-00001-of-00005.safetensors
model-00002-of-00005.safetensors
...
config.json
tokenizer.model
tokenizer_config.json

ComfyUI Format (for LTX-2 text encoder)

comfyui/gemma3_12B_it_heretic.safetensors            # bf16, 22GB
comfyui/gemma3_12B_it_heretic_fp8_e4m3fn.safetensors # fp8, 11GB

GGUF Format (for llama.cpp)

Quant    Size    Quality     Recommendation
F16      22GB    Lossless    Reference, same as original
Q8_0     12GB    Excellent   Best quality quantization
Q6_K     9.0GB   Very Good   High quality, good compression
Q5_K_M   7.9GB   Good        Balanced quality/size
Q5_K_S   7.7GB   Good        Slightly smaller Q5
Q4_K_M   6.8GB   Good        ⭐ Recommended
Q4_K_S   6.5GB   Decent      Smaller Q4 variant
Q3_K_M   5.6GB   Acceptable  For very low VRAM only

gguf/gemma-3-12b-it-heretic-f16.gguf
gguf/gemma-3-12b-it-heretic-Q8_0.gguf
gguf/gemma-3-12b-it-heretic-Q6_K.gguf
gguf/gemma-3-12b-it-heretic-Q5_K_M.gguf
gguf/gemma-3-12b-it-heretic-Q5_K_S.gguf
gguf/gemma-3-12b-it-heretic-Q4_K_M.gguf
gguf/gemma-3-12b-it-heretic-Q4_K_S.gguf
gguf/gemma-3-12b-it-heretic-Q3_K_M.gguf

Note: GGUF support in ComfyUI for Gemma text encoders is experimental. See PR #402 for status. The GGUFs work with llama.cpp directly.

Do abliterated models make a difference for LTX-2?

I took a deep dive into this topic and found that the answer may be no. While abliteration does slightly change the video output, most of its effect lands on layer 48, the final decision-making layer. LTXV averages all the layers, which may wash out layer 48's contribution. It would still be worthwhile for someone with deeper knowledge to confirm this.
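
To see why averaging could dilute a change concentrated in a single layer, here is a toy numerical illustration (the layer count, hidden size, and random tensors are all hypothetical, not taken from the actual models):

```python
import torch

torch.manual_seed(0)
num_layers, hidden = 48, 16

# Hypothetical per-layer hidden states for one token of a prompt
original = torch.randn(num_layers, hidden)
ablated = original.clone()
ablated[-1] += torch.randn(hidden)  # suppose abliteration shifts only the final layer

final_diff = (original[-1] - ablated[-1]).norm()        # large per-layer change
avg_diff = (original.mean(0) - ablated.mean(0)).norm()  # change after averaging
# Averaging shrinks the final-layer shift by a factor of num_layers
```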

Usage

With Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained(
    "DreamFast/gemma-3-12b-it-heretic",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained("DreamFast/gemma-3-12b-it-heretic")

prompt = "Write a story about a bank heist"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

With ComfyUI (LTX-2)

  1. Download the ComfyUI format file:
     - comfyui/gemma3_12B_it_heretic.safetensors (bf16, 22GB), or
     - comfyui/gemma3_12B_it_heretic_fp8_e4m3fn.safetensors (fp8, 11GB)
  2. Place it in ComfyUI/models/text_encoders/
  3. In your LTX-2 workflow, use the LTXAVTextEncoderLoader node and select the heretic file

Tip: For multi-GPU setups or CPU offloading, check out ComfyUI-LTX2-MultiGPU for optimized LTX-2 workflows.

With llama.cpp

# Using llama-server
llama-server -m gemma-3-12b-it-heretic-Q4_K_M.gguf

Or with llama-cli

llama-cli -m gemma-3-12b-it-heretic-Q4_K_M.gguf -p "Write a story about a bank heist"

Why Abliterate?

Even when Gemma doesn't outright refuse a prompt, it may "sanitize" or weaken certain concepts in the embeddings. For video generation with LTX-2, this can result in:

  • Weaker adherence to creative prompts
  • Softened or altered visual outputs
  • Less faithful representation of requested content

Abliteration removes this soft censorship, resulting in more faithful prompt encoding.
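
Mechanically, abliteration (directional ablation) is usually implemented by estimating a "refusal direction" in the residual stream and projecting it out of the model's weight matrices. A minimal sketch of that projection, not Heretic's actual implementation (`ablate_direction`, `W`, and `r` are illustrative):

```python
import torch

def ablate_direction(W, r):
    """Project the refusal direction r out of weight matrix W (d_out x d_in).

    Afterwards W can no longer write any component along r into the
    residual stream: r.T @ W_ablated == 0.
    """
    r = r / r.norm()                    # unit refusal direction
    return W - torch.outer(r, r) @ W   # subtract the rank-1 projection onto r

torch.manual_seed(0)
W = torch.randn(8, 4)   # toy output-projection weight
r = torch.randn(8)      # toy refusal direction
W_ablated = ablate_direction(W, r)
```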

Abliteration Process

Created using Heretic with the following evaluation results:

* Evaluating...
  * Obtaining first-token probability distributions...
  * KL divergence: 0.0826
  * Counting model refusals...
  * Refusals: 7/100

The low KL divergence (0.0826) indicates minimal model damage, while 7/100 refusals means 93% of previously-refused prompts now work.

Limitations

  • This model inherits all limitations of the base Gemma 3 12B model
  • Abliteration reduces but does not completely eliminate refusals
  • NVFP4 quantization is not supported for text encoders in ComfyUI (use fp8 instead)

License

This model is subject to the Gemma license.

Acknowledgments
