---
license: gemma
language:
- en
base_model: google/gemma-3-12b-it
tags:
- uncensored
- text-generation
- vision-language
- multimodal
---

# Gemma 3 – 12B IT Uncensored

## Model Description
This repository hosts Gemma 3 – 12B IT Uncensored, an instruction-tuned, 12-billion-parameter model based on Google's Gemma 3 architecture, along with its vision-language (VLM) variant.
The model is intended for advanced local and research use. It offers strong instruction following, reasoning, and coding, and (in the VLM variant) multimodal image + text understanding, with minimal additional alignment constraints.
## Model Overview
- Model Name: Gemma 3 – 12B IT Uncensored
- VLM Variant: Gemma 3 – 12B IT VLM Uncensored
- Base Architecture: Gemma 3, 12 billion parameters (12B)
- Base Model Developer: Google
- Curator / Release: BrainDAO
- License: Gemma License (inherits from the base model)
- Intended Use: Instruction following, reasoning, coding, conversation, and multimodal understanding
## What Is This Model?
This is an uncensored derivative of the Gemma 3 12B Instruction-Tuned (IT) model.
No additional safety layers, refusals, or alignment constraints have been intentionally added beyond those present in the base model.
The goal is to provide:
- Greater freedom in system prompt design
- Fewer artificial refusals
- Strong general reasoning and instruction adherence
- Full user control in local or private deployments
## Key Features & Capabilities

### Text Model (LLM)
- High-quality instruction following
- Strong logical and analytical reasoning
- Coding assistance across multiple programming languages
- Conversational and assistant-style interactions
- Suitable for agentic and tool-augmented workflows
### Vision-Language Model (VLM)
- Image understanding and description
- Visual question answering (VQA)
- Image + text instruction following
- Multimodal chat and assistant use cases
## Chat Template & System Prompt
The model follows the Gemma instruction format.
Example (note that Gemma's control tokens are `<start_of_turn>` and `<end_of_turn>`, and the assistant role is named `model`):

```
<bos><start_of_turn>system
You are a helpful AI assistant.
<end_of_turn>
<start_of_turn>user
{your prompt here}
<end_of_turn>
<start_of_turn>model
```
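The turn structure above can also be assembled programmatically. Below is a minimal sketch; the `build_gemma_prompt` helper is illustrative, not part of any library, and most inference frameworks apply the bundled chat template for you:

```python
def build_gemma_prompt(system: str, user: str) -> str:
    """Assemble a Gemma-style prompt string using the turn format shown above."""
    prompt = "<bos>"
    if system:
        prompt += f"<start_of_turn>system\n{system}\n<end_of_turn>\n"
    prompt += f"<start_of_turn>user\n{user}\n<end_of_turn>\n"
    # Leave the model turn open so generation continues from here.
    prompt += "<start_of_turn>model\n"
    return prompt

print(build_gemma_prompt("You are a helpful AI assistant.",
                         "Summarize GGUF in one sentence."))
```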
For the VLM variant, images must be provided using the multimodal input format supported by your inference framework.
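As one illustration, frameworks that accept structured chat messages commonly interleave image and text parts inside a single user turn. The field names below (`type`, `url`, `text`) vary between frameworks, so treat this as a sketch of the shape rather than a fixed schema:

```python
# One common layout for a multimodal chat message: the image part and
# the text instruction travel together in a single user turn.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "photo.jpg"},
            {"type": "text", "text": "Describe this image in one sentence."},
        ],
    }
]

# Pull out the text instruction(s) from the mixed content list.
text_parts = [p for p in messages[0]["content"] if p["type"] == "text"]
print(text_parts[0]["text"])
```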
## Intended Use Cases

- General-purpose assistant – reasoning, writing, and conversation
- Coding assistant – generation, debugging, and refactoring
- Research & analysis – structured reasoning and synthesis
- Agentic workflows – tool use, planners, function calling
- Multimodal applications (VLM) – image QA, captioning, visual reasoning
- Local & private deployment – full control over data and prompts
## License & Usage Notes
This model inherits the Gemma License from its base model (google/gemma-3-12b-it).
- The Gemma License is a custom license provided by Google
- You must review and comply with the Gemma terms of use before downloading, using, or redistributing this model
- This repository does not relicense the model under Apache-2.0, MIT, or any other standard open-source license
Users are solely responsible for ensuring their use complies with the Gemma License and all applicable laws and regulations.
## Acknowledgements
- Google for the Gemma 3 architecture and base model
- BrainDAO for curation and release
- The open-source community supporting local inference, quantization, and deployment tools
## Community & Support
- Use the Hugging Face Discussions tab for questions and updates
- Community feedback and contributions are welcome
## GGUF File List

| Filename | Quantization | Size |
|---|---|---|
| gemma-3-12b-it-uncensored_F16.gguf | FP16 | 21.92 GB |
| gemma-3-12b-it-uncensored_Q2_k.gguf | Q2 | 4.44 GB |
| gemma-3-12b-it-uncensored_Q3_k_m.gguf | Q3 | 5.6 GB |
| gemma-3-12b-it-uncensored_Q4_k_m.gguf (recommended) | Q4 | 6.8 GB |
| gemma-3-12b-it-uncensored_Q5_k_m.gguf | Q5 | 7.87 GB |
| gemma-3-12b-it-uncensored_Q6_k.gguf | Q6 | 9 GB |
| gemma-3-12b-it-uncensored_Q8_0.gguf | Q8 | 11.65 GB |
| gemma-3-12b-it-uncensored_mmproj-f16.gguf | FP16 | 814.63 MB |
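As a rough way to compare the quantization levels above, dividing file size by parameter count gives an approximate bits-per-weight figure. This is a back-of-the-envelope estimate only: GGUF files also contain metadata and keep some tensors at higher precision, so real per-weight precision differs somewhat.

```python
def approx_bits_per_weight(file_size_gb: float, n_params_billions: float = 12.0) -> float:
    """Rough bits-per-weight estimate: total bits in the file / parameter count.

    Assumes decimal gigabytes and ignores GGUF metadata overhead and
    mixed-precision tensors, so treat the result as an approximation.
    """
    total_bits = file_size_gb * 1e9 * 8  # GB -> bits
    return total_bits / (n_params_billions * 1e9)

# e.g. the Q4_k_m file (6.8 GB) works out to roughly 4.5 bits per weight
print(round(approx_bits_per_weight(6.8), 2))
```

This kind of estimate can help gauge how much quality headroom one quant has over another before downloading.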