---
license: apache-2.0
language:
- en
tags:
- vision-language
- multimodal
- uncensored
- gguf
- text-generation
- image-understanding
- Gemma-3-4B
---

# Gemma-3-4B-VL-it-Gemini-Pro-Heretic-Uncensored-Thinking-GGUF

## Model Description
This repository contains Gemma-3-4B-VL-it-Gemini-Pro-Heretic-Uncensored-Thinking-GGUF, a 4B-parameter vision-language instruction-tuned model provided in GGUF format for efficient local inference.
The model is designed for open-ended reasoning, multimodal understanding, and minimal alignment constraints, making it suitable for experimentation, research, and advanced local deployments.
## Model Summary
- Model ID: Gemma-3-4B-VL-it-Gemini-Pro-Heretic-Uncensored-Thinking-GGUF
- Architecture: Gemma 3 (4B parameters)
- Type: Vision-Language (Text + Image)
- Format: GGUF
- Publisher: mradermacher
- License: Apache 2.0 (inherits from base model)
## Key Characteristics
- Multimodal input support (text + images)
- Instruction-tuned for conversational and reasoning tasks
- Reduced content filtering and alignment constraints
- Optimized for local inference runtimes
- Suitable for research, exploration, and advanced user workflows
> ⚠️ This model is uncensored. Outputs may include sensitive or unfiltered content. Use responsibly.
## Supported Use Cases
### Text-Based
- Conversational assistants
- Creative writing and storytelling
- Summarization and rewriting
- General reasoning and analysis
### Vision + Text
- Image captioning
- Visual question answering
- Scene and object understanding
- Multimodal reasoning tasks
## GGUF Compatibility
This model can be used with GGUF-compatible runtimes such as:
- llama.cpp
- Ollama (GGUF-based builds)
- Other local inference engines supporting GGUF
Performance and supported features may vary depending on runtime and hardware.
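For Ollama specifically, a local GGUF file can be imported through a Modelfile. The sketch below assumes the Q4_K_M file from the file list has already been downloaded into the working directory; the model name `gemma3-heretic` and the temperature value are arbitrary choices, not part of this repository:

```text
# Modelfile (sketch) — FROM points at a local GGUF file
FROM ./Gemma-3-4B-VL-it-Gemini-Pro-Heretic-Uncensored-Thinking_Q4_k_m.gguf
PARAMETER temperature 0.7
```

Import and run it with `ollama create gemma3-heretic -f Modelfile` followed by `ollama run gemma3-heretic`.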
## Basic Usage Example
### Command Line (llama.cpp-style)
```bash
./main \
  -m Gemma-3-4B-VL-it-Gemini-Pro-Heretic-Uncensored-Thinking_F16.gguf \
  -p "Describe the key idea behind multimodal AI models."
```
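For image input, llama.cpp loads the separate mmproj projector file alongside the main model. A sketch of a multimodal invocation, assuming both files from the file list sit in the current directory and `photo.jpg` is a placeholder image path; the `llama-mtmd-cli` binary name and flags reflect recent llama.cpp builds, and older builds may name the multimodal tool differently:

```shell
# Sketch: multimodal inference with llama.cpp's llama-mtmd-cli.
# File names are taken from this repository's file list; photo.jpg is a placeholder.
MODEL=Gemma-3-4B-VL-it-Gemini-Pro-Heretic-Uncensored-Thinking_Q4_k_m.gguf
MMPROJ=Gemma-3-4B-VL-it-Gemini-Pro-Heretic-Uncensored-Thinking_mmproj_f16.gguf

# Build the command first so it can be inspected before running it with: eval "$CMD"
CMD="llama-mtmd-cli -m $MODEL --mmproj $MMPROJ --image photo.jpg -p 'Describe this image.'"
echo "$CMD"
```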
## Usage Notes
- Provide clear, explicit prompts for best results
- When using images, ensure proper formatting and resolution
- Add moderation or filtering layers if deploying in public-facing applications
## Ethical Considerations
Due to its uncensored nature:
- Not recommended for unrestricted public deployment
- Should not be used in safety-critical environments
- Users are responsible for compliance with applicable laws and policies
## Acknowledgements
- Gemma base model contributors
- Open-source inference and quantization communities
- Tools and runtimes enabling efficient local LLM deployment
## GGUF File List
| Filename | Quant | Size |
|---|---|---|
| Gemma-3-4B-VL-it-Gemini-Pro-Heretic-Uncensored-Thinking_F16.gguf | FP16 | 7.23 GB |
| Gemma-3-4B-VL-it-Gemini-Pro-Heretic-Uncensored-Thinking_Q2_k.gguf | Q2 | 1.61 GB |
| Gemma-3-4B-VL-it-Gemini-Pro-Heretic-Uncensored-Thinking_Q3_k_m.gguf | Q3 | 1.95 GB |
| Gemma-3-4B-VL-it-Gemini-Pro-Heretic-Uncensored-Thinking_Q4_k_m.gguf (recommended) | Q4 | 2.32 GB |
| Gemma-3-4B-VL-it-Gemini-Pro-Heretic-Uncensored-Thinking_Q5_k_m.gguf | Q5 | 2.64 GB |
| Gemma-3-4B-VL-it-Gemini-Pro-Heretic-Uncensored-Thinking_Q6_k.gguf | Q6 | 2.97 GB |
| Gemma-3-4B-VL-it-Gemini-Pro-Heretic-Uncensored-Thinking_Q8_0.gguf | Q8 | 3.85 GB |
| Gemma-3-4B-VL-it-Gemini-Pro-Heretic-Uncensored-Thinking_mmproj_f16.gguf | FP16 (mmproj) | 811.82 MB |
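After downloading, a quick integrity check is possible without loading the model: every valid GGUF file begins with the 4-byte ASCII magic `GGUF`, so a truncated download or an HTML error page saved under a `.gguf` name can be caught early. A minimal sketch:

```shell
# Check that a file starts with the GGUF magic bytes ("GGUF").
is_gguf() {
  [ "$(head -c 4 "$1" 2>/dev/null)" = "GGUF" ]
}

# Usage:
#   is_gguf Gemma-3-4B-VL-it-Gemini-Pro-Heretic-Uncensored-Thinking_Q4_k_m.gguf \
#     && echo "looks like a GGUF file"
```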