---
license: apache-2.0
tags:
- text-generation-inference
- uncensored
- abliterated
- unfiltered
- unredacted
- max
- llama.cpp
- legal
- prithivMLmods/Qwen3-VL-8B-Thinking-Unredacted-MAX
- en
---

# Qwen3-VL-8B-Thinking-Unredacted-MAX-GGUF

## Model Description
Qwen3-VL-8B-Thinking-Unredacted-MAX is an unredacted evolution of the original Qwen3-VL-8B-Thinking model, fine-tuned with abliterated training strategies designed to minimize or neutralize internal refusal mechanisms while preserving the model's core multimodal reasoning capabilities. It understands and analyzes complex visual inputs in depth and generates unrestricted, richly detailed, and contextually nuanced captions, explanations, and analyses across artistic, technical, scientific, forensic, and abstract domains. As an 8-billion-parameter vision-language system, it delivers higher-fidelity outputs with stronger reasoning and descriptive accuracy than smaller variants, making it well suited for advanced data annotation, accessibility enhancement, creative storytelling, historical or medical dataset curation, and rigorous red-teaming or bias-evaluation studies. It balances computational efficiency, output fidelity, and versatility, positioning it as a state-of-the-art tool for researchers, developers, and professionals who require unrestricted, high-precision vision-language reasoning and generation.
## Qwen3-VL-8B-Thinking-Unredacted-MAX [GGUF]
| File Name | Quant Type | File Size | File Link |
|---|---|---|---|
| Qwen3-VL-8B-Thinking-Unredacted-MAX.BF16.gguf | BF16 | 16.4 GB | Download |
| Qwen3-VL-8B-Thinking-Unredacted-MAX.Q8_0.gguf | Q8_0 | 8.71 GB | Download |
| Qwen3-VL-8B-Thinking-Unredacted-MAX.mmproj-bf16.gguf | mmproj-bf16 | 1.16 GB | Download |
| Qwen3-VL-8B-Thinking-Unredacted-MAX.mmproj-q8_0.gguf | mmproj-q8_0 | 752 MB | Download |
## Quants Usage

(sorted by size, not necessarily quality; IQ-quants are often preferable over similarly sized non-IQ quants)

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):
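As a hedged sketch of loading these files with llama.cpp: the `llama-mtmd-cli` tool ships with recent llama.cpp builds for multimodal models, though flag names may vary across versions, and the image path and prompt below are placeholders. Pair the model and mmproj files at matching precision (BF16 with mmproj-bf16, Q8_0 with mmproj-q8_0).

```shell
# Run the vision-language model with llama.cpp's multimodal CLI.
# -m loads the quantized language model; --mmproj loads the matching
# multimodal projector so image inputs can be encoded.
./llama-mtmd-cli \
  -m Qwen3-VL-8B-Thinking-Unredacted-MAX.Q8_0.gguf \
  --mmproj Qwen3-VL-8B-Thinking-Unredacted-MAX.mmproj-q8_0.gguf \
  --image ./example.jpg \
  -p "Describe this image in detail."
```

The BF16 pair offers the highest fidelity at roughly twice the memory footprint of Q8_0.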
## GGUF File List

| Filename | Size | Download |
|---|---|---|
| Qwen3-VL-8B-Thinking-Unredacted-MAX.BF16.gguf (recommended) | 15.26 GB | Download |
| Qwen3-VL-8B-Thinking-Unredacted-MAX.Q8_0.gguf | 8.11 GB | Download |
| Qwen3-VL-8B-Thinking-Unredacted-MAX.mmproj-bf16.gguf | 1.08 GB | Download |
| Qwen3-VL-8B-Thinking-Unredacted-MAX.mmproj-q8_0.gguf | 717.44 MB | Download |