---
license: apache-2.0
language:
- en
base_model:
- prithivMLmods/Qwen3-VL-8B-Instruct-abliterated-v2
tags:
- text-generation-inference
- abliterated
- llama.cpp
---

# Qwen3-VL-8B-Instruct-abliterated-v2-GGUF

## Model Description
Qwen3-VL-8B-Instruct-abliterated-v2 from prithivMLmods is the second iteration (v2.0) of the abliterated variant of Alibaba's Qwen3-VL-8B-Instruct, an 8B-parameter vision-language model. Abliteration removes the base model's safety refusals and content filters, yielding uncensored, highly detailed captioning, instruction following, and multimodal reasoning across complex, sensitive, artistic, technical, abstract, or explicit visual content. The model retains the base architecture's Interleaved-MRoPE fusion, 32-language OCR, 262K context length, and robust support for diverse resolutions, aspect ratios, videos, and layouts.

Building on v1 with refined uncensoring for greater output fidelity and fewer artifacts, v2 supports variable detail control, from concise summaries to exhaustive, multi-granularity analyses. It works primarily in English, with prompt-engineered multilingual adaptability. Typical uses include red-teaming, research in generative safety, creative visual storytelling, and unrestricted agentic applications on high-end GPUs (16-24 GB VRAM, BF16/FP8) via Transformers or vLLM. This version preserves the base model's state-of-the-art multimodal perception while eliminating guardrails, producing factual, descriptive responses in scenarios where conventional models would refuse.
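For the unquantized checkpoint, the description above names Transformers as a supported runtime. The snippet below is a minimal sketch, assuming a recent transformers release with Qwen3-VL support; the repo id comes from this card's metadata, while the image URL and generation settings are placeholder assumptions.

```python
# Minimal sketch: image captioning with the unquantized checkpoint via
# Transformers. Assumes a recent transformers release with Qwen3-VL support.
import torch
from transformers import AutoModelForImageTextToText, AutoProcessor

model_id = "prithivMLmods/Qwen3-VL-8B-Instruct-abliterated-v2"  # from card metadata
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # BF16, per the hardware notes above
    device_map="auto",
)

# One user turn with an image plus a text instruction; the URL is a placeholder.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/photo.jpg"},
            {"type": "text", "text": "Describe this image in exhaustive detail."},
        ],
    }
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

generated = model.generate(**inputs, max_new_tokens=512)
# Strip the prompt tokens so only the model's answer is decoded.
answer = processor.decode(
    generated[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)
print(answer)
```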
## Qwen3-VL-8B-Instruct-abliterated-v2 [GGUF]
| File Name | Quant Type | File Size | File Link |
|---|---|---|---|
| Qwen3-VL-8B-Instruct-abliterated-v2.IQ4_XS.gguf | IQ4_XS | 4.59 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v2.Q2_K.gguf | Q2_K | 3.28 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v2.Q3_K_L.gguf | Q3_K_L | 4.43 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v2.Q3_K_M.gguf | Q3_K_M | 4.12 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v2.Q3_K_S.gguf | Q3_K_S | 3.77 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v2.Q4_K_M.gguf | Q4_K_M (recommended) | 5.03 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v2.Q4_K_S.gguf | Q4_K_S | 4.8 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v2.Q5_K_M.gguf | Q5_K_M | 5.85 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v2.Q5_K_S.gguf | Q5_K_S | 5.72 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v2.Q6_K.gguf | Q6_K | 6.73 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v2.Q8_0.gguf | Q8_0 | 8.71 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v2.f16.gguf | F16 | 16.4 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v2.mmproj-Q8_0.gguf | mmproj-Q8_0 | 752 MB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v2.mmproj-f16.gguf | mmproj-f16 | 1.16 GB | Download |
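To run one of these quants locally, download the language-model GGUF plus one of the mmproj files (the vision projector, required for image input). Below is a sketch using huggingface_hub; the repo id is an assumption inferred from this card's title, not verified.

```python
# Sketch: fetching the recommended quant plus the vision projector.
# The repo id below is assumed from this card's title.
from huggingface_hub import hf_hub_download

repo_id = "prithivMLmods/Qwen3-VL-8B-Instruct-abliterated-v2-GGUF"  # assumed

model_path = hf_hub_download(
    repo_id=repo_id,
    filename="Qwen3-VL-8B-Instruct-abliterated-v2.Q4_K_M.gguf",
)
mmproj_path = hf_hub_download(
    repo_id=repo_id,
    filename="Qwen3-VL-8B-Instruct-abliterated-v2.mmproj-f16.gguf",
)
print(model_path)
print(mmproj_path)
```

Both files can then be passed to llama.cpp's multimodal tooling (e.g. `llama-mtmd-cli -m <model_path> --mmproj <mmproj_path>`), which pairs the quantized language model with the vision projector.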
## Quants Usage

(sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):
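When picking a quant, a rough, unofficial rule of thumb is that a GGUF needs about its file size in RAM/VRAM, plus headroom for the KV cache and the mmproj. The sketch below encodes that heuristic using the sizes from the table above; the 1.5 GB overhead figure is an illustrative assumption, not a measured value.

```python
# Heuristic quant picker: a GGUF roughly needs its file size in memory,
# plus headroom for the KV cache and the vision projector. The overhead
# value is an assumption for illustration, not a measurement.
QUANTS = {  # sizes in GB, from the table above
    "Q2_K": 3.28, "Q3_K_S": 3.77, "Q3_K_M": 4.12, "Q3_K_L": 4.43,
    "IQ4_XS": 4.59, "Q4_K_S": 4.8, "Q4_K_M": 5.03,
    "Q5_K_S": 5.72, "Q5_K_M": 5.85, "Q6_K": 6.73, "Q8_0": 8.71,
}

def largest_fitting_quant(vram_gb: float, overhead_gb: float = 1.5) -> str | None:
    """Return the largest quant whose file size plus overhead fits in vram_gb."""
    fitting = {q: s for q, s in QUANTS.items() if s + overhead_gb <= vram_gb}
    return max(fitting, key=fitting.get) if fitting else None

print(largest_fitting_quant(8.0))   # e.g. a typical 8 GB GPU
print(largest_fitting_quant(12.0))
```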