---
license: apache-2.0
language:
- en
base_model:
- prithivMLmods/Qwen3-VL-8B-Instruct-abliterated-v2
pipeline_tag: image-text-to-text
library_name: transformers
tags:
- text-generation-inference
- abliterated
- llama.cpp
---

πŸ“‹ Model Description
Qwen3-VL-8B-Instruct-abliterated-v2-GGUF

Qwen3-VL-8B-Instruct-abliterated-v2 from prithivMLmods is the second iteration (v2) of the abliterated variant of Alibaba's Qwen3-VL-8B-Instruct, an 8B-parameter vision-language model. The abliteration removes the base model's safety refusals and content filters, yielding uncensored, highly detailed captioning, instruction following, and multimodal reasoning across complex, sensitive, artistic, technical, abstract, or explicit visual content. The model retains the base architecture's Interleaved-MRoPE fusion, 32-language OCR, 262K context length, and robust support for diverse resolutions, aspect ratios, videos, and layouts. Building on v1 with refined uncensoring for greater output fidelity and fewer artifacts, it enables variational detail control, from concise summaries to exhaustive, multi-granularity analyses, primarily in English with prompt-engineered multilingual adaptability. This makes it well suited to red-teaming, research in generative safety, creative visual storytelling, and unrestricted agentic applications on high-end GPUs (16-24 GB VRAM, BF16/FP8) via Transformers or vLLM. The abliteration preserves the base model's state-of-the-art multimodal perception while eliminating guardrails, so it produces factual, descriptive responses in scenarios where conventional models would refuse.

Qwen3-VL-8B-Instruct-abliterated-v2 [GGUF]

| File Name | Quant Type | File Size | File Link |
|---|---|---|---|
| Qwen3-VL-8B-Instruct-abliterated-v2.IQ4_XS.gguf | IQ4_XS | 4.59 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v2.Q2_K.gguf | Q2_K | 3.28 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v2.Q3_K_L.gguf | Q3_K_L | 4.43 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v2.Q3_K_M.gguf | Q3_K_M | 4.12 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v2.Q3_K_S.gguf | Q3_K_S | 3.77 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v2.Q4_K_M.gguf | Q4_K_M (recommended) | 5.03 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v2.Q4_K_S.gguf | Q4_K_S | 4.8 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v2.Q5_K_M.gguf | Q5_K_M | 5.85 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v2.Q5_K_S.gguf | Q5_K_S | 5.72 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v2.Q6_K.gguf | Q6_K | 6.73 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v2.Q8_0.gguf | Q8_0 | 8.71 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v2.f16.gguf | F16 | 16.4 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v2.mmproj-Q8_0.gguf | mmproj-Q8_0 | 752 MB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v2.mmproj-f16.gguf | mmproj-f16 | 1.16 GB | Download |
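As a minimal sketch of local inference with these files via llama.cpp (assuming the llama.cpp binaries are built and the GGUF files above have been downloaded; the local paths and the sample image `photo.jpg` are hypothetical), the multimodal CLI pairs the language-model GGUF with the vision projector:

```shell
#!/bin/sh
# Hypothetical local filenames; adjust paths to wherever the GGUF files live.
MODEL="Qwen3-VL-8B-Instruct-abliterated-v2.Q4_K_M.gguf"
MMPROJ="Qwen3-VL-8B-Instruct-abliterated-v2.mmproj-f16.gguf"

# llama-mtmd-cli is llama.cpp's multimodal CLI; it loads the text model (-m)
# together with the vision projector (--mmproj). Guarded so the script is a
# no-op on machines where llama.cpp is not installed.
if command -v llama-mtmd-cli >/dev/null 2>&1; then
  llama-mtmd-cli -m "$MODEL" --mmproj "$MMPROJ" \
    --image photo.jpg \
    -p "Describe this image in detail."
fi
```

Note that the mmproj file is required for image input: the main GGUF alone only serves text.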

Quants Usage

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![Quant type comparison by ikawrakow (lower is better)](image.png)
