---
pipeline_tag: image-text-to-text
base_model:
- huihui-ai/Huihui-Qwen3-VL-4B-Instruct-abliterated
---

## Model Description
These are GGUF quantizations of Huihui-Qwen3-VL-4B-Instruct-abliterated. They have been updated to use the imatrix from unsloth.

Original model: https://huggingface.co/huihui-ai/Huihui-Qwen3-VL-4B-Instruct-abliterated

Download the latest llama.cpp to run them, and use the highest-quality quant your hardware can fit.

For the mmproj, prefer the F32 version, as it produces the best results: F32 > BF16 > F16.
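"Use the highest-quality quant you can fit" can be made concrete with a small helper. This is a minimal sketch, not part of llama.cpp: the `pick_quant` name is ours, the sizes (in MB) come from the file list in this card, and the ~1 GB headroom for context/KV cache is a rough rule of thumb, not a measured figure.

```shell
# Pick the largest quant from this repo whose file fits a memory budget,
# keeping ~1 GB of headroom for context/KV cache. Sizes (MB) are taken
# from the file list below; the helper name is hypothetical.
pick_quant() {
  budget_mb=$(( $1 * 1000 ))  # budget given in GB
  # name:size_mb pairs, best quality first
  for entry in Q8_0:3990 Q6_K:3080 Q5_K_M:2690 Q4_K_M:2330 IQ4_XS:2110 IQ3_M:1830 IQ3_XXS:1560; do
    name=${entry%%:*}
    size=${entry##*:}
    if [ $(( size + 1000 )) -le "$budget_mb" ]; then
      echo "$name"
      return 0
    fi
  done
  echo "none"
  return 1
}

pick_quant 6   # prints Q8_0
pick_quant 4   # prints Q5_K_M
```

With a 4 GB budget this lands on Q5_K_M rather than Q6_K, because the larger file plus headroom would not fit.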
## GGUF File List

| Filename | Quant | Size |
|---|---|---|
| Huihui-Qwen3-VL-4B-Instruct-abliterated-BF16.gguf | BF16 | 7.5 GB |
| Huihui-Qwen3-VL-4B-Instruct-abliterated-F16.gguf | F16 | 7.5 GB |
| Huihui-Qwen3-VL-4B-Instruct-abliterated-IQ3_M.gguf | IQ3_M | 1.83 GB |
| Huihui-Qwen3-VL-4B-Instruct-abliterated-IQ3_S.gguf | IQ3_S | 1.77 GB |
| Huihui-Qwen3-VL-4B-Instruct-abliterated-IQ3_XS.gguf | IQ3_XS | 1.69 GB |
| Huihui-Qwen3-VL-4B-Instruct-abliterated-IQ3_XXS.gguf | IQ3_XXS | 1.56 GB |
| Huihui-Qwen3-VL-4B-Instruct-abliterated-IQ4_NL.gguf | IQ4_NL | 2.22 GB |
| Huihui-Qwen3-VL-4B-Instruct-abliterated-IQ4_XS.gguf | IQ4_XS | 2.11 GB |
| Huihui-Qwen3-VL-4B-Instruct-abliterated-MXFP4.gguf | MXFP4 | 2.84 GB |
| Huihui-Qwen3-VL-4B-Instruct-abliterated-Q4_K_M.gguf | Q4_K_M (recommended) | 2.33 GB |
| Huihui-Qwen3-VL-4B-Instruct-abliterated-Q4_K_S.gguf | Q4_K_S | 2.22 GB |
| Huihui-Qwen3-VL-4B-Instruct-abliterated-Q5_K_M.gguf | Q5_K_M | 2.69 GB |
| Huihui-Qwen3-VL-4B-Instruct-abliterated-Q5_K_S.gguf | Q5_K_S | 2.63 GB |
| Huihui-Qwen3-VL-4B-Instruct-abliterated-Q6_K.gguf | Q6_K | 3.08 GB |
| Huihui-Qwen3-VL-4B-Instruct-abliterated-Q8_0.gguf | Q8_0 | 3.99 GB |
| mmproj-BF16.gguf | BF16 | 800.44 MB |
| mmproj-F16.gguf | F16 | 797.44 MB |
| mmproj-F32.gguf | F32 | 1.55 GB |
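A sketch of putting a quant and an mmproj file together with llama.cpp's multimodal CLI. The tool name `llama-mtmd-cli` and its flags reflect recent llama.cpp builds and may differ in your version; the model paths, image name, and prompt are placeholders. The snippet only prints the command so you can review it before running.

```shell
# Assemble a llama.cpp multimodal invocation (dry run: printed, not executed).
# Pairing the recommended Q4_K_M quant with the F32 mmproj, per the notes above.
MODEL=Huihui-Qwen3-VL-4B-Instruct-abliterated-Q4_K_M.gguf
MMPROJ=mmproj-F32.gguf   # F32 mmproj produces the best results
CMD="llama-mtmd-cli -m $MODEL --mmproj $MMPROJ --image photo.jpg -p 'Describe this image.'"
echo "$CMD"
```

Swap in a different quant from the table if Q4_K_M does not fit your hardware.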