Model Description
base_model: DavidAU/Qwen3-VL-8B-GLM-4.7-Flash-Heretic-Uncensored-Thinking
datasets:
- TeichAI/glm-4.7-2000x
language:
- en
- zh
tags:
- GLM 4.7 Flash distill
- unsloth
- thinking
- reasoning
- deep reasoning
- heretic
- uncensored
- abliterated
- fine tune
- creative
- creative writing
- fiction writing
- plot generation
- sub-plot generation
- story generation
- scene continue
- storytelling
- fiction story
- science fiction
- romance
- all genres
- story
- writing
- vivid prosing
- vivid writing
- fiction
- roleplaying
- bfloat16
- swearing
- rp
- horror
- r rated
- x rated
- all use cases
- not-for-all-audiences
About
Static quants of https://huggingface.co/DavidAU/Qwen3-VL-8B-GLM-4.7-Flash-Heretic-Uncensored-Thinking
For a convenient overview and download list, visit our model page for this model.
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen3-VL-8B-GLM-4.7-Flash-Heretic-Uncensored-Thinking-i1-GGUF
Usage
If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including how to concatenate multi-part files.
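As a minimal sketch, the snippet below runs one of the single-file quants from this repo with llama-cpp-python. The file name, context size, and GPU layer count are assumptions to adjust for your setup; any other GGUF-capable runtime works just as well.

```python
# Minimal sketch: running a single-file GGUF quant with llama-cpp-python.
# Assumes `pip install llama-cpp-python` and that the Q4_K_M quant listed
# below has already been downloaded into the working directory.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen3-VL-8B-GLM-4.7-Flash-Heretic-Uncensored-Thinking.Q4_K_M.gguf",
    n_ctx=4096,        # context window; raise it if you have the memory
    n_gpu_layers=-1,   # offload all layers to the GPU; use 0 for CPU-only
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write the opening line of a horror story."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

Note that the mmproj-* files listed below are vision projectors meant to be loaded alongside the main model by a multimodal-capable runtime; they are not standalone models.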
Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|---|---|---|---|
| GGUF | mmproj-Q8_0 | 0.9 | multi-modal supplement |
| GGUF | mmproj-f16 | 1.3 | multi-modal supplement |
| GGUF | Q2_K | 3.4 | |
| GGUF | Q3_K_S | 3.9 | |
| GGUF | Q3_K_M | 4.2 | lower quality |
| GGUF | Q3_K_L | 4.5 | |
| GGUF | IQ4_XS | 4.7 | |
| GGUF | Q4_K_S | 4.9 | fast, recommended |
| GGUF | Q4_K_M | 5.1 | fast, recommended |
| GGUF | Q5_K_S | 5.8 | |
| GGUF | Q5_K_M | 6.0 | |
| GGUF | Q6_K | 6.8 | very good quality |
| GGUF | Q8_0 | 8.8 | fast, best quality |
| GGUF | f16 | 16.5 | 16 bpw, overkill |
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
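For a rough sense of what the sizes imply, bits per weight can be estimated as file size (in bits) divided by parameter count. A quick sketch, assuming the ~8B parameter count implied by the model name and taking the GB figures from the table above:

```python
# Rough bits-per-weight estimate: file_size_in_bits / parameter_count.
# 8e9 parameters is an assumption taken from the "8B" in the model name;
# the sizes are the GB figures from the quant table above.
PARAMS = 8e9

sizes_gb = {"Q2_K": 3.4, "Q4_K_M": 5.1, "Q6_K": 6.8, "f16": 16.5}

for name, gb in sizes_gb.items():
    bpw = gb * 1e9 * 8 / PARAMS
    print(f"{name}: ~{bpw:.1f} bits per weight")
```

This lines up with the f16 row being flagged as 16 bpw; the quality notes above matter more than the raw numbers, but the estimate is handy when fitting a quant into a fixed amount of RAM or VRAM.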
FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
GGUF File List
| Filename | Size | Notes |
|---|---|---|
| Qwen3-VL-8B-GLM-4.7-Flash-Heretic-Uncensored-Thinking.IQ4_XS.gguf | 4.28 GB | |
| Qwen3-VL-8B-GLM-4.7-Flash-Heretic-Uncensored-Thinking.Q2_K.gguf | 3.06 GB | |
| Qwen3-VL-8B-GLM-4.7-Flash-Heretic-Uncensored-Thinking.Q3_K_L.gguf | 4.13 GB | |
| Qwen3-VL-8B-GLM-4.7-Flash-Heretic-Uncensored-Thinking.Q3_K_M.gguf | 3.84 GB | |
| Qwen3-VL-8B-GLM-4.7-Flash-Heretic-Uncensored-Thinking.Q3_K_S.gguf | 3.51 GB | |
| Qwen3-VL-8B-GLM-4.7-Flash-Heretic-Uncensored-Thinking.Q4_K_M.gguf | 4.68 GB | recommended |
| Qwen3-VL-8B-GLM-4.7-Flash-Heretic-Uncensored-Thinking.Q4_K_S.gguf | 4.47 GB | |
| Qwen3-VL-8B-GLM-4.7-Flash-Heretic-Uncensored-Thinking.Q5_K_M.gguf | 5.45 GB | |
| Qwen3-VL-8B-GLM-4.7-Flash-Heretic-Uncensored-Thinking.Q5_K_S.gguf | 5.33 GB | |
| Qwen3-VL-8B-GLM-4.7-Flash-Heretic-Uncensored-Thinking.Q6_K.gguf | 6.26 GB | |
| Qwen3-VL-8B-GLM-4.7-Flash-Heretic-Uncensored-Thinking.Q8_0.gguf | 8.11 GB | |
| Qwen3-VL-8B-GLM-4.7-Flash-Heretic-Uncensored-Thinking.f16.gguf | 15.26 GB | |
| Qwen3-VL-8B-GLM-4.7-Flash-Heretic-Uncensored-Thinking.mmproj-Q8_0.gguf | 717.44 MB | |
| Qwen3-VL-8B-GLM-4.7-Flash-Heretic-Uncensored-Thinking.mmproj-f16.gguf | 1.08 GB | |
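If you would rather script the downloads than fetch files individually, the sketch below pulls one main quant plus the f16 vision projector via huggingface_hub. The repo_id is an assumption derived from the imatrix repo name linked above (without the -i1 suffix); verify it against the actual model page before use.

```python
# Sketch: fetching a main quant and the mmproj file with huggingface_hub.
# Assumes `pip install huggingface_hub`; the repo_id is an assumption based
# on the repo naming above -- confirm it on the model page.
from huggingface_hub import hf_hub_download

repo_id = "mradermacher/Qwen3-VL-8B-GLM-4.7-Flash-Heretic-Uncensored-Thinking-GGUF"

for filename in [
    "Qwen3-VL-8B-GLM-4.7-Flash-Heretic-Uncensored-Thinking.Q4_K_M.gguf",
    "Qwen3-VL-8B-GLM-4.7-Flash-Heretic-Uncensored-Thinking.mmproj-f16.gguf",
]:
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    print("saved to", path)
```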