# Model Description
---
base_model: Qwen/Qwen-Image
library_name: gguf
quantized_by: city96
license: apache-2.0
language:
- en
- zh
---
This is a direct GGUF conversion of Qwen/Qwen-Image.
The model files can be used in ComfyUI with the ComfyUI-GGUF custom node. Place the required model(s) in the following folders:
| Type | Name | Location | Download |
| --- | --- | --- | --- |
| Main Model | Qwen-Image | `ComfyUI/models/diffusion_models` | GGUF (this repo) |
| Text Encoder | Qwen2.5-VL-7B | `ComfyUI/models/text_encoders` | Safetensors / GGUF |
| VAE | Qwen-Image VAE | `ComfyUI/models/vae` | Safetensors |
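As a minimal sketch, the placements above can be scripted with `huggingface_hub`. Only the folder mapping comes from the table; the repo ID and filenames in the example, and the `fetch` helper itself, are illustrative assumptions (the text encoder and VAE live in separate repos, so adjust accordingly):

```python
from pathlib import Path

# Destination folder for each model type, as listed in the table above.
LOCATIONS = {
    "diffusion_model": "models/diffusion_models",
    "text_encoder": "models/text_encoders",
    "vae": "models/vae",
}

def target_path(comfy_root: str, kind: str, filename: str) -> Path:
    """Resolve where a downloaded file belongs under a ComfyUI install."""
    return Path(comfy_root) / LOCATIONS[kind] / filename

def fetch(comfy_root: str, kind: str, repo_id: str, filename: str) -> str:
    """Download one file into the matching ComfyUI folder.

    Requires `pip install huggingface_hub`; not called automatically here.
    """
    from huggingface_hub import hf_hub_download
    dest = target_path(comfy_root, kind, filename)
    return hf_hub_download(repo_id=repo_id, filename=filename,
                           local_dir=str(dest.parent))

# Example (repo ID assumed): fetch("ComfyUI", "diffusion_model",
#     "city96/Qwen-Image-gguf", "qwen-image-Q4_0.gguf")
```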
*Example outputs (sample size of 1, not strictly representative).*
## Notes

> [!NOTE]
> The Q5_K_M, Q4_K_M and, most importantly, the low-bitrate quants (Q3_K_M, Q3_K_S, Q2_K) use a new dynamic logic where the first/last layers are kept in high precision.
>
> For a comparison, see this imgsli page. With this method, even Q2_K remains somewhat usable.

As this is a quantized model and not a finetune, all the same restrictions and original license terms still apply.
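The dynamic logic mentioned in the note can be illustrated with a short sketch. This is not the actual conversion code; the function name, the layer-index rule, and the choice of `Q8_0` as the high-precision fallback are assumptions for illustration only:

```python
# Illustrative per-layer quant selection: the first and last transformer
# blocks stay at a high-precision type, all others use the requested
# low-bitrate type. Names and thresholds are assumptions, not the real code.

def pick_quant(layer_idx: int, n_layers: int, requested: str,
               high_precision: str = "Q8_0") -> str:
    """Return the quant type to use for a given transformer block."""
    if layer_idx == 0 or layer_idx == n_layers - 1:
        return high_precision
    return requested

# e.g. a hypothetical 60-block model quantized to Q2_K:
plan = [pick_quant(i, 60, "Q2_K") for i in range(60)]
```

Keeping the outermost blocks in high precision costs little in file size but protects the layers that interact directly with the latent input and output, which is why even Q2_K remains somewhat usable.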
# GGUF File List
| Filename | Quant | Size | Notes |
| --- | --- | --- | --- |
| qwen-image-BF16.gguf | BF16 | 38.07 GB | |
| qwen-image-Q2_K.gguf | Q2_K | 6.58 GB | |
| qwen-image-Q3_K_M.gguf | Q3_K_M | 9.01 GB | |
| qwen-image-Q3_K_S.gguf | Q3_K_S | 8.34 GB | |
| qwen-image-Q4_0.gguf | Q4_0 | 11.04 GB | Recommended |
| qwen-image-Q4_1.gguf | Q4_1 | 11.96 GB | |
| qwen-image-Q4_K_M.gguf | Q4_K_M | 12.17 GB | |
| qwen-image-Q4_K_S.gguf | Q4_K_S | 11.31 GB | |
| qwen-image-Q5_0.gguf | Q5_0 | 13.41 GB | |
| qwen-image-Q5_1.gguf | Q5_1 | 14.33 GB | |
| qwen-image-Q5_K_M.gguf | Q5_K_M | 13.91 GB | |
| qwen-image-Q5_K_S.gguf | Q5_K_S | 13.15 GB | |
| qwen-image-Q6_K.gguf | Q6_K | 15.67 GB | |
| qwen-image-Q8_0.gguf | Q8_0 | 20.27 GB | |