---
base_model: black-forest-labs/FLUX.2-dev
library_name: gguf
quantized_by: city96
license: other
license_name: flux-dev-non-commercial-license
license_link: LICENSE.md
tags:
- image-generation
- image-editing
- flux
- diffusion-single-file
---

## Model Description
This is a direct GGUF conversion of black-forest-labs/FLUX.2-dev.
The model files can be used in ComfyUI with the ComfyUI-GGUF custom node. Place the required model(s) in the following folders:
| Type | Name | Location | Download |
|---|---|---|---|
| Main Model | flux2-dev | ComfyUI/models/diffusion_models | GGUF (this repo) |
| Text Encoder | Mistral-Small-3.2-24B-Instruct-2506 | ComfyUI/models/text_encoders | Safetensors / GGUF |
| VAE | flux2 VAE | ComfyUI/models/vae | Safetensors |
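The folder layout above can be scripted. Below is a minimal sketch using `huggingface_hub` to drop each file into the right ComfyUI subfolder; the `download_to` helper and the repo id used in the usage example are illustrative, not part of this repo's tooling.

```python
"""Sketch: place FLUX.2-dev model files into the ComfyUI folders listed above."""
from pathlib import Path

# Subfolder per model type, as given in the table above.
COMFYUI_LOCATIONS = {
    "diffusion_model": "models/diffusion_models",
    "text_encoder": "models/text_encoders",
    "vae": "models/vae",
}


def target_path(comfyui_root: str, model_type: str, filename: str) -> Path:
    """Return the path where a downloaded file should be placed."""
    return Path(comfyui_root) / COMFYUI_LOCATIONS[model_type] / filename


def download_to(comfyui_root: str, model_type: str, repo_id: str, filename: str) -> Path:
    """Download `filename` from `repo_id` straight into the matching folder.

    Requires `pip install huggingface_hub`.
    """
    from huggingface_hub import hf_hub_download

    dest_dir = target_path(comfyui_root, model_type, filename).parent
    dest_dir.mkdir(parents=True, exist_ok=True)
    return Path(hf_hub_download(repo_id=repo_id, filename=filename, local_dir=dest_dir))
```

For example, `download_to("ComfyUI", "diffusion_model", "city96/FLUX.2-dev-gguf", "flux2-dev-Q4_0.gguf")` would fetch the recommended quant (the repo id here is a placeholder; substitute the actual repo you are downloading from).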
## Notes

> [!NOTE]
> As with Qwen-Image, the Q5_K_M, Q4_K_M, Q3_K_M, Q3_K_S and Q2_K quants use some extra logic to decide which blocks to keep in high precision. That logic is based partly on guesswork and trial & error, and partly on the graph in the readme for Freepik/flux.1-lite-8B (which in turn quotes a blog post by Ostris).

As this is a quantized model and not a finetune, all the original restrictions and license terms still apply.
## GGUF File List
| Filename | Quant | Size |
|---|---|---|
| flux2-dev-BF16.gguf | BF16 | 60.02 GB |
| flux2-dev-Q2_K.gguf | Q2 | 11.98 GB |
| flux2-dev-Q3_K_M.gguf | Q3 | 14.86 GB |
| flux2-dev-Q3_K_S.gguf | Q3 | 14.7 GB |
| flux2-dev-Q4_0.gguf (recommended) | Q4 | 17.97 GB |
| flux2-dev-Q4_1.gguf | Q4 | 19.8 GB |
| flux2-dev-Q4_K_M.gguf | Q4 | 18.7 GB |
| flux2-dev-Q4_K_S.gguf | Q4 | 17.97 GB |
| flux2-dev-Q5_0.gguf | Q5 | 21.63 GB |
| flux2-dev-Q5_1.gguf | Q5 | 23.46 GB |
| flux2-dev-Q5_K_M.gguf | Q5 | 22.41 GB |
| flux2-dev-Q5_K_S.gguf | Q5 | 21.63 GB |
| flux2-dev-Q6_K.gguf | Q6 | 25.51 GB |
| flux2-dev-Q8_0.gguf | Q8 | 32.6 GB |
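Since the BF16 file stores 16 bits per weight, its size implies a parameter count of roughly 30B, which lets you sanity-check the effective bits per weight of each quant from the sizes listed above. This is only a rough estimate: the mixed-precision block logic mentioned in the notes means the real precision varies per tensor.

```python
# Rough effective bits-per-weight per quant, derived from the file sizes
# in the table above (decimal GB). The BF16 file at 16 bits/weight pins
# the approximate parameter count; per-tensor precision varies, so treat
# the results as estimates only.
SIZES_GB = {
    "BF16": 60.02,
    "Q2_K": 11.98,
    "Q4_0": 17.97,
    "Q6_K": 25.51,
    "Q8_0": 32.60,
}

# 60.02 GB * 8 bits/byte / 16 bits/weight ~= 30.0 billion parameters
PARAMS_B = SIZES_GB["BF16"] * 8 / 16


def bits_per_weight(quant: str) -> float:
    """Estimate effective bits per weight for a quant from its file size."""
    return SIZES_GB[quant] * 8 / PARAMS_B


print(round(bits_per_weight("Q4_0"), 2))  # ~4.79
```

The Q4_0 estimate coming out slightly above 4.5 bits is consistent with some blocks being kept at higher precision, as the notes describe.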