---
license: apache-2.0
base_model:
- Qwen/Qwen-Image-Edit-2509
widget:
- text: apply the image 2 full costume to image 1 singing girl
- text: use image 2 city night view as background for image 1
- text: use image 2 as background for image 1 fairy
tags:
- gguf-connector
- gguf-node
---

# qwen-image-edit-plus-gguf
## run it with gguf-connector

- simply execute the command below in console/terminal:

```
ggc q8
```
>
>GGUF file(s) available. Select which one to use:
>
>1. qwen-image-edit-plus-v2-iq3_s.gguf
>2. qwen-image-edit-plus-v2-iq4_nl.gguf
>3. qwen-image-edit-plus-v2-mxfp4_moe.gguf
>
>Enter your choice (1 to 3): _
>
- opt a `gguf` file in your current directory to interact with; nothing else is needed
- `ggc q8` accepts multiple image inputs (see picture above; two images as input)
- as a lite lora is auto-applied, it is able to generate output with merely 4/8 steps instead of the default 40 steps, saving up to 80% of loading time
- up to 3 pictures plus a custom prompt can be given as input (above is a 3-image input demo)
- though `ggc q8` also accepts single image input (see above), you could opt for the legacy `ggc q7` (see below), which works like the image-edit model before:

```
ggc q7
```
## run it with gguf-node via comfyui
- drag qwen-image-edit-plus to > `./ComfyUI/models/diffusion_models`
- drag any one of the text encoders below to > `./ComfyUI/models/text_encoders`
- drag pig [254MB] to > `./ComfyUI/models/vae`
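As an illustrative sketch of the drag-and-drop steps above (the repo id and filename are taken from this card's file list; `huggingface-cli` ships with the `huggingface_hub` package — adjust the quant to whatever fits your VRAM):

```shell
# hypothetical download sketch; fetches one quant straight into the ComfyUI folder
pip install -U huggingface_hub

huggingface-cli download calcuis/qwen-image-edit-plus-gguf \
  qwen-image-edit-plus-v2-iq4_nl.gguf \
  --local-dir ./ComfyUI/models/diffusion_models
```

The text encoder and vae files would go into their respective folders the same way.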
## run it with diffusers
- might need the most updated git version of diffusers for `QwenImageEditPlusPipeline` (should be after this pr); for i-quant support, should be after this commit; install the updated git version of diffusers by:

```
pip install git+https://github.com/huggingface/diffusers.git
```

- simply replace `QwenImageEditPipeline` by `QwenImageEditPlusPipeline` from the qwen-image-edit inference example (see here):
```python
import torch, os
from diffusers import QwenImageTransformer2DModel, GGUFQuantizationConfig, QwenImageEditPlusPipeline
from diffusers.utils import load_image

model_path = "https://huggingface.co/calcuis/qwen-image-edit-plus-gguf/blob/main/qwen-image-edit-plus-v2-iq4_nl.gguf"
transformer = QwenImageTransformer2DModel.from_single_file(
    model_path,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
    config="callgg/image-edit-plus",
    subfolder="transformer"
)
pipeline = QwenImageEditPlusPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit-2509",
    transformer=transformer,
    torch_dtype=torch.bfloat16
)
print("pipeline loaded")
pipeline.enable_model_cpu_offload()

image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png")
prompt = "Add a hat to the cat"
inputs = {
    "image": image,
    "prompt": prompt,
    "generator": torch.manual_seed(0),
    "true_cfg_scale": 2.5,
    "negative_prompt": " ",
    "num_inference_steps": 20,
}

with torch.inference_mode():
    output = pipeline(**inputs)
    output_image = output.images[0]
    output_image.save("output.png")
    print("image saved at", os.path.abspath("output.png"))
```
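Before loading a quant, it can help to sanity-check that the downloaded file really is a GGUF binary and not, say, a truncated download or an HTML error page. A minimal sketch (the `is_gguf` helper is hypothetical, not part of gguf-connector or diffusers); GGUF files begin with the 4-byte magic `GGUF`:

```python
def is_gguf(path):
    """Check the 4-byte GGUF magic at the start of the file."""
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"

# usage: bail out early instead of failing deep inside from_single_file
# if not is_gguf("qwen-image-edit-plus-v2-iq4_nl.gguf"):
#     raise ValueError("not a valid GGUF file")
```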
## run nunchaku safetensors straight with gguf-connector (experimental feature)
- run it with the new `q9` connector; simply execute the command below in console/terminal:

```
ggc q9
```
>
>Safetensors available. Select which one to use:
>
>1. qwen-image-edit-lite-blackwell-fp4.safetensors
>2. qwen-image-edit-lite-int4.safetensors (for non-blackwell card)
>
>Enter your choice (1 to 2): _
- opt a `safetensors` file in your current directory to interact with; nothing else is needed
note: able to generate output with 4/8 steps (see above); surprisingly fast even on a low-end device; compatible with the safetensors in the nunchaku repo (which one depends on your machine; opt for the right one)
## run the lite model (experimental) with gguf-connector

```
ggc q0
```
>
>GGUF file(s) available. Select which one to use:
>
>1. qwen-image-edit-lite-iq4_nl.gguf
>2. qwen-image-edit-lite-q4_0.gguf
>3. qwen-image-edit-lite-q4_k_s.gguf
>
>Enter your choice (1 to 3): _
>
- opt a `gguf` file in your current directory to interact with; nothing else is needed
note: a new lite lora is auto-applied to q0 and q9; able to generate output with 4/8 steps; with more working layers, these versions should be more stable than p0 (v2.0) below
- for lite v2.0, please use the `p0` connector (experimental):

```
ggc p0
```
>
>GGUF file(s) available. Select which one to use:
>
>1. qwen-image-edit-lite-v2.0-iq2_s.gguf
>2. qwen-image-edit-lite-v2.0-iq3_s.gguf
>3. qwen-image-edit-lite-v2.0-iq4_nl.gguf
>
>Enter your choice (1 to 3): _
>
- opt a `gguf` file in your current directory to interact with; nothing else is needed
## run the new lite v2.1 (experimental) with gguf-connector

- for lite v2.1, please use the `p9` connector:

```
ggc p9
```
>
>GGUF file(s) available. Select which one to use:
>
>1. qwen-image-edit-lite-v2.1-q4_0.gguf
>2. qwen-image-edit-lite-v2.1-mxfp4_moe.gguf
>
>Enter your choice (1 to 2): _
>
- opt a `gguf` file in your current directory to interact with; nothing else is needed
note: `ggc p9` is able to generate a picture with 4/8 steps but needs a higher guidance (i.e., 3.5); if too many elements are involved, you might consider increasing the steps (e.g., 15) for better output
## GGUF File List

| Filename | Quant | Size |
|---|---|---|
| qwen-image-edit-lite-iq4_nl.gguf | Q4 | 7.57 GB |
| qwen-image-edit-lite-iq4_xs.gguf | Q4 | 7.15 GB |
| qwen-image-edit-lite-q4_0.gguf (recommended) | Q4 | 7.57 GB |
| qwen-image-edit-lite-q4_1.gguf | Q4 | 8.4 GB |
| qwen-image-edit-lite-q4_k_s.gguf | Q4 | 7.57 GB |
| qwen-image-edit-lite-q8_0.gguf | Q8 | 14.21 GB |
| qwen-image-edit-lite-v2.0-iq2_s.gguf | Q2 | 4.42 GB |
| qwen-image-edit-lite-v2.0-iq3_s.gguf | Q3 | 5.6 GB |
| qwen-image-edit-lite-v2.0-iq4_nl.gguf | Q4 | 7.2 GB |
| qwen-image-edit-lite-v2.0-iq4_xs.gguf | Q4 | 6.81 GB |
| qwen-image-edit-lite-v2.0-q4_k_s.gguf | Q4 | 7.21 GB |
| qwen-image-edit-lite-v2.0-q8_0.gguf | Q8 | 13.53 GB |
| qwen-image-edit-lite-v2.1-mxfp4_moe.gguf | n/a | 14.74 GB |
| qwen-image-edit-lite-v2.1-q4_0.gguf | Q4 | 8.1 GB |
| qwen-image-edit-plus-iq2_s.gguf | Q2 | 6.55 GB |
| qwen-image-edit-plus-iq3_s.gguf | Q3 | 8.43 GB |
| qwen-image-edit-plus-iq3_xxs.gguf | Q3 | 8.34 GB |
| qwen-image-edit-plus-iq4_nl.gguf | Q4 | 10.8 GB |
| qwen-image-edit-plus-iq4_xs.gguf | Q4 | 10.2 GB |
| qwen-image-edit-plus-q2_k.gguf | Q2 | 6.59 GB |
| qwen-image-edit-plus-q2_k_s.gguf | Q2 | 6.55 GB |
| qwen-image-edit-plus-q3_k_l.gguf | Q3 | 8.53 GB |
| qwen-image-edit-plus-q3_k_m.gguf | Q3 | 8.47 GB |
| qwen-image-edit-plus-q3_k_s.gguf | Q3 | 8.4 GB |
| qwen-image-edit-plus-q4_0.gguf | Q4 | 10.8 GB |
| qwen-image-edit-plus-q4_1.gguf | Q4 | 11.98 GB |
| qwen-image-edit-plus-q4_k_m.gguf | Q4 | 10.93 GB |
| qwen-image-edit-plus-q4_k_s.gguf | Q4 | 10.8 GB |
| qwen-image-edit-plus-q5_0.gguf | Q5 | 13.17 GB |
| qwen-image-edit-plus-q5_1.gguf | Q5 | 14.36 GB |
| qwen-image-edit-plus-q5_k_m.gguf | Q5 | 13.24 GB |
| qwen-image-edit-plus-q5_k_s.gguf | Q5 | 13.17 GB |
| qwen-image-edit-plus-q6_k.gguf | Q6 | 15.69 GB |
| qwen-image-edit-plus-q8_0.gguf | Q8 | 20.29 GB |
| qwen-image-edit-plus-v2-iq3_s.gguf | Q3 | 8.4 GB |
| qwen-image-edit-plus-v2-iq3_xxs.gguf | Q3 | 8.31 GB |
| qwen-image-edit-plus-v2-iq4_nl.gguf | Q4 | 10.76 GB |
| qwen-image-edit-plus-v2-iq4_xs.gguf | Q4 | 10.17 GB |
| qwen-image-edit-plus-v2-mxfp4_moe.gguf | n/a | 14.56 GB |
| qwen-image-edit-plus-v2-tq1_0.gguf | n/a | 6.5 GB |
| qwen-image-edit-plus-v2-tq2_0.gguf | Q2 | 6.52 GB |
| qwen2.5-vl-7b-test-q4_0.gguf | Q4 | 4.69 GB |

All files are stored with LFS; download them from this repo.
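As a quick way to use the table above when choosing a quant, here is a hypothetical helper (not part of gguf-connector) that picks the largest file fitting a memory budget; the sizes are a small sample from the list, and note that file size is only a lower bound on the memory actually needed at runtime:

```python
# hypothetical helper: pick the largest GGUF quant within a memory budget
# sizes (GB) sampled from the file list above
PLUS_V2_FILES = {
    "qwen-image-edit-plus-v2-iq3_s.gguf": 8.4,
    "qwen-image-edit-plus-v2-iq4_nl.gguf": 10.76,
    "qwen-image-edit-plus-v2-mxfp4_moe.gguf": 14.56,
}

def pick_quant(files, budget_gb):
    """Return the largest file that fits within budget_gb, or None."""
    fitting = {name: size for name, size in files.items() if size <= budget_gb}
    if not fitting:
        return None
    return max(fitting, key=fitting.get)
```

For example, `pick_quant(PLUS_V2_FILES, 12)` selects the iq4_nl file, while a 5 GB budget returns `None` (nothing in this sample fits).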