---
license: mit
language:
- en
tags:
- comfyui
- gguf-comfy
- gguf-node
widget:
- text: drag it
---

# Model Description
gguf-node test pack
locate gguf under the Add Node > extension dropdown menu (between 3d and api; second-to-last option)
#### setup (in general)
- drag gguf file(s) to the diffusion_models folder (./ComfyUI/models/diffusion_models)
- drag clip or text encoder(s) to the text_encoders folder (./ComfyUI/models/text_encoders)
- drag controlnet adapter(s), if any, to the controlnet folder (./ComfyUI/models/controlnet)
- drag lora adapter(s), if any, to the loras folder (./ComfyUI/models/loras)
- drag vae decoder(s) to the vae folder (./ComfyUI/models/vae)
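The folder layout above can be set up in one go; a minimal sketch, assuming the default `./ComfyUI` root and an example filename (both placeholders; adjust to your install):

```python
# Create the model folders described above and drop a downloaded file in.
# The ./ComfyUI root and the commented example filename are assumptions.
from pathlib import Path
import shutil

root = Path("./ComfyUI/models")
for sub in ("diffusion_models", "text_encoders", "controlnet", "loras", "vae"):
    (root / sub).mkdir(parents=True, exist_ok=True)

# e.g. move a freshly downloaded diffusion model into place:
# shutil.move("aura_flow_0.3-q4_0.gguf", root / "diffusion_models")
```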
#### run it straight (no installation needed; recommended)
- get the comfy pack with the new gguf-node (pack)
- run the .bat file in the main directory
#### or, for existing users (alternative method)
- you could git clone the node to your ./ComfyUI/custom_nodes (more details here)
- either navigate to ./ComfyUI/custom_nodes first, or drag and drop the cloned node (gguf repo) there
#### workflow
- drag any workflow json file to the activated browser; or
- drag any generated output file (i.e., picture, video, etc.; which contains the workflow metadata) to the activated browser
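ComfyUI keeps the workflow metadata mentioned above as JSON text chunks inside generated PNGs (conventionally under the `workflow` and `prompt` keys); a minimal Pillow-based sketch of reading it back, assuming those chunk names:

```python
# Read the workflow metadata embedded in a ComfyUI-generated PNG.
# The "workflow"/"prompt" chunk names are ComfyUI conventions (assumption).
import json
from PIL import Image

def extract_workflow(png_path: str):
    info = Image.open(png_path).info  # PNG text chunks land here
    raw = info.get("workflow") or info.get("prompt")
    return json.loads(raw) if raw else None
```

This is why dragging an output picture into the browser restores the whole graph: the file itself carries the workflow.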
#### simulator
- design your own prompt; or
- generate random prompt/descriptor(s) with the simulator (might not be applicable to all models)
#### booster
- drag safetensors file(s) to the diffusion_models folder (./ComfyUI/models/diffusion_models)
- select the safetensors model; click Queue (run); simply track the progress from the console
- when it is done, the boosted fp32 safetensors will be saved to the output folder (./ComfyUI/output)
#### cutter (beta)
- drag safetensors file(s) to the diffusion_models folder (./ComfyUI/models/diffusion_models)
- select the safetensors model; click Queue (run); simply track the progress from the console
- when it is done, the half-cut fp8_e4m3fn safetensors will be saved to the output folder (./ComfyUI/output)
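Booster (fp32) and cutter (fp8_e4m3fn) trade precision against size in opposite directions; some back-of-envelope arithmetic (pure illustration; real files add metadata overhead, and the parameter count is hypothetical):

```python
# Per-weight storage for the precisions mentioned above.
def tensor_bytes(n_params: int, bits_per_weight: int) -> int:
    return n_params * bits_per_weight // 8

n = 1_000_000_000            # hypothetical 1B-parameter model
fp32 = tensor_bytes(n, 32)   # booster output: 4 bytes per weight
fp8 = tensor_bytes(n, 8)     # cutter output: 1 byte per weight
print(fp32 // fp8)           # -> 4: fp8 weights need a quarter of the fp32 bytes
```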
#### convertor (alpha)
- drag safetensors file(s) to the diffusion_models folder (./ComfyUI/models/diffusion_models)
- select the safetensors model; click Queue (run); track the progress from the console
- the converted gguf file will be saved to the output folder (./ComfyUI/output)
#### convertor (reverse)
- drag gguf file(s) to the diffusion_models folder (./ComfyUI/models/diffusion_models)
- select the gguf model; click Queue (run); track the progress from the console
- the reverse-converted safetensors file will be saved to the output folder (./ComfyUI/output)
#### convertor (zero)
- drag safetensors file(s) to the diffusion_models folder (./ComfyUI/models/diffusion_models)
- select the safetensors model; click Queue (run); track the progress from the console
- the converted gguf file will be saved to the output folder (./ComfyUI/output)
- zero means no restrictions; unlike alpha, any form of safetensors can be converted; the pig architecture will be applied to the output gguf
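A quick way to sanity-check any converted output is the GGUF magic: per the GGUF specification, every GGUF file begins with the four ASCII bytes `GGUF`. A minimal check (the example path is hypothetical):

```python
# Check the 4-byte magic at the start of a file (per the GGUF spec,
# every GGUF container begins with the ASCII bytes "GGUF").
def is_gguf(path: str) -> bool:
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"

# e.g. is_gguf("./ComfyUI/output/model-q4_0.gguf")
```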
#### latest feature: gguf vae loader
- the gguf vae loader is now working in gguf-node
- a gguf vae reduces your machine's memory consumption
- convert your safetensors vae to a gguf vae using convertor (zero), then use it with the new gguf vae loader
- like the gguf clip loaders, the gguf vae loader is compatible with both safetensors and gguf file(s)
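The memory saving comes from quantization. In llama.cpp's q4_0 layout, weights are stored in blocks of 32 with one fp16 scale per block, i.e. 18 bytes per 32 weights (4.5 bits/weight); a sketch of the arithmetic (actual savings vary by tensor, since not every tensor is quantized):

```python
# q4_0 storage per llama.cpp: blocks of 32 weights, each block =
# 2-byte fp16 scale + 16 bytes of packed 4-bit quants = 18 bytes.
def q4_0_bytes(n_weights: int) -> int:
    assert n_weights % 32 == 0
    return (n_weights // 32) * (2 + 16)

n = 32 * 1024
ratio = q4_0_bytes(n) / (n * 2)  # vs fp16 storage (2 bytes/weight)
print(ratio)                     # -> 0.28125, i.e. ~28% of the fp16 size
```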
#### disclaimer
- some models (original files), as well as parts of the code, were obtained from or provided by third parties, and we may not be able to identify the creator/contributor(s) behind them unless specified in the source; we leave the credit blank rather than writing anonymous/unnamed/unknown
- we will keep trying to trace the sources; if it is your work, do let us know and we will credit it properly; thanks for everything
#### reference
- sd3.5, sdxl from stabilityai
- flux from black-forest-labs
- pixart from pixart-alpha
- lumina from alpha-vllm
- aura from fal
- mochi from genmo
- hyvid from tencent
- wan from wan-ai
- ltxv from lightricks
- cosmos from nvidia
- pig architecture from connector
- comfyui from comfyanonymous
- comfyui-gguf from city96
- llama.cpp from ggerganov
- llama-cpp-python from abetlen
- gguf-connector (pypi|repo)
- gguf-node (pypi|repo|pack)
# GGUF File List

| Filename | Size | Download |
|---|---|---|
| aura_flow_0.3-q4_0.gguf (Recommended; LFS; Q4) | 3.68 GB | Download |
| cosmos-7b-text2world-q4_k_m.gguf (LFS; Q4) | 3.79 GB | Download |
| flux-dev-fp32-q4_0.gguf (LFS; Q4) | 6.24 GB | Download |
| flux1-schnell-q4_0.gguf (LFS; Q4) | 6.41 GB | Download |
| hunyuan-video-t2v-720p-q4_0.gguf (LFS; Q4) | 7.21 GB | Download |
| legacy_sdxl_blackmagic-q4_0.gguf (LFS; Q4) | 1.36 GB | Download |
| legacy_sdxl_boleromix-q4_0.gguf (LFS; Q4) | 1.36 GB | Download |
| legacy_sdxl_snow-q4_0.gguf (LFS; Q4) | 1.36 GB | Download |
| llava-llama-3-8b-q3_k_m.gguf (LFS; Q3) | 3.74 GB | Download |
| ltx-video-2b-v0.9-q4_0.gguf (LFS; Q4) | 1.18 GB | Download |
| ltx-video-2b-v0.9.1-q4_0.gguf (LFS; Q4) | 1.01 GB | Download |
| lumina2-q4_k_m.gguf (LFS; Q4) | 1.37 GB | Download |
| mochi-q3_k_m.gguf (LFS; Q3) | 4.01 GB | Download |
| piggy-i2v-q4_k_m.gguf (LFS; Q4) | 7.32 GB | Download |
| piggy-i2v-v2-q4_0.gguf (LFS; Q4) | 7.21 GB | Download |
| piggy-t2v-q3_k_m.gguf (LFS; Q3) | 5.81 GB | Download |
| piggy-t2v-q4_k_m.gguf (LFS; Q4) | 7.34 GB | Download |
| pixart-sigma-xl-2-1024-ms-f16.gguf (LFS; FP16) | 1.14 GB | Download |
| pixart-sigma-xl-2-1024-ms-q4_k_m.gguf (LFS; Q4) | 955.34 MB | Download |
| pixart-xl-2-1024-ms-f16.gguf (LFS; FP16) | 1.14 GB | Download |
| pixart-xl-2-1024-ms-q4_k_m.gguf (LFS; Q4) | 956.96 MB | Download |
| sd3.5-large-fp32-q4_0.gguf (LFS; Q4) | 4.45 GB | Download |
| sd3.5_large-q4_0.gguf (LFS; Q4) | 4.44 GB | Download |
| sd3.5_large_turbo-q4_0.gguf (LFS; Q4) | 4.44 GB | Download |
| sd3.5_medium-q4_0.gguf (LFS; Q4) | 1.62 GB | Download |
| wan2.1-i2v-14b-480p-q3_k_m.gguf (LFS; Q3) | 8 GB | Download |
| wan2.1-i2v-14b-720p-q3_k_m.gguf (LFS; Q3) | 8 GB | Download |
| wan2.1-t2v-14b-q3_k_m.gguf (LFS; Q3) | 7.12 GB | Download |