## Model Description

- license: apache-2.0
- base_model: bytedance-research/HuMo, gguf-node

### humo-gguf
- drag humo to > ./ComfyUI/models/diffusion_models
- drag cow-umt5xxl [3.67GB] to > ./ComfyUI/models/text_encoders
- drag pig [254MB] to > ./ComfyUI/models/vae (see the download sketch below)
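If you would rather script the downloads than drag files by hand, here is a minimal sketch using `huggingface_hub`. The repo id and the text-encoder/vae filenames are assumptions (placeholders), not taken from this card; only `humo-1.7b-q4_0.gguf` appears in the file list below. Swap in the real names before running.

```python
# minimal download sketch -- repo id and two of the filenames are placeholders
from huggingface_hub import hf_hub_download

REPO_ID = "calcuis/humo-gguf"   # assumption: replace with the actual repo id
COMFYUI = "./ComfyUI/models"

targets = {
    "humo-1.7b-q4_0.gguf":  f"{COMFYUI}/diffusion_models",  # recommended quant from the list below
    "umt5xxl-encoder.gguf": f"{COMFYUI}/text_encoders",     # hypothetical filename for cow-umt5xxl
    "pig_vae.safetensors":  f"{COMFYUI}/vae",               # hypothetical filename for pig
}

for filename, folder in targets.items():
    # hf_hub_download fetches the file and places a copy under local_dir
    path = hf_hub_download(repo_id=REPO_ID, filename=filename, local_dir=folder)
    print("saved", path)
```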
### s2v workflow

- drag humo to > ./ComfyUI/models/diffusion_models
- pick any one of the text encoders below and drag it to > ./ComfyUI/models/text_encoders
- drag whisper3 [3.23GB] to > ./ComfyUI/models/audio_encoders
- drag pig [254MB] to > ./ComfyUI/models/vae (see the folder-setup sketch below)
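To make sure all four drop targets exist before copying files in, a small sketch like the one below can prepare them; the ComfyUI root path is an assumption about a default install, so point it at your own.

```python
# prepare the ComfyUI model folders used by the two workflows above
from pathlib import Path

comfyui_models = Path("./ComfyUI/models")  # assumption: default ComfyUI layout
for sub in ("diffusion_models", "text_encoders", "audio_encoders", "vae"):
    folder = comfyui_models / sub
    folder.mkdir(parents=True, exist_ok=True)  # no-op if the folder already exists
    print(folder, "->", [p.name for p in folder.glob("*.gguf")] or "empty")
```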
note: output differs from wan, so don't expect too much; get the lite lora for 4/8-step operation here (the lora works for the 17b model only, but the 1.7b is fast enough on its own)
## GGUF File List
| Filename | Size | Download |
|---|---|---|
| humo-1.7b-iq4_nl.gguf | 1 GB | Download |
| humo-1.7b-iq4_xs.gguf | 975.56 MB | Download |
| humo-1.7b-q2_k.gguf | 671.63 MB | Download |
| humo-1.7b-q3_k_l.gguf | 967.04 MB | Download |
| humo-1.7b-q3_k_m.gguf | 893.07 MB | Download |
| humo-1.7b-q3_k_s.gguf | 813.31 MB | Download |
| humo-1.7b-q4_0.gguf (recommended) | 1.03 GB | Download |
| humo-1.7b-q4_1.gguf | 1.1 GB | Download |
| humo-1.7b-q4_k_m.gguf | 1.15 GB | Download |
| humo-1.7b-q4_k_s.gguf | 1.05 GB | Download |
| humo-1.7b-q5_0.gguf | 1.22 GB | Download |
| humo-1.7b-q5_1.gguf | 1.29 GB | Download |
| humo-1.7b-q5_k_m.gguf | 1.27 GB | Download |
| humo-1.7b-q5_k_s.gguf | 1.2 GB | Download |
| humo-1.7b-q6_k.gguf | 1.4 GB | Download |
| humo-1.7b-q8_0.gguf | 1.78 GB | Download |
| humo-17b-iq4_nl.gguf | 9.83 GB | Download |
| humo-17b-iq4_xs.gguf | 9.34 GB | Download |
| humo-17b-q3_k_s.gguf | 7.75 GB | Download |
| humo-17b-q4_0.gguf | 9.99 GB | Download |
| humo-17b-q5_0.gguf | 11.95 GB | Download |
| humo-17b-q6_k.gguf | 13.86 GB | Download |
| humo-17b-q8_0.gguf | 17.64 GB | Download |
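If you are unsure which 1.7b quant to grab, a helper like the sketch below can pick the largest file that fits a disk or VRAM budget. The sizes are copied (approximately, in GB) from the table above; the budget value in the usage line is only an illustration.

```python
# illustrative quant picker -- sizes (approx. GB) taken from the table above
QUANTS_1_7B = {
    "humo-1.7b-q2_k.gguf":   0.67,
    "humo-1.7b-q3_k_s.gguf": 0.81,
    "humo-1.7b-q3_k_m.gguf": 0.89,
    "humo-1.7b-q3_k_l.gguf": 0.97,
    "humo-1.7b-iq4_xs.gguf": 0.98,
    "humo-1.7b-iq4_nl.gguf": 1.00,
    "humo-1.7b-q4_0.gguf":   1.03,
    "humo-1.7b-q4_k_s.gguf": 1.05,
    "humo-1.7b-q4_1.gguf":   1.10,
    "humo-1.7b-q4_k_m.gguf": 1.15,
    "humo-1.7b-q5_k_s.gguf": 1.20,
    "humo-1.7b-q5_0.gguf":   1.22,
    "humo-1.7b-q5_k_m.gguf": 1.27,
    "humo-1.7b-q5_1.gguf":   1.29,
    "humo-1.7b-q6_k.gguf":   1.40,
    "humo-1.7b-q8_0.gguf":   1.78,
}

def pick_quant(budget_gb: float, quants: dict[str, float] = QUANTS_1_7B) -> str | None:
    """Return the largest quant that still fits the budget, or None if none fit."""
    fitting = [(size, name) for name, size in quants.items() if size <= budget_gb]
    return max(fitting)[1] if fitting else None

print(pick_quant(1.1))  # -> humo-1.7b-q4_1.gguf
```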