# Model Description
license: apache-2.0
library_name: gguf
base_model:
- Wan-AI/Wan2.1-VACE-14B
tags:
- video
- video-generation
Example workflow, based on the ComfyUI example workflow.
This is a direct GGUF conversion of Wan-AI/Wan2.1-VACE-14B.
All quants were created from the FP32 base file. I have only uploaded Q8_0 and smaller; if you want the F16 or BF16 file, I can upload it on request.
The model files can be used with the ComfyUI-GGUF custom node.
Place model files in `ComfyUI/models/unet`
- see the GitHub readme for further install instructions.
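The placement step above can be sketched as a short shell snippet; the downloaded filename and its source path are illustrative assumptions, and the quant shown is one of the files listed below:

```shell
# Create ComfyUI's UNet model folder if it does not already exist
mkdir -p ComfyUI/models/unet

# Then move the downloaded quant into place, e.g.:
# mv ~/Downloads/Wan2.1_14B_VACE-Q4_0.gguf ComfyUI/models/unet/
```

After restarting ComfyUI, the file should appear in the ComfyUI-GGUF "Unet Loader (GGUF)" node's model list.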
The VAE can be downloaded from here.
Please refer to this chart for a basic overview of quantization types.
For conversion, I used the conversion scripts from city96.
## GGUF File List
| Filename | Quant | Size | Notes |
|---|---|---|---|
| Wan2.1_14B_VACE-BF16.gguf | BF16 | 32.31 GB | |
| Wan2.1_14B_VACE-F16.gguf | F16 | 32.31 GB | |
| Wan2.1_14B_VACE-Q3_K_S.gguf | Q3 | 7.31 GB | |
| Wan2.1_14B_VACE-Q4_0.gguf | Q4 | 9.62 GB | Recommended |
| Wan2.1_14B_VACE-Q4_1.gguf | Q4 | 10.42 GB | |
| Wan2.1_14B_VACE-Q4_K_M.gguf | Q4 | 10.84 GB | |
| Wan2.1_14B_VACE-Q4_K_S.gguf | Q4 | 9.83 GB | |
| Wan2.1_14B_VACE-Q5_0.gguf | Q5 | 11.61 GB | |
| Wan2.1_14B_VACE-Q5_1.gguf | Q5 | 12.41 GB | |
| Wan2.1_14B_VACE-Q5_K_M.gguf | Q5 | 12.14 GB | |
| Wan2.1_14B_VACE-Q5_K_S.gguf | Q5 | 11.41 GB | |
| Wan2.1_14B_VACE-Q6_K.gguf | Q6 | 13.53 GB | |
| Wan2.1_14B_VACE-Q8_0.gguf | Q8 | 17.38 GB | |