# Model Description
---
base_model: Wan-AI/Wan2.1-I2V-14B-720P
library_name: gguf
quantized_by: city96
tags:
- video
- video-generation
- en
- zh
---
This is a direct GGUF conversion of Wan-AI/Wan2.1-I2V-14B-720P.
All quants were created from the FP32 base file. The FP32 file itself is not uploaded because it exceeds the 50 GB max file limit and gguf-split loading is not currently supported in ComfyUI-GGUF; FP16 is the largest file provided.
The model files can be used with the ComfyUI-GGUF custom node. Place the model files in `ComfyUI/models/unet`; see the GitHub readme for further install instructions.
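For example, fetching a single quant and placing it in the expected folder might look like the following sketch. The repository ID and the use of `huggingface-cli download` with a filename argument are assumptions based on typical Hugging Face repos; adjust the paths to your own ComfyUI install.

```shell
# Download one quant file from the repository (repo ID assumed for illustration).
huggingface-cli download city96/Wan2.1-I2V-14B-720P-gguf \
  wan2.1-i2v-14b-720p-Q4_0.gguf --local-dir .

# Place the model file where the ComfyUI-GGUF custom node looks for UNet weights.
mkdir -p ComfyUI/models/unet
mv wan2.1-i2v-14b-720p-Q4_0.gguf ComfyUI/models/unet/
```

Smaller quants (e.g. Q3_K_S) can be substituted for the Q4_0 filename if VRAM is limited.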
The other required files can be downloaded from the repository provided by Comfy-Org.

Please refer to this chart for a basic overview of quantization types.
## GGUF File List
| Filename | Quant type | Size |
|---|---|---|
| wan2.1-i2v-14b-720p-BF16.gguf | BF16 | 31 GB |
| wan2.1-i2v-14b-720p-F16.gguf | F16 | 31 GB |
| wan2.1-i2v-14b-720p-Q3_K_M.gguf | Q3_K_M | 8 GB |
| wan2.1-i2v-14b-720p-Q3_K_S.gguf | Q3_K_S | 7.38 GB |
| wan2.1-i2v-14b-720p-Q4_0.gguf (recommended) | Q4_0 | 9.54 GB |
| wan2.1-i2v-14b-720p-Q4_1.gguf | Q4_1 | 10.32 GB |
| wan2.1-i2v-14b-720p-Q4_K_M.gguf | Q4_K_M | 10.56 GB |
| wan2.1-i2v-14b-720p-Q4_K_S.gguf | Q4_K_S | 9.72 GB |
| wan2.1-i2v-14b-720p-Q5_0.gguf | Q5_0 | 11.42 GB |
| wan2.1-i2v-14b-720p-Q5_1.gguf | Q5_1 | 12.2 GB |
| wan2.1-i2v-14b-720p-Q5_K_M.gguf | Q5_K_M | 11.87 GB |
| wan2.1-i2v-14b-720p-Q5_K_S.gguf | Q5_K_S | 11.26 GB |
| wan2.1-i2v-14b-720p-Q6_K.gguf | Q6_K | 13.26 GB |
| wan2.1-i2v-14b-720p-Q8_0.gguf | Q8_0 | 16.9 GB |