---
base_model: Wan-AI/Wan2.1-T2V-14B
library_name: gguf
quantized_by: city96
tags:
  - text-to-video
---

# Model Description
This is a direct GGUF conversion of Wan-AI/Wan2.1-T2V-14B.

All quants are created from the FP32 base file, though only the FP16 version is uploaded, since the FP32 file exceeds the 50 GB max file size limit and gguf-split loading is not currently supported in ComfyUI-GGUF.
The model files can be used with the ComfyUI-GGUF custom node. Place the model files in `ComfyUI/models/unet`; see the GitHub readme for further installation instructions.
The VAE can be downloaded from this repository by Kijai.
Please refer to this chart for a basic overview of quantization types.
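The install steps above can be sketched as follows. This is a minimal sketch, not the official install procedure: the directory layout assumes ComfyUI is checked out in the current directory, and the clone/move commands are left commented as placeholders for whichever quant you actually download.

```shell
# Sketch of the expected directory layout (assumed: ComfyUI in the current directory).
set -e
mkdir -p ComfyUI/custom_nodes ComfyUI/models/unet

# Install the ComfyUI-GGUF custom node (uncomment to run):
# git clone https://github.com/city96/ComfyUI-GGUF ComfyUI/custom_nodes/ComfyUI-GGUF
# pip install -r ComfyUI/custom_nodes/ComfyUI-GGUF/requirements.txt

# Place your downloaded quant in the unet folder (Q4_0 shown as an example):
# mv wan2.1-t2v-14b-Q4_0.gguf ComfyUI/models/unet/
```

After restarting ComfyUI, the GGUF file should appear in the "Unet Loader (GGUF)" node's model list.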
# GGUF File List
| Filename | Quant type | Size | Notes |
|---|---|---|---|
| wan2.1-t2v-14b-BF16.gguf | BF16 | 27.06 GB | |
| wan2.1-t2v-14b-F16.gguf | FP16 | 27.06 GB | |
| wan2.1-t2v-14b-Q3_K_M.gguf | Q3 | 7.12 GB | |
| wan2.1-t2v-14b-Q3_K_S.gguf | Q3 | 6.51 GB | |
| wan2.1-t2v-14b-Q4_0.gguf | Q4 | 8.41 GB | Recommended |
| wan2.1-t2v-14b-Q4_1.gguf | Q4 | 9.06 GB | |
| wan2.1-t2v-14b-Q4_K_M.gguf | Q4 | 9.43 GB | |
| wan2.1-t2v-14b-Q4_K_S.gguf | Q4 | 8.59 GB | |
| wan2.1-t2v-14b-Q5_0.gguf | Q5 | 10.05 GB | |
| wan2.1-t2v-14b-Q5_1.gguf | Q5 | 10.7 GB | |
| wan2.1-t2v-14b-Q5_K_M.gguf | Q5 | 10.49 GB | |
| wan2.1-t2v-14b-Q5_K_S.gguf | Q5 | 9.88 GB | |
| wan2.1-t2v-14b-Q6_K.gguf | Q6 | 11.62 GB | |
| wan2.1-t2v-14b-Q8_0.gguf | Q8 | 14.79 GB | |