---
base_model:
  - QuantStack/Wan2.1_T2V_14B_FusionX_VACE
base_model_relation: quantized
library_name: gguf
quantized_by: lym00
tags:
  - text-to-video
  - image-to-video
  - video-to-video
  - quantized
language:
  - en
license: apache-2.0
---

## 📋 Model Description

This is a GGUF conversion of QuantStack/Wan2.1_T2V_14B_FusionX_VACE.

All quantized versions were created from the base FP16 model using the conversion scripts provided by city96, available at the ComfyUI-GGUF GitHub repository.

## Usage

The model files can be used in ComfyUI with the ComfyUI-GGUF custom node. Place the required model(s) in the following folders:

| Type | Name | Location | Download |
| --- | --- | --- | --- |
| Main Model | Wan2.1_T2V_14B_FusionX_VACE-GGUF | `ComfyUI/models/unet` | GGUF (this repo) |
| Text Encoder | umt5-xxl-encoder | `ComfyUI/models/text_encoders` | Safetensors / GGUF |
| VAE | Wan2_1_VAE_bf16 | `ComfyUI/models/vae` | Safetensors |

An example ComfyUI workflow is also available.
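The folder layout above can be expressed as a small helper — a sketch only; `comfy_root` and the `kind` labels are illustrative names chosen here, not ComfyUI terminology or API:

```python
from pathlib import Path

# Target folders from the table above, relative to the ComfyUI install.
# The "kind" keys are illustrative labels for this sketch only.
LOCATIONS = {
    "main_model": "models/unet",
    "text_encoder": "models/text_encoders",
    "vae": "models/vae",
}

def target_path(comfy_root: str, kind: str, filename: str) -> Path:
    """Return the path where a downloaded file should be placed."""
    return Path(comfy_root) / LOCATIONS[kind] / filename

print(target_path("ComfyUI", "main_model",
                  "Wan2.1_T2V_14B_FusionX_VACE-Q4_0.gguf"))
```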

## Notes

All original licenses and restrictions from the base models still apply.

## Reference

## 📂 GGUF File List

| 📁 Filename | Quant | 📦 Size |
| --- | --- | --- |
| Wan2.1_T2V_14B_FusionX_VACE-F16.gguf | FP16 | 32.3 GB |
| Wan2.1_T2V_14B_FusionX_VACE-Q2_K.gguf | Q2 | 5.92 GB |
| Wan2.1_T2V_14B_FusionX_VACE-Q3_K_L.gguf | Q3 | 8.72 GB |
| Wan2.1_T2V_14B_FusionX_VACE-Q3_K_S.gguf | Q3 | 7.3 GB |
| Wan2.1_T2V_14B_FusionX_VACE-Q4_0.gguf (recommended) | Q4 | 9.61 GB |
| Wan2.1_T2V_14B_FusionX_VACE-Q4_1.gguf | Q4 | 10.41 GB |
| Wan2.1_T2V_14B_FusionX_VACE-Q4_K_M.gguf | Q4 | 10.83 GB |
| Wan2.1_T2V_14B_FusionX_VACE-Q4_K_S.gguf | Q4 | 9.82 GB |
| Wan2.1_T2V_14B_FusionX_VACE-Q5_0.gguf | Q5 | 11.6 GB |
| Wan2.1_T2V_14B_FusionX_VACE-Q5_1.gguf | Q5 | 12.4 GB |
| Wan2.1_T2V_14B_FusionX_VACE-Q5_K_M.gguf | Q5 | 12.13 GB |
| Wan2.1_T2V_14B_FusionX_VACE-Q5_K_S.gguf | Q5 | 11.4 GB |
| Wan2.1_T2V_14B_FusionX_VACE-Q6_K.gguf | Q6 | 13.52 GB |
| Wan2.1_T2V_14B_FusionX_VACE-Q8_0.gguf | Q8 | 17.37 GB |
| Wan2.1_T2V_14B_FusionX_VACE-Q3_K_M.gguf | Q3 | 8.03 GB |
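As a rule of thumb, pick the largest quant whose file fits in VRAM with room left for activations and latents. A minimal sketch using the sizes from the table above (the 0.75 headroom factor is an illustrative guess, not a benchmarked value):

```python
# File sizes in GB, taken from the table above.
QUANT_SIZES_GB = {
    "F16": 32.3, "Q2_K": 5.92, "Q3_K_S": 7.3, "Q3_K_M": 8.03,
    "Q3_K_L": 8.72, "Q4_0": 9.61, "Q4_1": 10.41, "Q4_K_S": 9.82,
    "Q4_K_M": 10.83, "Q5_0": 11.6, "Q5_K_S": 11.4, "Q5_1": 12.4,
    "Q5_K_M": 12.13, "Q6_K": 13.52, "Q8_0": 17.37,
}

def largest_fitting_quant(vram_gb: float, headroom: float = 0.75):
    """Largest quant whose weights fit within `headroom * vram_gb`.

    `headroom` reserves VRAM for activations/latents; 0.75 is an
    assumption for this sketch, not a measured number.
    """
    budget = vram_gb * headroom
    fitting = [(size, name) for name, size in QUANT_SIZES_GB.items()
               if size <= budget]
    return max(fitting)[1] if fitting else None

print(largest_fitting_quant(16))  # pick a quant for a 16 GB card
```

On this heuristic a 16 GB card lands on a Q5 variant, while F16 needs well over 32 GB of VRAM for the weights alone.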