---
license: apache-2.0
base_model: Tongyi-MAI/Z-Image
tags:
  - gguf
  - diffusion
  - text-to-image
  - lumina2
  - comfyui
  - quantized
library_name: gguf
pipeline_tag: text-to-image
---

# Z-Image Base GGUF

GGUF-quantized version of Tongyi-MAI/Z-Image (Alibaba's 6B-parameter diffusion model) for use with [ComfyUI-GGUF](https://github.com/city96/ComfyUI-GGUF).

## Model Information

| Property | Value |
| --- | --- |
| Base Model | Tongyi-MAI/Z-Image |
| Architecture | Lumina2 (DiT-based) |
| Parameters | ~6B |
| Type | Non-distilled (supports CFG, negative prompts, LoRA) |
| Recommended Steps | 28–50 |

## Available Quantizations

| File | Size | VRAM Required | Quality |
| --- | --- | --- | --- |
| z_image_base_Q8_0.gguf | 6.8 GB | ~7–8 GB | Best |
| z_image_base_BF16.gguf | 12.4 GB | ~13 GB | Original |
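As a rough cross-check, the file sizes above follow from each format's bits per weight. A minimal sketch, assuming ~6B weights and that Q8_0 stores roughly 8.5 bits/weight (8-bit values plus per-block scales) while BF16 stores 16 — these bit counts are assumptions, not figures from this card:

```python
# Rough GGUF file-size estimate from parameter count and bits per weight.
# Assumed values: ~6e9 weights; Q8_0 ~8.5 bits/weight; BF16 16 bits/weight.

def estimate_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate file size in decimal gigabytes."""
    return n_params * bits_per_weight / 8 / 1e9

q8_gb = estimate_gb(6e9, 8.5)     # in the ballpark of the ~6.8 GB listed above
bf16_gb = estimate_gb(6e9, 16.0)  # in the ballpark of the ~12.4 GB listed above

print(f"Q8_0:  ~{q8_gb:.1f} GB")
print(f"BF16: ~{bf16_gb:.1f} GB")
```

Actual files run slightly larger or smaller than this back-of-the-envelope number because of per-block metadata and non-quantized layers.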

## Usage with ComfyUI

### Requirements

1. [ComfyUI](https://github.com/comfyanonymous/ComfyUI)
2. [ComfyUI-GGUF](https://github.com/city96/ComfyUI-GGUF) custom nodes

### Installation

1. Install ComfyUI-GGUF:

   ```bash
   cd ComfyUI/custom_nodes
   git clone https://github.com/city96/ComfyUI-GGUF
   pip install --upgrade gguf
   ```

2. Download the GGUF file and place it in:

   ```
   ComfyUI/models/unet/
   ```

3. Use the "Unet Loader (GGUF)" node instead of the standard model loader.
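After downloading, you can sanity-check that the file is a valid GGUF container by reading its fixed header (4-byte magic `GGUF`, little-endian uint32 version, then uint64 tensor and key/value counts). A stdlib-only sketch; the dummy file at the end is purely illustrative — a real check would point at the file you placed in `ComfyUI/models/unet/`:

```python
import struct

def read_gguf_header(path: str):
    """Return (version, n_tensors, n_kv) from a GGUF file's fixed header."""
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"not a GGUF file (magic={magic!r})")
        # uint32 version, then uint64 tensor count and uint64 KV count (GGUF v2+)
        version, n_tensors, n_kv = struct.unpack("<IQQ", f.read(20))
        return version, n_tensors, n_kv

# Demo on a dummy header (illustration only; use your downloaded file instead):
with open("dummy.gguf", "wb") as f:
    f.write(b"GGUF" + struct.pack("<IQQ", 3, 0, 0))

print(read_gguf_header("dummy.gguf"))  # (3, 0, 0)
```

A truncated or corrupted download typically fails the magic check or raises on the short read, which is cheaper to catch here than at ComfyUI load time.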

## Credits

## License

Apache 2.0 (same as the original Z-Image model).

## GGUF File List

| Filename | Quantization | Size |
| --- | --- | --- |
| z_image_base_BF16.gguf | BF16 | 11.47 GB |
| z_image_base_Q4_K_M.gguf | Q4_K_M (recommended) | 4.59 GB |
| z_image_base_Q8_0.gguf | Q8_0 | 6.73 GB |