📋 Model Description

Base model: zai-org/GLM-4.6V-Flash

GLM-4.6V-Flash-GGUF

This model was converted from zai-org/GLM-4.6V-Flash to GGUF using llama.cpp's `convert_hf_to_gguf.py` script.

To run it with llama.cpp:

    llama-server -hf ggml-org/GLM-4.6V-Flash-GGUF
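Once started, llama-server exposes an OpenAI-compatible HTTP API (on port 8080 by default). A minimal sketch of a chat request against that endpoint, using only the standard library; the URL and the `model` field value are assumptions for illustration (llama-server serves a single model and largely ignores the name):

```python
import json

# llama-server's default OpenAI-compatible endpoint (assumed local, default port)
SERVER_URL = "http://localhost:8080/v1/chat/completions"

def build_chat_request(prompt: str, temperature: float = 0.7) -> str:
    """Build an OpenAI-style chat completion request body as a JSON string."""
    body = {
        "model": "GLM-4.6V-Flash",  # informational; the server serves one model
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }
    return json.dumps(body)

if __name__ == "__main__":
    payload = build_chat_request("Describe this model in one sentence.")
    print(payload)
    # To actually send it (requires the server above to be running):
    # import urllib.request
    # req = urllib.request.Request(
    #     SERVER_URL, data=payload.encode(),
    #     headers={"Content-Type": "application/json"})
    # print(urllib.request.urlopen(req).read().decode())
```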

📂 GGUF File List

πŸ“ Filename πŸ“¦ Size ⚑ Download
GLM-4.6V-Flash-Q4_K_M.gguf
Recommended LFS Q4
5.74 GB Download
GLM-4.6V-Flash-Q8_0.gguf
LFS Q8
9.31 GB Download
mmproj-GLM-4.6V-Flash-Q8_0.gguf
LFS Q8
934.65 MB Download
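If the files are downloaded manually rather than fetched with `-hf`, the vision projector has to be passed explicitly alongside the model weights; a sketch assuming both files sit in the current directory (adjust paths as needed):

```shell
# Serve the quantized model together with its multimodal projector.
llama-server \
  -m GLM-4.6V-Flash-Q4_K_M.gguf \
  --mmproj mmproj-GLM-4.6V-Flash-Q8_0.gguf
```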