## πŸ“‹ Model Description

GGUF quantizations of the base model zai-org/GLM-4.7-Flash, published as ngxson/GLM-4.7-Flash-GGUF.

See: https://github.com/ggml-org/llama.cpp/pull/18936

## πŸ“‚ GGUF File List

| πŸ“ Filename | Quantization | πŸ“¦ Size |
|---|---|---|
| GLM-4.7-Flash-Q4_K_M.gguf | Q4_K_M (recommended) | 16.89 GB |
| GLM-4.7-Flash-Q8_0.gguf | Q8_0 | 29.66 GB |
| GLM-4.7-Flash-f16.gguf | F16 | 55.79 GB |