---
license: other
tags:
- gguf
- glm
- pruned
- quantized
base_model: zai-org/GLM-5
---
# GLM-5 Pruned Q4_K_M GGUF
A pruned and Q4_K_M-quantized version of GLM-5 in GGUF format.
## Model Details
- Base Model: GLM-5 by Zhipu AI / Z.AI
- Quantization: Q4_K_M (4-bit, medium quality)
- Pruning: Pruned variant for reduced size
- Format: GGUF (compatible with llama.cpp, ollama, etc.)
- File Size: ~218 GB
## Usage
With llama.cpp:
```bash
llama-server --model GLM-5-pruned-Q4_K_M.gguf --n-gpu-layers 999 --ctx-size 8192
```
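With ollama, the same GGUF can be imported via a Modelfile. A minimal sketch, assuming the file sits in the current directory (the model name `glm5-pruned` and the context size are illustrative choices, not part of this upload):

```
# Modelfile — import the local GGUF into ollama
# Path and model name below are assumptions; adjust to your setup.
FROM ./GLM-5-pruned-Q4_K_M.gguf
PARAMETER num_ctx 8192
```

Then register and run it with `ollama create glm5-pruned -f Modelfile` followed by `ollama run glm5-pruned`.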
## Notes
This is a community upload of a pruned and quantized GLM-5 model. Even at 4-bit, it requires significant RAM/VRAM due to the large MoE architecture.
## GGUF Files

| Filename | Size | Notes |
|---|---|---|
| GLM-5-pruned-Q4_K_M.gguf | 217.1 GB | Recommended |