πŸ“‹ Model Description


```yaml
base_model: THUDM/glm-4-9b-chat
inference: false
language:
  - zh
  - en
library_name: gguf
license: other
license_link: https://huggingface.co/THUDM/glm-4-9b-chat/blob/main/LICENSE
license_name: glm-4
pipeline_tag: text-generation
quantized_by: legraphista
tags:
  - glm
  - chatglm
  - thudm
  - quantized
  - GGUF
  - quantization
  - static
  - 16bit
  - 8bit
  - 6bit
  - 5bit
  - 4bit
  - 3bit
  - 2bit
```

glm-4-9b-chat-GGUF

Llama.cpp static quantization of THUDM/glm-4-9b-chat

Original Model: THUDM/glm-4-9b-chat
Original dtype: BF16 (bfloat16)
Quantized with: llama.cpp PR #6999 (https://github.com/ggerganov/llama.cpp/pull/6999)
IMatrix dataset: here

- Files
  - Common Quants
  - All Quants
- Downloading using huggingface-cli
- Inference
  - Simple chat template
  - Chat template with system prompt
  - Llama.cpp
- FAQ
  - Why is the IMatrix not applied everywhere?
  - How do I merge a split GGUF?

Files

Common Quants

| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| glm-4-9b-chat.Q8_0.gguf | Q8_0 | 9.99GB | βœ… Available | βšͺ Static | πŸ“¦ No |
| glm-4-9b-chat.Q6_K.gguf | Q6_K | 8.26GB | βœ… Available | βšͺ Static | πŸ“¦ No |
| glm-4-9b-chat.Q4_K.gguf | Q4_K | 6.25GB | βœ… Available | βšͺ Static | πŸ“¦ No |
| glm-4-9b-chat.Q3_K.gguf | Q3_K | 5.06GB | βœ… Available | βšͺ Static | πŸ“¦ No |
| glm-4-9b-chat.Q2_K.gguf | Q2_K | 3.99GB | βœ… Available | βšͺ Static | πŸ“¦ No |

All Quants

| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| glm-4-9b-chat.BF16.gguf | BF16 | 18.81GB | βœ… Available | βšͺ Static | πŸ“¦ No |
| glm-4-9b-chat.FP16.gguf | F16 | 18.81GB | βœ… Available | βšͺ Static | πŸ“¦ No |
| glm-4-9b-chat.Q8_0.gguf | Q8_0 | 9.99GB | βœ… Available | βšͺ Static | πŸ“¦ No |
| glm-4-9b-chat.Q6_K.gguf | Q6_K | 8.26GB | βœ… Available | βšͺ Static | πŸ“¦ No |
| glm-4-9b-chat.Q5_K.gguf | Q5_K | 7.14GB | βœ… Available | βšͺ Static | πŸ“¦ No |
| glm-4-9b-chat.Q5_K_S.gguf | Q5_K_S | 6.69GB | βœ… Available | βšͺ Static | πŸ“¦ No |
| glm-4-9b-chat.Q4_K.gguf | Q4_K | 6.25GB | βœ… Available | βšͺ Static | πŸ“¦ No |
| glm-4-9b-chat.Q4_K_S.gguf | Q4_K_S | 5.75GB | βœ… Available | βšͺ Static | πŸ“¦ No |
| glm-4-9b-chat.IQ4_NL.gguf | IQ4_NL | 5.51GB | βœ… Available | βšͺ Static | πŸ“¦ No |
| glm-4-9b-chat.IQ4_XS.gguf | IQ4_XS | 5.30GB | βœ… Available | βšͺ Static | πŸ“¦ No |
| glm-4-9b-chat.Q3_K.gguf | Q3_K | 5.06GB | βœ… Available | βšͺ Static | πŸ“¦ No |
| glm-4-9b-chat.Q3_K_L.gguf | Q3_K_L | 5.28GB | βœ… Available | βšͺ Static | πŸ“¦ No |
| glm-4-9b-chat.Q3_K_S.gguf | Q3_K_S | 4.59GB | βœ… Available | βšͺ Static | πŸ“¦ No |
| glm-4-9b-chat.IQ3_M.gguf | IQ3_M | 4.81GB | βœ… Available | βšͺ Static | πŸ“¦ No |
| glm-4-9b-chat.IQ3_S.gguf | IQ3_S | 4.59GB | βœ… Available | βšͺ Static | πŸ“¦ No |
| glm-4-9b-chat.IQ3_XS.gguf | IQ3_XS | 4.43GB | βœ… Available | βšͺ Static | πŸ“¦ No |
| glm-4-9b-chat.Q2_K.gguf | Q2_K | 3.99GB | βœ… Available | βšͺ Static | πŸ“¦ No |

Downloading using huggingface-cli

If you do not have huggingface-cli installed:

```
pip install -U "huggingface_hub[cli]"
```

Download the specific file you want:

```
huggingface-cli download legraphista/glm-4-9b-chat-GGUF --include "glm-4-9b-chat.Q8_0.gguf" --local-dir ./
```

If the model file is big, it has been split into multiple files. To download them all to a local folder, run:

```
huggingface-cli download legraphista/glm-4-9b-chat-GGUF --include "glm-4-9b-chat.Q8_0/*" --local-dir ./
```

See the FAQ below for merging split GGUFs.
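
The same downloads can also be scripted with the huggingface_hub Python library; a minimal sketch mirroring the commands above (file names are examples):

```python
# Minimal sketch using huggingface_hub (installed by the pip command above).
from huggingface_hub import hf_hub_download, snapshot_download

# Single-file quant:
hf_hub_download(
    repo_id="legraphista/glm-4-9b-chat-GGUF",
    filename="glm-4-9b-chat.Q8_0.gguf",
    local_dir=".",
)

# Split quant: fetch every chunk in the folder
snapshot_download(
    repo_id="legraphista/glm-4-9b-chat-GGUF",
    allow_patterns=["glm-4-9b-chat.Q8_0/*"],
    local_dir=".",
)
```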


Inference

Simple chat template

```
[gMASK]<sop><|user|>
{user_prompt}<|assistant|>
{assistant_response}<|user|>
{next_user_prompt}
```

Chat template with system prompt

```
[gMASK]<sop><|system|>
{system_prompt}<|user|>
{user_prompt}<|assistant|>
{assistant_response}<|user|>
{next_user_prompt}
```
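
Both templates are plain string concatenations, so they can be produced programmatically. A minimal sketch in Python, assuming a simple list-of-dicts message format (the helper name and message schema are illustrative, not part of the model card):

```python
# Minimal sketch: build a GLM-4 prompt string following the templates above.
# The function name and message format are illustrative assumptions.
def build_glm4_prompt(messages, system_prompt=None):
    prompt = "[gMASK]<sop>"
    if system_prompt:
        prompt += f"<|system|>\n{system_prompt}"
    for msg in messages:
        role = "<|user|>" if msg["role"] == "user" else "<|assistant|>"
        prompt += f"{role}\n{msg['content']}"
    # Trailing assistant tag so the model generates the next response
    return prompt + "<|assistant|>\n"

print(build_glm4_prompt(
    [{"role": "user", "content": "Hello!"}],
    system_prompt="You are a helpful assistant.",
))
```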

Llama.cpp

```
llama.cpp/main -m glm-4-9b-chat.Q8_0.gguf --color -i -p "prompt here (according to the chat template)"
```
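
The same GGUF can also be loaded from Python via the llama-cpp-python bindings. A minimal sketch, assuming a llama-cpp-python build new enough to include GLM support (see the llama.cpp PR linked above); the model path and sampling settings are examples:

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# Assumes the installed build supports GLM-4 GGUFs.
from llama_cpp import Llama

llm = Llama(model_path="glm-4-9b-chat.Q8_0.gguf", n_ctx=4096)

# Prompt follows the simple chat template above
prompt = "[gMASK]<sop><|user|>\nWhat is a GGUF file?<|assistant|>\n"
out = llm(prompt, max_tokens=256, stop=["<|user|>"])
print(out["choices"][0]["text"])
```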

FAQ

Why is the IMatrix not applied everywhere?

According to this investigation, it appears that only the lower quantizations benefit from the imatrix input (as per the HellaSwag results).

How do I merge a split GGUF?

1. Make sure you have `gguf-split` available
   - To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
   - Download the appropriate zip for your system from the latest release
   - Unzip the archive and you should be able to find `gguf-split`
2. Locate your GGUF chunks folder (ex: `glm-4-9b-chat.Q8_0`)
3. Run `gguf-split --merge glm-4-9b-chat.Q8_0/glm-4-9b-chat.Q8_0-00001-of-XXXXX.gguf glm-4-9b-chat.Q8_0.gguf` (a Python sketch follows this list)
   - Make sure to point `gguf-split` to the first chunk of the split.
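
For scripted pipelines, the merge can also be driven from Python. A minimal sketch, assuming `gguf-split` is on your PATH (the XXXXX chunk count is a placeholder, as above):

```python
# Minimal sketch: shell out to gguf-split to merge chunks, as in step 3.
# Assumes gguf-split is on PATH; XXXXX is a placeholder for the chunk count.
import subprocess

first_chunk = "glm-4-9b-chat.Q8_0/glm-4-9b-chat.Q8_0-00001-of-XXXXX.gguf"
merged_output = "glm-4-9b-chat.Q8_0.gguf"

# gguf-split locates the remaining chunks from the first one
subprocess.run(["gguf-split", "--merge", first_chunk, merged_output], check=True)
```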

Got a suggestion? Ping me @legraphista!
