Model Description
base_model: THUDM/glm-4-9b-chat
inference: false
language:
- zh
- en
tags:
- glm
- chatglm
- thudm
- quantized
- GGUF
- quantization
- static
- 16bit
- 8bit
- 6bit
- 5bit
- 4bit
- 3bit
- 2bit
glm-4-9b-chat-GGUF
Llama.cpp static quantization of THUDM/glm-4-9b-chat
Original Model: THUDM/glm-4-9b-chat
Original dtype: BF16 (bfloat16)
Quantized with: llama.cpp PR #6999 (https://github.com/ggerganov/llama.cpp/pull/6999)
IMatrix dataset: here
Files
Common Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
|---|---|---|---|---|---|
| glm-4-9b-chat.Q8_0.gguf | Q8_0 | 9.99GB | ✅ Available | ⚪ Static | No |
All Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
|---|---|---|---|---|---|
| glm-4-9b-chat.BF16.gguf | BF16 | 18.81GB | ✅ Available | ⚪ Static | No |
Downloading using huggingface-cli
If you do not have huggingface-cli installed:
pip install -U "huggingface_hub[cli]"
Download the specific file you want:
huggingface-cli download legraphista/glm-4-9b-chat-GGUF --include "glm-4-9b-chat.Q8_0.gguf" --local-dir ./
If the model file is big, it has been split into multiple files. In order to download them all to a local folder, run:
huggingface-cli download legraphista/glm-4-9b-chat-GGUF --include "glm-4-9b-chat.Q8_0/*" --local-dir ./
See the FAQ below for merging split GGUFs.
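If you prefer Python over the CLI, the same download can be done with the huggingface_hub library. A minimal sketch, assuming the package is installed (the local_dir value is just an example):

```python
from huggingface_hub import hf_hub_download

# Download a single quant from this repo; repo_id and filename
# mirror the huggingface-cli command above.
path = hf_hub_download(
    repo_id="legraphista/glm-4-9b-chat-GGUF",
    filename="glm-4-9b-chat.Q8_0.gguf",
    local_dir="./",
)
print("Downloaded to", path)
```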
Inference
Simple chat template
[gMASK]<sop><|user|>
{user_prompt}<|assistant|>
{assistant_response}<|user|>
{next_user_prompt}
Chat template with system prompt
[gMASK]<sop><|system|>
{system_prompt}<|user|>
{user_prompt}<|assistant|>
{assistant_response}<|user|>
{next_user_prompt}
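To make the templates concrete, here is a small illustrative sketch (plain string formatting; build_prompt is a hypothetical helper, not part of this repo) of assembling a single-turn prompt in the format shown above:

```python
def build_prompt(user_prompt: str, system_prompt: str = "") -> str:
    """Assemble a single-turn GLM-4 prompt following the templates above."""
    prompt = "[gMASK]<sop>"
    if system_prompt:
        prompt += f"<|system|>\n{system_prompt}"
    prompt += f"<|user|>\n{user_prompt}<|assistant|>\n"
    return prompt


print(build_prompt("What is GGUF?", system_prompt="You are a helpful assistant."))
```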
Llama.cpp
llama.cpp/main -m glm-4-9b-chat.Q8_0.gguf --color -i -p "prompt here (according to the chat template)"
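If you would rather call the model from Python than from the llama.cpp binary, the llama-cpp-python bindings can load the same GGUF file. A minimal sketch, assuming llama-cpp-python is installed (the model path, context size, and sampling settings are examples, not recommendations):

```python
from llama_cpp import Llama

# Load the quantized model; adjust the path and context length to your setup.
llm = Llama(model_path="glm-4-9b-chat.Q8_0.gguf", n_ctx=4096)

# Prompt formatted according to the simple chat template above.
prompt = "[gMASK]<sop><|user|>\nWhat is GGUF?<|assistant|>\n"

out = llm(prompt, max_tokens=256, stop=["<|user|>"])
print(out["choices"][0]["text"])
```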
FAQ
Why is the IMatrix not applied everywhere?
According to this investigation, it appears that lower quantizations are the only ones that benefit from the imatrix input (as per the HellaSwag results).

How do I merge a split GGUF?
- Make sure you have gguf-split available
  - To get hold of gguf-split, navigate to https://github.com/ggerganov/llama.cpp/releases
  - Download the appropriate zip for your system from the latest release
  - Unzip the archive and you should be able to find gguf-split
- Locate your GGUF chunks folder (ex: glm-4-9b-chat.Q8_0)
- Run gguf-split --merge glm-4-9b-chat.Q8_0/glm-4-9b-chat.Q8_0-00001-of-XXXXX.gguf glm-4-9b-chat.Q8_0.gguf
  - Make sure to point gguf-split to the first chunk of the split.
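If you want to script the merge instead of running it by hand, here is a small sketch along the same lines, assuming gguf-split is on your PATH and the chunks follow the naming scheme above:

```python
import glob
import subprocess

# gguf-split must be pointed at the first chunk of the split.
chunks = sorted(glob.glob("glm-4-9b-chat.Q8_0/glm-4-9b-chat.Q8_0-00001-of-*.gguf"))
if not chunks:
    raise FileNotFoundError("No split chunks found; check the folder name.")

# Merge the chunks into a single GGUF file.
subprocess.run(
    ["gguf-split", "--merge", chunks[0], "glm-4-9b-chat.Q8_0.gguf"],
    check=True,
)
```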
Got a suggestion? Ping me @legraphista!
GGUF File List
| Filename | Size | Download |
|---|---|---|
| glm-4-9b-chat.BF16.gguf (Recommended) | 17.52 GB | Download |
| glm-4-9b-chat.FP16.gguf | 17.52 GB | Download |
| glm-4-9b-chat.IQ3_M.gguf | 4.48 GB | Download |
| glm-4-9b-chat.IQ3_S.gguf | 4.27 GB | Download |
| glm-4-9b-chat.IQ3_XS.gguf | 4.13 GB | Download |
| glm-4-9b-chat.IQ4_NL.gguf | 5.13 GB | Download |
| glm-4-9b-chat.IQ4_XS.gguf | 4.94 GB | Download |
| glm-4-9b-chat.Q2_K.gguf | 3.72 GB | Download |
| glm-4-9b-chat.Q3_K.gguf | 4.72 GB | Download |
| glm-4-9b-chat.Q3_K_L.gguf | 4.92 GB | Download |
| glm-4-9b-chat.Q3_K_S.gguf | 4.27 GB | Download |
| glm-4-9b-chat.Q4_K.gguf | 5.82 GB | Download |
| glm-4-9b-chat.Q4_K_S.gguf | 5.36 GB | Download |
| glm-4-9b-chat.Q5_K.gguf | 6.65 GB | Download |
| glm-4-9b-chat.Q5_K_S.gguf | 6.23 GB | Download |
| glm-4-9b-chat.Q6_K.gguf | 7.69 GB | Download |
| glm-4-9b-chat.Q8_0.gguf | 9.31 GB | Download |