

---
quantized_by: ubergarm
pipeline_tag: text-generation
base_model: zai-org/GLM-5
base_model_relation: quantized
license: mit
tags:
- imatrix
- conversational
- glm_moe_dsa
- ik_llama.cpp
language:
- en
- zh
---

# ik_llama.cpp imatrix Quantizations of zai-org/GLM-5

NOTE: ik_llama.cpp can also run your existing GGUFs from bartowski, unsloth, mradermacher, etc., if you want to try it out before downloading my quants.

Some of ik's new quants are supported by the Nexesenex/croco.cpp fork of KoboldCPP, which has Windows builds. Also check for ik_llama.cpp Windows builds by Thireus.

These quants provide best-in-class perplexity for the given memory footprint.

## Big Thanks

Shout out to Wendell and the Level1Techs crew, the community forums, and the YouTube channel! BIG thanks for providing BIG hardware expertise and access to run these experiments and make these great quants available to the community!!!

Also thanks to all the folks in the quanting and inferencing community on BeaverAI Club Discord and on r/LocalLLaMA for tips and tricks helping each other run, test, and benchmark all the fun new models! Thanks to huggingface for hosting all these big quants!

Finally, I really appreciate the support from aifoundry.org, so check out their open-source RISC-V based solutions!

## Quant Collection

Perplexity computed against wiki.test.raw. (lower is "better")

*Perplexity Chart*

These two are just test quants for baseline perplexity comparison and are not available for download here:

- `BF16` 1404.406 GiB (16.003 BPW)
  - PPL over 565 chunks for n_ctx=512 = 2.6298 +/- 0.01396
- `Q8_0` 746.302 GiB (8.504 BPW)
  - PPL over 565 chunks for n_ctx=512 = 2.6303 +/- 0.01398
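For reference, BPW is just total file bits divided by parameter count. A quick sketch of that arithmetic, where the ~754B parameter count is my back-of-envelope inference from the BF16 numbers above, not an official figure:

```bash
# Hypothetical BPW check: bits in the file divided by parameter count.
# The parameter count is approximated from the BF16 row (1404.406 GiB at 16.003 BPW).
awk 'BEGIN {
  gib    = 1404.406               # model size in GiB
  params = 754e9                  # approximate parameter count (assumption)
  bits   = gib * 1073741824 * 8   # GiB -> bytes -> bits
  printf "%.2f BPW\n", bits / params
}'
# β†’ 16.00 BPW
```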

NOTE: The first split file is much smaller on purpose, as it contains only metadata; it's fine!

## IQ3_KS 320.216 GiB (3.649 BPW)

PPL over 565 chunks for n_ctx=512 = 2.7839 +/- 0.01508

NOTE: Actual RAM/VRAM use will be about 314.07 GiB, despite the larger reported model size, because the blk.78/indexer/nextn tensors go unused.

<details>

<summary>πŸ‘ˆ Secret Recipe</summary>

```bash
#!/usr/bin/env bash

custom="
# 79 Repeating Layers [0-78]

# Attention [0-78]
blk\..*\.attn_k_b\.weight=q8_0
blk\..*\.attn_v_b\.weight=q8_0
blk\..*\.attn_kv_a_mqa\.weight=q8_0
blk\..*\.attn_q_a\.weight=iq6_k
blk\..*\.attn_q_b\.weight=iq6_k
blk\..*\.attn_output\.weight=iq6_k

# First 3 Dense Layers [0-2]
blk\..*\.ffn_down\.weight=iq5_ks
blk\..*\.ffn_(gate|up)\.weight=iq5_ks

# Shared Expert Layers [3-78]
blk\..*\.ffn_down_shexp\.weight=iq5_ks
blk\..*\.ffn_(gate|up)_shexp\.weight=iq5_ks

# Routed Experts Layers [3-78]
# NOTE: blk.78.* NOT implemented at time of quantizing so no imatrix data available
blk\.(78)\.ffn_down_exps\.weight=iq5_ks
blk\.(78)\.ffn_(gate|up)_exps\.weight=iq5_ks
blk\..*\.ffn_down_exps\.weight=iq4_ks
blk\..*\.ffn_(gate|up)_exps\.weight=iq3_ks

# Lightning indexer tensors [0-78]
# NOTE: indexer.* NOT implemented at time of quantizing so no imatrix data available
blk\..*\.indexer\.proj\.weight=q8_0
blk\..*\.indexer\.attn_k\.weight=q8_0
blk\..*\.indexer\.attn_q_b\.weight=iq6_k

# NextN MTP Layer [78]
# NOTE: nextn.* NOT implemented at time of quantizing so no imatrix data available
blk\..*\.nextn\.eh_proj\.weight=q8_0

# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/data/models/ubergarm/GLM-5-GGUF/imatrix-GLM-5-BF16.dat \
    /mnt/data/models/ubergarm/GLM-5-GGUF/GLM-256x22B-5-BF16-00001-of-00033.gguf \
    /mnt/data/models/ubergarm/GLM-5-GGUF/GLM-5-IQ3_KS.gguf \
    IQ3_KS \
    128
```

</details>
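The grep/sed pipeline in these recipes just strips the comment lines and joins the remaining rules with commas, since `--custom-q` expects a single comma-separated string. A standalone demo with a hypothetical two-rule string:

```bash
# Two example rules plus a comment line (hypothetical, for illustration only).
custom="
# this comment line is dropped by grep
blk\..*\.ffn_down\.weight=iq5_ks
token_embd\.weight=iq4_k
"

# Drop comments, then collapse newlines into commas and trim stray ones.
custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

echo "$custom"
# β†’ blk\..*\.ffn_down\.weight=iq5_ks,token_embd\.weight=iq4_k
```

Note that `sed -z` (null-data mode) is a GNU extension; it lets one substitution see the whole multi-line string at once.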

## IQ2_KL 261.988 GiB (2.985 BPW)

PPL over 565 chunks for n_ctx=512 = 3.0217 +/- 0.01651

NOTE: Actual RAM/VRAM use will be about 255.84 GiB, despite the larger reported model size, because the blk.78/indexer/nextn tensors go unused.

<details>

<summary>πŸ‘ˆ Secret Recipe</summary>

```bash
#!/usr/bin/env bash

custom="
# 79 Repeating Layers [0-78]

# Attention [0-78]
blk\..*\.attn_k_b\.weight=q8_0
blk\..*\.attn_v_b\.weight=q8_0
blk\..*\.attn_kv_a_mqa\.weight=q8_0
blk\..*\.attn_q_a\.weight=iq6_k
blk\..*\.attn_q_b\.weight=iq6_k
blk\..*\.attn_output\.weight=iq6_k

# First 3 Dense Layers [0-2]
blk\..*\.ffn_down\.weight=iq5_ks
blk\..*\.ffn_(gate|up)\.weight=iq5_ks

# Shared Expert Layers [3-78]
blk\..*\.ffn_down_shexp\.weight=iq5_ks
blk\..*\.ffn_(gate|up)_shexp\.weight=iq5_ks

# Routed Experts Layers [3-78]
# NOTE: blk.78.* NOT implemented at time of quantizing so no imatrix data available
blk\.(78)\.ffn_down_exps\.weight=iq5_ks
blk\.(78)\.ffn_(gate|up)_exps\.weight=iq5_ks
blk\..*\.ffn_down_exps\.weight=iq3_ks
blk\..*\.ffn_(gate|up)_exps\.weight=iq2_kl

# Lightning indexer tensors [0-78]
# NOTE: indexer.* NOT implemented at time of quantizing so no imatrix data available
blk\..*\.indexer\.proj\.weight=q8_0
blk\..*\.indexer\.attn_k\.weight=q8_0
blk\..*\.indexer\.attn_q_b\.weight=iq6_k

# NextN MTP Layer [78]
# NOTE: nextn.* NOT implemented at time of quantizing so no imatrix data available
blk\..*\.nextn\.eh_proj\.weight=q8_0

# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/data/models/ubergarm/GLM-5-GGUF/imatrix-GLM-5-BF16.dat \
    /mnt/data/models/ubergarm/GLM-5-GGUF/GLM-256x22B-5-BF16-00001-of-00033.gguf \
    /mnt/data/models/ubergarm/GLM-5-GGUF/GLM-5-IQ2_KL.gguf \
    IQ2_KL \
    128
```

</details>

## smol-IQ2_KS 205.738 GiB (2.344 BPW)

PPL over 565 chunks for n_ctx=512 = 3.7792 +/- 0.02183

NOTE: Actual RAM/VRAM use will be about 200 GiB, despite the larger reported model size, because the blk.78/indexer/nextn tensors go unused.

<details>

<summary>πŸ‘ˆ Secret Recipe</summary>

```bash
#!/usr/bin/env bash

custom="
# 79 Repeating Layers [0-78]

# Attention [0-78]
blk\..*\.attn_k_b\.weight=q8_0
blk\..*\.attn_v_b\.weight=q8_0
blk\..*\.attn_kv_a_mqa\.weight=q8_0
blk\..*\.attn_q_a\.weight=iq6_k
blk\..*\.attn_q_b\.weight=iq6_k
blk\..*\.attn_output\.weight=iq6_k

# First 3 Dense Layers [0-2]
blk\..*\.ffn_down\.weight=iq5_ks
blk\..*\.ffn_(gate|up)\.weight=iq5_ks

# Shared Expert Layers [3-78]
blk\..*\.ffn_down_shexp\.weight=iq5_ks
blk\..*\.ffn_(gate|up)_shexp\.weight=iq5_ks

# Routed Experts Layers [3-78]
# NOTE: blk.78.* NOT implemented at time of quantizing so no imatrix data available
blk\.(78)\.ffn_down_exps\.weight=iq5_ks
blk\.(78)\.ffn_(gate|up)_exps\.weight=iq5_ks
blk\..*\.ffn_down_exps\.weight=iq2_ks
blk\..*\.ffn_(gate|up)_exps\.weight=iq2_ks

# Lightning indexer tensors [0-78]
# NOTE: indexer.* NOT implemented at time of quantizing so no imatrix data available
blk\..*\.indexer\.proj\.weight=q8_0
blk\..*\.indexer\.attn_k\.weight=q8_0
blk\..*\.indexer\.attn_q_b\.weight=iq6_k

# NextN MTP Layer [78]
# NOTE: nextn.* NOT implemented at time of quantizing so no imatrix data available
blk\..*\.nextn\.eh_proj\.weight=q8_0

# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/data/models/ubergarm/GLM-5-GGUF/imatrix-GLM-5-BF16.dat \
    /mnt/data/models/ubergarm/GLM-5-GGUF/GLM-256x22B-5-BF16-00001-of-00033.gguf \
    /mnt/data/models/ubergarm/GLM-5-GGUF/GLM-5-smol-IQ2_KS.gguf \
    IQ2_KS \
    128
```

</details>

## smol-IQ1_KT 169.190 GiB (1.928 BPW)

PPL over 565 chunks for n_ctx=512 = 4.6032 +/- 0.02768

NOTE: Actual RAM/VRAM use will be about 163.046 GiB, despite the larger reported model size, because the blk.78/indexer/nextn tensors go unused.

<details>

<summary>πŸ‘ˆ Secret Recipe</summary>

```bash
#!/usr/bin/env bash

custom="
# 79 Repeating Layers [0-78]

# Attention [0-78]
blk\..*\.attn_k_b\.weight=q8_0
blk\..*\.attn_v_b\.weight=q8_0
blk\..*\.attn_kv_a_mqa\.weight=q8_0
blk\..*\.attn_q_a\.weight=iq6_k
blk\..*\.attn_q_b\.weight=iq6_k
blk\..*\.attn_output\.weight=iq6_k

# First 3 Dense Layers [0-2]
blk\..*\.ffn_down\.weight=iq5_ks
blk\..*\.ffn_(gate|up)\.weight=iq5_ks

# Shared Expert Layers [3-78]
blk\..*\.ffn_down_shexp\.weight=iq5_ks
blk\..*\.ffn_(gate|up)_shexp\.weight=iq5_ks

# Routed Experts Layers [3-78]
# NOTE: blk.78.* NOT implemented at time of quantizing so no imatrix data available
blk\.(78)\.ffn_down_exps\.weight=iq5_ks
blk\.(78)\.ffn_(gate|up)_exps\.weight=iq5_ks
blk\..*\.ffn_down_exps\.weight=iq1_kt
blk\..*\.ffn_(gate|up)_exps\.weight=iq1_kt

# Lightning indexer tensors [0-78]
# NOTE: indexer.* NOT implemented at time of quantizing so no imatrix data available
blk\..*\.indexer\.proj\.weight=q8_0
blk\..*\.indexer\.attn_k\.weight=q8_0
blk\..*\.indexer\.attn_q_b\.weight=iq6_k

# NextN MTP Layer [78]
# NOTE: nextn.* NOT implemented at time of quantizing so no imatrix data available
blk\..*\.nextn\.eh_proj\.weight=q8_0

# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/data/models/ubergarm/GLM-5-GGUF/imatrix-GLM-5-BF16.dat \
    /mnt/data/models/ubergarm/GLM-5-GGUF/GLM-256x22B-5-BF16-00001-of-00033.gguf \
    /mnt/data/models/ubergarm/GLM-5-GGUF/GLM-5-smol-IQ1_KT.gguf \
    IQ1_KT \
    128
```

</details>

## Quick Start

```bash
# Clone and checkout
$ git clone https://github.com/ikawrakow/ik_llama.cpp
$ cd ik_llama.cpp

# Build for hybrid CPU+CUDA
$ cmake -B build -DCMAKE_BUILD_TYPE=Release -DGGML_CUDA=ON
$ cmake --build build --config Release -j $(nproc)
```

## Download Quants

```bash
$ pip install huggingface_hub
$ hf download --local-dir ./GLM-5-GGUF/ --include="smol-IQ2_KS/*.gguf" ubergarm/GLM-5-GGUF
```

## Hybrid CPU and Single GPU

TODO: or look at the ubergarm/Kimi-K2.5-GGUF model card quick start (as it is also an MLA arch).

## Multi GPU Full Offload

TODO: or look at the ubergarm/Kimi-K2.5-GGUF model card quick start (as it is also an MLA arch).

## CPU-Only

```bash
numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-server \
    --model "$model" \
    --alias ubergarm/GLM-5 \
    -ger \
    --merge-qkv \
    --ctx-size 131072 \
    -ctk q8_0 \
    -mla 3 \
    --parallel 1 \
    --threads 96 \
    --threads-batch 128 \
    --numa numactl \
    --host 127.0.0.1 \
    --port 8080 \
    --no-mmap \
    --jinja
```

I tested that even the smol-IQ1_KT works with opencode! You can also bring your own template with `--chat-template-file myTemplate.jinja`.
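Once llama-server is up, you can smoke-test it over its OpenAI-compatible chat endpoint. A minimal sketch, assuming the host/port from the CPU-Only example above; the prompt and `/tmp/req.json` path are just placeholders:

```bash
# Hypothetical smoke test for a llama-server listening on 127.0.0.1:8080.
cat > /tmp/req.json <<'EOF'
{
  "model": "ubergarm/GLM-5",
  "messages": [
    {"role": "user", "content": "Say hello in one short sentence."}
  ],
  "max_tokens": 32
}
EOF

curl -s http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d @/tmp/req.json \
  || echo "server not reachable (start llama-server first)"
```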

## References
