

---
quantized_by: ubergarm
pipeline_tag: text-generation
base_model: moonshotai/Kimi-K2.5
license: other
license_name: modified-mit
license_link: https://huggingface.co/moonshotai/Kimi-K2.5/blob/main/LICENSE
base_model_relation: quantized
tags:
  - mla
  - imatrix
  - conversational
  - ik_llama.cpp
---

# imatrix Quantization of moonshotai/Kimi-K2.5

The quants in this collection REQUIRE the ik_llama.cpp fork to support ik's latest SOTA quants and optimizations! Do not download these big files and expect them to run on mainline vanilla llama.cpp, ollama, LM Studio, KoboldCpp, etc.! (Or grab the "full quality" Q4_X from AesSedai, which runs on both ik and mainline (link below). Thank you AesSedai for your efforts on Kimi-K2.5-GGUF!!!)

NOTE: ik_llama.cpp can also run your existing GGUFs from bartowski, unsloth, mradermacher, etc., if you want to try it out before downloading my quants.

Some of ik's new quants are supported by the Nexesenex/croco.cpp fork of KoboldCPP, with Windows builds for CUDA 12.9. Also check for Windows builds by Thireus here, which have been CUDA 12.8.

These quants provide best-in-class perplexity for the given memory footprint.

## Big Thanks

Shout out to Wendell and the Level1Techs crew, the community Forums, and YouTube Channel! BIG thanks for providing BIG hardware expertise and access to run these experiments and make these great quants available to the community!!!

Also thanks to all the folks in the quanting and inferencing community on BeaverAI Club Discord and on r/LocalLLaMA for tips and tricks helping each other run, test, and benchmark all the fun new models!

Finally, I really appreciate all the support from aifoundry.org so check out their open source RISC-V solutions, and of course huggingface for hosting all these big quants!

## Quant Collection

Perplexity computed against wiki.test.raw (lower is "better").

*(Perplexity chart)*

You can get the "full quality" Q4_X from AesSedai/Kimi-K2.5-GGUF:

- Q4_X 543.617 GiB (4.549 BPW)
  - Final estimate: PPL over 568 chunks for n_ctx=512 = 1.8235 +/- 0.00698

## IQ3_K 459.432 GiB (3.845 BPW)

Final estimate: PPL over 568 chunks for n_ctx=512 = 1.8775 +/- 0.00727

Note: for this quant only, the imatrix was applied just to the ffn_(gate|up)_exps tensors, which are iq3_k.

πŸ‘ˆ Secret Recipe

```bash
#!/usr/bin/env bash

custom="
# Attention [0-60] (GPU)
blk\..*\.attn_k_b\.weight=q8_0
blk\..*\.attn_v_b\.weight=q8_0

# Balance of attn tensors
blk\..*\.attn_kv_a_mqa\.weight=q8_0
blk\..*\.attn_q_a\.weight=q8_0
blk\..*\.attn_q_b\.weight=q8_0
blk\..*\.attn_output\.weight=q8_0

# First Single Dense Layer [0] (GPU)
blk\..*\.ffn_down\.weight=q8_0
blk\..*\.ffn_(gate|up)\.weight=q8_0

# Shared Expert [1-60] (GPU)
blk\..*\.ffn_down_shexp\.weight=q8_0
blk\..*\.ffn_(gate|up)_shexp\.weight=q8_0

# Routed Experts [1-60] (CPU)
# NOTE: imatrix is only applied to the iq3_k tensors for this recipe
blk\..*\.ffn_down_exps\.weight=q4_0
blk\..*\.ffn_(gate|up)_exps\.weight=iq3_k

token_embd\.weight=iq6_k
output\.weight=iq6_k
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/data/models/ubergarm/Kimi-K2.5-GGUF/imatrix-Kimi-K2.5-Q4_X.dat \
    --include-weights ffn_gate_exps \
    --include-weights ffn_up_exps \
    /mnt/data/models/ubergarm/Kimi-K2.5-GGUF/Kimi-K2.5-384x14B-BF16-00001-of-00046.gguf \
    /mnt/data/models/ubergarm/Kimi-K2.5-GGUF/Kimi-K2.5-IQ3_K.gguf \
    IQ3_K \
    128
```
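The grep/sed pipeline in the recipe is what flattens the commented, multi-line string into the single comma-separated argument that `--custom-q` expects. A minimal standalone sketch (using just two of the rules above) shows the transformation:

```shell
# Sketch: comment lines are dropped, remaining rules joined by commas.
custom="
# attention tensors
blk\..*\.attn_k_b\.weight=q8_0
blk\..*\.attn_v_b\.weight=q8_0
"
custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
echo "$custom"
```

This prints `blk\..*\.attn_k_b\.weight=q8_0,blk\..*\.attn_v_b\.weight=q8_0`, which is the format llama-quantize parses for per-tensor overrides.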

## smol-IQ3_KS 388.258 GiB (3.249 BPW)

Final estimate: PPL over 568 chunks for n_ctx=512 = 1.9562 +/- 0.00772

πŸ‘ˆ Secret Recipe

```bash
#!/usr/bin/env bash

custom="
# Attention [0-60] (GPU)
blk\..*\.attn_k_b\.weight=q8_0
blk\..*\.attn_v_b\.weight=q8_0

# Balance of attn tensors
blk\..*\.attn_kv_a_mqa\.weight=q8_0
blk\..*\.attn_q_a\.weight=q8_0
blk\..*\.attn_q_b\.weight=q8_0
blk\..*\.attn_output\.weight=q8_0

# First Single Dense Layer [0] (GPU)
blk\..*\.ffn_down\.weight=q8_0
blk\..*\.ffn_(gate|up)\.weight=q8_0

# Shared Expert [1-60] (GPU)
blk\..*\.ffn_down_shexp\.weight=q8_0
blk\..*\.ffn_(gate|up)_shexp\.weight=q8_0

# Routed Experts [1-60] (CPU)
blk\..*\.ffn_down_exps\.weight=iq3_ks
blk\..*\.ffn_(gate|up)_exps\.weight=iq3_ks

token_embd\.weight=iq4_k
output\.weight=iq6_k
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/data/models/ubergarm/Kimi-K2.5-GGUF/imatrix-Kimi-K2.5-Q4_X.dat \
    /mnt/data/models/ubergarm/Kimi-K2.5-GGUF/Kimi-K2.5-384x14B-BF16-00001-of-00046.gguf \
    /mnt/data/models/ubergarm/Kimi-K2.5-GGUF/Kimi-K2.5-smol-IQ3_KS.gguf \
    IQ3_KS \
    128
```

## smol-IQ2_KL 329.195 GiB (2.755 BPW)

Final estimate: PPL over 568 chunks for n_ctx=512 = 2.1813 +/- 0.00899

πŸ‘ˆ Secret Recipe

```bash
#!/usr/bin/env bash

custom="
# Attention [0-60] (GPU)
blk\..*\.attn_k_b\.weight=q8_0
blk\..*\.attn_v_b\.weight=q8_0

# Balance of attn tensors
blk\..*\.attn_kv_a_mqa\.weight=q8_0
blk\..*\.attn_q_a\.weight=q8_0
blk\..*\.attn_q_b\.weight=q8_0
blk\..*\.attn_output\.weight=q8_0

# First Single Dense Layer [0] (GPU)
blk\..*\.ffn_down\.weight=q8_0
blk\..*\.ffn_(gate|up)\.weight=q8_0

# Shared Expert [1-60] (GPU)
blk\..*\.ffn_down_shexp\.weight=q8_0
blk\..*\.ffn_(gate|up)_shexp\.weight=q8_0

# Routed Experts [1-60] (CPU)
blk\..*\.ffn_down_exps\.weight=iq2_kl
blk\..*\.ffn_(gate|up)_exps\.weight=iq2_kl

token_embd\.weight=iq4_k
output\.weight=iq6_k
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/data/models/ubergarm/Kimi-K2.5-GGUF/imatrix-Kimi-K2.5-Q4_X.dat \
    /mnt/data/models/ubergarm/Kimi-K2.5-GGUF/Kimi-K2.5-384x14B-BF16-00001-of-00046.gguf \
    /mnt/data/models/ubergarm/Kimi-K2.5-GGUF/Kimi-K2.5-smol-IQ2_KL.gguf \
    IQ2_KL \
    128
```

## smol-IQ2_KS 270.133 GiB (2.261 BPW)

Final estimate: PPL over 568 chunks for n_ctx=512 = 2.6209 +/- 0.01158

πŸ‘ˆ Secret Recipe

```bash
#!/usr/bin/env bash

custom="
# Attention [0-60] (GPU)
blk\..*\.attn_k_b\.weight=q8_0
blk\..*\.attn_v_b\.weight=q8_0

# Balance of attn tensors
blk\..*\.attn_kv_a_mqa\.weight=q8_0
blk\..*\.attn_q_a\.weight=q8_0
blk\..*\.attn_q_b\.weight=q8_0
blk\..*\.attn_output\.weight=q8_0

# First Single Dense Layer [0] (GPU)
blk\..*\.ffn_down\.weight=q8_0
blk\..*\.ffn_(gate|up)\.weight=q8_0

# Shared Expert [1-60] (GPU)
blk\..*\.ffn_down_shexp\.weight=q8_0
blk\..*\.ffn_(gate|up)_shexp\.weight=q8_0

# Routed Experts [1-60] (CPU)
blk\..*\.ffn_down_exps\.weight=iq2_ks
blk\..*\.ffn_(gate|up)_exps\.weight=iq2_ks

token_embd\.weight=iq4_k
output\.weight=iq6_k
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/data/models/ubergarm/Kimi-K2.5-GGUF/imatrix-Kimi-K2.5-Q4_X.dat \
    /mnt/data/models/ubergarm/Kimi-K2.5-GGUF/Kimi-K2.5-384x14B-BF16-00001-of-00046.gguf \
    /mnt/data/models/ubergarm/Kimi-K2.5-GGUF/Kimi-K2.5-smol-IQ2_KS.gguf \
    IQ2_KS \
    128
```

## smol-IQ1_KT 218.936 GiB (1.832 BPW)

Final estimate: PPL over 568 chunks for n_ctx=512 = 3.2450 +/- 0.01540

only for the desperate

Also keep in mind that KT trellis quants generally give slower token generation due to a likely compute bottleneck when running on CPU, but if it is all you can fit, then well... They are fast on GPU, similar to EXL3.

πŸ‘ˆ Secret Recipe

```bash
#!/usr/bin/env bash

custom="
# Attention [0-60] (GPU)
blk\..*\.attn_k_b\.weight=q8_0
blk\..*\.attn_v_b\.weight=q8_0

# Balance of attn tensors
blk\..*\.attn_kv_a_mqa\.weight=q8_0
blk\..*\.attn_q_a\.weight=q8_0
blk\..*\.attn_q_b\.weight=q8_0
blk\..*\.attn_output\.weight=q8_0

# First Single Dense Layer [0] (GPU)
blk\..*\.ffn_down\.weight=q8_0
blk\..*\.ffn_(gate|up)\.weight=q8_0

# Shared Expert [1-60] (GPU)
blk\..*\.ffn_down_shexp\.weight=q8_0
blk\..*\.ffn_(gate|up)_shexp\.weight=q8_0

# Routed Experts [1-60] (CPU)
blk\..*\.ffn_down_exps\.weight=iq1_kt
blk\..*\.ffn_(gate|up)_exps\.weight=iq1_kt

token_embd\.weight=iq4_k
output\.weight=iq6_k
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/data/models/ubergarm/Kimi-K2.5-GGUF/imatrix-Kimi-K2.5-Q4_X.dat \
    /mnt/data/models/ubergarm/Kimi-K2.5-GGUF/Kimi-K2.5-384x14B-BF16-00001-of-00046.gguf \
    /mnt/data/models/ubergarm/Kimi-K2.5-GGUF/Kimi-K2.5-smol-IQ1_KT.gguf \
    IQ1_KT \
    128
```
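To put the perplexity figures in perspective, a quick shell sketch computes each quant's relative PPL increase over the Q4_X baseline (values copied from the estimates above):

```shell
# PPL values copied from the "Final estimate" lines above.
baseline=1.8235
for entry in "IQ3_K 1.8775" "smol-IQ3_KS 1.9562" "smol-IQ2_KL 2.1813" \
             "smol-IQ2_KS 2.6209" "smol-IQ1_KT 3.2450"; do
  set -- $entry
  awk -v b="$baseline" -v n="$1" -v p="$2" \
    'BEGIN { printf "%-12s PPL %.4f  (+%.2f%% vs Q4_X)\n", n, p, (p - b) / b * 100 }'
done
```

For example, IQ3_K comes out roughly 3% above the Q4_X baseline despite being about 84 GiB smaller.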

## Quick Start

```bash
# Clone and checkout
$ git clone https://github.com/ikawrakow/ik_llama.cpp
$ cd ik_llama.cpp

# Build for hybrid CPU+CUDA (or set -DGGML_CUDA=OFF for CPU only)
$ cmake -B build -DCMAKE_BUILD_TYPE=Release -DGGML_CUDA=ON
$ cmake --build build --config Release -j $(nproc)
```

### Hybrid CPU+GPU

```bash
echo TODO
```

### Run CPU-Only on a single NUMA node, e.g. NPS1

```bash
numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-server \
    --model "$model" \
    --alias ubergarm/Kimi-K2.5-GGUF \
    --merge-qkv \
    --ctx-size 131072 \
    -ctk q8_0 \
    -mla 3 \
    --parallel 1 \
    --threads 96 \
    --threads-batch 128 \
    --numa numactl \
    --host 127.0.0.1 \
    --port 8080 \
    --no-mmap \
    --jinja \
    --special \
    --chat-template-file ./models/templates/Kimi-K2-Thinking.jinja \
    --validate-quants
```

NOTE: I still need to read up on what people are doing for the chat template. To get my pydantic-ai tool-calling test working I had to use the old Kimi-K2-Thinking template with --jinja --special. Open a comment and share your tool-use configuration. You might have luck with jukofyork's suggestion here.
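Once the server is up, it speaks llama.cpp's OpenAI-compatible HTTP API. A minimal chat request might look like this (hypothetical prompt; the curl call is commented out since it needs the server actually running):

```shell
# Hypothetical request payload for the server started above.
payload='{
  "model": "ubergarm/Kimi-K2.5-GGUF",
  "messages": [{"role": "user", "content": "Hello!"}]
}'
# Assumes the standard /v1/chat/completions route on the host/port used above:
# curl -s http://127.0.0.1:8080/v1/chat/completions \
#   -H "Content-Type: application/json" -d "$payload"
echo "$payload"
```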

## Q4_X Patch

jukofyork's patch below is applied before running llama-quantize to make the "full quality" Q4_X, which you can download from AesSedai. I didn't upload the Q4_X I made and used for imatrix, but it should be similar. I used the original llm-compressor safetensors and AesSedai's PR (linked in the references below) to create it with mainline llama.cpp. End users don't need to do anything; this patch is only required during the quantization step.

https://github.com/ggml-org/llama.cpp/pull/17064#issuecomment-3521098057

```diff
diff --git a/ggml/src/ggml-quants.c b/ggml/src/ggml-quants.c
index 20a9831b..05feef4f 100644
--- a/ggml/src/ggml-quants.c
+++ b/ggml/src/ggml-quants.c
@@ -689,7 +689,7 @@ void quantize_row_q4_0_ref(const float * restrict x, block_q4_0 * restrict y, in
             }
         }

-        const float d  = max / -8;
+        const float d  = max / -7;
         const float id = d ? 1.0f/d : 0.0f;

         y[i].d = GGML_FP32_TO_FP16(d);
```
## References
