---
base_model: moonshotai/Kimi-Linear-48B-A3B-Instruct
library_name: transformers
language:
- en
tags:
- pytorch
license: mit
pipeline_tag: text-generation
quantized_by: ymcki
---

📋 Model Description

This is a repo of experimental GGUFs for the backend-agnostic
implementation of Kimi-Linear model support, which requires the llama.cpp
build from this repo. You can git clone it and compile it locally:

```
git clone https://github.com/ymcki/llama.cpp --branch Kimi-Linear
cd llama.cpp
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release -j 6
```

If you have enough VRAM, you can run it purely on your graphics card:

```
./build/bin/llama-cli -m ~/Kimi-Linear-48B-A3B-Instruct-GGUF/Kimi-Linear-48B-A3B-Instruct.Q4_K_M.gguf -c 8192 -ngl 100 --mmap
```

Otherwise, you can load only the shared experts and the KV cache onto your graphics card and keep the rest in CPU RAM:

```
./build/bin/llama-cli -m ~/Kimi-Linear-48B-A3B-Instruct-GGUF/Kimi-Linear-48B-A3B-Instruct.Q4_K_M.gguf -c 8192 -cmoe -ngl 100 --mmap
```

I am only going to make GGUFs without an imatrix and GGUFs with an imatrix based on
c4enja_imatrix.txt for better Japanese performance, as bartowski and unsloth
will make GGUFs with an English imatrix anyway.
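For reference, an imatrix quant like these can be produced with the standard llama.cpp tools along the following lines. This is a sketch, not the exact recipe used for this repo; the F16 source GGUF and output file names are illustrative.

```shell
# Compute an importance matrix from the Japanese/English calibration text.
./build/bin/llama-imatrix -m Kimi-Linear-48B-A3B-Instruct.F16.gguf \
    -f c4enja_imatrix.txt -o imatrix.dat

# Quantize the F16 GGUF to Q4_K_M, weighting the quantization error
# by the importance matrix.
./build/bin/llama-quantize --imatrix imatrix.dat \
    Kimi-Linear-48B-A3B-Instruct.F16.gguf \
    Kimi-Linear-48B-A3B-Instruct-jp-imatrix.Q4_K_M.gguf Q4_K_M
```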

Base perplexity for the F16 GGUF is 7.291970 ± 0.048577.
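The perplexity and KL-divergence numbers below can be reproduced with llama.cpp's perplexity tool, roughly as follows. The test-corpus file name is illustrative; this repo's exact evaluation setup may differ.

```shell
# Save the base-model logits once from the F16 GGUF...
./build/bin/llama-perplexity -m Kimi-Linear-48B-A3B-Instruct.F16.gguf \
    -f wiki.test.raw --kl-divergence-base logits.f16.dat

# ...then score each quant against those saved logits.
./build/bin/llama-perplexity \
    -m Kimi-Linear-48B-A3B-Instruct-jp-imatrix.Q4_K_M.gguf \
    --kl-divergence-base logits.f16.dat --kl-divergence
```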

It seems the MLA KV cache can only be run at F16, probably because MLA is itself a kind of compression. You can use this table to see how much context you can run on a single 24GB or 32GB card.
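The context limits in the table follow from the fact that an MLA-style KV cache stores one compressed latent vector plus a small RoPE key part per token per full-attention layer, at 2 bytes per element for F16. A rough sketch of the arithmetic, with placeholder dimensions (NOT the real Kimi-Linear values; read the actual numbers from the GGUF metadata):

```python
# Hypothetical MLA KV-cache size estimate. The defaults below
# (layer count, kv_lora_rank, RoPE head dim) are placeholders for
# illustration only, not Kimi-Linear's real architecture parameters.
def mla_kv_cache_bytes(n_ctx: int, n_layers: int = 12,
                       kv_lora_rank: int = 512, rope_dim: int = 64,
                       bytes_per_elem: int = 2) -> int:
    # One (kv_lora_rank + rope_dim) F16 vector per token per layer.
    return n_ctx * n_layers * (kv_lora_rank + rope_dim) * bytes_per_elem

print(mla_kv_cache_bytes(131072) / 2**30)  # GiB at 128k context → 1.6875
```

Whatever the real dimensions are, the cache grows linearly with context, which is why halving the quant size roughly doubles the context that fits alongside the weights.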

| Quant Type | imatrix | File Size | Perplexity | KL Divergence | Description |
| ---------- | ------- | ----------| ---------- | ------------- | ----------- |
| Q5_K_M | c4enja_imatrix.txt | 34.87GB | 7.115874 ± 0.047587 | 0.074066 ± 0.001537 | Good |
| Q5_K_M | None | 34.87GB | 7.133672 ± 0.047741 | 0.074684 ± 0.001535 | Good. Slightly worse than imatrix |
| Q4_K_M | c4enja_imatrix.txt | 29.70GB | 7.147482 ± 0.047851 | 0.081894 ± 0.001521 | Good. Can run 128k context on a single 32GB card. |
| Q4_K_M | None | 29.70GB | 7.172188 ± 0.048107 | 0.083700 ± 0.00152 | Good. Slightly worse than imatrix |
| MXFP4_MOE | None | 27.21GB | 7.179840 ± 0.047966 | 0.088789 ± 0.001544 | Good. Can run 240k context on a single 32GB card. |
| MXFP4_MOE | c4enja_imatrix.txt | 27.21GB | 7.179840 ± 0.047966 | 0.088789 ± 0.001544 | Good. Same as the no-imatrix version. |
| IQ4_XS | c4enja_imatrix.txt | 26.27GB | 7.208724 ± 0.048490 | 0.088246 ± 0.001528 | Good. Can run 304k context on a single 32GB card. |
| IQ4_NL | c4enja_imatrix.txt | 27.79GB | 7.209342 ± 0.048412 | 0.087678 ± 0.001532 | Doesn't make sense compared to MXFP4_MOE |
| IQ3_M | c4enja_imatrix.txt | 21.55GB | 7.368516 ± 0.048425 | 0.113435 ± 0.001457 | Quite good. Can run 96k context on a single 24GB card. |
| IQ3_S | c4enja_imatrix.txt | 21.33GB | 7.448991 ± 0.049167 | 0.119987 ± 0.001466 | Quite good. Can run 112k context on a single 24GB card. |
| IQ3_XS | c4enja_imatrix.txt | 20.17GB | 7.534649 ± 0.049461 | 0.129645 ± 0.001448 | Quite good. Can run 176k context on a single 24GB card. |
| Q3_K_S | c4enja_imatrix.txt | 21.33GB | 7.557247 ± 0.051236 | 0.131708 ± 0.001521 | Quite good. Can run 112k context on a single 24GB card. |
| Q3_K_S | None | 21.33GB | 7.632887 ± 0.051792 | 0.146355 ± 0.001534 | Quite good but worse than imatrix. Good for CPU use. |
| IQ3_XXS | c4enja_imatrix.txt | 18.99GB | 7.780732 ± 0.052592 | 0.164925 ± 0.001537 | Not so good but can run 240k context on a single 24GB card. |
| IQ2_M | c4enja_imatrix.txt | 16.13GB | 8.207663 ± 0.054957 | 0.224437 ± 0.001536 | Slightly better than Q2_K but you can run 400k context on a single 24GB card. |
| Q2_K | c4enja_imatrix.txt | 18.03GB | 8.295144 ± 0.057566 | 0.221437 ± 0.001617 | So-so but you can run 288k context on a single 24GB card. Good for performance evaluation. |
| Q2_K | None | 18.03GB | 8.648201 ± 0.059234 | 0.267082 ± 0.001659 | Worse than imatrix |
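As a sanity check on the table, the file sizes translate into effective bits per weight. A minimal sketch, assuming the listed sizes are decimal GB and the model has roughly 48B parameters (both assumptions, inferred from the table and the model name):

```python
# Rough effective bits-per-weight for a quant. Assumes decimal GB
# and ~48B parameters -- both are assumptions for illustration.
def bits_per_weight(file_size_gb: float, n_params: float = 48e9) -> float:
    return file_size_gb * 1e9 * 8 / n_params

print(round(bits_per_weight(29.70), 2))  # Q4_K_M → 4.95
print(round(bits_per_weight(16.13), 2))  # IQ2_M
```

This makes it easy to see, for example, why IQ2_M frees up roughly 13.5GB relative to Q5_K_M for extra KV cache.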

As expected, the imatrix has no effect on MXFP4_MOE. According to this reddit thread, its perplexity
is about the same as IQ4_XS but its file size is about 6% bigger. Here, its perplexity is better than IQ4_XS's, which makes it a viable option.

📂 GGUF File List

| 📁 Filename | 📦 Size |
| ----------- | ------- |
| Kimi-Linear-48B-A3B-Instruct-jp-imatrix.IQ3_M.gguf | 20.07 GB |
| Kimi-Linear-48B-A3B-Instruct-jp-imatrix.IQ4_XS.gguf | 24.47 GB |
| Kimi-Linear-48B-A3B-Instruct-jp-imatrix.Q2_K.gguf | 16.79 GB |
| Kimi-Linear-48B-A3B-Instruct-jp-imatrix.Q4_K_M.gguf (recommended) | 27.66 GB |
| Kimi-Linear-48B-A3B-Instruct-jp-imatrix.Q5_K_M.gguf | 32.47 GB |
| Kimi-Linear-48B-A3B-Instruct.MXFP4_MOE.gguf | 25.34 GB |