---
quantized_by: ubergarm
pipeline_tag: text-generation
base_model: MiniMaxAI/MiniMax-M2.5
base_model_relation: quantized
license_name: modified-mit
license_link: https://github.com/MiniMax-AI/MiniMax-M2.5/blob/main/LICENSE
tags:
- imatrix
- conversational
- minimax_m2
- ik_llama.cpp
---

# ik_llama.cpp imatrix Quantizations of MiniMaxAI/MiniMax-M2.5
*NOTE*: ik_llama.cpp can also run your existing GGUFs from bartowski, unsloth, mradermacher, etc., if you want to try it out before downloading my quants.
Some of ik's new quants are supported by the Nexesenex/croco.cpp fork of KoboldCPP, which has Windows builds for CUDA 12.9. Also check for Windows builds by Thireus here, which have been CUDA 12.8.
These quants provide best-in-class perplexity for the given memory footprint.
## Big Thanks
Shout out to Wendell and the Level1Techs crew, the community Forums, and YouTube Channel! BIG thanks for providing BIG hardware expertise and access to run these experiments and make these great quants available to the community!!!

Also thanks to all the folks in the quanting and inferencing community on BeaverAI Club Discord and on r/LocalLLaMA for tips and tricks helping each other run, test, and benchmark all the fun new models! Thanks to huggingface for hosting all these big quants!

Finally, I really appreciate the support from aifoundry.org, so check out their open source RISC-V based solutions!
## Quant Collection
Perplexity computed against wiki.test.raw (lower is "better"); a sketch of the measurement command follows the list below. These first two are just test quants for baseline perplexity comparison and are not available for download here:
- `BF16` 426.060 GiB (16.003 BPW)
  - PPL over 552 chunks for n_ctx=512 = 8.3386 +/- 0.06651
- `Q8_0` 226.431 GiB (8.505 BPW)
  - PPL over 552 chunks for n_ctx=512 = 8.3590 +/- 0.06673
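For reference, this is roughly how those numbers are produced with `llama-perplexity`; a minimal sketch only, with placeholder paths for the model and the standard wiki.test.raw file:

```bash
# Sketch: compute perplexity in 512-token chunks (n_ctx=512).
# Paths are placeholders; point --model at your local GGUF.
./build/bin/llama-perplexity \
    --model /path/to/MiniMax-M2.5-IQ5_K.gguf \
    -f wiki.test.raw \
    -c 512 \
    --threads 32
```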
NOTE: The first split file is much smaller on purpose, as it only contains metadata; it's fine!
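If you want to verify that yourself, one option (an assumption on my part, not something you need for inference) is the `gguf-dump` script from the `gguf` Python package, which prints metadata without reading tensor data:

```bash
# Sketch: print only the metadata of the first split file.
pip install gguf
gguf-dump --no-tensors /path/to/MiniMax-M2.5-IQ5_K-00001-of-0000N.gguf
```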
### IQ5_K 157.771 GiB (5.926 BPW)
- PPL over 552 chunks for n_ctx=512 = 8.4860 +/- 0.06815

```bash
#!/usr/bin/env bash

custom="
# 61 Repeating Layers [0-61]

# Attention [0-61] GPU
blk\..*\.attn_q.*=q8_0
blk\..*\.attn_k.*=q8_0
blk\..*\.attn_v.*=q8_0
blk\..*\.attn_output.*=q8_0

# Routed Experts Layers [0-61] CPU
blk\..*\.ffn_down_exps\.weight=iq6_k
blk\..*\.ffn_(gate|up)_exps\.weight=iq5_k

# Non-Repeating Layers
token_embd\.weight=q8_0
output\.weight=q8_0
"

# Strip the comment lines and join the remaining regex=type pairs
# into the single comma-separated list that --custom-q expects.
custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/data/models/ubergarm/MiniMax-M2.5-GGUF/imatrix-MiniMax-M2.5-BF16.dat \
    /mnt/data/models/ubergarm/MiniMax-M2.5-GGUF/MiniMax-M2.5-256x4.9B-BF16-00001-of-00010.gguf \
    /mnt/data/models/ubergarm/MiniMax-M2.5-GGUF/MiniMax-M2.5-IQ5_K.gguf \
    IQ5_K \
    128
```
### IQ4_NL 121.386 GiB (4.559 BPW)
- PPL over 552 chunks for n_ctx=512 = 8.4419 +/- 0.06757

This one is *not* mainline compatible because it uses:
- token_embd@iq4_k (instead of mainline q4_K)
- output@iq6_k (instead of mainline q6_K)

It gives a nice little boost in perplexity at basically the same size, so I opted to use the newer types. It is technically a smol-IQ4_NL, but it's fine.
```bash
#!/usr/bin/env bash

custom="
# 61 Repeating Layers [0-61]

# Attention [0-61] GPU
blk\..*\.attn_q.*=q8_0
blk\..*\.attn_k.*=q8_0
blk\..*\.attn_v.*=q8_0
blk\..*\.attn_output.*=q8_0

# Routed Experts Layers [0-61] CPU
blk\..*\.ffn_down_exps\.weight=iq4_nl
blk\..*\.ffn_(gate|up)_exps\.weight=iq4_nl

# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/data/models/ubergarm/MiniMax-M2.5-GGUF/imatrix-MiniMax-M2.5-BF16.dat \
    /mnt/data/models/ubergarm/MiniMax-M2.5-GGUF/MiniMax-M2.5-256x4.9B-BF16-00001-of-00010.gguf \
    /mnt/data/models/ubergarm/MiniMax-M2.5-GGUF/MiniMax-M2.5-IQ4_NL.gguf \
    IQ4_NL \
    128
```
### mainline-IQ4_NL 121.234 GiB (4.554 BPW)
- PPL over 552 chunks for n_ctx=512 = 8.4528 +/- 0.06759

This one *is* mainline compatible because it uses:
- token_embd@q4_K
- output@q6_K

This is the one to use for Vulkan, and probably Mac, but it might need more than 128GB RAM, hrm...
```bash
#!/usr/bin/env bash

custom="
# 61 Repeating Layers [0-61]

# Attention [0-61] GPU
blk\..*\.attn_q.*=q8_0
blk\..*\.attn_k.*=q8_0
blk\..*\.attn_v.*=q8_0
blk\..*\.attn_output.*=q8_0

# Routed Experts Layers [0-61] CPU
blk\..*\.ffn_down_exps\.weight=iq4_nl
blk\..*\.ffn_(gate|up)_exps\.weight=iq4_nl

# Non-Repeating Layers
token_embd\.weight=q4_K
output\.weight=q6_K
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/data/models/ubergarm/MiniMax-M2.5-GGUF/imatrix-MiniMax-M2.5-BF16.dat \
    /mnt/data/models/ubergarm/MiniMax-M2.5-GGUF/MiniMax-M2.5-256x4.9B-BF16-00001-of-00010.gguf \
    /mnt/data/models/ubergarm/MiniMax-M2.5-GGUF/MiniMax-M2.5-mainline-IQ4_NL.gguf \
    IQ4_NL \
    128
```
### IQ4_XS 114.842 GiB (4.314 BPW)
- PPL over 552 chunks for n_ctx=512 = 8.5702 +/- 0.06901

This is the only quant in this collection that is compatible with mainline llama.cpp (ik_llama.cpp can run all of them). It's technically a smol-IQ4_XS, but it's fine.
```bash
#!/usr/bin/env bash

custom="
# 61 Repeating Layers [0-61]

# Attention [0-61] GPU
blk\..*\.attn_q.*=q8_0
blk\..*\.attn_k.*=q8_0
blk\..*\.attn_v.*=q8_0
blk\..*\.attn_output.*=q8_0

# Routed Experts Layers [0-61] CPU
blk\..*\.ffn_down_exps\.weight=iq4_xs
blk\..*\.ffn_(gate|up)_exps\.weight=iq4_xs

# Non-Repeating Layers
token_embd\.weight=q4_K
output\.weight=q6_K
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/data/models/ubergarm/MiniMax-M2.5-GGUF/imatrix-MiniMax-M2.5-BF16.dat \
    /mnt/data/models/ubergarm/MiniMax-M2.5-GGUF/MiniMax-M2.5-256x4.9B-BF16-00001-of-00010.gguf \
    /mnt/data/models/ubergarm/MiniMax-M2.5-GGUF/MiniMax-M2.5-IQ4_XS.gguf \
    IQ4_XS \
    128
```
### smol-IQ4_KSS 108.671 GiB (4.082 BPW)
- PPL over 552 chunks for n_ctx=512 = 8.5815 +/- 0.06888

```bash
#!/usr/bin/env bash

custom="
# 61 Repeating Layers [0-61]

# Attention [0-61] GPU
blk\..*\.attn_q.*=q8_0
blk\..*\.attn_k.*=q8_0
blk\..*\.attn_v.*=q8_0
blk\..*\.attn_output.*=q8_0

# Routed Experts Layers [0-61] CPU
blk\..*\.ffn_down_exps\.weight=iq4_kss
blk\..*\.ffn_(gate|up)_exps\.weight=iq4_kss

# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/data/models/ubergarm/MiniMax-M2.5-GGUF/imatrix-MiniMax-M2.5-BF16.dat \
    /mnt/data/models/ubergarm/MiniMax-M2.5-GGUF/MiniMax-M2.5-256x4.9B-BF16-00001-of-00010.gguf \
    /mnt/data/models/ubergarm/MiniMax-M2.5-GGUF/MiniMax-M2.5-smol-IQ4_KSS.gguf \
    IQ4_KSS \
    128
```
### smol-IQ3_KS 87.237 GiB (3.277 BPW)
- PPL over 552 chunks for n_ctx=512 = 8.7539 +/- 0.07075

```bash
#!/usr/bin/env bash

custom="
# 61 Repeating Layers [0-61]

# Attention [0-61] GPU
blk\..*\.attn_q.*=q8_0
blk\..*\.attn_k.*=q8_0
blk\..*\.attn_v.*=q8_0
blk\..*\.attn_output.*=q8_0

# Routed Experts Layers [0-61] CPU
blk\..*\.ffn_down_exps\.weight=iq3_ks
blk\..*\.ffn_(gate|up)_exps\.weight=iq3_ks

# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/data/models/ubergarm/MiniMax-M2.5-GGUF/imatrix-MiniMax-M2.5-BF16.dat \
    /mnt/data/models/ubergarm/MiniMax-M2.5-GGUF/MiniMax-M2.5-256x4.9B-BF16-00001-of-00010.gguf \
    /mnt/data/models/ubergarm/MiniMax-M2.5-GGUF/MiniMax-M2.5-smol-IQ3_KS.gguf \
    IQ3_KS \
    128
```
### IQ2_KS 69.800 GiB (2.622 BPW)
- PPL over 552 chunks for n_ctx=512 = 9.6827 +/- 0.07972

```bash
#!/usr/bin/env bash

custom="
# 61 Repeating Layers [0-61]

# Attention [0-61] GPU
blk\..*\.attn_q.*=q8_0
blk\..*\.attn_k.*=q8_0
blk\..*\.attn_v.*=q8_0
blk\..*\.attn_output.*=q8_0

# Routed Experts Layers [0-61] CPU
blk\..*\.ffn_down_exps\.weight=iq3_ks
blk\..*\.ffn_(gate|up)_exps\.weight=iq2_ks

# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/data/models/ubergarm/MiniMax-M2.5-GGUF/imatrix-MiniMax-M2.5-BF16.dat \
    /mnt/data/models/ubergarm/MiniMax-M2.5-GGUF/MiniMax-M2.5-256x4.9B-BF16-00001-of-00010.gguf \
    /mnt/data/models/ubergarm/MiniMax-M2.5-GGUF/MiniMax-M2.5-IQ2_KS.gguf \
    IQ2_KS \
    128
```
## Quick Start
```bash
# Clone and checkout
$ git clone https://github.com/ikawrakow/ik_llama.cpp
$ cd ik_llama.cpp

# Build for hybrid CPU+CUDA
$ cmake -B build -DCMAKE_BUILD_TYPE=Release -DGGML_CUDA=ON
$ cmake --build build --config Release -j $(nproc)
```
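If you only plan to run CPU-only inference (see the section further down), my assumption is the same build works with CUDA simply left off; an untested sketch:

```bash
# Sketch: CPU-only build, no CUDA toolkit required.
$ cmake -B build -DCMAKE_BUILD_TYPE=Release
$ cmake --build build --config Release -j $(nproc)
```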
```bash
# Download desired quant
$ pip install huggingface_hub
$ hf download --local-dir ./MiniMax-M2.5-GGUF/ \
    --include="smol-IQ3_KS/*.gguf" \
    ubergarm/MiniMax-M2.5-GGUF
```
### Hybrid CPU and Single GPU
TODO: for now, look at my Step-3.5-Flash model card for a rough example using `--cpu-moe` or `--n-cpu-moe XX`, etc.
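In the meantime, here is an untested sketch of what that means, not a verified command: it offloads everything with `-ngl 99`, then keeps all routed experts on CPU with `--cpu-moe` (swap in `--n-cpu-moe XX` to keep some expert layers on the GPU if you have spare VRAM); the model filename and thread count are placeholders:

```bash
# Untested sketch: attention and shared layers on one GPU, all
# routed experts on CPU. Point $model at the first split of your
# downloaded quant (placeholder filename below).
model=./MiniMax-M2.5-GGUF/smol-IQ3_KS/MiniMax-M2.5-smol-IQ3_KS-00001-of-0000N.gguf
./build/bin/llama-server \
    --model "$model" \
    --alias ubergarm/MiniMax-M2.5 \
    --ctx-size 32768 \
    -ctk q8_0 -ctv q8_0 \
    -ngl 99 \
    --cpu-moe \
    -ub 4096 -b 4096 \
    --threads 16 \
    --host 127.0.0.1 \
    --port 8080 \
    --no-mmap \
    --jinja
```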
### Multi GPU Full Offload, 128k Context, 96GB VRAM!!!
```bash
model=MiniMax-M2.5-IQ2_KS-00001-of-00003.gguf

GLIBCXX_REGEX_STATE_LIMIT=1000000 \
CUDA_VISIBLE_DEVICES="0,1" \
./build/bin/llama-server \
    --model "$model" \
    --alias ubergarm/MiniMax-M2.5 \
    -khad -ctk q6_0 -ctv q8_0 \
    -c 131072 \
    -ger \
    -sm graph \
    -ngl 99 \
    -ub 4096 -b 4096 \
    -ts 47,48 \
    --threads 1 \
    --host 127.0.0.1 \
    --port 8080 \
    --no-mmap \
    --jinja
```
### CPU-Only
```bash
numactl -N "$SOCKET" -m "$SOCKET" \
./build/bin/llama-server \
    --model "$model" \
    --alias ubergarm/MiniMax-M2.5 \
    --ctx-size 65536 \
    -ger \
    --merge-qkv \
    -ctk q8_0 -ctv q8_0 \
    -ub 4096 -b 4096 \
    --parallel 1 \
    --threads 96 \
    --threads-batch 128 \
    --numa numactl \
    --host 127.0.0.1 \
    --port 8080 \
    --no-mmap \
    --jinja
```
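Once the server is up (with either recipe above), it should answer OpenAI-compatible chat requests on the host/port from those commands; a quick smoke-test sketch, assuming the defaults above:

```bash
# Sketch: hit the llama-server chat completions endpoint.
curl http://127.0.0.1:8080/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
      "model": "ubergarm/MiniMax-M2.5",
      "messages": [{"role": "user", "content": "Hello, who are you?"}],
      "max_tokens": 128
    }'
```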
My own early testing with opencode suggests that even the smol-IQ3_KS is working okay with tool calling etc.!

For tool use you can always bring your own template with `--chat-template-file myTemplate.jinja` and might need `--special`, etc.
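For example, a rough sketch of wiring that up; `myTemplate.jinja` stands in for whatever template file you supply, and whether `--special` is actually needed depends on that template's tool-call tokens:

```bash
# Sketch: serve with a user-supplied chat template for tool use.
# myTemplate.jinja is a placeholder for your own template file.
./build/bin/llama-server \
    --model "$model" \
    --chat-template-file myTemplate.jinja \
    --special \
    --jinja \
    --host 127.0.0.1 \
    --port 8080
```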