---
quantized_by: ubergarm
pipeline_tag: text-generation
base_model: stepfun-ai/Step-3.5-Flash
base_model_relation: quantized
license: apache-2.0
tags:
- imatrix
- conversational
- ik_llama.cpp
- step3p5
---
# ik_llama.cpp imatrix Quantizations of stepfun-ai/Step-3.5-Flash
NOTE: `ik_llama.cpp` can also run your existing GGUFs from bartowski, unsloth, mradermacher, etc. if you want to try it out before downloading my quants.
Some of ik's new quants are supported by the Nexesenex/croco.cpp fork of KoboldCPP, which has Windows builds for CUDA 12.9. Also check for Windows builds by Thireus here, which have been built against CUDA 12.8.
These quants provide best-in-class perplexity for the given memory footprint.
## Big Thanks
Shout out to Wendell and the Level1Techs crew, the community Forums, and YouTube Channel! BIG thanks for providing BIG hardware expertise and access to run these experiments and make these great quants available to the community!!!

Also thanks to all the folks in the quanting and inferencing community on the BeaverAI Club Discord and on r/LocalLLaMA for tips and tricks, helping each other run, test, and benchmark all the fun new models!

Thanks to huggingface for hosting all these big quants!
Finally, I really appreciate the support from aifoundry.org so check out their open source RISC-V based solutions!
## Quant Collection
Perplexity computed against wiki.test.raw. (lower is "better")

These two are just test quants for baseline perplexity comparison and are not available for download here:
BF16 366.952 GiB (16.004 BPW)
- PPL over 561 chunks for n_ctx=512 = 2.4169 +/- 0.01107

Q8_0 195.031 GiB (8.506 BPW)
- PPL over 561 chunks for n_ctx=512 = 2.4188 +/- 0.01109
NOTE: The first split file is much smaller on purpose, as it only contains metadata; it's fine!
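Since file size in GiB and BPW imply a fixed total weight count, you can estimate any quant's size from its BPW alone. A minimal sanity-check sketch using only numbers from this card (the BF16 row as baseline):

```bash
# Estimate quant sizes from BPW using the BF16 row as baseline.
# size_GiB = n_weights * BPW / 8 / 2^30, so sizes scale linearly with BPW.
awk 'BEGIN {
  gib = 2^30;
  n_weights = 366.952 * gib * 8 / 16.004;    # from the BF16 row above
  printf "total weights ~= %.0f B\n", n_weights / 1e9;
  printf "IQ5_K estimate ~= %.1f GiB\n", n_weights * 5.970 / 8 / gib;
}'
```

The IQ5_K estimate lands within about 0.01 GiB of the actual 136.891 GiB figure listed below.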
IQ5_K 136.891 GiB (5.970 BPW)
PPL over 561 chunks for n_ctx=512 = 2.4304 +/- 0.01117

```bash
#!/usr/bin/env bash

custom="
# 45 Repeating Layers [0-44]

# Attention [0-44] GPU
blk\..*\.attn_gate.*=q8_0
blk\..*\.attn_q.*=q8_0
blk\..*\.attn_k.*=q8_0
blk\..*\.attn_v.*=q8_0
blk\..*\.attn_output.*=q8_0

# First 3 Dense Layers [0-2] GPU
blk\..*\.ffn_down\.weight=q8_0
blk\..*\.ffn_(gate|up)\.weight=q8_0

# Shared Expert Layers [3-44] GPU
blk\..*\.ffn_down_shexp\.weight=q8_0
blk\..*\.ffn_(gate|up)_shexp\.weight=q8_0

# Routed Experts Layers [3-44] CPU
blk\..*\.ffn_down_exps\.weight=iq6_k
blk\..*\.ffn_(gate|up)_exps\.weight=iq5_k

# Non-Repeating Layers
token_embd\.weight=q8_0
output\.weight=q8_0
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/data/models/ubergarm/Step-3.5-Flash-GGUF/imatrix-Step-3.5-Flash-BF16.dat \
    /mnt/data/models/ubergarm/Step-3.5-Flash-GGUF/Step-3.5-Flash-288x7.4B-BF16-00001-of-00009.gguf \
    /mnt/data/models/ubergarm/Step-3.5-Flash-GGUF/Step-3.5-Flash-IQ5_K.gguf \
    IQ5_K \
    128
```
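The grep/sed pipeline in the recipe above just strips the comment lines and collapses the remaining newline-separated rules into the single comma-separated string that `--custom-q` expects. A standalone sketch of that transformation, using a hypothetical two-rule recipe (note: GNU sed's `-z` is assumed):

```bash
#!/usr/bin/env bash
# Collapse a newline-separated recipe into the comma-separated
# form that llama-quantize --custom-q expects.
custom="
# comments like this line get dropped
blk\..*\.ffn_down_exps\.weight=iq6_k
blk\..*\.ffn_(gate|up)_exps\.weight=iq5_k
"
custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'   # newlines -> commas, trim both ends
)
echo "$custom"
# prints: blk\..*\.ffn_down_exps\.weight=iq6_k,blk\..*\.ffn_(gate|up)_exps\.weight=iq5_k
```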
IQ4_XS 100.53 GiB (4.38 BPW)
PPL over 561 chunks for n_ctx=512 = 2.5181 +/- 0.01178

NOTE: This mainline compatible quant does not use imatrix.

```bash
#!/usr/bin/env bash

custom="
# 45 Repeating Layers [0-44]

# Attention [0-44] GPU
blk\..*\.attn_gate.*=q8_0
blk\..*\.attn_q.*=q8_0
blk\..*\.attn_k.*=q8_0
blk\..*\.attn_v.*=q8_0
blk\..*\.attn_output.*=q8_0

# First 3 Dense Layers [0-2] GPU
blk\..*\.ffn_down\.weight=q8_0
blk\..*\.ffn_(gate|up)\.weight=q8_0

# Shared Expert Layers [3-44] GPU
blk\..*\.ffn_down_shexp\.weight=q8_0
blk\..*\.ffn_(gate|up)_shexp\.weight=q8_0

# Routed Experts Layers [3-44] CPU
blk\..*\.ffn_down_exps\.weight=iq4_xs
blk\..*\.ffn_(gate|up)_exps\.weight=iq4_xs

# Non-Repeating Layers
token_embd\.weight=q4_K
output\.weight=q6_K
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    /mnt/data/models/ubergarm/Step-3.5-Flash-GGUF/Step-3.5-Flash-288x7.4B-BF16-00001-of-00009.gguf \
    /mnt/data/models/ubergarm/Step-3.5-Flash-GGUF/Step-3.5-Flash-IQ4_XS.gguf \
    IQ4_XS \
    128
```
smol-IQ4_KSS 94.080 GiB (4.103 BPW)
PPL over 561 chunks for n_ctx=512 = 2.5705 +/- 0.01211

```bash
#!/usr/bin/env bash

custom="
# 45 Repeating Layers [0-44]

# Attention [0-44] GPU
blk\..*\.attn_gate.*=iq6_k
blk\..*\.attn_q.*=iq6_k
blk\..*\.attn_k.*=iq6_k
blk\..*\.attn_v.*=iq6_k
blk\..*\.attn_output.*=iq6_k

# First 3 Dense Layers [0-2] GPU
blk\..*\.ffn_down\.weight=iq6_k
blk\..*\.ffn_(gate|up)\.weight=iq6_k

# Shared Expert Layers [3-44] GPU
blk\..*\.ffn_down_shexp\.weight=iq6_k
blk\..*\.ffn_(gate|up)_shexp\.weight=iq6_k

# Routed Experts Layers [3-44] CPU
blk\..*\.ffn_down_exps\.weight=iq4_kss
blk\..*\.ffn_(gate|up)_exps\.weight=iq4_kss

# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/data/models/ubergarm/Step-3.5-Flash-GGUF/imatrix-Step-3.5-Flash-BF16.dat \
    /mnt/data/models/ubergarm/Step-3.5-Flash-GGUF/Step-3.5-Flash-288x7.4B-BF16-00001-of-00009.gguf \
    /mnt/data/models/ubergarm/Step-3.5-Flash-GGUF/Step-3.5-Flash-smol-IQ4_KSS.gguf \
    IQ4_KSS \
    128
```
smol-IQ3_KS 75.934 GiB (3.312 BPW)
PPL over 561 chunks for n_ctx=512 = 2.7856 +/- 0.01365

```bash
#!/usr/bin/env bash

custom="
# 45 Repeating Layers [0-44]

# Attention [0-44] GPU
blk\..*\.attn_gate.*=iq6_k
blk\..*\.attn_q.*=iq6_k
blk\..*\.attn_k.*=iq6_k
blk\..*\.attn_v.*=iq6_k
blk\..*\.attn_output.*=iq6_k

# First 3 Dense Layers [0-2] GPU
blk\..*\.ffn_down\.weight=iq6_k
blk\..*\.ffn_(gate|up)\.weight=iq6_k

# Shared Expert Layers [3-44] GPU
blk\..*\.ffn_down_shexp\.weight=iq6_k
blk\..*\.ffn_(gate|up)_shexp\.weight=iq6_k

# Routed Experts Layers [3-44] CPU
blk\..*\.ffn_down_exps\.weight=iq3_ks
blk\..*\.ffn_(gate|up)_exps\.weight=iq3_ks

# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/data/models/ubergarm/Step-3.5-Flash-GGUF/imatrix-Step-3.5-Flash-BF16.dat \
    /mnt/data/models/ubergarm/Step-3.5-Flash-GGUF/Step-3.5-Flash-288x7.4B-BF16-00001-of-00009.gguf \
    /mnt/data/models/ubergarm/Step-3.5-Flash-GGUF/Step-3.5-Flash-smol-IQ3_KS.gguf \
    IQ3_KS \
    128
```
smol-IQ2_KS 53.786 GiB (2.346 BPW)
PPL over 561 chunks for n_ctx=512 = 4.2597 +/- 0.02425

```bash
#!/usr/bin/env bash

custom="
# 45 Repeating Layers [0-44]

# Attention [0-44] GPU
blk\..*\.attn_gate.*=iq6_k
blk\..*\.attn_q.*=iq6_k
blk\..*\.attn_k.*=iq6_k
blk\..*\.attn_v.*=iq6_k
blk\..*\.attn_output.*=iq6_k

# First 3 Dense Layers [0-2] GPU
blk\..*\.ffn_down\.weight=iq6_k
blk\..*\.ffn_(gate|up)\.weight=iq6_k

# Shared Expert Layers [3-44] GPU
blk\..*\.ffn_down_shexp\.weight=iq6_k
blk\..*\.ffn_(gate|up)_shexp\.weight=iq6_k

# Routed Experts Layers [3-44] CPU
blk\..*\.ffn_down_exps\.weight=iq2_ks
blk\..*\.ffn_(gate|up)_exps\.weight=iq2_ks

# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/data/models/ubergarm/Step-3.5-Flash-GGUF/imatrix-Step-3.5-Flash-BF16.dat \
    /mnt/data/models/ubergarm/Step-3.5-Flash-GGUF/Step-3.5-Flash-288x7.4B-BF16-00001-of-00009.gguf \
    /mnt/data/models/ubergarm/Step-3.5-Flash-GGUF/Step-3.5-Flash-smol-IQ2_KS.gguf \
    IQ2_KS \
    128
```
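One way to read the perplexity numbers above is relative to the BF16 baseline (2.4169): a ratio near 1.0 means nearly lossless. A quick sketch using only the values reported on this card:

```bash
# Perplexity of each quant relative to the BF16 baseline (values from above).
awk 'BEGIN {
  base = 2.4169;
  printf "IQ5_K        %.4f\n", 2.4304 / base;
  printf "IQ4_XS       %.4f\n", 2.5181 / base;
  printf "smol-IQ4_KSS %.4f\n", 2.5705 / base;
  printf "smol-IQ3_KS  %.4f\n", 2.7856 / base;
  printf "smol-IQ2_KS  %.4f\n", 4.2597 / base;
}'
```

IQ5_K sits within about 0.6% of BF16 perplexity, while smol-IQ2_KS trades roughly 76% higher perplexity for a 53.786 GiB footprint.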
## Quick Start
```bash
# Clone and checkout
$ git clone https://github.com/ikawrakow/ik_llama.cpp
$ cd ik_llama.cpp

# Build for hybrid CPU+CUDA
$ cmake -B build -DCMAKE_BUILD_TYPE=Release -DGGML_CUDA=ON
$ cmake --build build --config Release -j $(nproc)
```
Run full offload on 2+ GPUs with `-sm graph` Graph Parallel:

- https://github.com/ikawrakow/ik_llama.cpp/pull/1236
- https://github.com/ikawrakow/ik_llama.cpp/pull/1231
- https://github.com/ikawrakow/ik_llama.cpp/pull/1239
- https://github.com/ikawrakow/ik_llama.cpp/pull/1240

```bash
CUDA_VISIBLE_DEVICES="0,1" \
./build/bin/llama-server \
    --model "$model" \
    --alias ubergarm/Step-3.5-Flash \
    -c 65536 \
    -ger \
    -sm graph \
    -ngl 99 \
    -ub 4096 -b 4096 \
    -ts 47,48 \
    --threads 1 \
    --host 127.0.0.1 \
    --port 8080 \
    --jinja \
    --no-mmap
```
CPU-only Mainline llama.cpp Example:
```bash
numactl -N "$SOCKET" -m "$SOCKET" \
./build/bin/llama-server \
    --model "$model" \
    --alias ubergarm/Step-3.5-Flash \
    --ctx-size 65536 \
    -ctk q8_0 -ctv q8_0 \
    -ub 4096 -b 4096 \
    --parallel 1 \
    --threads 96 \
    --threads-batch 128 \
    --numa numactl \
    --host 127.0.0.1 \
    --port 8080 \
    --no-mmap \
    --jinja
```
The chat template baked into these GGUFs is from the original one. Because of this note and the later-updated official GGUF chat template, you will probably want to copy-paste the template from the official GGUF chat template and use `--chat-template-file myTemplate.jinja`.

Also check Discussion 1 for a tested working chat template for tool use, thanks to mindkrypted!

Another option for mainline tool-calling users is to check out pwilkin's autoparser branch.
## References
- ik_llama.cpp
- Getting Started Guide (already out of date lol)
- ubergarm-imatrix-calibration-corpus-v02.txt
- mainline llama.cpp PR19283 converted with `pull/19283/head:pr/step3.5-flash@5737bcf1b` plus casting `step35.attention.sliding_window_pattern` to `[INT32]`, as for some reason it defaults to `[BOOL]` for me (which would work fine for mainline regardless)
- ik_llama.cpp PR1231 imatrix & quantized with `ik/step35_compat@9a0b5e80`