πŸ“‹ Model Description


base_model: LLM360/K2
inference: false
language:
  β€’ en
library_name: gguf
license: apache-2.0
pipeline_tag: text-generation
quantized_by: legraphista
tags:
  • nlp
  • llm
  • quantized
  • GGUF
  • imatrix
  • quantization
  • imat
  • imatrix
  • static
  • 16bit
  • 8bit
  • 6bit
  • 5bit
  • 4bit
  • 3bit
  • 2bit
  • 1bit

K2-IMat-GGUF

Llama.cpp imatrix quantization of LLM360/K2

Original Model: LLM360/K2
Original dtype: FP16 (float16)
Quantized by: llama.cpp b3051
IMatrix dataset: here

- IMatrix
- Common Quants
- All Quants
- Llama.cpp
- Why is the IMatrix not applied everywhere?
- How do I merge a split GGUF?

Files

IMatrix

Status: βœ… Available
Link: here

Common Quants

| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| K2.Q8_0/* | Q8_0 | 69.37GB | βœ… Available | βšͺ Static | βœ‚ Yes |
| K2.Q6_K/* | Q6_K | 53.56GB | βœ… Available | βšͺ Static | βœ‚ Yes |
| K2.Q4_K.gguf | Q4_K | 39.35GB | βœ… Available | 🟒 IMatrix | πŸ“¦ No |
| K2.Q3_K.gguf | Q3_K | 31.63GB | βœ… Available | 🟒 IMatrix | πŸ“¦ No |
| K2.Q2_K.gguf | Q2_K | 24.11GB | βœ… Available | 🟒 IMatrix | πŸ“¦ No |

All Quants

| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| K2.FP16/* | F16 | 130.58GB | βœ… Available | βšͺ Static | βœ‚ Yes |
| K2.Q8_0/* | Q8_0 | 69.37GB | βœ… Available | βšͺ Static | βœ‚ Yes |
| K2.Q6_K/* | Q6_K | 53.56GB | βœ… Available | βšͺ Static | βœ‚ Yes |
| K2.Q5_K/* | Q5_K | 46.24GB | βœ… Available | βšͺ Static | βœ‚ Yes |
| K2.Q5_K_S.gguf | Q5_K_S | 44.92GB | βœ… Available | βšͺ Static | πŸ“¦ No |
| K2.Q4_K.gguf | Q4_K | 39.35GB | βœ… Available | 🟒 IMatrix | πŸ“¦ No |
| K2.Q4_K_S.gguf | Q4_K_S | 37.06GB | βœ… Available | 🟒 IMatrix | πŸ“¦ No |
| K2.IQ4_NL.gguf | IQ4_NL | 36.80GB | βœ… Available | 🟒 IMatrix | πŸ“¦ No |
| K2.IQ4_XS.gguf | IQ4_XS | 34.76GB | βœ… Available | 🟒 IMatrix | πŸ“¦ No |
| K2.Q3_K.gguf | Q3_K | 31.63GB | βœ… Available | 🟒 IMatrix | πŸ“¦ No |
| K2.Q3_K_L.gguf | Q3_K_L | 34.65GB | βœ… Available | 🟒 IMatrix | πŸ“¦ No |
| K2.Q3_K_S.gguf | Q3_K_S | 28.16GB | βœ… Available | 🟒 IMatrix | πŸ“¦ No |
| K2.IQ3_M.gguf | IQ3_M | 29.83GB | βœ… Available | 🟒 IMatrix | πŸ“¦ No |
| K2.IQ3_S.gguf | IQ3_S | 28.16GB | βœ… Available | 🟒 IMatrix | πŸ“¦ No |
| K2.IQ3_XS.gguf | IQ3_XS | 26.64GB | βœ… Available | 🟒 IMatrix | πŸ“¦ No |
| K2.IQ3_XXS.gguf | IQ3_XXS | 24.67GB | βœ… Available | 🟒 IMatrix | πŸ“¦ No |
| K2.Q2_K.gguf | Q2_K | 24.11GB | βœ… Available | 🟒 IMatrix | πŸ“¦ No |
| K2.Q2_K_S.gguf | Q2_K_S | 21.98GB | βœ… Available | 🟒 IMatrix | πŸ“¦ No |
| K2.IQ2_M.gguf | IQ2_M | 22.41GB | βœ… Available | 🟒 IMatrix | πŸ“¦ No |
| K2.IQ2_S.gguf | IQ2_S | 20.78GB | βœ… Available | 🟒 IMatrix | πŸ“¦ No |
| K2.IQ2_XS.gguf | IQ2_XS | 19.27GB | βœ… Available | 🟒 IMatrix | πŸ“¦ No |
| K2.IQ2_XXS.gguf | IQ2_XXS | 17.47GB | βœ… Available | 🟒 IMatrix | πŸ“¦ No |
| K2.IQ1_M.gguf | IQ1_M | 15.43GB | βœ… Available | 🟒 IMatrix | πŸ“¦ No |
| K2.IQ1_S.gguf | IQ1_S | 14.21GB | βœ… Available | 🟒 IMatrix | πŸ“¦ No |

Downloading using huggingface-cli

If you do not have huggingface-cli installed:
pip install -U "huggingface_hub[cli]"
Download the specific file you want:
huggingface-cli download legraphista/K2-IMat-GGUF --include "K2.Q8_0.gguf" --local-dir ./
If the model file is big, it has been split into multiple files. In order to download them all to a local folder, run:
huggingface-cli download legraphista/K2-IMat-GGUF --include "K2.Q8_0/*" --local-dir ./

See the FAQ for merging GGUFs.
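If you prefer to script the download, here is a minimal Python sketch using the huggingface_hub API (equivalent to the commands above; the chosen quants are only examples and huggingface_hub must already be installed):

```python
# Minimal sketch: download quants from legraphista/K2-IMat-GGUF via the huggingface_hub API.
# Assumes `pip install -U "huggingface_hub[cli]"`; the selected files are examples only.
from huggingface_hub import hf_hub_download, snapshot_download

# Single-file quant (e.g. K2.Q4_K.gguf)
hf_hub_download(
    repo_id="legraphista/K2-IMat-GGUF",
    filename="K2.Q4_K.gguf",
    local_dir="./",
)

# Split quant: fetch every chunk stored under the K2.Q8_0/ folder
snapshot_download(
    repo_id="legraphista/K2-IMat-GGUF",
    allow_patterns=["K2.Q8_0/*"],
    local_dir="./",
)
```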


Inference

Llama.cpp

llama.cpp/main -m K2.Q8_0.gguf --color -i -p "prompt here"
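If you would rather call the model from Python, here is a minimal sketch using the llama-cpp-python bindings (not part of this repo; n_ctx, n_gpu_layers, and max_tokens are illustrative values, and the path assumes you downloaded or merged K2.Q8_0.gguf as above):

```python
# Minimal sketch: run a single-file (or merged) K2 quant through llama-cpp-python.
# Assumes `pip install llama-cpp-python`; parameters are illustrative, not tuned.
from llama_cpp import Llama

llm = Llama(
    model_path="K2.Q8_0.gguf",  # any single-file or merged quant works here
    n_ctx=4096,                 # context window to allocate
    n_gpu_layers=-1,            # offload all layers if a GPU build is installed
)

output = llm("prompt here", max_tokens=256)
print(output["choices"][0]["text"])
```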

FAQ

Why is the IMatrix not applied everywhere?

According to this investigation, it appears that only the lower quantizations benefit from the imatrix input (as per HellaSwag results).

How do I merge a split GGUF?

  1. Make sure you have gguf-split available
     - To get hold of gguf-split, navigate to https://github.com/ggerganov/llama.cpp/releases
     - Download the appropriate zip for your system from the latest release
     - Unzip the archive and you should be able to find gguf-split
  2. Locate your GGUF chunks folder (ex: K2.Q8_0)
  3. Run gguf-split --merge K2.Q8_0/K2.Q8_0-00001-of-XXXXX.gguf K2.Q8_0.gguf
     - Make sure to point gguf-split to the first chunk of the split (a scripted sketch follows below).
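For convenience, a minimal Python sketch of the same merge step (assumes the gguf-split binary from a llama.cpp release is on your PATH and that the chunks live in a K2.Q8_0 folder, as in the example above):

```python
# Minimal sketch: find the first chunk of a split GGUF and merge it into one file.
# Assumes gguf-split (from a llama.cpp release) is on PATH; the folder name is an example.
import glob
import subprocess

chunks_dir = "K2.Q8_0"  # folder holding the split GGUF chunks
first_chunk = sorted(glob.glob(f"{chunks_dir}/*-00001-of-*.gguf"))[0]
subprocess.run(["gguf-split", "--merge", first_chunk, "K2.Q8_0.gguf"], check=True)
```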

Got a suggestion? Ping me @legraphista!
