πŸ“‹ Model Description


base_model: deepseek-ai/DeepSeek-Coder-V2-Instruct
inference: false
library_name: gguf
license: other
license_link: LICENSE
license_name: deepseek-license
pipeline_tag: text-generation
quantized_by: legraphista
tags:
  • quantized
  • GGUF
  • quantization
  • imat
  • imatrix
  • static
  • 8bit
  • 6bit
  • 5bit
  • 4bit
  • 3bit
  • 2bit
  • 1bit

DeepSeek-Coder-V2-Instruct-IMat-GGUF

Llama.cpp imatrix quantization of deepseek-ai/DeepSeek-Coder-V2-Instruct

Original Model: deepseek-ai/DeepSeek-Coder-V2-Instruct
Original dtype: BF16 (bfloat16)
Quantized by: llama.cpp b3166
IMatrix dataset: here

- Files
  - IMatrix
  - Common Quants
  - All Quants
- Downloading using huggingface-cli
- Inference
  - Simple chat template
  - Chat template with system prompt
  - Llama.cpp
- FAQ
  - Why is the IMatrix not applied everywhere?
  - How do I merge a split GGUF?

Files

IMatrix

Status: βœ… Available
Link: here

Common Quants

| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| DeepSeek-Coder-V2-Instruct.Q8_0/* | Q8_0 | 250.62GB | βœ… Available | βšͺ Static | βœ‚ Yes |
| DeepSeek-Coder-V2-Instruct.Q6_K/* | Q6_K | 193.54GB | βœ… Available | βšͺ Static | βœ‚ Yes |
| DeepSeek-Coder-V2-Instruct.Q4_K/* | Q4_K | 142.45GB | βœ… Available | 🟒 IMatrix | βœ‚ Yes |
| DeepSeek-Coder-V2-Instruct.Q3_K/* | Q3_K | 112.67GB | βœ… Available | 🟒 IMatrix | βœ‚ Yes |
| DeepSeek-Coder-V2-Instruct.Q2_K/* | Q2_K | 85.95GB | βœ… Available | 🟒 IMatrix | βœ‚ Yes |

All Quants

| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| DeepSeek-Coder-V2-Instruct.Q8_0/* | Q8_0 | 250.62GB | βœ… Available | βšͺ Static | βœ‚ Yes |
| DeepSeek-Coder-V2-Instruct.Q6_K/* | Q6_K | 193.54GB | βœ… Available | βšͺ Static | βœ‚ Yes |
| DeepSeek-Coder-V2-Instruct.Q5_K/* | Q5_K | 167.22GB | βœ… Available | βšͺ Static | βœ‚ Yes |
| DeepSeek-Coder-V2-Instruct.Q5_K_S/* | Q5_K_S | 162.31GB | βœ… Available | βšͺ Static | βœ‚ Yes |
| DeepSeek-Coder-V2-Instruct.Q4_K/* | Q4_K | 142.45GB | βœ… Available | 🟒 IMatrix | βœ‚ Yes |
| DeepSeek-Coder-V2-Instruct.Q4_K_S/* | Q4_K_S | 133.88GB | βœ… Available | 🟒 IMatrix | βœ‚ Yes |
| DeepSeek-Coder-V2-Instruct.IQ4_NL/* | IQ4_NL | 132.91GB | βœ… Available | 🟒 IMatrix | βœ‚ Yes |
| DeepSeek-Coder-V2-Instruct.IQ4_XS/* | IQ4_XS | 125.56GB | βœ… Available | 🟒 IMatrix | βœ‚ Yes |
| DeepSeek-Coder-V2-Instruct.Q3_K/* | Q3_K | 112.67GB | βœ… Available | 🟒 IMatrix | βœ‚ Yes |
| DeepSeek-Coder-V2-Instruct.Q3_K_L/* | Q3_K_L | 122.37GB | βœ… Available | 🟒 IMatrix | βœ‚ Yes |
| DeepSeek-Coder-V2-Instruct.Q3_K_S/* | Q3_K_S | 101.68GB | βœ… Available | 🟒 IMatrix | βœ‚ Yes |
| DeepSeek-Coder-V2-Instruct.IQ3_M/* | IQ3_M | 103.37GB | βœ… Available | 🟒 IMatrix | βœ‚ Yes |
| DeepSeek-Coder-V2-Instruct.IQ3_S/* | IQ3_S | 101.68GB | βœ… Available | 🟒 IMatrix | βœ‚ Yes |
| DeepSeek-Coder-V2-Instruct.IQ3_XS/* | IQ3_XS | 96.30GB | βœ… Available | 🟒 IMatrix | βœ‚ Yes |
| DeepSeek-Coder-V2-Instruct.IQ3_XXS/* | IQ3_XXS | 90.85GB | βœ… Available | 🟒 IMatrix | βœ‚ Yes |
| DeepSeek-Coder-V2-Instruct.Q2_K/* | Q2_K | 85.95GB | βœ… Available | 🟒 IMatrix | βœ‚ Yes |
| DeepSeek-Coder-V2-Instruct.Q2_K_S/* | Q2_K_S | 79.60GB | βœ… Available | 🟒 IMatrix | βœ‚ Yes |
| DeepSeek-Coder-V2-Instruct.IQ2_M/* | IQ2_M | 76.92GB | βœ… Available | 🟒 IMatrix | βœ‚ Yes |
| DeepSeek-Coder-V2-Instruct.IQ2_S/* | IQ2_S | 69.87GB | βœ… Available | 🟒 IMatrix | βœ‚ Yes |
| DeepSeek-Coder-V2-Instruct.IQ2_XS/* | IQ2_XS | 68.71GB | βœ… Available | 🟒 IMatrix | βœ‚ Yes |
| DeepSeek-Coder-V2-Instruct.IQ2_XXS/* | IQ2_XXS | 61.50GB | βœ… Available | 🟒 IMatrix | βœ‚ Yes |
| DeepSeek-Coder-V2-Instruct.IQ1_M/* | IQ1_M | 52.68GB | βœ… Available | 🟒 IMatrix | βœ‚ Yes |
| DeepSeek-Coder-V2-Instruct.IQ1_S/* | IQ1_S | 47.39GB | βœ… Available | 🟒 IMatrix | βœ‚ Yes |

Downloading using huggingface-cli

If you do not have huggingface-cli installed:
pip install -U "huggingface_hub[cli]"
Download the specific file you want:
huggingface-cli download legraphista/DeepSeek-Coder-V2-Instruct-IMat-GGUF --include "DeepSeek-Coder-V2-Instruct.Q8_0.gguf" --local-dir ./
If the model file is big, it has been split into multiple files. In order to download them all to a local folder, run:
huggingface-cli download legraphista/DeepSeek-Coder-V2-Instruct-IMat-GGUF --include "DeepSeek-Coder-V2-Instruct.Q8_0/*" --local-dir ./

See the FAQ below for merging split GGUFs.
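
If you would rather script the download, the same include patterns can be passed to the huggingface_hub Python API. A minimal sketch; the chosen quant pattern is just an example:

```python
# Minimal sketch: scripted equivalent of the huggingface-cli commands above.
# Uses the same package installed by: pip install -U "huggingface_hub[cli]"
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="legraphista/DeepSeek-Coder-V2-Instruct-IMat-GGUF",
    # For a split quant, match the whole chunk folder; for a single-file quant,
    # match the .gguf file directly (e.g. "DeepSeek-Coder-V2-Instruct.Q8_0.gguf").
    allow_patterns=["DeepSeek-Coder-V2-Instruct.Q8_0/*"],
    local_dir="./",
)
```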


Inference

Simple chat template

<|begin▁of▁sentence|>User: {user_prompt}

Assistant: {assistant_response}<|end▁of▁sentence|>User: {next_user_prompt}

Chat template with system prompt

<|begin▁of▁sentence|>{system_prompt}

User: {user_prompt}

Assistant: {assistant_response}<|end▁of▁sentence|>User: {next_user_prompt}
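
For scripted use, the two templates above can be assembled as plain strings. A minimal sketch; the helper names and the convention of stopping the prompt at "Assistant:" are my own, not part of the model card:

```python
# Minimal sketch: building a generation prompt from the chat templates above.
# The special tokens are copied verbatim from the templates; everything else
# is a placeholder you supply.
BOS = "<|begin▁of▁sentence|>"
EOS = "<|end▁of▁sentence|>"

def build_prompt(user_prompt: str, system_prompt: str | None = None) -> str:
    """Return a prompt that ends right where the assistant's reply should start."""
    prefix = f"{BOS}{system_prompt}\n\n" if system_prompt else BOS
    return f"{prefix}User: {user_prompt}\n\nAssistant:"

def append_turn(history: str, assistant_response: str, next_user_prompt: str) -> str:
    """Extend a prompt with a finished assistant turn and the next user message."""
    return f"{history} {assistant_response}{EOS}User: {next_user_prompt}\n\nAssistant:"

print(build_prompt("Write a quicksort in Python.",
                   system_prompt="You are a helpful coding assistant."))
```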

Llama.cpp

llama.cpp/main -m DeepSeek-Coder-V2-Instruct.Q8_0.gguf --color -i -p "prompt here (according to the chat template)"
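
The same file can also be loaded from Python via the llama-cpp-python bindings (not part of this repo). A minimal sketch, assuming you have a single-file or merged quant; for a split quant you can point model_path at the first chunk instead:

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# model_path assumes a merged or single-file quant; for a split quant, point it
# at the first chunk (the -00001-of-XXXXX.gguf file).
from llama_cpp import Llama

llm = Llama(
    model_path="DeepSeek-Coder-V2-Instruct.Q8_0.gguf",
    n_ctx=4096,       # context window; adjust to your hardware
    n_gpu_layers=-1,  # offload all layers to GPU if the build supports it
)

prompt = "<|begin▁of▁sentence|>User: Write a quicksort in Python.\n\nAssistant:"
out = llm(prompt, max_tokens=512, stop=["<|end▁of▁sentence|>", "User:"])
print(out["choices"][0]["text"])
```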

FAQ

Why is the IMatrix not applied everywhere?

According to this investigation, it appears that only the lower quantizations benefit from the imatrix input (as per HellaSwag results).

How do I merge a split GGUF?

  1. Make sure you have gguf-split available
     - To get hold of gguf-split, navigate to https://github.com/ggerganov/llama.cpp/releases
     - Download the appropriate zip for your system from the latest release
     - Unzip the archive and you should be able to find gguf-split
  2. Locate your GGUF chunks folder (ex: DeepSeek-Coder-V2-Instruct.Q8_0)
  3. Run gguf-split --merge DeepSeek-Coder-V2-Instruct.Q8_0/DeepSeek-Coder-V2-Instruct.Q8_0-00001-of-XXXXX.gguf DeepSeek-Coder-V2-Instruct.Q8_0.gguf
     - Make sure to point gguf-split to the first chunk of the split (a scripted version of this step is sketched after this list).
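
If you want to drive the merge from a script, the same gguf-split invocation can be wrapped in Python. A minimal sketch, assuming gguf-split is on your PATH and the chunk folder follows the naming shown above:

```python
# Minimal sketch: locate the first chunk and run the gguf-split merge from step 3.
import glob
import subprocess

chunk_dir = "DeepSeek-Coder-V2-Instruct.Q8_0"
# gguf-split must be given the first chunk of the split.
first_chunk = sorted(glob.glob(f"{chunk_dir}/*-00001-of-*.gguf"))[0]
merged = f"{chunk_dir}.gguf"

subprocess.run(["gguf-split", "--merge", first_chunk, merged], check=True)
```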

Got a suggestion? Ping me @legraphista!
