πŸ“‹ Model Description

```yaml
base_model: bigcode/starcoderplus
datasets:
  - bigcode/the-stack-dedup
  - tiiuae/falcon-refinedweb
extra_gated_fields:
  "I accept the above license agreement, and will use the Model complying with the set of use restrictions and sharing requirements": checkbox
extra_gated_prompt: |-
  ## Model License Agreement
  Please read the BigCode OpenRAIL-M license agreement before accepting it.
language:
  - en
library_name: transformers
quantized_by: mradermacher
tags:
  - code
```

About

weighted/imatrix quants of https://huggingface.co/bigcode/starcoderplus


static quants are available at https://huggingface.co/mradermacher/starcoderplus-GGUF

Usage

If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files.
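As a rough sketch of that concatenation step: multi-part GGUF downloads are plain byte-wise splits, so joining them is equivalent to `cat part1 part2 > model.gguf`. The filenames below are hypothetical examples, not files from this repository:

```python
import shutil

def concat_gguf_parts(parts, output):
    """Join split GGUF part files, in the given order, into one file.

    GGUF multi-part downloads are simple byte splits, so plain
    concatenation reconstructs the original file.
    """
    with open(output, "wb") as out:
        for part in parts:
            with open(part, "rb") as src:
                shutil.copyfileobj(src, out)

# Hypothetical usage (part filenames are illustrative only):
# concat_gguf_parts(
#     ["model.gguf.part1of2", "model.gguf.part2of2"],
#     "model.gguf",
# )
```

On Unix-like systems, `cat model.gguf.part* > model.gguf` does the same thing.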

Provided Quants

(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| GGUF | i1-IQ1_S | 3.8 | for the desperate |
| GGUF | i1-IQ1_M | 4.1 | mostly desperate |
| GGUF | i1-IQ2_XXS | 4.6 | |
| GGUF | i1-IQ2_XS | 5.0 | |
| GGUF | i1-IQ2_S | 5.4 | |
| GGUF | i1-IQ2_M | 5.8 | |
| GGUF | i1-Q2_K | 6.4 | IQ3_XXS probably better |
| GGUF | i1-IQ3_XXS | 6.5 | lower quality |
| GGUF | i1-IQ3_XS | 7.0 | |
| GGUF | i1-IQ3_S | 7.2 | beats Q3_K* |
| GGUF | i1-Q3_K_S | 7.2 | IQ3_XS probably better |
| GGUF | i1-IQ3_M | 7.7 | |
| GGUF | i1-Q3_K_M | 8.5 | IQ3_S probably better |
| GGUF | i1-IQ4_XS | 8.8 | |
| GGUF | i1-Q4_0 | 9.3 | fast, low quality |
| GGUF | i1-Q4_K_S | 9.4 | optimal size/speed/quality |
| GGUF | i1-Q3_K_L | 9.4 | IQ3_M probably better |
| GGUF | i1-Q4_K_M | 10.2 | fast, recommended |
| GGUF | i1-Q5_K_S | 11.2 | |
| GGUF | i1-Q5_K_M | 11.8 | |
| GGUF | i1-Q6_K | 13.2 | practically like static Q6_K |
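For intuition, the Size/GB column maps roughly onto bits per weight. A minimal sketch, assuming StarCoderPlus's roughly 15.5B parameters and decimal gigabytes, and ignoring file metadata overhead:

```python
def bits_per_weight(size_gb, n_params=15.5e9):
    """Approximate bits per weight of a quantized model file.

    Assumptions: decimal GB (1e9 bytes); n_params defaults to ~15.5B,
    an assumed parameter count for StarCoderPlus; metadata overhead
    in the GGUF file is ignored.
    """
    return size_gb * 1e9 * 8 / n_params

# e.g. the 10.2 GB i1-Q4_K_M file works out to roughly 5.3 bits/weight,
# and the 3.8 GB i1-IQ1_S file to just under 2 bits/weight.
```

This is only a ballpark figure, since different tensors within one quant type are stored at different precisions.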
Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):


And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.

Thanks

I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

πŸ“‚ GGUF File List

πŸ“ Filename πŸ“¦ Size ⚑ Download
starcoderplus.i1-IQ1_M.gguf
LFS
3.71 GB Download
starcoderplus.i1-IQ1_S.gguf
LFS
3.42 GB Download
starcoderplus.i1-IQ2_M.gguf
LFS Q2
5.27 GB Download
starcoderplus.i1-IQ2_S.gguf
LFS Q2
4.89 GB Download
starcoderplus.i1-IQ2_XS.gguf
LFS Q2
4.59 GB Download
starcoderplus.i1-IQ2_XXS.gguf
LFS Q2
4.17 GB Download
starcoderplus.i1-IQ3_M.gguf
LFS Q3
7.09 GB Download
starcoderplus.i1-IQ3_S.gguf
LFS Q3
6.62 GB Download
starcoderplus.i1-IQ3_XS.gguf
LFS Q3
6.42 GB Download
starcoderplus.i1-IQ3_XXS.gguf
LFS Q3
5.99 GB Download
starcoderplus.i1-IQ4_XS.gguf
LFS Q4
8.08 GB Download
starcoderplus.i1-Q2_K.gguf
LFS Q2
5.87 GB Download
starcoderplus.i1-Q3_K_L.gguf
LFS Q3
8.63 GB Download
starcoderplus.i1-Q3_K_M.gguf
LFS Q3
7.78 GB Download
starcoderplus.i1-Q3_K_S.gguf
LFS Q3
6.62 GB Download
starcoderplus.i1-Q4_0.gguf
Recommended LFS Q4
8.58 GB Download
starcoderplus.i1-Q4_K_M.gguf
LFS Q4
9.44 GB Download
starcoderplus.i1-Q4_K_S.gguf
LFS Q4
8.62 GB Download
starcoderplus.i1-Q5_K_M.gguf
LFS Q5
10.9 GB Download
starcoderplus.i1-Q5_K_S.gguf
LFS Q5
10.33 GB Download
starcoderplus.i1-Q6_K.gguf
LFS Q6
12.24 GB Download