πŸ“‹ Model Description


The model-card metadata, reconstructed as YAML:

```yaml
base_model: estrogen/c4ai-command-r7b-12-2024
extra_gated_prompt: >-
  By submitting this form, you agree to the License Agreement and acknowledge
  that the information you provide will be collected, used, and shared in
  accordance with Cohere's Privacy Policy. You'll receive email updates about
  C4AI and Cohere research, events, products and services. You can unsubscribe
  at any time.
extra_gated_fields:
  Name: text
  Affiliation: text
  Country: country
  I agree to use this model for non-commercial use ONLY: checkbox
language:
  - en
  - fr
  - de
  - es
  - it
  - pt
  - ja
  - ko
  - zh
  - ar
  - el
  - fa
  - pl
  - id
  - cs
  - he
  - hi
  - nl
  - ro
  - ru
  - tr
  - uk
  - vi
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
```

About

weighted/imatrix quants of https://huggingface.co/estrogen/c4ai-command-r7b-12-2024


static quants are available at https://huggingface.co/mradermacher/c4ai-command-r7b-12-2024-GGUF

Usage

If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files.
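Concatenating multi-part files is a plain byte-wise join, in part order. A minimal sketch in Python (the `join_parts` helper and the `.partXofY` filenames are illustrative, not part of any tool shipped with this repo):

```python
from pathlib import Path

def join_parts(parts, out_path):
    """Byte-wise concatenate split GGUF parts, in order, into one file."""
    with open(out_path, "wb") as out:
        for part in parts:
            out.write(Path(part).read_bytes())

# Illustrative usage; substitute the actual part names you downloaded:
# join_parts(["model.gguf.part1of2", "model.gguf.part2of2"], "model.gguf")
```

On Unix-like systems, `cat part1 part2 > model.gguf` achieves the same thing.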

Provided Quants

(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|------|------|---------|-------|
| GGUF | i1-IQ1_S | 2.3 | for the desperate |
| GGUF | i1-IQ1_M | 2.5 | mostly desperate |
| GGUF | i1-IQ2_XXS | 2.7 | |
| GGUF | i1-IQ2_XS | 2.9 | |
| GGUF | i1-IQ2_S | 3.0 | |
| GGUF | i1-IQ2_M | 3.2 | |
| GGUF | i1-Q2_K_S | 3.3 | very low quality |
| GGUF | i1-IQ3_XXS | 3.5 | lower quality |
| GGUF | i1-Q2_K | 3.5 | IQ3_XXS probably better |
| GGUF | i1-IQ3_XS | 3.8 | |
| GGUF | i1-Q3_K_S | 4.0 | IQ3_XS probably better |
| GGUF | i1-IQ3_S | 4.0 | beats Q3_K* |
| GGUF | i1-IQ3_M | 4.1 | |
| GGUF | i1-Q3_K_M | 4.3 | IQ3_S probably better |
| GGUF | i1-Q3_K_L | 4.6 | IQ3_M probably better |
| GGUF | i1-IQ4_XS | 4.7 | |
| GGUF | i1-Q4_0 | 4.9 | fast, low quality |
| GGUF | i1-IQ4_NL | 4.9 | prefer IQ4_XS |
| GGUF | i1-Q4_K_S | 4.9 | optimal size/speed/quality |
| GGUF | i1-Q4_K_M | 5.2 | fast, recommended |
| GGUF | i1-Q4_1 | 5.3 | |
| GGUF | i1-Q5_K_S | 5.8 | |
| GGUF | i1-Q5_K_M | 5.9 | |
| GGUF | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

(image: ikawrakow's quality comparison of lower-quality quant types)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
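As a rough sanity check on the sizes above, the effective bits per weight of a quant can be estimated from its file size alone. A quick sketch, assuming roughly 8 billion parameters for this model (the exact parameter count is an assumption, not stated in this card):

```python
def bits_per_weight(size_gb: float, n_params: float = 8.0e9) -> float:
    """Estimate effective bits per weight from a quant's file size.

    Assumes ~8e9 parameters; the true count for this model may differ.
    """
    return size_gb * 1e9 * 8 / n_params

# e.g. the 4.9 GB i1-Q4_K_S works out to ~4.9 bits/weight under this assumption
print(f"{bits_per_weight(4.9):.2f}")
```

Under this assumption, the 2.3 GB IQ1_S lands near 2.3 bits/weight and the 6.7 GB Q6_K near 6.7 bits/weight, which matches the naming convention's rough intent.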

FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.

Thanks

I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

πŸ“‚ GGUF File List

πŸ“ Filename πŸ“¦ Size ⚑ Download
c4ai-command-r7b-12-2024.i1-IQ1_M.gguf
LFS
2.19 GB Download
c4ai-command-r7b-12-2024.i1-IQ1_S.gguf
LFS
2.06 GB Download
c4ai-command-r7b-12-2024.i1-IQ2_M.gguf
LFS Q2
2.87 GB Download
c4ai-command-r7b-12-2024.i1-IQ2_S.gguf
LFS Q2
2.7 GB Download
c4ai-command-r7b-12-2024.i1-IQ2_XS.gguf
LFS Q2
2.6 GB Download
c4ai-command-r7b-12-2024.i1-IQ2_XXS.gguf
LFS Q2
2.41 GB Download
c4ai-command-r7b-12-2024.i1-IQ3_M.gguf
LFS Q3
3.72 GB Download
c4ai-command-r7b-12-2024.i1-IQ3_S.gguf
LFS Q3
3.62 GB Download
c4ai-command-r7b-12-2024.i1-IQ3_XS.gguf
LFS Q3
3.47 GB Download
c4ai-command-r7b-12-2024.i1-IQ3_XXS.gguf
LFS Q3
3.18 GB Download
c4ai-command-r7b-12-2024.i1-IQ4_NL.gguf
LFS Q4
4.48 GB Download
c4ai-command-r7b-12-2024.i1-IQ4_XS.gguf
LFS Q4
4.28 GB Download
c4ai-command-r7b-12-2024.i1-Q2_K.gguf
LFS Q2
3.2 GB Download
c4ai-command-r7b-12-2024.i1-Q2_K_S.gguf
LFS Q2
3.03 GB Download
c4ai-command-r7b-12-2024.i1-Q3_K_L.gguf
LFS Q3
4.22 GB Download
c4ai-command-r7b-12-2024.i1-Q3_K_M.gguf
LFS Q3
3.93 GB Download
c4ai-command-r7b-12-2024.i1-Q3_K_S.gguf
LFS Q3
3.6 GB Download
c4ai-command-r7b-12-2024.i1-Q4_0.gguf
Recommended LFS Q4
4.48 GB Download
c4ai-command-r7b-12-2024.i1-Q4_1.gguf
LFS Q4
4.87 GB Download
c4ai-command-r7b-12-2024.i1-Q4_K_M.gguf
LFS Q4
4.71 GB Download
c4ai-command-r7b-12-2024.i1-Q4_K_S.gguf
LFS Q4
4.5 GB Download
c4ai-command-r7b-12-2024.i1-Q5_K_M.gguf
LFS Q5
5.41 GB Download
c4ai-command-r7b-12-2024.i1-Q5_K_S.gguf
LFS Q5
5.28 GB Download
c4ai-command-r7b-12-2024.i1-Q6_K.gguf
LFS Q6
6.14 GB Download