πŸ“‹ Model Description


base_model: HuggingFaceH4/mistral-7b-grok
datasets:
  - HuggingFaceH4/grok-conversation-harmless
  - HuggingFaceH4/ultrachat_200k
language:
  - en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
  - alignment-handbook
  - generated_from_trainer

About

static quants of https://huggingface.co/HuggingFaceH4/mistral-7b-grok


weighted/imatrix quants are available at https://huggingface.co/mradermacher/mistral-7b-grok-i1-GGUF

Usage

If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including how to concatenate multi-part files.
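
For a concrete starting point, here is a minimal sketch that downloads one quant from this repo and runs it with the `huggingface_hub` and `llama-cpp-python` packages. The repo id `mradermacher/mistral-7b-grok-GGUF` and the choice of the Q4_K_M file are assumptions based on the file list below, not a prescribed setup; adjust both to the quant you actually want.

```python
# Minimal sketch: fetch one GGUF quant and run it locally.
# Assumes: pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Repo id and filename are assumptions taken from the file list below.
model_path = hf_hub_download(
    repo_id="mradermacher/mistral-7b-grok-GGUF",
    filename="mistral-7b-grok.Q4_K_M.gguf",
)

# Load the quantized model; n_ctx sets the context window.
llm = Llama(model_path=model_path, n_ctx=4096)

out = llm("Explain GGUF quantization in one sentence.", max_tokens=128)
print(out["choices"][0]["text"])
```

All quants listed here are single files; multi-part GGUFs (not used in this repo) would need to be concatenated before loading, as described in the linked READMEs.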

Provided Quants

(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|------|------|---------|-------|
| GGUF | Q2_K | 3.0 | |
| GGUF | IQ3_XS | 3.3 | |
| GGUF | Q3_K_S | 3.4 | |
| GGUF | IQ3_S | 3.4 | beats Q3_K* |
| GGUF | IQ3_M | 3.5 | |
| GGUF | Q3_K_M | 3.8 | lower quality |
| GGUF | Q3_K_L | 4.1 | |
| GGUF | IQ4_XS | 4.2 | |
| GGUF | Q4_0_4_4 | 4.2 | fast on arm, low quality |
| GGUF | Q4_K_S | 4.4 | fast, recommended |
| GGUF | Q4_K_M | 4.6 | fast, recommended |
| GGUF | Q5_K_S | 5.3 | |
| GGUF | Q5_K_M | 5.4 | |
| GGUF | Q6_K | 6.2 | very good quality |
| GGUF | Q8_0 | 7.9 | fast, best quality |
| GGUF | f16 | 14.6 | 16 bpw, overkill |
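
As a rough guide to choosing from the table, the sketch below picks the largest quant that fits a given memory budget. The type names and sizes are copied from the table above; the budget figure and the helper itself are only illustrative, since real usage also needs headroom for the KV cache and runtime overhead.

```python
# Hypothetical helper: pick the largest quant from the table above that fits a RAM budget.
QUANTS = [
    ("Q2_K", 3.0), ("IQ3_XS", 3.3), ("Q3_K_S", 3.4), ("IQ3_S", 3.4),
    ("IQ3_M", 3.5), ("Q3_K_M", 3.8), ("Q3_K_L", 4.1), ("IQ4_XS", 4.2),
    ("Q4_0_4_4", 4.2), ("Q4_K_S", 4.4), ("Q4_K_M", 4.6), ("Q5_K_S", 5.3),
    ("Q5_K_M", 5.4), ("Q6_K", 6.2), ("Q8_0", 7.9), ("f16", 14.6),
]

def pick_quant(budget_gb: float) -> str:
    """Return the largest quant type whose file size fits within budget_gb."""
    fitting = [(size, name) for name, size in QUANTS if size <= budget_gb]
    if not fitting:
        raise ValueError("no quant fits the given budget")
    return max(fitting)[1]

# Example: with ~8 GB free for weights, Q8_0 (7.9 GB) is the largest fit.
print(pick_quant(8.0))
```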
Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

(image: quant type quality comparison graph)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.

Thanks

I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.

πŸ“‚ GGUF File List

πŸ“ Filename πŸ“¦ Size ⚑ Download
mistral-7b-grok.IQ3_M.gguf
LFS Q3
3.2 GB Download
mistral-7b-grok.IQ3_S.gguf
LFS Q3
3.11 GB Download
mistral-7b-grok.IQ3_XS.gguf
LFS Q3
2.94 GB Download
mistral-7b-grok.IQ4_XS.gguf
LFS Q4
3.82 GB Download
mistral-7b-grok.Q2_K.gguf
LFS Q2
2.68 GB Download
mistral-7b-grok.Q3_K_L.gguf
LFS Q3
3.7 GB Download
mistral-7b-grok.Q3_K_M.gguf
LFS Q3
3.42 GB Download
mistral-7b-grok.Q3_K_S.gguf
LFS Q3
3.09 GB Download
mistral-7b-grok.Q4_0_4_4.gguf
Recommended LFS Q4
3.83 GB Download
mistral-7b-grok.Q4_K_M.gguf
LFS Q4
4.21 GB Download
mistral-7b-grok.Q4_K_S.gguf
LFS Q4
4 GB Download
mistral-7b-grok.Q5_K_M.gguf
LFS Q5
4.92 GB Download
mistral-7b-grok.Q5_K_S.gguf
LFS Q5
4.8 GB Download
mistral-7b-grok.Q6_K.gguf
LFS Q6
5.68 GB Download
mistral-7b-grok.Q8_0.gguf
LFS Q8
7.28 GB Download
mistral-7b-grok.f16.gguf
LFS FP16
13.49 GB Download