πŸ“‹ Model Description


base_model: anthracite-org/magnum-v4-22b
datasets:
  • anthracite-org/c2_logs_32k_mistral-v3_v1.2_no_system
  • anthracite-org/kalo-opus-instruct-22k-no-refusal-no-system
  • anthracite-org/kalo-opus-instruct-3k-filtered-no-system
  • anthracite-org/nopm_claude_writing_fixed
  • anthracite-org/kalo_opus_misc_240827_no_system
  • anthracite-org/kalo_misc_part2_no_system
language:
  • en
library_name: transformers
license: other
license_name: mrl
quantized_by: mradermacher
tags:
  • chat

About

weighted/imatrix quants of https://huggingface.co/anthracite-org/magnum-v4-22b

static quants are available at https://huggingface.co/mradermacher/magnum-v4-22b-GGUF

Usage

If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files.
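
As a concrete starting point, a single quant file can also be pulled directly with the huggingface_hub Python client. This is a minimal sketch, not part of the original card: the repo_id below is an assumption (imatrix quants are typically published in a separate -i1-GGUF repository), so substitute whatever repository the download links in the table below actually point to.

```python
# Minimal sketch (assumption: the imatrix quants live in an -i1-GGUF repo).
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="mradermacher/magnum-v4-22b-i1-GGUF",   # assumed repo id, verify against the links below
    filename="magnum-v4-22b.i1-Q4_K_M.gguf",        # "fast, recommended" per the quant table
)
print(model_path)  # local cache path of the downloaded .gguf file
```

Multi-part quants would need to be concatenated before loading, as described in the READMEs linked above; the files listed for this model are all single-part.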

Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable to similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|------|------|---------|-------|
| GGUF | i1-IQ1_S | 4.9 | for the desperate |
| GGUF | i1-IQ1_M | 5.4 | mostly desperate |
| GGUF | i1-IQ2_XXS | 6.1 | |
| GGUF | i1-IQ2_XS | 6.7 | |
| GGUF | i1-IQ2_S | 7.1 | |
| GGUF | i1-IQ2_M | 7.7 | |
| GGUF | i1-Q2_K | 8.4 | IQ3_XXS probably better |
| GGUF | i1-IQ3_XXS | 8.7 | lower quality |
| GGUF | i1-IQ3_XS | 9.3 | |
| GGUF | i1-Q3_K_S | 9.7 | IQ3_XS probably better |
| GGUF | i1-IQ3_S | 9.8 | beats Q3_K* |
| GGUF | i1-IQ3_M | 10.2 | |
| GGUF | i1-Q3_K_M | 10.9 | IQ3_S probably better |
| GGUF | i1-Q3_K_L | 11.8 | IQ3_M probably better |
| GGUF | i1-IQ4_XS | 12.0 | |
| GGUF | i1-Q4_0 | 12.7 | fast, low quality |
| GGUF | i1-Q4_K_S | 12.8 | optimal size/speed/quality |
| GGUF | i1-Q4_K_M | 13.4 | fast, recommended |
| GGUF | i1-Q5_K_S | 15.4 | |
| GGUF | i1-Q5_K_M | 15.8 | |
| GGUF | i1-Q6_K | 18.4 | practically like static Q6_K |

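To actually run one of these quants locally, any llama.cpp-based runtime works; below is a minimal sketch using llama-cpp-python, assuming the Q4_K_M file from the table has already been downloaded as shown in the Usage section. The context size, GPU offload setting, and prompt are illustrative placeholders, not recommendations from this card.

```python
# Minimal sketch (assumptions: llama-cpp-python is installed and model_path
# points at the downloaded .gguf quant from the earlier snippet).
from llama_cpp import Llama

llm = Llama(
    model_path="magnum-v4-22b.i1-Q4_K_M.gguf",  # local path to the quant file
    n_ctx=8192,        # context window; adjust to available RAM/VRAM
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a one-sentence greeting."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```
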
Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

[graph: ikawrakow's comparison of lower-quality quant types]

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.

Thanks

I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

πŸ“‚ GGUF File List

πŸ“ Filename πŸ“¦ Size ⚑ Download
magnum-v4-22b.i1-IQ1_M.gguf
LFS
4.91 GB Download
magnum-v4-22b.i1-IQ1_S.gguf
LFS
4.5 GB Download
magnum-v4-22b.i1-IQ2_M.gguf
LFS Q2
7.1 GB Download
magnum-v4-22b.i1-IQ2_S.gguf
LFS Q2
6.55 GB Download
magnum-v4-22b.i1-IQ2_XS.gguf
LFS Q2
6.19 GB Download
magnum-v4-22b.i1-IQ2_XXS.gguf
LFS Q2
5.58 GB Download
magnum-v4-22b.i1-IQ3_M.gguf
LFS Q3
9.37 GB Download
magnum-v4-22b.i1-IQ3_S.gguf
LFS Q3
9.02 GB Download
magnum-v4-22b.i1-IQ3_XS.gguf
LFS Q3
8.55 GB Download
magnum-v4-22b.i1-IQ3_XXS.gguf
LFS Q3
8.01 GB Download
magnum-v4-22b.i1-IQ4_XS.gguf
LFS Q4
11.12 GB Download
magnum-v4-22b.i1-Q2_K.gguf
LFS Q2
7.7 GB Download
magnum-v4-22b.i1-Q3_K_L.gguf
LFS Q3
10.92 GB Download
magnum-v4-22b.i1-Q3_K_M.gguf
LFS Q3
10.02 GB Download
magnum-v4-22b.i1-Q3_K_S.gguf
LFS Q3
8.98 GB Download
magnum-v4-22b.i1-Q4_0.gguf
Recommended LFS Q4
11.75 GB Download
magnum-v4-22b.i1-Q4_K_M.gguf
LFS Q4
12.43 GB Download
magnum-v4-22b.i1-Q4_K_S.gguf
LFS Q4
11.79 GB Download
magnum-v4-22b.i1-Q5_K_M.gguf
LFS Q5
14.64 GB Download
magnum-v4-22b.i1-Q5_K_S.gguf
LFS Q5
14.27 GB Download
magnum-v4-22b.i1-Q6_K.gguf
LFS Q6
17 GB Download