---
base_model: NewEden/Apertus-8b-instruct-patched
extra_gated_button_content: Submit
extra_gated_fields:
  Affiliation: text
  By clicking Submit below I accept the terms of use: checkbox
  Country: country
  Your Name: text
  geo: ip_location
extra_gated_prompt: |-
  ### Apertus LLM Acceptable Use Policy
  (1.0 | September 1, 2025)
  "Agreement" The Swiss National AI Institute (SNAI) is a partnership between the two Swiss Federal Institutes of Technology, ETH Zurich and EPFL.

  By using the Apertus LLM you agree to indemnify, defend, and hold harmless ETH Zurich and EPFL against any third-party claims arising from your use of the Apertus LLM.

  The training data and the Apertus LLM may contain or generate information that directly or indirectly refers to an identifiable individual ("Personal Data"). You process Personal Data as an independent controller in accordance with applicable data protection law. SNAI will regularly provide a file with hash values for download, which you can apply as an output filter to your use of the Apertus LLM. The file reflects data protection deletion requests which have been addressed to SNAI as the developer of the Apertus LLM. It allows you to remove Personal Data contained in the model output. We strongly advise downloading and applying this output filter from SNAI every six months following the release of the model.
language:
  - en
library_name: transformers
license: apache-2.0
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags:
  - multilingual
  - compliant
  - swiss-ai
  - apertus
---

## About

weighted/imatrix quants of https://huggingface.co/NewEden/Apertus-8b-instruct-patched

For a convenient overview and download list, visit our model page for this model.

static quants are available at https://huggingface.co/mradermacher/Apertus-8b-instruct-patched-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including how to concatenate multi-part files.
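Multi-part quant files are plain byte-splits of a single GGUF file, so rejoining them is a straightforward concatenation. A minimal sketch using dummy files (the `.part1of2`/`.part2of2` naming is an assumption for illustration; check the actual filenames in the repository):

```shell
# Dummy part files stand in for real multi-gigabyte downloads; the
# .part1of2 / .part2of2 naming is assumed for illustration only.
printf 'GGUF-first-half-' > model.gguf.part1of2
printf 'GGUF-second-half' > model.gguf.part2of2

# Rejoining is a simple byte-level concatenation into a single file:
cat model.gguf.part1of2 model.gguf.part2of2 > model.gguf
```

On Windows, `copy /b` performs the same byte-level join.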

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|------|------|---------|-------|
| GGUF | imatrix | 0.1 | imatrix file (for creating your own quants) |
| GGUF | i1-IQ1_S | 2.1 | for the desperate |
| GGUF | i1-IQ1_M | 2.3 | mostly desperate |
| GGUF | i1-IQ2_XXS | 2.5 | |
| GGUF | i1-IQ2_XS | 2.7 | |
| GGUF | i1-IQ2_S | 2.9 | |
| GGUF | i1-IQ2_M | 3.1 | |
| GGUF | i1-Q2_K_S | 3.1 | very low quality |
| GGUF | i1-IQ3_XXS | 3.4 | lower quality |
| GGUF | i1-Q2_K | 3.4 | IQ3_XXS probably better |
| GGUF | i1-IQ3_XS | 3.7 | |
| GGUF | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| GGUF | i1-IQ3_S | 3.8 | beats Q3_K* |
| GGUF | i1-IQ3_M | 3.9 | |
| GGUF | i1-Q3_K_M | 4.3 | IQ3_S probably better |
| GGUF | i1-IQ4_XS | 4.6 | |
| GGUF | i1-Q3_K_L | 4.7 | IQ3_M probably better |
| GGUF | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
| GGUF | i1-Q4_0 | 4.8 | fast, low quality |
| GGUF | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| GGUF | i1-Q4_K_M | 5.2 | fast, recommended |
| GGUF | i1-Q4_1 | 5.2 | |
| GGUF | i1-Q5_K_S | 5.7 | |
| GGUF | i1-Q5_K_M | 5.9 | |
| GGUF | i1-Q6_K | 6.7 | practically like static Q6_K |
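As a rough, illustrative rule of thumb (my sketch, not guidance from this card): pick the largest quant that fits your RAM/VRAM with some headroom for the KV cache and runtime overhead. A hypothetical helper using a subset of the sizes from the table above:

```python
from typing import Optional

# Sizes in GB from the quant table above (subset). The 20% headroom
# factor for KV cache and runtime overhead is an assumption, not a
# llama.cpp guarantee.
QUANTS = {
    "i1-IQ2_M": 3.1,
    "i1-IQ3_M": 3.9,
    "i1-Q4_K_S": 4.8,
    "i1-Q4_K_M": 5.2,
    "i1-Q5_K_M": 5.9,
    "i1-Q6_K": 6.7,
}

def pick_quant(mem_gb: float, headroom: float = 1.2) -> Optional[str]:
    """Return the largest listed quant whose size * headroom fits mem_gb."""
    fitting = {name: gb for name, gb in QUANTS.items() if gb * headroom <= mem_gb}
    if not fitting:
        return None
    return max(fitting, key=fitting.get)

print(pick_quant(8.0))  # → i1-Q5_K_M (Q6_K at 6.7 GB * 1.2 just misses 8 GB)
```

Adjust the headroom factor upward for long contexts; the KV cache grows with context length.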
Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):


And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

## πŸ“‚ GGUF File List

| Filename | Size |
|----------|------|
| Apertus-8b-instruct-patched.i1-IQ1_M.gguf | 2.04 GB |
| Apertus-8b-instruct-patched.i1-IQ1_S.gguf | 1.91 GB |
| Apertus-8b-instruct-patched.i1-IQ2_M.gguf | 2.77 GB |
| Apertus-8b-instruct-patched.i1-IQ2_S.gguf | 2.6 GB |
| Apertus-8b-instruct-patched.i1-IQ2_XS.gguf | 2.44 GB |
| Apertus-8b-instruct-patched.i1-IQ2_XXS.gguf | 2.25 GB |
| Apertus-8b-instruct-patched.i1-IQ3_M.gguf | 3.55 GB |
| Apertus-8b-instruct-patched.i1-IQ3_S.gguf | 3.44 GB |
| Apertus-8b-instruct-patched.i1-IQ3_XS.gguf | 3.32 GB |
| Apertus-8b-instruct-patched.i1-IQ3_XXS.gguf | 3.06 GB |
| Apertus-8b-instruct-patched.i1-IQ4_NL.gguf | 4.37 GB |
| Apertus-8b-instruct-patched.i1-IQ4_XS.gguf | 4.16 GB |
| Apertus-8b-instruct-patched.i1-Q2_K.gguf | 3.06 GB |
| Apertus-8b-instruct-patched.i1-Q2_K_S.gguf | 2.82 GB |
| Apertus-8b-instruct-patched.i1-Q3_K_L.gguf | 4.26 GB |
| Apertus-8b-instruct-patched.i1-Q3_K_M.gguf | 3.88 GB |
| Apertus-8b-instruct-patched.i1-Q3_K_S.gguf | 3.43 GB |
| Apertus-8b-instruct-patched.i1-Q4_0.gguf | 4.38 GB |
| Apertus-8b-instruct-patched.i1-Q4_1.gguf | 4.79 GB |
| Apertus-8b-instruct-patched.i1-Q4_K_M.gguf | 4.71 GB |
| Apertus-8b-instruct-patched.i1-Q4_K_S.gguf | 4.4 GB |
| Apertus-8b-instruct-patched.i1-Q5_K_M.gguf | 5.41 GB |
| Apertus-8b-instruct-patched.i1-Q5_K_S.gguf | 5.23 GB |
| Apertus-8b-instruct-patched.i1-Q6_K.gguf | 6.16 GB |
| Apertus-8b-instruct-patched.imatrix.gguf | 5.15 MB |