Model Description
```yaml
base_model: NewEden/Apertus-8b-instruct-patched
extra_gated_button_content: Submit
extra_gated_fields:
  Affiliation: text
  By clicking Submit below I accept the terms of use: checkbox
  Country: country
  Your Name: text
  geo: ip_location
extra_gated_prompt: "### Apertus LLM Acceptable Use Policy \n(1.0 | September 1, 2025)\n\"Agreement\" The Swiss National AI Institute (SNAI) is a partnership between the two Swiss Federal Institutes of Technology, ETH Zurich and EPFL. \n\nBy using the Apertus LLM you agree to indemnify, defend, and hold harmless ETH Zurich and EPFL against any third-party claims arising from your use of Apertus LLM. \n\nThe training data and the Apertus LLM may contain or generate information that directly or indirectly refers to an identifiable individual (Personal Data). You process Personal Data as independent controller in accordance with applicable data protection law. SNAI will regularly provide a file with hash values for download which you can apply as an output filter to your use of our Apertus LLM. The file reflects data protection deletion requests which have been addressed to SNAI as the developer of the Apertus LLM. It allows you to remove Personal Data contained in the model output. We strongly advise downloading and applying this output filter from SNAI every six months following the release of the model. "
language:
- en
- multilingual
tags:
- compliant
- swiss-ai
- apertus
```
About
weighted/imatrix quants of https://huggingface.co/NewEden/Apertus-8b-instruct-patched
For a convenient overview and download list, visit our model page for this model.
static quants are available at https://huggingface.co/mradermacher/Apertus-8b-instruct-patched-GGUF
Usage
If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files.
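As a minimal sketch of the concatenation step: old-style multi-part quants (suffixed `.part1of2`, `.part2of2`, and so on) are joined with plain `cat`. The filenames below are placeholders standing in for a real split quant, and the dummy contents exist only to make the example self-contained; newer llama.cpp-style splits (`-00001-of-0000N.gguf`) should instead be merged with llama.cpp's `gguf-split` tool.

```shell
# Dummy part files standing in for a real multi-part quant download:
printf 'GGUF-part-1' > model.gguf.part1of2
printf 'GGUF-part-2' > model.gguf.part2of2

# Old-style multi-part GGUF files are joined with plain cat, in order:
cat model.gguf.part1of2 model.gguf.part2of2 > model.gguf
```

The merged `model.gguf` can then be loaded like any single-file quant, e.g. with a llama.cpp-based runtime.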
Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|---|---|---|---|
| GGUF | imatrix | 0.1 | imatrix file (for creating your own quants) |
| GGUF | i1-IQ1_S | 2.1 | for the desperate |
| GGUF | i1-IQ1_M | 2.3 | mostly desperate |
| GGUF | i1-IQ2_XXS | 2.5 | |
| GGUF | i1-IQ2_XS | 2.7 | |
| GGUF | i1-IQ2_S | 2.9 | |
| GGUF | i1-IQ2_M | 3.1 | |
| GGUF | i1-Q2_K_S | 3.1 | very low quality |
| GGUF | i1-IQ3_XXS | 3.4 | lower quality |
| GGUF | i1-Q2_K | 3.4 | IQ3_XXS probably better |
| GGUF | i1-IQ3_XS | 3.7 | |
| GGUF | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| GGUF | i1-IQ3_S | 3.8 | beats Q3_K* |
| GGUF | i1-IQ3_M | 3.9 | |
| GGUF | i1-Q3_K_M | 4.3 | IQ3_S probably better |
| GGUF | i1-IQ4_XS | 4.6 | |
| GGUF | i1-Q3_K_L | 4.7 | IQ3_M probably better |
| GGUF | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
| GGUF | i1-Q4_0 | 4.8 | fast, low quality |
| GGUF | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| GGUF | i1-Q4_K_M | 5.2 | fast, recommended |
| GGUF | i1-Q4_1 | 5.2 | |
| GGUF | i1-Q5_K_S | 5.7 | |
| GGUF | i1-Q5_K_M | 5.9 | |
| GGUF | i1-Q6_K | 6.7 | practically like static Q6_K |
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
GGUF File List
| Filename | Size | Notes |
|---|---|---|
| Apertus-8b-instruct-patched.i1-IQ1_M.gguf | 2.04 GB | |
| Apertus-8b-instruct-patched.i1-IQ1_S.gguf | 1.91 GB | |
| Apertus-8b-instruct-patched.i1-IQ2_M.gguf | 2.77 GB | |
| Apertus-8b-instruct-patched.i1-IQ2_S.gguf | 2.6 GB | |
| Apertus-8b-instruct-patched.i1-IQ2_XS.gguf | 2.44 GB | |
| Apertus-8b-instruct-patched.i1-IQ2_XXS.gguf | 2.25 GB | |
| Apertus-8b-instruct-patched.i1-IQ3_M.gguf | 3.55 GB | |
| Apertus-8b-instruct-patched.i1-IQ3_S.gguf | 3.44 GB | |
| Apertus-8b-instruct-patched.i1-IQ3_XS.gguf | 3.32 GB | |
| Apertus-8b-instruct-patched.i1-IQ3_XXS.gguf | 3.06 GB | |
| Apertus-8b-instruct-patched.i1-IQ4_NL.gguf | 4.37 GB | |
| Apertus-8b-instruct-patched.i1-IQ4_XS.gguf | 4.16 GB | |
| Apertus-8b-instruct-patched.i1-Q2_K.gguf | 3.06 GB | |
| Apertus-8b-instruct-patched.i1-Q2_K_S.gguf | 2.82 GB | |
| Apertus-8b-instruct-patched.i1-Q3_K_L.gguf | 4.26 GB | |
| Apertus-8b-instruct-patched.i1-Q3_K_M.gguf | 3.88 GB | |
| Apertus-8b-instruct-patched.i1-Q3_K_S.gguf | 3.43 GB | |
| Apertus-8b-instruct-patched.i1-Q4_0.gguf | 4.38 GB | recommended |
| Apertus-8b-instruct-patched.i1-Q4_1.gguf | 4.79 GB | |
| Apertus-8b-instruct-patched.i1-Q4_K_M.gguf | 4.71 GB | |
| Apertus-8b-instruct-patched.i1-Q4_K_S.gguf | 4.4 GB | |
| Apertus-8b-instruct-patched.i1-Q5_K_M.gguf | 5.41 GB | |
| Apertus-8b-instruct-patched.i1-Q5_K_S.gguf | 5.23 GB | |
| Apertus-8b-instruct-patched.i1-Q6_K.gguf | 6.16 GB | |
| Apertus-8b-instruct-patched.imatrix.gguf | 5.15 MB | |