Model Description
base_model: huihui-ai/Huihui-MoE-5B-A1.7B-abliterated
extra_gated_prompt: |-
  **Usage Warnings**

  **Risk of Sensitive or Controversial Outputs**: This model's safety filtering has been significantly reduced, potentially generating sensitive, controversial, or inappropriate content. Users should exercise caution and rigorously review generated outputs.

  **Not Suitable for All Audiences**: Due to limited content filtering, the model's outputs may be inappropriate for public settings, underage users, or applications requiring high security.

  **Legal and Ethical Responsibilities**: Users must ensure their usage complies with local laws and ethical standards. Generated content may carry legal or ethical risks, and users are solely responsible for any consequences.

  **Research and Experimental Use**: It is recommended to use this model for research, testing, or controlled environments, avoiding direct use in production or public-facing commercial applications.

  **Monitoring and Review Recommendations**: Users are strongly advised to monitor model outputs in real time and conduct manual reviews when necessary to prevent the dissemination of inappropriate content.

  **No Default Safety Guarantees**: Unlike standard models, this model has not undergone rigorous safety optimization. huihui.ai bears no responsibility for any consequences arising from its use.
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-1.7B/blob/main/LICENSE
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags:
- moe
About
weighted/imatrix quants of https://huggingface.co/huihui-ai/Huihui-MoE-5B-A1.7B-abliterated
For a convenient overview and download list, visit our model page.
static quants are available at https://huggingface.co/mradermacher/Huihui-MoE-5B-A1.7B-abliterated-GGUF
Usage
If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including how to concatenate multi-part files.
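As a concrete starting point, here is a minimal sketch that downloads one quant from this repo and runs it with llama-cpp-python. The repo id is assumed to follow mradermacher's usual imatrix naming (the static quants live at the -GGUF repo linked above, so the imatrix repo id below is an assumption), and the filename and i1-Q4_K_M choice simply follow the "fast, recommended" entry in the table below; adjust both to whatever you actually want to run.

```python
# Minimal sketch: fetch one imatrix quant and run a prompt with llama-cpp-python.
# Assumptions: `pip install huggingface_hub llama-cpp-python`, and that the repo id
# follows mradermacher's usual naming; change REPO_ID / FILENAME as needed.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

REPO_ID = "mradermacher/Huihui-MoE-5B-A1.7B-abliterated-i1-GGUF"   # assumed repo id
FILENAME = "Huihui-MoE-5B-A1.7B-abliterated.i1-Q4_K_M.gguf"        # "fast, recommended" quant

# Download (or reuse the cached copy of) the GGUF file.
model_path = hf_hub_download(repo_id=REPO_ID, filename=FILENAME)

# Load the model; n_ctx and n_gpu_layers are illustrative defaults, tune for your hardware.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a mixture-of-experts model is."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```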
Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants; a small helper for picking a quant by size budget is sketched below the table.)
| Link | Type | Size/GB | Notes |
|---|---|---|---|
| GGUF | i1-IQ1_S | 1.4 | for the desperate |
| GGUF | i1-IQ1_M | 1.5 | mostly desperate |
| GGUF | i1-IQ2_XXS | 1.6 | |
| GGUF | i1-IQ2_XS | 1.8 | |
| GGUF | i1-IQ2_S | 1.8 | |
| GGUF | i1-IQ2_M | 2.0 | |
| GGUF | i1-Q2_K_S | 2.0 | very low quality |
| GGUF | i1-Q2_K | 2.1 | IQ3_XXS probably better |
| GGUF | i1-IQ3_XXS | 2.2 | lower quality |
| GGUF | i1-IQ3_XS | 2.4 | |
| GGUF | i1-IQ3_S | 2.5 | beats Q3_K* |
| GGUF | i1-Q3_K_S | 2.5 | IQ3_XS probably better |
| GGUF | i1-IQ3_M | 2.5 | |
| GGUF | i1-Q3_K_M | 2.7 | IQ3_S probably better |
| GGUF | i1-Q3_K_L | 2.9 | IQ3_M probably better |
| GGUF | i1-IQ4_XS | 3.0 | |
| GGUF | i1-IQ4_NL | 3.1 | prefer IQ4_XS |
| GGUF | i1-Q4_0 | 3.1 | fast, low quality |
| GGUF | i1-Q4_K_S | 3.1 | optimal size/speed/quality |
| GGUF | i1-Q4_K_M | 3.3 | fast, recommended |
| GGUF | i1-Q4_1 | 3.4 | |
| GGUF | i1-Q5_K_S | 3.7 | |
| GGUF | i1-Q5_K_M | 3.8 | |
| GGUF | i1-Q6_K | 4.4 | practically like static Q6_K |
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
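If you want to automate the choice, the sketch below picks the largest quant that fits a given file-size budget. The size figures are hand-copied from the "Provided Quants" table above (approximate, in GB); keep in mind that actual memory use at inference time will be somewhat higher than the file size.

```python
# Sketch: pick the largest imatrix quant whose file fits a size budget.
# The mapping is copied from the "Provided Quants" table above (approximate GB).
QUANT_SIZES_GB = {
    "i1-IQ1_S": 1.4, "i1-IQ1_M": 1.5, "i1-IQ2_XXS": 1.6, "i1-IQ2_XS": 1.8,
    "i1-IQ2_S": 1.8, "i1-IQ2_M": 2.0, "i1-Q2_K_S": 2.0, "i1-Q2_K": 2.1,
    "i1-IQ3_XXS": 2.2, "i1-IQ3_XS": 2.4, "i1-IQ3_S": 2.5, "i1-Q3_K_S": 2.5,
    "i1-IQ3_M": 2.5, "i1-Q3_K_M": 2.7, "i1-Q3_K_L": 2.9, "i1-IQ4_XS": 3.0,
    "i1-IQ4_NL": 3.1, "i1-Q4_0": 3.1, "i1-Q4_K_S": 3.1, "i1-Q4_K_M": 3.3,
    "i1-Q4_1": 3.4, "i1-Q5_K_S": 3.7, "i1-Q5_K_M": 3.8, "i1-Q6_K": 4.4,
}

def pick_quant(budget_gb: float):
    """Return the largest listed quant whose file size is at most budget_gb, or None."""
    fitting = [(size, name) for name, size in QUANT_SIZES_GB.items() if size <= budget_gb]
    return max(fitting)[1] if fitting else None

# Example: keep the model file itself under roughly 3 GB.
print(pick_quant(3.0))  # -> "i1-IQ4_XS"
```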
FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
GGUF File List

| Filename | Size |
|---|---|
| Huihui-MoE-5B-A1.7B-abliterated.i1-IQ1_M.gguf | 1.27 GB |
| Huihui-MoE-5B-A1.7B-abliterated.i1-IQ1_S.gguf | 1.18 GB |
| Huihui-MoE-5B-A1.7B-abliterated.i1-IQ2_M.gguf | 1.74 GB |
| Huihui-MoE-5B-A1.7B-abliterated.i1-IQ2_S.gguf | 1.61 GB |
| Huihui-MoE-5B-A1.7B-abliterated.i1-IQ2_XS.gguf | 1.55 GB |
| Huihui-MoE-5B-A1.7B-abliterated.i1-IQ2_XXS.gguf | 1.43 GB |
| Huihui-MoE-5B-A1.7B-abliterated.i1-IQ3_M.gguf | 2.24 GB |
| Huihui-MoE-5B-A1.7B-abliterated.i1-IQ3_S.gguf | 2.2 GB |
| Huihui-MoE-5B-A1.7B-abliterated.i1-IQ3_XS.gguf | 2.1 GB |
| Huihui-MoE-5B-A1.7B-abliterated.i1-IQ3_XXS.gguf | 1.96 GB |
| Huihui-MoE-5B-A1.7B-abliterated.i1-IQ4_NL.gguf | 2.81 GB |
| Huihui-MoE-5B-A1.7B-abliterated.i1-IQ4_XS.gguf | 2.66 GB |
| Huihui-MoE-5B-A1.7B-abliterated.i1-Q2_K.gguf | 1.89 GB |
| Huihui-MoE-5B-A1.7B-abliterated.i1-Q2_K_S.gguf | 1.77 GB |
| Huihui-MoE-5B-A1.7B-abliterated.i1-Q3_K_L.gguf | 2.58 GB |
| Huihui-MoE-5B-A1.7B-abliterated.i1-Q3_K_M.gguf | 2.4 GB |
| Huihui-MoE-5B-A1.7B-abliterated.i1-Q3_K_S.gguf | 2.2 GB |
| Huihui-MoE-5B-A1.7B-abliterated.i1-Q4_0.gguf | 2.82 GB |
| Huihui-MoE-5B-A1.7B-abliterated.i1-Q4_1.gguf | 3.09 GB |
| Huihui-MoE-5B-A1.7B-abliterated.i1-Q4_K_M.gguf | 2.98 GB |
| Huihui-MoE-5B-A1.7B-abliterated.i1-Q4_K_S.gguf | 2.83 GB |
| Huihui-MoE-5B-A1.7B-abliterated.i1-Q5_K_M.gguf | 3.47 GB |
| Huihui-MoE-5B-A1.7B-abliterated.i1-Q5_K_S.gguf | 3.38 GB |
| Huihui-MoE-5B-A1.7B-abliterated.i1-Q6_K.gguf | 3.98 GB |