πŸ“‹ Model Description


---
base_model: huihui-ai/Huihui-InternVL3-78B-abliterated
language:
  - multilingual
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
license_name: qwen
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags:
  - internvl
  - custom_code
  - abliterated
  - uncensored
---

About

weighted/imatrix quants of https://huggingface.co/huihui-ai/Huihui-InternVL3-78B-abliterated

For a convenient overview and download list, visit our model page for this model.

static quants are available at https://huggingface.co/mradermacher/Huihui-InternVL3-78B-abliterated-GGUF

This is a vision model; mmproj files (if any) will be in the static repository.
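
To pair one of the quants here with that mmproj file, the sketch below uses llama-cpp-python's LLaVA-style chat handler. This is a minimal sketch, not a confirmed recipe: the file names are placeholders, and whether this handler works for InternVL depends on your llama.cpp build's support for the architecture.

```python
# Minimal sketch (assumptions noted): load an i1 quant together with the
# mmproj file via llama-cpp-python. File names are placeholders, and
# InternVL support through this handler depends on your llama.cpp build.
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

chat_handler = Llava15ChatHandler(
    clip_model_path="Huihui-InternVL3-78B-abliterated.mmproj.gguf"  # assumed name
)
llm = Llama(
    model_path="Huihui-InternVL3-78B-abliterated.i1-Q4_K_M.gguf",
    chat_handler=chat_handler,
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if available
)

response = llm.create_chat_completion(
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "file:///path/to/image.png"}},
            {"type": "text", "text": "Describe this image."},
        ],
    }]
)
print(response["choices"][0]["message"]["content"])
```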

Usage

If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including how to concatenate multi-part files (a sketch follows below).
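
For the multi-part quants in the table below (the PART 1 / PART 2 entries), the parts are plain byte-level splits that must be joined back into a single file before loading. A minimal Python sketch, assuming the usual *.part1of2 / *.part2of2 naming (verify against the actual file names in the repository):

```python
# Sketch: join byte-level split GGUF parts back into one file.
# Assumes plain splits named *.partNofM; lexicographic sorting is fine
# for single-digit part counts.
import glob
import shutil

parts = sorted(glob.glob("Huihui-InternVL3-78B-abliterated.i1-Q6_K.gguf.part*"))
with open("Huihui-InternVL3-78B-abliterated.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)  # stream each part to keep memory use low
```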

Provided Quants

(sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|------|------|---------|-------|
| GGUF | imatrix | 0.1 | imatrix file (for creating your own quants) |
| GGUF | i1-IQ1_M | 23.8 | mostly desperate |
| GGUF | i1-IQ2_XXS | 25.6 | |
| GGUF | i1-IQ2_XS | 27.2 | |
| GGUF | i1-IQ2_S | 28.0 | |
| GGUF | i1-IQ2_M | 29.4 | |
| GGUF | i1-Q2_K_S | 29.7 | very low quality |
| GGUF | i1-Q2_K | 29.9 | IQ3_XXS probably better |
| GGUF | i1-IQ3_XXS | 31.9 | lower quality |
| GGUF | i1-IQ3_XS | 32.9 | |
| GGUF | i1-IQ3_S | 34.6 | beats Q3_K* |
| GGUF | i1-Q3_K_S | 34.6 | IQ3_XS probably better |
| GGUF | i1-IQ3_M | 35.6 | |
| GGUF | i1-Q3_K_M | 37.8 | IQ3_S probably better |
| GGUF | i1-Q3_K_L | 39.6 | IQ3_M probably better |
| GGUF | i1-IQ4_XS | 39.8 | |
| GGUF | i1-Q4_0 | 41.5 | fast, low quality |
| GGUF | i1-Q4_K_S | 44.0 | optimal size/speed/quality |
| GGUF | i1-Q4_1 | 45.8 | |
| GGUF | i1-Q4_K_M | 47.5 | fast, recommended |
| PART 1 PART 2 | i1-Q5_K_S | 51.5 | |
| PART 1 PART 2 | i1-Q5_K_M | 54.5 | |
| PART 1 PART 2 | i1-Q6_K | 64.4 | practically like static Q6_K |
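
To fetch one of these files programmatically, the sketch below uses the huggingface_hub library. The repo id is an assumption based on the usual -i1-GGUF repo naming, so double-check it and the file name against the list above before running.

```python
# Sketch: download a single quant with huggingface_hub
# (pip install huggingface_hub). repo_id is assumed from the
# usual -i1-GGUF repository naming convention.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Huihui-InternVL3-78B-abliterated-i1-GGUF",
    filename="Huihui-InternVL3-78B-abliterated.i1-Q4_K_M.gguf",
)
print(path)  # local cache path of the downloaded GGUF file
```
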
Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

(image: comparison graph of lower-quality quant types)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.

Thanks

I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

πŸ“‚ GGUF File List

πŸ“ Filename πŸ“¦ Size ⚑ Download
Huihui-InternVL3-78B-abliterated.i1-IQ1_M.gguf
LFS
22.11 GB Download
Huihui-InternVL3-78B-abliterated.i1-IQ2_M.gguf
LFS Q2
27.32 GB Download
Huihui-InternVL3-78B-abliterated.i1-IQ2_S.gguf
LFS Q2
26.02 GB Download
Huihui-InternVL3-78B-abliterated.i1-IQ2_XS.gguf
LFS Q2
25.2 GB Download
Huihui-InternVL3-78B-abliterated.i1-IQ2_XXS.gguf
LFS Q2
23.74 GB Download
Huihui-InternVL3-78B-abliterated.i1-IQ3_M.gguf
LFS Q3
33.06 GB Download
Huihui-InternVL3-78B-abliterated.i1-IQ3_S.gguf
LFS Q3
32.12 GB Download
Huihui-InternVL3-78B-abliterated.i1-IQ3_XS.gguf
LFS Q3
30.58 GB Download
Huihui-InternVL3-78B-abliterated.i1-IQ3_XXS.gguf
LFS Q3
29.65 GB Download
Huihui-InternVL3-78B-abliterated.i1-IQ4_XS.gguf
LFS Q4
36.98 GB Download
Huihui-InternVL3-78B-abliterated.i1-Q2_K.gguf
LFS Q2
27.76 GB Download
Huihui-InternVL3-78B-abliterated.i1-Q2_K_S.gguf
LFS Q2
27.54 GB Download
Huihui-InternVL3-78B-abliterated.i1-Q3_K_L.gguf
LFS Q3
36.79 GB Download
Huihui-InternVL3-78B-abliterated.i1-Q3_K_M.gguf
LFS Q3
35.11 GB Download
Huihui-InternVL3-78B-abliterated.i1-Q3_K_S.gguf
LFS Q3
32.12 GB Download
Huihui-InternVL3-78B-abliterated.i1-Q4_0.gguf
Recommended LFS Q4
38.54 GB Download
Huihui-InternVL3-78B-abliterated.i1-Q4_1.gguf
LFS Q4
42.56 GB Download
Huihui-InternVL3-78B-abliterated.i1-Q4_K_M.gguf
LFS Q4
44.16 GB Download
Huihui-InternVL3-78B-abliterated.i1-Q4_K_S.gguf
LFS Q4
40.87 GB Download
Huihui-InternVL3-78B-abliterated.imatrix.gguf
LFS
24.11 MB Download
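
The imatrix.gguf at the end of this list is the importance-matrix data used to build the weighted quants, not a runnable model. If you want to create your own quant from it, a minimal sketch using llama.cpp's llama-quantize (assuming the binary is on PATH and that you already have a full-precision GGUF conversion of the base model):

```python
# Sketch: create your own weighted quant with llama.cpp's llama-quantize,
# using the provided imatrix file. The input file name is an assumption;
# you need a full-precision GGUF conversion of the base model first.
import subprocess

subprocess.run(
    [
        "llama-quantize",
        "--imatrix", "Huihui-InternVL3-78B-abliterated.imatrix.gguf",
        "Huihui-InternVL3-78B-abliterated.f16.gguf",     # assumed input name
        "Huihui-InternVL3-78B-abliterated.Q4_K_M.gguf",  # output file
        "Q4_K_M",                                        # target quant type
    ],
    check=True,
)
```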