---
base_model: OpenGVLab/InternVL3-14B
datasets:
  - OpenGVLab/MMPR-v1.2
language:
  - multilingual
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
license_name: qwen
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags:
  - internvl
  - custom_code
---

About

weighted/imatrix quants of https://huggingface.co/OpenGVLab/InternVL3-14B

For a convenient overview and download list, visit our model page for this model.

static quants are available at https://huggingface.co/mradermacher/InternVL3-14B-GGUF

This is a vision model - mmproj files (if any) will be in the static repository.
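If you want to check whether mmproj files have actually been published, here is a minimal sketch (assuming the huggingface_hub package) that lists them in the static repository named above:

```python
# A minimal sketch, assuming `pip install huggingface_hub`, that looks for
# mmproj (vision projector) files in the static repository named above.
from huggingface_hub import list_repo_files

mmproj_files = [f for f in list_repo_files("mradermacher/InternVL3-14B-GGUF")
                if "mmproj" in f]
print(mmproj_files or "no mmproj files published")
```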

Usage

If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files.
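As a concrete starting point, here is a minimal Python sketch using huggingface_hub and the llama-cpp-python bindings. The repository id mradermacher/InternVL3-14B-i1-GGUF is an assumption based on the usual naming of these imatrix repositories; the filename is taken from the file list below.

```python
# A minimal sketch, assuming `pip install llama-cpp-python huggingface_hub`.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download a single-file quant. The repo id is assumed from the usual
# i1-GGUF naming pattern; the filename is from the file list below.
model_path = hf_hub_download(
    repo_id="mradermacher/InternVL3-14B-i1-GGUF",
    filename="InternVL3-14B.i1-Q4_K_M.gguf",
)

# Load and run a text-only completion. Image input would additionally need
# the mmproj file from the static repository and llama.cpp's multimodal tooling.
llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("Q: What is an imatrix quant?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```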

Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| GGUF | i1-IQ1_S | 3.7 | for the desperate |
| GGUF | i1-IQ1_M | 4.0 | mostly desperate |
| GGUF | i1-IQ2_XXS | 4.4 | |
| GGUF | i1-IQ2_XS | 4.8 | |
| GGUF | i1-IQ2_S | 5.1 | |
| GGUF | i1-IQ2_M | 5.5 | |
| GGUF | i1-Q2_K_S | 5.5 | very low quality |
| GGUF | i1-Q2_K | 5.9 | IQ3_XXS probably better |
| GGUF | i1-IQ3_XXS | 6.0 | lower quality |
| GGUF | i1-IQ3_XS | 6.5 | |
| GGUF | i1-Q3_K_S | 6.8 | IQ3_XS probably better |
| GGUF | i1-IQ3_S | 6.8 | beats Q3_K* |
| GGUF | i1-IQ3_M | 7.0 | |
| GGUF | i1-Q3_K_M | 7.4 | IQ3_S probably better |
| GGUF | i1-Q3_K_L | 8.0 | IQ3_M probably better |
| GGUF | i1-IQ4_XS | 8.2 | |
| GGUF | i1-Q4_0 | 8.6 | fast, low quality |
| GGUF | i1-IQ4_NL | 8.6 | prefer IQ4_XS |
| GGUF | i1-Q4_K_S | 8.7 | optimal size/speed/quality |
| GGUF | i1-Q4_K_M | 9.1 | fast, recommended |
| GGUF | i1-Q4_1 | 9.5 | |
| GGUF | i1-Q5_K_S | 10.4 | |
| GGUF | i1-Q5_K_M | 10.6 | |
| GGUF | i1-Q6_K | 12.2 | practically like static Q6_K |
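To make the size trade-off concrete, here is a purely illustrative helper (pick_quant is hypothetical, not part of any repository tooling) that picks the largest quant fitting a given memory budget, using the Size/GB column from the table above:

```python
# Illustrative only: choose the largest imatrix quant that fits a memory
# budget, using the Size/GB column from the table above.
QUANT_SIZES_GB = {
    "i1-IQ1_S": 3.7, "i1-IQ1_M": 4.0, "i1-IQ2_XXS": 4.4, "i1-IQ2_XS": 4.8,
    "i1-IQ2_S": 5.1, "i1-IQ2_M": 5.5, "i1-Q2_K_S": 5.5, "i1-Q2_K": 5.9,
    "i1-IQ3_XXS": 6.0, "i1-IQ3_XS": 6.5, "i1-Q3_K_S": 6.8, "i1-IQ3_S": 6.8,
    "i1-IQ3_M": 7.0, "i1-Q3_K_M": 7.4, "i1-Q3_K_L": 8.0, "i1-IQ4_XS": 8.2,
    "i1-Q4_0": 8.6, "i1-IQ4_NL": 8.6, "i1-Q4_K_S": 8.7, "i1-Q4_K_M": 9.1,
    "i1-Q4_1": 9.5, "i1-Q5_K_S": 10.4, "i1-Q5_K_M": 10.6, "i1-Q6_K": 12.2,
}

def pick_quant(budget_gb: float) -> str:
    """Return the largest quant type whose file fits within budget_gb."""
    fitting = {q: s for q, s in QUANT_SIZES_GB.items() if s <= budget_gb}
    if not fitting:
        raise ValueError(f"no quant fits in {budget_gb} GB")
    return max(fitting, key=fitting.get)

print(pick_quant(9.0))  # -> i1-Q4_K_S (8.7 GB), leaving headroom for KV cache
```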
Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

(ikawrakow's quant comparison graph; image not reproduced here)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.

Thanks

I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

GGUF File List

| Filename | Size |
|:---------|-----:|
| InternVL3-14B.i1-IQ1_M.gguf | 3.6 GB |
| InternVL3-14B.i1-IQ1_S.gguf | 3.36 GB |
| InternVL3-14B.i1-IQ2_M.gguf | 4.99 GB |
| InternVL3-14B.i1-IQ2_S.gguf | 4.66 GB |
| InternVL3-14B.i1-IQ2_XS.gguf | 4.38 GB |
| InternVL3-14B.i1-IQ2_XXS.gguf | 4.01 GB |
| InternVL3-14B.i1-IQ3_M.gguf | 6.44 GB |
| InternVL3-14B.i1-IQ3_S.gguf | 6.23 GB |
| InternVL3-14B.i1-IQ3_XS.gguf | 5.94 GB |
| InternVL3-14B.i1-IQ3_XXS.gguf | 5.54 GB |
| InternVL3-14B.i1-IQ4_NL.gguf | 7.96 GB |
| InternVL3-14B.i1-IQ4_XS.gguf | 7.56 GB |
| InternVL3-14B.i1-Q2_K.gguf | 5.37 GB |
| InternVL3-14B.i1-Q2_K_S.gguf | 5.02 GB |
| InternVL3-14B.i1-Q3_K_L.gguf | 7.38 GB |
| InternVL3-14B.i1-Q3_K_M.gguf | 6.83 GB |
| InternVL3-14B.i1-Q3_K_S.gguf | 6.2 GB |
| InternVL3-14B.i1-Q4_0.gguf | 7.95 GB |
| InternVL3-14B.i1-Q4_1.gguf | 8.74 GB |
| InternVL3-14B.i1-Q4_K_M.gguf | 8.37 GB |
| InternVL3-14B.i1-Q4_K_S.gguf | 7.98 GB |
| InternVL3-14B.i1-Q5_K_M.gguf | 9.78 GB |
| InternVL3-14B.i1-Q5_K_S.gguf | 9.56 GB |
| InternVL3-14B.i1-Q6_K.gguf | 11.29 GB |