πŸ“‹ Model Description


base_model: mPLUG/GUI-Owl-1.5-2B-Instruct
language:
  - en
library_name: transformers
license: mit
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags:
  - arxiv:2602.16855

About

weighted/imatrix quants of https://huggingface.co/mPLUG/GUI-Owl-1.5-2B-Instruct

For a convenient overview and download list, visit our model page for this model.

static quants are available at https://huggingface.co/mradermacher/GUI-Owl-1.5-2B-Instruct-GGUF

This is a vision model - mmproj files (if any) will be in the static repository.

Usage

If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files.
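
For a quick programmatic check, here is a minimal sketch (not part of the original card) using llama-cpp-python; the file name, context size, and sampling settings are illustrative and assume you have already downloaded a quant such as i1-Q4_K_M from this repository:

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# The model path below is an assumption: adjust it to wherever you saved the GGUF file.
from llama_cpp import Llama

llm = Llama(
    model_path="GUI-Owl-1.5-2B-Instruct.i1-Q4_K_M.gguf",  # local GGUF file
    n_ctx=4096,        # context window; pick what fits your memory
    n_gpu_layers=-1,   # offload all layers to GPU if available, 0 for CPU-only
)

# Plain chat completion using the chat template stored in the GGUF metadata.
# Note: for the model's vision capabilities you would also need the mmproj file
# from the static repository and a multimodal-capable runtime.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Describe what a GUI agent does in one sentence."}],
    max_tokens=128,
    temperature=0.7,
)
print(out["choices"][0]["message"]["content"])
```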

Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| GGUF | imatrix | 0.1 | imatrix file (for creating your own quants) |
| GGUF | i1-IQ1_S | 0.6 | for the desperate |
| GGUF | i1-IQ1_M | 0.6 | mostly desperate |
| GGUF | i1-IQ2_XXS | 0.7 | |
| GGUF | i1-IQ2_XS | 0.7 | |
| GGUF | i1-IQ2_S | 0.8 | |
| GGUF | i1-IQ2_M | 0.8 | |
| GGUF | i1-Q2_K_S | 0.8 | very low quality |
| GGUF | i1-IQ3_XXS | 0.9 | lower quality |
| GGUF | i1-Q2_K | 0.9 | IQ3_XXS probably better |
| GGUF | i1-IQ3_XS | 0.9 | |
| GGUF | i1-IQ3_S | 1.0 | beats Q3_K* |
| GGUF | i1-Q3_K_S | 1.0 | IQ3_XS probably better |
| GGUF | i1-IQ3_M | 1.0 | |
| GGUF | i1-Q3_K_M | 1.0 | IQ3_S probably better |
| GGUF | i1-Q3_K_L | 1.1 | IQ3_M probably better |
| GGUF | i1-IQ4_XS | 1.1 | |
| GGUF | i1-IQ4_NL | 1.2 | prefer IQ4_XS |
| GGUF | i1-Q4_0 | 1.2 | fast, low quality |
| GGUF | i1-Q4_K_S | 1.2 | optimal size/speed/quality |
| GGUF | i1-Q4_K_M | 1.2 | fast, recommended |
| GGUF | i1-Q4_1 | 1.2 | |
| GGUF | i1-Q5_K_S | 1.3 | |
| GGUF | i1-Q5_K_M | 1.4 | |
| GGUF | i1-Q6_K | 1.5 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):


And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
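
If you prefer to fetch a specific quant from the table above in code rather than through a browser, a small sketch using huggingface_hub could look like this; the repo id is an assumption based on mradermacher's usual "-i1-GGUF" naming for weighted/imatrix repositories and should be checked against the model page:

```python
# Sketch: fetch one quant with huggingface_hub (pip install huggingface_hub).
# The repo id below is an assumption; verify it on the model page before use.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/GUI-Owl-1.5-2B-Instruct-i1-GGUF",
    filename="GUI-Owl-1.5-2B-Instruct.i1-Q4_K_M.gguf",  # pick any type from the table above
)
print(path)  # local cache path of the downloaded GGUF file
```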

FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.

Thanks

I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

πŸ“‚ GGUF File List

πŸ“ Filename πŸ“¦ Size ⚑ Download
GUI-Owl-1.5-2B-Instruct.i1-IQ1_M.gguf
LFS
518.6 MB Download
GUI-Owl-1.5-2B-Instruct.i1-IQ1_S.gguf
LFS
491.88 MB Download
GUI-Owl-1.5-2B-Instruct.i1-IQ2_M.gguf
LFS Q2
662.98 MB Download
GUI-Owl-1.5-2B-Instruct.i1-IQ2_S.gguf
LFS Q2
627.35 MB Download
GUI-Owl-1.5-2B-Instruct.i1-IQ2_XS.gguf
LFS Q2
602.26 MB Download
GUI-Owl-1.5-2B-Instruct.i1-IQ2_XXS.gguf
LFS Q2
563.13 MB Download
GUI-Owl-1.5-2B-Instruct.i1-IQ3_M.gguf
LFS Q3
854.17 MB Download
GUI-Owl-1.5-2B-Instruct.i1-IQ3_S.gguf
LFS Q3
827.08 MB Download
GUI-Owl-1.5-2B-Instruct.i1-IQ3_XS.gguf
LFS Q3
795.58 MB Download
GUI-Owl-1.5-2B-Instruct.i1-IQ3_XXS.gguf
LFS Q3
719.42 MB Download
GUI-Owl-1.5-2B-Instruct.i1-IQ4_NL.gguf
LFS Q4
1005.58 MB Download
GUI-Owl-1.5-2B-Instruct.i1-IQ4_XS.gguf
LFS Q4
963.58 MB Download
GUI-Owl-1.5-2B-Instruct.i1-Q2_K.gguf
LFS Q2
741.77 MB Download
GUI-Owl-1.5-2B-Instruct.i1-Q2_K_S.gguf
LFS Q2
699.02 MB Download
GUI-Owl-1.5-2B-Instruct.i1-Q3_K_L.gguf
LFS Q3
957.02 MB Download
GUI-Owl-1.5-2B-Instruct.i1-Q3_K_M.gguf
LFS Q3
896.02 MB Download
GUI-Owl-1.5-2B-Instruct.i1-Q3_K_S.gguf
LFS Q3
827.08 MB Download
GUI-Owl-1.5-2B-Instruct.i1-Q4_0.gguf
Recommended LFS Q4
1007.83 MB Download
GUI-Owl-1.5-2B-Instruct.i1-Q4_1.gguf
LFS Q4
1.06 GB Download
GUI-Owl-1.5-2B-Instruct.i1-Q4_K_M.gguf
LFS Q4
1.03 GB Download
GUI-Owl-1.5-2B-Instruct.i1-Q4_K_S.gguf
LFS Q4
1011.08 MB Download
GUI-Owl-1.5-2B-Instruct.i1-Q5_K_M.gguf
LFS Q5
1.17 GB Download
GUI-Owl-1.5-2B-Instruct.i1-Q5_K_S.gguf
LFS Q5
1.15 GB Download
GUI-Owl-1.5-2B-Instruct.i1-Q6_K.gguf
LFS Q6
1.32 GB Download
GUI-Owl-1.5-2B-Instruct.imatrix.gguf
LFS
2 MB Download
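
To enumerate the files above programmatically (for example, to script your own download selection), a short sketch with huggingface_hub could look like this, again assuming the same repo id as in the earlier download sketch:

```python
# Sketch: list the GGUF files in the quant repository with huggingface_hub.
# The repo id is an assumption based on mradermacher's usual naming; adjust if needed.
from huggingface_hub import list_repo_files

files = list_repo_files("mradermacher/GUI-Owl-1.5-2B-Instruct-i1-GGUF")
for name in sorted(f for f in files if f.endswith(".gguf")):
    print(name)
```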