πŸ“‹ Model Description


base_model: Qwen/Qwen2.5-1.5B-Instruct
language:
  - en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct/blob/main/LICENSE
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags:
  - chat

About

static quants of https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct

For a convenient overview and download list, visit our model page for this model.

weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-1.5B-Instruct-i1-GGUF

Usage

If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files.
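Rejoining a split GGUF is a plain byte-level concatenation. A minimal sketch, with tiny dummy files standing in for real multi-gigabyte parts (the `.part1of2` naming scheme here is illustrative only; actual part names vary by uploader):

```shell
# Dummy "parts" standing in for a real split GGUF download
# (part naming is hypothetical; check the actual filenames in the repo).
printf 'GGUF-part-one-' > model.gguf.part1of2
printf 'part-two'       > model.gguf.part2of2

# Multi-part GGUF files are rejoined with a simple byte concatenation,
# in part order:
cat model.gguf.part1of2 model.gguf.part2of2 > model.gguf

cat model.gguf   # prints: GGUF-part-one-part-two
```

The same `cat a b > c` pattern works for any number of parts, as long as they are listed in order.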

Provided Quants

(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| GGUF | Q2_K | 0.8 | |
| GGUF | IQ3_XS | 0.8 | |
| GGUF | Q3_K_S | 0.9 | |
| GGUF | IQ3_S | 0.9 | beats Q3_K* |
| GGUF | IQ3_M | 0.9 | |
| GGUF | Q3_K_M | 0.9 | lower quality |
| GGUF | Q3_K_L | 1.0 | |
| GGUF | IQ4_XS | 1.0 | |
| GGUF | Q4_0 | 1.0 | fast, low quality |
| GGUF | Q4_0_4_4 | 1.0 | fast on arm, low quality |
| GGUF | Q4_0_4_8 | 1.0 | fast on arm+i8mm, low quality |
| GGUF | Q4_0_8_8 | 1.0 | fast on arm+sve, low quality |
| GGUF | Q4_K_S | 1.0 | fast, recommended |
| GGUF | IQ4_NL | 1.0 | prefer IQ4_XS |
| GGUF | Q4_K_M | 1.1 | fast, recommended |
| GGUF | Q4_1 | 1.1 | |
| GGUF | Q5_0 | 1.2 | |
| GGUF | Q5_K_S | 1.2 | |
| GGUF | Q5_K_M | 1.2 | |
| GGUF | Q5_1 | 1.3 | |
| GGUF | Q6_K | 1.4 | very good quality |
| GGUF | Q8_0 | 1.7 | fast, best quality |
| GGUF | SOURCE | 3.2 | source gguf, only provided when it was hard to come by |
| GGUF | f16 | 3.2 | 16 bpw, overkill |
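As a rough sanity check on the sizes in the table, bits per weight can be estimated as file size in bits divided by parameter count. A small sketch (the 1.54B parameter count is an assumption based on the model name; quant formats also carry per-block scale overhead, so real figures land slightly above their nominal bit width):

```python
def approx_bpw(size_gb: float, n_params: float = 1.54e9) -> float:
    """Rough bits-per-weight estimate: file size in bits / parameter count.

    Ignores GGUF metadata and per-block scale overhead, so quantized
    files come out a little above their nominal bit width.
    n_params=1.54e9 is an assumed parameter count for a "1.5B" model.
    """
    return size_gb * 1e9 * 8 / n_params

# Checked against the table above (sizes are rounded, so results are coarse):
print(round(approx_bpw(1.7), 1))  # Q8_0 -> ~8.8 bpw
print(round(approx_bpw(3.2), 1))  # f16  -> ~16.6 bpw
```

The f16 figure coming out above 16 bpw is expected with a rounded file size and an assumed parameter count; the point is only that the table's sizes scale consistently with the quant bit widths.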
Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![quant comparison graph](image.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.

Thanks

I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.

πŸ“‚ GGUF File List

πŸ“ Filename πŸ“¦ Size ⚑ Download
Qwen2.5-1.5B-Instruct.IQ3_M.gguf
LFS Q3
740.68 MB Download
Qwen2.5-1.5B-Instruct.IQ3_S.gguf
LFS Q3
727.09 MB Download
Qwen2.5-1.5B-Instruct.IQ3_XS.gguf
LFS Q3
697.8 MB Download
Qwen2.5-1.5B-Instruct.IQ4_NL.gguf
LFS Q4
897.88 MB Download
Qwen2.5-1.5B-Instruct.IQ4_XS.gguf
LFS Q4
860.39 MB Download
Qwen2.5-1.5B-Instruct.Q2_K.gguf
LFS Q2
644.98 MB Download
Qwen2.5-1.5B-Instruct.Q3_K_L.gguf
LFS Q3
839.39 MB Download
Qwen2.5-1.5B-Instruct.Q3_K_M.gguf
LFS Q3
786 MB Download
Qwen2.5-1.5B-Instruct.Q3_K_S.gguf
LFS Q3
725.69 MB Download
Qwen2.5-1.5B-Instruct.Q4_0.gguf
Recommended LFS Q4
891.64 MB Download
Qwen2.5-1.5B-Instruct.Q4_0_4_4.gguf
LFS Q4
891.64 MB Download
Qwen2.5-1.5B-Instruct.Q4_0_4_8.gguf
LFS Q4
891.64 MB Download
Qwen2.5-1.5B-Instruct.Q4_0_8_8.gguf
LFS Q4
891.64 MB Download
Qwen2.5-1.5B-Instruct.Q4_1.gguf
LFS Q4
969.74 MB Download
Qwen2.5-1.5B-Instruct.Q4_K_M.gguf
LFS Q4
940.37 MB Download
Qwen2.5-1.5B-Instruct.Q4_K_S.gguf
LFS Q4
896.75 MB Download
Qwen2.5-1.5B-Instruct.Q5_0.gguf
LFS Q5
1.02 GB Download
Qwen2.5-1.5B-Instruct.Q5_1.gguf
LFS Q5
1.1 GB Download
Qwen2.5-1.5B-Instruct.Q5_K_M.gguf
LFS Q5
1.05 GB Download
Qwen2.5-1.5B-Instruct.Q5_K_S.gguf
LFS Q5
1.02 GB Download
Qwen2.5-1.5B-Instruct.Q6_K.gguf
LFS Q6
1.19 GB Download
Qwen2.5-1.5B-Instruct.Q8_0.gguf
LFS Q8
1.53 GB Download
Qwen2.5-1.5B-Instruct.SOURCE.gguf
LFS
2.88 GB Download
Qwen2.5-1.5B-Instruct.f16.gguf
LFS FP16
2.88 GB Download