πŸ“‹ Model Description

This repo contains GGUF .mmproj files intended for use in KoboldCpp alongside other GGUF models.

Please pick the correct projector model to load for your architecture (e.g., all Mistral 7B-based models should use the Mistral 7B projector).

Load them in KoboldCpp with the `--mmproj` flag, or from the "Model Files" tab.
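For example, a command-line launch might look like the sketch below. The filenames are placeholders for whichever text model you are running and its matching projector from this repo; adjust them to your setup.

```bash
# Minimal sketch: load a text model together with its matching vision projector.
# Both filenames here are examples, not required names.
koboldcpp --model mistral-7b-instruct.Q4_K_M.gguf \
          --mmproj mistral-7b-mmproj-v1.5-Q4_1.gguf
```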


Once the projector is loaded, refresh KoboldAI Lite, then click Add Img and upload your image. Clicking the uploaded image should confirm that Vision is enabled.

πŸ“‚ GGUF File List

πŸ“ Filename πŸ“¦ Size ⚑ Download
LLaMA3-8B_mmproj-Q4_1.gguf
LFS Q4
187.73 MB Download
Ministral-3-3B-Instruct-2512-mmproj-Q8_0.gguf
LFS Q8
426.37 MB Download
Qwen2-VL-2B-mmproj-q5_1.gguf
LFS Q5
482.21 MB Download
Qwen2-VL-7B-mmproj-q5_1.gguf
LFS Q5
489.72 MB Download
Qwen2.5-Omni-3B-mmproj-Q8_0.gguf
LFS Q8
1.43 GB Download
Qwen2.5-Omni-7B-mmproj-Q8_0.gguf
LFS Q8
1.44 GB Download
gemma3-12b-mmproj.gguf
LFS
814.63 MB Download
gemma3-27b-mmproj.gguf
LFS
818 MB Download
gemma3-4b-mmproj.gguf
LFS
811.82 MB Download
llama-13b-mmproj-v1.5.Q4_1.gguf
LFS Q4
193.99 MB Download
llama-7b-mmproj-v1.5-Q4_0.gguf
Recommended LFS Q4
169.2 MB Download
minicpm-mmproj-model-f16.gguf
LFS FP16
996.04 MB Download
mistral-7b-mmproj-v1.5-Q4_1.gguf
LFS Q4
187.73 MB Download
mmproj-mistralai_Mistral-Small-3.1-24B-Instruct-2503-f16.gguf
LFS FP16
837.38 MB Download
obsidian-3b_mmproj-Q4_1.gguf
LFS Q4
180.69 MB Download
pixtral-12b-mmproj-f16.gguf
LFS FP16
829.76 MB Download
qwen2.5-vl-7b-vision-mmproj-f16.gguf
LFS FP16
1.26 GB Download
yi-34b-mmproj-v1.6-Q4_1.gguf
LFS Q4
210.28 MB Download