---
license: apache-2.0
language:
  - en
  - ja
  - fr
  - de
  - es
  - it
  - pt
  - ru
  - zh
base_model:
  - mistralai/Pixtral-12B-2409
pipeline_tag: image-text-to-text
new_version: EnlistedGhost/Pixtral-12B-2409-GGUF
datasets:
  - mistralai/MM-MT-Bench
tags:
  - Pixtral
  - 12B
  - Vision
  - Conversational
  - Ollama
  - ggml
  - gguf
  - Image-Text-to-Text
  - Multimodal
  - MistralAI
  - Llama.cpp
  - Quantized
---
# Pixtral-12B-2409 (GGUF + Ollama Patched)

**The first official MistralAI Pixtral-12B to be GGUF-converted and GGUF-quantized!**

**NEW UPDATES!** (As of: November 27th, 2025) (Read below!)

-------------------------------------------------------------------------
## Description

EnlistedGhost's GGUF/quantized repository of MistralAI Pixtral-12B-2409.
This release includes GGUF + Ollama-patched model files and three working
vision-projector (mmproj) files, offering full capabilities in Ollama or Llama.cpp.

No modifications, edits, or configurations are required to use this model with
Ollama; it works natively! Both vision and text work with Ollama. (^.^)

## Personal Notes

In my experience, interaction quality with this official MistralAI release is
higher than with the other versions available. The other Pixtral-12B repositories
are community conversions; the weights MistralAI published differ from the
community versions and, in my testing, perform noticeably better than the
community-available releases.
Thank you for taking the time to read this!

Some explanation for this claim:

The Pixtral-12B GGUF/quantized releases currently available on Hugging Face
are from mistral-community, not MistralAI themselves. So I am very excited to
offer the Hugging Face community an Ollama- and Llama.cpp-compatible version
of the officially released MistralAI/Pixtral-12B-2409 multimodal vision model!

**Public Notice:** Nothing in the statements or wording of this release is
meant to imply, suggest, or portray mistral-community or their releases in a
negative manner! I have nothing but the highest respect for mistral-community
and their quality work. (Be sure to check them out too!)

The wording I use is only to draw a distinction: these GGUF/quantized files
are derived directly from the official MistralAI weights.
*(Sorry for the long disclaimer; I just have to be very clear that I am not
in any way saying anything negative about the already-quantized, GGUF-converted
Pixtral-12B from mistral-community.)*

Happy Inferencing!

-- Jon Z (EnlistedGhost)

---------------------------------------------

## Model Updates (As of: November 27th, 2025)

- Updated: all GGUF model files replaced with extremely high-quality GGUF files (you won't be disappointed!)
- Final quantized and full-BF16 model files have been uploaded!

## How to Run This Model Using Ollama

You can run this model with the `ollama run` command. Simply copy and paste
one of the commands below into your console, terminal, or PowerShell window.

| Quant Type | File Size | Command |
|------------|-----------|---------|
| Q2KS | 4.78 GB | `ollama run hf.co/EnlistedGhost/Pixtral-12B-2409-GGUF:Q2KS` |
| Q2K | 5.12 GB | `ollama run hf.co/EnlistedGhost/Pixtral-12B-2409-GGUF:Q2K` |
| Q3KS | 5.62 GB | `ollama run hf.co/EnlistedGhost/Pixtral-12B-2409-GGUF:Q3KS` |
| Q3KM | 6.35 GB | `ollama run hf.co/EnlistedGhost/Pixtral-12B-2409-GGUF:Q3KM` |
| Q4KS | 7.29 GB | `ollama run hf.co/EnlistedGhost/Pixtral-12B-2409-GGUF:Q4KS` |
| Q4KM | 7.65 GB | `ollama run hf.co/EnlistedGhost/Pixtral-12B-2409-GGUF:Q4KM` |
| Q4KXL | 7.98 GB | `ollama run hf.co/EnlistedGhost/Pixtral-12B-2409-GGUF:Q4KXL` |
| Q5KS | 8.43 GB | `ollama run hf.co/EnlistedGhost/Pixtral-12B-2409-GGUF:Q5KS` |
| Q5KM | 8.93 GB | `ollama run hf.co/EnlistedGhost/Pixtral-12B-2409-GGUF:Q5KM` |
| Q5KXL | 9.14 GB | `ollama run hf.co/EnlistedGhost/Pixtral-12B-2409-GGUF:Q5KXL` |
| Q6K | 10.1 GB | `ollama run hf.co/EnlistedGhost/Pixtral-12B-2409-GGUF:Q6K` |
| Q6KM | 10.4 GB | `ollama run hf.co/EnlistedGhost/Pixtral-12B-2409-GGUF:Q6KM` |
| Q6KXL | 11.6 GB | `ollama run hf.co/EnlistedGhost/Pixtral-12B-2409-GGUF:Q6KXL` |
| Q80 | 13.0 GB | `ollama run hf.co/EnlistedGhost/Pixtral-12B-2409-GGUF:Q80` |
| F16 | 24.5 GB | `ollama run hf.co/EnlistedGhost/Pixtral-12B-2409-GGUF:F16` |
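Because this is a multimodal model, you can also pass an image to it from the
command line. A minimal sketch, assuming the Q4KM quant and a local image file
(`./example.jpg` is a placeholder path): Ollama detects image file paths
included in the prompt for vision-capable models.

```shell
# Pull (on first use) and run a mid-size quant, then ask about a local image.
# The image path below is a placeholder; replace it with a real file on disk.
ollama run hf.co/EnlistedGhost/Pixtral-12B-2409-GGUF:Q4KM \
  "Describe this image: ./example.jpg"
```

The same prompt style works interactively: start `ollama run …` with no prompt
argument and type the image path inside your message.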
### mmproj (Vision Projector) Files

| Quant Type | File Size | Download Link |
|------------|-----------|---------------|
| Q80 | 465 MB | [mmproj Vision Pixtral-12B Projector:Q80] |
| F16 | 870 MB | [mmproj Vision Pixtral-12B Projector:F16] |
| F32 | 1.74 GB | [mmproj Vision Pixtral-12B Projector:F32] |
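In Llama.cpp, the quantized model file and one of the mmproj projector files
above are loaded together. A minimal sketch using llama.cpp's multimodal CLI
(binary name and flags as in recent llama.cpp builds; older releases shipped
per-model CLIs instead, so check your build):

```shell
# Pair a quantized model file with an mmproj vision-projector file.
# File names are the ones published in this repository; the image path
# is a placeholder.
./llama-mtmd-cli \
  -m Pixtral-12B-2409-Q4_K_M.gguf \
  --mmproj mmproj-Pixtral-12B-2409-F16.gguf \
  --image ./example.jpg \
  -p "Describe this image."
```

A higher-precision projector (F16/F32) costs little relative to the model
weights, so it is a reasonable default even with heavily quantized models.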
-------------------------------------------------------------------------------

## Intended Use

Same as the original mistralai/Pixtral-12B-2409.

## Out-of-Scope Use

Same as the original mistralai/Pixtral-12B-2409.

## Bias, Risks, and Limitations

Same as the original mistralai/Pixtral-12B-2409.

## Training Details

Training sets and data are those of the original mistralai/Pixtral-12B-2409
(this release is a direct offshoot/descendant of that model).

## Evaluation

- This model has NOT been evaluated in any form, scope, or method.
- !!! USE AT YOUR OWN RISK !!!
- !!! NO WARRANTY IS PROVIDED OF ANY KIND !!!

## Citation (Original Paper)

[MistralAI Pixtral-12B Original Paper]

## Detailed Release Information

### Attributions (Credits)

A big thank-you is extended to the sources credited below!
These contributions are what made this release possible!

*Important notice: this is NOT a copy/paste release. I created unique
quantized files that were then altered further to work properly with the
Ollama software. This resulted in the first publicly available
Pixtral-12B-2409 model that runs natively on Ollama.*

## Model Card Authors and Contact

[EnlistedGhost]

## πŸ“‚ GGUF File List

| Filename | Quant | Size | Notes |
|----------|-------|------|-------|
| Pixtral-12B-2409-BF16.gguf | BF16 | 22.82 GB | |
| Pixtral-12B-2409-IQ3_M.gguf | IQ3_M | 5.33 GB | |
| Pixtral-12B-2409-IQ4_XS.gguf | IQ4_XS | 6.51 GB | |
| Pixtral-12B-2409-Q2_K.gguf | Q2_K | 4.77 GB | |
| Pixtral-12B-2409-Q2_K_L.gguf | Q2_K_L | 6.5 GB | |
| Pixtral-12B-2409-Q2_K_M.gguf | Q2_K_M | 6 GB | |
| Pixtral-12B-2409-Q2_K_XL.gguf | Q2_K_XL | 7.39 GB | |
| Pixtral-12B-2409-Q3_K_L.gguf | Q3_K_L | 6.36 GB | |
| Pixtral-12B-2409-Q3_K_M.gguf | Q3_K_M | 5.91 GB | |
| Pixtral-12B-2409-Q3_K_S.gguf | Q3_K_S | 5.4 GB | |
| Pixtral-12B-2409-Q3_K_XL.gguf | Q3_K_XL | 6.66 GB | |
| Pixtral-12B-2409-Q4_K_M.gguf | Q4_K_M | 7.17 GB | Recommended |
| Pixtral-12B-2409-Q4_K_S.gguf | Q4_K_S | 6.79 GB | |
| Pixtral-12B-2409-Q4_K_XL.gguf | Q4_K_XL | 8.72 GB | |
| Pixtral-12B-2409-Q5_K_M.gguf | Q5_K_M | 8.21 GB | |
| Pixtral-12B-2409-Q5_K_S.gguf | Q5_K_S | 8.02 GB | |
| Pixtral-12B-2409-Q5_K_XL.gguf | Q5_K_XL | 9.71 GB | |
| Pixtral-12B-2409-Q6_K.gguf | Q6_K | 9.37 GB | |
| Pixtral-12B-2409-Q6_K_L.gguf | Q6_K_L | 9.67 GB | |
| Pixtral-12B-2409-Q6_K_XL.gguf | Q6_K_XL | 10.57 GB | |
| Pixtral-12B-2409-Q8_0.gguf | Q8_0 | 12.13 GB | |
| mmproj-Pixtral-12B-2409-F16.gguf | F16 | 829.76 MB | Vision projector |
| mmproj-Pixtral-12B-2409-F32.gguf | F32 | 1.62 GB | Vision projector |
| mmproj-Pixtral-12B-2409-Q8_0.gguf | Q8_0 | 443.14 MB | Vision projector |