πŸ“‹ Model Description


---
license: apache-2.0
datasets:
  - mistralai/MM-MT-Bench
language:
  - en
  - fr
  - es
  - de
  - it
  - pt
  - nl
  - zh
  - ja
  - ko
  - ar
base_model:
  - mistralai/Ministral-3-14B-Instruct-2512-BF16
new_version: EnlistedGhost/Ministral-3-14B-Instruct-2512-GGUF
pipeline_tag: image-text-to-text
tags:
  - MistralAI
  - Ministral
  - Ministral-3
  - Ollama
  - Llama.cpp
  - GGUF
  - Image-Text-to-Text
  - Conversational
  - Quantize
  - Multimodal
  - Mistral3
---

Example image

------------------------------------------------
- Model Details and Specifications: -
------------------------------------------------

Ministral-3 14B Instruct 2512 (GGUF)

This release contains:

GGUF model files converted and quantized from the original Safetensors release, compatible with both Llama.cpp and Ollama.
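As a hedged sketch of how these files fit together (assuming a local llama.cpp build; the Q4_K_M quant and the 8192-token context are illustrative choices, not requirements), the server needs a model file plus the `mmproj` vision projector from the file list below; `-m`, `--mmproj`, and `-c` are standard `llama-server` options:

```python
import subprocess

# Model and vision projector filenames from this repository's file list.
model = "Ministral-3-14B-Instruct-2512-Q4_K_M.gguf"
mmproj = "mmproj-Ministral-3-14B-Instruct-2512-F16.gguf"  # vision projector

# Assemble the llama.cpp server invocation; -c sets the context length.
cmd = ["llama-server", "-m", model, "--mmproj", mmproj, "-c", "8192"]
print(" ".join(cmd))
# subprocess.run(cmd)  # uncomment to actually launch the server locally
```

Without the `--mmproj` file the model still works for text, but image input is unavailable.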

Quantized GGUF version of:

  • Ministral-3-14B-Instruct-2512-BF16
    (by MistralAI)

Original Model Link:

----------------------------------------------

-------------------------------------------------------------
- GGUF Conversion and Quantization Details: -
-------------------------------------------------------------

Software used to convert Safetensors to GGUF:

Software used to create Quantized GGUF Files:

Specific GitHub Commit Point:

Converted to GGUF and Quantized by:

----------------------------------------------

--------------------------
---- Original Info ----
--------------------------

(Crossposted from the original model link in the "Model Details and Specifications" section above):



Ministral 3 14B Instruct 2512 BF16

The largest model in the Ministral 3 family, Ministral 3 14B offers frontier capabilities and performance comparable to its larger Mistral Small 3.2 24B counterpart. A powerful and efficient language model with vision capabilities.

This model is the instruct post-trained version, fine-tuned for instruction following, making it ideal for chat and instruction-based use cases.

The Ministral 3 family is designed for edge deployment, capable of running on a wide range of hardware. Ministral 3 14B can even be deployed locally, capable of fitting in 32GB of VRAM in BF16, and less than 24GB of RAM/VRAM when quantized.
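The memory figures above can be checked with back-of-envelope arithmetic (a rough sketch, weights only: it assumes the ~13.9B total parameter count stated below and decimal gigabytes, and ignores KV cache and runtime overhead):

```python
# Approximate weight-storage cost for Ministral 3 14B
# (13.5B language model + 0.4B vision encoder = ~13.9B parameters).
PARAMS = 13.9e9

def weight_gb(bits_per_weight: float, params: float = PARAMS) -> float:
    """Approximate weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return params * bits_per_weight / 8 / 1e9

bf16 = weight_gb(16)   # ~27.8 GB -> consistent with the 32 GB VRAM figure
q4   = weight_gb(4.5)  # ~7.8 GB  -> in line with the ~8 GB Q4 K-quant files
print(f"BF16: {bf16:.1f} GB, ~Q4: {q4:.1f} GB")
```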

We provide a no-loss FP8 version here, you can find other formats and quantizations in the Ministral 3 - Additional Checkpoints collection.

Key Features

Ministral 3 14B consists of two main architectural components:
  • 13.5B Language Model
  • 0.4B Vision Encoder

The Ministral 3 14B Instruct model offers the following capabilities:

  • Vision: Enables the model to analyze images and provide insights based on visual content, in addition to text.
  • Multilingual: Supports dozens of languages, including English, French, Spanish, German, Italian, Portuguese, Dutch, Chinese, Japanese, Korean, and Arabic.
  • System Prompt: Maintains strong adherence and support for system prompts.
  • Agentic: Offers best-in-class agentic capabilities with native function calling and JSON outputting.
  • Edge-Optimized: Delivers best-in-class performance at a small scale, deployable anywhere.
  • Apache 2.0 License: Open-source license allowing usage and modification for both commercial and non-commercial purposes.
  • Large Context Window: Supports a 256k context window.
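The native function-calling capability can be exercised through the OpenAI-compatible `/v1/chat/completions` route that `llama-server` exposes. A minimal sketch of such a request payload (the `get_weather` tool, the served model name, and the endpoint port are hypothetical placeholders for illustration):

```python
import json

# A hypothetical tool definition in the OpenAI tools schema.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # placeholder tool for illustration
        "description": "Look up current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

payload = {
    "model": "ministral-3-14b",  # whatever name you serve the GGUF under
    "messages": [{"role": "user", "content": "Weather in Paris?"}],
    "tools": tools,
    "tool_choice": "auto",  # let the model decide whether to call the tool
}

# POST this body to e.g. http://localhost:8080/v1/chat/completions
body = json.dumps(payload)
print(body[:60])
```

When the model decides to call the tool, the response carries a `tool_calls` entry with JSON arguments instead of plain text.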

Use Cases

Private AI deployments where advanced capabilities meet practical hardware constraints:
  • Private/custom chat and AI assistant deployments in constrained environments
  • Advanced local agentic use cases
  • Fine-tuning and specialization
  • And more...

Bringing advanced AI capabilities to most environments.

Ministral 3 Family

| Model Name | Type | Precision | Link |
|---|---|---|---|
| Ministral 3 3B Base 2512 | Base pre-trained | BF16 | Hugging Face |
| Ministral 3 3B Instruct 2512 | Instruct post-trained | BF16 | Hugging Face |
| Ministral 3 3B Reasoning 2512 | Reasoning capable | BF16 | Hugging Face |
| Ministral 3 8B Base 2512 | Base pre-trained | BF16 | Hugging Face |
| Ministral 3 8B Instruct 2512 | Instruct post-trained | BF16 | Hugging Face |
| Ministral 3 8B Reasoning 2512 | Reasoning capable | BF16 | Hugging Face |
| Ministral 3 14B Base 2512 | Base pre-trained | BF16 | Hugging Face |
| Ministral 3 14B Instruct 2512 | Instruct post-trained | BF16 | Hugging Face |
| Ministral 3 14B Reasoning 2512 | Reasoning capable | BF16 | Hugging Face |
Other formats available here.

Benchmark Results

We compare Ministral 3 to similarly sized models.

Reasoning

| Model | AIME25 | AIME24 | GPQA Diamond | LiveCodeBench |
|---|---|---|---|---|
| Ministral 3 14B | 0.850 | 0.898 | 0.712 | 0.646 |
| Qwen3-14B (Thinking) | 0.737 | 0.837 | 0.663 | 0.593 |
| Ministral 3 8B | 0.787 | 0.860 | 0.668 | 0.616 |
| Qwen3-VL-8B-Thinking | 0.798 | 0.860 | 0.671 | 0.580 |
| Ministral 3 3B | 0.721 | 0.775 | 0.534 | 0.548 |
| Qwen3-VL-4B-Thinking | 0.697 | 0.729 | 0.601 | 0.513 |

Instruct

| Model | Arena Hard | WildBench | MATH Maj@1 | MM MTBench |
|---|---|---|---|---|
| Ministral 3 14B | 0.551 | 68.5 | 0.904 | 8.49 |
| Qwen3 14B (Non-Thinking) | 0.427 | 65.1 | 0.870 | NOT MULTIMODAL |
| Gemma3-12B-Instruct | 0.436 | 63.2 | 0.854 | 6.70 |
| Ministral 3 8B | 0.509 | 66.8 | 0.876 | 8.08 |
| Qwen3-VL-8B-Instruct | 0.528 | 66.3 | 0.946 | 8.00 |
| Ministral 3 3B | 0.305 | 56.8 | 0.830 | 7.83 |
| Qwen3-VL-4B-Instruct | 0.438 | 56.8 | 0.900 | 8.01 |
| Qwen3-VL-2B-Instruct | 0.163 | 42.2 | 0.786 | 6.36 |
| Gemma3-4B-Instruct | 0.318 | 49.1 | 0.759 | 5.23 |

Base

| Model | Multilingual MMLU | MATH CoT 2-Shot | AGIEval 5-shot | MMLU Redux 5-shot | MMLU 5-shot | TriviaQA 5-shot |
|---|---|---|---|---|---|---|
| Ministral 3 14B | 0.742 | 0.676 | 0.648 | 0.820 | 0.794 | 0.749 |
| Qwen3 14B Base | 0.754 | 0.620 | 0.661 | 0.837 | 0.804 | 0.703 |
| Gemma 3 12B Base | 0.690 | 0.487 | 0.587 | 0.766 | 0.745 | 0.788 |
| Ministral 3 8B | 0.706 | 0.626 | 0.591 | 0.793 | 0.761 | 0.681 |
| Qwen 3 8B Base | 0.700 | 0.576 | 0.596 | 0.794 | 0.760 | 0.639 |
| Ministral 3 3B | 0.652 | 0.601 | 0.511 | 0.735 | 0.707 | 0.592 |
| Qwen 3 4B Base | 0.677 | 0.405 | 0.570 | 0.759 | 0.713 | 0.530 |
| Gemma 3 4B Base | 0.516 | 0.294 | 0.430 | 0.626 | 0.589 | 0.640 |

License

This model is licensed under the Apache 2.0 License.

You must not use this model in a manner that infringes, misappropriates, or otherwise violates any third party’s rights, including intellectual property rights.

πŸ“‚ GGUF File List

πŸ“ Filename πŸ“¦ Size ⚑ Download
Ministral-3-14B-Instruct-2512-BF16.gguf
LFS FP16
25.17 GB Download
Ministral-3-14B-Instruct-2512-IQ2_M.gguf
LFS Q2
4.51 GB Download
Ministral-3-14B-Instruct-2512-IQ3_L.gguf
LFS Q3
6.62 GB Download
Ministral-3-14B-Instruct-2512-IQ3_M.gguf
LFS Q3
5.84 GB Download
Ministral-3-14B-Instruct-2512-IQ4_L.gguf
LFS Q4
8.01 GB Download
Ministral-3-14B-Instruct-2512-IQ4_M.gguf
LFS Q4
6.89 GB Download
Ministral-3-14B-Instruct-2512-IQ4_S.gguf
LFS Q4
7.47 GB Download
Ministral-3-14B-Instruct-2512-IQ4_XS.gguf
LFS Q4
6.9 GB Download
Ministral-3-14B-Instruct-2512-Q2_K.gguf
LFS Q2
5.68 GB Download
Ministral-3-14B-Instruct-2512-Q2_K_L.gguf
LFS Q2
6.43 GB Download
Ministral-3-14B-Instruct-2512-Q2_K_S.gguf
LFS Q2
4.89 GB Download
Ministral-3-14B-Instruct-2512-Q3_K_L.gguf
LFS Q3
6.96 GB Download
Ministral-3-14B-Instruct-2512-Q3_K_M.gguf
LFS Q3
6.47 GB Download
Ministral-3-14B-Instruct-2512-Q3_K_S.gguf
LFS Q3
5.9 GB Download
Ministral-3-14B-Instruct-2512-Q3_K_XL.gguf
LFS Q3
7.68 GB Download
Ministral-3-14B-Instruct-2512-Q4_K_M.gguf
Recommended LFS Q4
8.35 GB Download
Ministral-3-14B-Instruct-2512-Q4_K_S.gguf
LFS Q4
7.46 GB Download
Ministral-3-14B-Instruct-2512-Q5_K_M.gguf
LFS Q5
9.26 GB Download
Ministral-3-14B-Instruct-2512-Q5_K_S.gguf
LFS Q5
8.82 GB Download
Ministral-3-14B-Instruct-2512-Q5_K_XL.gguf
LFS Q5
9.67 GB Download
Ministral-3-14B-Instruct-2512-Q6_K.gguf
LFS Q6
10.33 GB Download
Ministral-3-14B-Instruct-2512-Q6_K_L.gguf
LFS Q6
11.39 GB Download
Ministral-3-14B-Instruct-2512-Q6_K_M.gguf
LFS Q6
11.27 GB Download
Ministral-3-14B-Instruct-2512-Q8_0.gguf
LFS Q8
13.37 GB Download
Ministral-3-14B-Instruct-2512-Q8_0_L.gguf
LFS Q8
14.55 GB Download
mmproj-Ministral-3-14B-Instruct-2512-F16.gguf
LFS FP16
837.38 MB Download
mmproj-Ministral-3-14B-Instruct-2512-F32.gguf
LFS
1.64 GB Download
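To compare quants, the file sizes above can be converted into approximate effective bits per weight (a rough sketch: it assumes the ~13.9B total parameter count from the model description, treats the listed sizes as decimal gigabytes, and ignores GGUF metadata and the separate mmproj file):

```python
PARAMS = 13.9e9  # 13.5B language model + 0.4B vision encoder

def bits_per_weight(file_gb: float, params: float = PARAMS) -> float:
    """Approximate effective bits per weight from a GGUF file size."""
    return file_gb * 1e9 * 8 / params

print(f"Q4_K_M: {bits_per_weight(8.35):.1f} bpw")   # ~4.8
print(f"Q8_0:   {bits_per_weight(13.37):.1f} bpw")  # ~7.7
```

The effective figure sits below the nominal quant width because some tensors (embeddings, some attention weights) are kept at different precisions than the bulk of the model.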