πŸ“‹ Model Description

Devstral-Small-2-24B-Instruct-2512 GGUF

Base model: mistralai/Devstral-Small-2-24B-Instruct-2512

Recommended way to run this model:

```
llama-server -hf ggml-org/Devstral-Small-2-24B-Instruct-2512-GGUF -c 0
```

(`-c 0` tells llama-server to use the context length stored in the model file.)

Then open http://localhost:8080 in a browser to use the built-in web UI, or query the server's OpenAI-compatible API.
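As a minimal sketch, the running server can also be queried programmatically through its OpenAI-compatible `/v1/chat/completions` endpoint. The prompt and sampling parameters below are illustrative, and the server is assumed to be listening on the default port 8080:

```python
import json
import urllib.request


def build_payload(prompt: str) -> dict:
    # Illustrative request body for the OpenAI-compatible chat API;
    # temperature is an arbitrary example value.
    return {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }


def chat(prompt: str, base_url: str = "http://localhost:8080") -> str:
    data = json.dumps(build_payload(prompt)).encode("utf-8")
    req = urllib.request.Request(
        base_url + "/v1/chat/completions",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    # Send the request and extract the assistant's reply.
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read())
    return body["choices"][0]["message"]["content"]


# Example (requires the server started above to be running):
# print(chat("Write a Python function that reverses a string."))
```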

πŸ“‚ GGUF File List

πŸ“ Filename πŸ“¦ Size ⚑ Download
Devstral-Small-2-24B-Instruct-2512-Q8_0.gguf
Recommended LFS Q8
23.33 GB Download
mmproj-Devstral-Small-2-24B-Instruct-2512-F16.gguf
LFS FP16
837.38 MB Download