πŸ“‹ Model Description


language:
  • en
  • fr
  • de
  • es
  • pt
  • it
  • ja
  • ko
  • ru
  • zh
  • ar
  • fa
  • id
  • ms
  • ne
  • pl
  • ro
  • sr
  • sv
  • tr
  • uk
  • vi
  • hi
  • bn
license: apache-2.0
library_name: vllm
inference: false
base_model:
  • mistralai/Devstral-Small-2507
  • mistralai/Mistral-Small-3.1-24B-Instruct-2503
pipeline_tag: text2text-generation

> [!NOTE]
> You should use --jinja to enable the system prompt in llama.cpp.

Devstral 1.1, with tool-calling and optional vision support.

Learn to run Devstral correctly - Read our Guide.

Unsloth Dynamic 2.0 achieves superior accuracy & outperforms other leading quants.

✨ Run & Fine-tune Devstral 1.1 with Unsloth!

Devstral Small 1.1

Devstral is an agentic LLM for software engineering tasks, built through a collaboration between Mistral AI and All Hands AI πŸ™Œ. Devstral excels at using tools to explore codebases, editing multiple files, and powering software engineering agents. The model achieves remarkable performance on SWE-bench, which positions it as the #1 open-source model on this benchmark.

It is fine-tuned from Mistral-Small-3.1, so it inherits a long context window of up to 128k tokens. As a coding agent, Devstral is text-only: the vision encoder was removed from Mistral-Small-3.1 before fine-tuning.

For enterprises requiring specialized capabilities (increased context, domain-specific knowledge, etc.), we will release commercial models beyond what Mistral AI contributes to the community.

Learn more about Devstral in our blog post.

Updates compared to Devstral Small 1.0:

Key Features:

  • Agentic coding: Devstral is designed to excel at agentic coding tasks, making it a great choice for software engineering agents.
  • Lightweight: with its compact size of just 24 billion parameters, Devstral is light enough to run on a single RTX 4090 or a Mac with 32GB RAM, making it a good fit for local deployment and on-device use.
  • Apache 2.0 License: Open license allowing usage and modification for both commercial and non-commercial purposes.
  • Context Window: A 128k context window.
  • Tokenizer: Utilizes a Tekken tokenizer with a 131k vocabulary size (see the sketch right after this list).
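To get a feel for the tokenizer, here is a minimal sketch (assuming mistral-common >= 1.7.0, as used elsewhere in this card) that loads it from the Hub and counts the tokens of a small request:

from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer

# Load the Tekken tokenizer shipped with Devstral and tokenize a sample request.
tokenizer = MistralTokenizer.from_hf_hub("mistralai/Devstral-Small-2507")
tokenized = tokenizer.encode_chat_completion(
    ChatCompletionRequest(messages=[UserMessage(content="<your-command>")])
)
print(len(tokenized.tokens))  # prompt length in tokens; the 128k context window is the budget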

Benchmark Results

SWE-Bench

Devstral Small 1.1 achieves a score of 53.6% on SWE-Bench Verified, outperforming Devstral Small 1.0 by +6.8% and the second-best state-of-the-art model by +11.4%.

| Model | Agentic Scaffold | SWE-Bench Verified (%) |
| --- | --- | --- |
| Devstral Small 1.1 | OpenHands Scaffold | 53.6 |
| Devstral Small 1.0 | OpenHands Scaffold | 46.8 |
| GPT-4.1-mini | OpenAI Scaffold | 23.6 |
| Claude 3.5 Haiku | Anthropic Scaffold | 40.6 |
| SWE-smith-LM 32B | SWE-agent Scaffold | 40.2 |
| Skywork SWE | OpenHands Scaffold | 38.0 |
| DeepSWE | R2E-Gym Scaffold | 42.2 |

When evaluated under the same test scaffold (OpenHands, provided by All Hands AI πŸ™Œ), Devstral outperforms much larger models such as Deepseek-V3-0324 and Qwen3 235B-A22B.

!SWE Benchmark

Usage

We recommend using Devstral with the OpenHands scaffold.
You can use it either through our API or by running it locally.

API

Follow these instructions to create a Mistral account and get an API key.
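If you want to sanity-check the key before wiring up OpenHands, you can call the API directly. This is a minimal sketch: the chat completions endpoint URL is assumed from Mistral's public API conventions, and devstral-small-2507 is the hosted model name used below.

import os
import requests

# Query Devstral through Mistral's hosted OpenAI-compatible API (endpoint URL assumed).
response = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={
        "model": "devstral-small-2507",
        "messages": [{"role": "user", "content": "Say hello in one short sentence."}],
    },
)
print(response.json()["choices"][0]["message"]["content"])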

Then run these commands to start the OpenHands docker container.

export MISTRAL_API_KEY=<MY_KEY>

mkdir -p ~/.openhands && echo '{"language":"en","agent":"CodeActAgent","max_iterations":null,"security_analyzer":null,"confirmation_mode":false,"llm_model":"mistral/devstral-small-2507","llm_api_key":"'$MISTRAL_API_KEY'","remote_runtime_resource_factor":null,"github_token":null,"enable_default_condenser":true}' > ~/.openhands/settings.json

docker pull docker.all-hands.dev/all-hands-ai/runtime:0.48-nikolaik

docker run -it --rm --pull=always \
-e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:0.48-nikolaik \
-e LOG_ALL_EVENTS=true \
-v /var/run/docker.sock:/var/run/docker.sock \
-v ~/.openhands:/.openhands \
-p 3000:3000 \
--add-host host.docker.internal:host-gateway \
--name openhands-app \
docker.all-hands.dev/all-hands-ai/openhands:0.48

Local inference

The model can also be deployed with the following libraries:

#### vLLM (recommended)



We recommend using this model with the vLLM library
to implement production-ready inference pipelines.

Installation

Make sure you install vLLM >= 0.9.1:

pip install vllm --upgrade

Also make sure you have mistral_common >= 1.7.0 installed.

pip install mistral-common --upgrade

To check:

python -c "import mistral_common; print(mistral_common.__version__)"

You can also use a ready-to-go Docker image available on Docker Hub.

Launch server

We recommend that you use Devstral in a server/client setting.

  1. Spin up a server:
vllm serve mistralai/Devstral-Small-2507 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice --tensor-parallel-size 2
  2. To query the server, you can use a simple Python snippet:
import requests
import json
from huggingface_hub import hf_hub_download

url = "http://<your-server-url>:8000/v1/chat/completions"
headers = {"Content-Type": "application/json", "Authorization": "Bearer token"}

model = "mistralai/Devstral-Small-2507"

def load_system_prompt(repo_id: str, filename: str) -> str:
    file_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(file_path, "r") as file:
        system_prompt = file.read()
    return system_prompt

SYSTEM_PROMPT = load_system_prompt(model, "SYSTEM_PROMPT.txt")

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "<your-command>",
            },
        ],
    },
]

data = {"model": model, "messages": messages, "temperature": 0.15}
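You can already POST this plain (no-tools) payload and print the reply:

response = requests.post(url, headers=headers, data=json.dumps(data))
print(response.json()["choices"][0]["message"]["content"])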

Devstral Small 1.1 supports tool calling. If you want to use tools, follow this:

tools = [  # Define tools for vLLM
    {
        "type": "function",
        "function": {
            "name": "git_clone",
            "description": "Clone a git repository",
            "parameters": {
                "type": "object",
                "properties": {
                    "url": {
                        "type": "string",
                        "description": "The url of the git repository",
                    },
                },
                "required": ["url"],
            },
        },
    }
]

data = {"model": model, "messages": messages, "temperature": 0.15, "tools": tools}  # Pass tools to payload.

response = requests.post(url, headers=headers, data=json.dumps(data))
print(response.json()["choices"][0]["message"]["content"])
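When the model decides to call one of the tools, the assistant message may carry no text and the call shows up under tool_calls instead (standard OpenAI-compatible schema). A minimal sketch of reading it:

message = response.json()["choices"][0]["message"]
for call in message.get("tool_calls") or []:
    print(call["function"]["name"], call["function"]["arguments"])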


#### Mistral-inference



We recommend using mistral-inference to quickly try out / "vibe-check" Devstral.

Installation

Make sure to have mistral_inference >= 1.6.0 installed.

pip install mistral_inference --upgrade

Download

from huggingface_hub import snapshot_download
from pathlib import Path

mistral_models_path = Path.home().joinpath('mistral_models', 'Devstral')
mistral_models_path.mkdir(parents=True, exist_ok=True)

snapshot_download(repo_id="mistralai/Devstral-Small-2507", allow_patterns=["params.json", "consolidated.safetensors", "tekken.json"], local_dir=mistral_models_path)

Chat

You can run the model using the following command:

mistral-chat $HOME/mistral_models/Devstral --instruct --max_tokens 300

You can then prompt it with anything you'd like.

#### Transformers



To make the best use of our model with transformers, make sure you have mistral-common >= 1.7.0 installed to use our tokenizer.

pip install mistral-common --upgrade

Then load our tokenizer along with the model and generate:

import torch

from mistral_common.protocol.instruct.messages import (
    SystemMessage, UserMessage
)
from mistral_common.protocol.instruct.request import ChatCompletionRequest
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from huggingface_hub import hf_hub_download
from transformers import AutoModelForCausalLM

def load_system_prompt(repo_id: str, filename: str) -> str:
    file_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(file_path, "r") as file:
        system_prompt = file.read()
    return system_prompt

model_id = "mistralai/Devstral-Small-2507"
SYSTEM_PROMPT = load_system_prompt(model_id, "SYSTEM_PROMPT.txt")

tokenizer = MistralTokenizer.from_hf_hub(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

tokenized = tokenizer.encode_chat_completion(
    ChatCompletionRequest(
        messages=[
            SystemMessage(content=SYSTEM_PROMPT),
            UserMessage(content="<your-command>"),
        ],
    )
)

output = model.generate(
    input_ids=torch.tensor([tokenized.tokens]),
    max_new_tokens=1000,
)[0]

decoded_output = tokenizer.decode(output[len(tokenized.tokens):])
print(decoded_output)

#### LM Studio



Download the weights from either:

  • LM Studio GGUF repository (recommended): https://huggingface.co/lmstudio-community/Devstral-Small-2507-GGUF
  • our GGUF repository: https://huggingface.co/mistralai/Devstral-Small-2507_gguf

pip install -U "huggingface_hub[cli]"
huggingface-cli download \
  "lmstudio-community/Devstral-Small-2507-GGUF" \
  --include "Devstral-Small-2507-Q4_K_M.gguf" \
  --local-dir "Devstral-Small-2507_gguf/"
# or download from mistralai/Devstral-Small-2507_gguf instead

You can serve the model locally with LM Studio.

  • Download LM Studio and install it
  • Install the lms CLI: ~/.lmstudio/bin/lms bootstrap
  • In a bash terminal, run lms import Devstral-Small-2507-Q4_K_M.gguf in the directory where you've downloaded the model checkpoint (e.g. Devstral-Small-2507_gguf)
  • Open the LM Studio application, click the terminal icon to get into the developer tab. Click select a model to load and select Devstral Small 2507. Toggle the status button to start the model, and in settings toggle Serve on Local Network to be on.
  • On the right tab, you will see an API identifier, which should be devstral-small-2507, and an API address under API Usage. Keep note of this address; it is used for OpenHands or Cline (see the query sketch below).
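Once the model is running, you can query it like any OpenAI-compatible server. A minimal sketch, assuming LM Studio's default local address http://localhost:1234/v1 (substitute the API address shown under API Usage):

import requests

# Query the LM Studio server; replace the URL with the address shown under API Usage.
response = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "devstral-small-2507",
        "messages": [{"role": "user", "content": "<your-command>"}],
    },
)
print(response.json()["choices"][0]["message"]["content"])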

#### llama.cpp



Download the weights from huggingface:

pip install -U "huggingface_hub[cli]"
huggingface-cli download \
"mistralai/Devstral-Small-2507_gguf" \
--include "Devstral-Small-2507-Q4_K_M.gguf" \
--local-dir "mistralai/Devstral-Small-2507_gguf/"

Then run Devstral using the llama.cpp server.

./llama-server -m mistralai/Devstral-Small-2507_gguf/Devstral-Small-2507-Q4_K_M.gguf -c 0 --jinja # -c sets the context size (0 means the model's default, here 128k); --jinja enables the system prompt (see the note above).
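llama-server also exposes an OpenAI-compatible API, by default on port 8080. A minimal sketch of querying it, assuming the default host and port:

import requests

# Query the local llama.cpp server (default address http://localhost:8080).
response = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "Devstral-Small-2507",
        "messages": [{"role": "user", "content": "<your-command>"}],
    },
)
print(response.json()["choices"][0]["message"]["content"])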

OpenHands (recommended)

#### Launch a server to deploy Devstral Small 1.1

Make sure you launched an OpenAI-compatible server such as vLLM or Ollama as described above. Then, you can use OpenHands to interact with Devstral Small 1.1.

For this tutorial, we spun up a vLLM server with the following command:

vllm serve mistralai/Devstral-Small-2507 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice --tensor-parallel-size 2

The server address should be in the following format: http://<your-server-url>:8000/v1

#### Launch OpenHands

You can follow the installation instructions for OpenHands here.

The easiest way to launch OpenHands is to use the Docker image:

docker pull docker.all-hands.dev/all-hands-ai/runtime:0.48-nikolaik

docker run -it --rm --pull=always \
-e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:0.48-nikolaik \
-e LOG_ALL_EVENTS=true \
-v /var/run/docker.sock:/var/run/docker.sock \
-v ~/.openhands:/.openhands \
-p 3000:3000 \
--add-host host.docker.internal:host-gateway \
--name openhands-app \
docker.all-hands.dev/all-hands-ai/openhands:0.48

Then, you can access the OpenHands UI at http://localhost:3000.

#### Connect to the server

When accessing the OpenHands UI, you will be prompted to connect to a server. You can use the advanced mode to connect to the server you launched earlier.

Fill the following fields:

  • Custom Model: openai/mistralai/Devstral-Small-2507
  • Base URL: http://<your-server-url>:8000/v1
  • API Key: token (or any other token you used to launch the server if any)


See settings

!OpenHands Settings

Cline

#### Launch a server to deploy Devstral Small 1.1

Make sure you launched an OpenAI-compatible server such as vLLM or Ollama as described above. Then, you can use Cline to interact with Devstral Small 1.1.

For this tutorial, we spun up a vLLM server with the following command:

vllm serve mistralai/Devstral-Small-2507 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice --tensor-parallel-size 2

The server address should be in the following format: http://<your-server-url>:8000/v1

#### Launch Cline

You can follow the installation instructions for Cline here. Then you can configure the server address in the settings.


See settings

!Cline Settings

Examples

#### OpenHands: Understanding Test Coverage of Mistral Common

We can start the OpenHands scaffold and link it to a repo to analyze test coverage and identify badly covered files.
Here we start with our public mistral-common repo.

After the repo is mounted in the workspace, we give the following instruction:

Check the test coverage of the repo and then create a visualization of test coverage. Try plotting a few different types of graphs and save them to a png.

The agent will first browse the code base to check test configuration and structure.

!mistral common coverage - prompt

Then it sets up the testing dependencies and launches the coverage test:

!mistral common coverage - dependencies

Finally, the agent writes necessary code to visualize the coverage, export the results and save the plots to a png.
!mistral common coverage - visualization

At the end of the run, the following plots are produced:
!mistral common coverage - coverage distribution
!mistral common coverage - coverage pie
!mistral common coverage - coverage summary

and the model is able to explain the results:
!mistral common coverage - navigate

#### Cline: build a video game

First initialize Cline inside VSCode and connect it to the server you launched earlier.

We give the following instruction to build the video game:

Create a video game that mixes Space Invaders and Pong for the web.

Follow these instructions:

  • There are two players, one at the top and one at the bottom. Each player controls a bar to bounce a ball.
  • The first player plays with the keys "a" and "d", the second with the right and left arrows.
  • The invaders are located at the center of the screen. They should look like the ones in Space Invaders. Their goal is to shoot at the players randomly. They cannot be destroyed by the ball, which passes through them. This means that invaders never die.
  • The players' goal is to avoid the invaders' shots and send the ball past the other player's edge.
  • The ball bounces on the left and right edges.
  • Once the ball touches one of the players' edges, that player loses.
  • Once a player is hit 3 or more times by a shot, the player loses.
  • The winning player is the last one standing.
  • Display on the UI the number of times each player has touched the ball and their remaining health.

!space invaders pong - prompt

The agent will first create the game:

!space invaders pong - structure

Then it will explain how to launch the game:

!space invaders pong - task completed

Finally, the game is ready to be played:

!space invaders pong - game

Don't hesitate to iterate or give more information to Devstral to improve the game!

πŸ“‚ GGUF File List

πŸ“ Filename πŸ“¦ Size ⚑ Download
Devstral-Small-2507-BF16.gguf
LFS FP16
43.92 GB Download
Devstral-Small-2507-IQ4_NL.gguf
LFS Q4
12.54 GB Download
Devstral-Small-2507-IQ4_XS.gguf
LFS Q4
11.9 GB Download
Devstral-Small-2507-Q2_K.gguf
LFS Q2
8.28 GB Download
Devstral-Small-2507-Q2_K_L.gguf
LFS Q2
8.43 GB Download
Devstral-Small-2507-Q3_K_M.gguf
LFS Q3
10.69 GB Download
Devstral-Small-2507-Q3_K_S.gguf
LFS Q3
9.69 GB Download
Devstral-Small-2507-Q4_0.gguf
Recommended LFS Q4
12.57 GB Download
Devstral-Small-2507-Q4_1.gguf
LFS Q4
13.85 GB Download
Devstral-Small-2507-Q4_K_M.gguf
LFS Q4
13.35 GB Download
Devstral-Small-2507-Q4_K_S.gguf
LFS Q4
12.62 GB Download
Devstral-Small-2507-Q5_K_M.gguf
LFS Q5
15.61 GB Download
Devstral-Small-2507-Q5_K_S.gguf
LFS Q5
15.18 GB Download
Devstral-Small-2507-Q6_K.gguf
LFS Q6
18.02 GB Download
Devstral-Small-2507-Q8_0.gguf
LFS Q8
23.33 GB Download
Devstral-Small-2507-UD-IQ1_M.gguf
LFS
5.6 GB Download
Devstral-Small-2507-UD-IQ1_S.gguf
LFS
5.18 GB Download
Devstral-Small-2507-UD-IQ2_M.gguf
LFS Q2
7.68 GB Download
Devstral-Small-2507-UD-IQ2_XXS.gguf
LFS Q2
6.29 GB Download
Devstral-Small-2507-UD-IQ3_XXS.gguf
LFS Q3
8.76 GB Download
Devstral-Small-2507-UD-Q2_K_XL.gguf
LFS Q2
8.65 GB Download
Devstral-Small-2507-UD-Q3_K_XL.gguf
LFS Q3
11.04 GB Download
Devstral-Small-2507-UD-Q4_K_XL.gguf
LFS Q4
13.55 GB Download
Devstral-Small-2507-UD-Q5_K_XL.gguf
LFS Q5
15.64 GB Download
Devstral-Small-2507-UD-Q6_K_XL.gguf
LFS Q6
19.36 GB Download
Devstral-Small-2507-UD-Q8_K_XL.gguf
LFS Q8
27 GB Download
mmproj-F16.gguf
LFS FP16
837.38 MB Download
mmproj-F32.gguf
LFS
1.64 GB Download