πŸ“‹ Model Description


---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507/blob/main/LICENSE
base_model:
  - Qwen/Qwen3-4B-Instruct-2507
tags:
  - qwen
  - qwen3
  - unsloth
---

See our collection for all versions of Qwen3 including GGUF, 4-bit & 16-bit formats.

Learn to run Qwen3-2507 correctly - Read our Guide.

Unsloth Dynamic 2.0 achieves superior accuracy & outperforms other leading quants.

✨ Read our Qwen3-2507 Guide here!

| Unsloth supports | Free Notebooks | Performance | Memory use |
|---|---|---|---|
| Qwen3 (14B) | ▢️ Start on Colab | 3x faster | 70% less |
| GRPO with Qwen3 (8B) | ▢️ Start on Colab | 3x faster | 80% less |
| Llama-3.2 (3B) | ▢️ Start on Colab | 2.4x faster | 58% less |
| Llama-3.2 (11B vision) | ▢️ Start on Colab | 2x faster | 60% less |
| Qwen2.5 (7B) | ▢️ Start on Colab | 2x faster | 60% less |

Qwen3-4B-Instruct-2507


Highlights

We introduce the updated version of the Qwen3-4B non-thinking mode, named Qwen3-4B-Instruct-2507, featuring the following key enhancements:

  • Significant improvements in general capabilities, including instruction following, logical reasoning, text comprehension, mathematics, science, coding and tool usage.
  • Substantial gains in long-tail knowledge coverage across multiple languages.
  • Markedly better alignment with user preferences in subjective and open-ended tasks, enabling more helpful responses and higher-quality text generation.
  • Enhanced capabilities in 256K long-context understanding.


Model Overview

Qwen3-4B-Instruct-2507 has the following features:

  • Type: Causal Language Models
  • Training Stage: Pretraining & Post-training
  • Number of Parameters: 4.0B
  • Number of Parameters (Non-Embedding): 3.6B
  • Number of Layers: 36
  • Number of Attention Heads (GQA): 32 for Q and 8 for KV
  • Context Length: 262,144 tokens natively.

NOTE: This model supports only non-thinking mode and does not generate `<think></think>` blocks in its output. Meanwhile, specifying `enable_thinking=False` is no longer required.
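As a quick sanity check, the architecture details above can be read from the model configuration without downloading the weights. A minimal sketch; the field names below are the standard transformers config attributes for Qwen3-style models, not part of the original card:

```python
from transformers import AutoConfig

# Fetch only the config (no weights) and inspect the architecture fields.
config = AutoConfig.from_pretrained("Qwen/Qwen3-4B-Instruct-2507")

print(config.num_hidden_layers)        # 36 layers
print(config.num_attention_heads)      # 32 query heads
print(config.num_key_value_heads)      # 8 key/value heads (GQA)
print(config.max_position_embeddings)  # 262144 native context length
```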

For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our blog, GitHub, and Documentation.

Performance

| Benchmark | GPT-4.1-nano-2025-04-14 | Qwen3-30B-A3B Non-Thinking | Qwen3-4B Non-Thinking | Qwen3-4B-Instruct-2507 |
|---|---|---|---|---|
| **Knowledge** | | | | |
| MMLU-Pro | 62.8 | 69.1 | 58.0 | 69.6 |
| MMLU-Redux | 80.2 | 84.1 | 77.3 | 84.2 |
| GPQA | 50.3 | 54.8 | 41.7 | 62.0 |
| SuperGPQA | 32.2 | 42.2 | 32.0 | 42.8 |
| **Reasoning** | | | | |
| AIME25 | 22.7 | 21.6 | 19.1 | 47.4 |
| HMMT25 | 9.7 | 12.0 | 12.1 | 31.0 |
| ZebraLogic | 14.8 | 33.2 | 35.2 | 80.2 |
| LiveBench 20241125 | 41.5 | 59.4 | 48.4 | 63.0 |
| **Coding** | | | | |
| LiveCodeBench v6 (25.02-25.05) | 31.5 | 29.0 | 26.4 | 35.1 |
| MultiPL-E | 76.3 | 74.6 | 66.6 | 76.8 |
| Aider-Polyglot | 9.8 | 24.4 | 13.8 | 12.9 |
| **Alignment** | | | | |
| IFEval | 74.5 | 83.7 | 81.2 | 83.4 |
| Arena-Hard v2* | 15.9 | 24.8 | 9.5 | 43.4 |
| Creative Writing v3 | 72.7 | 68.1 | 53.6 | 83.5 |
| WritingBench | 66.9 | 72.2 | 68.5 | 83.4 |
| **Agent** | | | | |
| BFCL-v3 | 53.0 | 58.6 | 57.6 | 61.9 |
| TAU1-Retail | 23.5 | 38.3 | 24.3 | 48.7 |
| TAU1-Airline | 14.0 | 18.0 | 16.0 | 32.0 |
| TAU2-Retail | - | 31.6 | 28.1 | 40.4 |
| TAU2-Airline | - | 18.0 | 12.0 | 24.0 |
| TAU2-Telecom | - | 18.4 | 17.5 | 13.2 |
| **Multilingualism** | | | | |
| MultiIF | 60.7 | 70.8 | 61.3 | 69.0 |
| MMLU-ProX | 56.2 | 65.1 | 49.6 | 61.6 |
| INCLUDE | 58.6 | 67.8 | 53.8 | 60.1 |
| PolyMATH | 15.6 | 23.3 | 16.6 | 31.1 |

*: For reproducibility, we report the win rates evaluated by GPT-4.1.

Quickstart

The code for Qwen3 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.

With transformers<4.51.0, you will encounter the following error:

```
KeyError: 'qwen3'
```
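If you prefer to fail fast rather than hit that KeyError, you can check the installed version up front. A minimal sketch, assuming the 4.51.0 threshold above:

```python
from packaging import version

import transformers

# Qwen3 support requires transformers >= 4.51.0.
if version.parse(transformers.__version__) < version.parse("4.51.0"):
    raise RuntimeError(
        f"transformers {transformers.__version__} is too old for Qwen3; "
        "upgrade with `pip install -U transformers`."
    )
```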

The following code snippet illustrates how to use the model to generate content from a given input.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-4B-Instruct-2507"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=16384
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

content = tokenizer.decode(output_ids, skip_special_tokens=True)

print("content:", content)
```

For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:

  • SGLang:

```shell
python -m sglang.launch_server --model-path Qwen/Qwen3-4B-Instruct-2507 --context-length 262144
```

  • vLLM:

```shell
vllm serve Qwen/Qwen3-4B-Instruct-2507 --max-model-len 262144
```

Note: If you encounter out-of-memory (OOM) issues, consider reducing the context length to a shorter value, such as 32,768.
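Once a server is up, any OpenAI-compatible client can talk to it. A minimal sketch with the `openai` Python package, assuming vLLM's default port 8000 (adjust `base_url` for SGLang, which serves on a different port by default):

```python
from openai import OpenAI

# Point the client at the local OpenAI-compatible endpoint.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen3-4B-Instruct-2507",
    messages=[{"role": "user", "content": "Give me a short introduction to large language model."}],
    temperature=0.7,
    top_p=0.8,
)
print(response.choices[0].message.content)
```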

For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.
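As an illustration, one of the GGUF files listed below can be run directly with llama.cpp's CLI. A minimal sketch, assuming a current llama.cpp build and the sampling values from the Best Practices section:

```shell
./llama-cli -m Qwen3-4B-Instruct-2507-Q4_K_M.gguf \
  -c 16384 --temp 0.7 --top-p 0.8 --top-k 20 \
  -p "Give me a short introduction to large language model."
```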

Agentic Use

Qwen3 excels in tool-calling capabilities. We recommend using Qwen-Agent to make the best use of the agentic ability of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.

To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself.

```python
from qwen_agent.agents import Assistant

# Define LLM
llm_cfg = {
    'model': 'Qwen3-4B-Instruct-2507',

    # Use a custom endpoint compatible with OpenAI API:
    'model_server': 'http://localhost:8000/v1',  # api_base
    'api_key': 'EMPTY',
}

# Define Tools
tools = [
    {'mcpServers': {  # You can specify the MCP configuration file
            'time': {
                'command': 'uvx',
                'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
            },
            "fetch": {
                "command": "uvx",
                "args": ["mcp-server-fetch"]
            }
        }
    },
    'code_interpreter',  # Built-in tools
]

# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)

# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
    pass
print(responses)
```
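Note that `model_server` in the snippet above assumes an OpenAI-compatible endpoint is already running locally, for example one started with the SGLang or vLLM commands from the deployment section.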

Best Practices

To achieve optimal performance, we recommend the following settings:

  1. Sampling Parameters:
     - We suggest using Temperature=0.7, TopP=0.8, TopK=20, and MinP=0 (see the sketch after this list).
     - For supported frameworks, you can adjust the presence_penalty parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
  2. Adequate Output Length: We recommend using an output length of 16,384 tokens for most queries, which is adequate for instruct models.
  3. Standardize Output Format: We recommend using prompts to standardize model outputs when benchmarking.
     - Math Problems: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
     - Multiple-Choice Questions: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
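These recommendations map directly onto the Quickstart snippet. A minimal sketch applying them with transformers (note that presence_penalty is a serving-framework option, e.g. in vLLM, rather than a transformers generate argument, and min_p requires a recent transformers release):

```python
# Recommended sampling settings applied to the Quickstart objects
# (`model`, `tokenizer`, and `model_inputs` as defined there).
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=16384,  # adequate output length for most queries
    do_sample=True,
    temperature=0.7,
    top_p=0.8,
    top_k=20,
    min_p=0.0,
)
```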

Citation

If you find our work helpful, feel free to cite us.

```bibtex
@misc{qwen3technicalreport,
      title={Qwen3 Technical Report},
      author={Qwen Team},
      year={2025},
      eprint={2505.09388},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.09388},
}
```

πŸ“‚ GGUF File List

πŸ“ Filename πŸ“¦ Size ⚑ Download
Qwen3-4B-Instruct-2507-F16.gguf
LFS FP16
7.5 GB Download
Qwen3-4B-Instruct-2507-IQ4_NL.gguf
LFS Q4
2.22 GB Download
Qwen3-4B-Instruct-2507-IQ4_XS.gguf
LFS Q4
2.11 GB Download
Qwen3-4B-Instruct-2507-Q2_K.gguf
LFS Q2
1.55 GB Download
Qwen3-4B-Instruct-2507-Q2_K_L.gguf
LFS Q2
1.55 GB Download
Qwen3-4B-Instruct-2507-Q3_K_M.gguf
LFS Q3
1.93 GB Download
Qwen3-4B-Instruct-2507-Q3_K_S.gguf
LFS Q3
1.76 GB Download
Qwen3-4B-Instruct-2507-Q4_0.gguf
Recommended LFS Q4
2.21 GB Download
Qwen3-4B-Instruct-2507-Q4_1.gguf
LFS Q4
2.42 GB Download
Qwen3-4B-Instruct-2507-Q4_K_M.gguf
LFS Q4
2.33 GB Download
Qwen3-4B-Instruct-2507-Q4_K_S.gguf
LFS Q4
2.22 GB Download
Qwen3-4B-Instruct-2507-Q5_K_M.gguf
LFS Q5
2.69 GB Download
Qwen3-4B-Instruct-2507-Q5_K_S.gguf
LFS Q5
2.63 GB Download
Qwen3-4B-Instruct-2507-Q6_K.gguf
LFS Q6
3.08 GB Download
Qwen3-4B-Instruct-2507-Q8_0.gguf
LFS Q8
3.99 GB Download
Qwen3-4B-Instruct-2507-UD-IQ1_M.gguf
LFS
1.06 GB Download
Qwen3-4B-Instruct-2507-UD-IQ1_S.gguf
LFS
1.01 GB Download
Qwen3-4B-Instruct-2507-UD-IQ2_M.gguf
LFS Q2
1.43 GB Download
Qwen3-4B-Instruct-2507-UD-IQ2_XXS.gguf
LFS Q2
1.17 GB Download
Qwen3-4B-Instruct-2507-UD-IQ3_XXS.gguf
LFS Q3
1.56 GB Download
Qwen3-4B-Instruct-2507-UD-Q2_K_XL.gguf
LFS Q2
1.58 GB Download
Qwen3-4B-Instruct-2507-UD-Q3_K_XL.gguf
LFS Q3
1.98 GB Download
Qwen3-4B-Instruct-2507-UD-Q4_K_XL.gguf
LFS Q4
2.37 GB Download
Qwen3-4B-Instruct-2507-UD-Q5_K_XL.gguf
LFS Q5
2.7 GB Download
Qwen3-4B-Instruct-2507-UD-Q6_K_XL.gguf
LFS Q6
3.41 GB Download
Qwen3-4B-Instruct-2507-UD-Q8_K_XL.gguf
LFS Q8
4.71 GB Download