πŸ“‹ Model Description


```yaml
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B-Instruct-2507/blob/main/LICENSE
base_model:
  - Qwen/Qwen3-30B-A3B-Instruct-2507
tags:
  - qwen
  - qwen3
  - unsloth
```

See our collection for all versions of Qwen3 including GGUF, 4-bit & 16-bit formats.

Learn to run Qwen3-2507 correctly - Read our Guide.

Unsloth Dynamic 2.0 achieves superior accuracy & outperforms other leading quants.

✨ Read our Qwen3-2507 Guide here!

| Unsloth supports | Free Notebooks | Performance | Memory use |
|---|---|---|---|
| Qwen3 (14B) | ▢️ Start on Colab | 3x faster | 70% less |
| GRPO with Qwen3 (8B) | ▢️ Start on Colab | 3x faster | 80% less |
| Llama-3.2 (3B) | ▢️ Start on Colab | 2.4x faster | 58% less |
| Llama-3.2 (11B vision) | ▢️ Start on Colab | 2x faster | 60% less |
| Qwen2.5 (7B) | ▢️ Start on Colab | 2x faster | 60% less |

Qwen3-30B-A3B-Instruct-2507


Highlights

We introduce the updated version of the Qwen3-30B-A3B non-thinking mode, named Qwen3-30B-A3B-Instruct-2507, featuring the following key enhancements:

  • Significant improvements in general capabilities, including instruction following, logical reasoning, text comprehension, mathematics, science, coding and tool usage.
  • Substantial gains in long-tail knowledge coverage across multiple languages.
  • Markedly better alignment with user preferences in subjective and open-ended tasks, enabling more helpful responses and higher-quality text generation.
  • Enhanced capabilities in 256K long-context understanding.


Model Overview

Qwen3-30B-A3B-Instruct-2507 has the following features:

  • Type: Causal Language Models
  • Training Stage: Pretraining & Post-training
  • Number of Parameters: 30.5B in total and 3.3B activated
  • Number of Parameters (Non-Embedding): 29.9B
  • Number of Layers: 48
  • Number of Attention Heads (GQA): 32 for Q and 4 for KV
  • Number of Experts: 128
  • Number of Activated Experts: 8
  • Context Length: 262,144 natively.
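If transformers is installed, these architecture numbers can be read directly from the model config. A minimal sketch (the attribute names follow the Qwen3-MoE config class in transformers and should be treated as an assumption):

```python
from transformers import AutoConfig

# Load the config only (no weights) and print the numbers listed above.
# Attribute names assume the Qwen3-MoE config class in transformers.
config = AutoConfig.from_pretrained("Qwen/Qwen3-30B-A3B-Instruct-2507")
print(config.num_hidden_layers)        # 48 layers
print(config.num_attention_heads)      # 32 query heads (GQA)
print(config.num_key_value_heads)      # 4 KV heads (GQA)
print(config.num_experts)              # 128 experts
print(config.num_experts_per_tok)      # 8 activated experts
print(config.max_position_embeddings)  # 262144 native context length
```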

NOTE: This model supports only non-thinking mode and does not generate `<think></think>` blocks in its output. Meanwhile, specifying `enable_thinking=False` is no longer required.

For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our blog, GitHub, and Documentation.

Performance

| Benchmark | Deepseek-V3-0324 | GPT-4o-0327 | Gemini-2.5-Flash Non-Thinking | Qwen3-235B-A22B Non-Thinking | Qwen3-30B-A3B Non-Thinking | Qwen3-30B-A3B-Instruct-2507 |
|---|---|---|---|---|---|---|
| **Knowledge** | | | | | | |
| MMLU-Pro | 81.2 | 79.8 | 81.1 | 75.2 | 69.1 | 78.4 |
| MMLU-Redux | 90.4 | 91.3 | 90.6 | 89.2 | 84.1 | 89.3 |
| GPQA | 68.4 | 66.9 | 78.3 | 62.9 | 54.8 | 70.4 |
| SuperGPQA | 57.3 | 51.0 | 54.6 | 48.2 | 42.2 | 53.4 |
| **Reasoning** | | | | | | |
| AIME25 | 46.6 | 26.7 | 61.6 | 24.7 | 21.6 | 61.3 |
| HMMT25 | 27.5 | 7.9 | 45.8 | 10.0 | 12.0 | 43.0 |
| ZebraLogic | 83.4 | 52.6 | 57.9 | 37.7 | 33.2 | 90.0 |
| LiveBench 20241125 | 66.9 | 63.7 | 69.1 | 62.5 | 59.4 | 69.0 |
| **Coding** | | | | | | |
| LiveCodeBench v6 (25.02-25.05) | 45.2 | 35.8 | 40.1 | 32.9 | 29.0 | 43.2 |
| MultiPL-E | 82.2 | 82.7 | 77.7 | 79.3 | 74.6 | 83.8 |
| Aider-Polyglot | 55.1 | 45.3 | 44.0 | 59.6 | 24.4 | 35.6 |
| **Alignment** | | | | | | |
| IFEval | 82.3 | 83.9 | 84.3 | 83.2 | 83.7 | 84.7 |
| Arena-Hard v2* | 45.6 | 61.9 | 58.3 | 52.0 | 24.8 | 69.0 |
| Creative Writing v3 | 81.6 | 84.9 | 84.6 | 80.4 | 68.1 | 86.0 |
| WritingBench | 74.5 | 75.5 | 80.5 | 77.0 | 72.2 | 85.5 |
| **Agent** | | | | | | |
| BFCL-v3 | 64.7 | 66.5 | 66.1 | 68.0 | 58.6 | 65.1 |
| TAU1-Retail | 49.6 | 60.3# | 65.2 | 65.2 | 38.3 | 59.1 |
| TAU1-Airline | 32.0 | 42.8# | 48.0 | 32.0 | 18.0 | 40.0 |
| TAU2-Retail | 71.1 | 66.7# | 64.3 | 64.9 | 31.6 | 57.0 |
| TAU2-Airline | 36.0 | 42.0# | 42.5 | 36.0 | 18.0 | 38.0 |
| TAU2-Telecom | 34.0 | 29.8# | 16.9 | 24.6 | 18.4 | 12.3 |
| **Multilingualism** | | | | | | |
| MultiIF | 66.5 | 70.4 | 69.4 | 70.2 | 70.8 | 67.9 |
| MMLU-ProX | 75.8 | 76.2 | 78.3 | 73.2 | 65.1 | 72.0 |
| INCLUDE | 80.1 | 82.1 | 83.8 | 75.6 | 67.8 | 71.9 |
| PolyMATH | 32.2 | 25.5 | 41.9 | 27.0 | 23.3 | 43.1 |
*: For reproducibility, we report the win rates evaluated by GPT-4.1.

\#: Results were generated using GPT-4o-20241120, as access to the native function calling API of GPT-4o-0327 was unavailable.

Quickstart

The code for Qwen3-MoE has been merged into the latest Hugging Face transformers, and we advise you to use the latest version of transformers.

With transformers<4.51.0, you will encounter the following error:

```
KeyError: 'qwen3_moe'
```
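As a quick sanity check before loading the model, you can print the installed version (a trivial sketch, not from the original card):

```python
import transformers

# Qwen3-MoE support landed in transformers 4.51.0; anything older
# raises KeyError: 'qwen3_moe' when resolving the model type.
print(transformers.__version__)
```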

The following code snippet illustrates how to use the model to generate content based on given inputs.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-30B-A3B-Instruct-2507"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=16384
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

content = tokenizer.decode(output_ids, skip_special_tokens=True)

print("content:", content)
```

For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:

  • SGLang:

```shell
python -m sglang.launch_server --model-path Qwen/Qwen3-30B-A3B-Instruct-2507 --context-length 262144
```

  • vLLM:

```shell
vllm serve Qwen/Qwen3-30B-A3B-Instruct-2507 --max-model-len 262144
```

Note: If you encounter out-of-memory (OOM) issues, consider reducing the context length to a shorter value, such as 32,768.
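Once either server is up, it exposes an OpenAI-compatible API. A minimal client sketch using the `openai` Python package (the base URL assumes vLLM's default port 8000 and a placeholder API key; adjust for your deployment):

```python
from openai import OpenAI

# Point the client at the local OpenAI-compatible endpoint
# (vLLM defaults to port 8000, SGLang to 30000 -- adjust as needed).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen3-30B-A3B-Instruct-2507",
    messages=[{"role": "user", "content": "Give me a short introduction to large language model."}],
    temperature=0.7,  # recommended sampling settings; see Best Practices below
    top_p=0.8,
)
print(response.choices[0].message.content)
```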

For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.

Agentic Use

Qwen3 excels in tool-calling capabilities. We recommend using Qwen-Agent to make the best use of the agentic abilities of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.

To define the available tools, you can use the MCP configuration file, use the integrated tools of Qwen-Agent, or integrate other tools yourself.

```python
from qwen_agent.agents import Assistant

# Define LLM
llm_cfg = {
    'model': 'Qwen3-30B-A3B-Instruct-2507',

    # Use a custom endpoint compatible with OpenAI API:
    'model_server': 'http://localhost:8000/v1',  # api_base
    'api_key': 'EMPTY',
}

# Define Tools
tools = [
    {'mcpServers': {  # You can specify the MCP configuration file
            'time': {
                'command': 'uvx',
                'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
            },
            "fetch": {
                "command": "uvx",
                "args": ["mcp-server-fetch"]
            }
        }
    },
    'code_interpreter',  # Built-in tools
]

# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)

# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
    pass
print(responses)
```
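Note that the `model_server` entry in `llm_cfg` above expects an OpenAI-compatible endpoint to already be running locally, for example one started with the SGLang or vLLM commands from the Quickstart section.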

Best Practices

To achieve optimal performance, we recommend the following settings:

  1. **Sampling Parameters** (a concrete sketch follows this list):
     - We suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
     - For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
  2. **Adequate Output Length**: We recommend using an output length of 16,384 tokens for most queries, which is adequate for instruct models.
  3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
     - **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
     - **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
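As a concrete illustration of the sampling settings above, a minimal sketch applying them to the transformers Quickstart snippet, reusing `model` and `model_inputs` from that section (`min_p` support requires a recent transformers version):

```python
# Apply the recommended sampling parameters from Best Practices.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=16384,  # adequate output length for most queries
    do_sample=True,
    temperature=0.7,
    top_p=0.8,
    top_k=20,
    min_p=0.0,
)
```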

Citation

If you find our work helpful, feel free to cite us.

```bibtex
@misc{qwen3technicalreport,
      title={Qwen3 Technical Report},
      author={Qwen Team},
      year={2025},
      eprint={2505.09388},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.09388},
}
```

πŸ“‚ GGUF File List

πŸ“ Filename πŸ“¦ Size ⚑ Download
Qwen3-30B-A3B-Instruct-2507-IQ4_NL.gguf
LFS Q4
16.12 GB Download
Qwen3-30B-A3B-Instruct-2507-IQ4_XS.gguf
LFS Q4
15.25 GB Download
Qwen3-30B-A3B-Instruct-2507-Q2_K.gguf
LFS Q2
10.49 GB Download
Qwen3-30B-A3B-Instruct-2507-Q2_K_L.gguf
LFS Q2
10.55 GB Download
Qwen3-30B-A3B-Instruct-2507-Q3_K_M.gguf
LFS Q3
13.7 GB Download
Qwen3-30B-A3B-Instruct-2507-Q3_K_S.gguf
LFS Q3
12.38 GB Download
Qwen3-30B-A3B-Instruct-2507-Q4_0.gguf
Recommended LFS Q4
16.19 GB Download
Qwen3-30B-A3B-Instruct-2507-Q4_1.gguf
LFS Q4
17.87 GB Download
Qwen3-30B-A3B-Instruct-2507-Q4_K_M.gguf
LFS Q4
17.28 GB Download
Qwen3-30B-A3B-Instruct-2507-Q4_K_S.gguf
LFS Q4
16.26 GB Download
Qwen3-30B-A3B-Instruct-2507-Q5_K_M.gguf
LFS Q5
20.23 GB Download
Qwen3-30B-A3B-Instruct-2507-Q5_K_S.gguf
LFS Q5
19.63 GB Download
Qwen3-30B-A3B-Instruct-2507-Q6_K.gguf
LFS Q6
23.37 GB Download
Qwen3-30B-A3B-Instruct-2507-Q8_0.gguf
LFS Q8
30.25 GB Download
Qwen3-30B-A3B-Instruct-2507-UD-IQ1_M.gguf
LFS
9.02 GB Download
Qwen3-30B-A3B-Instruct-2507-UD-IQ1_S.gguf
LFS
8.42 GB Download
Qwen3-30B-A3B-Instruct-2507-UD-IQ2_M.gguf
LFS Q2
10.1 GB Download
Qwen3-30B-A3B-Instruct-2507-UD-IQ2_XXS.gguf
LFS Q2
9.63 GB Download
Qwen3-30B-A3B-Instruct-2507-UD-IQ3_XXS.gguf
LFS Q3
12.02 GB Download
Qwen3-30B-A3B-Instruct-2507-UD-Q2_K_XL.gguf
LFS Q2
10.98 GB Download
Qwen3-30B-A3B-Instruct-2507-UD-Q3_K_XL.gguf
LFS Q3
12.88 GB Download
Qwen3-30B-A3B-Instruct-2507-UD-Q4_K_XL.gguf
LFS Q4
16.48 GB Download
Qwen3-30B-A3B-Instruct-2507-UD-Q5_K_XL.gguf
LFS Q5
20.25 GB Download
Qwen3-30B-A3B-Instruct-2507-UD-Q6_K_XL.gguf
LFS Q6
24.53 GB Download
Qwen3-30B-A3B-Instruct-2507-UD-Q8_K_XL.gguf
LFS Q8
33.52 GB Download
Qwen3-30B-A3B-Instruct-2507-UD-TQ1_0.gguf
LFS
7.54 GB Download
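To fetch one of these files programmatically, a minimal sketch using `huggingface_hub` (the `repo_id` is an assumption inferred from the file names; substitute the actual repository):

```python
from huggingface_hub import hf_hub_download

# Download the recommended Q4_0 quant to the local HF cache.
# repo_id below is an assumption -- replace with the actual repo.
path = hf_hub_download(
    repo_id="unsloth/Qwen3-30B-A3B-Instruct-2507-GGUF",
    filename="Qwen3-30B-A3B-Instruct-2507-Q4_0.gguf",
)
print(path)
```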