πŸ“‹ Model Description


tags:
  • unsloth
  • qwen3
  • qwen
base_model:
  • Qwen/Qwen3-235B-A22B-Instruct-2507
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-235B-A22B-Instruct-2507/blob/main/LICENSE
pipeline_tag: text-generation

See our collection for all versions of Qwen3 including GGUF, 4-bit & 16-bit formats.

Learn to run Qwen3 correctly - Read our Guide.

Unsloth Dynamic 2.0 achieves superior accuracy & outperforms other leading quants.

✨ Run & Fine-tune Qwen3 with Unsloth!

| Unsloth supports | Free Notebooks | Performance | Memory use |
|---|---|---|---|
| Qwen3 (14B) | ▢️ Start on Colab | 3x faster | 70% less |
| GRPO with Qwen3 (8B) | ▢️ Start on Colab | 3x faster | 80% less |
| Llama-3.2 (3B) | ▢️ Start on Colab | 2.4x faster | 58% less |
| Llama-3.2 (11B vision) | ▢️ Start on Colab | 2x faster | 60% less |
| Qwen2.5 (7B) | ▢️ Start on Colab | 2x faster | 60% less |

Qwen3-235B-A22B-Instruct-2507


Highlights

We introduce the updated version of the Qwen3-235B-A22B non-thinking mode, named Qwen3-235B-A22B-Instruct-2507, featuring the following key enhancements:

  • Significant improvements in general capabilities, including instruction following, logical reasoning, text comprehension, mathematics, science, coding and tool usage.
  • Substantial gains in long-tail knowledge coverage across multiple languages.
  • Markedly better alignment with user preferences in subjective and open-ended tasks, enabling more helpful responses and higher-quality text generation.
  • Enhanced capabilities in 256K long-context understanding.


Model Overview

Qwen3-235B-A22B-Instruct-2507 has the following features:

  • Type: Causal Language Models
  • Training Stage: Pretraining & Post-training
  • Number of Parameters: 235B in total and 22B activated
  • Number of Parameters (Non-Embedding): 234B
  • Number of Layers: 94
  • Number of Attention Heads (GQA): 64 for Q and 4 for KV
  • Number of Experts: 128
  • Number of Activated Experts: 8
  • Context Length: 262,144 natively.
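
These architecture details can also be read programmatically from the model's configuration without downloading the weights. Below is a minimal sketch using transformers' AutoConfig; the attribute names follow the qwen3_moe configuration class and are worth verifying against your installed transformers version:

```python
from transformers import AutoConfig

# Fetch only the configuration (a few KB), not the 235B-parameter weights.
config = AutoConfig.from_pretrained("Qwen/Qwen3-235B-A22B-Instruct-2507")

print(config.num_hidden_layers)        # expected: 94
print(config.num_attention_heads)      # expected: 64 query heads
print(config.num_key_value_heads)      # expected: 4 KV heads (GQA)
print(config.num_experts)              # expected: 128 experts
print(config.num_experts_per_tok)      # expected: 8 activated experts
print(config.max_position_embeddings)  # expected: 262144
```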

NOTE: This model supports only non-thinking mode and does not generate `<think></think>` blocks in its output. Meanwhile, specifying `enable_thinking=False` is no longer required.

For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our blog, GitHub, and Documentation.

Performance

| | Deepseek-V3-0324 | GPT-4o-0327 | Claude Opus 4 Non-thinking | Kimi K2 | Qwen3-235B-A22B Non-thinking | Qwen3-235B-A22B-Instruct-2507 |
|---|---|---|---|---|---|---|
| Knowledge | | | | | | |
| MMLU-Pro | 81.2 | 79.8 | 86.6 | 81.1 | 75.2 | 83.0 |
| MMLU-Redux | 90.4 | 91.3 | 94.2 | 92.7 | 89.2 | 93.1 |
| GPQA | 68.4 | 66.9 | 74.9 | 75.1 | 62.9 | 77.5 |
| SuperGPQA | 57.3 | 51.0 | 56.5 | 57.2 | 48.2 | 62.6 |
| SimpleQA | 27.2 | 40.3 | 22.8 | 31.0 | 12.2 | 54.3 |
| CSimpleQA | 71.1 | 60.2 | 68.0 | 74.5 | 60.8 | 84.3 |
| Reasoning | | | | | | |
| AIME25 | 46.6 | 26.7 | 33.9 | 49.5 | 24.7 | 70.3 |
| HMMT25 | 27.5 | 7.9 | 15.9 | 38.8 | 10.0 | 55.4 |
| ARC-AGI | 9.0 | 8.8 | 30.3 | 13.3 | 4.3 | 41.8 |
| ZebraLogic | 83.4 | 52.6 | - | 89.0 | 37.7 | 95.0 |
| LiveBench 20241125 | 66.9 | 63.7 | 74.6 | 76.4 | 62.5 | 75.4 |
| Coding | | | | | | |
| LiveCodeBench v6 (25.02-25.05) | 45.2 | 35.8 | 44.6 | 48.9 | 32.9 | 51.8 |
| MultiPL-E | 82.2 | 82.7 | 88.5 | 85.7 | 79.3 | 87.9 |
| Aider-Polyglot | 55.1 | 45.3 | 70.7 | 59.0 | 59.6 | 57.3 |
| Alignment | | | | | | |
| IFEval | 82.3 | 83.9 | 87.4 | 89.8 | 83.2 | 88.7 |
| Arena-Hard v2* | 45.6 | 61.9 | 51.5 | 66.1 | 52.0 | 79.2 |
| Creative Writing v3 | 81.6 | 84.9 | 83.8 | 88.1 | 80.4 | 87.5 |
| WritingBench | 74.5 | 75.5 | 79.2 | 86.2 | 77.0 | 85.2 |
| Agent | | | | | | |
| BFCL-v3 | 64.7 | 66.5 | 60.1 | 65.2 | 68.0 | 70.9 |
| TAU-Retail | 49.6 | 60.3# | 81.4 | 70.7 | 65.2 | 71.3 |
| TAU-Airline | 32.0 | 42.8# | 59.6 | 53.5 | 32.0 | 44.0 |
| Multilingualism | | | | | | |
| MultiIF | 66.5 | 70.4 | - | 76.2 | 70.2 | 77.5 |
| MMLU-ProX | 75.8 | 76.2 | - | 74.5 | 73.2 | 79.4 |
| INCLUDE | 80.1 | 82.1 | - | 76.9 | 75.6 | 79.5 |
| PolyMATH | 32.2 | 25.5 | 30.0 | 44.8 | 27.0 | 50.2 |
*: For reproducibility, we report the win rates evaluated by GPT-4.1.

#: Results were generated using GPT-4o-20241120, as access to the native function calling API of GPT-4o-0327 was unavailable.

Quickstart

The code for Qwen3-MoE has been merged into the latest Hugging Face transformers, and we advise you to use the latest version of transformers.

With transformers<4.51.0, you will encounter the following error:

KeyError: 'qwen3_moe'

The following code snippet illustrates how to use the model to generate content based on given inputs.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-235B-A22B-Instruct-2507"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=16384
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

content = tokenizer.decode(output_ids, skip_special_tokens=True)

print("content:", content)
```

For deployment, you can use sglang>=0.4.6.post1 or vllm>=0.8.5 to create an OpenAI-compatible API endpoint:

  • SGLang:

python -m sglang.launch_server --model-path Qwen/Qwen3-235B-A22B-Instruct-2507 --tp 8 --context-length 262144

  • vLLM:

vllm serve Qwen/Qwen3-235B-A22B-Instruct-2507 --tensor-parallel-size 8 --max-model-len 262144

Note: If you encounter out-of-memory (OOM) issues, consider reducing the context length to a shorter value, such as 32,768.
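
Once one of the servers above is running, any OpenAI-compatible client can query it. Below is a minimal sketch using the official openai Python package; the base_url assumes vLLM's default port 8000 (SGLang uses a different default port), and the prompt and max_tokens value are placeholders, so adjust them to match your launch command and workload:

```python
from openai import OpenAI

# Point the client at the locally served OpenAI-compatible endpoint.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen3-235B-A22B-Instruct-2507",
    messages=[{"role": "user", "content": "Give me a short introduction to large language models."}],
    max_tokens=1024,
)
print(response.choices[0].message.content)
```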

For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.

Agentic Use

Qwen3 excels in tool calling capabilities. We recommend using Qwen-Agent to make the best use of agentic ability of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.

To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself.

```python
from qwen_agent.agents import Assistant

# Define LLM
llm_cfg = {
    'model': 'Qwen3-235B-A22B-Instruct-2507',

    # Use a custom endpoint compatible with OpenAI API:
    'model_server': 'http://localhost:8000/v1',  # api_base
    'api_key': 'EMPTY',
}

# Define Tools
tools = [
    {'mcpServers': {  # You can specify the MCP configuration file
        'time': {
            'command': 'uvx',
            'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
        },
        'fetch': {
            'command': 'uvx',
            'args': ['mcp-server-fetch']
        }
    }},
    'code_interpreter',  # Built-in tools
]

# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)

# Streaming generation: each iteration yields the responses so far;
# after the loop, `responses` holds the final message list.
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
    pass
print(responses)
```

Best Practices

To achieve optimal performance, we recommend the following settings:

  1. Sampling Parameters:
     - We suggest using Temperature=0.7, TopP=0.8, TopK=20, and MinP=0 (see the sketch after this list).
     - For supported frameworks, you can adjust the presence_penalty parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
  2. Adequate Output Length: We recommend using an output length of 16,384 tokens for most queries, which is adequate for instruct models.
  3. Standardize Output Format: We recommend using prompts to standardize model outputs when benchmarking.
     - Math Problems: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
     - Multiple-Choice Questions: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
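
As a concrete illustration of item 1, here is a minimal sketch of how the recommended sampling values might be passed to vLLM's offline API. The prompt, presence_penalty, and max_tokens values are assumptions chosen for illustration, and the parameter names should be checked against your installed vLLM version:

```python
from vllm import LLM, SamplingParams

# Recommended sampling settings from the Best Practices above.
sampling_params = SamplingParams(
    temperature=0.7,
    top_p=0.8,
    top_k=20,
    min_p=0.0,
    presence_penalty=1.0,  # optional: 0-2 to reduce endless repetitions
    max_tokens=16384,      # recommended output length for most queries
)

# Loading the full 235B model requires a multi-GPU node (e.g., 8-way tensor parallel).
llm = LLM(
    model="Qwen/Qwen3-235B-A22B-Instruct-2507",
    tensor_parallel_size=8,
    max_model_len=262144,
)

messages = [{"role": "user", "content": "Give me a short introduction to large language models."}]
outputs = llm.chat(messages, sampling_params)
print(outputs[0].outputs[0].text)
```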

Citation

If you find our work helpful, feel free to cite us.

@misc{qwen3technicalreport,
      title={Qwen3 Technical Report}, 
      author={Qwen Team},
      year={2025},
      eprint={2505.09388},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.09388}, 
}
