πŸ“‹ Model Description


library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B-Instruct-2507/blob/main/LICENSE
pipeline_tag: text-generation

Qwen3-30B-A3B-Instruct-2507 GGUF Models

Model Generation Details

This model was generated using llama.cpp at commit cd6983d5.


Quantization Beyond the IMatrix

I've been experimenting with a new quantization approach that selectively elevates the precision of key layers beyond what the default IMatrix configuration provides.

In my testing, standard IMatrix quantization underperforms at lower bit depths, especially with Mixture of Experts (MoE) models. To address this, I'm using the --tensor-type option in llama.cpp to manually "bump" important layers to higher precision. You can see the implementation here:
πŸ‘‰ Layer bumping with llama.cpp

While this does increase model file size, it significantly improves precision for a given quantization level.

I'd love your feedbackβ€”have you tried this? How does it perform for you?
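If you want to experiment with the idea yourself, here is a minimal sketch, assuming a recent llama.cpp build whose llama-quantize supports per-tensor overrides; the calibration file, tensor patterns, and target types below are illustrative, not the exact recipe used for these files, and flag syntax may differ between llama.cpp versions:

```bash
# Generate an importance matrix from a calibration text file (illustrative filenames)
./llama-imatrix -m Qwen3-30B-A3B-Instruct-2507-f16.gguf -f calibration.txt \
  -o Qwen3-30B-A3B-Instruct-2507-imatrix.gguf

# Quantize to Q4_K_M, bumping selected tensor groups to higher precision
./llama-quantize --imatrix Qwen3-30B-A3B-Instruct-2507-imatrix.gguf \
  --tensor-type attn_v=q6_k --tensor-type ffn_down=q5_k \
  Qwen3-30B-A3B-Instruct-2507-f16.gguf Qwen3-30B-A3B-Instruct-2507-q4_k_m.gguf Q4_K_M
```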



Click here to get info on choosing the right GGUF model format


Qwen3-30B-A3B-Instruct-2507


Highlights

We introduce the updated version of the Qwen3-30B-A3B non-thinking mode, named Qwen3-30B-A3B-Instruct-2507, featuring the following key enhancements:

  • Significant improvements in general capabilities, including instruction following, logical reasoning, text comprehension, mathematics, science, coding and tool usage.
  • Substantial gains in long-tail knowledge coverage across multiple languages.
  • Markedly better alignment with user preferences in subjective and open-ended tasks, enabling more helpful responses and higher-quality text generation.
  • Enhanced capabilities in 256K long-context understanding.


Model Overview

Qwen3-30B-A3B-Instruct-2507 has the following features:

  • Type: Causal Language Models
  • Training Stage: Pretraining & Post-training
  • Number of Parameters: 30.5B in total and 3.3B activated
  • Number of Paramaters (Non-Embedding): 29.9B
  • Number of Layers: 48
  • Number of Attention Heads (GQA): 32 for Q and 4 for KV
  • Number of Experts: 128
  • Number of Activated Experts: 8
  • Context Length: 262,144 natively.

NOTE: This model supports only non-thinking mode and does not generate `<think></think>` blocks in its output. Meanwhile, specifying enable_thinking=False is no longer required.

For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our blog, GitHub, and Documentation.

Performance

|  | Deepseek-V3-0324 | GPT-4o-0327 | Gemini-2.5-Flash Non-Thinking | Qwen3-235B-A22B Non-Thinking | Qwen3-30B-A3B Non-Thinking | Qwen3-30B-A3B-Instruct-2507 |
|---|---|---|---|---|---|---|
| **Knowledge** | | | | | | |
| MMLU-Pro | 81.2 | 79.8 | 81.1 | 75.2 | 69.1 | 78.4 |
| MMLU-Redux | 90.4 | 91.3 | 90.6 | 89.2 | 84.1 | 89.3 |
| GPQA | 68.4 | 66.9 | 78.3 | 62.9 | 54.8 | 70.4 |
| SuperGPQA | 57.3 | 51.0 | 54.6 | 48.2 | 42.2 | 53.4 |
| **Reasoning** | | | | | | |
| AIME25 | 46.6 | 26.7 | 61.6 | 24.7 | 21.6 | 61.3 |
| HMMT25 | 27.5 | 7.9 | 45.8 | 10.0 | 12.0 | 43.0 |
| ZebraLogic | 83.4 | 52.6 | 57.9 | 37.7 | 33.2 | 90.0 |
| LiveBench 20241125 | 66.9 | 63.7 | 69.1 | 62.5 | 59.4 | 69.0 |
| **Coding** | | | | | | |
| LiveCodeBench v6 (25.02-25.05) | 45.2 | 35.8 | 40.1 | 32.9 | 29.0 | 43.2 |
| MultiPL-E | 82.2 | 82.7 | 77.7 | 79.3 | 74.6 | 83.8 |
| Aider-Polyglot | 55.1 | 45.3 | 44.0 | 59.6 | 24.4 | 35.6 |
| **Alignment** | | | | | | |
| IFEval | 82.3 | 83.9 | 84.3 | 83.2 | 83.7 | 84.7 |
| Arena-Hard v2* | 45.6 | 61.9 | 58.3 | 52.0 | 24.8 | 69.0 |
| Creative Writing v3 | 81.6 | 84.9 | 84.6 | 80.4 | 68.1 | 86.0 |
| WritingBench | 74.5 | 75.5 | 80.5 | 77.0 | 72.2 | 85.5 |
| **Agent** | | | | | | |
| BFCL-v3 | 64.7 | 66.5 | 66.1 | 68.0 | 58.6 | 65.1 |
| TAU1-Retail | 49.6 | 60.3# | 65.2 | 65.2 | 38.3 | 59.1 |
| TAU1-Airline | 32.0 | 42.8# | 48.0 | 32.0 | 18.0 | 40.0 |
| TAU2-Retail | 71.1 | 66.7# | 64.3 | 64.9 | 31.6 | 57.0 |
| TAU2-Airline | 36.0 | 42.0# | 42.5 | 36.0 | 18.0 | 38.0 |
| TAU2-Telecom | 34.0 | 29.8# | 16.9 | 24.6 | 18.4 | 12.3 |
| **Multilingualism** | | | | | | |
| MultiIF | 66.5 | 70.4 | 69.4 | 70.2 | 70.8 | 67.9 |
| MMLU-ProX | 75.8 | 76.2 | 78.3 | 73.2 | 65.1 | 72.0 |
| INCLUDE | 80.1 | 82.1 | 83.8 | 75.6 | 67.8 | 71.9 |
| PolyMATH | 32.2 | 25.5 | 41.9 | 27.0 | 23.3 | 43.1 |
*: For reproducibility, we report the win rates evaluated by GPT-4.1.

#: Results were generated using GPT-4o-20241120, as access to the native function calling API of GPT-4o-0327 was unavailable.

Quickstart

The code for Qwen3-MoE is included in the latest Hugging Face transformers, and we advise you to use the latest version of transformers.

With transformers<4.51.0, you will encounter the following error:

```
KeyError: 'qwen3_moe'
```
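For example, a simple in-place upgrade (a minimal sketch) is usually enough:

```bash
pip install --upgrade "transformers>=4.51.0"
```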

The following code snippet illustrates how to use the model to generate content from a given input.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-30B-A3B-Instruct-2507"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=16384
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

content = tokenizer.decode(output_ids, skip_special_tokens=True)

print("content:", content)
```

For deployment, you can use sglang>=0.4.6.post1 or vllm>=0.8.5 to create an OpenAI-compatible API endpoint:

  • SGLang:

python -m sglang.launch_server --model-path Qwen/Qwen3-30B-A3B-Instruct-2507 --context-length 262144

  • vLLM:

vllm serve Qwen/Qwen3-30B-A3B-Instruct-2507 --max-model-len 262144

Note: If you encounter out-of-memory (OOM) issues, consider reducing the context length to a shorter value, such as 32,768.
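Once a server is running, any OpenAI-compatible client can query it. Below is a minimal sketch with curl, assuming the vLLM command above and its default port 8000 (SGLang uses a different default port, so adjust the URL accordingly):

```bash
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "Qwen/Qwen3-30B-A3B-Instruct-2507",
        "messages": [{"role": "user", "content": "Give me a short introduction to large language models."}],
        "max_tokens": 1024
      }'
```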

For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.

Agentic Use

Qwen3 excels in tool-calling capabilities. We recommend using Qwen-Agent to make the best use of the agentic ability of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.

To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself.

```python
from qwen_agent.agents import Assistant

# Define LLM
llm_cfg = {
    'model': 'Qwen3-30B-A3B-Instruct-2507',

    # Use a custom endpoint compatible with OpenAI API:
    'model_server': 'http://localhost:8000/v1',  # api_base
    'api_key': 'EMPTY',
}

# Define Tools
tools = [
    {'mcpServers': {  # You can specify the MCP configuration file
            'time': {
                'command': 'uvx',
                'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
            },
            "fetch": {
                "command": "uvx",
                "args": ["mcp-server-fetch"]
            }
        }
    },
    'code_interpreter',  # Built-in tools
]

# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)

# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
    pass
print(responses)
```

Processing Ultra-Long Texts

To support ultra-long context processing (up to 1 million tokens), we integrate two key techniques:

  • Dual Chunk Attention (DCA): A length extrapolation method that splits long sequences into manageable chunks while preserving global coherence.
  • MInference: A sparse attention mechanism that reduces computational overhead by focusing on critical token interactions.

Together, these innovations significantly improve both generation quality and inference efficiency for sequences beyond 256K tokens. On sequences approaching 1M tokens, the system achieves up to a 3Γ— speedup compared to standard attention implementations.

For full technical details, see the Qwen2.5-1M Technical Report.

How to Enable 1M Token Context

> [!NOTE]
> To effectively process a 1 million token context, users will require approximately 240 GB of total GPU memory. This accounts for model weights, KV-cache storage, and peak activation memory demands.

#### Step 1: Update Configuration File

Download the model and replace the content of your config.json with config_1m.json, which includes the config for length extrapolation and sparse attention.

```bash
export MODELNAME=Qwen3-30B-A3B-Instruct-2507
huggingface-cli download Qwen/${MODELNAME} --local-dir ${MODELNAME}
mv ${MODELNAME}/config.json ${MODELNAME}/config.json.bak
mv ${MODELNAME}/config_1m.json ${MODELNAME}/config.json
```

#### Step 2: Launch Model Server

After updating the config, proceed with either vLLM or SGLang for serving the model.

#### Option 1: Using vLLM

To run Qwen with 1M context support:

```bash
git clone https://github.com/vllm-project/vllm.git
cd vllm
pip install -e .
```

Then launch the server with Dual Chunk Flash Attention enabled:

```bash
VLLM_ATTENTION_BACKEND=DUAL_CHUNK_FLASH_ATTN VLLM_USE_V1=0 \
vllm serve ./Qwen3-30B-A3B-Instruct-2507 \
  --tensor-parallel-size 4 \
  --max-model-len 1010000 \
  --enable-chunked-prefill \
  --max-num-batched-tokens 131072 \
  --enforce-eager \
  --max-num-seqs 1 \
  --gpu-memory-utilization 0.85
```

##### Key Parameters

| Parameter | Purpose |
|---|---|
| `VLLM_ATTENTION_BACKEND=DUAL_CHUNK_FLASH_ATTN` | Enables the custom attention kernel for long-context efficiency |
| `--max-model-len 1010000` | Sets maximum context length to ~1M tokens |
| `--enable-chunked-prefill` | Allows chunked prefill for very long inputs (avoids OOM) |
| `--max-num-batched-tokens 131072` | Controls batch size during prefill; balances throughput and memory |
| `--enforce-eager` | Disables CUDA graph capture (required for dual chunk attention) |
| `--max-num-seqs 1` | Limits concurrent sequences due to extreme memory usage |
| `--gpu-memory-utilization 0.85` | Sets the fraction of GPU memory used for the model executor |

#### Option 2: Using SGLang

First, clone and install the specialized branch:

```bash
git clone https://github.com/sgl-project/sglang.git
cd sglang
pip install -e "python[all]"
```

Launch the server with DCA support:

```bash
python3 -m sglang.launch_server \
    --model-path ./Qwen3-30B-A3B-Instruct-2507 \
    --context-length 1010000 \
    --mem-frac 0.75 \
    --attention-backend dual_chunk_flash_attn \
    --tp 4 \
    --chunked-prefill-size 131072
```

##### Key Parameters

| Parameter | Purpose |
|---|---|
| `--attention-backend dual_chunk_flash_attn` | Activates Dual Chunk Flash Attention |
| `--context-length 1010000` | Defines max input length |
| `--mem-frac 0.75` | Fraction of memory used for static allocation (model weights and the KV-cache memory pool); use a smaller value if you see out-of-memory errors |
| `--tp 4` | Tensor parallelism size (matches model sharding) |
| `--chunked-prefill-size 131072` | Prefill chunk size for handling long inputs without OOM |

#### Troubleshooting

  1. Encountering the error: "The model's max sequence length (xxxxx) is larger than the maximum number of tokens that can be stored in the KV cache." or "RuntimeError: Not enough memory. Please try to increase `--mem-fraction-static`."

     The VRAM reserved for the KV cache is insufficient.
     - vLLM: Consider reducing `max-model-len` or increasing `tensor-parallel-size` and `gpu-memory-utilization`. Alternatively, you can reduce `max-num-batched-tokens`, although this may significantly slow down inference.
     - SGLang: Consider reducing `context-length` or increasing `tp` and `mem-frac`. Alternatively, you can reduce `chunked-prefill-size`, although this may significantly slow down inference.

  2. Encountering the error: "torch.OutOfMemoryError: CUDA out of memory."

     The VRAM reserved for activation weights is insufficient. You can try lowering `gpu-memory-utilization` or `mem-frac`, but be aware that this may reduce the VRAM available for the KV cache.

  3. Encountering the error: "Input prompt (xxxxx tokens) + lookahead slots (0) is too long and exceeds the capacity of the block manager." or "The input (xxx tokens) is longer than the model's context length (xxx tokens)."

     The input is too long. Consider using a shorter sequence or increasing `max-model-len` / `context-length`.

#### Long-Context Performance

We test the model on a 1M-token version of the RULER benchmark.

| Model Name | Acc avg | 4k | 8k | 16k | 32k | 64k | 96k | 128k | 192k | 256k | 384k | 512k | 640k | 768k | 896k | 1000k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Qwen3-30B-A3B (Non-Thinking) | 72.0 | 97.1 | 96.1 | 95.0 | 92.2 | 82.6 | 79.7 | 76.9 | 70.2 | 66.3 | 61.9 | 55.4 | 52.6 | 51.5 | 52.0 | 50.9 |
| Qwen3-30B-A3B-Instruct-2507 (Full Attention) | 86.8 | 98.0 | 96.7 | 96.9 | 97.2 | 93.4 | 91.0 | 89.1 | 89.8 | 82.5 | 83.6 | 78.4 | 79.7 | 77.6 | 75.7 | 72.8 |
| Qwen3-30B-A3B-Instruct-2507 (Sparse Attention) | 86.8 | 98.0 | 97.1 | 96.3 | 95.1 | 93.6 | 92.5 | 88.1 | 87.7 | 82.9 | 85.7 | 80.7 | 80.0 | 76.9 | 75.5 | 72.2 |
  • All models are evaluated with Dual Chunk Attention enabled.
  • Since the evaluation is time-consuming, we use 260 samples for each length (13 sub-tasks, 20 samples for each).

Best Practices

To achieve optimal performance, we recommend the following settings:

  1. Sampling Parameters: We suggest using Temperature=0.7, TopP=0.8, TopK=20, and MinP=0. For supported frameworks, you can adjust the presence_penalty parameter between 0 and 2 to reduce endless repetitions; however, a higher value may occasionally result in language mixing and a slight decrease in model performance. (An example request applying these settings follows this list.)
  2. Adequate Output Length: We recommend using an output length of 16,384 tokens for most queries, which is adequate for instruct models.
  3. Standardize Output Format: We recommend using prompts to standardize model outputs when benchmarking.
     - Math Problems: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
     - Multiple-Choice Questions: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
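As referenced in item 1 above, here is a hedged example of applying these sampling settings through the OpenAI-compatible endpoint from the deployment section (vLLM and SGLang accept top_k and min_p as extra request fields; support for these fields may vary in other frameworks, and presence_penalty is optional):

```bash
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "Qwen/Qwen3-30B-A3B-Instruct-2507",
        "messages": [{"role": "user", "content": "Please reason step by step, and put your final answer within \\boxed{}. What is 12 * 17?"}],
        "temperature": 0.7,
        "top_p": 0.8,
        "top_k": 20,
        "min_p": 0,
        "presence_penalty": 1.0,
        "max_tokens": 16384
      }'
```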

Citation

If you find our work helpful, feel free to give us a cite.

@misc{qwen3technicalreport,
      title={Qwen3 Technical Report}, 
      author={Qwen Team},
      year={2025},
      eprint={2505.09388},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.09388}, 
}


πŸš€ If you find these models useful

Help me test my AI-Powered Quantum Network Monitor Assistant with quantum-ready security checks:

πŸ‘‰ Quantum Network Monitor

The full open-source code for the Quantum Network Monitor Service is available in my GitHub repos (repos with NetworkMonitor in the name): Source Code Quantum Network Monitor. You will also find the code I use to quantize the models, if you want to do it yourself: GGUFModelBuilder.

πŸ’¬ How to test:
Choose an AI assistant type:
  β€’ TurboLLM (GPT-4.1-mini)
  β€’ HugLLM (Hugging Face open-source models)
  β€’ TestLLM (Experimental CPU-only)

What I’m Testing

I’m pushing the limits of small open-source models for AI network monitoring, specifically:
  • Function calling against live network services
  • How small can a model go while still handling:
    - Automated Nmap security scans
    - Quantum-readiness checks
    - Network monitoring tasks

🟑 TestLLM – Current experimental model (llama.cpp on 2 CPU threads on huggingface docker space):

  • βœ… Zero-configuration setup
  • ⏳ 30s load time (slow inference but no API costs) . No token limited as the cost is low.
  • πŸ”§ Help wanted! If you’re into edge-device AI, let’s collaborate!

Other Assistants

🟒 TurboLLM – Uses gpt-4.1-mini:
  β€’ It performs very well, but unfortunately OpenAI charges per token, so token usage is limited.
  • Create custom cmd processors to run .net code on Quantum Network Monitor Agents
  • Real-time network diagnostics and monitoring
  • Security Audits
  • Penetration testing (Nmap/Metasploit)

πŸ”΅ HugLLM – Latest Open-source models:

  • 🌐 Runs on Hugging Face Inference API. Performs pretty well using the lastest models hosted on Novita.

πŸ’‘ Example commands you could test:

  1. "Give me info on my websites SSL certificate"
  2. "Check if my server is using quantum safe encyption for communication"
  3. "Run a comprehensive security audit on my server"`
  4. '"Create a cmd processor to .. (what ever you want)" Note you need to install a Quantum Network Monitor Agent to run the .net code on. This is a very flexible and powerful feature. Use with caution!

Final Word

I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAIβ€”all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is open source. Feel free to use whatever you find helpful.

If you appreciate the work, please consider buying me a coffee β˜•. Your support helps cover service costs and allows me to raise token limits for everyone.

I'm also open to job opportunities or sponsorship.

Thank you! 😊

πŸ“‚ GGUF File List

πŸ“ Filename πŸ“¦ Size ⚑ Download
Qwen3-30B-A3B-Instruct-2507-bf16_q8_0.gguf
LFS Q8
31.19 GB Download
Qwen3-30B-A3B-Instruct-2507-f16_q8_0.gguf
LFS Q8
31.19 GB Download
Qwen3-30B-A3B-Instruct-2507-imatrix.gguf
LFS
116.38 MB Download
Qwen3-30B-A3B-Instruct-2507-iq1_m.gguf
LFS
7.95 GB Download
Qwen3-30B-A3B-Instruct-2507-iq1_s.gguf
LFS
7.18 GB Download
Qwen3-30B-A3B-Instruct-2507-iq2_m.gguf
LFS Q2
10.11 GB Download
Qwen3-30B-A3B-Instruct-2507-iq2_s.gguf
LFS Q2
9.61 GB Download
Qwen3-30B-A3B-Instruct-2507-iq2_xs.gguf
LFS Q2
8.98 GB Download
Qwen3-30B-A3B-Instruct-2507-iq2_xxs.gguf
LFS Q2
8.38 GB Download
Qwen3-30B-A3B-Instruct-2507-iq3_m.gguf
LFS Q3
13.48 GB Download
Qwen3-30B-A3B-Instruct-2507-iq3_xs.gguf
LFS Q3
12.03 GB Download
Qwen3-30B-A3B-Instruct-2507-iq3_xxs.gguf
LFS Q3
11.97 GB Download
Qwen3-30B-A3B-Instruct-2507-iq4_nl.gguf
LFS Q4
16.05 GB Download
Qwen3-30B-A3B-Instruct-2507-iq4_xs.gguf
LFS Q4
15.33 GB Download
Qwen3-30B-A3B-Instruct-2507-q2_k_m.gguf
LFS Q2
10.74 GB Download
Qwen3-30B-A3B-Instruct-2507-q2_k_s.gguf
LFS Q2
10.66 GB Download
Qwen3-30B-A3B-Instruct-2507-q3_k_m.gguf
LFS Q3
14.12 GB Download
Qwen3-30B-A3B-Instruct-2507-q3_k_s.gguf
LFS Q3
14.04 GB Download
Qwen3-30B-A3B-Instruct-2507-q4_0.gguf
Recommended LFS Q4
16.33 GB Download
Qwen3-30B-A3B-Instruct-2507-q4_1.gguf
LFS Q4
17.85 GB Download
Qwen3-30B-A3B-Instruct-2507-q4_k_m.gguf
LFS Q4
17.73 GB Download
Qwen3-30B-A3B-Instruct-2507-q4_k_s.gguf
LFS Q4
16.93 GB Download
Qwen3-30B-A3B-Instruct-2507-q5_0.gguf
LFS Q5
19.81 GB Download
Qwen3-30B-A3B-Instruct-2507-q5_1.gguf
LFS Q5
21.55 GB Download
Qwen3-30B-A3B-Instruct-2507-q5_k_m.gguf
LFS Q5
20.92 GB Download
Qwen3-30B-A3B-Instruct-2507-q6_k_m.gguf
LFS Q6
23.51 GB Download
Qwen3-30B-A3B-Instruct-2507-q8_0.gguf
LFS Q8
30.25 GB Download
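To run one of these files locally with llama.cpp, here is a minimal sketch; the repository id is a placeholder for wherever these GGUFs are hosted, and the context size should be adjusted to your hardware:

```bash
# Download a single quant (placeholder repo id)
huggingface-cli download <repo-id> Qwen3-30B-A3B-Instruct-2507-q4_0.gguf --local-dir .

# Serve an OpenAI-compatible endpoint with the recommended sampling defaults
llama-server -m Qwen3-30B-A3B-Instruct-2507-q4_0.gguf -c 32768 \
  --temp 0.7 --top-p 0.8 --top-k 20 --min-p 0
```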