---
license: apache-2.0
language:
- en
tags:
- gpt-oss
- reasoning
- moe
- mixture-of-experts
- chain-of-thought
- unsloth
- gguf
- llama-cpp
base_model: openai/gpt-oss-20b
model_name: GPT-OSS-Nano
---
# GPT-OSS-Nano
Compact Reasoning Model with Mixture of Experts

9B parameters • 12 experts • 128K context • Chain-of-thought reasoning
## Model Description
GPT-OSS-Nano is a fine-tuned Mixture of Experts (MoE) language model optimized for step-by-step reasoning and problem solving. Built on the GPT-OSS architecture with sparse expert activation, it achieves strong reasoning performance while using only ~3B active parameters per forward pass.
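The sparse-activation idea can be sketched in a few lines: a router scores all 12 experts for each token, and only the top 4 actually run. A minimal NumPy illustration (expert count and hidden dimension taken from this card; the routing code itself is illustrative, not the model's actual implementation):

```python
import numpy as np

def route_tokens(hidden, gate_weight, num_active=4):
    """Score every expert per token, keep the top-k, renormalize their weights."""
    logits = hidden @ gate_weight.T                        # (tokens, num_experts)
    top = np.argsort(logits, axis=-1)[:, -num_active:]     # indices of top-k experts
    scores = np.take_along_axis(logits, top, axis=-1)
    exp = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return top, exp / exp.sum(axis=-1, keepdims=True)      # softmax over chosen experts

# 12 experts, 4 active per token, hidden dim 2880 (as described above)
rng = np.random.default_rng(0)
hidden = rng.standard_normal((5, 2880))   # 5 token representations
gate = rng.standard_normal((12, 2880))    # router projection
expert_ids, weights = route_tokens(hidden, gate)
print(expert_ids.shape, weights.shape)    # (5, 4) (5, 4)
```

Only the 4 selected experts' FFNs execute per token, which is why the compute cost tracks the ~3B active parameters rather than the full 9B.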
## Key Features

| Feature | Description |
|---|---|
| Sparse MoE | 12 experts, 4 active per token → efficient compute |
| Chain-of-Thought | Fine-tuned on reasoning datasets with step-by-step solutions |
| 128K Context | Long context with YaRN RoPE scaling |
| Q-GPT Ready | Compatible with quantum confidence estimation |
| GGUF Available | Run locally with llama.cpp or Ollama |
## Architecture

| Specification | Value |
|---|---|
| Total Parameters | 9.0 billion |
| Active Parameters | ~3 billion (per forward pass) |
| Hidden Dimension | 2880 |
| Attention Heads | 64 (8 KV heads, GQA) |
| Layers | 24 |
| Experts | 12 total, 4 active |
| Context Length | 131,072 tokens |
| Vocabulary Size | 201,088 |
| Precision | BFloat16 |
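The "64 (8 KV heads, GQA)" row means grouped-query attention: each KV head serves a group of 8 query heads, so the KV cache is 8x smaller than full multi-head attention would need. A quick sketch of the head mapping (illustrative only, using the counts above):

```python
# Grouped-query attention: query heads share a smaller pool of KV heads,
# shrinking the KV cache by a factor of n_q_heads / n_kv_heads.
n_q_heads, n_kv_heads = 64, 8            # figures from the table above
group_size = n_q_heads // n_kv_heads     # 8 query heads per KV head

def kv_head_for(query_head):
    """Map a query head index to the KV head it attends with."""
    return query_head // group_size

print([kv_head_for(h) for h in range(0, n_q_heads, group_size)])
# [0, 1, 2, 3, 4, 5, 6, 7]
print(f"KV-cache reduction vs. full MHA: {n_q_heads // n_kv_heads}x")  # 8x
```

That 8x cache reduction is what makes the 131,072-token context window tractable in memory.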
## Usage

### Quick Start with Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained(
    "squ11z1/gpt-oss-nano",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(
    "squ11z1/gpt-oss-nano",
    trust_remote_code=True,
)

prompt = """Solve this step by step:
A store offers 20% off on all items. If a jacket costs $85,
what is the final price after discount?"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    temperature=0.7,
    do_sample=True,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
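A note on `temperature=0.7`: temperature rescales the logits before sampling, so values below 1 concentrate probability on the most likely tokens and make output more deterministic. A toy NumPy sketch of the effect (this is the underlying math, not how `generate` is implemented internally):

```python
import numpy as np

def token_probs(logits, temperature):
    """Softmax over temperature-scaled logits."""
    scaled = np.asarray(logits, dtype=float) / temperature
    exp = np.exp(scaled - scaled.max())
    return exp / exp.sum()

logits = [2.0, 1.0, 0.5, 0.1]
for t in (0.7, 1.0, 1.5):
    p = token_probs(logits, t)
    print(f"T={t}: top-token probability = {p.max():.2f}")
# Lower temperature concentrates mass on the top token;
# higher temperature flattens the distribution.
```

For reasoning prompts like the one above, a moderate temperature keeps the chain-of-thought varied without derailing the arithmetic.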
### With Unsloth (2x Faster)

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    "squ11z1/gpt-oss-nano",
    dtype=None,
    load_in_4bit=True,  # 4-bit quantization for efficiency
)

# Switch to inference mode
FastLanguageModel.for_inference(model)
```
### With GGUF (llama.cpp)

```bash
# Download the quantized model
wget https://huggingface.co/squ11z1/gpt-oss-nano/resolve/main/gpt-oss-9b-q4km.gguf

# Run inference
./llama-cli -m gpt-oss-9b-q4km.gguf \
  -p "Solve step by step: What is 15% of 240?" \
  -n 256 --temp 0.7
```
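For a sense of why the Q4_K_M file is practical locally: Q4_K_M averages roughly 4.5 bits per weight (an approximation; the exact average varies by tensor), so a 9B-parameter model comes to around 5 GB on disk versus ~18 GB in BFloat16. Rough arithmetic:

```python
# Back-of-the-envelope GGUF size estimate. The 4.5 bits/weight figure for
# Q4_K_M is an approximate average, not an exact spec value.
params = 9.0e9                      # total parameters (from the table above)
bf16_gb = params * 16 / 8 / 1e9     # 16 bits per weight
q4km_gb = params * 4.5 / 8 / 1e9    # ~4.5 bits per weight on average
print(f"BF16: ~{bf16_gb:.0f} GB, Q4_K_M: ~{q4km_gb:.1f} GB")
```

The trade-off is a small quality loss in exchange for fitting comfortably in consumer RAM or VRAM.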
### With Ollama

```bash
# Create a Modelfile pointing at the GGUF file
echo 'FROM ./gpt-oss-9b-q4km.gguf' > Modelfile
ollama create gpt-oss-nano -f Modelfile

# Run
ollama run gpt-oss-nano "Explain quantum computing simply"
```
## Training

### Training Details
| Parameter | Value |
|---|---|
| Base Model | openai/gpt-oss-20b |
| Method | QLoRA (4-bit quantized LoRA) |
| LoRA Rank | 32 |
| LoRA Alpha | 32 |
| Learning Rate | 2e-4 |
| Batch Size | 2 (gradient accumulation: 8) |
| Epochs | 2 |
| Framework | Unsloth + TRL |
| Hardware | NVIDIA H200 |
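The rank and alpha figures in the table translate directly into the LoRA update rule: each frozen weight matrix W gets a trainable low-rank correction, W_eff = W + (alpha/r)·B·A, and only A and B are trained. A from-scratch NumPy sketch using the card's r=32, alpha=32 (toy matrix sizes; this shows the math, not the Unsloth/TRL training code):

```python
import numpy as np

d_out, d_in, r, alpha = 256, 256, 32, 32      # r and alpha from the table above
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))        # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01     # trainable down-projection
B = np.zeros((d_out, r))                      # trainable up-projection, zero-init

W_eff = W + (alpha / r) * (B @ A)             # identical to W at init (B is zero)

trainable, frozen = A.size + B.size, W.size
print(f"trainable: {trainable} vs frozen: {frozen} "
      f"({100 * trainable / frozen:.0f}% of this layer)")
```

QLoRA applies the same update on top of a 4-bit-quantized base model, which is what lets a 20B base fit on a single GPU during fine-tuning.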
## Q-GPT: Quantum Confidence

GPT-OSS-Nano is compatible with Q-GPT, a quantum neural network that estimates response confidence.

```python
from qgpt import load_qgpt

model, tokenizer = load_qgpt("squ11z1/gpt-oss-nano")

inputs = tokenizer("Explain quantum computing simply", return_tensors="pt")
outputs = model.generate_with_confidence(inputs, max_new_tokens=256)

print(f"Response confidence: {outputs['confidence_label']}")
# Output: "high", "moderate", "low", etc.

if outputs['should_refuse']:
    print("Model is uncertain - consider refusing to answer")
```

Learn more: squ11z1/Q-GPT
## Limitations
- Language: Primarily optimized for English; multilingual performance varies
- Hallucinations: May generate plausible but incorrect information on obscure topics
- Safety: Not designed for safety-critical applications without validation
- Math: Strong at arithmetic reasoning; weaker on advanced mathematics
## License
This model is released under the Apache 2.0 License.
## Acknowledgments

- Unsloth, for 2x faster fine-tuning
- OpenAI, for the GPT-OSS base model
- llama.cpp, for the GGUF format and quantization
## Citation

```bibtex
@misc{gptossnano2026,
  title={GPT-OSS-Nano: Compact MoE Reasoning Model},
  author={squ11z1},
  year={2026},
  publisher={Hugging Face},
  url={https://huggingface.co/squ11z1/gpt-oss-nano}
}
```
Pro Mundi Vita