---
license: apache-2.0
language:
  - en
library_name: transformers
tags:
  - gpt-oss
  - reasoning
  - moe
  - mixture-of-experts
  - chain-of-thought
  - unsloth
  - gguf
  - llama-cpp
base_model:
  - openai/gpt-oss-20b
pipeline_tag: text-generation
model-index:
  - name: GPT-OSS-Nano
    results: []
---

GPT-OSS-Nano


Compact Reasoning Model with Mixture of Experts


9B parameters • 12 experts • 128K context • Chain-of-thought reasoning

🤗 Model | 📖 Docs | 🔮 Q-GPT


📋 Model Description

GPT-OSS-Nano is a fine-tuned Mixture of Experts (MoE) language model optimized for step-by-step reasoning and problem solving. Built on the GPT-OSS architecture with sparse expert activation, it achieves strong reasoning performance while using only ~3B active parameters per forward pass.
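The "sparse expert activation" above means that a router picks a few experts per token and only those run. A minimal sketch of top-k routing (not the actual model code; the router logits here are made up for illustration):

```python
import math

def top_k_routing(router_logits, k=4):
    """Pick the k highest-scoring experts and renormalize their weights."""
    top = sorted(range(len(router_logits)),
                 key=lambda i: router_logits[i], reverse=True)[:k]
    exps = [math.exp(router_logits[i]) for i in top]
    total = sum(exps)
    return [(i, w / total) for i, w in zip(top, exps)]

# One token's router scores over 12 experts:
logits = [0.1, 2.0, -1.0, 0.5, 1.5, 0.0, -0.5, 0.3, 1.0, -2.0, 0.8, 0.2]
experts = top_k_routing(logits, k=4)
# Only these 4 experts run for this token; the other 8 are skipped,
# which is why only ~3B of the 9B parameters are active per forward pass.
```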

✨ Key Features

| Feature | Description |
|---------|-------------|
| 🧠 Sparse MoE | 12 experts, 4 active per token — efficient compute |
| 📝 Chain-of-Thought | Fine-tuned on reasoning datasets with step-by-step solutions |
| ⚡ 128K Context | Long context with YaRN rope scaling |
| 🔮 Q-GPT Ready | Compatible with quantum confidence estimation |
| 📦 GGUF Available | Run locally with llama.cpp or Ollama |
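
YaRN extends a model's context window by rescaling rotary position embeddings. A sketch of how this is typically declared in a transformers-style `config.json`; the scaling factor and original context length below are assumptions for illustration, not values read from this model's config:

```python
# Hypothetical rope_scaling entry (assumed values, shown only to
# illustrate how the 131,072-token context could be reached):
rope_scaling = {
    "rope_type": "yarn",
    "factor": 32.0,                            # assumed scaling factor
    "original_max_position_embeddings": 4096,  # assumed pre-scaling context
}
scaled_context = int(rope_scaling["factor"]
                     * rope_scaling["original_max_position_embeddings"])
print(scaled_context)  # 131072, matching the advertised context length
```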

đŸ—ī¸ Architecture

```
┌─────────────────────────────────────────────────────────┐
│                    GPT-OSS-Nano                         │
├─────────────────────────────────────────────────────────┤
│  Total Parameters     │  9.0 Billion                    │
│  Active Parameters    │  ~3 Billion (per forward pass)  │
│  Hidden Dimension     │  2880                           │
│  Attention Heads      │  64 (8 KV heads, GQA)           │
│  Layers               │  24                             │
│  Experts              │  12 total, 4 active             │
│  Context Length       │  131,072 tokens                 │
│  Vocabulary Size      │  201,088                        │
│  Precision            │  BFloat16                       │
└─────────────────────────────────────────────────────────┘
```
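
The GQA row is worth unpacking: with 8 KV heads instead of 64, the KV cache shrinks 8x. A back-of-the-envelope check using only the numbers from the table above (the KV-cache formula is the standard one, not something read from this model's code):

```python
# Architecture numbers from the table above:
hidden = 2880
n_heads = 64
n_kv_heads = 8
layers = 24
context = 131072

head_dim = hidden // n_heads      # 45 dims per head
kv_ratio = n_heads // n_kv_heads  # GQA stores K/V for 8 heads, not 64 -> 8x smaller

# KV cache at full context in bf16 (2 bytes), for both K and V:
kv_bytes = 2 * 2 * layers * context * n_kv_heads * head_dim
print(kv_bytes / 2**30)  # ≈ 4.2 GiB; 8x this without GQA
```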

💻 Usage

Quick Start with Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained(
    "squ11z1/gpt-oss-nano",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(
    "squ11z1/gpt-oss-nano",
    trust_remote_code=True,
)

prompt = """Solve this step by step:
A store offers 20% off on all items. If a jacket costs $85,
what is the final price after discount?"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    temperature=0.7,
    do_sample=True,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

⚡ With Unsloth (2x Faster)

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    "squ11z1/gpt-oss-nano",
    dtype=None,
    load_in_4bit=True,  # 4-bit quantization for efficiency
)

# Switch to inference mode
FastLanguageModel.for_inference(model)
```

📦 With GGUF (llama.cpp)

```bash
# Download the quantized model
wget https://huggingface.co/squ11z1/gpt-oss-nano/resolve/main/gpt-oss-9b-q4km.gguf

# Run inference
./llama-cli -m gpt-oss-9b-q4km.gguf \
  -p "Solve step by step: What is 15% of 240?" \
  -n 256 --temp 0.7
```

🦙 With Ollama

```bash
# Create Modelfile
echo 'FROM ./gpt-oss-9b-q4km.gguf' > Modelfile
ollama create gpt-oss-nano -f Modelfile

# Run
ollama run gpt-oss-nano "Explain quantum computing simply"
```

🎓 Training


Training Details

| Parameter | Value |
|-----------|-------|
| Base Model | openai/gpt-oss-20b |
| Method | QLoRA (4-bit quantized LoRA) |
| LoRA Rank | 32 |
| LoRA Alpha | 32 |
| Learning Rate | 2e-4 |
| Batch Size | 2 (gradient accumulation: 8) |
| Epochs | 2 |
| Framework | Unsloth + TRL |
| Hardware | NVIDIA H200 |

Dataset: Superior-Reasoning — chain-of-thought examples with step-by-step problem solving.
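
A back-of-the-envelope sketch of what these hyperparameters mean, not the actual training code. The 2880×2880 projection size is taken from the architecture table; which projections were LoRA-adapted is an assumption:

```python
r, alpha = 32, 32
scaling = alpha / r                  # LoRA update is scaled by alpha/r = 1.0

d_in = d_out = 2880                  # one square projection (assumed adapted)
lora_params = r * d_in + d_out * r   # A (r x d_in) + B (d_out x r)
full_params = d_in * d_out
fraction = lora_params / full_params # ~2.2% of the frozen weight's size

effective_batch = 2 * 8              # batch size x gradient accumulation = 16
```

The ~2% trainable fraction per adapted matrix, combined with 4-bit quantization of the frozen base, is what lets a 9B-parameter model fit fine-tuning on a single GPU.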


🔮 Q-GPT: Quantum Confidence

GPT-OSS-Nano is compatible with Q-GPT — a quantum neural network that estimates response confidence.

```python
from qgpt import load_qgpt

model, tokenizer = load_qgpt("squ11z1/gpt-oss-nano")
outputs = model.generate_with_confidence(inputs, max_new_tokens=256)

print(f"Response confidence: {outputs['confidence_label']}")
# Output: "high", "moderate", "low", etc.

if outputs['should_refuse']:
    print("⚠️ Model is uncertain — consider refusing to answer")
```

Learn more: squ11z1/Q-GPT


âš ī¸ Limitations

  • Language: Primarily optimized for English; multilingual performance varies
  • Hallucinations: May generate plausible but incorrect information on obscure topics
  • Safety: Not designed for safety-critical applications without validation
  • Math: Strong at arithmetic reasoning; weaker on advanced mathematics

📜 License

This model is released under the Apache 2.0 License.


🙏 Acknowledgments

  • Unsloth — 2x faster fine-tuning
  • OpenAI — GPT-OSS base model
  • llama.cpp — GGUF format and quantization

📖 Citation

```bibtex
@misc{gptossnano2026,
  title={GPT-OSS-Nano: Compact MoE Reasoning Model},
  author={squ11z1},
  year={2026},
  publisher={Hugging Face},
  url={https://huggingface.co/squ11z1/gpt-oss-nano}
}
```

Pro Mundi Vita

📂 GGUF File List

| 📁 Filename | Quantization | 📦 Size |
|-------------|--------------|---------|
| gpt-oss-9b-bf16.gguf | BF16 | 16.72 GB |
| gpt-oss-9b-q4_k_m.gguf | Q4_K_M (recommended) | 6.36 GB |
| gpt-oss-9b-q8_0.gguf | Q8_0 | 8.89 GB |