Model Description
---
license: other
license_name: trillion
license_link: LICENSE
tags:
- finetuned
- chat
- reasoning
- en
- ko
- ja
- trillionlabs/Tri-21B
---
The base model Tri-21B is released under the Trillion license included in this repo; the Think and Think-Preview versions are licensed under Apache 2.0.

Introduction
Tri-21B-Think-Preview is an intermediate checkpoint of Tri-21B-Think, featuring mid-training context length expansion to 32K tokens and instruction tuning for chain-of-thought reasoning and tool use.
Model Specifications
- Type: Causal Language Model (Reasoning-Enhanced)
- Base Model: Tri-21B
- Architecture: Transformer Decoder with RoPE, SwiGLU, RMSNorm, and GQA
- Number of Parameters: 20.73B
- Number of Layers: 40
- Number of Attention Heads: 32 (Query) / 8 (Key, Value)
- Head Dimension: 160
- Hidden Size: 5,120
- Intermediate Size: 27,392
- Context Length: 32,768 (up to 262,144 with YaRN)
- Vocab Size: 124,416
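The 32K-to-262K extension above relies on YaRN RoPE scaling. A minimal sketch of the corresponding `rope_scaling` entry, following the generic Hugging Face convention rather than any official Trillion Labs configuration (the exact keys for this model are an assumption and should be checked against the released `config.json`):

```python
import json

# Native and YaRN-extended context lengths from the spec table above.
NATIVE_CONTEXT = 32_768
TARGET_CONTEXT = 262_144

# Hypothetical rope_scaling block in the Hugging Face style;
# factor = 262144 / 32768 = 8.0.
rope_scaling = {
    "rope_type": "yarn",
    "factor": TARGET_CONTEXT / NATIVE_CONTEXT,
    "original_max_position_embeddings": NATIVE_CONTEXT,
}

print(json.dumps(rope_scaling, indent=2))
```

Enabling scaling like this trades some short-context fidelity for longer inputs, so it is usually left off unless prompts actually exceed the native window.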
Quickstart
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "trillionlabs/Tri-21B-Think-Preview"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Solve the following step by step: What is the sum of the first 100 prime numbers?"
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=4096,
    temperature=0.6,
    top_p=0.9,
)
# Strip the prompt tokens so only the newly generated text is decoded.
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
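The decoded response may interleave the model's chain-of-thought with the final answer. A minimal post-processing sketch, assuming the reasoning is wrapped in `<think>`/`</think>` tags (consistent with the fine-tuning notes in this card, but verify against real model output before relying on it):

```python
# Hypothetical helper: split a decoded response into the reasoning trace
# and the final answer, assuming <think>...</think> delimiters.
def split_reasoning(response: str) -> tuple[str, str]:
    open_tag, close_tag = "<think>", "</think>"
    if close_tag not in response:
        # No reasoning trace found; return the whole text as the answer.
        return "", response.strip()
    thought, _, answer = response.partition(close_tag)
    thought = thought.replace(open_tag, "", 1).strip()
    return thought, answer.strip()

reasoning, answer = split_reasoning(
    "<think>List the primes, then add them.</think>The sum is 24133."
)
```

Keeping the split in application code (rather than stripping the tags at decode time) lets you log or display the trace separately from the answer.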
vLLM & SGLang Deployment
vLLM and SGLang support for the Trillion models is on the way. Stay tuned!
Fine-tuning Notes
Note on `<think>` tags: This model was trained without `<think>` and `</think>` as special tokens. They were added post-training for compatibility with reasoning parsers. If you plan to fine-tune this model, you'll need to modify `tokenizer_config.json` to avoid indexing errors.
Replace tokens 123975 and 123976 in `tokenizer_config.json`:

```json
"123975": {
  "content": "<|reserved_special_token_9|>",
  "lstrip": false,
  "normalized": false,
  "rstrip": false,
  "single_word": false,
  "special": true
},
"123976": {
  "content": "<|reserved_special_token_10|>",
  "lstrip": false,
  "normalized": false,
  "rstrip": false,
  "single_word": false,
  "special": true
}
```
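One way to apply the replacement above is to patch the file programmatically. A minimal sketch, assuming a local checkout of the model and the standard `added_tokens_decoder` layout of `tokenizer_config.json` (the function name is illustrative, not part of any released tooling):

```python
import json
from pathlib import Path

# Hypothetical patch: swap the post-hoc <think>/</think> entries
# (token ids 123975 and 123976) for reserved special tokens so that
# fine-tuning frameworks do not hit indexing errors.
def patch_tokenizer_config(path: Path) -> None:
    config = json.loads(path.read_text())
    replacements = {
        "123975": "<|reserved_special_token_9|>",
        "123976": "<|reserved_special_token_10|>",
    }
    decoder = config["added_tokens_decoder"]
    for token_id, content in replacements.items():
        decoder[token_id]["content"] = content
    path.write_text(json.dumps(config, indent=2))
```

Run it once against the downloaded checkpoint directory before launching your fine-tuning job.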
Evaluation
| Category | Benchmark | Description | Tri-21B-Think-Preview |
|---|---|---|---|
| Reasoning | GPQA-Diamond | Graduate-level science questions across physics, chemistry, and biology (PhD-level) | 54 |
| | AIME 2025 | American Invitational Mathematics Examination 2025 | 50.0 |
| | MMLU-Pro | Massive Multitask Language Understanding with more answer choices and reasoning-focused questions | 65.19 |
| | HLE | Humanity's Last Exam: 2,500 expert-level questions across 100+ subjects created by nearly 1,000 domain experts | 5.12 |
| Coding | LiveCodeBench v6 | Competitive programming benchmark with problems sourced from recent programming contests | 48.57 |
| | SciCode | Code generation across 338 subproblems in 16 natural science fields drawn from real research workflows | 18 |
| Instruction Following | IFEval | Tests ability to follow precise formatting and output constraint instructions | 84.05 |
| | IFBench | Evaluates generalization to novel, verifiable output constraints not seen during training (Allen AI) | 51.02 |
| Agentic | TAU2-Bench (Telecom) | Dual-control conversational benchmark where both agent and user use tools to resolve telecom scenarios (Sierra) | 93 |
| | AA-LCR | Long-context reasoning over multiple documents at 10K-100K tokens (Artificial Analysis) | 15 |
| | AA-Omniscience | Factual reliability across 6,000 questions in 42 subtopics, penalizing hallucinations (Artificial Analysis) | -48.55 |
| Korean | KMMLU-Pro | 2,822 questions from 14 Korean National Professional Licensure exams (LG AI Research) | 54.18 |
| | CLIcK | 1,995 Korean cultural and linguistic knowledge questions sourced from official exams and textbooks (KAIST) | 77.94 |
| | KoBALT | Korean linguistic understanding across syntax, semantics, pragmatics, phonetics, and morphology (SNU) | 47.29 |
Limitations
- Language Support: Optimized for English, Korean, and Japanese. Other languages may show degraded performance.
- Knowledge Cutoff: February 2025.
- Intermediate Checkpoint: See Tri-21B-Think for the final model.
License
This model is licensed under the Apache 2.0 License.

Contact
For inquiries: [email protected]

GGUF File List
| Filename | Size |
|---|---|
| Tri-21B-8bit.gguf (recommended) | 20.52 GB |
| Tri-21B-Think-Preview-fp16.gguf | 38.61 GB |
| Tri-21B-Think-Preview_8bit.gguf | 20.52 GB |
| Tri-21B-Think-fp16.gguf | 38.61 GB |
| Tri-21B-Think_4bit.gguf | 11.72 GB |
| Tri-21B-Think_5bit.gguf | 13.72 GB |
| Tri-21B-Think_6bit.gguf | 15.84 GB |
| Tri-21B-Think_8bit.gguf | 20.52 GB |
| Tri-21B-fp16.gguf | 38.61 GB |