Model Description


license: other
license_name: trillion
license_link: LICENSE
tags:
  - finetuned
  - chat
  - reasoning
language:
  - en
  - ko
  - ja
pipeline_tag: text-generation
library_name: transformers
base_model:
  - trillionlabs/Tri-21B

The base model Tri-21B is released under the Trillion license included in this repository; the Think and Think-Preview versions are licensed under Apache 2.0.



Tri-21B-Think-Preview

Introduction

Tri-21B-Think-Preview is an intermediate checkpoint of Tri-21B-Think, featuring mid-training context length expansion to 32K tokens and instruction tuning for chain-of-thought reasoning and tool use.

Model Specifications

  • Type: Causal Language Model (Reasoning-Enhanced)
  • Base Model: Tri-21B
  • Architecture: Transformer Decoder with RoPE, SwiGLU, RMSNorm, and GQA
  • Number of Parameters: 20.73B
  • Number of Layers: 40
  • Number of Attention Heads: 32 (Query) / 8 (Key, Value)
  • Head Dimension: 160
  • Hidden Size: 5,120
  • Intermediate Size: 27,392
  • Context Length: 32,768 (up to 262,144 with YaRN)
  • Vocab Size: 124,416
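The native 32,768-token context can reportedly be extended to 262,144 tokens with YaRN. A minimal sketch of the corresponding rope_scaling override, assuming a scaling factor derived from the two context lengths (the exact keys accepted depend on your transformers version and the model's config; verify against the repository before relying on this):

```python
# Sketch: deriving a YaRN rope_scaling override for long-context use.
# ASSUMPTION: key names ("rope_type", "factor",
# "original_max_position_embeddings") follow the common transformers
# convention; check the model's config.json for the authoritative values.
native_ctx = 32_768
target_ctx = 262_144
factor = target_ctx / native_ctx  # 262,144 / 32,768 = 8.0

rope_scaling = {
    "rope_type": "yarn",
    "factor": factor,
    "original_max_position_embeddings": native_ctx,
}
```

Such a dict would typically be passed via the model config (e.g. a `rope_scaling` entry) when loading checkpoints that support YaRN.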

Quickstart

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "trillionlabs/Tri-21B-Think-Preview"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Solve the following step by step: What is the sum of the first 100 prime numbers?"
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=4096,
    temperature=0.6,
    top_p=0.9,
)
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
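Because the model emits its chain of thought between special reasoning tags (added post-training for reasoning-parser compatibility, per the fine-tuning notes), you may want to separate the reasoning trace from the final answer. A minimal sketch, assuming the output uses `<think>...</think>` delimiters; confirm the actual tag strings in the tokenizer and chat template:

```python
# Sketch: split a generated string into reasoning trace and final answer.
# ASSUMPTION: the chain of thought is wrapped in <think>...</think>;
# adjust open_tag/close_tag if the checkpoint uses different delimiters.
def split_reasoning(text: str,
                    open_tag: str = "<think>",
                    close_tag: str = "</think>") -> tuple[str, str]:
    start = text.find(open_tag)
    end = text.find(close_tag)
    if start == -1 or end == -1:
        return "", text.strip()  # no reasoning block found
    reasoning = text[start + len(open_tag):end].strip()
    answer = text[end + len(close_tag):].strip()
    return reasoning, answer

reasoning, answer = split_reasoning("<think>2 + 2 = 4</think>The answer is 4.")
print(answer)  # "The answer is 4."
```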

vLLM & SGLang Deployment

vLLM and SGLang support for the Trillion models is on the way. Stay tuned!

Fine-tuning Notes

Note on tags: This model was trained without <think> and </think> as special tokens. They were added post-training for compatibility with reasoning parsers. If you plan to fine-tune this model, you will need to modify tokenizer_config.json to avoid indexing errors.

Replace tokens 123975 and 123976 in tokenizer_config.json:

"123975": {
  "content": "<|reserved_special_token_9|>",
  "lstrip": false,
  "normalized": false,
  "rstrip": false,
  "single_word": false,
  "special": true
},
"123976": {
  "content": "<|reserved_special_token_10|>",
  "lstrip": false,
  "normalized": false,
  "rstrip": false,
  "single_word": false,
  "special": true
}
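The edit above can also be applied programmatically. A minimal sketch that patches the two entries in a local tokenizer_config.json; it assumes the entries live under the `added_tokens_decoder` key, which is where fast tokenizers commonly store them (verify against your file before running):

```python
import json

# Sketch: replace the post-training special tokens with reserved tokens
# so fine-tuning frameworks do not hit indexing errors.
# ASSUMPTION: entries live under "added_tokens_decoder"; confirm in your
# local tokenizer_config.json.
def patch_tokenizer_config(path: str) -> None:
    with open(path, encoding="utf-8") as f:
        cfg = json.load(f)
    replacements = {
        "123975": "<|reserved_special_token_9|>",
        "123976": "<|reserved_special_token_10|>",
    }
    for token_id, content in replacements.items():
        cfg["added_tokens_decoder"][token_id] = {
            "content": content,
            "lstrip": False,
            "normalized": False,
            "rstrip": False,
            "single_word": False,
            "special": True,
        }
    with open(path, "w", encoding="utf-8") as f:
        json.dump(cfg, f, ensure_ascii=False, indent=2)
```

Back up the original file first; a malformed tokenizer_config.json will prevent the tokenizer from loading.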

Evaluation

| Category | Benchmark | Description | Tri-21B-Think-Preview |
|---|---|---|---|
| Reasoning | GPQA-Diamond | Graduate-level science questions across physics, chemistry, and biology (PhD-level) | 54 |
| Reasoning | AIME 2025 | American Invitational Mathematics Examination 2025 | 50.0 |
| Reasoning | MMLU-Pro | Massive Multitask Language Understanding with more answer choices and reasoning-focused questions | 65.19 |
| Reasoning | HLE | Humanity's Last Exam: 2,500 expert-level questions across 100+ subjects created by nearly 1,000 domain experts | 5.12 |
| Coding | LiveCodeBench v6 | Competitive programming benchmark with problems sourced from recent programming contests | 48.57 |
| Coding | SciCode | Code generation across 338 subproblems in 16 natural science fields drawn from real research workflows | 18 |
| Instruction Following | IFEval | Tests ability to follow precise formatting and output constraint instructions | 84.05 |
| Instruction Following | IFBench | Evaluates generalization to novel, verifiable output constraints not seen during training (Allen AI) | 51.02 |
| Agentic | TAU2-Bench (Telecom) | Dual-control conversational benchmark where both agent and user use tools to resolve telecom scenarios (Sierra) | 93 |
| Agentic | AA-LCR | Long-context reasoning over multiple documents at 10K–100K tokens (Artificial Analysis) | 15 |
| Agentic | AA-Omniscience | Factual reliability across 6,000 questions in 42 subtopics, penalizing hallucinations (Artificial Analysis) | -48.55 |
| Korean | KMMLU-Pro | 2,822 questions from 14 Korean National Professional Licensure exams (LG AI Research) | 54.18 |
| Korean | CLIcK | 1,995 Korean cultural and linguistic knowledge questions sourced from official exams and textbooks (KAIST) | 77.94 |
| Korean | KoBALT | Korean linguistic understanding across syntax, semantics, pragmatics, phonetics, and morphology (SNU) | 47.29 |

Limitations

  • Language Support: Optimized for English, Korean, and Japanese. Other languages may show degraded performance.
  • Knowledge Cutoff: February 2025.
  • Intermediate Checkpoint: See Tri-21B-Think for the final model.

License

This model is licensed under the Apache 2.0 License.

Contact

For inquiries: [email protected]

GGUF File List

πŸ“ Filename πŸ“¦ Size ⚑ Download
Tri-21B-8bit.gguf
Recommended LFS
20.52 GB Download
Tri-21B-Think-Preview-fp16.gguf
LFS FP16
38.61 GB Download
Tri-21B-Think-Preview_8bit.gguf
LFS
20.52 GB Download
Tri-21B-Think-fp16.gguf
LFS FP16
38.61 GB Download
Tri-21B-Think_4bit.gguf
LFS
11.72 GB Download
Tri-21B-Think_5bit.gguf
LFS
13.72 GB Download
Tri-21B-Think_6bit.gguf
LFS
15.84 GB Download
Tri-21B-Think_8bit.gguf
LFS
20.52 GB Download
Tri-21B-fp16.gguf
LFS FP16
38.61 GB Download