Model Description
tags:
- unsloth
- ByteDance-Seed/Seed-Coder-8B-Reasoning
Unsloth Dynamic 2.0 quants achieve superior accuracy and outperform other leading quantizations.
Seed-Coder-8B-Reasoning
Introduction
We are thrilled to introduce Seed-Coder, a powerful, transparent, and parameter-efficient family of open-source code models at the 8B scale, featuring base, instruct, and reasoning variants. Seed-Coder promotes the evolution of open code models through the following highlights:
- Model-centric: Seed-Coder predominantly leverages LLMs instead of hand-crafted rules for code data filtering, minimizing manual effort in pretraining data construction.
- Transparent: We openly share detailed insights into our model-centric data pipeline, including methods for curating GitHub data, commits data, and code-related web data.
- Powerful: Seed-Coder achieves state-of-the-art performance among open-source models of comparable size across a diverse range of coding tasks.

This repo contains the Seed-Coder-8B-Reasoning model, which has the following features:
- Type: Causal language model
- Training Stage: Pretraining & Post-training
- Data Source: Public datasets
- Context Length: 65,536 tokens
Model Downloads
| Model Name | Length | Download | Notes |
|---|---|---|---|
| Seed-Coder-8B-Base | 32K | 🤗 Model | Pretrained on our model-centric code data. |
| Seed-Coder-8B-Instruct | 32K | 🤗 Model | Instruction-tuned for alignment with user intent. |
| Seed-Coder-8B-Reasoning | 64K | 🤗 Model | RL-trained to boost reasoning capabilities. |
| Seed-Coder-8B-Reasoning-bf16 | 64K | 🤗 Model | RL-trained to boost reasoning capabilities. |
Requirements
You will need to install the latest versions of `transformers` and `accelerate`:

```shell
pip install -U transformers accelerate
```
Quickstart
Here is a simple example demonstrating how to load the model and perform code generation with the Hugging Face Transformers API:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "ByteDance-Seed/Seed-Coder-8B-Reasoning"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

messages = [
    {"role": "user", "content": "Write a quick sort algorithm."},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    return_tensors="pt",
    add_generation_prompt=True,
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=16384)
response = tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(response)
```
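Reasoning models typically return the final program wrapped in a fenced code block inside the decoded response. A minimal post-processing helper (hypothetical, not part of the model's API) to pull those blocks out might look like:

```python
import re

# Literal triple backtick, built indirectly so this snippet stays paste-safe
# inside markdown documents.
_FENCE = "`" * 3

def extract_code_blocks(response: str) -> list:
    """Return the bodies of all fenced code blocks in a model response."""
    pattern = _FENCE + r"(?:\w+)?\n(.*?)" + _FENCE
    return re.findall(pattern, response, re.DOTALL)

# Example with a stand-in response string:
sample = "Sure, here is quick sort:\n" + _FENCE + "python\ndef qs(a):\n    return a\n" + _FENCE
print(extract_code_blocks(sample))  # ['def qs(a):\n    return a\n']
```

Running this over `response` from the Quickstart above would yield just the generated code, stripped of the surrounding reasoning text.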
Evaluation
Seed-Coder-8B-Reasoning delivers impressive performance on competitive programming, demonstrating that smaller LLMs can be competent at complex reasoning tasks. Our model surpasses QwQ-32B and DeepSeek-R1 on IOI 2024, and achieves an Elo rating comparable to o1-mini on Codeforces contests.

For detailed benchmark performance, please refer to our Technical Report.
License
This project is licensed under the MIT License. See the LICENSE file for details.
GGUF File List

| Filename | Quant | Size | Download |
|---|---|---|---|
| Seed-Coder-8B-Reasoning-BF16.gguf | BF16 | 15.37 GB | Download |
| Seed-Coder-8B-Reasoning-IQ4_NL.gguf | Q4 | 4.5 GB | Download |
| Seed-Coder-8B-Reasoning-IQ4_XS.gguf | Q4 | 4.3 GB | Download |
| Seed-Coder-8B-Reasoning-Q2_K.gguf | Q2 | 3.08 GB | Download |
| Seed-Coder-8B-Reasoning-Q2_K_L.gguf | Q2 | 3.22 GB | Download |
| Seed-Coder-8B-Reasoning-Q3_K_M.gguf | Q3 | 3.87 GB | Download |
| Seed-Coder-8B-Reasoning-Q3_K_S.gguf | Q3 | 3.54 GB | Download |
| Seed-Coder-8B-Reasoning-Q4_1.gguf | Q4 | 4.92 GB | Download |
| Seed-Coder-8B-Reasoning-Q4_K_M.gguf (Recommended) | Q4 | 4.72 GB | Download |
| Seed-Coder-8B-Reasoning-Q4_K_S.gguf | Q4 | 4.51 GB | Download |
| Seed-Coder-8B-Reasoning-Q5_K_M.gguf | Q5 | 5.49 GB | Download |
| Seed-Coder-8B-Reasoning-Q5_K_S.gguf | Q5 | 5.37 GB | Download |
| Seed-Coder-8B-Reasoning-Q6_K.gguf | Q6 | 6.31 GB | Download |
| Seed-Coder-8B-Reasoning-Q8_0.gguf | Q8 | 8.17 GB | Download |
| Seed-Coder-8B-Reasoning-UD-IQ1_M.gguf | | 2.25 GB | Download |
| Seed-Coder-8B-Reasoning-UD-IQ1_S.gguf | | 2.14 GB | Download |
| Seed-Coder-8B-Reasoning-UD-IQ2_M.gguf | Q2 | 2.91 GB | Download |
| Seed-Coder-8B-Reasoning-UD-IQ2_XXS.gguf | Q2 | 2.45 GB | Download |
| Seed-Coder-8B-Reasoning-UD-IQ3_XXS.gguf | Q3 | 3.21 GB | Download |
| Seed-Coder-8B-Reasoning-UD-Q2_K_XL.gguf | Q2 | 3.3 GB | Download |
| Seed-Coder-8B-Reasoning-UD-Q3_K_XL.gguf | Q3 | 4.04 GB | Download |
| Seed-Coder-8B-Reasoning-UD-Q4_K_XL.gguf | Q4 | 4.79 GB | Download |
| Seed-Coder-8B-Reasoning-UD-Q5_K_XL.gguf | Q5 | 5.48 GB | Download |
| Seed-Coder-8B-Reasoning-UD-Q6_K_XL.gguf | Q6 | 7.04 GB | Download |
| Seed-Coder-8B-Reasoning-UD-Q8_K_XL.gguf | Q8 | 10.26 GB | Download |
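The file sizes above span roughly 2 GB to 15 GB. As a rough sketch (the helper and its headroom factor are illustrative, not an official sizing tool; sizes are copied from the table for a subset of quants), you could pick the largest quant that fits a given memory budget like this:

```python
# GGUF file sizes in GB, copied from the table above (subset of quants).
QUANTS = {
    "Q2_K": 3.08,
    "Q3_K_M": 3.87,
    "Q4_K_M": 4.72,
    "Q5_K_M": 5.49,
    "Q6_K": 6.31,
    "Q8_0": 8.17,
    "BF16": 15.37,
}

def pick_quant(budget_gb: float, headroom: float = 1.2):
    """Largest quant whose file size, scaled by a rough headroom factor
    for KV cache and runtime overhead, fits within budget_gb."""
    fitting = {q: s for q, s in QUANTS.items() if s * headroom <= budget_gb}
    return max(fitting, key=fitting.get) if fitting else None

print(pick_quant(8.0))  # -> 'Q6_K'
print(pick_quant(1.0))  # -> None (no quant fits)
```

Note that actual memory use also depends on context length and the runtime you load the file with, so treat this only as a first-pass filter before testing a quant locally.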