πŸ“‹ Model Description


license: cc-by-4.0
language:
  • en
base_model:
  • nvidia/OpenReasoning-Nemotron-32B
pipeline_tag: text-generation
library_name: transformers
tags:
  • nvidia
  • unsloth
  • code

> [!NOTE]
> Includes Unsloth chat template fixes!
> For llama.cpp, use `--jinja`.

Unsloth Dynamic 2.0 achieves superior accuracy & outperforms other leading quants.
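
Because the chat template matters for correct prompting, you can inspect how a conversation is rendered directly from Transformers. The snippet below is a minimal sketch (not from the original card) using the base `nvidia/OpenReasoning-Nemotron-32B` tokenizer; it is the Python counterpart of what `--jinja` does for llama.cpp.

```python
# Minimal sketch: render a conversation with the tokenizer's Jinja chat
# template, the same template llama.cpp applies when run with --jinja.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nvidia/OpenReasoning-Nemotron-32B")
messages = [{"role": "user", "content": "What is 2 + 2?"}]

# Returns the fully formatted prompt string, ready to be passed to the model.
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(text)
```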



OpenReasoning-Nemotron-32B Overview

Description:

OpenReasoning-Nemotron-32B is a large language model (LLM) derived from Qwen2.5-32B-Instruct (the reference model). It is a reasoning model post-trained to generate solutions for math, code, and science problems. We evaluated this model with up to 64K output tokens. The OpenReasoning models are available in the following sizes: 1.5B, 7B, 14B, and 32B.

This model is ready for commercial/non-commercial research use.

License/Terms of Use:

GOVERNING TERMS: Use of the models listed above is governed by the Creative Commons Attribution 4.0 International License (CC-BY-4.0). ADDITIONAL INFORMATION: Apache 2.0 License.

Scores on Reasoning Benchmarks

[Figure: Evaluation Results with pass@1]

Our models demonstrate exceptional performance across a suite of challenging reasoning benchmarks. The 7B, 14B, and 32B models consistently set new state-of-the-art records for their size classes.

| Model | Artificial Analysis Index* | GPQA | MMLU-PRO | HLE | LiveCodeBench* | SciCode | AIME24 | AIME25 | HMMT Feb 25 |
|---|---|---|---|---|---|---|---|---|---|
| 1.5B | 31.0 | 31.6 | 47.5 | 5.5 | 28.6 | 2.2 | 55.5 | 45.6 | 31.5 |
| 7B | 54.7 | 61.1 | 71.9 | 8.3 | 63.3 | 16.2 | 84.7 | 78.2 | 63.5 |
| 14B | 60.9 | 71.6 | 77.5 | 10.1 | 67.8 | 23.5 | 87.8 | 82.0 | 71.2 |
| 32B | 64.3 | 73.1 | 80.0 | 11.9 | 70.2 | 28.5 | 89.2 | 84.0 | 73.8 |
\* This is our estimation of the Artificial Analysis Intelligence Index, not an official score.

\* LiveCodeBench version 6, date range 2408-2505.

Combining the work of multiple agents

OpenReasoning-Nemotron models can be used in a "heavy" mode by launching multiple parallel generations and combining them via generative solution selection (GenSelect). To add this "skill", we follow the original GenSelect training pipeline, except that we do not train on the selection summary but instead use the full reasoning trace of DeepSeek R1 0528 671B. We only train the models to select the best solution for math problems, but surprisingly find that this capability generalizes directly to code and science questions! With this "heavy" GenSelect inference mode, the OpenReasoning-Nemotron-32B model surpasses O3 (High) on math and coding benchmarks.
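
The following is an illustrative sketch of that flow, not the exact NeMo-Skills implementation: the selection prompt is a simplified stand-in (the real prompts live in the NeMo-Skills repo), and `pipeline` is the Transformers pipeline built in the "How to use the models?" section below.

```python
import re

def genselect(pipeline, problem_prompt, num_candidates=8, max_new_tokens=64000):
    # 1) Launch several independent sampled generations of the same problem.
    candidates = []
    for _ in range(num_candidates):
        out = pipeline(
            [{"role": "user", "content": problem_prompt}],
            max_new_tokens=max_new_tokens,
            do_sample=True,
            temperature=0.6,
        )
        candidates.append(out[0]["generated_text"][-1]["content"])

    # 2) Ask the model to reason over all candidates and pick the best one
    #    (generative selection). This prompt is a simplified placeholder.
    numbered = "\n\n".join(f"Solution {i}:\n{c}" for i, c in enumerate(candidates))
    selection_prompt = (
        "You are given several candidate solutions to the same problem. "
        "Reason about their correctness and finish with the index of the best "
        "solution inside \\boxed{}.\n\n" + numbered
    )
    selection = pipeline(
        [{"role": "user", "content": selection_prompt}],
        max_new_tokens=max_new_tokens,
    )[0]["generated_text"][-1]["content"]

    # 3) Parse the selected index; fall back to the first candidate.
    match = re.search(r"\\boxed\{(\d+)\}", selection)
    if match and int(match.group(1)) < len(candidates):
        return candidates[int(match.group(1))]
    return candidates[0]
```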

[Figure: Evaluation Results with GenSelect]

| Model | Benchmark | Pass@1 (Avg@64) | Majority@64 | GenSelect |
|---|---|---|---|---|
| 1.5B | AIME24 | 55.5 | 76.7 | 76.7 |
| 1.5B | AIME25 | 45.6 | 70.0 | 70.0 |
| 1.5B | HMMT Feb 25 | 31.5 | 46.7 | 53.3 |
| 7B | AIME24 | 84.7 | 93.3 | 93.3 |
| 7B | AIME25 | 78.2 | 86.7 | 93.3 |
| 7B | HMMT Feb 25 | 63.5 | 83.3 | 90.0 |
| 7B | LCB v6 2408-2505 | 63.4 | n/a | 67.7 |
| 14B | AIME24 | 87.8 | 93.3 | 93.3 |
| 14B | AIME25 | 82.0 | 90.0 | 90.0 |
| 14B | HMMT Feb 25 | 71.2 | 86.7 | 93.3 |
| 14B | LCB v6 2408-2505 | 67.9 | n/a | 69.1 |
| 32B | AIME24 | 89.2 | 93.3 | 93.3 |
| 32B | AIME25 | 84.0 | 90.0 | 93.3 |
| 32B | HMMT Feb 25 | 73.8 | 86.7 | 96.7 |
| 32B | LCB v6 2408-2505 | 70.2 | n/a | 75.3 |
| 32B | HLE | 11.8 | 13.4 | 15.5 |
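
For reference, the column headers follow the standard estimator definitions (these formulas are not specific to this card): with $k = 64$ samples per problem,

$$
\text{pass@1 (Avg@}k\text{)} = \frac{1}{k}\sum_{i=1}^{k}\mathbf{1}\!\left[\text{sample } i \text{ is correct}\right],
\qquad
\text{majority@}k = \mathbf{1}\!\left[\text{the most frequent final answer among the } k \text{ samples is correct}\right].
$$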

How to use the models?

To run inference on coding problems:

```python
import transformers
import torch

model_id = "nvidia/OpenReasoning-Nemotron-32B"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)
```

Code generation prompt

prompt = """You are a helpful and harmless assistant. You should think step-by-step before responding to the instruction below. Please use python programming language only. You must use
python for just the final solution code block with the following format:
# Your code here
{user} """

Math generation prompt

prompt = """Solve the following math problem. Make sure to put the answer (and only answer) inside \\boxed{}.

#

{user}

"""

Science generation prompt

You can refer to the prompts here:

  • https://github.com/NVIDIA/NeMo-Skills/blob/main/nemo_skills/prompt/config/generic/hle.yaml (HLE)
  • https://github.com/NVIDIA/NeMo-Skills/blob/main/nemo_skills/prompt/config/eval/aai/mcq-4choices-boxed.yaml (GPQA)
  • https://github.com/NVIDIA/NeMo-Skills/blob/main/nemo_skills/prompt/config/eval/aai/mcq-10choices-boxed.yaml (MMLU-Pro)

```python
messages = [
    {
        "role": "user",
        "content": prompt.format(
            user="Write a program to calculate the sum of the first $N$ fibonacci numbers"
        ),
    },
]

outputs = pipeline(
    messages,
    max_new_tokens=64000,
)
print(outputs[0]["generated_text"][-1]["content"])
```
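
Because the code-generation prompt asks the model to put only the final solution in a fenced `python` code block, a small helper can pull that block out of the long reasoning trace. This is our own sketch, not part of the card:

```python
import re

def extract_final_code(generation: str) -> str | None:
    # The solution is expected to be the last ```python fenced block in the output.
    blocks = re.findall(r"```python\n(.*?)```", generation, flags=re.DOTALL)
    return blocks[-1].rstrip() if blocks else None

solution = extract_final_code(outputs[0]["generated_text"][-1]["content"])
print(solution)
```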

Citation

If you find the data useful, please cite:


```bibtex
@article{ahmad2025opencodereasoning,
  title={OpenCodeReasoning: Advancing Data Distillation for Competitive Coding},
  author={Wasi Uddin Ahmad and Sean Narenthiran and Somshubra Majumdar and Aleksander Ficek and Siddhartha Jain and Jocelyn Huang and Vahid Noroozi and Boris Ginsburg},
  year={2025},
  eprint={2504.01943},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2504.01943},
}

@misc{ahmad2025opencodereasoningiisimpletesttime,
  title={OpenCodeReasoning-II: A Simple Test Time Scaling Approach via Self-Critique},
  author={Wasi Uddin Ahmad and Somshubra Majumdar and Aleksander Ficek and Sean Narenthiran and Mehrzad Samadi and Jocelyn Huang and Siddhartha Jain and Vahid Noroozi and Boris Ginsburg},
  year={2025},
  eprint={2507.09075},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2507.09075},
}

@misc{moshkov2025aimo2winningsolutionbuilding,
  title={AIMO-2 Winning Solution: Building State-of-the-Art Mathematical Reasoning Models with OpenMathReasoning dataset},
  author={Ivan Moshkov and Darragh Hanley and Ivan Sorokin and Shubham Toshniwal and Christof Henkel and Benedikt Schifferer and Wei Du and Igor Gitman},
  year={2025},
  eprint={2504.16891},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2504.16891},
}
```

Additional Information:

Deployment Geography:

Global

Use Case:

This model is intended for developers and researchers who work on competitive math, code, and science problems. It was trained using only supervised fine-tuning to achieve strong scores on these benchmarks.

Release Date:

Huggingface [07/16/2025] via https://huggingface.co/nvidia/OpenReasoning-Nemotron-32B/

Reference(s):

  • [2504.01943] OpenCodeReasoning: Advancing Data Distillation for Competitive Coding
  • [2507.09075] OpenCodeReasoning-II: A Simple Test Time Scaling Approach via Self-Critique
  • [2504.16891] AIMO-2 Winning Solution: Building State-of-the-Art Mathematical Reasoning Models with OpenMathReasoning dataset

Model Architecture:

Architecture Type: Dense decoder-only Transformer model
Network Architecture: Qwen2.5-32B-Instruct

This model was developed based on Qwen2.5-32B-Instruct and has 32B model parameters.

OpenReasoning-Nemotron-1.5B was developed based on Qwen2.5-1.5B-Instruct and has 1.5B model parameters.

OpenReasoning-Nemotron-7B was developed based on Qwen2.5-7B-Instruct and has 7B model parameters.

OpenReasoning-Nemotron-14B was developed based on Qwen2.5-14B-Instruct and has 14B model parameters.

OpenReasoning-Nemotron-32B was developed based on Qwen2.5-32B-Instruct and has 32B model parameters.

Input:

Input Type(s): Text
Input Format(s): String
Input Parameters: One-Dimensional (1D)
Other Properties Related to Input: Trained for up to 64,000 output tokens

Output:

Output Type(s): Text
Output Format: String
Output Parameters: One-Dimensional (1D)
Other Properties Related to Output: Trained for up to 64,000 output tokens

Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.

Software Integration:

  • Runtime Engine: NeMo 2.3.0
  • Recommended Hardware Microarchitecture Compatibility: NVIDIA Ampere, NVIDIA Hopper
  • Preferred/Supported Operating System(s): Linux

Model Version(s):

1.0 (7/16/2025)
OpenReasoning-Nemotron-32B
OpenReasoning-Nemotron-14B
OpenReasoning-Nemotron-7B
OpenReasoning-Nemotron-1.5B

Training and Evaluation Datasets:

Training Dataset:

The training corpus for OpenReasoning-Nemotron-32B consists of questions from the OpenCodeReasoning dataset, OpenCodeReasoning-II, OpenMathReasoning, and the synthetic science questions from the Llama-Nemotron-Post-Training-Dataset. All responses are generated using DeepSeek-R1-0528. We also include the instruction-following and tool-calling data from the Llama-Nemotron-Post-Training-Dataset without modification.

Data Collection Method: Hybrid: Automated, Human, Synthetic

Labeling Method: Hybrid: Automated, Human, Synthetic

Properties: 5M DeepSeek-R1-0528 generated responses from OpenCodeReasoning questions (https://huggingface.co/datasets/nvidia/OpenCodeReasoning), OpenMathReasoning, and the Synthetic Science questions from the Llama-Nemotron-Post-Training-Dataset. We also include the instruction following and tool calling data from Llama-Nemotron-Post-Training-Dataset without modification.

Evaluation Dataset:

We used the following benchmarks to evaluate the model holistically.

Math

  • AIME 2024/2025
  • HMMT
  • BRUNO 2025

Code

  • LiveCodeBench
  • SciCode

Science

  • GPQA
  • MMLU-PRO
  • HLE

Data Collection Method: Hybrid: Automated, Human, Synthetic

Labeling Method: Hybrid: Automated, Human, Synthetic

Inference:

Acceleration Engine: vLLM, TensorRT-LLM
Test Hardware: NVIDIA H100-80GB
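
As a rough starting point for vLLM, here is a sketch of ours (not from the card); the parallelism, context length, and sampling values are illustrative and should be adjusted to your GPUs:

```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

model_id = "nvidia/OpenReasoning-Nemotron-32B"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Two H100-80GB GPUs comfortably hold the BF16 weights; adjust as needed.
llm = LLM(model=model_id, tensor_parallel_size=2, max_model_len=65536)

messages = [{"role": "user", "content": (
    "Solve the following math problem. Make sure to put the answer "
    "(and only answer) inside \\boxed{}.\n\nWhat is 17 * 24?"
)}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

outputs = llm.generate([prompt], SamplingParams(temperature=0.6, top_p=0.95, max_tokens=32768))
print(outputs[0].outputs[0].text)
```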

Ethical Considerations:

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

For more detailed information on ethical considerations for this model, please see the Model Card++ Explainability, Bias, Safety & Security, and Privacy Subcards.

Please report model quality, risk, security vulnerabilities or NVIDIA AI Concerns here.

πŸ“‚ GGUF File List

πŸ“ Filename πŸ“¦ Size ⚑ Download
OpenReasoning-Nemotron-32B-IQ4_NL.gguf
LFS Q4
17.4 GB Download
OpenReasoning-Nemotron-32B-IQ4_XS.gguf
LFS Q4
16.5 GB Download
OpenReasoning-Nemotron-32B-Q2_K.gguf
LFS Q2
11.47 GB Download
OpenReasoning-Nemotron-32B-Q2_K_L.gguf
LFS Q2
11.64 GB Download
OpenReasoning-Nemotron-32B-Q3_K_M.gguf
LFS Q3
14.84 GB Download
OpenReasoning-Nemotron-32B-Q3_K_S.gguf
LFS Q3
13.4 GB Download
OpenReasoning-Nemotron-32B-Q4_0.gguf
Recommended LFS Q4
17.43 GB Download
OpenReasoning-Nemotron-32B-Q4_1.gguf
LFS Q4
19.22 GB Download
OpenReasoning-Nemotron-32B-Q4_K_M.gguf
LFS Q4
18.49 GB Download
OpenReasoning-Nemotron-32B-Q4_K_S.gguf
LFS Q4
17.49 GB Download
OpenReasoning-Nemotron-32B-Q5_K_M.gguf
LFS Q5
21.66 GB Download
OpenReasoning-Nemotron-32B-Q5_K_S.gguf
LFS Q5
21.08 GB Download
OpenReasoning-Nemotron-32B-Q6_K.gguf
LFS Q6
25.04 GB Download
OpenReasoning-Nemotron-32B-Q8_0.gguf
LFS Q8
32.43 GB Download
OpenReasoning-Nemotron-32B-UD-IQ1_M.gguf
LFS
7.62 GB Download
OpenReasoning-Nemotron-32B-UD-IQ1_S.gguf
LFS
7.04 GB Download
OpenReasoning-Nemotron-32B-UD-IQ2_M.gguf
LFS Q2
10.61 GB Download
OpenReasoning-Nemotron-32B-UD-IQ2_XXS.gguf
LFS Q2
8.59 GB Download
OpenReasoning-Nemotron-32B-UD-IQ3_XXS.gguf
LFS Q3
12.06 GB Download
OpenReasoning-Nemotron-32B-UD-Q2_K_XL.gguf
LFS Q2
11.83 GB Download
OpenReasoning-Nemotron-32B-UD-Q3_K_XL.gguf
LFS Q3
15.18 GB Download
OpenReasoning-Nemotron-32B-UD-Q4_K_XL.gguf
LFS Q4
18.68 GB Download
OpenReasoning-Nemotron-32B-UD-Q5_K_XL.gguf
LFS Q5
21.73 GB Download
OpenReasoning-Nemotron-32B-UD-Q6_K_XL.gguf
LFS Q6
26.56 GB Download
OpenReasoning-Nemotron-32B-UD-Q8_K_XL.gguf
LFS Q8
36.07 GB Download