📋 Model Description


license: apache-2.0
language:
  • en
base_model: Menlo/Jan-nano-128k
base_model_relation: quantized
pipeline_tag: text-generation

Jan-Nano-128k: Empowering deeper research through extended context understanding.

Note: Jan-Nano is a non-thinking model.



Jan-Nano-128k

Authors: Alan Dao, Bach Vu Dinh

Overview

Jan-Nano-128k represents a significant advancement in compact language models for research applications. Building upon the success of Jan-Nano, this enhanced version features a native 128k context window that enables deeper, more comprehensive research capabilities without the performance degradation typically associated with context extension methods.

Key Improvements:

  • πŸ” Research Deeper: Extended context allows for processing entire research papers, lengthy documents, and complex multi-turn conversations
  • ⚑ Native 128k Window: Built from the ground up to handle long contexts efficiently, maintaining performance across the full context range
  • πŸ“ˆ Enhanced Performance: Unlike traditional context extension methods, Jan-Nano-128k shows improved performance with longer contexts

This model maintains full compatibility with Model Context Protocol (MCP) servers while dramatically expanding the scope of research tasks it can handle in a single session.

Evaluation

Jan-Nano-128k has been rigorously evaluated on the SimpleQA benchmark using our MCP-based methodology, demonstrating superior performance compared to its predecessor:

Figure: SimpleQA benchmark results for Jan-Nano-128k versus its predecessor (MCP-based evaluation).

Why Jan-Nano-128k?

Traditional approaches to extending context length, such as YaRN (Yet another RoPE extensioN), often degrade performance as the context grows. Jan-Nano-128k breaks this paradigm: because its 128k window is native rather than retrofitted, performance holds up across the full context range and, in many cases, improves with longer contexts.

This fundamental difference makes Jan-Nano-128k ideal for research applications requiring deep document analysis, multi-document synthesis, and complex reasoning over large information sets.

🖥️ How to Run Locally

Support for this model in the Jan desktop app is in progress. In the meantime, you can use one of the deployment options below, which we have tested.

For additional tutorials and community guidance, visit our Discussion Forums.

Deployment

Deploy using vLLM:

vllm serve Menlo/Jan-nano-128k \
  --host 0.0.0.0 \
  --port 1234 \
  --enable-auto-tool-choice \
  --tool-call-parser hermes \
  --rope-scaling '{"rope_type":"yarn","factor":3.2,"original_max_position_embeddings":40960}' \
  --max-model-len 131072
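Once the server is up, it exposes an OpenAI-compatible API on port 1234, so the Hermes tool-call parsing enabled above can be exercised with the standard openai Python client. This is only a sketch: the search_papers tool and its schema are hypothetical placeholders, not part of the model or any particular MCP setup.

```python
from openai import OpenAI

# Point the client at the local vLLM server started above; the API key is unused locally.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="EMPTY")

# Hypothetical tool definition for illustration only.
tools = [{
    "type": "function",
    "function": {
        "name": "search_papers",
        "description": "Search a document index for research papers",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

response = client.chat.completions.create(
    model="Menlo/Jan-nano-128k",
    messages=[{"role": "user", "content": "Find recent papers on long-context evaluation."}],
    tools=tools,
)
# With --enable-auto-tool-choice, the server returns parsed tool calls when the model emits one.
print(response.choices[0].message.tool_calls)
```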

Alternatively, serve with llama-server from llama.cpp:

llama-server ... --rope-scaling yarn --rope-scale 3.2 --yarn-orig-ctx 40960

Note: The chat template is included in the tokenizer. For troubleshooting, download the Non-think chat template.

Recommended Sampling Parameters

  • Temperature: 0.7
  • Top-p: 0.8
  • Top-k: 20
  • Min-p: 0.0
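As a minimal sketch of applying these values against the vLLM endpoint from the Deployment section (the prompt text is just an example): top_k and min_p are not part of the standard OpenAI request schema, so they are passed through extra_body, which vLLM accepts.

```python
from openai import OpenAI

# Local vLLM endpoint from the Deployment section; the API key is unused locally.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Menlo/Jan-nano-128k",
    messages=[{"role": "user", "content": "Summarize the key findings of the attached report."}],
    temperature=0.7,  # recommended sampling parameters from above
    top_p=0.8,
    extra_body={"top_k": 20, "min_p": 0.0},  # vLLM-specific extensions, not in the OpenAI schema
)
print(response.choices[0].message.content)
```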

FAQ:

  • I'm having a Jinja template issue with LM Studio. How can I fix it? See here.

🤝 Community & Support

📄 Citation

@misc{dao2025jannanotechnicalreport,
      title={Jan-nano Technical Report}, 
      author={Alan Dao and Dinh Bach Vu},
      year={2025},
      eprint={2506.22760},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2506.22760}, 
}

Jan-Nano-128k: Empowering deeper research through extended context understanding.

📂 GGUF File List

πŸ“ Filename πŸ“¦ Size ⚑ Download
jan-nano-128k-Q3_K_L.gguf
LFS Q3
2.09 GB Download
jan-nano-128k-Q3_K_M.gguf
LFS Q3
1.93 GB Download
jan-nano-128k-Q3_K_S.gguf
LFS Q3
1.76 GB Download
jan-nano-128k-Q4_0.gguf
Recommended LFS Q4
2.21 GB Download
jan-nano-128k-Q4_1.gguf
LFS Q4
2.42 GB Download
jan-nano-128k-Q4_K_M.gguf
LFS Q4
2.33 GB Download
jan-nano-128k-Q4_K_S.gguf
LFS Q4
2.22 GB Download
jan-nano-128k-Q5_0.gguf
LFS Q5
2.63 GB Download
jan-nano-128k-Q5_1.gguf
LFS Q5
2.84 GB Download
jan-nano-128k-Q5_K_M.gguf
LFS Q5
2.69 GB Download
jan-nano-128k-Q5_K_S.gguf
LFS Q5
2.63 GB Download
jan-nano-128k-Q6_K.gguf
LFS Q6
3.08 GB Download
jan-nano-128k-Q8_0.gguf
LFS Q8
3.99 GB Download
jan-nano-128k-iQ4_XS.gguf
LFS Q4
2.11 GB Download