πŸ“‹ Model Description


pipeline_tag: text-ranking
tags:
  • gguf
  • reranker
  • qwen3
  • llama-cpp
language:
  • multilingual
base_model: jinaai/jina-reranker-v3
base_model_relation: quantized
inference: false
license: cc-by-nc-4.0
library_name: llama.cpp

jina-reranker-v3-GGUF

GGUF quantizations of jina-reranker-v3 using llama.cpp. A 0.6B parameter multilingual listwise reranker quantized for efficient inference.

Requirements

  • Python 3.8+
  • llama.cpp binaries (llama-embedding and llama-tokenize)
  • Hanxiao's llama.cpp fork is recommended: https://github.com/hanxiao/llama.cpp

Installation

pip install numpy safetensors
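
The pip command above only installs the Python dependencies; the llama-embedding and llama-tokenize binaries must be built separately from llama.cpp. As a quick sanity check (a minimal sketch, not part of rerank.py), you can verify both binaries are reachable before constructing the reranker:

import shutil

# Hypothetical pre-flight check: confirm the llama.cpp binaries are on PATH.
for binary in ("llama-embedding", "llama-tokenize"):
    path = shutil.which(binary)
    if path is None:
        raise FileNotFoundError(
            f"{binary} not found on PATH; build it from the recommended "
            "llama.cpp fork and pass its full path to GGUFReranker instead."
        )
    print(f"Found {binary} at {path}")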

Files

  • jina-reranker-v3-BF16.gguf - Model weights in BF16 (1.12 GB)
  • projector.safetensors - MLP projector weights (3 MB); see the inspection sketch after this list
  • rerank.py - Reranker implementation
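
If you want to confirm the projector file is intact, a minimal sketch using the safetensors numpy API simply lists the tensors it contains (no assumptions are made about their names or shapes):

from safetensors.numpy import load_file

# Inspect the MLP projector: print every tensor name with its shape and dtype.
projector = load_file("projector.safetensors")
for name, tensor in projector.items():
    print(name, tensor.shape, tensor.dtype)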

Usage

from rerank import GGUFReranker

# Initialize the reranker
reranker = GGUFReranker(
    model_path="jina-reranker-v3-BF16.gguf",
    projector_path="projector.safetensors",
    llama_embedding_path="/path/to/llama-embedding",
)

# Rerank documents
query = "What is the capital of France?"
documents = [
    "Paris is the capital and largest city of France.",
    "Berlin is the capital of Germany.",
    "The Eiffel Tower is located in Paris.",
]

results = reranker.rerank(query, documents)

for result in results:
print(f"Score: {result['relevance_score']:.4f}, Doc: {result['document'][:50]}...")

API

GGUFReranker.rerank(query, documents, top_n=None, return_embeddings=False, instruction=None)

Arguments:

  • query (str): Search query
  • documents (List[str]): Documents to rerank
  • top_n (int, optional): Return only the top N results
  • return_embeddings (bool): Include embeddings in output
  • instruction (str, optional): Custom ranking instruction

Returns:
A list of dicts with keys index, relevance_score, and document; embedding is included when return_embeddings=True.
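
Building on the Usage snippet above, the optional arguments can be combined as follows; the instruction string is only an illustration, not a value prescribed by the model:

# Keep only the two best documents and pass a custom ranking instruction.
results = reranker.rerank(
    query,
    documents,
    top_n=2,
    instruction="Rank documents by how directly they answer the question.",
)

for result in results:
    print(result["index"], round(result["relevance_score"], 4), result["document"])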

Citation

If you find jina-reranker-v3 useful in your research, please cite the original paper:

@misc{wang2025jinarerankerv3lateinteractiondocument,
      title={jina-reranker-v3: Last but Not Late Interaction for Document Reranking},
      author={Feng Wang and Yuqing Li and Han Xiao},
      year={2025},
      eprint={2509.25085},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2509.25085},
}

License

These GGUF quantizations follow the same CC BY-NC 4.0 license as the original model. For commercial usage inquiries, please contact Jina AI.

πŸ“‚ GGUF File List

πŸ“ Filename πŸ“¦ Size ⚑ Download
jina-reranker-v3-BF16.gguf
LFS FP16
1.12 GB Download
jina-reranker-v3-IQ1_M.gguf
LFS
206.62 MB Download
jina-reranker-v3-IQ1_S.gguf
LFS
198.95 MB Download
jina-reranker-v3-IQ2_M.gguf
LFS Q2
253.21 MB Download
jina-reranker-v3-IQ2_XXS.gguf
LFS Q2
219.39 MB Download
jina-reranker-v3-IQ3_M.gguf
LFS Q3
321.04 MB Download
jina-reranker-v3-IQ3_S.gguf
LFS Q3
308.68 MB Download
jina-reranker-v3-IQ3_XS.gguf
LFS Q3
298.84 MB Download
jina-reranker-v3-IQ3_XXS.gguf
LFS Q3
266.67 MB Download
jina-reranker-v3-IQ4_NL.gguf
LFS Q4
364.47 MB Download
jina-reranker-v3-IQ4_XS.gguf
LFS Q4
351.34 MB Download
jina-reranker-v3-Q2_K.gguf
LFS Q2
283.09 MB Download
jina-reranker-v3-Q3_K_M.gguf
LFS Q3
331.62 MB Download
jina-reranker-v3-Q4_K_M.gguf
Recommended LFS Q4
378.9 MB Download
jina-reranker-v3-Q5_K_M.gguf
LFS Q5
424.4 MB Download
jina-reranker-v3-Q5_K_S.gguf
LFS Q5
416.97 MB Download
jina-reranker-v3-Q6_K.gguf
LFS Q6
472.75 MB Download
jina-reranker-v3-Q8_0.gguf
LFS Q8
610.4 MB Download