📋 Model Description


quantized_by: Nomic
pipeline_tag: sentence-similarity
base_model: nomic-ai/nomic-embed-text-v2-moe
base_model_relation: quantized
tags:
  • sentence-similarity
  • feature-extraction
license: apache-2.0
language:
  • en
  • es
  • fr
  • de
  • it
  • pt
  • pl
  • nl
  • tr
  • ja
  • vi
  • ru
  • id
  • ar
  • cs
  • ro
  • sv
  • el
  • uk
  • zh
  • hu
  • da
  • 'no'
  • hi
  • fi
  • bg
  • ko
  • sk
  • th
  • he
  • ca
  • lt
  • fa
  • ms
  • sl
  • lv
  • mr
  • bn
  • sq
  • cy
  • be
  • ml
  • kn
  • mk
  • ur
  • fy
  • te
  • eu
  • sw
  • so
  • sd
  • uz
  • co
  • hr
  • gu
  • ce
  • eo
  • jv
  • la
  • zu
  • mn
  • si
  • ga
  • ky
  • tg
  • my
  • km
  • mg
  • pa
  • sn
  • ha
  • ht
  • su
  • gd
  • ny
  • ps
  • ku
  • am
  • ig
  • lo
  • mi
  • nn
  • sm
  • yi
  • st
  • tl
  • xh
  • yo
  • af
  • ta
  • tn
  • ug
  • az
  • ba
  • bs
  • dv
  • et
  • gl
  • gn
  • gv
  • hy

Llama.cpp Quantizations of nomic-embed-text-v2-moe: Multilingual Mixture of Experts Text Embeddings

Blog | Technical Report | AWS SageMaker | Atlas Embedding and Unstructured Data Analytics Platform

This model was presented in the paper Training Sparse Mixture Of Experts Text Embedding Models.

Using llama.cpp commit e3a9421b7 for quantization.

Original model: nomic-embed-text-v2-moe

Usage

This model can be used with the llama.cpp server and other software that supports llama.cpp embedding models.

Embedding text with nomic-embed-text requires task instruction prefixes at the beginning of each string.

For example, the code below shows how to use the search_query prefix to embed user questions, e.g. in a RAG application.

Start a llama.cpp server:

llama-server -m nomic-embed-text-v2-moe.bf16.gguf --embeddings

And run this code:

import requests

def dot(va, vb):
    return sum(a * b for a, b in zip(va, vb))

def embed(texts):
    resp = requests.post('http://localhost:8080/v1/embeddings', json={'input': texts}).json()
    return [d['embedding'] for d in resp['data']]

docs = ['嵌å…Ĩ垈酷', '羊驼垈酷']  # 'embeddings are cool', 'llamas are cool'
docs_embed = embed(['search_document: ' + d for d in docs])

query = '跟我讲讲嵌å…Ĩ'  # 'tell me about embeddings'
query_embed = embed(['search_query: ' + query])[0]
print(f'query: {query!r}')
for d, e in zip(docs, docs_embed):
    print(f'similarity {dot(query_embed, e):.2f}: {d!r}')

You should see output similar to this:

query: '跟我讲讲嵌å…Ĩ'
similarity 0.48: '嵌å…Ĩ垈酷'
similarity 0.19: '羊驼垈酷'
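
The server's /v1/embeddings endpoint is OpenAI-compatible, so the same request can also be made with the openai Python client. A minimal sketch, assuming the openai package is installed and the server above is running; the model name and api_key values are placeholders:

from openai import OpenAI

# Point the client at the local llama.cpp server started above; the api_key is a
# placeholder, since llama-server does not require one unless configured to.
client = OpenAI(base_url='http://localhost:8080/v1', api_key='sk-no-key-required')

resp = client.embeddings.create(
    model='nomic-embed-text-v2-moe',   # informational for a single-model server
    input=['search_document: embeddings are cool'],
)
print(len(resp.data[0].embedding))     # embedding dimension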

Download a file (not the whole branch) from below:

| Filename | Quant Type | File Size | Description |
| --- | --- | --- | --- |
| nomic-embed-text-v2-moe.f32.gguf | f32 | 1820 MiB | Full FP32 weights. |
| nomic-embed-text-v2-moe.f16.gguf | f16 | 913 MiB | Full FP16 weights. |
| nomic-embed-text-v2-moe.bf16.gguf | bf16 | 913 MiB | Full BF16 weights. |
| nomic-embed-text-v2-moe.Q8_0.gguf | Q8_0 | 488 MiB | Extremely high quality, generally unneeded but max available quant. |
| nomic-embed-text-v2-moe.Q6_K.gguf | Q6_K | 379 MiB | Very high quality, near perfect, recommended. |
| nomic-embed-text-v2-moe.Q5_K_M.gguf | Q5_K_M | 354 MiB | High quality, recommended. |
| nomic-embed-text-v2-moe.Q5_K_S.gguf | Q5_K_S | 343 MiB | High quality, recommended. |
| nomic-embed-text-v2-moe.Q4_1.gguf | Q4_1 | 326 MiB | Legacy format, similar performance to Q4_K_S but with improved tokens/watt on Apple silicon. |
| nomic-embed-text-v2-moe.Q4_K_M.gguf | Q4_K_M | 328 MiB | Good quality, default size for most use cases, recommended. |
| nomic-embed-text-v2-moe.Q4_K_S.gguf | Q4_K_S | 310 MiB | Slightly lower quality with more space savings, recommended. |
| nomic-embed-text-v2-moe.Q4_0.gguf | Q4_0 | 309 MiB | Legacy format, offers online repacking for ARM and AVX CPU inference. |
| nomic-embed-text-v2-moe.Q3_K_L.gguf | Q3_K_L | 307 MiB | Lower quality but usable, good for low RAM availability. |
| nomic-embed-text-v2-moe.Q3_K_M.gguf | Q3_K_M | 294 MiB | Low quality. |
| nomic-embed-text-v2-moe.Q3_K_S.gguf | Q3_K_S | 275 MiB | Low quality, not recommended. |
| nomic-embed-text-v2-moe.Q2_K.gguf | Q2_K | 261 MiB | Very low quality but surprisingly usable. |
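
To fetch a single file programmatically instead of cloning the whole branch, huggingface_hub's hf_hub_download can be used. A minimal sketch; the repo_id below is an assumption and should be replaced with the repository that actually hosts these GGUF files:

from huggingface_hub import hf_hub_download

# repo_id is an assumption; substitute the repository hosting these GGUF files.
path = hf_hub_download(
    repo_id='nomic-ai/nomic-embed-text-v2-moe-GGUF',
    filename='nomic-embed-text-v2-moe.Q4_K_M.gguf',
)
print(path)   # local cache path of the downloaded file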

Model Overview

nomic-embed-text-v2-moe is a SoTA multilingual MoE text embedding model that excels at multilingual retrieval:
  • High Performance: SoTA Multilingual performance compared to ~300M parameter models, competitive with models 2x in size
  • Multilinguality: Supports ~100 languages and is trained on over 1.6B pairs
  • Flexible Embedding Dimension: Trained with Matryoshka Embeddings, giving a 3x reduction in storage cost with minimal performance degradation
  • Fully Open-Source: Model weights, code, and training data (see code repo) released
| Model | Params (M) | Emb Dim | BEIR | MIRACL | Pretrain Data | Finetune Data | Code |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Nomic Embed v2 | 305 | 768 | 52.86 | 65.80 | ✅ | ✅ | ✅ |
| mE5 Base | 278 | 768 | 48.88 | 62.30 | ❌ | ❌ | ❌ |
| mGTE Base | 305 | 768 | 51.10 | 63.40 | ❌ | ❌ | ❌ |
| Arctic Embed v2 Base | 305 | 768 | 55.40 | 59.90 | ❌ | ❌ | ❌ |
| BGE M3 | 568 | 1024 | 48.80 | 69.20 | ❌ | ✅ | ❌ |
| Arctic Embed v2 Large | 568 | 1024 | 55.65 | 66.00 | ❌ | ❌ | ❌ |
| mE5 Large | 560 | 1024 | 51.40 | 66.50 | ❌ | ❌ | ❌ |

Model Architecture

  • Total Parameters: 475M
  • Active Parameters During Inference: 305M
  • Architecture Type: Mixture of Experts (MoE)
  • MoE Configuration: 8 experts with top-2 routing
  • Embedding Dimensions: Supports flexible dimension from 768 to 256 through Matryoshka representation learning
  • Maximum Sequence Length: 512 tokens
  • Languages: Supports dozens of languages (see Performance section)
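
As a rough illustration of the routing configuration above (a generic top-2 gating sketch, not the model's actual implementation): each token is scored by a small router, the two highest-scoring of the 8 experts are kept, and their outputs are mixed with renormalized gate weights.

import numpy as np

def top2_moe(x, router_w, experts):
    """Illustrative top-2 routing for a single token vector x (not the model's actual code)."""
    logits = router_w @ x                      # one routing score per expert
    top2 = np.argsort(logits)[-2:]             # indices of the two highest-scoring experts
    gates = np.exp(logits[top2] - logits[top2].max())
    gates = gates / gates.sum()                # renormalize gate weights over the chosen pair
    return sum(g * experts[i](x) for g, i in zip(gates, top2))

# Toy setup: 8 experts with top-2 routing, matching this model's MoE configuration.
rng = np.random.default_rng(0)
dim, n_experts = 16, 8
experts = [(lambda v, W=rng.standard_normal((dim, dim)) / dim**0.5: W @ v) for _ in range(n_experts)]
router_w = rng.standard_normal((n_experts, dim))
print(top2_moe(rng.standard_normal(dim), router_w, experts).shape)   # (16,)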

Paper Abstract

Transformer-based text embedding models have improved their performance on benchmarks like MIRACL and BEIR by increasing their parameter counts. However, this scaling approach introduces significant deployment challenges, including increased inference latency and memory usage. These challenges are particularly severe in retrieval-augmented generation (RAG) applications, where large models' increased memory requirements constrain dataset ingestion capacity, and their higher latency directly impacts query-time performance. While causal language models have addressed similar efficiency challenges using Mixture of Experts (MoE) architectures, this approach hasn't been successfully adapted to the general text embedding setting. In this paper, we introduce Nomic Embed v2, the first general purpose MoE text embedding model. Our model outperforms models in the same parameter class on both monolingual and multilingual benchmarks while also maintaining competitive performance with models twice its size. We open-source all code, models, and evaluation data to ensure full reproducibility of our training pipeline at https://github.com/nomic-ai/contrastors.

Performance

nomic-embed-text-v2-moe performance on BEIR and MIRACL compared to other open-weights embedding models:


nomic-embed-text-v2-moe performance on BEIR at 768 dimension and truncated to 256 dimensions:


Best Practices

  • Add appropriate prefixes to your text:
    - For queries: "search_query: "
    - For documents: "search_document: "
  • Maximum input length is 512 tokens
  • For optimal efficiency, consider using the 256-dimension embeddings if storage/compute is a concern
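
Because the model is trained with Matryoshka representation learning, a 768-dimension embedding can be truncated to its first 256 dimensions and renormalized. A minimal sketch, assuming the embed helper from the usage example above is in scope:

import numpy as np

def truncate_embedding(vec, dim=256):
    """Keep the leading `dim` Matryoshka dimensions and renormalize to unit length."""
    v = np.asarray(vec[:dim], dtype=np.float32)
    return v / np.linalg.norm(v)

# `embed` is the helper from the usage example above (assumed to be in scope).
full = embed(['search_document: embeddings are cool'])[0]   # 768-dimension vector
small = truncate_embedding(full)                            # 256-dimension vector, ~3x cheaper to store
print(len(full), len(small))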

Limitations

  • Performance may vary across different languages
  • Resource requirements may be higher than traditional dense models due to MoE architecture
  • Must use trust_remote_code=True when loading the model to use our custom architecture implementation
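
For reference, a minimal sketch of loading the original (non-GGUF) checkpoint with sentence-transformers, using the prefixes described under Best Practices (the exact encode arguments are an assumption, not the upstream card's verbatim example):

from sentence_transformers import SentenceTransformer

# trust_remote_code=True lets transformers load the custom MoE architecture code.
model = SentenceTransformer('nomic-ai/nomic-embed-text-v2-moe', trust_remote_code=True)

docs = ['search_document: embeddings are cool', 'search_document: llamas are cool']
query = ['search_query: tell me about embeddings']
doc_emb = model.encode(docs, normalize_embeddings=True)
query_emb = model.encode(query, normalize_embeddings=True)
print(query_emb @ doc_emb.T)   # cosine similarities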

Training Details

  • Trained on 1.6 billion high-quality pairs across multiple languages
  • Uses consistency filtering to ensure high-quality training data
  • Incorporates Matryoshka representation learning for dimension flexibility
  • Training includes both weakly-supervised contrastive pretraining and supervised finetuning
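
As an illustration of the contrastive objective family referred to above, here is a generic in-batch InfoNCE loss sketch; this is not necessarily the exact loss or hyperparameters used in training:

import numpy as np

def info_nce(query_emb, doc_emb, temperature=0.05):
    """Generic in-batch InfoNCE: the i-th query's positive is the i-th document."""
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    d = doc_emb / np.linalg.norm(doc_emb, axis=1, keepdims=True)
    logits = (q @ d.T) / temperature                       # scaled cosine similarities
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                    # cross-entropy against the diagonal

# Toy batch of 4 query/document pairs with 768-dimension embeddings.
rng = np.random.default_rng(0)
q, d = rng.standard_normal((4, 768)), rng.standard_normal((4, 768))
print(info_nce(q, d))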

For more details, please check out the blog post and technical report.

Join the Nomic Community

Citation

If you find the model, dataset, or training code useful, please cite our work:
@misc{nussbaum2025trainingsparsemixtureexperts,
      title={Training Sparse Mixture Of Experts Text Embedding Models},
      author={Zach Nussbaum and Brandon Duderstadt},
      year={2025},
      eprint={2502.07972},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2502.07972},
}
