πŸ“‹ Model Description


```yaml
base_model:
- LiquidAI/LFM2-8B-A1B
library_name: transformers
license: other
license_name: lfm1.0
license_link: LICENSE
language:
- en
- ar
- zh
- fr
- de
- ja
- ko
- es
pipeline_tag: text-generation
tags:
- liquid
- unsloth
- lfm2
- edge
- moe
```

> [!NOTE]
> Includes Unsloth chat template fixes! For `llama.cpp`, use `--jinja`.

Unsloth Dynamic 2.0 achieves superior accuracy and outperforms other leading quants.





<img
  src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/7_6D7rWrLxp2hb6OHSV1p.png"
  alt="Liquid AI"
  style="width: 100%; max-width: 66%; height: auto; display: inline-block; margin-bottom: 0.5em; margin-top: 0.5em;"
/>


LFM2-8B-A1B

LFM2 is a new generation of hybrid models developed by Liquid AI, specifically designed for edge AI and on-device deployment. It sets a new standard in terms of quality, speed, and memory efficiency.

We're releasing the weights of our first MoE based on LFM2, with 8.3B total parameters and 1.5B active parameters.

  • LFM2-8B-A1B is the best on-device MoE in terms of both quality (comparable to 3-4B dense models) and speed (faster than Qwen3-1.7B).
  • Code and knowledge capabilities are significantly improved compared to LFM2-2.6B.
  • Quantized variants fit comfortably on high-end phones, tablets, and laptops.
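For scale: the Q4_0 GGUF listed at the bottom of this page is 4.41 GB, i.e. roughly 4.25 bits per weight across 8.3B parameters (4.41 GB Γ— 8 / 8.3B β‰ˆ 4.25 bits).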

Find more information about LFM2-8B-A1B in our blog post.

πŸ“„ Model details

Due to their small size, we recommend fine-tuning LFM2 models on narrow use cases to maximize performance.
They are particularly suited for agentic tasks, data extraction, RAG, creative writing, and multi-turn conversations.
However, we do not recommend using them for tasks that are knowledge-intensive or require programming skills.

| Property | LFM2-8B-A1B |
| --------------------- | ----------------------------- |
| Total parameters | 8.3B |
| Active parameters | 1.5B |
| Layers | 24 (18 conv + 6 attn) |
| Context length | 32,768 tokens |
| Vocabulary size | 65,536 |
| Training precision | Mixed BF16/FP8 |
| Training budget | 12 trillion tokens |
| License | LFM Open License v1.0 |

Supported languages: English, Arabic, Chinese, French, German, Japanese, Korean, and Spanish.

Generation parameters: We recommend the following sampling settings:

  • `temperature=0.3`
  • `min_p=0.15`
  • `repetition_penalty=1.05`

Chat template: LFM2 uses a ChatML-like chat template as follows:

```
<|startoftext|><|im_start|>system
You are a helpful assistant trained by Liquid AI.<|im_end|>
<|im_start|>user
What is C. elegans?<|im_end|>
<|im_start|>assistant
It's a tiny nematode that lives in temperate soil environments.<|im_end|>
```

You can automatically apply it using the dedicated `.apply_chat_template()` function from Hugging Face transformers.
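As a quick sanity check, here is a minimal sketch (standard transformers API) that renders the template to a string so you can compare it against the format above:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("LiquidAI/LFM2-8B-A1B")

messages = [
    {"role": "system", "content": "You are a helpful assistant trained by Liquid AI."},
    {"role": "user", "content": "What is C. elegans?"},
]

# tokenize=False returns the formatted prompt string instead of token IDs,
# which makes it easy to verify the template output.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt)
```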

Tool use: It consists of four main steps:

  1. Function definition: LFM2 takes JSON function definitions as input (JSON objects between `<|tool_list_start|>` and `<|tool_list_end|>` special tokens), usually in the system prompt.
  2. Function call: LFM2 writes Pythonic function calls (a Python list between `<|tool_call_start|>` and `<|tool_call_end|>` special tokens) as the assistant answer.
  3. Function execution: The function call is executed and the result is returned (string between `<|tool_response_start|>` and `<|tool_response_end|>` special tokens) as a "tool" role. A minimal execution sketch follows the example below.
  4. Final answer: LFM2 interprets the outcome of the function call to address the original user prompt in plain text.

Here is a simple example of a conversation using tool use:

```
<|startoftext|><|im_start|>system
List of tools: <|tool_list_start|>[{"name": "get_candidate_status", "description": "Retrieves the current status of a candidate in the recruitment process", "parameters": {"type": "object", "properties": {"candidate_id": {"type": "string", "description": "Unique identifier for the candidate"}}, "required": ["candidate_id"]}}]<|tool_list_end|><|im_end|>
<|im_start|>user
What is the current status of candidate ID 12345?<|im_end|>
<|im_start|>assistant
<|tool_call_start|>[get_candidate_status(candidate_id="12345")]<|tool_call_end|>Checking the current status of candidate ID 12345.<|im_end|>
<|im_start|>tool
<|tool_response_start|>{"candidate_id": "12345", "status": "Interview Scheduled", "position": "Clinical Research Associate", "date": "2023-11-20"}<|tool_response_end|><|im_end|>
<|im_start|>assistant
The candidate with ID 12345 is currently in the "Interview Scheduled" stage for the position of Clinical Research Associate, with an interview date set for 2023-11-20.<|im_end|>
```
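The model card does not prescribe an execution harness for step 3; the following is a minimal sketch, assuming a hypothetical local `get_candidate_status` implementation and using Python's `ast` module to parse the Pythonic call list:

```python
import ast

# Hypothetical local implementation of the tool from the example above.
def get_candidate_status(candidate_id: str) -> dict:
    return {"candidate_id": candidate_id, "status": "Interview Scheduled"}

TOOLS = {"get_candidate_status": get_candidate_status}

def execute_tool_calls(assistant_text: str) -> list[dict]:
    """Extract the Pythonic call list between the tool-call tokens and run it."""
    start = assistant_text.index("<|tool_call_start|>") + len("<|tool_call_start|>")
    end = assistant_text.index("<|tool_call_end|>")
    call_list = assistant_text[start:end]  # e.g. '[get_candidate_status(candidate_id="12345")]'

    results = []
    for call in ast.parse(call_list, mode="eval").body.elts:
        name = call.func.id
        kwargs = {kw.arg: ast.literal_eval(kw.value) for kw in call.keywords}
        results.append(TOOLS[name](**kwargs))
    return results

raw = '<|tool_call_start|>[get_candidate_status(candidate_id="12345")]<|tool_call_end|>'
print(execute_tool_calls(raw))  # [{'candidate_id': '12345', 'status': 'Interview Scheduled'}]
```

In a real loop, each result would then be wrapped between `<|tool_response_start|>` and `<|tool_response_end|>` and fed back as a "tool" message for step 4.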

Architecture: Hybrid model with multiplicative gates and short convolutions: 18 double-gated short-range LIV convolution blocks and 6 grouped query attention (GQA) blocks.
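To see how the 18 convolution and 6 attention blocks are interleaved, you can inspect the released configuration (a small sketch; the `layer_types` field name reflects the published config and may differ across transformers versions):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("LiquidAI/LFM2-8B-A1B")
print(config.num_hidden_layers)                # expected: 24
print(getattr(config, "layer_types", config))  # per-layer block types, if exposed
```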

Pre-training mixture: Approximately 75% English, 20% multilingual, and 5% code data sourced from the web and licensed materials.

Training approach:

  • Very large-scale SFT on 50% downstream tasks, 50% general domains
  • Custom DPO with length normalization and semi-online datasets
  • Iterative model merging

πŸƒ How to run LFM2

1. Transformers

To run LFM2, you need to install Hugging Face transformers from source as follows:

```bash
pip install git+https://github.com/huggingface/transformers.git@0c9a72e4576fe4c84077f066e585129c97bfd4e6
```

Here is an example of how to generate an answer with transformers in Python:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load model and tokenizer
model_id = "LiquidAI/LFM2-8B-A1B"
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    dtype="bfloat16",
    # attn_implementation="flash_attention_2",  # <- uncomment on compatible GPU
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Generate answer
prompt = "What is C. elegans?"
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    add_generation_prompt=True,
    return_tensors="pt",
    tokenize=True,
).to(model.device)

output = model.generate(
    input_ids,
    do_sample=True,
    temperature=0.3,
    min_p=0.15,
    repetition_penalty=1.05,
    max_new_tokens=512,
)

print(tokenizer.decode(output[0], skip_special_tokens=False))
```

```
<|startoftext|><|im_start|>user
What is C. elegans?<|im_end|>
<|im_start|>assistant
C. elegans, also known as Caenorhabditis elegans, is a small, free-living
nematode worm (roundworm) that belongs to the phylum Nematoda.
```

You can directly run and test the model with this Colab notebook.

2. vLLM

You can run the model in vLLM by building from source:

```bash
git clone https://github.com/vllm-project/vllm.git
cd vllm
pip install -e . -v
```

Here is an example of how to use it for inference:

```python
from vllm import LLM, SamplingParams

prompts = [
    [{"role": "user", "content": "What is C. elegans?"}],
    [{"role": "user", "content": "Say hi in JSON format"}],
    [{"role": "user", "content": "Define AI in Spanish"}],
]

sampling_params = SamplingParams(
    temperature=0.3,
    min_p=0.15,
    repetition_penalty=1.05,
    max_tokens=30,
)

llm = LLM(model="LiquidAI/LFM2-8B-A1B", dtype="bfloat16")

outputs = llm.chat(prompts, sampling_params)

for i, output in enumerate(outputs):
    prompt = prompts[i][0]["content"]
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
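vLLM can also expose the model behind an OpenAI-compatible endpoint; a minimal sketch (adjust flags to your hardware):

```bash
vllm serve LiquidAI/LFM2-8B-A1B --dtype bfloat16
```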

3. llama.cpp

You can run LFM2 with llama.cpp using its GGUF checkpoint. Find more information in the model card.
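For example, a minimal invocation (a sketch; the local filename assumes the Q4_0 checkpoint from the file list below, and the sampling flags mirror the recommended generation parameters):

```bash
llama-cli -m LFM2-8B-A1B-Q4_0.gguf --jinja \
  --temp 0.3 --min-p 0.15 --repeat-penalty 1.05 \
  -p "What is C. elegans?" -n 256
```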

πŸ”§ How to fine-tune LFM2

We recommend fine-tuning LFM2 models on your use cases to maximize performance.

| Notebook | Description | Link |
| -------- | ----------- | ---- |
| SFT (TRL) | Supervised Fine-Tuning (SFT) notebook with a LoRA adapter using TRL. | Colab link |
| DPO (TRL) | Preference alignment with Direct Preference Optimization (DPO) using TRL. | Colab link |
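If you prefer a script over the notebooks, here is a minimal SFT sketch with TRL and a LoRA adapter (the dataset name and hyperparameters are placeholders, not the notebook's actual settings):

```python
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Placeholder conversational dataset; substitute your own narrow use case.
dataset = load_dataset("trl-lib/Capybara", split="train")

# LoRA keeps the 8.3B base frozen and trains small adapter matrices.
peft_config = LoraConfig(r=16, lora_alpha=32, target_modules="all-linear")

trainer = SFTTrainer(
    model="LiquidAI/LFM2-8B-A1B",
    train_dataset=dataset,
    args=SFTConfig(output_dir="lfm2-8b-a1b-sft"),
    peft_config=peft_config,
)
trainer.train()
```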

πŸ“ˆ Performance

1. Automated benchmarks

Compared to similar-sized models, LFM2-8B-A1B displays strong performance in instruction following and math while also running significantly faster.

| Model | MMLU | MMLU-Pro | GPQA | IFEval | IFBench | Multi-IF |
| ----- | ---- | -------- | ---- | ------ | ------- | -------- |
| LFM2-8B-A1B | 64.84 | 37.42 | 29.29 | 77.58 | 25.85 | 58.19 |
| LFM2-2.6B | 64.42 | 25.96 | 26.57 | 79.56 | 22.19 | 60.26 |
| Llama-3.2-3B-Instruct | 60.35 | 22.25 | 30.6 | 71.43 | 20.78 | 50.91 |
| SmolLM3-3B | 59.84 | 23.90 | 26.31 | 72.44 | 17.93 | 58.86 |
| gemma-3-4b-it | 58.35 | 34.76 | 29.51 | 76.85 | 23.53 | 66.61 |
| Qwen3-4B-Instruct-2507 | 72.25 | 52.31 | 34.85 | 85.62 | 30.28 | 75.54 |
| granite-4.0-h-tiny | 66.79 | 32.03 | 26.46 | 81.06 | 18.37 | 52.99 |
| Model | GSM8K | GSMPlus | MATH 500 | MATH Lvl 5 | MGSM | MMMLU |
| ----- | ----- | ------- | -------- | ---------- | ---- | ----- |
| LFM2-8B-A1B | 84.38 | 64.76 | 74.2 | 62.38 | 72.4 | 55.26 |
| LFM2-2.6B | 82.41 | 60.75 | 63.6 | 54.38 | 74.32 | 55.39 |
| Llama-3.2-3B-Instruct | 75.21 | 38.68 | 41.2 | 24.06 | 61.68 | 47.92 |
| SmolLM3-3B | 81.12 | 58.91 | 73.6 | 51.93 | 68.72 | 50.02 |
| gemma-3-4b-it | 89.92 | 68.38 | 73.2 | 52.18 | 87.28 | 50.14 |
| Qwen3-4B-Instruct-2507 | 68.46 | 56.16 | 85.6 | 73.62 | 81.76 | 60.67 |
| granite-4.0-h-tiny | 82.64 | 59.14 | 58.2 | 36.11 | 73.68 | 56.13 |
| Model | Active params | LCB v6 | LCB v5 | HumanEval+ | Creative Writing v3 |
| ----- | ------------- | ------ | ------ | ---------- | ------------------- |
| LFM2-8B-A1B | 1.5B | 21.04% | 21.36% | 69.51% | 44.22% |
| Gemma-3-1b-it | 1B | 4.27% | 4.43% | 37.20% | 41.67% |
| Granite-4.0-h-tiny | 1B | 26.73% | 27.27% | 73.78% | 32.60% |
| Llama-3.2-1B-Instruct | 1.2B | 4.08% | 3.64% | 23.17% | 31.43% |
| Qwen2.5-1.5B-Instruct | 1.5B | 11.18% | 10.57% | 48.78% | 22.18% |
| Qwen3-1.7B (/no_think) | 1.7B | 24.07% | 26.48% | 60.98% | 31.56% |
| LFM2-2.6B | 2.6B | 14.41% | 14.43% | 57.93% | 38.79% |
| SmolLM3-3B | 3.1B | 19.05% | 19.20% | 60.37% | 36.44% |
| Llama-3.2-3B-Instruct | 3.2B | 11.47% | 11.48% | 24.06% | 38.84% |
| Qwen3-4B (/no_think) | 4B | 36.11% | 38.64% | 71.95% | 37.49% |
| Qwen3-4B-Instruct-2507 | 4B | 48.72% | 50.80% | 82.32% | 51.71% |
| Gemma-3-4b-it | 4.3B | 18.86% | 19.09% | 62.8% | 68.56% |

2. Inference

LFM2-8B-A1B is significantly faster than models with a similar number of active parameters, like Qwen3-1.7B.

The following plots showcase the performance of different models under int4 quantization with int8 dynamic activations on the AMD Ryzen AI 9 HX 370 CPU, using 16 threads. The results were obtained with our internal XNNPACK-based inference stack and a custom CPU MoE kernel.

πŸ“¬ Contact

If you are interested in custom solutions with edge deployment, please contact our sales team.

πŸ“‚ GGUF File List

πŸ“ Filename πŸ“¦ Size ⚑ Download
LFM2-8B-A1B-BF16.gguf
LFS FP16
15.54 GB Download
LFM2-8B-A1B-Q2_K.gguf
LFS Q2
2.87 GB Download
LFM2-8B-A1B-Q2_K_L.gguf
LFS Q2
2.87 GB Download
LFM2-8B-A1B-Q3_K_M.gguf
LFS Q3
3.72 GB Download
LFM2-8B-A1B-Q3_K_S.gguf
LFS Q3
3.39 GB Download
LFM2-8B-A1B-Q4_0.gguf
Recommended LFS Q4
4.41 GB Download
LFM2-8B-A1B-Q4_1.gguf
LFS Q4
4.89 GB Download
LFM2-8B-A1B-Q4_K_M.gguf
LFS Q4
4.7 GB Download
LFM2-8B-A1B-Q4_K_S.gguf
LFS Q4
4.43 GB Download
LFM2-8B-A1B-Q5_K_M.gguf
LFS Q5
5.51 GB Download
LFM2-8B-A1B-Q5_K_S.gguf
LFS Q5
5.36 GB Download
LFM2-8B-A1B-Q6_K.gguf
LFS Q6
6.38 GB Download
LFM2-8B-A1B-Q8_0.gguf
LFS Q8
8.26 GB Download
LFM2-8B-A1B-UD-Q2_K_XL.gguf
LFS Q2
2.91 GB Download
LFM2-8B-A1B-UD-Q3_K_XL.gguf
LFS Q3
3.42 GB Download
LFM2-8B-A1B-UD-Q4_K_XL.gguf
LFS Q4
4.42 GB Download
LFM2-8B-A1B-UD-Q5_K_XL.gguf
LFS Q5
5.51 GB Download
LFM2-8B-A1B-UD-Q6_K_XL.gguf
LFS Q6
6.41 GB Download
LFM2-8B-A1B-UD-Q8_K_XL.gguf
LFS Q8
8.38 GB Download