## 📋 Model Description


license: other
license_name: lfm1.0
license_link: LICENSE
language:
  • en
  • ar
  • zh
  • fr
  • de
  • ja
  • ko
  • es
pipeline_tag: text-generation
tags:
  • liquid
  • lfm2
  • edge
  • llama.cpp
  • gguf
base_model:
  • LiquidAI/LFM2-700M



<img
  src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/2b08LKpev0DNEk6DlnWkY.png"
  alt="Liquid AI"
  style="width: 100%; max-width: 100%; height: auto; display: inline-block; margin-bottom: 0.5em; margin-top: 0.5em;"
/>



Try LFM • Documentation • LEAP

# LFM2-700M-GGUF

LFM2 is a new generation of hybrid models developed by Liquid AI, designed specifically for edge AI and on-device deployment. It sets a new standard in quality, speed, and memory efficiency.

Find more details in the original model card: https://huggingface.co/LiquidAI/LFM2-700M

πŸƒ How to run LFM2

Example usage with llama.cpp:

llama-cli -hf LiquidAI/LFM2-700M-GGUF
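
The one-liner above lets llama.cpp fetch a quant from this repository automatically. If you prefer to work with a locally downloaded file, the commands below are a minimal sketch: they assume the llama.cpp binaries are on your PATH and that `LFM2-700M-Q4_0.gguf` (see the file list below) sits in the working directory; the prompt, token limit, context size, and port are illustrative values.

```bash
# Generate up to 128 tokens for a single prompt from a local quant.
llama-cli -m ./LFM2-700M-Q4_0.gguf -p "Explain edge AI in one sentence." -n 128

# Serve the same file over an OpenAI-compatible HTTP API instead.
llama-server -m ./LFM2-700M-Q4_0.gguf -c 4096 --port 8080
```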

## 📂 GGUF File List

πŸ“ Filename πŸ“¦ Size ⚑ Download
LFM2-700M-F16.gguf
LFS FP16
1.39 GB Download
LFM2-700M-Q4_0.gguf
Recommended LFS Q4
425.65 MB Download
LFM2-700M-Q4_K_M-hip-optimized.gguf
LFS Q4
490.15 MB Download
LFM2-700M-Q4_K_M.gguf
LFS Q4
446.91 MB Download
LFM2-700M-Q5_K_M.gguf
LFS Q5
513.1 MB Download
LFM2-700M-Q6_K.gguf
LFS Q6
583.43 MB Download
LFM2-700M-Q8_0.gguf
LFS Q8
754.9 MB Download
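
If you only need one of the files above rather than the whole repository, the Hugging Face CLI (shipped with the `huggingface_hub` package) can fetch it by name. A minimal sketch, assuming the recommended Q4_0 build and the current directory as the download target:

```bash
# Install the Hugging Face Hub client, then download a single quantized file.
pip install -U huggingface_hub
huggingface-cli download LiquidAI/LFM2-700M-GGUF LFM2-700M-Q4_0.gguf --local-dir .
```

The resulting file can then be passed to `llama-cli` or `llama-server` with `-m`, as shown in the section above.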