---
license: other
license_name: lfm1.0
license_link: LICENSE
language:
- en
pipeline_tag: image-text-to-text
tags:
- vision
- vlm
- liquid
- lfm2
- lfm2-vl
- edge
- llama.cpp
- gguf
base_model:
- LiquidAI/LFM2-VL-1.6B
---



<img
  src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/7_6D7rWrLxp2hb6OHSV1p.png"
  alt="Liquid AI"
  style="width: 100%; max-width: 66%; height: auto; display: inline-block; margin-bottom: 0.5em; margin-top: 0.5em;"
/>



Liquid: Playground

# LFM2-VL-1.6B-GGUF

LFM2-VL is a new generation of vision models developed by Liquid AI, specifically designed for edge AI and on-device deployment. It sets a new standard in terms of quality, speed, and memory efficiency.

Find more details in the original model card: https://huggingface.co/LiquidAI/LFM2-VL-1.6B

πŸƒ How to run LFM2-VL

Example usage with llama.cpp:

full precision (F16/F16):

llama-mtmd-cli -hf LiquidAI/LFM2-VL-1.6B-GGUF:F16

fastest inference (Q40/Q80):

llama-mtmd-cli -hf LiquidAI/LFM2-VL-1.6B-GGUF:Q4_0
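If you download the GGUF files manually, the same CLI can load a local model together with its vision projector. A minimal sketch, assuming llama.cpp's standard multimodal flags (`-m`, `--mmproj`, `--image`, `-p`); the file paths and the prompt are placeholders:

```shell
# Run the Q4_0 model with the Q8_0 vision projector on a local image.
# Paths below are placeholders; point them at your downloaded files.
llama-mtmd-cli \
  -m LFM2-VL-1.6B-Q4_0.gguf \
  --mmproj mmproj-LFM2-VL-1.6B-Q8_0.gguf \
  --image photo.jpg \
  -p "Describe this image in one sentence."
```

Note that vision models need both files: the language model GGUF and the matching `mmproj-*` projector GGUF.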

## πŸ“‚ GGUF File List

| πŸ“ Filename | Quantization | πŸ“¦ Size |
|---|---|---|
| LFM2-VL-1.6B-F16.gguf | F16 | 2.18 GB |
| LFM2-VL-1.6B-Q4_0.gguf (recommended) | Q4_0 | 663.52 MB |
| LFM2-VL-1.6B-Q8_0.gguf | Q8_0 | 1.16 GB |
| mmproj-LFM2-VL-1.6B-F16.gguf | F16 | 791.87 MB |
| mmproj-LFM2-VL-1.6B-Q8_0.gguf | Q8_0 | 537.98 MB |
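To fetch individual files from the list above without cloning the whole repository, one option is the Hugging Face CLI (a sketch; assumes the `huggingface_hub` package is installed and provides the `huggingface-cli download` command):

```shell
# Download only the recommended Q4_0 model and its Q8_0 vision projector
# into a local ./models directory.
huggingface-cli download LiquidAI/LFM2-VL-1.6B-GGUF \
  LFM2-VL-1.6B-Q4_0.gguf mmproj-LFM2-VL-1.6B-Q8_0.gguf \
  --local-dir ./models
```

Downloading only the quantization you need keeps the footprint small, which matters for the edge deployments this model targets.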