---
license: other
license_name: lfm1.0
license_link: LICENSE
language:
- en
tags:
- vision
- vlm
- liquid
- lfm2
- lfm2-vl
- edge
- llama.cpp
- gguf
base_model:
- LiquidAI/LFM2-VL-1.6B
---
# LFM2-VL-1.6B-GGUF

## Model Description
LFM2-VL is a new generation of vision models developed by Liquid AI, specifically designed for edge AI and on-device deployment. It sets a new standard in terms of quality, speed, and memory efficiency.
Find more details in the original model card: https://huggingface.co/LiquidAI/LFM2-VL-1.6B
## How to run LFM2-VL
Example usage with llama.cpp:

```bash
# Full precision (F16 model / F16 mmproj)
llama-mtmd-cli -hf LiquidAI/LFM2-VL-1.6B-GGUF:F16

# Fastest inference (Q4_0 model / Q8_0 mmproj)
llama-mtmd-cli -hf LiquidAI/LFM2-VL-1.6B-GGUF:Q4_0
```
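For a one-shot caption of a local image, the same CLI can take an image path and a prompt directly. A minimal sketch, assuming a recent llama.cpp build in which `llama-mtmd-cli` accepts `--image` and `-p`; the image path and prompt below are placeholders:

```bash
# Download the Q4_0 weights from the Hub and caption a local image in one shot.
# ./example.jpg and the prompt are placeholders; substitute your own.
llama-mtmd-cli -hf LiquidAI/LFM2-VL-1.6B-GGUF:Q4_0 \
  --image ./example.jpg \
  -p "Describe this image in one sentence."
```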
## GGUF File List
| Filename | Quantization | Size | Notes |
|---|---|---|---|
| LFM2-VL-1.6B-F16.gguf | FP16 | 2.18 GB | |
| LFM2-VL-1.6B-Q4_0.gguf | Q4_0 | 663.52 MB | Recommended |
| LFM2-VL-1.6B-Q8_0.gguf | Q8_0 | 1.16 GB | |
| mmproj-LFM2-VL-1.6B-F16.gguf | FP16 | 791.87 MB | |
| mmproj-LFM2-VL-1.6B-Q8_0.gguf | Q8_0 | 537.98 MB | |
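To run fully offline, the files above can be fetched once and passed to the CLI by path, pairing a model file with an `mmproj-*` projector file. A sketch assuming `huggingface-cli` is installed and that your llama.cpp build exposes the `-m`, `--mmproj`, `--image`, and `-p` flags; filenames match the table above, and the image path is a placeholder:

```bash
# Fetch the recommended Q4_0 weights plus the Q8_0 multimodal projector.
huggingface-cli download LiquidAI/LFM2-VL-1.6B-GGUF \
  LFM2-VL-1.6B-Q4_0.gguf mmproj-LFM2-VL-1.6B-Q8_0.gguf \
  --local-dir .

# Run from local files: the mmproj file supplies the vision projector
# that pairs with the language model weights.
llama-mtmd-cli \
  -m ./LFM2-VL-1.6B-Q4_0.gguf \
  --mmproj ./mmproj-LFM2-VL-1.6B-Q8_0.gguf \
  --image ./example.jpg \
  -p "What is in this picture?"
```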