---
license: other
license_name: lfm1.0
license_link: LICENSE
language:
- en
tags:
- liquid
- lfm2.5
- edge
- llama.cpp
- audio
- speech
- gguf
base_model:
- LiquidAI/LFM2.5-Audio-1.5B
---
# LFM2.5-Audio-1.5B

Find more details in the original model card: https://huggingface.co/LiquidAI/LFM2.5-Audio-1.5B
## Runners

The `runners` folder contains runners for various architectures, including:

- `llama-liquid-audio-cli`
- `llama-liquid-audio-server`
## How to run LFM2.5

### CLI

Set environment variables:

```sh
export CKPT=/path/to/LFM2.5-Audio-1.5B-GGUF
export INPUT_WAV=/path/to/input.wav
export OUTPUT_WAV=/path/to/output.wav
```
#### ASR (audio -> text)

```sh
./llama-liquid-audio-cli -m $CKPT/LFM2.5-Audio-1.5B-Q4_0.gguf -mm $CKPT/mmproj-LFM2.5-Audio-1.5B-Q4_0.gguf -mv $CKPT/vocoder-LFM2.5-Audio-1.5B-Q4_0.gguf --tts-speaker-file $CKPT/tokenizer-LFM2.5-Audio-1.5B-Q4_0.gguf -sys "Perform ASR." --audio $INPUT_WAV
```
#### TTS (text -> audio)

```sh
./llama-liquid-audio-cli -m $CKPT/LFM2.5-Audio-1.5B-Q4_0.gguf -mm $CKPT/mmproj-LFM2.5-Audio-1.5B-Q4_0.gguf -mv $CKPT/vocoder-LFM2.5-Audio-1.5B-Q4_0.gguf --tts-speaker-file $CKPT/tokenizer-LFM2.5-Audio-1.5B-Q4_0.gguf -sys "Perform TTS." -p "Hi, how are you?" --output $OUTPUT_WAV
```
#### Interleaved (audio/text -> audio + text)

```sh
./llama-liquid-audio-cli -m $CKPT/LFM2.5-Audio-1.5B-Q4_0.gguf -mm $CKPT/mmproj-LFM2.5-Audio-1.5B-Q4_0.gguf -mv $CKPT/vocoder-LFM2.5-Audio-1.5B-Q4_0.gguf --tts-speaker-file $CKPT/tokenizer-LFM2.5-Audio-1.5B-Q4_0.gguf -sys "Respond with interleaved text and audio." --audio $INPUT_WAV --output $OUTPUT_WAV
```
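The three invocations above differ only in the system prompt and the input/output flags. As a sketch, a small wrapper can factor out the shared model flags (the `liquid_audio_cmd` name is hypothetical, not part of the runner; the Q4_0 files are one choice from the table below):

```sh
#!/bin/sh
# Hypothetical helper: prints the shared llama-liquid-audio-cli flag set
# once, so each mode only supplies its system prompt plus I/O flags.
# Drop the `echo` to execute the command instead of printing it.
liquid_audio_cmd() {
    sys_prompt=$1
    shift
    echo ./llama-liquid-audio-cli \
        -m "$CKPT/LFM2.5-Audio-1.5B-Q4_0.gguf" \
        -mm "$CKPT/mmproj-LFM2.5-Audio-1.5B-Q4_0.gguf" \
        -mv "$CKPT/vocoder-LFM2.5-Audio-1.5B-Q4_0.gguf" \
        --tts-speaker-file "$CKPT/tokenizer-LFM2.5-Audio-1.5B-Q4_0.gguf" \
        -sys "$sys_prompt" "$@"
}

# The three modes then become one-liners:
liquid_audio_cmd "Perform ASR." --audio "$INPUT_WAV"
liquid_audio_cmd "Perform TTS." -p "Hi, how are you?" --output "$OUTPUT_WAV"
liquid_audio_cmd "Respond with interleaved text and audio." --audio "$INPUT_WAV" --output "$OUTPUT_WAV"
```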
### Server

Start the server:

```sh
export CKPT=/path/to/LFM2.5-Audio-1.5B-GGUF
./llama-liquid-audio-server -m $CKPT/LFM2.5-Audio-1.5B-Q4_0.gguf -mm $CKPT/mmproj-LFM2.5-Audio-1.5B-Q4_0.gguf -mv $CKPT/vocoder-LFM2.5-Audio-1.5B-Q4_0.gguf --tts-speaker-file $CKPT/tokenizer-LFM2.5-Audio-1.5B-Q4_0.gguf
```
Use the liquidaudiochat.py script to communicate with the server:

```sh
uv run liquidaudiochat.py
```
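Before launching the chat script, it can help to wait until the server is ready. The sketch below assumes the runner keeps llama.cpp's standard `/health` endpoint on the default port 8080; that is an assumption, so check the PR if it does not hold:

```sh
#!/bin/sh
# Hypothetical readiness check: polls the /health endpoint (a standard
# llama.cpp server route; assumed to survive in this audio runner)
# until it answers, or gives up after a number of tries.
wait_for_server() {
    url=${1:-http://127.0.0.1:8080/health}
    tries=${2:-30}
    i=0
    while [ "$i" -lt "$tries" ]; do
        if curl -sf "$url" >/dev/null 2>&1; then
            return 0   # server is up
        fi
        i=$((i + 1))
        sleep 1
    done
    return 1           # timed out
}

# Usage: wait_for_server && uv run liquidaudiochat.py
```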
## Source Code for Runners

The runners are built from https://github.com/ggml-org/llama.cpp/pull/18641. The PR is a work in progress and will take time to land upstream.
## Demo
## GGUF File List

| Filename | Quantization | Size |
|---|---|---|
| LFM2.5-Audio-1.5B-F16.gguf | FP16 | 2.18 GB |
| LFM2.5-Audio-1.5B-Q4_0.gguf (recommended) | Q4_0 | 663.52 MB |
| LFM2.5-Audio-1.5B-Q8_0.gguf | Q8_0 | 1.16 GB |
| mmproj-LFM2.5-Audio-1.5B-F16.gguf | FP16 | 437.55 MB |
| mmproj-LFM2.5-Audio-1.5B-Q4_0.gguf | Q4_0 | 209.34 MB |
| mmproj-LFM2.5-Audio-1.5B-Q8_0.gguf | Q8_0 | 279.85 MB |
| tokenizer-LFM2.5-Audio-1.5B-F16.gguf | FP16 | 136.09 MB |
| tokenizer-LFM2.5-Audio-1.5B-Q4_0.gguf | Q4_0 | 48.2 MB |
| tokenizer-LFM2.5-Audio-1.5B-Q8_0.gguf | Q8_0 | 73.39 MB |
| vocoder-LFM2.5-Audio-1.5B-F16.gguf | FP16 | 369.22 MB |
| vocoder-LFM2.5-Audio-1.5B-Q4_0.gguf | Q4_0 | 103.94 MB |
| vocoder-LFM2.5-Audio-1.5B-Q8_0.gguf | Q8_0 | 196.21 MB |
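Each run needs four files (main model, mmproj, vocoder, tokenizer) at a matching quantization level. As a sketch, a tiny helper (`gguf_set` is hypothetical, not a provided script) can print the matched set for one quant, e.g. to feed a downloader:

```sh
#!/bin/sh
# Hypothetical helper: prints the four files from the table above that
# share one quantization suffix (F16, Q4_0, or Q8_0).
gguf_set() {
    quant=$1
    for prefix in "" mmproj- vocoder- tokenizer-; do
        echo "${prefix}LFM2.5-Audio-1.5B-${quant}.gguf"
    done
}

gguf_set Q4_0
# Prints:
#   LFM2.5-Audio-1.5B-Q4_0.gguf
#   mmproj-LFM2.5-Audio-1.5B-Q4_0.gguf
#   vocoder-LFM2.5-Audio-1.5B-Q4_0.gguf
#   tokenizer-LFM2.5-Audio-1.5B-Q4_0.gguf
```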