πŸ“‹ Model Description


base_model: liuhaotian/llava-v1.5-7b
inference: false
library_name: transformers
license: llama2
model_creator: liuhaotian
model_name: Llava v1.5 7B
quantized_by: Second State Inc.

Llava-v1.5-7B-GGUF

Original Model

liuhaotian/llava-v1.5-7b

Run with LlamaEdge

  • LlamaEdge version: v0.16.2
  • Prompt template

- Prompt type: vicuna-llava

- Prompt string

<system_prompt>\nUSER:<image_embeddings>\n<textual_prompt>\nASSISTANT:
  • Context size: 4096
  • Run as LlamaEdge service
wasmedge --dir .:. \
    --nn-preload default:GGML:AUTO:llava-v1.5-7b-Q5_K_M.gguf \
    llama-api-server.wasm \
    --prompt-template vicuna-llava \
    --ctx-size 4096 \
    --llava-mmproj llava-v1.5-7b-mmproj-model-f16.gguf \
    --model-name llava-v1.5
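
Once the server is running, it exposes an OpenAI-compatible chat completions endpoint. The request below is an illustrative sketch rather than part of the original card: it assumes the default listen address of localhost:8080 and passes the image as a base64-encoded data URL; replace the question text and the <base64_image> placeholder with your own values.

curl -X POST http://localhost:8080/v1/chat/completions \
    -H 'Content-Type: application/json' \
    -d '{
        "model": "llava-v1.5",
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": "What is shown in this image?"},
                {"type": "image_url", "image_url": {"url": "data:image/jpeg;base64,<base64_image>"}}
            ]
        }]
    }'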

Quantized GGUF Models

| Name | Quant method | Bits | Size | Use case |
| ---- | ------------ | ---- | ---- | -------- |
| llava-v1.5-7b-Q2_K.gguf | Q2_K | 2 | 2.53 GB | smallest, significant quality loss - not recommended for most purposes |
| llava-v1.5-7b-Q3_K_L.gguf | Q3_K_L | 3 | 3.6 GB | small, substantial quality loss |
| llava-v1.5-7b-Q3_K_M.gguf | Q3_K_M | 3 | 3.3 GB | very small, high quality loss |
| llava-v1.5-7b-Q3_K_S.gguf | Q3_K_S | 3 | 2.95 GB | very small, high quality loss |
| llava-v1.5-7b-Q4_0.gguf | Q4_0 | 4 | 3.83 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| llava-v1.5-7b-Q4_K_M.gguf | Q4_K_M | 4 | 4.08 GB | medium, balanced quality - recommended |
| llava-v1.5-7b-Q4_K_S.gguf | Q4_K_S | 4 | 3.86 GB | small, greater quality loss |
| llava-v1.5-7b-Q5_0.gguf | Q5_0 | 5 | 4.65 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| llava-v1.5-7b-Q5_K_M.gguf | Q5_K_M | 5 | 4.78 GB | large, very low quality loss - recommended |
| llava-v1.5-7b-Q5_K_S.gguf | Q5_K_S | 5 | 4.65 GB | large, low quality loss - recommended |
| llava-v1.5-7b-Q6_K.gguf | Q6_K | 6 | 5.53 GB | very large, extremely low quality loss |
| llava-v1.5-7b-Q8_0.gguf | Q8_0 | 8 | 7.16 GB | very large, extremely low quality loss - not recommended |
| llava-v1.5-7b-mmproj-model-f16.gguf | f16 | 8 | 624 MB | |
Quantized with llama.cpp b2230
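
If the GGUF files are not yet present locally, they can be fetched ahead of time. The command below is a minimal sketch assuming the files are hosted under a second-state/Llava-v1.5-7B-GGUF repository on Hugging Face (not stated in this card); adjust the repo ID and quant level to your setup. It requires the huggingface_hub package, which provides the huggingface-cli tool.

huggingface-cli download second-state/Llava-v1.5-7B-GGUF \
    llava-v1.5-7b-Q5_K_M.gguf \
    llava-v1.5-7b-mmproj-model-f16.gguf \
    --local-dir .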
