πŸ“‹ Model Description


datasets:
  - Lin-Chen/ShareGPT4V
pipeline_tag: image-to-text

Model

llava-llama-3-8b-v1_1 is a LLaVA model fine-tuned from meta-llama/Meta-Llama-3-8B-Instruct and CLIP-ViT-Large-patch14-336 with ShareGPT4V-PT and InternVL-SFT by XTuner.

Note: This model is in GGUF format.

Resources:

  • GitHub: https://github.com/InternLM/xtuner

Details

| Model | Visual Encoder | Projector | Resolution | Pretraining Strategy | Fine-tuning Strategy | Pretrain Dataset | Fine-tune Dataset |
|-------|----------------|-----------|------------|----------------------|----------------------|------------------|-------------------|
| LLaVA-v1.5-7B | CLIP-L | MLP | 336 | Frozen LLM, Frozen ViT | Full LLM, Frozen ViT | LLaVA-PT (558K) | LLaVA-Mix (665K) |
| LLaVA-Llama-3-8B | CLIP-L | MLP | 336 | Frozen LLM, Frozen ViT | Full LLM, LoRA ViT | LLaVA-PT (558K) | LLaVA-Mix (665K) |
| LLaVA-Llama-3-8B-v1.1 | CLIP-L | MLP | 336 | Frozen LLM, Frozen ViT | Full LLM, LoRA ViT | ShareGPT4V-PT (1246K) | InternVL-SFT (1268K) |

Results



| Model | MMBench Test (EN) | MMBench Test (CN) | CCBench Dev | MMMU Val | SEED-IMG | AI2D Test | ScienceQA Test | HallusionBench aAcc | POPE | GQA | TextVQA | MME | MMStar |
|-------|-------------------|-------------------|-------------|----------|----------|-----------|----------------|---------------------|------|-----|---------|-----|--------|
| LLaVA-v1.5-7B | 66.5 | 59.0 | 27.5 | 35.3 | 60.5 | 54.8 | 70.4 | 44.9 | 85.9 | 62.0 | 58.2 | 1511/348 | 30.3 |
| LLaVA-Llama-3-8B | 68.9 | 61.6 | 30.4 | 36.8 | 69.8 | 60.9 | 73.3 | 47.3 | 87.2 | 63.5 | 58.0 | 1506/295 | 38.2 |
| LLaVA-Llama-3-8B-v1.1 | 72.3 | 66.4 | 31.6 | 36.8 | 70.1 | 70.0 | 72.9 | 47.7 | 86.4 | 62.6 | 59.0 | 1469/349 | 45.1 |

Quickstart

Download models

# mmproj
wget https://huggingface.co/xtuner/llava-llama-3-8b-v1_1-gguf/resolve/main/llava-llama-3-8b-v1_1-mmproj-f16.gguf

# fp16 llm
wget https://huggingface.co/xtuner/llava-llama-3-8b-v1_1-gguf/resolve/main/llava-llama-3-8b-v1_1-f16.gguf

# int4 llm
wget https://huggingface.co/xtuner/llava-llama-3-8b-v1_1-gguf/resolve/main/llava-llama-3-8b-v1_1-int4.gguf

# (optional) ollama fp16 modelfile
wget https://huggingface.co/xtuner/llava-llama-3-8b-v1_1-gguf/resolve/main/OLLAMA_MODELFILE_F16

# (optional) ollama int4 modelfile
wget https://huggingface.co/xtuner/llava-llama-3-8b-v1_1-gguf/resolve/main/OLLAMA_MODELFILE_INT4
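
A quick, optional sanity check (a minimal sketch, assuming the filenames above): every valid GGUF file starts with the 4-byte ASCII magic GGUF, so printing the first four bytes of each download should show GGUF.

# optional: verify each download starts with the GGUF magic bytes
for f in llava-llama-3-8b-v1_1-f16.gguf llava-llama-3-8b-v1_1-int4.gguf llava-llama-3-8b-v1_1-mmproj-f16.gguf; do
  printf '%s: ' "$f"; head -c 4 "$f"; echo
done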

Chat by ollama

# fp16
ollama create llava-llama3-f16 -f ./OLLAMA_MODELFILE_F16
ollama run llava-llama3-f16 "xx.png Describe this image"

# int4
ollama create llava-llama3-int4 -f ./OLLAMA_MODELFILE_INT4
ollama run llava-llama3-int4 "xx.png Describe this image"
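
Once created, the model can also be queried through Ollama's local REST API instead of the CLI. The sketch below is an optional illustration: it assumes the default server address http://localhost:11434, the llava-llama3-f16 model created above, and GNU coreutils base64 (on macOS, use base64 -i xx.png).

# (optional) send an image to the running Ollama server over its REST API
curl http://localhost:11434/api/generate -d '{
  "model": "llava-llama3-f16",
  "prompt": "Describe this image",
  "images": ["'"$(base64 -w0 xx.png)"'"],
  "stream": false
}'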

Chat by llama.cpp

  1. Build llama.cpp (docs).
  2. Build ./llava-cli (docs); a minimal build sketch follows after this list.
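
A minimal build sketch, assuming a Makefile-based checkout of llama.cpp from the era of this release (newer versions build with CMake and rename the binary, e.g. llama-llava-cli, so prefer the linked docs if the commands below do not match your tree):

# clone llama.cpp and build the llava-cli example
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make llava-cli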

Note: llava-llama-3-8b-v1_1 uses the Llama-3-instruct chat template.

# fp16
./llava-cli -m ./llava-llama-3-8b-v1_1-f16.gguf --mmproj ./llava-llama-3-8b-v1_1-mmproj-f16.gguf --image YOUR_IMAGE.jpg -c 4096 -e -p "<|start_header_id|>user<|end_header_id|>\n\n<image>\nDescribe this image<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"

# int4

./llava-cli -m ./llava-llama-3-8b-v1_1-int4.gguf --mmproj ./llava-llama-3-8b-v1_1-mmproj-f16.gguf --image YOUR_IMAGE.jpg -c 4096 -e -p "<|start_header_id|>user<|end_header_id|>\n\n<image>\nDescribe this image<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
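
For repeated use, the same invocation can be wrapped in a small shell function. describe_image below is a hypothetical helper, not part of the released files; it simply forwards an image path to the flags shown above (-m model, --mmproj projector, --image input, -c context length, -e to process the \n escapes in the prompt, -p prompt).

# hypothetical convenience wrapper around the fp16 command above
describe_image() {
  ./llava-cli \
    -m ./llava-llama-3-8b-v1_1-f16.gguf \
    --mmproj ./llava-llama-3-8b-v1_1-mmproj-f16.gguf \
    --image "$1" -c 4096 -e \
    -p "<|start_header_id|>user<|end_header_id|>\n\n<image>\nDescribe this image<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
}

describe_image YOUR_IMAGE.jpg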

Reproduce

Please refer to docs.
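
As a rough sketch of the XTuner workflow (the commands below are an assumption based on XTuner's CLI, and CONFIG is a placeholder; the linked docs are authoritative):

# install xtuner, list the shipped LLaVA-Llama-3 configs, then train
pip install -U 'xtuner[deepspeed]'
xtuner list-cfg -p llava_llama3
# CONFIG is a placeholder for one of the config names printed above
NPROC_PER_NODE=8 xtuner train CONFIG --deepspeed deepspeed_zero2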

Citation

@misc{2023xtuner,
    title={XTuner: A Toolkit for Efficiently Fine-tuning LLM},
    author={XTuner Contributors},
    howpublished = {\url{https://github.com/InternLM/xtuner}},
    year={2023}
}

πŸ“‚ GGUF File List

πŸ“ Filename πŸ“¦ Size ⚑ Download
llava-llama-3-8b-v1_1-f16.gguf
Recommended LFS FP16
14.97 GB Download
llava-llama-3-8b-v1_1-int4.gguf
LFS
4.58 GB Download
llava-llama-3-8b-v1_1-mmproj-f16.gguf
LFS FP16
595.51 MB Download