📋 Model Description


---
language:
  - en
  - de
  - fr
  - it
  - pt
  - hi
  - es
  - th
tags:
  - quantized
  - 2-bit
  - 3-bit
  - 4-bit
  - 5-bit
  - 6-bit
  - 8-bit
  - GGUF
  - text-generation
model_name: Meta-Llama-3.1-70B-Instruct-GGUF
base_model: meta-llama/Meta-Llama-3.1-70B-Instruct
inference: false
model_creator: meta-llama
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---

MaziyarPanahi/Meta-Llama-3.1-70B-Instruct-GGUF

Description

MaziyarPanahi/Meta-Llama-3.1-70B-Instruct-GGUF contains GGUF format model files for meta-llama/Meta-Llama-3.1-70B-Instruct.

About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

  • llama.cpp. The source project for GGUF. Offers a CLI and a server option.
  • llama-cpp-python, a Python library with GPU accel, LangChain support, and OpenAI-compatible API server (see the loading sketch after this list).
  • LM Studio, an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. A Linux version is available in beta (as of November 27th, 2023).
  • text-generation-webui, the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
  • KoboldCpp, a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
  • GPT4All, a free and open-source locally running GUI, supporting Windows, Linux and macOS with full GPU accel.
  • LoLLMS Web UI, a great web UI with many interesting and unique features, including a full model library for easy model selection.
  • Faraday.dev, an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
  • candle, a Rust ML framework with a focus on performance, including GPU support, and ease of use.
  • ctransformers, a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note: as of the time of writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models.
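Most of the clients above share the same basic flow: load a GGUF file, then run chat-formatted generation against it. The sketch below shows that flow with llama-cpp-python from the list; the model path, context size, and generation parameters are illustrative assumptions, not values prescribed by this card.

```python
# Minimal llama-cpp-python sketch (pip install llama-cpp-python).
# Assumes one of the GGUF files listed below has already been downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./Meta-Llama-3.1-70B-Instruct.Q4_K_M.gguf",  # any quant works
    n_ctx=8192,       # context window; lower it if you run out of memory
    n_gpu_layers=-1,  # offload all layers to GPU when a GPU build is installed
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize GGUF in one sentence."},
    ],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```

Recent llama-cpp-python versions read the chat template embedded in the GGUF metadata, so create_chat_completion should apply the Llama 3.1 prompt format without manual templating.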

Special thanks

πŸ™ Special thanks to Georgi Gerganov and the whole team working on llama.cpp for making all of this possible.

📂 GGUF File List

πŸ“ Filename πŸ“¦ Size ⚑ Download
Meta-Llama-3.1-70B-Instruct.IQ1_M.gguf
LFS
15.6 GB Download
Meta-Llama-3.1-70B-Instruct.IQ1_S.gguf
LFS
14.29 GB Download
Meta-Llama-3.1-70B-Instruct.IQ2_XS.gguf
LFS Q2
19.69 GB Download
Meta-Llama-3.1-70B-Instruct.IQ3_XS.gguf
LFS Q3
27.29 GB Download
Meta-Llama-3.1-70B-Instruct.IQ4_XS.gguf
LFS Q4
35.3 GB Download
Meta-Llama-3.1-70B-Instruct.Q2_K.gguf
LFS Q2
24.56 GB Download
Meta-Llama-3.1-70B-Instruct.Q3_K_L.gguf
LFS Q3
34.59 GB Download
Meta-Llama-3.1-70B-Instruct.Q3_K_M.gguf
LFS Q3
31.91 GB Download
Meta-Llama-3.1-70B-Instruct.Q3_K_S.gguf
LFS Q3
28.79 GB Download
Meta-Llama-3.1-70B-Instruct.Q4_K_M.gguf
Recommended LFS Q4
39.6 GB Download
Meta-Llama-3.1-70B-Instruct.Q4_K_S.gguf
LFS Q4
37.58 GB Download
Meta-Llama-3.1-70B-Instruct.Q5_K_M.gguf
LFS Q5
46.52 GB Download
Meta-Llama-3.1-70B-Instruct.Q5_K_S.gguf
LFS Q5
45.32 GB Download
Meta-Llama-3.1-70B-Instruct.Q6_K.gguf-00001-of-00006.gguf
LFS Q6
9.96 GB Download
Meta-Llama-3.1-70B-Instruct.Q6_K.gguf-00002-of-00006.gguf
LFS Q6
9.51 GB Download
Meta-Llama-3.1-70B-Instruct.Q6_K.gguf-00003-of-00006.gguf
LFS Q6
9.33 GB Download
Meta-Llama-3.1-70B-Instruct.Q6_K.gguf-00004-of-00006.gguf
LFS Q6
9.21 GB Download
Meta-Llama-3.1-70B-Instruct.Q6_K.gguf-00005-of-00006.gguf
LFS Q6
9.21 GB Download
Meta-Llama-3.1-70B-Instruct.Q6_K.gguf-00006-of-00006.gguf
LFS Q6
6.69 GB Download