📋 Model Description


license: other
language:
  - en
pipeline_tag: text-generation
inference: false
tags:
  - transformers
  - gguf
  - imatrix
  - Llama-3.2-3B-Instruct

Quantizations of https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct

Inference Clients/UIs


From original readme

The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.

Model Developer: Meta

Model Architecture: Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.

How to use

This repository contains two versions of Llama-3.2-3B-Instruct, for use with transformers and with the original llama codebase.

Use with transformers

Starting with transformers >= 4.43.0, you can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function.

Make sure to update your transformers installation via pip install --upgrade transformers.

```python
import torch
from transformers import pipeline

model_id = "meta-llama/Llama-3.2-3B-Instruct"
pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]
outputs = pipe(
    messages,
    max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```

Note: You can also find detailed recipes on how to use the model locally, with torch.compile(), assisted generation, quantization, and more at huggingface-llama-recipes.
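The Auto-classes route mentioned above can be sketched as follows. This is a minimal sketch, not taken from the original card; it assumes you have access to the gated meta-llama repository and enough memory to load the 3B model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.2-3B-Instruct"

# Load tokenizer and model (bfloat16 weights, placed automatically across devices)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

# Apply the model's chat template to build the prompt tensor
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Generate and decode only the newly produced tokens
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The chat-template step is what the pipeline abstraction performs internally; doing it explicitly gives you control over the prompt tensor before calling generate().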

Use with llama

Please follow the instructions in the repository.

To download Original checkpoints, see the example command below leveraging huggingface-cli:

```shell
huggingface-cli download meta-llama/Llama-3.2-3B-Instruct --include "original/*" --local-dir Llama-3.2-3B-Instruct
```
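The GGUF files listed below are meant for llama.cpp-based runtimes rather than transformers. A minimal sketch of downloading and running one quant, assuming a built llama.cpp checkout; `<this-repo-id>` is a placeholder for this repository's Hugging Face id, which you should substitute:

```shell
# Fetch a single quantized file instead of the whole repo
# (<this-repo-id> is a placeholder, not the real repo id)
huggingface-cli download <this-repo-id> Llama-3.2-3B-Instruct-Q4_0.gguf --local-dir .

# Run it with llama.cpp's CLI: -m model file, -p prompt, -n max tokens to generate
./llama-cli -m Llama-3.2-3B-Instruct-Q4_0.gguf -p "Who are you?" -n 256
```

Q4_0 is used here because it is marked as the recommended quant in the file list; smaller IQ quants trade quality for memory footprint.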

📂 GGUF File List

| 📝 Filename | 📦 Size | Notes |
|---|---|---|
| Llama-3.2-3B-Instruct-IQ1_M.gguf | 881.38 MB | |
| Llama-3.2-3B-Instruct-IQ1_S.gguf | 827.94 MB | |
| Llama-3.2-3B-Instruct-IQ2_M.gguf | 1.14 GB | |
| Llama-3.2-3B-Instruct-IQ2_S.gguf | 1.08 GB | |
| Llama-3.2-3B-Instruct-IQ2_XS.gguf | 1.02 GB | |
| Llama-3.2-3B-Instruct-IQ2_XXS.gguf | 970.44 MB | |
| Llama-3.2-3B-Instruct-IQ3_M.gguf | 1.49 GB | |
| Llama-3.2-3B-Instruct-IQ3_S.gguf | 1.44 GB | |
| Llama-3.2-3B-Instruct-IQ3_XS.gguf | 1.38 GB | |
| Llama-3.2-3B-Instruct-IQ3_XXS.gguf | 1.26 GB | |
| Llama-3.2-3B-Instruct-IQ4_NL.gguf | 1.79 GB | |
| Llama-3.2-3B-Instruct-IQ4_XS.gguf | 1.7 GB | |
| Llama-3.2-3B-Instruct-Q2_K.gguf | 1.27 GB | |
| Llama-3.2-3B-Instruct-Q2_K_S.gguf | 1.19 GB | |
| Llama-3.2-3B-Instruct-Q3_K_L.gguf | 1.69 GB | |
| Llama-3.2-3B-Instruct-Q3_K_M.gguf | 1.57 GB | |
| Llama-3.2-3B-Instruct-Q3_K_S.gguf | 1.44 GB | |
| Llama-3.2-3B-Instruct-Q4_0.gguf | 1.79 GB | Recommended |
| Llama-3.2-3B-Instruct-Q4_1.gguf | 1.95 GB | |
| Llama-3.2-3B-Instruct-Q4_K_M.gguf | 1.88 GB | |
| Llama-3.2-3B-Instruct-Q4_K_S.gguf | 1.8 GB | |
| Llama-3.2-3B-Instruct-Q5_0.gguf | 2.12 GB | |
| Llama-3.2-3B-Instruct-Q5_1.gguf | 2.28 GB | |
| Llama-3.2-3B-Instruct-Q5_K_M.gguf | 2.16 GB | |
| Llama-3.2-3B-Instruct-Q5_K_S.gguf | 2.11 GB | |
| Llama-3.2-3B-Instruct-Q6_K.gguf | 2.46 GB | |
| Llama-3.2-3B-Instruct-Q8_0.gguf | 3.19 GB | |