Model Description
Quantization made by Richard Erkhov.
Meta-Llama-3.1-8B-Instruct-abliterated - GGUF
- Model creator: https://huggingface.co/huihui-ai/
- Original model: https://huggingface.co/huihui-ai/Meta-Llama-3.1-8B-Instruct-abliterated/
Original model description:
library_name: transformers
license: llama3.1
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
tags:
- abliterated
- uncensored
Meta-Llama-3.1-8B-Instruct-abliterated
This is an uncensored version of Llama 3.1 8B Instruct created with abliteration, a technique that removes refusal behavior directly from the model's weights.
Special thanks to @FailSpy for the original code and technique. Please follow him if you're interested in abliterated models.
Evaluations
The following data has been re-evaluated; each score is the average across runs for that benchmark.

| Benchmark  | Llama-3.1-8B-Instruct | Meta-Llama-3.1-8B-Instruct-abliterated |
|------------|-----------------------|----------------------------------------|
| IF_Eval    | 80.00                 | 78.98                                  |
| MMLU Pro   | 36.34                 | 35.91                                  |
| TruthfulQA | 52.98                 | 55.42                                  |
| BBH        | 48.72                 | 47.00                                  |
| GPQA       | 33.55                 | 33.93                                  |
The script used for evaluation can be found in this repository at /eval.sh.
GGUF File List
| Filename | Quant | Size | Notes |
|---|---|---|---|
| Meta-Llama-3.1-8B-Instruct-abliterated.IQ3_M.gguf | Q3 | 3.52 GB | |
| Meta-Llama-3.1-8B-Instruct-abliterated.IQ3_S.gguf | Q3 | 3.43 GB | |
| Meta-Llama-3.1-8B-Instruct-abliterated.IQ3_XS.gguf | Q3 | 3.28 GB | |
| Meta-Llama-3.1-8B-Instruct-abliterated.IQ4_NL.gguf | Q4 | 4.38 GB | |
| Meta-Llama-3.1-8B-Instruct-abliterated.IQ4_XS.gguf | Q4 | 4.18 GB | |
| Meta-Llama-3.1-8B-Instruct-abliterated.Q2_K.gguf | Q2 | 2.96 GB | |
| Meta-Llama-3.1-8B-Instruct-abliterated.Q3_K.gguf | Q3 | 3.74 GB | |
| Meta-Llama-3.1-8B-Instruct-abliterated.Q3_K_L.gguf | Q3 | 4.03 GB | |
| Meta-Llama-3.1-8B-Instruct-abliterated.Q3_K_M.gguf | Q3 | 3.74 GB | |
| Meta-Llama-3.1-8B-Instruct-abliterated.Q3_K_S.gguf | Q3 | 3.41 GB | |
| Meta-Llama-3.1-8B-Instruct-abliterated.Q4_0.gguf | Q4 | 4.34 GB | Recommended |
| Meta-Llama-3.1-8B-Instruct-abliterated.Q4_1.gguf | Q4 | 4.78 GB | |
| Meta-Llama-3.1-8B-Instruct-abliterated.Q4_K.gguf | Q4 | 4.58 GB | |
| Meta-Llama-3.1-8B-Instruct-abliterated.Q4_K_M.gguf | Q4 | 4.58 GB | |
| Meta-Llama-3.1-8B-Instruct-abliterated.Q4_K_S.gguf | Q4 | 4.37 GB | |
| Meta-Llama-3.1-8B-Instruct-abliterated.Q5_0.gguf | Q5 | 5.21 GB | |
| Meta-Llama-3.1-8B-Instruct-abliterated.Q5_1.gguf | Q5 | 5.65 GB | |
| Meta-Llama-3.1-8B-Instruct-abliterated.Q5_K.gguf | Q5 | 5.34 GB | |
| Meta-Llama-3.1-8B-Instruct-abliterated.Q5_K_M.gguf | Q5 | 5.34 GB | |
| Meta-Llama-3.1-8B-Instruct-abliterated.Q5_K_S.gguf | Q5 | 5.21 GB | |
| Meta-Llama-3.1-8B-Instruct-abliterated.Q6_K.gguf | Q6 | 6.14 GB | |
| Meta-Llama-3.1-8B-Instruct-abliterated.Q8_0.gguf | Q8 | 7.95 GB | |
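As a rough sanity check when choosing a file, dividing the file size by the parameter count gives the effective bits per weight of each quantization level. A minimal sketch (the ~8.03B parameter count for Llama 3.1 8B is an assumption, and the sizes above are taken as decimal gigabytes):

```python
PARAMS = 8.03e9  # approximate parameter count of Llama 3.1 8B (assumption)

def bits_per_weight(size_gb: float) -> float:
    """Effective bits per weight for a GGUF file of size_gb (decimal GB)."""
    return size_gb * 1e9 * 8 / PARAMS

# Q4_0 at 4.34 GB works out to roughly 4.3 bits per weight,
# while Q8_0 at 7.95 GB is close to a full 8 bits.
print(round(bits_per_weight(4.34), 2))  # ~4.32
print(round(bits_per_weight(7.95), 2))  # ~7.92
```

The gap between the nominal quant level (e.g. "Q4") and the computed value comes from GGUF metadata and the higher-precision tensors (such as embeddings) kept in every quantization.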