📋 Model Description

Quantization made by Richard Erkhov.

GitHub

Discord

Request more models

SFT-dna-Llama-3.1-8B-Instruct-20-r16_merged - GGUF

  • Model creator: https://huggingface.co/TifTifR/
  • Original model: https://huggingface.co/TifTifR/SFT-dna-Llama-3.1-8B-Instruct-20-r16_merged/

| Name | Quant method | Size |
| ---- | ------------ | ---- |
| SFT-dna-Llama-3.1-8B-Instruct-20-r16_merged.Q2_K.gguf | Q2_K | 2.96GB |
| SFT-dna-Llama-3.1-8B-Instruct-20-r16_merged.IQ3_XS.gguf | IQ3_XS | 3.28GB |
| SFT-dna-Llama-3.1-8B-Instruct-20-r16_merged.IQ3_S.gguf | IQ3_S | 3.43GB |
| SFT-dna-Llama-3.1-8B-Instruct-20-r16_merged.Q3_K_S.gguf | Q3_K_S | 3.41GB |
| SFT-dna-Llama-3.1-8B-Instruct-20-r16_merged.IQ3_M.gguf | IQ3_M | 3.52GB |
| SFT-dna-Llama-3.1-8B-Instruct-20-r16_merged.Q3_K.gguf | Q3_K | 3.74GB |
| SFT-dna-Llama-3.1-8B-Instruct-20-r16_merged.Q3_K_M.gguf | Q3_K_M | 3.74GB |
| SFT-dna-Llama-3.1-8B-Instruct-20-r16_merged.Q3_K_L.gguf | Q3_K_L | 4.03GB |
| SFT-dna-Llama-3.1-8B-Instruct-20-r16_merged.IQ4_XS.gguf | IQ4_XS | 4.18GB |
| SFT-dna-Llama-3.1-8B-Instruct-20-r16_merged.Q4_0.gguf | Q4_0 (recommended) | 4.34GB |
| SFT-dna-Llama-3.1-8B-Instruct-20-r16_merged.IQ4_NL.gguf | IQ4_NL | 4.38GB |
| SFT-dna-Llama-3.1-8B-Instruct-20-r16_merged.Q4_K_S.gguf | Q4_K_S | 4.37GB |
| SFT-dna-Llama-3.1-8B-Instruct-20-r16_merged.Q4_K.gguf | Q4_K | 4.58GB |
| SFT-dna-Llama-3.1-8B-Instruct-20-r16_merged.Q4_K_M.gguf | Q4_K_M | 4.58GB |
| SFT-dna-Llama-3.1-8B-Instruct-20-r16_merged.Q4_1.gguf | Q4_1 | 4.78GB |
| SFT-dna-Llama-3.1-8B-Instruct-20-r16_merged.Q5_0.gguf | Q5_0 | 5.21GB |
| SFT-dna-Llama-3.1-8B-Instruct-20-r16_merged.Q5_K_S.gguf | Q5_K_S | 5.21GB |
| SFT-dna-Llama-3.1-8B-Instruct-20-r16_merged.Q5_K.gguf | Q5_K | 5.34GB |
| SFT-dna-Llama-3.1-8B-Instruct-20-r16_merged.Q5_K_M.gguf | Q5_K_M | 5.34GB |
| SFT-dna-Llama-3.1-8B-Instruct-20-r16_merged.Q5_1.gguf | Q5_1 | 5.65GB |
| SFT-dna-Llama-3.1-8B-Instruct-20-r16_merged.Q6_K.gguf | Q6_K | 6.14GB |
| SFT-dna-Llama-3.1-8B-Instruct-20-r16_merged.Q8_0.gguf | Q8_0 | 7.95GB |
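As a rough guide for choosing among the files above, a small helper can pick the largest quant whose file size fits a given memory budget (a sketch only; sizes are copied from the table, and actual RAM use will be higher once the context/KV cache is allocated):

```python
from typing import Optional

# File sizes in GB, taken from the quant table above.
QUANT_SIZES_GB = {
    "Q2_K": 2.96, "IQ3_XS": 3.28, "Q3_K_S": 3.41, "IQ3_S": 3.43,
    "IQ3_M": 3.52, "Q3_K": 3.74, "Q3_K_M": 3.74, "Q3_K_L": 4.03,
    "IQ4_XS": 4.18, "Q4_0": 4.34, "Q4_K_S": 4.37, "IQ4_NL": 4.38,
    "Q4_K": 4.58, "Q4_K_M": 4.58, "Q4_1": 4.78, "Q5_0": 5.21,
    "Q5_K_S": 5.21, "Q5_K": 5.34, "Q5_K_M": 5.34, "Q5_1": 5.65,
    "Q6_K": 6.14, "Q8_0": 7.95,
}

def pick_quant(budget_gb: float) -> Optional[str]:
    """Return the largest quant at or under the budget, or None if none fit."""
    fitting = {q: s for q, s in QUANT_SIZES_GB.items() if s <= budget_gb}
    return max(fitting, key=fitting.get) if fitting else None

print(pick_quant(5.0))  # largest file that fits in a 5 GB budget
```

This picks purely by file size; quality per byte differs between the legacy (Q4_0/Q4_1/Q5_0/Q5_1) and K-/IQ-quant families, so treat the result as a starting point.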

Original model description:

base_model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
language:
  • en

license: apache-2.0
tags:
  • text-generation-inference
  • transformers
  • unsloth
  • llama
  • trl


Uploaded model

  • Developed by: TifTifR
  • License: apache-2.0
  • Finetuned from model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
