πŸ“‹ Model Description

Quantization made by Richard Erkhov.

  • GitHub
  • Discord
  • Request more models

codegemma-7b-it - GGUF

  • Model creator: https://huggingface.co/unsloth/
  • Original model: https://huggingface.co/unsloth/codegemma-7b-it/

| Name | Quant method | Size |
| ---- | ------------ | ---- |
| codegemma-7b-it.Q2_K.gguf | Q2_K | 3.24GB |
| codegemma-7b-it.IQ3_XS.gguf | IQ3_XS | 3.54GB |
| codegemma-7b-it.IQ3_S.gguf | IQ3_S | 3.71GB |
| codegemma-7b-it.Q3_K_S.gguf | Q3_K_S | 3.71GB |
| codegemma-7b-it.IQ3_M.gguf | IQ3_M | 3.82GB |
| codegemma-7b-it.Q3_K.gguf | Q3_K | 4.07GB |
| codegemma-7b-it.Q3_K_M.gguf | Q3_K_M | 4.07GB |
| codegemma-7b-it.Q3_K_L.gguf | Q3_K_L | 4.39GB |
| codegemma-7b-it.IQ4_XS.gguf | IQ4_XS | 4.48GB |
| codegemma-7b-it.Q4_0.gguf | Q4_0 | 4.67GB |
| codegemma-7b-it.IQ4_NL.gguf | IQ4_NL | 4.69GB |
| codegemma-7b-it.Q4_K_S.gguf | Q4_K_S | 4.7GB |
| codegemma-7b-it.Q4_K.gguf | Q4_K | 4.96GB |
| codegemma-7b-it.Q4_K_M.gguf | Q4_K_M | 4.96GB |
| codegemma-7b-it.Q4_1.gguf | Q4_1 | 5.12GB |
| codegemma-7b-it.Q5_0.gguf | Q5_0 | 5.57GB |
| codegemma-7b-it.Q5_K_S.gguf | Q5_K_S | 5.57GB |
| codegemma-7b-it.Q5_K.gguf | Q5_K | 5.72GB |
| codegemma-7b-it.Q5_K_M.gguf | Q5_K_M | 5.72GB |
| codegemma-7b-it.Q5_1.gguf | Q5_1 | 6.02GB |
| codegemma-7b-it.Q6_K.gguf | Q6_K | 6.53GB |
| codegemma-7b-it.Q8_0.gguf | Q8_0 | 8.45GB |

The uploader recommends the Q4_0 quant.
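When choosing between these quants, a rough quality proxy is bits per weight (file size divided by parameter count). Below is a minimal sketch of that arithmetic; the ~8.54B parameter figure is the commonly cited Gemma-7B count, an assumption not stated in this card, and it treats GB as decimal gigabytes:

```python
# Rough bits-per-weight estimate for a GGUF quant.
# Assumption: CodeGemma-7B has ~8.54e9 parameters (the Gemma-7B figure),
# and the sizes in the table are decimal GB.
PARAMS = 8.54e9

def bits_per_weight(size_gb: float, params: float = PARAMS) -> float:
    """File size in GB -> approximate bits stored per model weight."""
    return size_gb * 8e9 / params

# Compare a few quants from the table above.
for name, size in [("Q2_K", 3.24), ("Q4_0", 4.67), ("Q8_0", 8.45)]:
    print(f"{name}: {bits_per_weight(size):.2f} bits/weight")
```

The estimate is slightly above the nominal quant width (e.g. ~4.4 for Q4_0) because some tensors, such as embeddings, are kept at higher precision.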

Original model description:

language:
  • en
library_name: transformers
license: apache-2.0
tags:
  • unsloth
  • transformers
  • gemma
  • bnb

Finetune Mistral, Gemma, Llama 2-5x faster with 70% less memory via Unsloth!

We have a Google Colab Tesla T4 notebook for CodeGemma 7b here: https://colab.research.google.com/drive/19lwcRkZQZtX-qzFP3qZBBHZNcMD1hh?usp=sharing



✨ Finetune for Free

All notebooks are beginner friendly! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.

| Unsloth supports | Free Notebooks | Performance | Memory use |
| --- | --- | --- | --- |
| Gemma 7b | ▢️ Start on Colab | 2.4x faster | 58% less |
| Mistral 7b | ▢️ Start on Colab | 2.2x faster | 62% less |
| Llama-2 7b | ▢️ Start on Colab | 2.2x faster | 43% less |
| TinyLlama | ▢️ Start on Colab | 3.9x faster | 74% less |
| CodeLlama 34b A100 | ▢️ Start on Colab | 1.9x faster | 27% less |
| Mistral 7b 1xT4 | ▢️ Start on Kaggle | 5x faster\* | 62% less |
| DPO - Zephyr | ▢️ Start on Colab | 1.9x faster | 19% less |

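Since codegemma-7b-it is an instruction-tuned model, the GGUF files above are typically run through llama.cpp or one of its bindings, and prompts should follow the Gemma instruct turn format. A minimal prompt-building sketch; the `<start_of_turn>`/`<end_of_turn>` template is the standard Gemma one, not something stated in this card:

```python
def gemma_prompt(user_message: str) -> str:
    """Wrap a user message in the Gemma instruct turn format.

    The model is expected to generate the assistant reply after the
    final "<start_of_turn>model" marker.
    """
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = gemma_prompt("Write a Python function that reverses a string.")
print(prompt)
```

Most llama.cpp frontends can also apply this template automatically from the chat-template metadata embedded in the GGUF, so manual formatting is only needed for raw completion APIs.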