📋 Model Description

Quantization made by Richard Erkhov.

Github

Discord

Request more models

full_deepseek-1.3-tc - GGUF

  • Model creator: https://huggingface.co/Yhhhhhhhhh/
  • Original model: https://huggingface.co/Yhhhhhhhhh/full_deepseek-1.3-tc/

| Name | Quant method | Size |
|------|--------------|------|
| full_deepseek-1.3-tc.Q2_K.gguf | Q2_K | 0.52GB |
| full_deepseek-1.3-tc.IQ3_XS.gguf | IQ3_XS | 0.57GB |
| full_deepseek-1.3-tc.IQ3_S.gguf | IQ3_S | 0.6GB |
| full_deepseek-1.3-tc.Q3_K_S.gguf | Q3_K_S | 0.6GB |
| full_deepseek-1.3-tc.IQ3_M.gguf | IQ3_M | 0.63GB |
| full_deepseek-1.3-tc.Q3_K.gguf | Q3_K | 0.66GB |
| full_deepseek-1.3-tc.Q3_K_M.gguf | Q3_K_M | 0.66GB |
| full_deepseek-1.3-tc.Q3_K_L.gguf | Q3_K_L | 0.69GB |
| full_deepseek-1.3-tc.IQ4_XS.gguf | IQ4_XS | 0.7GB |
| full_deepseek-1.3-tc.Q4_0.gguf | Q4_0 | 0.72GB |
| full_deepseek-1.3-tc.IQ4_NL.gguf | IQ4_NL | 0.73GB |
| full_deepseek-1.3-tc.Q4_K_S.gguf | Q4_K_S | 0.76GB |
| full_deepseek-1.3-tc.Q4_K.gguf | Q4_K | 0.81GB |
| full_deepseek-1.3-tc.Q4_K_M.gguf | Q4_K_M | 0.81GB |
| full_deepseek-1.3-tc.Q4_1.gguf | Q4_1 | 0.8GB |
| full_deepseek-1.3-tc.Q5_0.gguf | Q5_0 | 0.87GB |
| full_deepseek-1.3-tc.Q5_K_S.gguf | Q5_K_S | 0.89GB |
| full_deepseek-1.3-tc.Q5_K.gguf | Q5_K | 0.93GB |
| full_deepseek-1.3-tc.Q5_K_M.gguf | Q5_K_M | 0.93GB |
| full_deepseek-1.3-tc.Q5_1.gguf | Q5_1 | 0.95GB |
| full_deepseek-1.3-tc.Q6_K.gguf | Q6_K | 1.09GB |
| full_deepseek-1.3-tc.Q8_0.gguf | Q8_0 | 1.33GB |
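As a rule of thumb, output quality tracks file size within a family of quants, so you usually want the largest file that fits your memory/disk budget. A minimal illustrative helper (my own sketch, not part of the original card) that picks a quant from the size column above:

```python
# Hypothetical helper (not from the original card): choose the largest
# quant from the table above that fits a given size budget in GB.

QUANTS = [  # (quant method, file size in GB), copied from the table
    ("Q2_K", 0.52), ("IQ3_XS", 0.57), ("IQ3_S", 0.6), ("Q3_K_S", 0.6),
    ("IQ3_M", 0.63), ("Q3_K", 0.66), ("Q3_K_M", 0.66), ("Q3_K_L", 0.69),
    ("IQ4_XS", 0.7), ("Q4_0", 0.72), ("IQ4_NL", 0.73), ("Q4_K_S", 0.76),
    ("Q4_1", 0.8), ("Q4_K", 0.81), ("Q4_K_M", 0.81), ("Q5_0", 0.87),
    ("Q5_K_S", 0.89), ("Q5_K", 0.93), ("Q5_K_M", 0.93), ("Q5_1", 0.95),
    ("Q6_K", 1.09), ("Q8_0", 1.33),
]

def pick_quant(budget_gb: float) -> str:
    """Return the largest quant whose file size fits within budget_gb."""
    fitting = [(size, name) for name, size in QUANTS if size <= budget_gb]
    if not fitting:
        raise ValueError("no quant fits the given budget")
    return max(fitting)[1]  # largest size wins
```

Note this only accounts for file size; actual RAM use is somewhat higher once the KV cache and context buffers are allocated.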

Original model description:



library_name: transformers
license: other
base_model: deepseek-ai/deepseek-coder-1.3b-instruct
tags:
  • llama-factory
  • full
  • generated_from_trainer
model-index:
  • name: nopytimecpfinalsftdeepseek-coder-1.3b-instruct
    results: []

nopytimecpfinalsftdeepseek-coder-1.3b-instruct

This model is a fine-tuned version of deepseek-ai/deepseek-coder-1.3b-instruct on the output dataset.
It achieves the following results on the evaluation set:

  • Loss: 0.2606

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-06
  • train_batch_size: 8
  • eval_batch_size: 1
  • seed: 42
  • distributed_type: multi-GPU
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.03
  • num_epochs: 4.0
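To make these settings concrete (my own illustration, not part of the card): the effective batch size is train_batch_size × gradient_accumulation_steps = 8 × 2 = 16, matching total_train_batch_size, and the learning rate rises linearly over the first 3% of steps, then decays from 5e-06 toward zero along a cosine curve. A minimal sketch of that schedule, assuming the standard cosine-with-warmup shape used by the Hugging Face Trainer:

```python
import math

BASE_LR = 5e-06       # learning_rate from the card
WARMUP_RATIO = 0.03   # lr_scheduler_warmup_ratio from the card

def lr_at(step: int, total_steps: int) -> float:
    """Learning rate at a given step: linear warmup, then cosine decay."""
    warmup_steps = max(1, int(total_steps * WARMUP_RATIO))
    if step < warmup_steps:
        return BASE_LR * step / warmup_steps            # linear ramp 0 -> BASE_LR
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return BASE_LR * 0.5 * (1.0 + math.cos(math.pi * progress))  # BASE_LR -> 0
```

The peak learning rate is reached at the end of warmup and the schedule decays smoothly to zero by the final step.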

Training results

Framework versions

  • Transformers 4.44.2
  • Pytorch 2.5.0+cu121
  • Datasets 2.21.0
  • Tokenizers 0.19.1

📂 GGUF File List

πŸ“ Filename πŸ“¦ Size ⚑ Download
full_deepseek-1.3-tc.IQ3_M.gguf
LFS Q3
641.62 MB Download
full_deepseek-1.3-tc.IQ3_S.gguf
LFS Q3
612.09 MB Download
full_deepseek-1.3-tc.IQ3_XS.gguf
LFS Q3
584.95 MB Download
full_deepseek-1.3-tc.IQ4_NL.gguf
LFS Q4
746.04 MB Download
full_deepseek-1.3-tc.IQ4_XS.gguf
LFS Q4
715.94 MB Download
full_deepseek-1.3-tc.Q2_K.gguf
LFS Q2
533.79 MB Download
full_deepseek-1.3-tc.Q3_K.gguf
LFS Q3
671.51 MB Download
full_deepseek-1.3-tc.Q3_K_L.gguf
LFS Q3
709.97 MB Download
full_deepseek-1.3-tc.Q3_K_M.gguf
LFS Q3
671.51 MB Download
full_deepseek-1.3-tc.Q3_K_S.gguf
LFS Q3
612.09 MB Download
full_deepseek-1.3-tc.Q4_0.gguf
Recommended LFS Q4
739.99 MB Download
full_deepseek-1.3-tc.Q4_1.gguf
LFS Q4
816.3 MB Download
full_deepseek-1.3-tc.Q4_K.gguf
LFS Q4
832.99 MB Download
full_deepseek-1.3-tc.Q4_K_M.gguf
LFS Q4
832.99 MB Download
full_deepseek-1.3-tc.Q4_K_S.gguf
LFS Q4
776.26 MB Download
full_deepseek-1.3-tc.Q5_0.gguf
LFS Q5
892.62 MB Download
full_deepseek-1.3-tc.Q5_1.gguf
LFS Q5
968.93 MB Download
full_deepseek-1.3-tc.Q5_K.gguf
LFS Q5
955.43 MB Download
full_deepseek-1.3-tc.Q5_K_M.gguf
LFS Q5
955.43 MB Download
full_deepseek-1.3-tc.Q5_K_S.gguf
LFS Q5
908.74 MB Download
full_deepseek-1.3-tc.Q6_K.gguf
LFS Q6
1.09 GB Download
full_deepseek-1.3-tc.Q8_0.gguf
LFS Q8
1.33 GB Download