## Model Description
Quantization made by Richard Erkhov.
ShortKing-1.4b-v0.1 - GGUF
- Model creator: https://huggingface.co/AtAndDev/
- Original model: https://huggingface.co/AtAndDev/ShortKing-1.4b-v0.1/
| Name | Quant method | Size |
|---|---|---|
| ShortKing-1.4b-v0.1.Q2_K.gguf | Q2_K | 0.53GB |
| ShortKing-1.4b-v0.1.IQ3_XS.gguf | IQ3_XS | 0.59GB |
| ShortKing-1.4b-v0.1.IQ3_S.gguf | IQ3_S | 0.61GB |
| ShortKing-1.4b-v0.1.Q3_K_S.gguf | Q3_K_S | 0.61GB |
| ShortKing-1.4b-v0.1.IQ3_M.gguf | IQ3_M | 0.66GB |
| ShortKing-1.4b-v0.1.Q3_K.gguf | Q3_K | 0.71GB |
| ShortKing-1.4b-v0.1.Q3_K_M.gguf | Q3_K_M | 0.71GB |
| ShortKing-1.4b-v0.1.Q3_K_L.gguf | Q3_K_L | 0.77GB |
| ShortKing-1.4b-v0.1.IQ4_XS.gguf | IQ4_XS | 0.74GB |
| ShortKing-1.4b-v0.1.Q4_0.gguf | Q4_0 | 0.77GB |
| ShortKing-1.4b-v0.1.IQ4_NL.gguf | IQ4_NL | 0.78GB |
| ShortKing-1.4b-v0.1.Q4_K_S.gguf | Q4_K_S | 0.78GB |
| ShortKing-1.4b-v0.1.Q4_K.gguf | Q4_K | 0.85GB |
| ShortKing-1.4b-v0.1.Q4_K_M.gguf | Q4_K_M | 0.85GB |
| ShortKing-1.4b-v0.1.Q4_1.gguf | Q4_1 | 0.85GB |
| ShortKing-1.4b-v0.1.Q5_0.gguf | Q5_0 | 0.92GB |
| ShortKing-1.4b-v0.1.Q5_K_S.gguf | Q5_K_S | 0.92GB |
| ShortKing-1.4b-v0.1.Q5_K.gguf | Q5_K | 0.98GB |
| ShortKing-1.4b-v0.1.Q5_K_M.gguf | Q5_K_M | 0.98GB |
| ShortKing-1.4b-v0.1.Q5_1.gguf | Q5_1 | 1.0GB |
| ShortKing-1.4b-v0.1.Q6_K.gguf | Q6_K | 1.08GB |
| ShortKing-1.4b-v0.1.Q8_0.gguf | Q8_0 | 1.4GB |
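Each file above can be fetched directly over HTTP. As a minimal sketch, Hugging Face serves repository files at `https://huggingface.co/<repo_id>/resolve/main/<filename>`; the repository id used below is an assumption for illustration and should be replaced with the actual quantization repo:

```python
# Build a direct download URL for one of the GGUF files listed above.
# NOTE: REPO_ID is a hypothetical placeholder; substitute the real
# repository that hosts these quantizations.
REPO_ID = "RichardErkhov/AtAndDev_-_ShortKing-1.4b-v0.1-gguf"  # assumed

def gguf_url(filename: str, repo_id: str = REPO_ID) -> str:
    """Return the standard Hugging Face 'resolve' URL for a repo file."""
    return f"https://huggingface.co/{repo_id}/resolve/main/{filename}"

print(gguf_url("ShortKing-1.4b-v0.1.Q4_K_M.gguf"))
```

The same path scheme is what `huggingface_hub.hf_hub_download` resolves under the hood, so either approach retrieves identical bytes.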
Original model description:

```yaml
license: cc-by-nc-4.0
datasets:
  - vicgalle/alpaca-gpt4
language:
  - en
```
## Model Overview

Model license: cc-by-nc-4.0

This model is based on the EleutherAI/pythia-1.4b-deduped model, LoRA-finetuned on the vicgalle/alpaca-gpt4 dataset.
## Prompt Template: Alpaca

```
<system_prompt>

### Instruction:
<user_message>

### Response:
<assistant_response>
```
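As a minimal sketch, the template above can be assembled programmatically. The default system prompt below is the common Alpaca preamble; the exact string used during this model's training is an assumption:

```python
# Assemble an Alpaca-style prompt matching the template above.
DEFAULT_SYSTEM = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request."
)  # assumed preamble, not confirmed by the model card

def build_prompt(instruction: str, system_prompt: str = DEFAULT_SYSTEM) -> str:
    """Fill the Alpaca template; the model continues after '### Response:'."""
    return (
        f"{system_prompt}\n\n"
        f"### Instruction:\n{instruction}\n\n"
        f"### Response:\n"
    )

print(build_prompt("Name three primary colors."))
```

Generation is then a matter of feeding this string to the model and reading tokens after the final `### Response:` marker.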
## Intended Use

THIS IS A TEST MODEL; IT IS NOT INTENDED FOR REAL APPLICATIONS BY ANY MEANS. HOWEVER, A NEW MODEL ON THE SAME TOPIC IS COMING. This model series will be used for small but intense applications.
## Training Details

This model took 2:31:23 to train with QLoRA on a single T4 GPU.

- epochs: 1
- train batch size: 12
- eval batch size: 12
- gradient accumulation steps: 1
- maximum gradient norm: 0.3
- learning rate: 2e-4
- weight decay: 0.001
- optimizer: paged_adamw_32bit
- learning rate schedule: cosine
- warmup ratio (linear): 0.03
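For reference, the hyperparameters above determine the optimizer step count per epoch. The dataset size of roughly 52,002 examples for vicgalle/alpaca-gpt4 is an assumption used only for this arithmetic:

```python
import math

# Hyperparameters from the list above.
train_batch_size = 12
grad_accum_steps = 1
epochs = 1
dataset_size = 52_002  # assumed size of vicgalle/alpaca-gpt4

# Effective batch = per-device batch * accumulation steps (single GPU).
effective_batch = train_batch_size * grad_accum_steps
steps_per_epoch = math.ceil(dataset_size / effective_batch)
total_steps = steps_per_epoch * epochs

print(effective_batch, steps_per_epoch, total_steps)
```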
## GGUF File List

| Filename | Quant | Size |
|---|---|---|
| ShortKing-1.4b-v0.1.IQ3_M.gguf | IQ3_M | 679.34 MB |
| ShortKing-1.4b-v0.1.IQ3_S.gguf | IQ3_S | 621.97 MB |
| ShortKing-1.4b-v0.1.IQ3_XS.gguf | IQ3_XS | 608.47 MB |
| ShortKing-1.4b-v0.1.IQ4_NL.gguf | IQ4_NL | 794.02 MB |
| ShortKing-1.4b-v0.1.IQ4_XS.gguf | IQ4_XS | 756.45 MB |
| ShortKing-1.4b-v0.1.Q2_K.gguf | Q2_K | 543.74 MB |
| ShortKing-1.4b-v0.1.Q3_K.gguf | Q3_K | 725.97 MB |
| ShortKing-1.4b-v0.1.Q3_K_L.gguf | Q3_K_L | 783.97 MB |
| ShortKing-1.4b-v0.1.Q3_K_M.gguf | Q3_K_M | 725.97 MB |
| ShortKing-1.4b-v0.1.Q3_K_S.gguf | Q3_K_S | 621.97 MB |
| ShortKing-1.4b-v0.1.Q4_0.gguf (recommended) | Q4_0 | 788.02 MB |
| ShortKing-1.4b-v0.1.Q4_1.gguf | Q4_1 | 866.16 MB |
| ShortKing-1.4b-v0.1.Q4_K.gguf | Q4_K | 873.52 MB |
| ShortKing-1.4b-v0.1.Q4_K_M.gguf | Q4_K_M | 873.52 MB |
| ShortKing-1.4b-v0.1.Q4_K_S.gguf | Q4_K_S | 794.02 MB |
| ShortKing-1.4b-v0.1.Q5_0.gguf | Q5_0 | 944.3 MB |
| ShortKing-1.4b-v0.1.Q5_1.gguf | Q5_1 | 1022.44 MB |
| ShortKing-1.4b-v0.1.Q5_K.gguf | Q5_K | 1008.05 MB |
| ShortKing-1.4b-v0.1.Q5_K_M.gguf | Q5_K_M | 1008.05 MB |
| ShortKing-1.4b-v0.1.Q5_K_S.gguf | Q5_K_S | 944.3 MB |
| ShortKing-1.4b-v0.1.Q6_K.gguf | Q6_K | 1.08 GB |
| ShortKing-1.4b-v0.1.Q8_0.gguf | Q8_0 | 1.4 GB |
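Any of the files above can be run locally with llama.cpp. A minimal usage sketch, assuming a recent llama.cpp build (which names its CLI binary `llama-cli`) and that the GGUF file is already in the current directory:

```shell
# Run the recommended Q4_0 quantization with llama.cpp's CLI.
# -e processes the \n escapes so the Alpaca template is formatted correctly;
# -n limits generation to 128 tokens.
./llama-cli \
  -m ShortKing-1.4b-v0.1.Q4_0.gguf \
  -e \
  -p "### Instruction:\nName three primary colors.\n\n### Response:\n" \
  -n 128
```

Smaller quants (Q2_K, Q3_K_S) trade quality for memory; Q6_K and Q8_0 are closest to the original weights.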