Model Description
Quantization made by Richard Erkhov.
tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1 - GGUF
- Model creator: https://huggingface.co/habanoz/
- Original model: https://huggingface.co/habanoz/tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1/
Original model description:

```yaml
language:
- en
license: apache-2.0
datasets:
- OpenAssistant/oasst_top1_2023-08-25
pipeline_tag: text-generation
model-index:
- name: tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 32.85
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=habanoz/tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 58.16
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=habanoz/tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 25.96
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=habanoz/tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 38.35
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=habanoz/tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 57.7
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=habanoz/tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 0.45
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=habanoz/tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1
      name: Open LLM Leaderboard
```
TinyLlama-1.1B-intermediate-step-715k-1.5T fine-tuned on the OpenAssistant/oasst_top1_2023-08-25 dataset.
SFT code:
https://github.com/jzhang38/TinyLlama/tree/main/sft
Evaluation results at:
https://huggingface.co/datasets/open-llm-leaderboard/details_habanoz__tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1_public/blob/main/results_2023-11-23T17-25-53.937618.json
Command used:

```shell
accelerate launch finetune.py \
    --model_name_or_path TinyLlama/TinyLlama-1.1B-intermediate-step-715k-1.5T \
    --output_dir ./output/15TFTlr1e-5ep5top1_2023-08-25 \
    --logging_steps 10 \
    --save_strategy epoch \
    --data_seed 42 \
    --save_total_limit 2 \
    --evaluation_strategy epoch \
    --eval_dataset_size 512 \
    --max_eval_samples 1000 \
    --per_device_eval_batch_size 1 \
    --max_new_tokens 32 \
    --dataloader_num_workers 3 \
    --group_by_length=False \
    --logging_strategy steps \
    --remove_unused_columns False \
    --do_train \
    --do_eval \
    --warmup_ratio 0.05 \
    --lr_scheduler_type constant \
    --dataset OpenAssistant/oasst_top1_2023-08-25 \
    --dataset_format oasst1 \
    --source_max_len 1 \
    --target_max_len 1023 \
    --per_device_train_batch_size 2 \
    --gradient_accumulation_steps 8 \
    --max_steps 0 \
    --num_train_epochs 5 \
    --learning_rate 1e-5 \
    --adam_beta2 0.999 \
    --max_grad_norm 1.0 \
    --weight_decay 0.0 \
    --seed 0 \
    --trust_remote_code
```

Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 35.58 |
| AI2 Reasoning Challenge (25-Shot) | 32.85 |
| HellaSwag (10-Shot) | 58.16 |
| MMLU (5-Shot) | 25.96 |
| TruthfulQA (0-shot) | 38.35 |
| Winogrande (5-shot) | 57.70 |
| GSM8k (5-shot) | 0.45 |
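As a quick sanity check, the leaderboard "Avg." row is just the unweighted mean of the six benchmark scores, rounded to two decimals. A small illustrative snippet (the values are copied from the table above):

```python
# Open LLM Leaderboard scores from the table above.
scores = {
    "ARC (25-shot)": 32.85,
    "HellaSwag (10-shot)": 58.16,
    "MMLU (5-shot)": 25.96,
    "TruthfulQA (0-shot)": 38.35,
    "Winogrande (5-shot)": 57.70,
    "GSM8k (5-shot)": 0.45,
}

# The "Avg." row is the unweighted mean of the six scores.
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 35.58
```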
GGUF File List

| Filename | Quant type | Size |
|---|---|---|
| tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1.IQ3_M.gguf | IQ3_M | 492.29 MB |
| tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1.IQ3_S.gguf | IQ3_S | 477.68 MB |
| tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1.IQ3_XS.gguf | IQ3_XS | 455.51 MB |
| tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1.IQ4_NL.gguf | IQ4_NL | 611.36 MB |
| tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1.IQ4_XS.gguf | IQ4_XS | 581.57 MB |
| tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1.Q2_K.gguf | Q2_K | 412.12 MB |
| tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1.Q3_K.gguf | Q3_K | 523.01 MB |
| tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1.Q3_K_L.gguf | Q3_K_L | 564.13 MB |
| tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1.Q3_K_M.gguf | Q3_K_M | 523.01 MB |
| tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1.Q3_K_S.gguf | Q3_K_S | 476.22 MB |
| tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1.Q4_0.gguf | Q4_0 (recommended) | 607.24 MB |
| tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1.Q4_1.gguf | Q4_1 | 668.89 MB |
| tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1.Q4_K.gguf | Q4_K | 636.89 MB |
| tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1.Q4_K_M.gguf | Q4_K_M | 636.89 MB |
| tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1.Q4_K_S.gguf | Q4_K_S | 610.24 MB |
| tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1.Q5_0.gguf | Q5_0 | 730.55 MB |
| tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1.Q5_1.gguf | Q5_1 | 792.21 MB |
| tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1.Q5_K.gguf | Q5_K | 745.82 MB |
| tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1.Q5_K_M.gguf | Q5_K_M | 745.82 MB |
| tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1.Q5_K_S.gguf | Q5_K_S | 730.55 MB |
| tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1.Q6_K.gguf | Q6_K | 861.57 MB |
| tinyllama-oasst1-top1-instruct-full-lr1-5-v0.1.Q8_0.gguf | Q8_0 | 1.09 GB |
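Which file to pick mostly comes down to available memory: lower-bit quants are smaller but lose more quality, higher-bit quants are closer to the original weights but need more RAM/VRAM. A minimal sketch of that trade-off, using sizes copied from the list above; the helper name and the 1.5x headroom factor (to leave room for the KV cache and runtime overhead) are assumptions of this sketch, not guidance from the upstream repository:

```python
# File sizes in MB, copied from the GGUF file list above.
QUANT_SIZES_MB = {
    "Q2_K": 412.12,
    "Q3_K_M": 523.01,
    "Q4_0": 607.24,
    "Q4_K_M": 636.89,
    "Q5_K_M": 745.82,
    "Q6_K": 861.57,
    "Q8_0": 1116.16,  # 1.09 GB
}

def pick_quant(budget_mb, headroom=1.5):
    """Return the largest quant whose size, scaled by a headroom factor
    for KV cache and runtime overhead, still fits the memory budget.
    Returns None if even the smallest quant does not fit."""
    fitting = {q: s for q, s in QUANT_SIZES_MB.items() if s * headroom <= budget_mb}
    if not fitting:
        return None
    return max(fitting, key=fitting.get)

print(pick_quant(1024))  # Q4_K_M
print(pick_quant(2048))  # Q8_0
```

For example, with roughly 1 GB to spare this picks Q4_K_M, matching the general rule of thumb that 4-bit k-quants are a good balance of size and quality for small models.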