πŸ“‹ Model Description


```yaml
license: other
base_model: cognitivecomputations/dolphin-2.9-llama3-8b
tags:
- generated_from_trainer
model-index:
- name: out
  results: []
datasets:
- cognitivecomputations/Dolphin-2.9
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- HuggingFaceH4/ultrachat_200k
- microsoft/orca-math-word-problems-200k
- abacusai/SystemChat-1.1
- Locutusque/function-calling-chatml
- internlm/Agent-FLAN
pipeline_tag: text-generation
```

Dolphin 2.9 Llama 3 8b - GGUF 🐬

Model Description

My appreciation to the sponsors of Dolphin 2.9: Crusoe Cloud, which provided the 8x L40S node used for training.

This model is based on Llama-3-8b and is governed by the META LLAMA 3 COMMUNITY LICENSE AGREEMENT.

The base model has 8k context, and the full-weight fine-tuning used a 4k sequence length.

Training took 2.5 days on an 8x L40S node provided by Crusoe Cloud.

This model was trained with full fine-tuning (FFT) on all parameters, using the ChatML prompt template format.

Example:

```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
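As a quick illustration, here is a minimal sketch of building that prompt string programmatically with the Hugging Face tokenizer's chat template. It assumes the `transformers` package is installed and that the original full-weight repo's tokenizer ships this ChatML template; the message contents are placeholders.

```python
from transformers import AutoTokenizer

# Assumes the original (non-GGUF) repo ships a ChatML chat template.
tokenizer = AutoTokenizer.from_pretrained("cognitivecomputations/dolphin-2.9-llama3-8b")

messages = [
    {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
    {"role": "user", "content": "Explain GGUF quantization in one sentence."},
]

# tokenize=False returns the formatted string; add_generation_prompt=True
# appends the trailing <|im_start|>assistant turn shown above.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```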

Dolphin-2.9 has a variety of instruction, conversational, and coding skills. It also has initial agentic abilities and supports function calling.

Dolphin is uncensored. I have filtered the dataset to remove alignment and bias, which makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service: it will be highly compliant with any request, even unethical ones. Please read my blog post about uncensored models: https://erichartford.com/uncensored-models. You are responsible for any content you create using this model. Enjoy responsibly.
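As one hedged illustration of what such an alignment layer could look like (this is not the author's method; the policy prompt and blocklist below are placeholders you would replace with your own moderation policy or a dedicated safety model):

```python
# Minimal sketch of a service-side alignment layer. The policy prompt and
# the blocklist are illustrative placeholders, not a real safety system.
GUARDRAIL_SYSTEM_PROMPT = (
    "You are Dolphin, a helpful AI assistant. "
    "Refuse requests for illegal or harmful content."
)

BLOCKED_PHRASES = ["how to make a weapon"]  # placeholder policy


def build_messages(user_prompt: str) -> list[dict]:
    """Prepend the guardrail system prompt to every request."""
    return [
        {"role": "system", "content": GUARDRAIL_SYSTEM_PROMPT},
        {"role": "user", "content": user_prompt},
    ]


def passes_input_filter(user_prompt: str) -> bool:
    """Crude pre-generation check; a real service would use a policy model."""
    lowered = user_prompt.lower()
    return not any(phrase in lowered for phrase in BLOCKED_PHRASES)
```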

Dolphin is licensed according to Meta's Llama license. I grant permission for any use, including commercial, that complies with Meta's Llama-3 license. Dolphin was trained on data generated by GPT-4, among other models.

Built with Axolotl

See axolotl config

axolotl version: 0.4.0

```yaml
base_model: meta-llama/Meta-Llama-3-8B
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
tokenizer_use_fast: false

load_in_8bit: false
load_in_4bit: false
strict: false
model_config:

datasets:
  - path: /workspace/datasets/dolphin-2.9/dolphin201-sharegpt2.jsonl
    type: sharegpt
    conversation: chatml
  - path: /workspace/datasets/dolphin-2.9/Ultrachat200kunfiltered.jsonl
    type: sharegpt
    conversation: chatml
  - path: /workspace/datasets/dolphin-2.9/dolphin-coder-translate-sharegpt2.jsonl
    type: sharegpt
    conversation: chatml
  - path: /workspace/datasets/dolphin-2.9/dolphin-coder-codegen-sharegpt2.jsonl
    type: sharegpt
    conversation: chatml
  - path: /workspace/datasets/dolphin-2.9/m-a-p_Code-Feedback-sharegpt-unfiltered.jsonl
    type: sharegpt
    conversation: chatml
  - path: /workspace/datasets/dolphin-2.9/m-a-p_CodeFeedback-Filtered-Instruction-sharegpt-unfiltered.jsonl
    type: sharegpt
    conversation: chatml
  - path: /workspace/datasets/dolphin-2.9/not_samantha_norefusals.jsonl
    type: sharegpt
    conversation: chatml
  - path: /workspace/datasets/dolphin-2.9/Orca-Math-resort-unfiltered.jsonl
    type: sharegpt
    conversation: chatml
  - path: /workspace/datasets/dolphin-2.9/agent_instruct_react_unfiltered.jsonl
    type: sharegpt
    conversation: chatml
  - path: /workspace/datasets/dolphin-2.9/toolbench_instruct_j1s1_3k_unfiltered.jsonl
    type: sharegpt
    conversation: chatml
  - path: /workspace/datasets/dolphin-2.9/toolbench_negative_unfiltered.jsonl
    type: sharegpt
    conversation: chatml
  - path: /workspace/datasets/dolphin-2.9/toolbench_react_10p_unfiltered.jsonl
    type: sharegpt
    conversation: chatml
  - path: /workspace/datasets/dolphin-2.9/toolbench_tflan_cot_30p_unfiltered.jsonl
    type: sharegpt
    conversation: chatml
  - path: /workspace/datasets/dolphin-2.9/openhermes200k_unfiltered.jsonl
    type: sharegpt
    conversation: chatml
  - path: /workspace/datasets/dolphin-2.9/SystemConversations.jsonl
    type: sharegpt
    conversation: chatml

chat_template: chatml

dataset_prepared_path: /workspace/datasets/dolphin-2.9/thingy
val_set_size: 0.0002
output_dir: ./out

sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true

gradient_accumulation_steps: 4
micro_batch_size: 3
num_epochs: 3
logging_steps: 1
optimizer: adamw_8bit
lr_scheduler: cosine
learning_rate: 2e-5

wandb_project: dolphin-2.9-mixtral-8x22b
wandb_watch:
wandb_run_id:
wandb_log_model:

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
saves_per_epoch: 4
save_total_limit: 2
save_steps:
evals_per_epoch: 4
eval_sample_packing: false
debug:
deepspeed: deepspeed_configs/zero3_bf16.json
weight_decay: 0.05
fsdp:
fsdp_config:
special_tokens:
  eos_token: "<|im_end|>"
  pad_token: "<|end_of_text|>"
tokens:
  - "<|im_start|>"
  - "<|im_end|>"
```


Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learningrate: 2e-05
  • trainbatchsize: 3
  • evalbatchsize: 3
  • seed: 42
  • distributedtype: multi-GPU
  • numdevices: 8
  • gradientaccumulationsteps: 4
  • totaltrainbatchsize: 96
  • totalevalbatchsize: 24
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lrschedulertype: cosine
  • lrschedulerwarmupsteps: 7
  • num_epochs: 3

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|--------------:|-------:|-----:|----------------:|
| 1.146         | 0.0005 | 1    | 1.1064          |
| 0.6962        | 0.2501 | 555  | 0.6636          |
| 0.6857        | 0.5001 | 1110 | 0.6503          |
| 0.6592        | 0.7502 | 1665 | 0.6419          |
| 0.6465        | 1.0002 | 2220 | 0.6317          |
| 0.5295        | 1.2395 | 2775 | 0.6408          |
| 0.5302        | 1.4895 | 3330 | 0.6351          |
| 0.5188        | 1.7396 | 3885 | 0.6227          |
| 0.521         | 1.9896 | 4440 | 0.6168          |
| 0.3968        | 2.2289 | 4995 | 0.6646          |
| 0.3776        | 2.4789 | 5550 | 0.6619          |
| 0.3983        | 2.7290 | 6105 | 0.6602          |

Framework versions

  • Transformers 4.40.0
  • Pytorch 2.2.2+cu121
  • Datasets 2.18.0
  • Tokenizers 0.19.1

πŸ“‚ GGUF File List

πŸ“ Filename πŸ“¦ Size ⚑ Download
dolphin-2.9-llama3-8b.Q2_K.gguf
LFS Q2
2.96 GB Download
dolphin-2.9-llama3-8b.Q3_K_L.gguf
LFS Q3
4.03 GB Download
dolphin-2.9-llama3-8b.Q3_K_M.gguf
LFS Q3
3.74 GB Download
dolphin-2.9-llama3-8b.Q3_K_S.gguf
LFS Q3
3.41 GB Download
dolphin-2.9-llama3-8b.Q4_0.gguf
Recommended LFS Q4
4.34 GB Download
dolphin-2.9-llama3-8b.Q4_1.gguf
LFS Q4
4.78 GB Download
dolphin-2.9-llama3-8b.Q4_K_M.gguf
LFS Q4
4.58 GB Download
dolphin-2.9-llama3-8b.Q4_K_S.gguf
LFS Q4
4.37 GB Download
dolphin-2.9-llama3-8b.Q5_0.gguf
LFS Q5
5.22 GB Download
dolphin-2.9-llama3-8b.Q5_1.gguf
LFS Q5
5.65 GB Download
dolphin-2.9-llama3-8b.Q5_K_M.gguf
LFS Q5
5.34 GB Download
dolphin-2.9-llama3-8b.Q5_K_S.gguf
LFS Q5
5.22 GB Download
dolphin-2.9-llama3-8b.Q6_K.gguf
LFS Q6
6.14 GB Download
dolphin-2.9-llama3-8b.Q8_0.gguf
LFS Q8
7.95 GB Download
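To run one of these quantized files locally, here is a minimal sketch using llama-cpp-python; the model path, context size, and generation settings are illustrative, and any quant from the table works the same way:

```python
from llama_cpp import Llama

# Load the recommended Q4_0 quant; n_ctx matches the base model's 8k context.
llm = Llama(
    model_path="dolphin-2.9-llama3-8b.Q4_0.gguf",
    n_ctx=8192,
    chat_format="chatml",  # matches the prompt template shown above
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
        {"role": "user", "content": "Write a haiku about dolphins."},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```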