πŸ“‹ Model Description


license: other
license_name: databricks-open-model-license
license_link: https://www.databricks.com/legal/open-model-license
library_name: gguf
pipeline_tag: text-generation
base_model: databricks/dbrx-instruct
Quants from @phymbert (author of the llama.cpp support for this model) are posted here. The quants in this repo are meant to test imatrix-quantized weights. If you run Metal, you may need this PR.

Added ggml-dbrx-instruct-16x12b-f16_imatrix-wiki.dat, an importance matrix computed over 2K batches (~1M tokens) on the FP16 weights using wiki.train.
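For reference, a .dat like this is the output of llama.cpp's imatrix tool. A minimal sketch, assuming a sharded FP16 model can be loaded via its first split; the -ngl value and file names mirror this repo but are not necessarily the exact invocation used:

```sh
# Compute an importance matrix over wiki.train using the FP16 weights.
# --chunks 2000 corresponds to the ~1M-token run mentioned above.
./build/bin/imatrix \
  -m ggml-dbrx-instruct-16x12b-f16-00001-of-00006.gguf \
  -f wiki.train.raw \
  -o ggml-dbrx-instruct-16x12b-f16_imatrix-wiki.dat \
  --chunks 2000 \
  -ngl 41
```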

| Quant  | IMatrix (Quant/Dataset/Chunks) | Size (GiB) | PPL (wiki.test)    |
| ------ | ------------------------------ | ---------- | ------------------ |
| IQ4_XS | Q8_0/wiki.train/200            | 65.29      | 5.2260 +/- 0.03558 |
| IQ4_XS | FP16/wiki.train/2000           | 65.29      | 5.2241 +/- 0.03559 |
| IQ4_XS | -                              | 66.05      | 5.2546 +/- 0.03570 |
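The PPL column is the sort of output llama.cpp's perplexity tool prints for wiki.test. A minimal sketch, assuming an already-downloaded quant; the file name and -ngl value are illustrative:

```sh
# Evaluate perplexity on wiki.test (sharded quants can be loaded via their first split).
./build/bin/perplexity \
  -m ggml-dbrx-instruct-16x12b-iq4_xs-00001-of-00002.gguf \
  -f wiki.test.raw \
  -ngl 41
```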
2024-04-13: Support for this model has just been merged in PR #6515. You will need llama.cpp commit 4bd0f93e to run this model.
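A minimal build sketch at that commit, assuming the cmake layout that matches the ./build/bin/ paths used below; backend flags are left to your setup:

```sh
# Build llama.cpp at the commit that includes DBRX support (merged in PR #6515).
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout 4bd0f93e
cmake -B build          # add your usual GPU backend flag here if needed
cmake --build build --config Release -j
```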

Quants in this repo are tested with the following command (quants below IQ3 are very sensitive and unreliable so far; the imatrix may need to be trained on FP16 weights rather than Q8_0, and for more than 200 chunks):

./build/bin/main -ngl 41 -c 4096 -s 0 -e -p "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\nWrite an essay about AI.<|im_end|>\n<|im_start|>assistant\n" -m ggml-dbrx-instruct-16x12b-<<quant-to-test>>.gguf

  • GGUF importance matrix (imatrix) quants for https://huggingface.co/databricks/dbrx-instruct
  • The importance matrix is trained for ~100K tokens (200 batches of 512 tokens) using wiki.train.raw.
  • Which GGUF is right for me? (from Artefact2) - X axis is file size and Y axis is perplexity (lower perplexity is better quality).
  • The imatrix is also used for the K-quants (only for quants below Q6_K).
  • You can merge split GGUFs with gguf-split --merge (sketched after this list), although this is not required since f482bb2e.
  • What is importance matrix (imatrix)? You can read more about it from the author here.
  • How do I use imatrix quants? Just like any other GGUF; the .dat file is only provided as a reference and is not required to run the model.
  • If you need to use IQ1, use IQ1_M, as IQ1_S is very unstable.
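A minimal sketch of the two operations referenced above: merging split GGUFs and re-quantizing from FP16 with the provided imatrix. File names mirror this repo; the target quant type IQ4_XS is just an example:

```sh
# Merge the FP16 splits into one file (optional since llama.cpp commit f482bb2e).
./build/bin/gguf-split --merge \
  ggml-dbrx-instruct-16x12b-f16-00001-of-00006.gguf \
  ggml-dbrx-instruct-16x12b-f16.gguf

# Re-quantize from FP16 using the provided importance matrix.
./build/bin/quantize \
  --imatrix ggml-dbrx-instruct-16x12b-f16_imatrix-wiki.dat \
  ggml-dbrx-instruct-16x12b-f16.gguf \
  ggml-dbrx-instruct-16x12b-iq4_xs.gguf \
  IQ4_XS
```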

DBRX is a transformer-based decoder-only large language model (LLM) that was trained using next-token prediction. It uses a fine-grained mixture-of-experts (MoE) architecture with 132B total parameters of which 36B parameters are active on any input. It was pre-trained on 12T tokens of text and code data. Compared to other open MoE models like Mixtral-8x7B and Grok-1, DBRX is fine-grained, meaning it uses a larger number of smaller experts. DBRX has 16 experts and chooses 4, while Mixtral-8x7B and Grok-1 have 8 experts and choose 2. This provides 65x more possible combinations of experts and we found that this improves model quality. DBRX uses rotary position encodings (RoPE), gated linear units (GLU), and grouped query attention (GQA). It uses the GPT-4 tokenizer as provided in the tiktoken repository. We made these choices based on exhaustive evaluation and scaling experiments.
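For reference, the 65x figure is just the ratio of the routing combinations (4 experts chosen from 16 versus 2 chosen from 8):

$$\binom{16}{4} = 1820, \qquad \binom{8}{2} = 28, \qquad \frac{1820}{28} = 65$$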

| Layers | Context | Template |
| ------ | ------- | -------- |
| 40 | 32768 | \<\|im_start\|\>system<br>{system}\<\|im_end\|\><br>\<\|im_start\|\>user<br>{prompt}\<\|im_end\|\><br>\<\|im_start\|\>assistant |
  • 16x12B MoE
  • 16 experts (12B params per single expert; top_k=4 routing)
  • 36B active params (132B total params)
  • Trained on 12T tokens
  • 32k sequence length training

πŸ“‚ GGUF File List

πŸ“ Filename πŸ“¦ Size ⚑ Download
ggml-dbrx-instruct-16x12b-f16-00001-of-00006.gguf
Recommended LFS FP16
43.65 GB Download
ggml-dbrx-instruct-16x12b-f16-00002-of-00006.gguf
LFS FP16
44.46 GB Download
ggml-dbrx-instruct-16x12b-f16-00003-of-00006.gguf
LFS FP16
40.62 GB Download
ggml-dbrx-instruct-16x12b-f16-00004-of-00006.gguf
LFS FP16
44.46 GB Download
ggml-dbrx-instruct-16x12b-f16-00005-of-00006.gguf
LFS FP16
40.6 GB Download
ggml-dbrx-instruct-16x12b-f16-00006-of-00006.gguf
LFS FP16
31.34 GB Download
ggml-dbrx-instruct-16x12b-iq1_m.gguf
LFS
27.75 GB Download
ggml-dbrx-instruct-16x12b-iq1_s.gguf
LFS
25.05 GB Download
ggml-dbrx-instruct-16x12b-iq2_xs.gguf
LFS Q2
35.88 GB Download
ggml-dbrx-instruct-16x12b-iq2_xxs.gguf
LFS Q2
32.24 GB Download
ggml-dbrx-instruct-16x12b-iq3_xs-00001-of-00002.gguf
LFS Q3
45.78 GB Download
ggml-dbrx-instruct-16x12b-iq3_xs-00002-of-00002.gguf
LFS Q3
4.35 GB Download
ggml-dbrx-instruct-16x12b-iq4_xs-00001-of-00002.gguf
LFS Q4
40.1 GB Download
ggml-dbrx-instruct-16x12b-iq4_xs-00002-of-00002.gguf
LFS Q4
25.18 GB Download
ggml-dbrx-instruct-16x12b-q6_k-00001-of-00003.gguf
LFS Q6
46.17 GB Download
ggml-dbrx-instruct-16x12b-q6_k-00002-of-00003.gguf
LFS Q6
46.5 GB Download
ggml-dbrx-instruct-16x12b-q6_k-00003-of-00003.gguf
LFS Q6
7.87 GB Download
ggml-dbrx-instruct-16x12b-q8_0-00001-of-00003.gguf
LFS Q8
42.63 GB Download
ggml-dbrx-instruct-16x12b-q8_0-00002-of-00003.gguf
LFS Q8
45.15 GB Download
ggml-dbrx-instruct-16x12b-q8_0-00003-of-00003.gguf
LFS Q8
42.45 GB Download