---
base_model:
- SanjiWatsuki/Kunoichi-DPO-v2-7B
library_name: transformers
tags:
- mistral
- quantized
- text-generation-inference
pipeline_tag: text-generation
inference: false
license: cc-by-nc-4.0
---

> [!TIP]
> **Support:**
> My upload speeds have been cooked and unstable lately.
> Realistically I'd need to move to get a better provider.
> If you want to and are able, you can support my various endeavors here (Ko-fi).
> I apologize for disrupting your experience.

GGUF-Imatrix quantizations for [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B).

What does "Imatrix" mean?

It stands for Importance Matrix, a technique used to improve the quality of quantized models.

The Imatrix is calculated based on calibration data, and it helps determine the importance of different model activations during the quantization process. The idea is to preserve the most important information during quantization, which can help reduce the loss of model performance.

One of the benefits of using an Imatrix is that it can lead to better model performance, especially when the calibration data is diverse.

More information: [[1]](https://github.com/ggerganov/llama.cpp/discussions/5006) [[2]](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
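As a toy illustration of the idea (plain Python, not llama.cpp's actual quantization kernels — every name below is made up for the sketch): when the quantization scale is chosen to minimize *importance-weighted* error, the result can only match or beat a scale chosen without any activation statistics.

```python
import random

random.seed(0)
w = [random.gauss(0, 1) for _ in range(256)]            # weights to quantize
imp = [abs(random.gauss(0, 1)) + 0.1 for _ in range(256)]  # per-weight importance (stand-in for calibration activations)

QMAX = 7  # 4-bit symmetric quantization: integer levels in [-8, 7]

def rtn(w, scale):
    """Round-to-nearest quantize, then dequantize, at a given scale."""
    return [max(-8, min(QMAX, round(x / scale))) * scale for x in w]

def werr(w, q, imp):
    """Importance-weighted squared reconstruction error."""
    return sum(i * (a - b) ** 2 for i, a, b in zip(imp, w, q))

base_scale = max(abs(x) for x in w) / QMAX

# Plain quantization: scale chosen from the weights alone.
q_plain = rtn(w, base_scale)
e_plain = werr(w, q_plain, imp)

# Imatrix-style: search for the scale minimizing importance-weighted error.
# The search includes f=1.0 (the plain scale), so it can never do worse.
best_err, best_f = min(
    (werr(w, rtn(w, f * base_scale), imp), f)
    for f in [0.5 + 0.01 * k for k in range(51)]
)
q_imat = rtn(w, best_f * base_scale)
e_imat = werr(w, q_imat, imp)

assert e_imat <= e_plain
print(f"weighted error: plain={e_plain:.3f}  imatrix-style={e_imat:.3f}")
```

The real imatrix works per-tensor on activation statistics gathered during calibration runs, but the principle is the same: spend the limited quantization precision where it matters most.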

If you want any specific quantization to be added, feel free to ask.

All credits belong to the creator.

Base ⇒ GGUF(F16) ⇒ Imatrix-Data(F16) ⇒ GGUF(Imatrix-Quants)

Using llama.cpp-b2277.

For the `--imatrix` data, `imatrix-Kunoichi-DPO-v2-7B-F16.dat` was used.
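The pipeline above can be sketched with llama.cpp's own tools. The tool names below match the b2277-era release (`convert.py`, `imatrix`, `quantize`); paths and the calibration file are placeholders — adjust to your checkout:

```shell
# 1. Base -> GGUF(F16): convert the HF checkpoint to an F16 GGUF
python convert.py ./Kunoichi-DPO-v2-7B --outtype f16 \
    --outfile Kunoichi-DPO-v2-7B-F16.gguf

# 2. GGUF(F16) -> Imatrix-Data(F16): compute the importance matrix
#    from a calibration text file
./imatrix -m Kunoichi-DPO-v2-7B-F16.gguf -f calibration.txt \
    -o imatrix-Kunoichi-DPO-v2-7B-F16.dat

# 3. -> GGUF(Imatrix-Quants): quantize with the imatrix applied
./quantize --imatrix imatrix-Kunoichi-DPO-v2-7B-F16.dat \
    Kunoichi-DPO-v2-7B-F16.gguf \
    Kunoichi-DPO-v2-7B-Q4_K_M-imatrix.gguf Q4_K_M
```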

Waifu card:

*(image)*

Original model information:

| Model | MT Bench | EQ Bench | MMLU | Logic Test |
|---|---|---|---|---|
| GPT-4-Turbo | 9.32 | - | - | - |
| GPT-4 | 8.99 | 62.52 | 86.4 | 0.86 |
| Kunoichi-DPO-v2-7B | 8.51 | 42.18 | 64.94 | 0.58 |
| Mixtral-8x7B-Instruct | 8.30 | 44.81 | 70.6 | 0.75 |
| Kunoichi-DPO-7B | 8.29 | 41.60 | 64.83 | 0.59 |
| Kunoichi-7B | 8.14 | 44.32 | 64.9 | 0.58 |
| Starling-7B | 8.09 | - | 63.9 | 0.51 |
| Claude-2 | 8.06 | 52.14 | 78.5 | - |
| Silicon-Maid-7B | 7.96 | 40.44 | 64.7 | 0.54 |
| Loyal-Macaroni-Maid-7B | 7.95 | 38.66 | 64.9 | 0.57 |
| GPT-3.5-Turbo | 7.94 | 50.28 | 70 | 0.57 |
| Claude-1 | 7.9 | - | 77 | - |
| Openchat-3.5 | 7.81 | 37.08 | 64.3 | 0.39 |
| Dolphin-2.6-DPO | 7.74 | 42.88 | 61.9 | 0.53 |
| Zephyr-7B-beta | 7.34 | 38.71 | 61.4 | 0.30 |
| Llama-2-70b-chat-hf | 6.86 | 51.56 | 63 | - |
| Neural-chat-7b-v3-1 | 6.84 | 43.61 | 62.4 | 0.30 |
| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---|---|---|---|---|
| Kunoichi-DPO-7B | 58.4 | 45.08 | 74 | 66.99 | 47.52 |
| Kunoichi-DPO-v2-7B | 58.31 | 44.85 | 75.05 | 65.69 | 47.65 |
| Kunoichi-7B | 57.54 | 44.99 | 74.86 | 63.72 | 46.58 |
| OpenPipe/mistral-ft-optimized-1218 | 56.85 | 44.74 | 75.6 | 59.89 | 47.17 |
| Silicon-Maid-7B | 56.45 | 44.74 | 74.26 | 61.5 | 45.32 |
| mlabonne/NeuralHermes-2.5-Mistral-7B | 53.51 | 43.67 | 73.24 | 55.37 | 41.76 |
| teknium/OpenHermes-2.5-Mistral-7B | 52.42 | 42.75 | 72.99 | 52.99 | 40.94 |
| openchat/openchat_3.5 | 51.34 | 42.67 | 72.92 | 47.27 | 42.51 |
| berkeley-nest/Starling-LM-7B-alpha | 51.16 | 42.06 | 72.72 | 47.33 | 42.53 |
| HuggingFaceH4/zephyr-7b-beta | 50.99 | 37.33 | 71.83 | 55.1 | 39.7 |
| Model | AlpacaEval 2 | Length |
|---|---|---|
| GPT-4 | 23.58% | 1365 |
| GPT-4 0314 | 22.07% | 1371 |
| Mistral Medium | 21.86% | 1500 |
| Mixtral 8x7B v0.1 | 18.26% | 1465 |
| Kunoichi-DPO-v2 | 17.19% | 1785 |
| Claude 2 | 17.19% | 1069 |
| Claude | 16.99% | 1082 |
| Gemini Pro | 16.85% | 1315 |
| GPT-4 0613 | 15.76% | 1140 |
| Claude 2.1 | 15.73% | 1096 |
| Mistral 7B v0.2 | 14.72% | 1676 |
| GPT 3.5 Turbo 0613 | 14.13% | 1328 |
| LLaMA2 Chat 70B | 13.87% | 1790 |
| LMCocktail-10.7B-v1 | 13.15% | 1203 |
| WizardLM 13B V1.1 | 11.23% | 1525 |
| Zephyr 7B Beta | 10.99% | 1444 |
| OpenHermes-2.5-Mistral (7B) | 10.34% | 1107 |
| GPT 3.5 Turbo 0301 | 9.62% | 827 |
| Kunoichi-7B | 9.38% | 1492 |
| GPT 3.5 Turbo 1106 | 9.18% | 796 |
| GPT-3.5 | 8.56% | 1018 |
| Phi-2 DPO | 7.76% | 1687 |
| LLaMA2 Chat 13B | 7.70% | 1513 |

## GGUF File List

πŸ“ Filename πŸ“¦ Size ⚑ Download
Kunoichi-DPO-v2-7B-Q4_K_M-imatrix.gguf
Recommended LFS Q4
4.07 GB Download
Kunoichi-DPO-v2-7B-Q4_K_S-imatrix.gguf
LFS Q4
3.86 GB Download
Kunoichi-DPO-v2-7B-Q5_K_M-imatrix.gguf
LFS Q5
4.78 GB Download
Kunoichi-DPO-v2-7B-Q5_K_S-imatrix.gguf
LFS Q5
4.65 GB Download
Kunoichi-DPO-v2-7B-Q6_K-imatrix.gguf
LFS Q6
5.53 GB Download
Kunoichi-DPO-v2-7B-Q8_0-imatrix.gguf
LFS Q8
7.17 GB Download