πŸ“‹ Model Description


---
base_model: NeverSleep/Lumimaid-v0.2-8B
quantized_by: Lewdiculous
library_name: transformers
license: cc-by-nc-4.0
inference: false
language:
  - en
tags:
  - roleplay
  - llama3
  - sillytavern
---

#roleplay #sillytavern #llama3

My GGUF-IQ-Imatrix quants for NeverSleep/Lumimaid-v0.2-8B.

I recommend checking their page for feedback and support.

> [!IMPORTANT]
> **Quantization process:**
>
> Imatrix data was generated from the FP16-GGUF, and conversions were made directly from the BF16-GGUF.
>
> This is a bit more disk- and compute-intensive, but it hopefully avoids any losses during conversion.
>
> To run this model, please use the latest version of KoboldCpp.
>
> If you notice any issues, let me know in the discussions.

> [!NOTE]
> **Presets:**
>
> * Llama-3.
>
> Some compatible SillyTavern presets can be found here (Virt's Roleplay Presets - v1.9).
>
> Check discussions such as this one and this one for other preset and sampler recommendations.
>
> Lower temperatures are recommended by the authors, so make sure to experiment.
>
> **General usage with KoboldCpp:**
>
> For 8 GB VRAM GPUs, I recommend the Q4_K_M-imat (4.89 BPW) quant for context sizes of up to 12288 without the use of `--quantkv`.
>
> Using `--quantkv 1` (β‰ˆQ8) or even `--quantkv 2` (β‰ˆQ4) can get you to 32K context sizes, with the caveat that it is not compatible with Context Shifting; this only matters if you can manage to fill up that much context.
>
> Read more about it in the release here.
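As a rough sketch, the recommendation above could translate into KoboldCpp launches like the following (flag values are illustrative assumptions, not from the original card; check `koboldcpp --help` for your version):

```shell
# Hypothetical launch for an 8 GB GPU: Q4_K_M quant,
# 12288 context, no KV-cache quantization.
koboldcpp --model Lumimaid-v0.2-8B-Q4_K_M-imat.gguf \
  --contextsize 12288 \
  --gpulayers 33

# Stretching to 32K context by quantizing the KV cache to roughly Q8.
# Note: this disables Context Shifting.
koboldcpp --model Lumimaid-v0.2-8B-Q4_K_M-imat.gguf \
  --contextsize 32768 \
  --gpulayers 33 \
  --quantkv 1
```

The `--gpulayers` value here assumes all layers of an 8B model fit in VRAM; lower it if you see out-of-memory errors.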


General chart with relative quant performances:

> [!NOTE]
> **Recommended read:**
>
> "Which GGUF is right for me? (Opinionated)" by Artefact2
>
> *(Image: first graph from "Which GGUF is right for me? (Opinionated)" by Artefact2; click the image to view full size.)*

> [!TIP]
> **Personal support:**
>
> I apologize for disrupting your experience.
>
> Eventually I may be able to use a dedicated server for this, but for now, hopefully these quants are helpful.
>
> If you want to, and you are able to, you can spare some change over here (Ko-fi).
>
> **Author support:**
>
> You can support the authors at their pages/here.



Original model card information.

Original card:

Lumimaid 0.2

[8b] - 12b - 70b - 123b

This model is based on: Meta-Llama-3.1-8B-Instruct

Wandb: https://wandb.ai/undis95/Lumi-Llama-3-1-8B?nw=nwuserundis95

Lumimaid 0.1 -> 0.2 is a HUGE step up dataset-wise.

As some people told us our models were sloppy, Ikari decided to say fuck it and literally nuke all the chats with the most slop.

Our dataset has stayed the same since day one; we added data over time, cleaned it, and repeated. After not releasing a model for a while because we were never satisfied, we think it's time to come back!

Prompt template: Llama-3-Instruct

    <|begin_of_text|><|start_header_id|>system<|end_header_id|>

    {system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

    {input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

    {output}<|eot_id|>
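For illustration, the Llama-3-Instruct template can be assembled in a small shell snippet (the system prompt and user input below are placeholders, not part of the original card):

```shell
# Assemble a Llama-3-Instruct prompt from placeholder parts.
system_prompt="You are a helpful roleplay assistant."
user_input="Hello!"

prompt="<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
prompt="${prompt}${system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
prompt="${prompt}${user_input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"

# %b expands the \n escapes into real newlines.
printf '%b' "$prompt"
```

The model's reply should then end with its own `<|eot_id|>` token, after which the next user turn is appended in the same pattern.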

Credits:

  • Undi
  • IkariDev

Training data we used to make our dataset:

We sadly didn't find the sources of the following; DM us if you recognize your set!

  • OpusInstruct-v2-6.5K-Filtered-v2-sharegpt
  • claudesharegpttrimmed
  • CapybaraPureDecontaminated-ShareGPT_reduced

Datasets credits:

  • Epiculous
  • ChaoticNeutrals
  • Gryphe
  • meseca
  • PJMixers
  • NobodyExistsOnTheInternet
  • cgato
  • kalomaze
  • Doctor-Shotgun
  • Norquinal
  • nothingiisreal

Others

Undi: If you want to support us, you can here.

IkariDev: Visit my retro/neocities style website please kek

πŸ“‚ GGUF File List

πŸ“ Filename πŸ“¦ Size ⚑ Download
Lumimaid-v0.2-8B-BF16.gguf
LFS FP16
14.97 GB Download
Lumimaid-v0.2-8B-F16.gguf
LFS FP16
14.97 GB Download
Lumimaid-v0.2-8B-IQ3_M-imat.gguf
LFS Q3
3.52 GB Download
Lumimaid-v0.2-8B-IQ3_XXS-imat.gguf
LFS Q3
3.05 GB Download
Lumimaid-v0.2-8B-IQ4_XS-imat.gguf
LFS Q4
4.14 GB Download
Lumimaid-v0.2-8B-Q4_K_M-imat.gguf
Recommended LFS Q4
4.58 GB Download
Lumimaid-v0.2-8B-Q4_K_S-imat.gguf
LFS Q4
4.37 GB Download
Lumimaid-v0.2-8B-Q5_K_M-imat.gguf
LFS Q5
5.34 GB Download
Lumimaid-v0.2-8B-Q5_K_S-imat.gguf
LFS Q5
5.21 GB Download
Lumimaid-v0.2-8B-Q6_K-imat.gguf
LFS Q6
6.14 GB Download
Lumimaid-v0.2-8B-Q8_0-imat.gguf
LFS Q8
7.95 GB Download