πŸ“‹ Model Description

Quantization made by Richard Erkhov.

Github

Discord

Request more models

NSFWDPONoromaid-7b-Mistral-7B-Instruct-v0.1 - GGUF

  • Model creator: https://huggingface.co/MaziyarPanahi/
  • Original model: https://huggingface.co/MaziyarPanahi/NSFWDPONoromaid-7b-Mistral-7B-Instruct-v0.1/

| Name | Quant method | Size |
| ---- | ------------ | ---- |
| NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.Q2_K.gguf | Q2_K | 2.53GB |
| NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.Q3_K_S.gguf | Q3_K_S | 2.95GB |
| NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.Q3_K.gguf | Q3_K | 3.28GB |
| NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.Q3_K_M.gguf | Q3_K_M | 3.28GB |
| NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.Q3_K_L.gguf | Q3_K_L | 3.56GB |
| NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.IQ4_XS.gguf | IQ4_XS | 3.67GB |
| NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.Q4_0.gguf | Q4_0 (recommended) | 3.83GB |
| NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.IQ4_NL.gguf | IQ4_NL | 3.87GB |
| NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.Q4_K_S.gguf | Q4_K_S | 3.86GB |
| NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.Q4_K.gguf | Q4_K | 4.07GB |
| NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.Q4_K_M.gguf | Q4_K_M | 4.07GB |
| NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.Q4_1.gguf | Q4_1 | 4.24GB |
| NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.Q5_0.gguf | Q5_0 | 4.65GB |
| NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.Q5_K_S.gguf | Q5_K_S | 4.65GB |
| NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.Q5_K.gguf | Q5_K | 4.78GB |
| NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.Q5_K_M.gguf | Q5_K_M | 4.78GB |
| NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.Q5_1.gguf | Q5_1 | 5.07GB |
| NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.Q6_K.gguf | Q6_K | 5.53GB |
| NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1.Q8_0.gguf | Q8_0 | 7.17GB |
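The listed sizes line up with each format's bits per weight for a roughly 7.24B-parameter Mistral model (reading "GB" as GiB). A rough back-of-the-envelope check, treating every tensor as uniformly quantized (real GGUF files keep a few tensors at higher precision, so actual sizes drift slightly):

```python
# Rough GGUF size estimate: params * bits-per-weight / 8 bytes, in GiB.
# Bits per weight include the per-block scales: e.g. Q4_0 stores
# 32 4-bit weights plus one fp16 scale per block, i.e. 144/32 = 4.5 bpw.
PARAMS = 7.24e9  # approximate parameter count of Mistral-7B

BITS_PER_WEIGHT = {
    "Q4_0": 4.5,
    "Q5_0": 5.5,
    "Q8_0": 8.5,
}

def estimate_gib(quant):
    return PARAMS * BITS_PER_WEIGHT[quant] / 8 / 2**30

for q in BITS_PER_WEIGHT:
    print(f"{q}: ~{estimate_gib(q):.2f} GiB")
```

The estimates land within a few percent of the table (e.g. Q8_0 comes out near 7.17 GiB), which is a quick way to sanity-check a download or to predict whether a given quant fits in RAM or VRAM.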

Original model description:



license: apache-2.0
tags:
  • Safetensors
  • mistral
  • text-generation-inference
  • merge
  • mistral
  • 7b
  • mistralai/Mistral-7B-Instruct-v0.1
  • athirdpath/NSFWDPONoromaid-7b
  • transformers
  • safetensors
  • mistral
  • text-generation
  • en
  • dataset:athirdpath/DPOPairs-Roleplay-Alpaca-NSFW-v2
  • dataset:athirdpath/DPOPairs-Roleplay-Alpaca-NSFW
  • license:cc-by-nc-4.0
  • autotrain_compatible
  • endpoints_compatible
  • has_space
  • text-generation-inference
  • region:us


NSFWDPONoromaid-7b-Mistral-7B-Instruct-v0.1

NSFWDPONoromaid-7b-Mistral-7B-Instruct-v0.1 is a merge of the following models:

  • mistralai/Mistral-7B-Instruct-v0.1
  • athirdpath/NSFWDPONoromaid-7b

🧩 Configuration

```yaml
slices:
  - sources:
      - model: mistralai/Mistral-7B-Instruct-v0.1
        layer_range: [0, 32]
      - model: athirdpath/NSFWDPONoromaid-7b
        layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.1
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
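The `slerp` merge method interpolates each pair of weight tensors along the arc between them rather than along a straight line, with the `t` values above setting the interpolation weight per layer group (`self_attn`, `mlp`, and a 0.5 default). A minimal NumPy sketch of the underlying operation, for intuition only (mergekit's actual implementation handles tensor shapes and edge cases differently):

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two flat weight vectors.

    t=0 returns v0, t=1 returns v1; intermediate t follows the arc
    between the (normalized) directions of v0 and v1.
    """
    # Normalize only to measure the angle between the two tensors.
    u0 = v0 / (np.linalg.norm(v0) + eps)
    u1 = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.dot(u0, u1), -1.0, 1.0)
    # Nearly parallel tensors: fall back to plain linear interpolation.
    if abs(dot) > 1.0 - 1e-6:
        return (1.0 - t) * v0 + t * v1
    omega = np.arccos(dot)  # angle between the two directions
    so = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / so) * v0 + (np.sin(t * omega) / so) * v1

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
mid = slerp(0.5, a, b)  # halfway along the quarter circle between a and b
```

Compared with plain averaging, slerp preserves the magnitude-independent "direction" of the weights, which is why it is a popular choice for merging two fine-tunes of the same base model.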

πŸ’» Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "MaziyarPanahi/NSFWDPONoromaid-7b-Mistral-7B-Instruct-v0.1"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
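When running the GGUF quants with llama.cpp instead of transformers, there is no `apply_chat_template` helper, so the prompt has to be rendered by hand. A small sketch, assuming the standard `[INST]` chat format used by Mistral-7B-Instruct-v0.1 (the helper name is illustrative):

```python
def build_mistral_prompt(messages):
    """Render a user/assistant message list in Mistral-v0.1 [INST] format.

    Assumes alternating user/assistant turns; the final user turn is
    left open so the model generates the assistant reply.
    """
    prompt = "<s>"
    for msg in messages:
        if msg["role"] == "user":
            prompt += f"[INST] {msg['content']} [/INST]"
        else:  # assistant turn: append the reply and close the sequence
            prompt += f" {msg['content']}</s>"
    return prompt

prompt = build_mistral_prompt(
    [{"role": "user", "content": "What is a large language model?"}]
)
# -> "<s>[INST] What is a large language model? [/INST]"
```

Feeding the quantized model a prompt in its expected chat format matters: instruct-tuned Mistral models tend to produce noticeably worse output when the `[INST]` wrapper is omitted.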
