## πŸ“‹ Model Description


---
license: mit
base_model:
  - zerofata/MS3.2-PaintedFantasy-v4.1-24B
---

























# Painted Fantasy v4.1

Magistral Small 2509 24B











## Overview






This is an uncensored model intended to excel at creative, character-driven RP / ERP.


Right after releasing v4 I noticed a bunch of repetition. Go figure. v4.1 is my first stab at actively tailoring the dataset to weed this out. Compared to v4, the only difference is heavy filtering and rewriting of assistant messages identified as repetitive.


Repetition isn't fixed, but it's improved. The model still likes patterns, but at least seems capable of occasionally breaking them on its own.














## SillyTavern Settings





### Recommended Roleplay Format

> Actions: In plaintext
>
> Dialogue: "In quotes"
>
> Thoughts: In asterisks





### Recommended Samplers

> Temp: 0.8
>
> MinP: 0.05 - 0.075
>
> TopP: 0.95 - 1.00
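If you run the GGUF outside SillyTavern, the recommended samplers map directly onto llama-cpp-python's generation arguments. A minimal sketch, assuming llama-cpp-python and a locally downloaded quant (the model path is a placeholder; pick one value from each recommended band):

```python
# Recommended sampler settings for this model, expressed as keyword
# arguments for llama-cpp-python's chat completion API. The specific
# min_p / top_p values are one choice within the recommended ranges.
SAMPLERS = {
    "temperature": 0.8,
    "min_p": 0.05,   # recommended band: 0.05 - 0.075
    "top_p": 0.95,   # recommended band: 0.95 - 1.00
}

# Usage (requires llama-cpp-python and a downloaded GGUF file):
# from llama_cpp import Llama
# llm = Llama(model_path="MS3.2-PaintedFantasy-v4.1-24B-Q4_K_M.gguf")
# out = llm.create_chat_completion(messages=[...], **SAMPLERS)
```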





### Instruct

Mistral v7 Tekken
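For reference, a minimal sketch of how a v7 Tekken prompt is laid out, assuming the [SYSTEM_PROMPT]/[INST] tag structure used by recent Mistral templates; in practice, just select the Mistral V7 instruct template in SillyTavern rather than building prompts by hand:

```python
# Illustrative only: builds a single-turn prompt in the Mistral v7
# Tekken layout (BOS token, tagged system prompt, tagged user turn).
def format_v7_tekken(system: str, user: str) -> str:
    return (
        "<s>"
        f"[SYSTEM_PROMPT]{system}[/SYSTEM_PROMPT]"
        f"[INST]{user}[/INST]"
    )

prompt = format_v7_tekken("You are a roleplay partner.", "Hello!")
```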















## Quantizations














## Creation Process





Creation Process: SFT > DPO


SFT on approximately 25 million tokens (17.5 million trainable). Datasets included SFW / NSFW RP, stories, NSFW Reddit writing prompts, and creative instruct & chat data.


90% of the dataset is without thinking; the remaining 10% includes thinking, using [THINK][/THINK] tags.
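Assuming the tag layout follows Magistral's reasoning format, a thinking-enabled assistant message looks roughly like this (content is illustrative):

```
[THINK]She's nervous; keep the reply short and gentle.[/THINK]
"It's alright," he says quietly.
```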


All RP data and synthetic stories were rewritten with GLM 4.7, using hand-edited examples as guidelines to improve the responses. Rewritten responses were discarded if they failed to reduce the slop score for the message. This reduced slop by about 25% for each RP / story dataset and made the model noticeably more creative with some of its descriptions.
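The "keep the rewrite only if it lowers the slop score" gate can be sketched as below. The phrase list and scorer are illustrative stand-ins; the actual slop metric used during dataset creation is not published here.

```python
# Hypothetical slop scorer: counts occurrences of known overused phrases.
SLOP_PHRASES = ["shivers down", "barely above a whisper", "ministrations"]

def slop_score(text: str) -> int:
    t = text.lower()
    return sum(t.count(p) for p in SLOP_PHRASES)

def accept_rewrite(original: str, rewritten: str) -> str:
    # Keep the rewrite only if it strictly reduces the slop score;
    # otherwise fall back to the original message.
    return rewritten if slop_score(rewritten) < slop_score(original) else original
```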


Assistant messages in RP conversations were checked for repetition via embeddings and word-frequency analysis across multi-turn conversations. Specific messages were rewritten, and conversations that still showed high repetition were filtered out.
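The word-frequency side of that check can be sketched as follows; the real pipeline also used embedding similarity, and the tokenizer and threshold here are illustrative choices, not the actual values used.

```python
from collections import Counter

def overlap(a: str, b: str) -> float:
    # Fraction of shared word occurrences between two turns,
    # normalized by the longer turn.
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    shared = sum((ca & cb).values())
    total = max(sum(ca.values()), sum(cb.values()))
    return shared / total if total else 0.0

def flag_repetitive(turns: list[str], threshold: float = 0.5) -> list[int]:
    # Flag each assistant turn that heavily overlaps any earlier turn.
    return [
        i for i, t in enumerate(turns)
        if any(overlap(t, prev) >= threshold for prev in turns[:i])
    ]
```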


DPO was expanded to include non-creative datasets. My usual RP DPO dataset (also rewritten) was included, along with cybersecurity data and two partial subsets of general assistant / chat preference datasets to help stabilize the model. This worked pretty well: while creativity took a small hit, enough remained that the improved logic resulted in a notably better model (IMO).


Using embeddings, DPO samples where the chosen response showed higher similarity to the conversation than the rejected one were removed, to ensure DPO doesn't encourage repetition.
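That filter reduces to a single cosine-similarity comparison per preference pair. A minimal sketch, taking embeddings as precomputed vectors (any sentence-embedding model could produce them; the exact model used is not specified here):

```python
import math

def cosine(u, v):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def keep_pair(conv_emb, chosen_emb, rejected_emb) -> bool:
    # Drop the pair when the chosen response is closer to the prior
    # conversation than the rejected one, so DPO cannot reward repetition.
    return cosine(conv_emb, chosen_emb) <= cosine(conv_emb, rejected_emb)
```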










## πŸ“‚ GGUF File List

πŸ“ Filename πŸ“¦ Size ⚑ Download
MS3.2-PaintedFantasy-v4.1-24B-IQ3_M.gguf
LFS Q3
9.92 GB Download
MS3.2-PaintedFantasy-v4.1-24B-IQ4_XS.gguf
LFS Q4
11.88 GB Download
MS3.2-PaintedFantasy-v4.1-24B-Q3_K_M.gguf
LFS Q3
10.69 GB Download
MS3.2-PaintedFantasy-v4.1-24B-Q4_K_L.gguf
LFS Q4
13.81 GB Download
MS3.2-PaintedFantasy-v4.1-24B-Q4_K_M.gguf
Recommended LFS Q4
13.35 GB Download
MS3.2-PaintedFantasy-v4.1-24B-Q5_K_M.gguf
LFS Q5
15.61 GB Download
MS3.2-PaintedFantasy-v4.1-24B-Q6_K.gguf
LFS Q6
18.02 GB Download
MS3.2-PaintedFantasy-v4.1-24B-Q8_0.gguf
LFS Q8
23.33 GB Download
MS3.2-PaintedFantasy-v4.1-24B-bf16.gguf
LFS FP16
43.92 GB Download