πŸ“‹ Model Description


```yaml
language:
  - en
base_model:
  - unsloth/Qwen3-8B-unsloth-bnb-4bit
tags:
  - roleplay
  - nsfw
  - qwen3
  - finetuned
  - vivid prosing
  - sillytavern
  - creative
license: cc-by-nc-4.0
```

# Ion-3-8b Model Notes

This model is built on the Qwen3-8B architecture and is tuned to emphasize expressive prose, character voice, and conversational roleplay. It performs especially well in narrative and character-driven interactions, where personality, tone, and flow matter more than tightly constrained instruction following.

The model is comparatively sensitive to temperature and sampling settings. Poorly tuned parameters can lead to verbosity, tonal drift, or instability; configured close to the recommended values, however, it remains stable while retaining its expressive strengths, making it well suited to character cards, roleplay scenarios, and prose-focused chat.
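For intuition on one of the key samplers in the recommended settings: min_p keeps only tokens whose probability is at least `min_p` times that of the most likely token, pruning low-probability derailments while leaving the creative head of the distribution intact. A minimal sketch in plain Python (illustrative only, not the backend's actual implementation):

```python
def min_p_filter(probs, min_p=0.15):
    """Keep only tokens whose probability is at least min_p * max(probs),
    then renormalize the survivors into a valid distribution."""
    threshold = min_p * max(probs.values())
    kept = {tok: p for tok, p in probs.items() if p >= threshold}
    total = sum(kept.values())
    return {tok: p / total for tok, p in kept.items()}

# With min_p = 0.15 the cutoff is 0.15 * 0.50 = 0.075,
# so the 5% outlier token is removed before sampling.
probs = {"the": 0.50, "a": 0.30, "xylophone": 0.05}
filtered = min_p_filter(probs)  # keeps "the" and "a", drops "xylophone"
```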


*(Image: Ion-3-8b model visual)*

Recommended character card template:

## Character: [Name]

Personality: [...]

Situation: [What’s happening right now]

Appearance: [Physical details β€” height, features and clothing]

Relationship to {{user}}: [How {{char}} knows or relates to {{user}}]

World Rules: [Optional β€” game mechanics, special rules]
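In SillyTavern, the `{{char}}` and `{{user}}` placeholders in a card like this are macros expanded when the prompt is built. A minimal sketch of that substitution (a hypothetical helper, not SillyTavern's own code):

```python
def expand_macros(card_text, char_name, user_name):
    """Replace SillyTavern-style {{char}}/{{user}} macros in a card."""
    return card_text.replace("{{char}}", char_name).replace("{{user}}", user_name)

card = "Relationship to {{user}}: {{char}} is {{user}}'s mentor."
print(expand_macros(card, "Ion", "Alex"))
# Relationship to Alex: Ion is Alex's mentor.
```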

## Recommended SillyTavern Settings (Preset)

The JSON below reflects the settings I personally use, which have shown good stability while preserving the model’s character and prose quality.
It can be imported directly into SillyTavern as a preset and is shared purely for convenience and reproducibility, not as a requirement. Users are encouraged to tweak it to suit their own preferences.

```json
{
    "temp": 1.2,
    "temperature_last": true,
    "top_p": 0.91,
    "top_k": 25,
    "top_a": 0,
    "tfs": 1,
    "epsilon_cutoff": 0,
    "eta_cutoff": 0,
    "typical_p": 1,
    "min_p": 0.15,
    "rep_pen": 1.05,
    "rep_pen_range": 1024,
    "rep_pen_decay": 0,
    "rep_pen_slope": 1,
    "no_repeat_ngram_size": 0,
    "penalty_alpha": 0,
    "num_beams": 1,
    "length_penalty": 1,
    "min_length": 0,
    "encoder_rep_pen": 1,
    "freq_pen": 0,
    "presence_pen": 0,
    "skew": 0,
    "do_sample": false,
    "early_stopping": false,
    "dynatemp": false,
    "min_temp": 0,
    "max_temp": 2,
    "dynatemp_exponent": 1,
    "smoothing_factor": 0,
    "smoothing_curve": 1,
    "dry_allowed_length": 2,
    "dry_multiplier": 0,
    "dry_base": 1.75,
    "dry_sequence_breakers": "[\"\\n\", \":\", \"\\\"\", \"*\"]",
    "dry_penalty_last_n": 0,
    "add_bos_token": true,
    "ban_eos_token": false,
    "skip_special_tokens": true,
    "mirostat_mode": 0,
    "mirostat_tau": 5,
    "mirostat_eta": 0.1,
    "guidance_scale": 1,
    "negative_prompt": "",
    "grammar_string": "",
    "json_schema": null,
    "json_schema_allow_empty": false,
    "banned_tokens": "",
    "sampler_priority": [
        "repetition_penalty",
        "presence_penalty",
        "frequency_penalty",
        "dry",
        "temperature",
        "dynamic_temperature",
        "quadratic_sampling",
        "top_n_sigma",
        "top_k",
        "top_p",
        "typical_p",
        "epsilon_cutoff",
        "eta_cutoff",
        "tfs",
        "top_a",
        "min_p",
        "mirostat",
        "xtc",
        "encoder_repetition_penalty",
        "no_repeat_ngram"
    ],
    "samplers": [
        "penalties",
        "dry",
        "top_n_sigma",
        "top_k",
        "typ_p",
        "tfs_z",
        "typical_p",
        "xtc",
        "top_p",
        "adaptive_p",
        "min_p",
        "temperature"
    ],
    "samplers_priorities": [
        "dry",
        "penalties",
        "no_repeat_ngram",
        "temperature",
        "top_n_sigma",
        "top_p_top_k",
        "top_a",
        "min_p",
        "tfs",
        "eta_cutoff",
        "epsilon_cutoff",
        "typical_p",
        "quadratic",
        "xtc"
    ],
    "ignore_eos_token": false,
    "spaces_between_special_tokens": true,
    "speculative_ngram": false,
    "sampler_order": [
        6,
        0,
        1,
        3,
        4,
        2,
        5
    ],
    "logit_bias": [],
    "xtc_threshold": 0.1,
    "xtc_probability": 0,
    "nsigma": 0,
    "min_keep": 0,
    "extensions": {},
    "adaptive_target": -0.01,
    "adaptive_decay": 0.9,
    "rep_pen_size": 0,
    "genamt": 250,
    "max_length": 8192
}
```
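For use outside SillyTavern (e.g., running the GGUF files below via llama-cpp-python), the core sampling fields of the preset can be mapped onto generation kwargs. The key mapping here is an assumption based on llama-cpp-python's completion API; verify the names against your installed version:

```python
import json

# Map SillyTavern preset fields to llama-cpp-python sampling kwargs.
# (The right-hand names are assumptions about llama-cpp-python's API;
# check them against the version you have installed.)
PRESET_TO_LLAMA = {
    "temp": "temperature",
    "top_p": "top_p",
    "top_k": "top_k",
    "min_p": "min_p",
    "rep_pen": "repeat_penalty",
    "genamt": "max_tokens",
}

def preset_to_kwargs(preset_json):
    """Extract the supported sampling fields from a preset JSON string."""
    preset = json.loads(preset_json)
    return {dst: preset[src] for src, dst in PRESET_TO_LLAMA.items() if src in preset}

kwargs = preset_to_kwargs(
    '{"temp": 1.2, "top_p": 0.91, "top_k": 25, '
    '"min_p": 0.15, "rep_pen": 1.05, "genamt": 250}'
)
# Then, hypothetically:
# llm = llama_cpp.Llama(model_path="model_q6_k.gguf")
# out = llm("Your prompt here", **kwargs)
```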

## πŸ“‚ GGUF File List

| πŸ“ Filename | πŸ“¦ Size | ⚑ Download |
| --- | --- | --- |
| model_q6_k.gguf (Recommended · LFS · Q6) | 6.26 GB | Download |