Model Description
base_model:
- TheBloke/Llama-2-13B-fp16
tags:
- mergekit
- merge
This is the GGUF version of Estopia. It is recommended for use with Koboldcpp, an easy-to-use and very versatile GGUF-compatible program. With Koboldcpp you can instruct, write and co-write with this model in the instruct and story-writing modes. It is compatible with your character cards in the KoboldAI Lite UI and has wide API support for all popular frontends.
Introduction
- Estopia is a model focused on improving the dialogue and prose returned when using the instruct format. As a side benefit, character cards and similar seem to have also improved, remembering details well in many cases.
- It focuses on "guided narratives" - using instructions to guide or explore fictional stories, where you act as a guide for the AI to narrate and fill in the details.
- It has primarily been tested around prose, using instructions to guide the narrative, detail retention and "neutrality" - in particular with regard to plot armour. Unless you define different rules for your adventure / narrative with instructions, it should be realistic in the responses provided.
- It has been tested using different modes, such as instruct, chat, adventure and story modes - and should be able to do them all to a degree, with its strengths being instruct and adventure, and story a close second.
Usage
- The Estopia model has been tested primarily using the Alpaca format but, given the range of models included, it likely has some understanding of others. Some examples of tested formats are below:
- \n### Instruction:\nWhat colour is the sky?\n### Response:\nThe sky is...
- <Story text>\n\nWrite a summary of the text above\n\nThe story starts by...
- Using the Kobold Lite AI adventure mode
- User:Hello there!\nAssistant:Good morning...\n
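The Alpaca template above is just string assembly, so a minimal helper can build it; the function name below is illustrative, not part of any library:

```python
def alpaca_prompt(instruction: str, response_start: str = "") -> str:
    """Assemble a prompt in the Alpaca format shown above."""
    return f"\n### Instruction:\n{instruction}\n### Response:\n{response_start}"

prompt = alpaca_prompt("What colour is the sky?")
```

The model then continues generating from the end of the `### Response:` line.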
- For settings, the following are recommended for general use:
- ##||$||---||$||ASSISTANT:||$||[End||$||</s> - A single string for Kobold Lite combining the ones below
- ##
- ---
- ASSISTANT:
- [End
- </s>
- The settings above should provide a generally good experience, balancing instruction following and creativity. Generally, the higher you set the temperature, the greater the creativity - and the higher the chance of logical errors in the AI's responses.
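If your frontend does not apply stop sequences server-side, one simple client-side approach is to truncate the raw output at the earliest occurrence of any of the sequences listed above. A sketch (illustrative, not part of any frontend's API):

```python
STOP_SEQUENCES = ["##", "---", "ASSISTANT:", "[End", "</s>"]

def truncate_at_stop(text: str, stops=STOP_SEQUENCES) -> str:
    """Cut raw model output at the earliest occurrence of any stop sequence."""
    cut = len(text)
    for stop in stops:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]
```

Kobold Lite accepts the combined `||$||`-separated string directly, so this is only needed for custom integrations.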
Recipe
This model was made in three stages, along with many experimental stages which will be skipped for brevity. The first was internally referred to as EstopiaV9, which has a high degree of instruction following and creativity in responses, though they were generally shorter and a little more restricted in the scope of outputs, but conveyed nuance better.

merge_method: task_arithmetic
base_model: TheBloke/Llama-2-13B-fp16
models:
  - model: TheBloke/Llama-2-13B-fp16
  - model: Undi95/UtopiaXL-13B
    parameters:
      weight: 1.0
  - model: Doctor-Shotgun/cat-v1.0-13b
    parameters:
      weight: 0.02
  - model: PygmalionAI/mythalion-13b
    parameters:
      weight: 0.10
  - model: Undi95/Emerhyst-13B
    parameters:
      weight: 0.05
  - model: CalderaAI/13B-Thorns-l2
    parameters:
      weight: 0.05
  - model: KoboldAI/LLaMA2-13B-Tiefighter
    parameters:
      weight: 0.20
dtype: float16
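Conceptually, task-arithmetic merging computes merged = base + Σᵢ wᵢ·(modelᵢ − base) for every parameter, where each (modelᵢ − base) is a "task vector". A toy sketch with plain Python lists standing in for weight tensors (illustrative only, not mergekit's implementation):

```python
def task_arithmetic(base, finetuned_models, weights):
    """merged = base + sum_i w_i * (model_i - base), element-wise."""
    merged = list(base)
    for model, w in zip(finetuned_models, weights):
        for j, (p, b) in enumerate(zip(model, base)):
            merged[j] += w * (p - b)
    return merged

base = [1.0, 2.0]
m1 = [2.0, 2.0]   # task vector (1, 0)
m2 = [1.0, 4.0]   # task vector (0, 2)
merged = task_arithmetic(base, [m1, m2], [1.0, 0.5])
```

This is why the base model itself appears with no weight in the recipe: it is the reference point the deltas are taken against.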
The second part of the merge was known as EstopiaV13. This produced responses which were long, but tended to write beyond good stopping points where further instructions could be added, as it leant heavily on novel-style prose. It did, however, benefit from a greater degree of neutrality as described above, and retained many of the detail-tracking abilities of V9.
merge_method: task_arithmetic
base_model: TheBloke/Llama-2-13B-fp16
models:
  - model: TheBloke/Llama-2-13B-fp16
  - model: Undi95/UtopiaXL-13B
    parameters:
      weight: 1.0
  - model: Doctor-Shotgun/cat-v1.0-13b
    parameters:
      weight: 0.01
  - model: chargoddard/rpguild-chatml-13b
    parameters:
      weight: 0.02
  - model: PygmalionAI/mythalion-13b
    parameters:
      weight: 0.08
  - model: CalderaAI/13B-Thorns-l2
    parameters:
      weight: 0.02
  - model: KoboldAI/LLaMA2-13B-Tiefighter
    parameters:
      weight: 0.20
dtype: float16
The third step was a merge between the two to retain the benefits of both as much as possible. This was performed using the DARE merging technique (dare_ties).
# task-arithmetic style
models:
  - model: EstopiaV9
    parameters:
      weight: 1
      density: 1
  - model: EstopiaV13
    parameters:
      weight: 0.05
      density: 0.30
merge_method: dare_ties
base_model: TheBloke/Llama-2-13B-fp16
parameters:
  int8_mask: true
dtype: bfloat16
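The `density` parameter above comes from DARE, which sparsifies each task vector by randomly dropping parameters and rescaling the survivors by 1/density, so the expected value of each delta is unchanged; TIES-style sign resolution then combines the sparsified vectors. A toy, seeded sketch of just the drop-and-rescale step (illustrative, not mergekit's implementation):

```python
import random

def dare_sparsify(delta, density, seed=0):
    """Keep each element with probability `density`, rescaled by 1/density;
    zero out the rest. Preserves the expected value of every element."""
    rng = random.Random(seed)
    return [d / density if rng.random() < density else 0.0 for d in delta]

delta = [0.4, -0.2, 0.1, 0.3]
sparse = dare_sparsify(delta, density=0.30)
```

With density 0.30 as in the EstopiaV13 entry above, roughly 70% of that model's delta is dropped per merge, which keeps its influence light but unbiased.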
Model selection
- Undi95/UtopiaXL-13B
- Doctor-Shotgun/cat-v1.0-13b
- PygmalionAI/mythalion-13b
- Undi95/Emerhyst-13B
- CalderaAI/13B-Thorns-l2
- KoboldAI/LLaMA2-13B-Tiefighter
- chargoddard/rpguild-chatml-13b
Notes
- With the differing models inside, this model will not have perfect end-of-sequence tokens, a problem many merges share. While attempts have been made to minimise this, you may occasionally get oddly behaving tokens; this should be resolvable with a single quick manual edit, after which the model should pick up on it.
- Chat is one of the least tested areas for this model. It works fairly well, but it can be quite dependent on the character card.
- This is a narrative and prose focused model. As a result, it can and will talk for you if guided to do so (such as asking it to act as a co-author or narrator) within instructions or other contexts. This can be mitigated mostly by adding instructions to limit this, or using chat mode instead.
Future areas
- Llava
- Stheno
- DynamicFactor
GGUF File List
| Filename | Quant | Size |
|---|---|---|
| LLaMA2-13B-Estopia.F16.gguf | FP16 | 24.25 GB |
| LLaMA2-13B-Estopia.Q2_K.gguf | Q2 | 5.06 GB |
| LLaMA2-13B-Estopia.Q3_K_L.gguf | Q3 | 6.45 GB |
| LLaMA2-13B-Estopia.Q3_K_M.gguf | Q3 | 5.9 GB |
| LLaMA2-13B-Estopia.Q3_K_S.gguf | Q3 | 5.27 GB |
| LLaMA2-13B-Estopia.Q4_0.gguf (Recommended) | Q4 | 6.86 GB |
| LLaMA2-13B-Estopia.Q4_1.gguf | Q4 | 7.61 GB |
| LLaMA2-13B-Estopia.Q4_K_M.gguf | Q4 | 7.33 GB |
| LLaMA2-13B-Estopia.Q4_K_S.gguf | Q4 | 6.91 GB |
| LLaMA2-13B-Estopia.Q5_0.gguf | Q5 | 8.36 GB |
| LLaMA2-13B-Estopia.Q5_1.gguf | Q5 | 9.1 GB |
| LLaMA2-13B-Estopia.Q5_K_M.gguf | Q5 | 8.6 GB |
| LLaMA2-13B-Estopia.Q5_K_S.gguf | Q5 | 8.36 GB |
| LLaMA2-13B-Estopia.Q6_K.gguf | Q6 | 9.95 GB |
| LLaMA2-13B-Estopia.Q8_0.gguf | Q8 | 12.88 GB |