# Model Description

Quantization made by Richard Erkhov.

## flammen6-mistral-7B - GGUF

- Model creator: https://huggingface.co/flammenai/
- Original model: https://huggingface.co/flammenai/flammen6-mistral-7B/
| Name | Quant method | Size |
|---|---|---|
| flammen6-mistral-7B.Q2_K.gguf | Q2_K | 2.53GB |
| flammen6-mistral-7B.IQ3_XS.gguf | IQ3_XS | 2.81GB |
| flammen6-mistral-7B.IQ3_S.gguf | IQ3_S | 2.96GB |
| flammen6-mistral-7B.Q3_K_S.gguf | Q3_K_S | 2.95GB |
| flammen6-mistral-7B.IQ3_M.gguf | IQ3_M | 3.06GB |
| flammen6-mistral-7B.Q3_K.gguf | Q3_K | 3.28GB |
| flammen6-mistral-7B.Q3_K_M.gguf | Q3_K_M | 3.28GB |
| flammen6-mistral-7B.Q3_K_L.gguf | Q3_K_L | 3.56GB |
| flammen6-mistral-7B.IQ4_XS.gguf | IQ4_XS | 3.67GB |
| flammen6-mistral-7B.Q4_0.gguf | Q4_0 | 3.83GB |
| flammen6-mistral-7B.IQ4_NL.gguf | IQ4_NL | 3.87GB |
| flammen6-mistral-7B.Q4_K_S.gguf | Q4_K_S | 3.86GB |
| flammen6-mistral-7B.Q4_K.gguf | Q4_K | 4.07GB |
| flammen6-mistral-7B.Q4_K_M.gguf | Q4_K_M | 4.07GB |
| flammen6-mistral-7B.Q4_1.gguf | Q4_1 | 4.24GB |
| flammen6-mistral-7B.Q5_0.gguf | Q5_0 | 4.65GB |
| flammen6-mistral-7B.Q5_K_S.gguf | Q5_K_S | 4.65GB |
| flammen6-mistral-7B.Q5_K.gguf | Q5_K | 4.78GB |
| flammen6-mistral-7B.Q5_K_M.gguf | Q5_K_M | 4.78GB |
| flammen6-mistral-7B.Q5_1.gguf | Q5_1 | 5.07GB |
| flammen6-mistral-7B.Q6_K.gguf | Q6_K | 5.53GB |
| flammen6-mistral-7B.Q8_0.gguf | Q8_0 | 7.17GB |
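As a rough rule of thumb, pick the largest quant whose file fits in your available RAM/VRAM with some headroom left for the KV cache and runtime overhead. A minimal sketch of that selection, using the sizes from the table above (the `best_quant` helper and the fixed 1 GB headroom are illustrative, not part of any tool):

```python
# File sizes (GB) from the quant table above.
QUANT_SIZES_GB = {
    "Q2_K": 2.53, "IQ3_XS": 2.81, "Q3_K_S": 2.95, "IQ3_S": 2.96,
    "IQ3_M": 3.06, "Q3_K_M": 3.28, "Q3_K_L": 3.56, "IQ4_XS": 3.67,
    "Q4_0": 3.83, "Q4_K_S": 3.86, "IQ4_NL": 3.87, "Q4_K_M": 4.07,
    "Q4_1": 4.24, "Q5_0": 4.65, "Q5_K_S": 4.65, "Q5_K_M": 4.78,
    "Q5_1": 5.07, "Q6_K": 5.53, "Q8_0": 7.17,
}

def best_quant(budget_gb, headroom_gb=1.0):
    """Return the largest quant that fits the memory budget minus headroom,
    or None if nothing fits."""
    usable = budget_gb - headroom_gb
    fitting = {q: s for q, s in QUANT_SIZES_GB.items() if s <= usable}
    return max(fitting, key=fitting.get) if fitting else None
```

For example, `best_quant(8.0)` leaves 7 GB usable and selects `Q6_K` (5.53 GB), since `Q8_0` (7.17 GB) no longer fits.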
## Original model description

```yaml
license: apache-2.0
base_model:
  - NousResearch/Nous-Hermes-2-Mistral-7B-DPO
  - nbeerbower/flammen5-mistral-7B
library_name: transformers
tags:
  - mergekit
  - merge
```
# flammen6-mistral-7B

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the SLERP merge method.

### Models Merged

The following models were included in the merge:
- NousResearch/Nous-Hermes-2-Mistral-7B-DPO
- nbeerbower/flammen5-mistral-7B

### Configuration

The following YAML configuration was used to produce this model:
```yaml
models:
  - model: nbeerbower/flammen5-mistral-7B
    layer_range: [0, 32]
  - model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
    layer_range: [0, 32]
merge_method: slerp
base_model: nbeerbower/flammen5-mistral-7B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
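SLERP blends each pair of weight tensors along the great circle between them rather than averaging linearly, which preserves the magnitude of the interpolated weights; the `t` value lists in the config are anchor points that mergekit spreads across the layers, so self-attention and MLP weights blend in opposite directions. A minimal sketch of the interpolation itself (plain-Python illustration, not mergekit's actual implementation):

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight vectors (as lists).
    t=0 returns v0, t=1 returns v1."""
    dot = sum(a * b for a, b in zip(v0, v1))
    n0 = math.sqrt(sum(a * a for a in v0))
    n1 = math.sqrt(sum(b * b for b in v1))
    # Angle between the two vectors, clamped for numerical safety.
    omega = math.acos(max(-1.0, min(1.0, dot / (n0 * n1))))
    if omega < eps:  # nearly parallel: fall back to linear interpolation
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s = math.sin(omega)
    w0 = math.sin((1 - t) * omega) / s
    w1 = math.sin(t * omega) / s
    return [w0 * a + w1 * b for a, b in zip(v0, v1)]
```

At `t=0.5` between two orthogonal unit vectors, this yields a vector of unit length, whereas a plain average would shrink it to length 1/√2.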