Model Description

Quantization made by Richard Erkhov.

Github

Discord

Request more models

flammen6-mistral-7B - GGUF

  • Model creator: https://huggingface.co/flammenai/
  • Original model: https://huggingface.co/flammenai/flammen6-mistral-7B/

| Name | Quant method | Size |
|------|--------------|------|
| flammen6-mistral-7B.Q2_K.gguf | Q2_K | 2.53GB |
| flammen6-mistral-7B.IQ3_XS.gguf | IQ3_XS | 2.81GB |
| flammen6-mistral-7B.IQ3_S.gguf | IQ3_S | 2.96GB |
| flammen6-mistral-7B.Q3_K_S.gguf | Q3_K_S | 2.95GB |
| flammen6-mistral-7B.IQ3_M.gguf | IQ3_M | 3.06GB |
| flammen6-mistral-7B.Q3_K.gguf | Q3_K | 3.28GB |
| flammen6-mistral-7B.Q3_K_M.gguf | Q3_K_M | 3.28GB |
| flammen6-mistral-7B.Q3_K_L.gguf | Q3_K_L | 3.56GB |
| flammen6-mistral-7B.IQ4_XS.gguf | IQ4_XS | 3.67GB |
| flammen6-mistral-7B.Q4_0.gguf | Q4_0 | 3.83GB |
| flammen6-mistral-7B.IQ4_NL.gguf | IQ4_NL | 3.87GB |
| flammen6-mistral-7B.Q4_K_S.gguf | Q4_K_S | 3.86GB |
| flammen6-mistral-7B.Q4_K.gguf | Q4_K | 4.07GB |
| flammen6-mistral-7B.Q4_K_M.gguf | Q4_K_M | 4.07GB |
| flammen6-mistral-7B.Q4_1.gguf | Q4_1 | 4.24GB |
| flammen6-mistral-7B.Q5_0.gguf | Q5_0 | 4.65GB |
| flammen6-mistral-7B.Q5_K_S.gguf | Q5_K_S | 4.65GB |
| flammen6-mistral-7B.Q5_K.gguf | Q5_K | 4.78GB |
| flammen6-mistral-7B.Q5_K_M.gguf | Q5_K_M | 4.78GB |
| flammen6-mistral-7B.Q5_1.gguf | Q5_1 | 5.07GB |
| flammen6-mistral-7B.Q6_K.gguf | Q6_K | 5.53GB |
| flammen6-mistral-7B.Q8_0.gguf | Q8_0 | 7.17GB |
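The quant names map directly to effective bits per weight: dividing file size by parameter count recovers roughly the number in the quant name. A quick sketch of that sanity check, assuming decimal gigabytes and roughly 7.24B parameters for a Mistral-7B model (both are assumptions, not figures from this table):

```python
# Rough bits-per-weight estimate from GGUF file size.
# PARAMS is an assumed approximate parameter count for Mistral-7B.
PARAMS = 7.24e9

def bits_per_weight(size_gb: float) -> float:
    """Convert a file size in decimal GB to average bits per weight."""
    return size_gb * 1e9 * 8 / PARAMS

# Q4_0 at 3.83GB works out to a little over 4 bits per weight,
# while Q8_0 at 7.17GB lands near 8 bits per weight.
print(f"Q4_0: {bits_per_weight(3.83):.2f} bpw")
print(f"Q8_0: {bits_per_weight(7.17):.2f} bpw")
```

The overhead above the nominal bit count comes from per-block scale factors stored alongside the quantized weights.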

Original model description:

```yaml
license: apache-2.0
base_model:
  - NousResearch/Nous-Hermes-2-Mistral-7B-DPO
  - nbeerbower/flammen5-mistral-7B
library_name: transformers
tags:
  - mergekit
  - merge
```

flammen6-mistral-7B

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the SLERP merge method.
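SLERP (spherical linear interpolation) blends two weight tensors along the arc between them rather than along a straight line, which preserves the norm of the interpolated weights better than plain averaging. A minimal sketch of the interpolation itself (not mergekit's actual implementation):

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherically interpolate between flattened weight vectors v0 and v1."""
    # Angle between the two vectors, computed from normalized copies.
    v0n = v0 / np.linalg.norm(v0)
    v1n = v1 / np.linalg.norm(v1)
    dot = np.clip(np.dot(v0n, v1n), -1.0, 1.0)
    theta = np.arccos(dot)
    if abs(theta) < eps:
        # Nearly parallel vectors: fall back to linear interpolation.
        return (1 - t) * v0 + t * v1
    sin_theta = np.sin(theta)
    # Standard SLERP formula: weights sum along the great-circle arc.
    return (np.sin((1 - t) * theta) / sin_theta) * v0 \
         + (np.sin(t * theta) / sin_theta) * v1
```

At `t = 0` this returns the first model's weights, at `t = 1` the second's; the `t` gradients in the configuration below choose a different blend point per layer.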

Models Merged

The following models were included in the merge:

  • NousResearch/Nous-Hermes-2-Mistral-7B-DPO
  • nbeerbower/flammen5-mistral-7B

Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: nbeerbower/flammen5-mistral-7B
    layer_range: [0, 32]
  - model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
    layer_range: [0, 32]
merge_method: slerp
base_model: nbeerbower/flammen5-mistral-7B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
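Each `t` list above is a gradient: the anchor values are spread across the 32-layer range and expanded into one interpolation factor per layer, so self-attention weights lean toward flammen5 in early layers and toward Nous-Hermes-2 in late layers, while the MLP gradient runs the opposite way. A rough sketch of that expansion, assuming linear interpolation between evenly spaced anchors (an assumption about mergekit's behavior, not its actual code):

```python
import numpy as np

def layer_t(anchors: list[float], num_layers: int) -> np.ndarray:
    """Expand a short list of anchor values into one t per layer."""
    # Place anchors evenly across layer indices 0..num_layers-1,
    # then linearly interpolate a value for every layer in between.
    xs = np.linspace(0, num_layers - 1, num=len(anchors))
    return np.interp(np.arange(num_layers), xs, anchors)

# Per-layer blend factors implied by the configuration above.
self_attn_t = layer_t([0, 0.5, 0.3, 0.7, 1], 32)
mlp_t = layer_t([1, 0.5, 0.7, 0.3, 0], 32)
```

The scalar `value: 0.5` is the default applied to any tensor not matched by the `self_attn` or `mlp` filters.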
