πŸ“‹ Model Description

Quantization made by Richard Erkhov.

Github

Discord

Request more models

mergekit-ties-fhzafeq - GGUF

  • Model creator: https://huggingface.co/Hjgugugjhuhjggg/
  • Original model: https://huggingface.co/Hjgugugjhuhjggg/mergekit-ties-fhzafeq/

| Name | Quant method | Size |
| ---- | ------------ | ---- |
| mergekit-ties-fhzafeq.Q2_K.gguf | Q2_K | 1.39GB |
| mergekit-ties-fhzafeq.IQ3_XS.gguf | IQ3_XS | 1.53GB |
| mergekit-ties-fhzafeq.IQ3_S.gguf | IQ3_S | 1.59GB |
| mergekit-ties-fhzafeq.Q3_K_S.gguf | Q3_K_S | 1.59GB |
| mergekit-ties-fhzafeq.IQ3_M.gguf | IQ3_M | 1.65GB |
| mergekit-ties-fhzafeq.Q3_K.gguf | Q3_K | 1.73GB |
| mergekit-ties-fhzafeq.Q3_K_M.gguf | Q3_K_M | 1.73GB |
| mergekit-ties-fhzafeq.Q3_K_L.gguf | Q3_K_L | 1.85GB |
| mergekit-ties-fhzafeq.IQ4_XS.gguf | IQ4_XS | 1.91GB |
| mergekit-ties-fhzafeq.Q4_0.gguf | Q4_0 | 1.99GB |
| mergekit-ties-fhzafeq.IQ4_NL.gguf | IQ4_NL | 2.0GB |
| mergekit-ties-fhzafeq.Q4_K_S.gguf | Q4_K_S | 2.0GB |
| mergekit-ties-fhzafeq.Q4_K.gguf | Q4_K | 2.09GB |
| mergekit-ties-fhzafeq.Q4_K_M.gguf | Q4_K_M | 2.09GB |
| mergekit-ties-fhzafeq.Q4_1.gguf | Q4_1 | 2.18GB |
| mergekit-ties-fhzafeq.Q5_0.gguf | Q5_0 | 2.37GB |
| mergekit-ties-fhzafeq.Q5_K_S.gguf | Q5_K_S | 2.37GB |
| mergekit-ties-fhzafeq.Q5_K.gguf | Q5_K | 2.41GB |
| mergekit-ties-fhzafeq.Q5_K_M.gguf | Q5_K_M | 2.41GB |
| mergekit-ties-fhzafeq.Q5_1.gguf | Q5_1 | 2.55GB |
| mergekit-ties-fhzafeq.Q6_K.gguf | Q6_K | 2.76GB |
| mergekit-ties-fhzafeq.Q8_0.gguf | Q8_0 | 3.58GB |
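As a rough rule of thumb, the file size above is close to the minimum memory needed to load the model; the runtime also needs headroom for the KV cache and activations. The sketch below (not part of this repository; the sizes are copied from the table, and the 0.5 GB headroom is an illustrative assumption) picks the largest quant that fits a given memory budget:

```python
# File sizes in GB, copied from the quant table above.
QUANT_SIZES_GB = {
    "Q2_K": 1.39, "IQ3_XS": 1.53, "IQ3_S": 1.59, "Q3_K_S": 1.59,
    "IQ3_M": 1.65, "Q3_K": 1.73, "Q3_K_M": 1.73, "Q3_K_L": 1.85,
    "IQ4_XS": 1.91, "Q4_0": 1.99, "IQ4_NL": 2.0, "Q4_K_S": 2.0,
    "Q4_K": 2.09, "Q4_K_M": 2.09, "Q4_1": 2.18, "Q5_0": 2.37,
    "Q5_K_S": 2.37, "Q5_K": 2.41, "Q5_K_M": 2.41, "Q5_1": 2.55,
    "Q6_K": 2.76, "Q8_0": 3.58,
}

def best_quant(budget_gb, headroom_gb=0.5):
    """Largest quant whose file fits in budget_gb minus headroom
    (a rough allowance for KV cache and runtime overhead)."""
    usable = budget_gb - headroom_gb
    fitting = {q: s for q, s in QUANT_SIZES_GB.items() if s <= usable}
    return max(fitting, key=fitting.get) if fitting else None
```

For example, a 4.5 GB budget leaves room for Q8_0, while a 2.4 GB budget drops down to Q3_K_L.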

Original model description:



```yaml
base_model:
  - Hjgugugjhuhjggg/mergekit-ties-pghuyfi
  - Hjgugugjhuhjggg/mergekit-ties-dkhnzcn
  - Hjgugugjhuhjggg/mergekit-ties-kmlzhzo
  - huihui-ai/Llama-3.2-3B-Instruct-abliterated
  - Hjgugugjhuhjggg/mergekit-ties-qgcitfu
  - Hjgugugjhuhjggg/mergekit-ties-xflmond
  - Hjgugugjhuhjggg/mergekit-ties-poovzrh
library_name: transformers
tags:
  - mergekit
  - merge
```

merge

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the TIES merge method, with huihui-ai/Llama-3.2-3B-Instruct-abliterated as the base.
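At a high level, TIES works on each model's "task vector" (its delta from the base): it trims each vector to its largest-magnitude entries (the `density` parameter), elects a per-parameter sign by total mass, and averages only the entries that agree with that sign. A minimal pure-Python sketch of those three steps on toy 1-D vectors (illustrative only, not mergekit's implementation; `ties_merge` is a hypothetical name):

```python
def ties_merge(deltas, density=0.5, weights=None):
    """Merge equal-length task vectors with the TIES steps:
    trim -> elect sign -> disjoint (sign-agreeing) mean."""
    n = len(deltas[0])
    weights = weights or [1.0] * len(deltas)
    k = max(1, int(round(density * n)))  # entries kept per vector

    # 1) Trim: zero out all but the top-k entries by magnitude.
    trimmed = []
    for d in deltas:
        thresh = sorted((abs(x) for x in d), reverse=True)[k - 1]
        trimmed.append([x if abs(x) >= thresh else 0.0 for x in d])

    merged = []
    for i in range(n):
        col = [w * t[i] for w, t in zip(weights, trimmed)]
        # 2) Elect sign: the dominant sign by total mass wins.
        sign = 1.0 if sum(col) >= 0 else -1.0
        # 3) Disjoint merge: average only sign-agreeing entries.
        agree = [c for c in col if c * sign > 0]
        merged.append(sum(agree) / len(agree) if agree else 0.0)
    return merged
```

Note how a disagreeing entry (e.g. `-2.0` against `+2.0`) is excluded from the mean rather than cancelling it, which is the point of the sign-election step.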

Models Merged

The following models were included in the merge:

  • Hjgugugjhuhjggg/mergekit-ties-pghuyfi
  • Hjgugugjhuhjggg/mergekit-ties-dkhnzcn
  • Hjgugugjhuhjggg/mergekit-ties-kmlzhzo
  • Hjgugugjhuhjggg/mergekit-ties-qgcitfu
  • Hjgugugjhuhjggg/mergekit-ties-xflmond
  • Hjgugugjhuhjggg/mergekit-ties-poovzrh

Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Hjgugugjhuhjggg/mergekit-ties-qgcitfu
    parameters:
      density: 0.5
      weight: 0.5
  - model: Hjgugugjhuhjggg/mergekit-ties-dkhnzcn
    parameters:
      density: 0.5
      weight: 0.5
  - model: Hjgugugjhuhjggg/mergekit-ties-poovzrh
    parameters:
      density: 0.5
      weight: 0.5
  - model: Hjgugugjhuhjggg/mergekit-ties-pghuyfi
    parameters:
      density: 0.5
      weight: 0.5
  - model: Hjgugugjhuhjggg/mergekit-ties-kmlzhzo
    parameters:
      density: 0.5
      weight: 0.5
  - model: Hjgugugjhuhjggg/mergekit-ties-xflmond
    parameters:
      density: 0.5
      weight: 0.5

merge_method: ties
base_model: huihui-ai/Llama-3.2-3B-Instruct-abliterated
parameters:
  normalize: false
  int8_mask: true
dtype: float16
```
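One detail worth flagging in this config: `normalize: false` means the weighted contributions are summed as-is rather than rescaled by the total weight, so six models at `weight: 0.5` contribute with a combined weight of 3.0. A toy illustration (my reading of the option, not mergekit code; `combine` is a hypothetical name):

```python
def combine(deltas, weights, normalize):
    """Weighted sum of per-model deltas for one parameter; if
    normalize is true, divide by the total weight."""
    total = sum(w * d for w, d in zip(weights, deltas))
    return total / sum(weights) if normalize else total

# Six models at weight 0.5 each, all contributing a delta of 1.0:
deltas = [1.0] * 6
weights = [0.5] * 6
# With normalize=False the summed delta is 3.0; normalizing
# would rescale it back to 1.0.
```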
