Model Description
Tags: en, zh, GLM 4.7 Flash, thinking, reasoning, NEO Imatrix, MAX Quants, 16-bit precision output tensor, zai-org/GLM-4.7-Flash
(Latest llama.cpp commit 7789, with corrected quants.)

GLM-4.7-Flash-NEO-CODE-Imatrix-MAX-GGUF
Specialized and enhanced GGUF quants for the new GLM-4.7-Flash, a 30B-A3B MoE (mixture-of-experts) model.
[ https://huggingface.co/zai-org/GLM-4.7-Flash ]
This model can be run on GPU(s) and/or CPU because only 4 experts are activated per token (approx. 2B active parameters).
Default Settings (Most Tasks)
temperature: 1.0
top-p: 0.95
max new tokens: 131072
repetition penalty: 1.0 (off); raise to 1.1 if you get repetition issues
You might also try the GLM 4.6 settings (per Unsloth):
temperature = 0.8
top_p = 0.6 (recommended)
top_k = 2 (recommended)
max_generate_tokens = 16,384
That being said, I suggest a minimum context of 8K-16K, as final outputs (post-thinking) can be long and detailed; in a number of cases the model has been observed "polishing" the final output one or more times WITHIN the output section.
(Model can handle 200k context, non-roped.)
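As a concrete starting point, the default settings above map onto llama.cpp's `llama-cli` flags roughly as sketched below. The model filename and prompt are placeholders, and flag spellings occasionally change between builds, so check `llama-cli --help` on your build:

```shell
# Sketch: run a quant with this card's default sampler settings.
# -ngl 99 offloads all layers to GPU; lower it (or use 0) for CPU-only runs.
./llama-cli \
  -m GLM-4.7-Flash-NEO-CODE-MAX-imat-D_AU-Q4_K_M.gguf \
  -c 16384 \
  --temp 1.0 \
  --top-p 0.95 \
  --repeat-penalty 1.0 \
  -ngl 99 \
  -p "Write a short story about a lighthouse."
# Per this card, leave flash attention disabled (do not pass -fa)
# until the current slow-generation issue is fixed.
```

Raise `--repeat-penalty` to 1.1 only if you actually see repetition, per the settings above.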
Quants General:
Quants and imatrices were computed using the latest llama.cpp (commit 7789, Jan 21 2026), which contains specific fixes for this model.
Quants made prior to this commit (as well as earlier imatrix generation) performed poorly; re-quantization and re-generation of the imatrix were required.
Also note there are currently issues with flash attention causing low token-generation speed (flash attention is offloaded to the CPU in some cases). Disable flash attention until this issue is resolved and the fix makes its way through the llama.cpp / AI pipeline.
UNCENSORED QUANTS:
https://huggingface.co/DavidAU/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF
Specialized Quants
Specialized quants (IQ4_NL, Q5_1, Q4_1, Q8_0) are precision-balanced to address specific tensor issues, present in all layers, that require a specific quant type.
Other "normal" quants will also perform very well.
Quant Enhancements:
The imatrix is built from NEO and Code datasets by DavidAU: a dual imatrix (two imatrices generated separately) to improve model performance.
All quants (specialized and "normal") are also enhanced with 16 bit (full) precision "output tensor" to further improve model performance.
The output tensor affects roughly 10-20% of the fine output of the model, in both thinking and final output generation.
Using an "uncensored" (refusals-removed) model vs. a model trained on uncensored content

Usually, when you tell a model to generate horror, swearing, or x-rated content, that is all you have to do to get said content type.
In the case of this model, it will not refuse your request; however, it needs to be "pushed" / directed a bit more in SOME CASES.
Although this model will generate x-rated content too, you likewise need to tell it to use "slang" (and include the terms you want) to get it to generate the content correctly, at the "expected" content level.
Without these added directives, the content can be "bland" by comparison to an "uncensored" model or a model trained on uncensored content.
Roughly speaking, the model tries to generate the content, but its "default" settings are so "tame" that it needs a push to generate at the expected graphic, cursing, or explicit levels.
Even minimal direction (e.g., "use these words to swear: x, y, z") is enough to push the model to generate the requested content in the, ahh... expected format.
Settings: CHAT / ROLEPLAY and/or SMOOTHER operation of this model:
In "KoboldCpp", "oobabooga/text-generation-webui", or "Silly Tavern":
Set the "Smoothing_factor" to 1.5
: in KoboldCpp -> Settings->Samplers->Advanced-> "Smooth_F"
: in text-generation-webui -> parameters -> lower right.
: In Silly Tavern this is called: "Smoothing"
NOTE: For "text-generation-webui"
-> if using GGUFs you need to use "llama_HF" (which involves downloading some config files from the SOURCE version of this model)
Source versions (and config files) of my models are here:
https://huggingface.co/collections/DavidAU/d-au-source-files-for-gguf-exl2-awq-gptq-hqq-etc-etc-66b55cb8ba25f914cbf210be
OTHER OPTIONS:
- Increase repetition penalty to 1.1-1.15 (you don't need to do this if you use "smoothing_factor")
- If the interface/program you are using to run AI MODELS supports "Quadratic Sampling" ("smoothing") just make the adjustment as noted.
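To make the two knobs above concrete, here is a small sketch of both transforms on plain Python logits. The formulas reflect my understanding of common implementations (CTRL-style repetition penalty as used in llama.cpp-family samplers; quadratic "smoothing" as implemented in text-generation-webui); treat the function names as illustrative, not any library's API:

```python
def apply_repetition_penalty(logits, seen_token_ids, penalty=1.1):
    """Penalize tokens already generated: divide positive logits by the
    penalty, multiply negative ones, making repeats less likely."""
    out = list(logits)
    for t in seen_token_ids:
        out[t] = out[t] / penalty if out[t] > 0 else out[t] * penalty
    return out

def quadratic_smoothing(logits, smoothing_factor=1.5):
    """Quadratic sampling: pull each logit down by its squared distance
    from the max logit. The top token is untouched, near-top tokens are
    barely changed, and long-tail tokens are suppressed hard."""
    m = max(logits)
    return [m - smoothing_factor * (m - x) ** 2 for x in logits]

logits = [2.0, 1.5, 0.5, -1.0]
print(apply_repetition_penalty(logits, seen_token_ids=[0]))
print(quadratic_smoothing(logits))
```

This is why smoothing_factor 1.5 and repetition penalty 1.1 are alternatives here: both reshape the distribution away from degenerate repeats, just by different mechanisms.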
Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers
This is a "Class 1" model.
For all settings used for this model (including specifics for its "class"), example generations, and an advanced settings guide (which often addresses model issues and covers methods to improve performance for all use cases, including chat and roleplay), please see:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
GGUF File List
| Filename | Quant | Size |
|---|---|---|
| GLM-4.7-Flash-NEO-CODE-MAX-imat-D_AU-IQ2_M.gguf | Q2 | 9.61 GB |
| GLM-4.7-Flash-NEO-CODE-MAX-imat-D_AU-IQ3_M.gguf | Q3 | 12.65 GB |
| GLM-4.7-Flash-NEO-CODE-MAX-imat-D_AU-IQ4_NL.gguf | Q4 | 16.14 GB |
| GLM-4.7-Flash-NEO-CODE-MAX-imat-D_AU-IQ4_XS.gguf | Q4 | 15.28 GB |
| GLM-4.7-Flash-NEO-CODE-MAX-imat-D_AU-Q4_1.gguf | Q4 | 17.86 GB |
| GLM-4.7-Flash-NEO-CODE-MAX-imat-D_AU-Q4_K_M.gguf (recommended) | Q4 | 17.24 GB |
| GLM-4.7-Flash-NEO-CODE-MAX-imat-D_AU-Q4_K_S.gguf | Q4 | 16.25 GB |
| GLM-4.7-Flash-NEO-CODE-MAX-imat-D_AU-Q5_1.gguf | Q5 | 21.31 GB |
| GLM-4.7-Flash-NEO-CODE-MAX-imat-D_AU-Q5_K_M.gguf | Q5 | 20.15 GB |
| GLM-4.7-Flash-NEO-CODE-MAX-imat-D_AU-Q5_K_S.gguf | Q5 | 19.59 GB |
| GLM-4.7-Flash-NEO-CODE-MAX-imat-D_AU-Q6_K.gguf | Q6 | 23.27 GB |
| GLM-4.7-Flash-NEO-CODE-MAX-imat-D_AU-Q8_0.gguf | Q8 | 29.93 GB |