📋 Model Description


tags:
  • gguf
  • llama.cpp
  • unsloth
datasets:
  • TeichAI/deepseek-v3.2-speciale-1000x
base_model:
  • TeichAI/Qwen3-8B-DeepSeek-v3.2-Speciale-Distill

Qwen3-8B-DeepSeek-v3.2-Speciale-Distill-GGUF

This model was fine-tuned and converted to GGUF format using Unsloth.

Example usage:

  • For text-only LLMs: llama-cli -hf repoid/modelname -p "why is the sky blue?"
  • For multimodal models: llama-mtmd-cli -m modelname.gguf --mmproj mmprojfile.gguf
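
For example, a minimal sketch of running this repository's recommended Q4_K_M quant with llama.cpp. The TeichAI repo id below is assumed from the metadata above; adjust it if the GGUF files are hosted under a different name.

    # Download and run the recommended Q4_K_M quant straight from Hugging Face
    # (repo id assumed from the base_model metadata; newer llama.cpp builds
    #  accept the ":Q4_K_M" quant suffix on -hf)
    llama-cli -hf TeichAI/Qwen3-8B-DeepSeek-v3.2-Speciale-Distill-GGUF:Q4_K_M \
      -p "Why is the sky blue?"

    # Or point llama-cli at a locally downloaded file instead
    llama-cli -m Qwen3-8B-DeepSeek-v3.2-Speciale-Distill.q4_k_m.gguf \
      -p "Why is the sky blue?" -n 512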

Available model files (the full set of quantizations is listed in the GGUF File List below):

  • qwen3-8b.Q8_0.gguf
  • qwen3-8b.F16.gguf

Ollama

An Ollama Modelfile is included for easy deployment.
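
If you prefer to build one yourself, here is a minimal sketch; the local file path and model name below are placeholders, not the contents of the bundled Modelfile.

    # Write a minimal Modelfile pointing at a downloaded quant,
    # then register and run it with Ollama
    cat > Modelfile <<'EOF'
    FROM ./Qwen3-8B-DeepSeek-v3.2-Speciale-Distill.q4_k_m.gguf
    EOF

    ollama create qwen3-speciale-distill -f Modelfile
    ollama run qwen3-speciale-distill "Why is the sky blue?"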

📂 GGUF File List

πŸ“ Filename πŸ“¦ Size ⚑ Download
Qwen3-8B-DeepSeek-v3.2-Speciale-Distill.bf16.gguf
LFS FP16
15.26 GB Download
Qwen3-8B-DeepSeek-v3.2-Speciale-Distill.iq2_m.gguf
LFS Q2
2.84 GB Download
Qwen3-8B-DeepSeek-v3.2-Speciale-Distill.iq2_xs.gguf
LFS Q2
2.51 GB Download
Qwen3-8B-DeepSeek-v3.2-Speciale-Distill.iq3_m.gguf
LFS Q3
3.63 GB Download
Qwen3-8B-DeepSeek-v3.2-Speciale-Distill.iq3_xs.gguf
LFS Q3
3.38 GB Download
Qwen3-8B-DeepSeek-v3.2-Speciale-Distill.iq4_nl.gguf
LFS Q4
4.49 GB Download
Qwen3-8B-DeepSeek-v3.2-Speciale-Distill.iq4_xs.gguf
LFS Q4
4.25 GB Download
Qwen3-8B-DeepSeek-v3.2-Speciale-Distill.q3_k_m.gguf
LFS Q3
3.84 GB Download
Qwen3-8B-DeepSeek-v3.2-Speciale-Distill.q3_k_s.gguf
LFS Q3
3.51 GB Download
Qwen3-8B-DeepSeek-v3.2-Speciale-Distill.q4_k_m.gguf
Recommended LFS Q4
4.68 GB Download
Qwen3-8B-DeepSeek-v3.2-Speciale-Distill.q8_0.gguf
LFS Q8
8.11 GB Download
qwen3-8b.F16.gguf
LFS FP16
15.26 GB Download
qwen3-8b.Q8_0.gguf
LFS Q8
8.11 GB Download