πŸ“‹ Model Description

gemma-3-270m-qat GGUF

Base model: google/gemma-3-270m-qat

Recommended way to run this model:

For a quick one-off prompt from the command line:

llama-cli -hf ggml-org/gemma-3-270m-qat-GGUF -c 0 -fa -p "hello"

To serve the model with llama.cpp's built-in web UI and OpenAI-compatible API, use llama-server with the same flags:

llama-server -hf ggml-org/gemma-3-270m-qat-GGUF -c 0 -fa

Then, access http://localhost:8080
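
Once llama-server is running, the same model can also be queried programmatically. A minimal sketch using curl against the server's OpenAI-compatible chat endpoint, assuming the default port 8080:

```sh
# Query llama-server's OpenAI-compatible chat endpoint (default port 8080).
# The loaded GGUF is used automatically, so no model field is required here.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "hello"}
        ]
      }'
```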

πŸ“‚ GGUF File List

πŸ“ Filename πŸ“¦ Size ⚑ Download
gemma-3-270m-qat-Q4_0.gguf
Recommended LFS Q4
230.23 MB Download
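
If you prefer a local copy of the Q4_0 file instead of the -hf shorthand, one possible sketch uses the Hugging Face CLI (assuming huggingface_hub is installed; the --local-dir value is arbitrary):

```sh
# Fetch the Q4_0 GGUF from the Hub into the current directory.
huggingface-cli download ggml-org/gemma-3-270m-qat-GGUF \
  gemma-3-270m-qat-Q4_0.gguf --local-dir .

# Run llama-cli against the downloaded file instead of the -hf shorthand.
llama-cli -m ./gemma-3-270m-qat-Q4_0.gguf -c 0 -fa -p "hello"
```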