📋 Model Description


---
language:
  - tr
  - ar
  - af
  - az
  - es
  - en
  - el
  - ro
  - ru
  - rm
  - th
  - uk
  - uz
  - pl
  - pt
  - fa
  - sk
  - sl
  - da
  - de
  - nl
  - fr
  - fi
  - ka
  - hi
  - hu
  - hy
  - ja
  - kk
  - kn
  - ko
  - ku
  - ky
  - la
  - lb
  - id
  - is
  - it
  - zh
  - cs
  - vi
  - be
  - bg
  - bs
  - ne
  - mn
license: mit
tags:
  - turkish
  - türkiye
  - english
  - ai
  - lamapi
  - gemma3
  - next
  - next-x1
  - efficient
  - text-generation
  - open-source
  - 1b
  - huggingface
  - large-language-model
  - llm
  - causal
  - transformer
  - artificial-intelligence
  - machine-learning
  - ai-research
  - natural-language-processing
  - nlp
  - finetuned
  - lightweight
  - creative
  - summarization
  - question-answering
  - chat-model
  - generative-ai
  - optimized-model
  - unsloth
  - trl
  - sft
  - chemistry
  - biology
  - finance
  - legal
  - music
  - art
  - code
  - climate
  - medical
  - agent
  - text-generation-inference
pipeline_tag: text-generation
datasets:
  - mlabonne/FineTome-100k
  - ITCL/FineTomeOs
  - Gryphe/ChatGPT-4o-Writing-Prompts
  - dongguanting/ARPO-SFT-54K
  - GreenerPastures/All-Your-Base-Full
  - Gryphe/Opus-WritingPrompts
  - HuggingFaceH4/MATH-500
  - mlabonne/smoltalk-flat
  - mlabonne/natural_reasoning-formatted
  - OpenSPG/KAG-Thinker-training-dataset
  - uclanlp/Brief-Pro
  - CognitiveKernel/CognitiveKernel-Pro-SFT
  - SuperbEmphasis/Claude-4.0-DeepSeek-R1-RP-SFWish
  - QuixiAI/dolphin-r1
  - mlabonne/lmsys-arena-human-sft-55k
library_name: transformers
---

🚀 Next-1B (t416)

Lightweight, Efficient, and Türkiye-Focused AI



📖 Overview

Next-1B is a 1-billion-parameter causal language model based on Gemma 3, designed for efficiency, low-resource deployment, and reasoning-focused natural language understanding.

Key highlights:

  • Extremely lightweight: runs on consumer GPUs with low VRAM.
  • Optimized for text reasoning, summarization, and creative generation.
  • Supports Turkish natively while remaining multilingual.
  • Open-source and transparent for research and applications.

Ideal for developers, students, and organizations that need fast, reliable, low-resource text generation. A minimal loading sketch follows.
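As a concrete illustration of the low-VRAM claim, here is a minimal loading sketch, assuming a CUDA-capable GPU and the `transformers` and `accelerate` packages; the dtype and device-map settings are illustrative choices, not an official recommendation:

```python
# Minimal low-VRAM loading sketch (illustrative settings, not official).
# Assumptions: CUDA GPU available; transformers and accelerate installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Lamapi/next-1b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision halves weight memory vs. float32
    device_map="auto",          # let accelerate place layers on GPU/CPU automatically
)
```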


Our Next 1B and Next 4B models lead all comparable tiny models on the benchmarks below.

| Model | MMLU (5-shot) % | MMLU-Pro % | GSM8K % | MATH % |
| --- | --- | --- | --- | --- |
| Next 4B preview | 84.6 | 66.9 | 82.7 | 70.5 |
| Next 1B (t327) | 87.3 | 69.2 | 90.5 | 70.1 |
| Qwen 3 0.6B | 52.81 | 37.6 | 60.7 | 20.5 |
| Llama 3.2 1B | 49.3 | 44.4 | 11.9 | 30.6 |

Our Next 14B model also leads state-of-the-art models on some of the benchmarks.

| Model | MMLU (5-shot) % | MMLU-Pro % | GSM8K % | MATH % |
| --- | --- | --- | --- | --- |
| Next 14B (Thinking) | 94.6 | 93.2 | 98.8 | 92.7 |
| Next 12B | 92.7 | 84.4 | 95.3 | 87.2 |
| GPT-5 | 92.5 | 87.0 | 98.4 | 96.0 |
| Claude Opus 4.1 (Thinking) | ~92.0 | 87.8 | 84.7 | 95.4 |

🎯 Goals

  1. Lightweight Efficiency: Run smoothly on low-resource devices.
  2. Reasoning-Focused: Provide logical and coherent text outputs.
  3. Accessibility: Fully open-source with clear documentation.
  4. Multilingual Adaptability: Turkish-focused but supports other languages.

✨ Key Features

| Feature | Description |
| --- | --- |
| 🔋 Lightweight Architecture | Optimized for low VRAM usage; ideal for small GPUs or CPU deployment. |
| 🇹🇷 Turkish & Multilingual | Handles complex Turkish prompts accurately (see the sketch after this table). |
| 🧠 Reasoning Capabilities | Logical chain-of-thought for question-answering and problem-solving. |
| 📊 Consistent Outputs | Reliable and reproducible results across multiple runs. |
| 🌍 Open Source | Transparent, research-friendly, and community-driven. |
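To illustrate the Turkish support claimed above, a short sketch; the question is an arbitrary example of ours, and `tokenizer` is assumed to be loaded as in the Installation & Usage section below:

```python
# Illustrative Turkish chat prompt (assumes tokenizer loaded as in the
# usage section below; the question itself is an arbitrary example).
messages = [
    # "What is the capital of Türkiye? Explain briefly."
    {"role": "user", "content": "Türkiye'nin başkenti neresidir? Kısaca açıkla."}
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
```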

πŸ“ Model Specifications

| Specification | Details |
| --- | --- |
| Base Model | Gemma 3 |
| Parameter Count | 1 billion |
| Architecture | Transformer, causal LLM |
| Fine-Tuning Method | Instruction fine-tuning (SFT) with Turkish and multilingual datasets |
| Optimizations | Quantization-ready (q8, f16, f32); see the GGUF sketch after this table |
| Use Cases | Text generation, summarization, Q&A, creative writing, reasoning tasks |
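Since the card also ships GGUF quantizations (see the file list at the bottom), here is a minimal sketch for running one of them with llama-cpp-python, assuming the package is installed and the `next-1b-q8_0.gguf` file from the list has been downloaded locally:

```python
# Sketch: run a quantized GGUF build with llama-cpp-python.
# Assumptions: `pip install llama-cpp-python`, and next-1b-q8_0.gguf
# (from the GGUF file list below) saved in the working directory.
from llama_cpp import Llama

llm = Llama(model_path="next-1b-q8_0.gguf", n_ctx=2048)
result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello, how are you?"}],
    max_tokens=50,
)
print(result["choices"][0]["message"]["content"])
```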

🚀 Installation & Usage

Use the model:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "Lamapi/next-1b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Chat messages
messages = [
    {"role": "system", "content": "You are Next-X1, a smart and concise AI assistant trained by Lamapi. Always respond in the user's language. Proudly made in Turkey."},
    {"role": "user", "content": "Hello, how are you?"},
]

# Prepare the input with the tokenizer's chat template
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt")

# Generate output from the model
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```



Example output:

Hello, how are you?
I'm fine, thank you. How are you?
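Note that decoding `output[0]` includes the prompt as well as the reply, which is why the question is echoed above. To print only the newly generated tokens, a common Transformers pattern (not specific to this model) is:

```python
# Slice off the prompt tokens so only the model's reply is decoded.
new_tokens = output[0][inputs["input_ids"].shape[-1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```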


📄 License

MIT License: free to use, modify, and distribute. Attribution appreciated.


📞 Contact & Support


Next-1B: lightweight, efficient, and reasoning-focused, bringing Turkey's AI forward on low-resource hardware.

Follow Lamapi on Hugging Face for updates.

📂 GGUF File List

πŸ“ Filename πŸ“¦ Size ⚑ Download
next-1b-bf16.gguf
Recommended LFS FP16
1.87 GB Download
next-1b-f16.gguf
LFS FP16
1.87 GB Download
next-1b-f32.gguf
LFS
3.73 GB Download
next-1b-q8_0.gguf
LFS Q8
1019.77 MB Download
next-1b-tq1_0.gguf
LFS
1.47 GB Download
next-1b-tq2_0.gguf
LFS Q2
1.48 GB Download
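To fetch one of these files programmatically rather than through the Download links, a sketch using huggingface_hub; this assumes the GGUF files are hosted in the `Lamapi/next-1b` repository, which the table above does not state explicitly:

```python
# Sketch: download a GGUF file from the Hub with huggingface_hub.
# Assumption: the GGUF files live in the Lamapi/next-1b repository.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(repo_id="Lamapi/next-1b", filename="next-1b-q8_0.gguf")
print(local_path)  # local cache path of the downloaded file
```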