πŸ“‹ Model Description


---
license: other
license_name: youtu-vl
license_link: https://huggingface.co/tencent/Youtu-VL-4B-Instruct/blob/main/LICENSE.txt
pipeline_tag: image-text-to-text
extra_gated_eu_disallowed: true
library_name: transformers
---

Youtu-VL-4B-Instruct GGUF Models

Model Generation Details

This model was generated using llama.cpp at commit 8872ad212.


Quantization Beyond the IMatrix

I've been experimenting with a new quantization approach that selectively elevates the precision of key layers beyond what the default IMatrix configuration provides.

In my testing, standard IMatrix quantization underperforms at lower bit depths, especially with Mixture of Experts (MoE) models. To address this, I'm using the --tensor-type option in llama.cpp to manually "bump" important layers to higher precision. You can see the implementation here:
πŸ‘‰ Layer bumping with llama.cpp

While this does increase model file size, it significantly improves precision for a given quantization level.
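As a rough illustration, a per-tensor override with llama-quantize looks like the command below. The tensor-name patterns and target types shown are only examples for this sketch; see the linked implementation for the overrides actually used:

llama-quantize --imatrix Youtu-VL-4B-Instruct-imatrix.gguf --tensor-type attn_v=q6_k --tensor-type ffn_down=q6_k Youtu-VL-4B-Instruct-f16.gguf Youtu-VL-4B-Instruct-q4_k_m.gguf q4_k_m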

I'd love your feedbackβ€”have you tried this? How does it perform for you?



Click here to get info on choosing the right GGUF model format


🎯 Introduction

Youtu-VL is a lightweight yet robust Vision-Language Model (VLM) built on the Youtu-LLM with 4B parameters. It pioneers Vision-Language Unified Autoregressive Supervision (VLUAS), which markedly strengthens visual perception and multimodal understanding. This enables a standard VLM to perform vision-centric tasks without task-specific additions. Across benchmarks, Youtu-VL stands out for its versatility, achieving competitive results on both vision-centric and general multimodal tasks.

✨ Key Features

- Comprehensive Vision-Centric Capabilities: The model demonstrates strong, broad proficiency across classic vision-centric tasks, delivering competitive performance in visual grounding, image classification, object detection, referring segmentation, semantic segmentation, depth estimation, object counting, and human pose estimation.

- Promising Performance with High Efficiency: Despite its compact 4B-parameter architecture, the model achieves competitive results across a wide range of general multimodal tasks, including general visual question answering (VQA), multimodal reasoning and mathematics, optical character recognition (OCR), multi-image and real-world understanding, hallucination evaluation, and GUI agent tasks.



πŸ€— Model Download

| Model Name | Description | Download |
| ----------- | ----------- | ----------- |
| Youtu-VL-4B-Instruct | Vision-language model based on Youtu-LLM | πŸ€— Model |
| Youtu-VL-4B-Instruct-GGUF | Vision-language model based on Youtu-LLM, in GGUF format | πŸ€— Model |

🧠 Model Architecture Highlights

- Vision–Language Unified Autoregressive Supervision (VLUAS): Youtu-VL is built on the VLUAS paradigm to mitigate the text-dominant optimization bias in conventional VLMs, where visual signals are treated as passive conditions and fine-grained details are often dropped. Rather than using vision features only as inputs, Youtu-VL expands the text lexicon into a unified multimodal vocabulary through a learned visual codebook, turning visual signals into autoregressive supervision targets. Jointly reconstructing visual tokens and text explicitly preserves dense visual information while strengthening multimodal semantic understanding.

- Vision-Centric Prediction with a Standard Architecture (no task-specific modules): Youtu-VL treats image and text tokens with equivalent autoregressive status, empowering it to perform vision-centric tasks for both dense vision prediction (e.g., segmentation, depth) and text-based prediction (e.g., grounding, detection) within a standard VLM architecture, eliminating the need for task-specific additions. This design yields a versatile general-purpose VLM, allowing a single model to flexibly accommodate a wide range of vision-centric and vision-language requirements.




πŸ† Model Performance

Vision-Centric Tasks



General Multimodal Tasks




πŸš€ Quickstart

Using Transformers to Chat

Ensure your Python environment has the transformers library installed and that the version meets the requirements.

pip install "transformers>=4.56.0,<=4.57.1" torch accelerate pillow torchvision git+https://github.com/lucasb-eyer/pydensecrf.git opencv-python-headless
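If you are unsure whether your environment meets the version pin, a quick check (using nothing beyond the packages installed above) is:

import transformers, torch
print(transformers.__version__)   # should be >= 4.56.0 and <= 4.57.1
print(torch.cuda.is_available())  # the snippet below assumes a CUDA device is available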

The snippet below shows how to interact with the chat model using transformers:

from transformers import AutoProcessor, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "tencent/Youtu-VL-4B-Instruct",
    attn_implementation="flash_attention_2",
    torch_dtype="auto",
    device_map="cuda",
    trust_remote_code=True,
).eval()

processor = AutoProcessor.from_pretrained(
    "tencent/Youtu-VL-4B-Instruct", use_fast=True, trust_remote_code=True
)

img_path = "./assets/logo.png"
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": img_path},
            {"type": "text", "text": "Describe the image"},
        ],
    }
]

inputs = processor.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

generated_ids = model.generate(
    **inputs,
    temperature=0.1,
    top_p=0.001,
    repetition_penalty=1.05,
    do_sample=True,
    max_new_tokens=32768,
    img_input=img_path,
)

# Strip the prompt tokens so that only the newly generated tokens are decoded
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
outputs = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
generated_text = outputs[0]
print(f"Youtu-VL output: {generated_text}")

Demo for VL and CV tasks

A simple demo for a quick start, covering both VL and CV tasks: Jupyter notebook

The core of this demo is the three lines below:

model_path = "tencent/Youtu-VL-4B-Instruct"
youtu_vl = YoutuVL(model_path)
response = youtu_vl(prompt, img_path, seg_mode=seg_mode)

Qualitative Results

  • Task: Grounding
    > Prompt: Please provide the bounding box coordinate of the region this sentence describes: a black and white cat sitting on the edge of the bathtub (see the sketch after this list)
  • Task: Object Detection
    > Prompt: Detect all objects in the provided image.
  • Task: Referring Segmentation
    > Prompt: Can you segment "hotdog on left" in this image?
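As a rough sketch of how such a prompt maps onto the chat interface above, the grounding example can be issued by reusing the model and processor objects from the Quickstart; the image path here is a placeholder:

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "./assets/example.jpg"},  # placeholder image path
            {"type": "text", "text": "Please provide the bounding box coordinate of the region this sentence describes: a black and white cat sitting on the edge of the bathtub"},
        ],
    }
]
# Feed `messages` through processor.apply_chat_template(...) and model.generate(...)
# exactly as in the Quickstart snippet above.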

For more examples, please refer to the paper and the Jupyter notebooks.

πŸŽ‰ Citation

If you find our work useful in your research, please consider citing our paper:

@article{youtu-vl,
  title={Youtu-VL: Unleashing Visual Potential via Unified Vision-Language Supervision},
  author={Tencent Youtu Lab},
  year={2026},
  eprint={2601.19798},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2601.19798}, 
}

@article{youtu-llm,
  title={Youtu-LLM: Unlocking the Native Agentic Potential for Lightweight Large Language Models},
  author={Tencent Youtu Lab},
  year={2025},
  eprint={2512.24618},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2512.24618},
}


πŸš€ If you find these models useful

Help me test my AI-Powered Quantum Network Monitor Assistant with quantum-ready security checks:

πŸ‘‰ Quantum Network Monitor

The full open-source code for the Quantum Network Monitor Service is available in my GitHub repos (repos with NetworkMonitor in the name): Source Code Quantum Network Monitor. You will also find the code I use to quantize the models, if you want to do it yourself, in GGUFModelBuilder.

πŸ’¬ How to test:
Choose an AI assistant type:
- TurboLLM (GPT-4.1-mini)
- HugLLM (Hugging Face open-source models)
- TestLLM (Experimental CPU-only)

What I’m Testing

I’m pushing the limits of small open-source models for AI network monitoring, specifically:
  • Function calling against live network services
  • How small can a model go while still handling:
    - Automated Nmap security scans
    - Quantum-readiness checks
    - Network monitoring tasks

🟑 TestLLM – Current experimental model (llama.cpp on 2 CPU threads on huggingface docker space):

  • βœ… Zero-configuration setup
  • ⏳ 30s load time (slow inference but no API costs). No token limit, as the cost is low.
  • πŸ”§ Help wanted! If you’re into edge-device AI, let’s collaborate!

Other Assistants

🟒 TurboLLM – Uses gpt-4.1-mini:
  • It performs very well, but unfortunately OpenAI charges per token, so token usage is limited.
  • Create custom cmd processors to run .net code on Quantum Network Monitor Agents
  • Real-time network diagnostics and monitoring
  • Security Audits
  • Penetration testing (Nmap/Metasploit)

πŸ”΅ HugLLM – Latest Open-source models:

  • 🌐 Runs on the Hugging Face Inference API. Performs pretty well using the latest models hosted on Novita.

πŸ’‘ Example commands you could test:

  1. "Give me info on my websites SSL certificate"
  2. "Check if my server is using quantum safe encyption for communication"
  3. "Run a comprehensive security audit on my server"
  4. '"Create a cmd processor to .. (what ever you want)" Note you need to install a Quantum Network Monitor Agent to run the .net code on. This is a very flexible and powerful feature. Use with caution!

Final Word

I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAIβ€”all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is open source. Feel free to use whatever you find helpful.

If you appreciate the work, please consider buying me a coffee β˜•. Your support helps cover service costs and allows me to raise token limits for everyone.

I'm also open to job opportunities or sponsorship.

Thank you! 😊

πŸ“‚ GGUF File List

πŸ“ Filename πŸ“¦ Size ⚑ Download
Youtu-VL-4B-Instruct-bf16.gguf
LFS FP16
9.13 GB Download
Youtu-VL-4B-Instruct-bf16_q8_0.gguf
LFS Q8
6.72 GB Download
Youtu-VL-4B-Instruct-f16.gguf
LFS FP16
9.13 GB Download
Youtu-VL-4B-Instruct-f16_q8_0.gguf
LFS Q8
6.72 GB Download
Youtu-VL-4B-Instruct-imatrix.gguf
LFS
7.08 MB Download
Youtu-VL-4B-Instruct-iq2_m.gguf
LFS Q2
2.25 GB Download
Youtu-VL-4B-Instruct-iq2_s.gguf
LFS Q2
2.24 GB Download
Youtu-VL-4B-Instruct-iq2_xs.gguf
LFS Q2
2.17 GB Download
Youtu-VL-4B-Instruct-iq3_m.gguf
LFS Q3
2.3 GB Download
Youtu-VL-4B-Instruct-iq3_s.gguf
LFS Q3
2.3 GB Download
Youtu-VL-4B-Instruct-iq3_xs.gguf
LFS Q3
2.22 GB Download
Youtu-VL-4B-Instruct-iq3_xxs.gguf
LFS Q3
2.16 GB Download
Youtu-VL-4B-Instruct-iq4_nl.gguf
LFS Q4
2.75 GB Download
Youtu-VL-4B-Instruct-iq4_xs.gguf
LFS Q4
2.63 GB Download
Youtu-VL-4B-Instruct-q2_k_l.gguf
LFS Q2
2.31 GB Download
Youtu-VL-4B-Instruct-q2_k_m.gguf
LFS Q2
2.15 GB Download
Youtu-VL-4B-Instruct-q2_k_s.gguf
LFS Q2
2.06 GB Download
Youtu-VL-4B-Instruct-q3_k_l.gguf
LFS Q3
2.73 GB Download
Youtu-VL-4B-Instruct-q3_k_m.gguf
LFS Q3
2.57 GB Download
Youtu-VL-4B-Instruct-q3_k_s.gguf
LFS Q3
2.48 GB Download
Youtu-VL-4B-Instruct-q4_0.gguf
Recommended LFS Q4
2.57 GB Download
Youtu-VL-4B-Instruct-q4_0_l.gguf
LFS Q4
2.91 GB Download
Youtu-VL-4B-Instruct-q4_1.gguf
LFS Q4
2.86 GB Download
Youtu-VL-4B-Instruct-q4_1_l.gguf
LFS Q4
3.15 GB Download
Youtu-VL-4B-Instruct-q4_k_l.gguf
LFS Q4
3.08 GB Download
Youtu-VL-4B-Instruct-q4_k_m.gguf
LFS Q4
2.91 GB Download
Youtu-VL-4B-Instruct-q4_k_s.gguf
LFS Q4
2.84 GB Download
Youtu-VL-4B-Instruct-q5_0.gguf
LFS Q5
3.14 GB Download
Youtu-VL-4B-Instruct-q5_0_l.gguf
LFS Q5
3.4 GB Download
Youtu-VL-4B-Instruct-q5_1.gguf
LFS Q5
3.43 GB Download
Youtu-VL-4B-Instruct-q5_1_l.gguf
LFS Q5
3.64 GB Download
Youtu-VL-4B-Instruct-q5_k_l.gguf
LFS Q5
3.53 GB Download
Youtu-VL-4B-Instruct-q5_k_m.gguf
LFS Q5
3.36 GB Download
Youtu-VL-4B-Instruct-q5_k_s.gguf
LFS Q5
3.33 GB Download
Youtu-VL-4B-Instruct-q6_k_l.gguf
LFS Q6
3.93 GB Download
Youtu-VL-4B-Instruct-q6_k_m.gguf
LFS Q6
3.77 GB Download
Youtu-VL-4B-Instruct-q8_0.gguf
LFS Q8
4.85 GB Download
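To fetch one of the files above programmatically, something like the snippet below works with huggingface_hub. The repo_id is a placeholder assumption; substitute the repository this file list actually belongs to:

from huggingface_hub import hf_hub_download

# repo_id below is a placeholder; replace it with the repository hosting these GGUF files
path = hf_hub_download(
    repo_id="<username>/Youtu-VL-4B-Instruct-GGUF",
    filename="Youtu-VL-4B-Instruct-q4_0.gguf",
)
print(path)  # local path to the downloaded GGUF file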