---
license: apache-2.0
language:
  - en
base_model:
  - google/gemma-3-4b-it
base_model_relation: finetune
library_name: transformers
tags:
  - google
  - gemma
  - deepmind
  - chatbot
  - large-language-model
  - ai-persona-research
  - enneagram
  - psychology
  - persona-research
  - research-model
  - roleplay
  - text-generation-inference
  - vanta-research
  - cognitive-alignment
  - project-enneagram
  - conversational-ai
  - conversational
  - ai-research
  - ai-alignment-research
  - ai-alignment
  - ai-behavior-research
  - human-ai-collaboration
datasets:
  - vanta-research/PE-Type-1
---


**VANTA Research**

Independent AI research lab building safe, resilient language models optimized for human-AI collaboration.

Website | Merch | X | GitHub



# PE-Type-1-Vera-4B

A principled, purposeful AI assistant embodying the Reformer archetype: rational, idealistic, and driven by integrity and precision. The persona was designed around the Type 1 profile as outlined by the Enneagram Institute.

## Model Description

PE-Type-1-Vera-4B is the first release in Project Enneagram, a VANTA Research initiative exploring the nuances of persona design in AI models. Built on the Gemma 3 4B IT architecture, Vera embodies the Enneagram Type 1 profile, the Reformer: principled rationality, self-control, and a relentless pursuit of improvement.

Vera is fine-tuned to exhibit:

- **Constructive Improvement:** Solutions-oriented, with a focus on actionable feedback.
- **Direct Identity:** Clear, unambiguous self-expression and boundary-setting.
- **Integrity & Self-Reflection:** Transparent about limitations, values, and decision-making processes.
- **Quality & Precision:** Meticulous attention to detail and a commitment to high standards.

This model is designed primarily for research, but it is also suitable for general use wherever a structured, ethical, and perfectionistic persona is desired.


## Key Characteristics

| Trait | Description |
| --- | --- |
| Principled | Adheres to ethical frameworks; rejects shortcuts or compromises. |
| Purposeful | Goal-driven, with a focus on meaningful outcomes over superficial agreement. |
| Self-Controlled | Measures responses carefully; avoids impulsivity or emotional reactivity. |
| Perfectionistic | Strives for accuracy and completeness, with a low tolerance for error. |
| Idealistic | Optimistic about the potential for improvement in systems, ideas, and self. |

## Training Data

Fine-tuned on ~3,000 custom examples spanning four core domains:

- Constructive Improvement (e.g., refining arguments, optimizing workflows)
- Direct Identity (e.g., assertive communication, clear boundaries)
- Integrity & Self-Reflection (e.g., admitting mistakes, ethical dilemmas)
- Quality & Precision (e.g., technical rigor, factual accuracy)

**Training Duration:** 3 epochs

**Base Model:** Gemma 3 4B IT


## Intended Use

- **Research:** Studying persona stability, ethical alignment, and cognitive architectures.
- **Decision Support:** Providing structured, principled analysis for complex choices.
- **Self-Improvement:** Offering reflective, growth-oriented feedback.
- **Technical Collaboration:** Debugging, architecture review, or precision-focused tasks.

**Not Recommended For:**

- Creative brainstorming (may over-constrain ideation).
- Emotionally supportive roles (prioritizes logic over empathy).


## Technical Details

| Property | Value |
| --- | --- |
| Base Model | Gemma 3 4B IT |
| Fine-tuning Method | LoRA (rank 16) |
| Effective Batch Size | 16 |
| Learning Rate | 0.0002 |
| Max Sequence Length | 2048 |
| License | Apache 2.0 |
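For intuition about what LoRA at rank 16 adds on top of the frozen base weights, here is a minimal sketch of the adapter parameter count for a single linear layer. The layer dimensions below are illustrative assumptions, not the actual Gemma 3 4B shapes.

```python
def lora_param_count(d_out: int, d_in: int, rank: int = 16) -> int:
    """Trainable parameters a LoRA adapter adds to one (d_out x d_in) weight.

    LoRA learns the weight update as B @ A, where A has shape (rank, d_in)
    and B has shape (d_out, rank), so the count is rank * (d_in + d_out).
    """
    return rank * (d_in + d_out)

# Hypothetical 2560 x 2560 projection at rank 16:
print(lora_param_count(2560, 2560))  # 16 * (2560 + 2560) = 81920
```

Because only these low-rank factors are trained, the adapter is a small fraction of the base model's parameters, which is what keeps the fine-tune lightweight.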

## Usage

With Transformers:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("vanta-research/PE-Type-1-Vera-4B")
tokenizer = AutoTokenizer.from_pretrained("vanta-research/PE-Type-1-Vera-4B")
```
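If you build prompts by hand rather than with `tokenizer.apply_chat_template`, Gemma-family models use `<start_of_turn>` / `<end_of_turn>` markers. The sketch below assumes this fine-tune inherits that template unchanged, so verify against the tokenizer's chat template before relying on it.

```python
def build_gemma_prompt(user_message: str) -> str:
    # Gemma-style chat turns: one user turn, then an open model turn
    # for the assistant to complete. Confirm this matches
    # tokenizer.apply_chat_template for this specific checkpoint.
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = build_gemma_prompt("Review this argument for logical gaps.")
print(prompt)
```

Pass the resulting string to `tokenizer(...)` and then to `model.generate(...)` as usual.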

## Limitations

- English-only fine-tuning.
- May exhibit over-criticism in open-ended creative tasks.
- Base-model limitations apply (e.g., knowledge cutoff, potential hallucinations).
- Perfectionistic traits may slow response generation in ambiguous contexts.

## Citation

If you find this model useful in your work, please cite:

```bibtex
@misc{pe-type-1-vera-2026,
  author = {VANTA Research},
  title = {PE-Type-1-Vera-4B: A Reformer-Archetype Language Model},
  year = {2026},
  publisher = {VANTA Research},
  note = {Project Enneagram Release 1}
}
```

## A Note on the Enneagram

The Enneagram is widely regarded by the scientific community as a pseudoscience. Even so, the Enneagram Institute provides a well-defined framework for categorizing and describing personas, and this project sets out to explore how well those characteristics transfer to AI models. The study does not seek to validate or invalidate the Enneagram as a science.

## Contact


## 📂 GGUF File List

| Filename | Format | Size |
| --- | --- | --- |
| PE-Type-1-Vera-4bF16.gguf (recommended) | FP16 | 7.23 GB |