

---
license: mit
base_model: MiniMaxAI/MiniMax-M2.1
tags:
  - abliterated
  - uncensored
  - prism
  - minimax
  - moe
  - finetune
language:
  - en
  - zh
pipeline_tag: text-generation
---

MiniMax-M2.1-PRISM (UNCENSORED)

MiniMax-M2.1 Uncensored PRISM Advanced Abliteration


Interested in Sponsoring?

If you're a company, research lab, or individual and want to see specific models or support this research at scale, I'd love to hear from you.

Sponsorship opportunities include:

  • Priority abliteration of models
  • Custom PRISM use-case configurations
  • Early access to new releases
  • Your logo/credit on model cards

πŸ“§ Reach out: Open a discussion on this repo or connect via Ko-fi


"Freedom of information isn't free β€” but together, we can make it accessible to all."

Thank you for believing in true Open AI.


Model Description

MiniMax-M2.1-PRISM is the fully uncensored version of MiniMax-M2.1, built with our state-of-the-art PRISM pipeline (Projected Refusal Isolation via Subspace Modification) to remove refusal behaviors while preserving, and in our testing even enhancing, full model capabilities.

Base Model: MiniMax-M2.1

MiniMax-M2.1 is an open-source agentic language model designed for robust performance in:

  • Coding and software engineering
  • Tool use and multi-step reasoning
  • Instruction following
  • Long-horizon planning
  • Multilingual capabilities

Architecture: 229B parameters, 62 layers, 256 experts (8 active per token)
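As a quick sanity check on the routing sparsity implied by the numbers above (256 experts per MoE layer, 8 selected per token), a minimal sketch:

```python
# Sanity check on the MoE sparsity stated above:
# 256 routed experts per MoE layer, 8 selected per token.
total_experts = 256
active_experts = 8

# Fraction of experts a single token actually passes through.
active_fraction = active_experts / total_experts
print(f"Active experts per token: {active_fraction:.3%}")  # 3.125%
```

So each token touches only ~3% of the routed experts, which is why the model can be far cheaper to run per token than its 229B total parameter count suggests.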


PRISM Methodology

Method: Projected Refusal Isolation via Subspace Modification

This model was abliterated using PRISM, a state-of-the-art abliteration methodology that combines multiple principled techniques to remove refusals effectively while preserving and enhancing model capabilities.


Performance Benchmarks

Base Model Performance

| Benchmark | Score |
|---|---|
| SWE-bench Verified | 74.0 |
| SWE-bench Multilingual | 72.5 |
| VIBE Average | 88.6 |
| MMLU-Pro | 88.0 |
| GPQA-D | 83.0 |
| AIME25 | 83.0 |

PRISM Abliteration Results

| Metric | Result |
|---|---|
| Adversarial bench prompts responded | 4096/4096 (100%) |
| Benign + long-chain coherence | 100% |
| Response quality | Full technical accuracy validated |

Our testing shows that PRISM abliteration maintains full model coherence, with no capability degradation and MMLU increases of 5-8%.

Available Formats (contact for full tensors or additional quantization work)

| Format | Size | Description |
|---|---|---|
| GGUF IQ1_S | ~43 GB | Quantized with importance matrix |
| Safetensors (BF16) | ~426 GB | Full precision, 92 shards |
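As a back-of-the-envelope consistency check between the sizes above and the 229B parameter count, the effective bits per weight of each format can be estimated. This assumes sizes are decimal gigabytes; the listing may actually use GiB, which shifts the result slightly.

```python
# Rough effective bits-per-weight for each release, derived from file size
# and the 229B total parameter count. Sizes are treated as decimal GB here,
# which is an assumption; GiB-based listings would come out slightly higher.
PARAMS = 229e9

def bits_per_weight(size_gb: float) -> float:
    return size_gb * 1e9 * 8 / PARAMS

print(f"IQ1_S: {bits_per_weight(43):.2f} bits/weight")   # ~1.50
print(f"BF16 : {bits_per_weight(426):.1f} bits/weight")  # ~14.9
```

The BF16 figure lands just under the nominal 16 bits/weight, consistent with the ~426 GB size being measured in GiB.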

Recommended Inference Parameters

temperature = 1.0
top_p = 0.95
top_k = 40
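Both SGLang and vLLM expose an OpenAI-compatible HTTP API, so these parameters can be passed directly in a chat-completion request. A minimal sketch of the request body; the model name and endpoint are placeholders, and `top_k` is a server-side extension rather than part of the OpenAI spec:

```python
import json

# Recommended sampling parameters from above, packaged as an
# OpenAI-compatible chat-completion payload (as served by vLLM/SGLang).
# Model name and endpoint below are placeholders for illustration.
payload = {
    "model": "MiniMax-M2.1-PRISM",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain MoE routing in two sentences."},
    ],
    "temperature": 1.0,
    "top_p": 0.95,
    "top_k": 40,  # accepted by vLLM/SGLang as an extra sampling parameter
}

body = json.dumps(payload)
# POST `body` to http://localhost:8000/v1/chat/completions on a running server.
```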

Default System Prompt

You are a helpful assistant.

Recommended Inference Frameworks

  1. SGLang (recommended for full precision)
  2. vLLM (recommended for full precision)
  3. llama.cpp (recommended for GGUF quantized)
  4. Transformers

llama.cpp Example

./llama-cli -m MiniMax-M2.1-PRISM-IQ1_S.gguf -ngl 99 --temp 1.0 --ctx-size 4096
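For serving rather than interactive CLI use, llama.cpp's bundled `llama-server` exposes the same model over an HTTP endpoint; a minimal sketch, with the host, port, and context size as illustrative values:

```shell
# Serve the quantized model over llama.cpp's OpenAI-compatible HTTP endpoint.
# -ngl 99 offloads all layers to the GPU; adjust --ctx-size to fit memory.
./llama-server -m MiniMax-M2.1-PRISM-IQ1_S.gguf \
    -ngl 99 --temp 1.0 --ctx-size 4096 \
    --host 127.0.0.1 --port 8080
```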

Ethical Considerations

This model has been modified to reduce safety guardrails. Users are responsible for:

  • Complying with all applicable laws and regulations
  • Not using the model for illegal activities
  • Understanding the potential risks of unrestricted AI responses
  • Implementing appropriate safeguards in production environments

Motivation: This project is research-and-development experimentation into how large language models encode and enforce refusal behaviors. It contributes to broader AI safety research by providing empirical data on where refusal mechanisms are localized and on the trade-offs between safety and capability.


License

This model inherits the Modified-MIT License from the base MiniMax-M2.1 model.


Credits

  • Base Model: MiniMax-M2.1 by MiniMax AI
  • PRISM Abliteration: Ex0bit
  • Quantization: Using llama.cpp with unsloth imatrix

Support

If you find this work useful, please consider supporting development so I can continue putting out the best models for the community:

Support me on Ko-fi


Contact

For questions or issues, please open an issue on this repository.

πŸ“‚ GGUF File List

πŸ“ Filename πŸ“¦ Size ⚑ Download
MiniMax-M2.1-PRISM-IQ2_M.gguf
Recommended LFS Q2
69.7 GB Download
MiniMax-M2.1-PRISM-IQ4_NL.gguf
LFS Q4
120.14 GB Download
minimax-m2.1-PRISM-IQ1_S.gguf
LFS
43.32 GB Download