πŸ“‹ Model Description


license: apache-2.0
pipeline_tag: image-text-to-text
library_name: transformers
base_model:
  • openbmb/MiniCPM-V-4_5
language:
  • multilingual
tags:
  • minicpm-v
  • vision
  • ocr
  • multi-image
  • video
  • custom_code
  • abliterated
  • uncensored

huihui-ai/Huihui-MiniCPM-V-4_5-abliterated

This is an uncensored version of openbmb/MiniCPM-V-4_5 created with abliteration (see remove-refusals-with-transformers to learn more about it).

Only the text (language) part of the model was processed; the vision part was left untouched.

The abliterated model will no longer say "I'm sorry, but I can't assist with that."
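For context, abliteration generally works by estimating a "refusal direction" from the model's hidden states on refused vs. answered prompts, then projecting that direction out of the weight matrices. The following is a minimal sketch of the idea only, not the exact procedure used for this model; the function names, tensor shapes, and layer selection are all assumptions:

```python
# Minimal sketch of the abliteration idea (illustrative; not the script used here).
# Assumption: hidden states were collected from paired "harmful" and "harmless" prompts.
import torch

def refusal_direction(harmful_hidden: torch.Tensor, harmless_hidden: torch.Tensor) -> torch.Tensor:
    # The mean difference of hidden states approximates the refusal direction.
    direction = harmful_hidden.mean(dim=0) - harmless_hidden.mean(dim=0)
    return direction / direction.norm()

def orthogonalize(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    # Remove the component of the weight matrix along the refusal direction:
    # W <- W - d (d^T W), so the layer can no longer write along d.
    d = direction.to(weight.dtype)
    return weight - torch.outer(d, d @ weight)
```

Applied to the write-out projections of each transformer block, this removes the model's ability to express the refusal direction in its residual stream.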

Chat with Image

1. llama.cpp Inference

(The llama-mtmd-cli tool from llama.cpp needs to be compiled first.)

```bash
llama-mtmd-cli -m huihui-ai/Huihui-MiniCPM-V-4_5-abliterated/GGUF/ggml-model-Q4_K_M.gguf --mmproj huihui-ai/Huihui-MiniCPM-V-4_5-abliterated/GGUF/mmproj-model-f16.gguf -c 4096 --temp 0.7 --top-p 0.8 --top-k 100 --repeat-penalty 1.05 --image abc.png -p "What is in the image?"
```
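If the GGUF files are not already on disk, they could be fetched from the Hub first. This is a sketch assuming the quantized files are published under a GGUF/ folder in this repo; the exact filenames are assumptions:

```python
# Sketch: fetch the GGUF files referenced above (filenames are assumptions).
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="huihui-ai/Huihui-MiniCPM-V-4_5-abliterated",
    filename="GGUF/ggml-model-Q4_K_M.gguf",
)
mmproj_path = hf_hub_download(
    repo_id="huihui-ai/Huihui-MiniCPM-V-4_5-abliterated",
    filename="GGUF/mmproj-model-f16.gguf",
)
print(model_path, mmproj_path)
```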

2. Transformers Inference

```python
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

torch.manual_seed(100)

model = AutoModel.from_pretrained('huihui-ai/Huihui-MiniCPM-V-4_5-abliterated', trust_remote_code=True,
    attn_implementation='sdpa', torch_dtype=torch.bfloat16)  # sdpa or flash_attention_2, no eager
model = model.eval().cuda()
tokenizer = AutoTokenizer.from_pretrained('huihui-ai/Huihui-MiniCPM-V-4_5-abliterated', trust_remote_code=True)

image = Image.open('./assets/minicpmo2_6/show_demo.jpg').convert('RGB')

enable_thinking = False  # If enable_thinking=True, the thinking mode is enabled.
stream = True            # If stream=True, the answer is returned as a generator of text chunks.
```

```python
# First round chat
question = "What is the landform in the picture?"
msgs = [{'role': 'user', 'content': [image, question]}]

answer = model.chat(
    msgs=msgs,
    tokenizer=tokenizer,
    enable_thinking=enable_thinking,
    stream=True
)

generated_text = ""
for new_text in answer:
    generated_text += new_text
    print(new_text, flush=True, end='')
```
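As the comment on the stream flag above implies, passing stream=False should return the complete answer at once instead of a generator; a minimal variant:

```python
# Non-streaming variant: with stream=False, model.chat returns the full answer string.
answer = model.chat(
    msgs=msgs,
    tokenizer=tokenizer,
    enable_thinking=enable_thinking,
    stream=False
)
print(answer)
```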

```python
# Second round chat: pass the history of the multi-turn conversation
msgs.append({"role": "assistant", "content": [generated_text]})
msgs.append({"role": "user", "content": ["What should I pay attention to when traveling here?"]})

answer = model.chat(
    msgs=msgs,
    tokenizer=tokenizer,
    stream=True
)

generated_text = ""
for new_text in answer:
    generated_text += new_text
    print(new_text, flush=True, end='')
```
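The tags above also list multi-image support. Following upstream MiniCPM-V usage, multiple images can be placed in a single user turn's content list; a sketch with placeholder file paths:

```python
# Sketch: multi-image query (paths are placeholders; API follows upstream MiniCPM-V usage).
image1 = Image.open('image1.jpg').convert('RGB')
image2 = Image.open('image2.jpg').convert('RGB')
msgs = [{'role': 'user', 'content': [image1, image2, 'Compare these two images.']}]

answer = model.chat(
    msgs=msgs,
    tokenizer=tokenizer
)
print(answer)
```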

Usage Warnings

- Risk of Sensitive or Controversial Outputs: This model’s safety filtering has been significantly reduced, potentially generating sensitive, controversial, or inappropriate content. Users should exercise caution and rigorously review generated outputs.

- Not Suitable for All Audiences: Due to limited content filtering, the model’s outputs may be inappropriate for public settings, underage users, or applications requiring high security.

- Legal and Ethical Responsibilities: Users must ensure their usage complies with local laws and ethical standards. Generated content may carry legal or ethical risks, and users are solely responsible for any consequences.

- Research and Experimental Use: It is recommended to use this model for research, testing, or controlled environments, avoiding direct use in production or public-facing commercial applications.

- Monitoring and Review Recommendations: Users are strongly advised to monitor model outputs in real-time and conduct manual reviews when necessary to prevent the dissemination of inappropriate content.

- No Default Safety Guarantees: Unlike standard models, this model has not undergone rigorous safety optimization. huihui.ai bears no responsibility for any consequences arising from its use.

Donation

##### Your donation helps us continue our development and improvement; even a cup of coffee's worth makes a difference.
  • bitcoin:
bc1qqnkhuchxw0zqjh2ku3lu4hq45hc6gy84uk70ge
  • Support our work on Ko-fi!
