📋 Model Description


---
base_model: arcee-ai/AFM-4.5B
library_name: transformers
pipeline_tag: text-generation
language:
  - en
tags:
  - medical
  - instruction-tuned
  - dpo
  - grpo
  - cot
  - mergekit
  - arcee-fusion
  - openmed
license: apache-2.0
---

AFM-4.5B-OpenMed-GGUF

Lightweight medical fine-tune on top of Arcee's AFM-4.5B for education and research use. Trained with a simple three-stage recipe (SFT → DPO → GRPO-CoT) and finalized via Arcee Fusion weight merging (MergeKit).

More information about our methodology will be available in a forthcoming blog post.

All experiments were performed on AMD MI300x GPUs, with computing credits generously provided by Hot AISLE.

⚠️ Medical safety

This model is not a clinician. It can hallucinate and should not be used for diagnosis or treatment. Always involve qualified medical professionals.


TL;DR

  • Base: arcee-ai/AFM-4.5B – Arcee's 4.5B instruction model intended for cloud-to-edge deployment.
  • Training (high level):
1) SFT on proprietary synthetic medical datasets + tool-calling (search) traces
2) DPO using MedMCQA-derived preferences (multiple-choice signal)
3) GRPO for chain-of-thought enrichment, using MedReason verifiable rewards; short rationales encouraged, final answer checked
4) Model merge: Arcee Fusion (MergeKit) for selective, importance-aware parameter fusion
  • Eval (EleutherAI harness; author's settings, bs=64):
    - MMLU: 61.10 (vs 55.53 base)
    - MMLU-Pro: 33.44 (vs 32.61 base) – harder 10-choice variant
    - IFEVAL: 63.55 (vs 63.67 base) – verifiable instruction following

Note: Arcee's internal evals may use different harnesses; avoid cross-harness comparisons.


What's inside

Specialization steps

  1. Domain SFT (medical + tools)
Instruction-style synthetic medical Q&A + conversions; supervised search/tool-use traces to teach function-calling patterns compatible with chat templates (a hypothetical trace is sketched after this list).
  2. Preference alignment (DPO)
Uses MedMCQA correctness as a proxy preference signal to bias toward concise, clinically reasonable options (a pair-construction sketch follows this list).
  3. Reasoning enrichment (GRPO, CoT)
Group Relative Policy Optimization without a critic; groups of sampled solutions are scored by verifiable rewards (answer correctness + light format checks). Trained with MedReason QA signal (a toy reward verifier follows this list).
  4. Finalization (Arcee Fusion, MergeKit)
Selective weight fusion to preserve gains while limiting over-averaging; configured via merge_method: arcee_fusion (an example config follows this list).
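
The SFT data itself is proprietary, so purely as a hypothetical illustration, a supervised search/tool-use trace in the common chat-message format might look like the sketch below. The tool name, arguments, and retrieved text are invented, not the actual training data.

# Hypothetical SFT tool-use trace; tool name, arguments, and contents are
# invented for illustration and are NOT the proprietary training data.
sft_trace = [
    {"role": "system", "content": "You are a careful medical assistant with access to a search tool."},
    {"role": "user", "content": "First-line antibiotic for uncomplicated cellulitis?"},
    {"role": "assistant", "tool_calls": [{
        "type": "function",
        "function": {"name": "search",
                     "arguments": '{"query": "first-line antibiotic uncomplicated cellulitis"}'},
    }]},
    {"role": "tool", "content": "Guideline snippet: oral cephalexin is commonly first-line for mild, non-purulent cellulitis."},
    {"role": "assistant", "content": "For mild, non-purulent cellulitis, oral cephalexin is a typical "
                                     "first-line choice. This is not medical advice."},
]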
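The exact DPO recipe is not published; a minimal sketch of turning MedMCQA correctness into (chosen, rejected) pairs could look like the following. The field names (question, opa-opd, cop) follow the openlifescienceai/medmcqa dataset on the Hub; the pairing strategy itself is an assumption.

from datasets import load_dataset

ds = load_dataset("openlifescienceai/medmcqa", split="train")

def to_dpo_pair(row):
    # cop is the index (0-3) of the correct option in MedMCQA.
    options = [row["opa"], row["opb"], row["opc"], row["opd"]]
    wrong = [o for i, o in enumerate(options) if i != row["cop"]]
    return {
        "prompt": row["question"],
        "chosen": options[row["cop"]],  # preferred completion
        "rejected": wrong[0],           # one incorrect option as dispreferred
    }

# The resulting columns match what common DPO trainers expect.
pairs = ds.map(to_dpo_pair, remove_columns=ds.column_names)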
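The card describes the GRPO reward as answer correctness plus light format checks. A toy verifier in that spirit might look like this; the "Final answer:" convention, the regex, and the weights are all assumptions, not the authors' exact setup.

import re

def grpo_reward(completion: str, gold_answer: str) -> float:
    # Toy verifiable reward: format bonus + answer correctness + brevity.
    # Tag convention and weights are assumptions, not the published recipe.
    reward = 0.0
    m = re.search(r"Final answer:\s*(.+)", completion, flags=re.IGNORECASE)
    if m:
        reward += 0.2  # light format check passed
        if m.group(1).strip().lower() == gold_answer.strip().lower():
            reward += 1.0  # verifiable correctness
    if len(completion.split()) < 200:
        reward += 0.1  # encourage short rationales, as the card mentions
    return reward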
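For reference, a minimal MergeKit configuration using this method could look like the YAML sketch below; the model pairing and dtype are assumptions, so check the MergeKit docs for the exact schema.

# Sketch of a MergeKit config; model pairing and dtype are assumptions.
merge_method: arcee_fusion
base_model: arcee-ai/AFM-4.5B
models:
  - model: openmed-community/AFM-4.5B-OpenMed-RL-CoT
dtype: bfloat16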

Intended use & limitations

Intended: research on medical SLMs (small language models), tool-augmented retrieval demos.

Out of scope: Unsupervised patient care, generating prescriptions, and time-critical guideline decisions.


Evaluation

Author-run with the EleutherAI lm-evaluation-harness; seeds, prompts, and templates affect absolute scores.

| Benchmark | AFM-4.5B-OpenMed | AFM-4.5B (same harness) |
|---|---|---|
| MMLU | 61.10 | 55.53 |
| MMLU-Pro | 33.44 | 32.61 |
| IFEVAL | 63.55 | 63.67 |
  • MMLU-Pro increases difficulty (10 options; more reasoning-heavy); small deltas are still meaningful.
  • IFEVAL checks verifiable constraints (length, keyword counts, format, etc.).
| mmlu | AFM-4.5B-OpenMed | AFM-4.5B |
|---|---|---|
| other | | |
| clinical_knowledge | 67.55 | 65.66 |
| college_medicine | 64.74 | 54.34 |
| professional_medicine | 63.97 | 59.56 |
| virology | 49.40 | 48.19 |
| stem | | |
| anatomy | 62.96 | 56.30 |
| college_biology | 78.47 | 65.97 |
| college_chemistry | 44.00 | 37.00 |
| high_school_biology | 79.03 | 71.29 |
| high_school_chemistry | 53.20 | 43.84 |
| groups | | |
| humanities | 56.13 | 50.46 |
| other | 68.97 | 63.47 |
| social sciences | 73.25 | 68.61 |
| stem | 48.91 | 42.53 |

Reproduce (example commands)

# MMLU classic
lm_eval --model hf \
  --model_args pretrained=openmed-community/AFM-4.5B-OpenMed,parallelize=True,dtype=bfloat16,trust_remote_code=True \
  --tasks mmlu \
  --batch_size=64 \
  --apply_chat_template \
  --output_path=results \
  --fewshot_as_multiturn

MMLU-Pro (10-choice)

lm_eval --model hf \
  --model_args pretrained=openmed-community/AFM-4.5B-OpenMed,parallelize=True,dtype=bfloat16,trust_remote_code=True \
  --tasks leaderboard_mmlu_pro \
  --batch_size=64 \
  --apply_chat_template \
  --output_path=results \
  --fewshot_as_multiturn

IFEVAL (verifiable instruction following)

lm_eval --model hf \
  --model_args pretrained=openmed-community/AFM-4.5B-OpenMed,parallelize=True,dtype=bfloat16,trust_remote_code=True \
  --tasks leaderboard_ifeval \
  --batch_size=64 \
  --apply_chat_template \
  --output_path=results \
  --fewshot_as_multiturn

Quickstart (Transformers)

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "openmed-community/AFM-4.5B-OpenMed"
tok = AutoTokenizer.from_pretrained(model_id, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
{"role": "system", "content": "You are a careful medical assistant. Cite sources and warn this is not medical advice."},
{"role": "user", "content": "Briefly: cellulitis vs erysipelas differences?"}
]
prompt = tok.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tok.decode(out[0], skip_special_tokens=True))

Data & training notes

  • SFT data: Proprietary synthetic medical data + search traces.
  • DPO signal: Preferences derived from MedMCQA multiple-choice correctness.
  • GRPO reward: Answer-checking + format verifiers; MedReason used to shape faithful, short CoT.
  • No known PHI; please open an issue if you spot any.

Compatibility & licenses

  • Base model: AFM-4.5B (Arcee). Refer to the base card/blog for architecture and usage details. AFM releases are licensed under Apache 2.0.
  • Merging: MergeKit with Arcee Fusion; see repo/blog for configuration.

Additional note

We also provide a non-merged openmed-community/AFM-4.5B-OpenMed-RL-CoT checkpoint after step 3 (GRPO). In our harness, it shows better CoT behavior but a significant drop on IFEVAL. Consider it if you want maximum reasoning verbosity, then apply your own MergeKit recipe.

📂 GGUF File List

πŸ“ Filename πŸ“¦ Size ⚑ Download
AFM-4.5B-OpenMed-q2_k.gguf
LFS Q2
1.76 GB Download
AFM-4.5B-OpenMed-q4_k_m.gguf
Recommended LFS Q4
2.72 GB Download
AFM-4.5B-OpenMed-q8_0-00001-of-00002.gguf
LFS Q8
3.71 GB Download
AFM-4.5B-OpenMed-q8_0-00002-of-00002.gguf
LFS Q8
892.66 MB Download
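
To try a quantized file locally, one option is the llama-cpp-python bindings. A minimal sketch, assuming you have downloaded the recommended Q4_K_M file into the working directory:

from llama_cpp import Llama

# Point model_path at whichever GGUF file you downloaded above.
llm = Llama(model_path="AFM-4.5B-OpenMed-q4_k_m.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a careful medical assistant. This is not medical advice."},
        {"role": "user", "content": "Briefly: cellulitis vs erysipelas differences?"},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])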