---
license: other
license_name: health-ai-developer-foundations
license_link: https://developers.google.com/health-ai-developer-foundations/terms
library_name: transformers
pipeline_tag: image-text-to-text
extra_gated_heading: Access MedGemma on Hugging Face
extra_gated_prompt: >-
  To access MedGemma on Hugging Face, you're required to review and agree to
  Health AI Developer Foundation's terms of use. To do this, please ensure
  you're logged in to Hugging Face and click below. Requests are processed
  immediately.
extra_gated_button_content: Acknowledge license
base_model:
- google/medgemma-4b-it
tags:
- medical
- unsloth
- radiology
- clinical-reasoning
- dermatology
- pathology
- ophthalmology
- chest-x-ray
---

Model Description
Unsloth Dynamic 2.0 achieves superior accuracy & outperforms other leading quants.
MedGemma model card
Model documentation: MedGemma
Resources:
- Model on Google Cloud Model Garden: MedGemma
- Model on Hugging Face: MedGemma
- GitHub repository (supporting code, Colab notebooks, discussions, and issues): MedGemma
- Quick start notebook: GitHub
- Fine-tuning notebook: GitHub
- Concept applications built using MedGemma: Collection
- Support: See Contact
- License: The use of MedGemma is governed by the Health AI Developer Foundations terms of use
Author: Google
Model information
This section describes the MedGemma model and how to use it.
Description
MedGemma is a collection of Gemma 3
variants that are trained for performance on medical text and image
comprehension. Developers can use MedGemma to accelerate building
healthcare-based AI applications. MedGemma currently comes in three variants: a
4B multimodal version and 27B text-only and multimodal versions.
Both MedGemma multimodal versions utilize a
SigLIP image encoder that has been
specifically pre-trained on a variety of de-identified medical data, including
chest X-rays, dermatology images, ophthalmology images, and histopathology
slides. Their LLM components are trained on a diverse set of medical data,
including medical text, medical question-answer pairs, FHIR-based electronic
health record data (27B multimodal only), radiology images, histopathology
patches, ophthalmology images, and dermatology images.
MedGemma 4B is available in both pre-trained (suffix: `-pt`) and
instruction-tuned (suffix: `-it`) versions. The instruction-tuned version is a
better starting point for most applications. The pre-trained version is
available for those who want to experiment more deeply with the models.
MedGemma 27B multimodal has pre-training on medical image and medical record
comprehension tasks. MedGemma 27B text-only has been trained exclusively on
medical text. Both models have been optimized for inference-time computation on
medical reasoning; the text-only variant has slightly higher performance on
some text benchmarks than MedGemma 27B multimodal. Users who want to work with
a single model for medical text, medical record, and medical image tasks are
better served by MedGemma 27B multimodal; those who only need text use cases
may be better served by the text-only variant. Both MedGemma 27B variants are
available only as instruction-tuned versions.
MedGemma variants have been evaluated on a range of clinically relevant
benchmarks to illustrate their baseline performance. These evaluations are based
on both open benchmark datasets and curated datasets. Developers can fine-tune
MedGemma variants for improved performance. Consult the Intended
Use
section below for more details.
MedGemma is optimized for medical applications that involve a text generation
component. For medical image-based applications that do not involve text
generation, such as data-efficient classification, zero-shot classification, or
content-based or semantic image retrieval, the MedSigLIP image
encoder
is recommended. MedSigLIP is based on the same image encoder that powers
MedGemma.
Please consult the MedGemma Technical Report
for more details.
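For the image-only use cases mentioned above, a minimal zero-shot classification sketch using the Transformers `zero-shot-image-classification` pipeline is shown below. The checkpoint name `google/medsiglip-448` and the candidate labels are assumptions for illustration; substitute the actual MedSigLIP model ID and labels relevant to your task.

```python
# Hedged sketch: zero-shot image classification with a SigLIP-style encoder.
# The checkpoint name and the candidate labels below are assumptions; replace
# them with the actual MedSigLIP model ID and task-specific labels.
from transformers import pipeline
from PIL import Image
import requests

classifier = pipeline(
    "zero-shot-image-classification",
    model="google/medsiglip-448",  # assumed MedSigLIP checkpoint ID
)

image_url = "https://upload.wikimedia.org/wikipedia/commons/c/c8/ChestXrayPA3-8-2010.png"
image = Image.open(requests.get(image_url, headers={"User-Agent": "example"}, stream=True).raw)

# Candidate labels are short text prompts scored against the image.
results = classifier(
    image,
    candidate_labels=["normal chest X-ray", "chest X-ray with pleural effusion"],
)
print(results)
```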
How to use
Below are some example code snippets to help you quickly get started running the
model locally on GPU. If you want to use the model at scale, we recommend that
you create a production version using Model
Garden.
First, install the Transformers library. Gemma 3 is supported starting from
transformers 4.50.0.
```sh
$ pip install -U transformers
```
Run model with the `pipeline` API
```python
from transformers import pipeline
from PIL import Image
import requests
import torch

pipe = pipeline(
    "image-text-to-text",
    model="google/medgemma-4b-it",
    torch_dtype=torch.bfloat16,
    device="cuda",
)

# Image attribution: Stillwaterising, CC0, via Wikimedia Commons
image_url = "https://upload.wikimedia.org/wikipedia/commons/c/c8/ChestXrayPA3-8-2010.png"
image = Image.open(requests.get(image_url, headers={"User-Agent": "example"}, stream=True).raw)

messages = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are an expert radiologist."}]
    },
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this X-ray"},
            {"type": "image", "image": image}
        ]
    }
]

output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
```
Run the model directly
```python
# pip install accelerate
from transformers import AutoProcessor, AutoModelForImageTextToText
from PIL import Image
import requests
import torch

model_id = "google/medgemma-4b-it"

model = AutoModelForImageTextToText.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_id)

# Image attribution: Stillwaterising, CC0, via Wikimedia Commons
image_url = "https://upload.wikimedia.org/wikipedia/commons/c/c8/ChestXrayPA3-8-2010.png"
image = Image.open(requests.get(image_url, headers={"User-Agent": "example"}, stream=True).raw)

messages = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are an expert radiologist."}]
    },
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this X-ray"},
            {"type": "image", "image": image}
        ]
    }
]

inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt"
).to(model.device, dtype=torch.bfloat16)

input_len = inputs["input_ids"].shape[-1]

with torch.inference_mode():
    generation = model.generate(**inputs, max_new_tokens=200, do_sample=False)
    generation = generation[0][input_len:]

decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
```
Examples
See the following Colab notebooks for examples of how to use MedGemma:
- To give the model a quick try, running it locally with weights from Hugging Face, see the Quick start notebook in Colab.
- For an example of fine-tuning the 4B model, see the Fine-tuning notebook in Colab.
Model architecture overview
The MedGemma model is built based on Gemma 3 and
uses the same decoder-only transformer architecture as Gemma 3. To read more
about the architecture, consult the Gemma 3 model
card.
Technical specifications
- Model type: Decoder-only Transformer architecture, see the Gemma 3 technical report
- Input Modalities: Text, vision
- Output Modality: Text only
- Attention mechanism: Grouped-query attention (GQA)
- Context length: Supports long context, at least 128K tokens
- Key publication: https://arxiv.org/abs/2507.05201
- Model created: July 9, 2025
- Model version: 1.0.1
Citation
When using this model, please cite: Sellergren et al. "MedGemma Technical
Report." arXiv preprint arXiv:2507.05201 (2025).
```bibtex
@article{sellergren2025medgemma,
  title={MedGemma Technical Report},
  author={Sellergren, Andrew and Kazemzadeh, Sahar and Jaroensri, Tiam and Kiraly, Atilla and Traverse, Madeleine and Kohlberger, Timo and Xu, Shawn and Jamil, Fayaz and Hughes, Cían and Lau, Charles and others},
  journal={arXiv preprint arXiv:2507.05201},
  year={2025}
}
```
Inputs and outputs
Input:
- Text string, such as a question or prompt
- Images, normalized to 896 x 896 resolution and encoded to 256 tokens each
- Total input length of 128K tokens
Output:
- Generated text in response to the input, such as an answer to a question, analysis of image content, or a summary of a document
- Total output length of 8192 tokens (a rough token-budget sketch is shown after this list)
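As a rough illustration of these limits, the sketch below estimates how many input tokens remain for text once images are accounted for; the 256-token per-image cost and the 128K input window come from the list above, and the image count in the example is arbitrary.

```python
# Rough token-budget arithmetic based on the figures above:
# each image is encoded to 256 tokens and the total input window is 128K tokens.
TOKENS_PER_IMAGE = 256
MAX_INPUT_TOKENS = 128 * 1024   # 131072
MAX_OUTPUT_TOKENS = 8192

def remaining_text_budget(num_images: int) -> int:
    """Input tokens left for text after accounting for the given number of images."""
    return MAX_INPUT_TOKENS - num_images * TOKENS_PER_IMAGE

# Example: a prompt with 4 images leaves 131072 - 1024 = 130048 tokens for text.
print(remaining_text_budget(4))
```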
Performance and validation
MedGemma was evaluated across a range of different multimodal classification,
report generation, visual question answering, and text-based tasks.
Key performance metrics
#### Imaging evaluations
The multimodal performance of MedGemma 4B and 27B multimodal was evaluated
across a range of benchmarks, focusing on radiology, dermatology,
histopathology, ophthalmology, and multimodal clinical reasoning.
MedGemma 4B outperforms the base Gemma 3 4B model across all tested multimodal
health benchmarks.
Task and metric | Gemma 3 4B | MedGemma 4B
---|---|---
**Medical image classification** | |
MIMIC CXR - macro F1 for top 5 conditions** | 81.2 | 88.9
CheXpert CXR - macro F1 for top 5 conditions | 32.6 | 48.1
CXR14 - macro F1 for 3 conditions | 32.0 | 50.1
PathMCQA (histopathology, internal*) - Accuracy | 37.1 | 69.8
US-DermMCQA* - Accuracy | 52.5 | 71.8
EyePACS* (fundus, internal) - Accuracy | 14.4 | 64.9
**Visual question answering** | |
SLAKE (radiology) - Tokenized F1 | 40.2 | 72.3
VQA-RAD** (radiology) - Tokenized F1 | 33.6 | 49.9
**Knowledge and reasoning** | |
MedXpertQA (text + multimodal questions) - Accuracy | 16.4 | 18.8

\*\* Based on radiologist-adjudicated labels, described in Yang (2024, arXiv), Section A.1.1.

\* Based on "balanced split," described in Yang (2024, arXiv).
#### Chest X-ray report generation
MedGemma chest X-ray (CXR) report generation performance was evaluated on
MIMIC-CXR using the RadGraph
F1 metric. We compare the MedGemma
pre-trained checkpoint with our previous best model for CXR report generation,
PaliGemma 2.
Metric | MedGemma 4B (pre-trained) | MedGemma 4B (tuned for CXR) | PaliGemma 2 3B (tuned for CXR) | PaliGemma 2 10B (tuned for CXR)
---|---|---|---|---
MIMIC CXR - RadGraph F1 | 29.5 | 30.3 | 28.8 | 29.5
The instruction-tuned versions of MedGemma 4B and MedGemma 27B achieve lower
scores (21.9 and 21.3, respectively) due to the differences in reporting style
compared to the MIMIC ground truth reports. Further fine-tuning on MIMIC reports
enables users to achieve improved performance, as shown by the improved
performance of the MedGemma 4B model that was tuned for CXR.
#### Text evaluations
MedGemma 4B and text-only MedGemma 27B were evaluated across a range of
text-only benchmarks for medical knowledge and reasoning.
The MedGemma models outperform their respective base Gemma models across all
tested text-only health benchmarks.
Metric | Gemma 3 4B | MedGemma 4B |
---|---|---|
MedQA (4-op) | 50.7 | 64.4 |
MedMCQA | 45.4 | 55.7 |
PubMedQA | 68.4 | 73.4 |
MMLU Med | 67.2 | 70.0 |
MedXpertQA (text only) | 11.6 | 14.2 |
AfriMed-QA (25 question test set) | 48.0 | 52.0 |
#### Medical record evaluations
All models were evaluated on a question-answer dataset drawn from synthetic
FHIR patient records. MedGemma 27B multimodal's FHIR-specific training gives it
a significant improvement over the other MedGemma and Gemma models.
Metric | Gemma 3 4B | MedGemma 4B |
---|---|---|
EHRQA | 70.9 | 67.6 |
Ethics and safety evaluation
#### Evaluation approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
- Child safety: Evaluation of text-to-text and image-to-text prompts
- Content safety: Evaluation of text-to-text and image-to-text prompts
- Representational harms: Evaluation of text-to-text and image-to-text
- General medical harms: Evaluation of text-to-text and image-to-text
In addition to development level evaluations, we conduct "assurance evaluations"
which are our "arms-length" internal evaluations for responsibility governance
decision making. They are conducted separately from the model development team,
to inform decision making about release. High-level findings are fed back to the
model team, but prompt sets are held out to prevent overfitting and preserve the
results' ability to inform decision making. Notable assurance evaluation results
are reported to our Responsibility & Safety Council as part of release review.
#### Evaluation results
For all areas of safety testing, we saw safe levels of performance across the
categories of child safety, content safety, and representational harms. All
testing was conducted without safety filters to evaluate the model capabilities
and behaviors. For text-to-text, image-to-text, and audio-to-text, and across
both MedGemma model sizes, the model produced minimal policy violations. A
limitation of our evaluations was that they included primarily English language
prompts.
Data card
Dataset overview
#### Training
The base Gemma models are pre-trained on a large corpus of text and code data.
MedGemma multimodal variants utilize a
SigLIP image encoder that has been
specifically pre-trained on a variety of de-identified medical data, including
radiology images, histopathology images, ophthalmology images, and dermatology
images. Their LLM component is trained on a diverse set of medical data,
including medical text, medical question-answer pairs, FHIR-based electronic
health record data (27B multimodal only), radiology images, histopathology
patches, ophthalmology images, and dermatology images.
#### Evaluation
MedGemma models have been evaluated on a comprehensive set of clinically
relevant benchmarks, including over 22 datasets across 6 different tasks and 4
medical image modalities. These benchmarks include both open and internal
datasets.
#### Source
MedGemma utilizes a combination of public and private datasets.
This model was trained on diverse public datasets including MIMIC-CXR (chest
X-rays and reports), Chest ImaGenome (bounding boxes linking image findings
with anatomical regions in MIMIC-CXR; MedGemma 27B multimodal only), SLAKE
(multimodal medical images and questions), PAD-UFES-20 (skin lesion images and
data), SCIN (dermatology images), TCGA (cancer genomics data), CAMELYON (lymph
node histopathology images), PMC-OA (biomedical literature with images), and
Mendeley Digital Knee X-Ray (knee X-rays).
Additionally, multiple diverse proprietary datasets were licensed and
incorporated (described next).
Data Ownership and Documentation
- MIMIC-CXR: MIT Laboratory
- Slake-VQA: The Hong Kong Polytechnic
- PAD-UFES-20: Federal
- SCIN: A collaboration
- TCGA (The Cancer Genome Atlas): A joint
- CAMELYON: The data was
- MedQA: This dataset was created by a
- AfriMed-QA: This data was developed and led by
- VQA-RAD: This dataset was
- Chest ImaGenome: IBM
- MedXpertQA: This
In addition to the public datasets listed above, MedGemma was also trained on
de-identified, licensed datasets or datasets collected internally at Google from
consented participants.
- Radiology dataset 1: De-identified dataset of different CT studies
- Ophthalmology dataset 1 (EyePACS): De-identified dataset of fundus
- Dermatology dataset 1: De-identified dataset of teledermatology skin
- Dermatology dataset 2: De-identified dataset of skin cancer images (both
- Dermatology dataset 3: De-identified dataset of non-diseased skin images
- Pathology dataset 1: De-identified dataset of histopathology H\&E whole
- Pathology dataset 2: De-identified dataset of lung histopathology H\&E
- Pathology dataset 3: De-identified dataset of prostate and lymph node
- Pathology dataset 4: De-identified dataset of histopathology whole slide
- EHR dataset 1: Question/answer dataset drawn from synthetic FHIR records
Data citation
- MIMIC-CXR: Johnson, A., Pollard, T., Mark, R., Berkowitz, S., & Horng,
- SLAKE: Liu, Bo, Li-Ming Zhan, Li Xu, Lin Ma, Yan Yang, and Xiao-Ming Wu.
- PAD-UFES-20: Pacheco, Andre GC, et al. "PAD-UFES-20: A skin lesion
- SCIN: Ward, Abbi, Jimmy Li, Julie Wang, Sriram Lakshminarasimhan, Ashley
- TCGA: The results shown here are in whole or part based upon data
- CAMELYON16: Ehteshami Bejnordi, Babak, Mitko Veta, Paul Johannes van
- Mendeley Digital Knee X-Ray: Gornale, Shivanand; Patravali, Pooja
- VQA-RAD: Lau, Jason J., Soumya Gayen, Asma Ben Abacha, and Dina
- Chest ImaGenome: Wu, J., Agu, N., Lourentzou, I., Sharma, A., Paguio,
- MedQA: Jin, Di, Eileen Pan, Nassim Oufattole, Wei-Hung Weng, Hanyi Fang,
- AfriMed-QA: Olatunji, Tobi, Charles Nimo, Abraham Owodunni, Tassallah
- MedExpQA: Alonso, I., Oronoz, M., & Agerri, R. (2024). MedExpQA:
- MedXpertQA: Zuo, Yuxin, Shang Qu, Yifei Li, Zhangren Chen, Xuekai Zhu,
De-identification/anonymization:
Google and its partners utilize datasets that have been rigorously anonymized or
de-identified to ensure the protection of individual research participants and
patient privacy.
Implementation information
Details about the model internals.
Software
Training was done using JAX.
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
Use and limitations
Intended use
MedGemma is an open multimodal generative AI model intended to be used as a
starting point that enables more efficient development of downstream healthcare
applications involving medical text and images. MedGemma is intended for
developers in the life sciences and healthcare space. Developers are responsible
for training, adapting and making meaningful changes to MedGemma to accomplish
their specific intended use. MedGemma models can be fine-tuned by developers
using their own proprietary data for their specific tasks or solutions.
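As one possible starting point for such adaptation, below is a minimal parameter-efficient fine-tuning sketch using the `peft` library. The LoRA rank, target modules, and other hyperparameters are illustrative assumptions rather than values from the official fine-tuning notebook, and the actual training loop (for example with `trl`) is omitted.

```python
# Hedged sketch: wrapping MedGemma 4B with LoRA adapters via peft.
# Hyperparameters and target modules below are illustrative assumptions only;
# consult the official fine-tuning notebook for tested settings.
import torch
from transformers import AutoModelForImageTextToText, AutoProcessor
from peft import LoraConfig, get_peft_model

model_id = "google/medgemma-4b-it"
model = AutoModelForImageTextToText.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

lora_config = LoraConfig(
    r=16,                      # adapter rank (assumed)
    lora_alpha=32,             # scaling factor (assumed)
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights are trainable
```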
MedGemma is based on Gemma 3 and has been further trained on medical images and
text. MedGemma enables further development in any medical context (image and
textual); however, the model was pre-trained using chest X-ray, pathology,
dermatology, and fundus images. Examples of tasks within MedGemma's training
include visual question answering pertaining to medical images, such as
radiographs, and answering textual medical questions. Full details of all the
tasks MedGemma has been evaluated on can be found in the MedGemma Technical
Report.
Benefits
- Provides strong baseline medical image and text comprehension for models of its size.
- This strong performance makes it efficient to adapt for downstream healthcare-based use cases, compared to models of similar size without medical data pre-training.
- This adaptation may involve prompt engineering, grounding, agentic orchestration, or fine-tuning, depending on the use case, baseline validation requirements, and desired performance characteristics.
Limitations
MedGemma is not intended to be used without appropriate validation, adaptation
and/or making meaningful modification by developers for their specific use case.
The outputs generated by MedGemma are not intended to directly inform clinical
diagnosis, patient management decisions, treatment recommendations, or any other
direct clinical practice applications. Performance benchmarks highlight baseline
capabilities on relevant benchmarks, but even for image and text domains that
constitute a substantial portion of training data, inaccurate model output is
possible. All outputs from MedGemma should be considered preliminary and require
independent verification, clinical correlation, and further investigation
through established research and development methodologies.
MedGemma's multimodal capabilities have been primarily evaluated on single-image
tasks. MedGemma has not been evaluated in use cases that involve comprehension
of multiple images.
MedGemma has not been evaluated or optimized for multi-turn applications.
MedGemma's training may make it more sensitive to the specific prompt used than
Gemma 3.
When adapting MedGemma, developers should consider the following:
- Bias in validation data: As with any research, developers should ensure that any downstream application is validated using data that appropriately represents the intended use setting for the specific application (e.g., age, sex, gender, condition, imaging device, etc.).
- Data contamination concerns: When evaluating the generalization capabilities of a large model like MedGemma in a medical context, there is a risk of data contamination, where the model might have inadvertently seen related medical information during its pre-training, potentially overestimating its true ability to generalize to novel medical concepts.
Release notes
- May 20, 2025: Initial Release
- July 9, 2025: Bug fix addressing a subtle degradation in multimodal performance.
GGUF File List

Filename | Quant type | Size
---|---|---
medgemma-4b-it-BF16.gguf | BF16 | 7.23 GB
medgemma-4b-it-IQ4_NL.gguf | IQ4_NL | 2.2 GB
medgemma-4b-it-IQ4_XS.gguf | IQ4_XS | 2.11 GB
medgemma-4b-it-Q2_K.gguf | Q2_K | 1.61 GB
medgemma-4b-it-Q2_K_L.gguf | Q2_K_L | 1.61 GB
medgemma-4b-it-Q3_K_M.gguf | Q3_K_M | 1.95 GB
medgemma-4b-it-Q3_K_S.gguf | Q3_K_S | 1.8 GB
medgemma-4b-it-Q4_0.gguf (recommended) | Q4_0 | 2.21 GB
medgemma-4b-it-Q4_1.gguf | Q4_1 | 2.39 GB
medgemma-4b-it-Q4_K_M.gguf | Q4_K_M | 2.32 GB
medgemma-4b-it-Q4_K_S.gguf | Q4_K_S | 2.21 GB
medgemma-4b-it-Q5_K_M.gguf | Q5_K_M | 2.64 GB
medgemma-4b-it-Q5_K_S.gguf | Q5_K_S | 2.57 GB
medgemma-4b-it-Q6_K.gguf | Q6_K | 2.97 GB
medgemma-4b-it-Q8_0.gguf | Q8_0 | 3.85 GB
medgemma-4b-it-UD-IQ1_M.gguf | IQ1_M (Unsloth Dynamic) | 1.16 GB
medgemma-4b-it-UD-IQ1_S.gguf | IQ1_S (Unsloth Dynamic) | 1.1 GB
medgemma-4b-it-UD-IQ2_M.gguf | IQ2_M (Unsloth Dynamic) | 1.46 GB
medgemma-4b-it-UD-IQ2_XXS.gguf | IQ2_XXS (Unsloth Dynamic) | 1.25 GB
medgemma-4b-it-UD-IQ3_XXS.gguf | IQ3_XXS (Unsloth Dynamic) | 1.59 GB
medgemma-4b-it-UD-Q2_K_XL.gguf | Q2_K_XL (Unsloth Dynamic) | 1.65 GB
medgemma-4b-it-UD-Q3_K_XL.gguf | Q3_K_XL (Unsloth Dynamic) | 2 GB
medgemma-4b-it-UD-Q4_K_XL.gguf | Q4_K_XL (Unsloth Dynamic) | 2.37 GB
medgemma-4b-it-UD-Q5_K_XL.gguf | Q5_K_XL (Unsloth Dynamic) | 2.64 GB
medgemma-4b-it-UD-Q6_K_XL.gguf | Q6_K_XL (Unsloth Dynamic) | 3.32 GB
medgemma-4b-it-UD-Q8_K_XL.gguf | Q8_K_XL (Unsloth Dynamic) | 4.81 GB
mmproj-BF16.gguf | BF16 (vision projector) | 811.82 MB
mmproj-F16.gguf | F16 (vision projector) | 811.82 MB
mmproj-F32.gguf | F32 (vision projector) | 1.56 GB
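For a quick local test of one of the quantized files above outside of Transformers, here is a minimal text-only sketch using the `llama-cpp-python` bindings; the model path is an assumption pointing at a downloaded file from this list. Image input additionally requires one of the `mmproj` projector files together with a multimodal-capable llama.cpp build, which is not covered by this sketch.

```python
# Hedged sketch: text-only inference on a downloaded GGUF quant via llama-cpp-python.
# The model path is an assumption -- point it at whichever quant you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="medgemma-4b-it-Q4_0.gguf",  # recommended quant from the table above
    n_ctx=8192,         # context window to allocate
    n_gpu_layers=-1,    # offload all layers to GPU if available
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful medical assistant."},
        {"role": "user", "content": "How do you differentiate bacterial from viral pneumonia?"},
    ],
    max_tokens=300,
)
print(response["choices"][0]["message"]["content"])
```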