Main Session
Sep 29
PQA 03 - Central Nervous System, Professional Development/Medical Education

2600 - Use of a Foundation Medical Imaging Model to Improve the Performance of Automated Brain Metastasis Segmentation

08:00am - 09:00am PT
Hall F
Screen: 5
POSTER

Presenter(s)

Baozhou Sun, PhD, MBA - Baylor College of Medicine, Houston, TX

Y. Han1, E. Zhu2, P. Pathak1, O. Awad1, A. S. Mohamed1, D. A. Hamstra1, X. Zhang2, S. A. Zaid1, and B. Sun1; 1Department of Radiation Oncology, Dan L. Duncan Comprehensive Cancer Center, Baylor College of Medicine, Houston, TX, 2Nanjing Medical University, Nanjing, China

Purpose/Objective(s): Accurate segmentation of brain metastases is vital for diagnosis, treatment planning, and follow-up, but manual segmentation is labor-intensive and subject to inter-observer variability. Traditional deep learning approaches, such as 3D convolutional neural networks (3D-CNNs), are typically designed and trained for a specific segmentation task, require an extensive annotated dataset, and can degrade significantly in performance when applied to a new dataset. This study explores the potential of a foundation transformer model, the Segment Anything Model (SAM) trained on a large-scale dataset, as a more robust alternative for automated brain metastasis (BM) segmentation.

Materials/Methods: We adapted the Segment Anything in Medical Imaging (MedSAM) model, originally trained for general tissue and tumor segmentation on more than 1.5 million images, to brain metastasis segmentation. Fine-tuning used a few-shot learning approach on pretreatment T1 post-contrast MRI datasets from two institutions, comprising 301 patients with 2,548 lesions. For comparison, we trained an optimized DeepMedic-based 3D-CNN on the full training dataset. Independent evaluation was conducted on BM treatment data from a third institution, with pretreatment scans from 105 patients (397 lesions) and follow-up scans from 88 patients (338 lesions). Two radiation oncologists provided ground-truth annotations. Segmentation performance was evaluated using the Dice similarity coefficient (DSC) and the 95th-percentile Hausdorff distance (HD95), with one physician's contours as the reference. MedSAM's performance at different few-shot learning stages was compared with that of the 3D-CNN and the radiation oncologists on the pretreatment and follow-up datasets.
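
To make the few-shot fine-tuning recipe concrete, the following is a minimal sketch, not the authors' published code. It assumes a SAM-style model that exposes a `prompt_encoder` attribute and a wrapper forward pass mapping an image plus a box prompt to mask logits, and a hypothetical `few_shot_loader` that yields the small set of annotated cases; the frozen-prompt-encoder choice and the Dice + BCE loss are common SAM-adaptation conventions assumed here.

```python
# Minimal few-shot fine-tuning sketch (hypothetical; not the authors' code).
import torch
import torch.nn.functional as F

def dice_loss(logits, target, eps=1e-6):
    """Soft Dice loss on sigmoid probabilities (batch-averaged)."""
    probs = torch.sigmoid(logits)
    inter = (probs * target).sum(dim=(1, 2, 3))
    denom = probs.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return 1.0 - ((2.0 * inter + eps) / (denom + eps)).mean()

def finetune(model, few_shot_loader, epochs=20, lr=1e-5, device="cuda"):
    """model: SAM-style net whose forward maps (image, box) -> mask logits.
    few_shot_loader: yields (image, box, mask) for the N annotated patients.
    Both interfaces are assumptions about the setup, not the published recipe."""
    model.to(device).train()
    # One common SAM-adaptation choice (assumed here): freeze the prompt
    # encoder and update only the image encoder and mask decoder.
    for p in model.prompt_encoder.parameters():
        p.requires_grad_(False)
    opt = torch.optim.AdamW(
        [p for p in model.parameters() if p.requires_grad], lr=lr)
    for _ in range(epochs):
        for image, box, mask in few_shot_loader:
            image, box = image.to(device), box.to(device)
            mask = mask.to(device).float()          # (B, 1, H, W) binary mask
            logits = model(image, box)              # (B, 1, H, W) mask logits
            loss = dice_loss(logits, mask) \
                 + F.binary_cross_entropy_with_logits(logits, mask)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```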

Results: MedSAM fine-tuned with 50 cases achieved DSC/HD95 of 0.78/3.1 mm (pretreatment) and 0.76/3.3 mm (follow-up), outperforming the traditional 3D-CNN (0.70/3.5 mm and 0.65/3.63 mm, respectively) and the second physician (0.70/4.46 mm and 0.65/4.8 mm, respectively). Fine-tuned on only 10 patients, MedSAM still achieved 0.77/3.4 mm and 0.74/3.5 mm, while zero-shot MedSAM achieved 0.70/4.15 mm and 0.67/4.4 mm, respectively.
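
For readers reproducing the metrics above from binary 3D masks, the following is a minimal sketch using NumPy and SciPy. The symmetric 95th-percentile surface-distance definition of HD95 is one common convention and is assumed here, as is a default isotropic 1 mm voxel spacing; neither is specified in the abstract.

```python
# Minimal DSC / HD95 sketch for binary 3D masks (NumPy + SciPy).
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def _surface_dists(a, b, spacing):
    """Distances (in mm) from the surface voxels of a to the surface of b."""
    surf_a = a ^ binary_erosion(a)
    surf_b = b ^ binary_erosion(b)
    # Distance of every voxel to the nearest surface voxel of b.
    dist_to_b = distance_transform_edt(~surf_b, sampling=spacing)
    return dist_to_b[surf_a]

def hd95(pred, gt, spacing=(1.0, 1.0, 1.0)):
    """Symmetric 95th-percentile Hausdorff distance (one common definition)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    d_pg = _surface_dists(pred, gt, spacing)
    d_gp = _surface_dists(gt, pred, spacing)
    return max(np.percentile(d_pg, 95), np.percentile(d_gp, 95))

# Usage: dsc = dice(pred_mask, gt_mask); hd = hd95(pred_mask, gt_mask, spacing=voxel_mm)
```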

Conclusion: Despite being fine-tuned only on pretreatment data, MedSAM demonstrated strong performance on follow-up scans, suggesting improved generalizability. These findings highlight its potential as a clinically viable and data-efficient solution for automated brain metastasis segmentation, reducing annotation burden and improving consistency in clinical workflows.