Main Session
Sep 30
PQA 09 - Hematologic Malignancies, Health Services Research, Digital Health Innovation and Informatics

3633 - Optimizing Segmentation Precision in Prostate Cancer Adaptive Radiotherapy with the Intentional Deep Overfit Learning (IDOL) Approach

04:00pm - 05:00pm PT
Hall F
Screen: 5
POSTER

Presenter(s)

Joonil Hwang - KAIST, Daejeon, Korea, Republic of (South)

J. Hwang1, D. Choi2, E. Lee3, B. H. Kang4, Y. Park4, S. Cho5, and J. S. Kim6; 1Department of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea, Republic of (South), 2Yonsei University College of Medicine, Seoul, Korea, Republic of (South), 3Ewha Womans University College of Medicine, Seoul, Korea, Republic of (South), 4Ewha Womans University, Seoul, Korea, Republic of (South), 5Korea Advanced Institute of Science and Technology, Daejeon, Korea, Republic of (South), 6Medical Physics and Biomedical Engineering Lab (MPBEL), Yonsei University College of Medicine, Seoul, Korea, Republic of (South)

Purpose/Objective(s): Accurate segmentation of Gross Tumor Volume (GTV) and Planning Target Volume (PTV) is critical for ensuring precise dose delivery in prostate cancer adaptive radiation therapy (ART). However, anatomical variations across treatment fractions pose challenges to traditional deformable image registration (DIR)-based methods. We hypothesize that leveraging a deep learning framework capable of sequential fine-tuning can improve segmentation accuracy and consistency in ART. This study introduces the Intentional Deep Overfit Learning (IDOL) framework, which progressively refines segmentation accuracy across treatment fractions by integrating transformer-based and convolutional neural network architectures.

Materials/Methods: The IDOL framework employs two distinct fine-tuning strategies: IDOL_adaptive and IDOL_sequence. IDOL_adaptive incrementally trains the model by incorporating data from the first treatment fraction onward while testing on the remaining ones, ultimately evaluating performance on the fifth fraction. In contrast, IDOL_sequence adopts a stepwise fine-tuning approach, where each fraction is trained using the preceding fraction’s model, enabling adaptation to gradual anatomical changes. Both strategies leverage the Swin UNETR model, which combines global feature extraction from transformers with local spatial awareness from convolutional networks. Performance was assessed against traditional DIR and pre-trained models using the Dice Similarity Coefficient (DSC), 95th percentile Hausdorff Distance (HD95), and Mean Surface Distance (MSD).
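To make the two schedules concrete, the sketch below shows one way IDOL_adaptive and IDOL_sequence could be wired up in PyTorch with MONAI's SwinUNETR. The abstract does not specify an implementation, so the DiceCE loss, the AdamW optimizer, the training hyperparameters, and the get_fraction_loader stub are all assumptions introduced for illustration.

import copy

import torch
from monai.losses import DiceCELoss
from monai.networks.nets import SwinUNETR


def get_fraction_loader(fractions):
    # Stand-in for a real per-fraction DataLoader: yields random (image, label)
    # volumes purely so the sketch runs end to end.
    return [(torch.randn(1, 1, 96, 96, 96),
             torch.randint(0, 2, (1, 1, 96, 96, 96)))
            for _ in fractions]


def fine_tune(model, loader, epochs=1, lr=1e-4):
    # Fine-tune the segmentation model on one or more treatment fractions.
    loss_fn = DiceCELoss(to_onehot_y=True, softmax=True)
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for image, label in loader:
            opt.zero_grad()
            loss_fn(model(image), label).backward()
            opt.step()
    return model


# Base network (img_size is required by older MONAI releases and deprecated in
# newer ones); in practice this would load planning-stage pre-trained weights.
pretrained = SwinUNETR(img_size=(96, 96, 96), in_channels=1, out_channels=2)

# IDOL_adaptive: fine-tune on the growing pool of fractions 1..k, ultimately
# evaluating on fraction 5 (evaluation loop omitted).
adaptive = copy.deepcopy(pretrained)
for k in range(1, 5):
    adaptive = fine_tune(adaptive, get_fraction_loader(range(1, k + 1)))

# IDOL_sequence: each fraction's model starts from the previous fraction's
# weights, adapting step by step to gradual anatomical change.
sequence = copy.deepcopy(pretrained)
for k in range(1, 5):
    sequence = fine_tune(sequence, get_fraction_loader([k]))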

Results: The IDOL framework significantly enhanced segmentation accuracy over traditional methods, confirming our hypothesis. IDOL_adaptive achieved a DSC of 0.9784 for GTV segmentation in the fifth fraction, surpassing IDOL_sequence (0.9744) and the pre-trained model (0.9324). For PTV segmentation, IDOL_adaptive achieved a DSC of 0.9501. Improvements were also observed in HD95 and MSD, underscoring the framework’s robustness in mitigating segmentation inconsistencies across treatment fractions.
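For reference, the three reported metrics can be computed with MONAI's built-in metric classes; whether the authors used these particular implementations is an assumption, and the tensors below are placeholders rather than study data.

import torch
from monai.metrics import DiceMetric, HausdorffDistanceMetric, SurfaceDistanceMetric

# Placeholder binary masks of shape (batch, classes, H, W, D); real use would
# pass one-hot network predictions and ground-truth contours.
pred = torch.randint(0, 2, (1, 2, 64, 64, 64)).float()
truth = torch.randint(0, 2, (1, 2, 64, 64, 64)).float()

metrics = {
    "DSC": DiceMetric(include_background=False),
    "HD95": HausdorffDistanceMetric(include_background=False, percentile=95),
    "MSD": SurfaceDistanceMetric(include_background=False, symmetric=True),
}
for name, metric in metrics.items():
    metric(y_pred=pred, y=truth)   # accumulate per-case results
    print(name, metric.aggregate().item())
    metric.reset()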

Conclusion: The IDOL framework provides a novel and effective deep learning-based solution for ART, refining segmentation accuracy across treatment sessions through sequential fine-tuning. These findings have significant implications for clinical practice, as the framework offers a more reliable alternative to DIR-based methods, potentially improving real-time treatment adaptation and patient outcomes in prostate cancer radiotherapy. Future research will explore broader applications of this methodology to other cancer types and imaging modalities, with the goal of further optimizing ART strategies.