3671 - No-New-Network: Adapting Existing Deep Learning Frameworks for Versatile Clinical Applications in Radiotherapy
Presenter(s)
W. Lu1, M. Chen1, Q. Wang1, M. Kazemimoghadam1, K. Zhang2, H. Jiang1, and X. Gu3; 1Medical Artificial Intelligence and Automation (MAIA) Lab, Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX, 2Stanford University, Stanford, CA, 3Stanford University Department of Radiation Oncology, Palo Alto, CA
Purpose/Objective(s): Developing deep learning models for specific clinical tasks can be resource-intensive. This work demonstrates how existing deep learning frameworks can be adapted to novel data and applications instead of designing new architectures. By focusing on data exploration and task-specific adaptations, we extend the capabilities of segmentation networks to applications such as image synthesis, dose prediction, anomaly detection, and contour quality assurance. This approach maximizes the utility of established frameworks while efficiently adapting them to diverse clinical needs.
Materials/Methods: We adapted the nnU-Net framework with minimal modifications for four distinct applications. For MR-to-synthetic-CT conversion, we replaced the loss function with Huber loss to transform the network into an image-to-image model and trained three variants (T1w2sCT, T2w2sCT, and T2Flair2sCT) across different treatment sites and imaging protocols using registered MR-CT pairs. A similar strategy was applied for PET-to-synthetic-CT conversion using paired PET/CT datasets. For universal dose prediction, we incorporated three input channels—prescription, avoidance, and beam trace images—to encode dose tradeoffs and beam configurations, training the model on a diverse dataset spanning 25 treatment sites. For anomaly detection, we modified nnU-Net into an autoencoder that reconstructs segmentation masks and flags anomalies. Each adaptation required only minor modifications, demonstrating the framework’s versatility. All applications run from the same compiled code with only small variations in configuration.
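As an illustrative sketch (not the authors' code), the loss swap that converts a segmentation network into an image-to-image regressor amounts to optimizing the Huber loss on voxel intensities; the NumPy re-implementation and the `delta` value below are assumptions for illustration:

```python
import numpy as np

def huber_loss(pred, target, delta=1.0):
    """Huber loss between predicted synthetic-CT and reference CT voxels.

    Quadratic for small residuals, linear for large ones, so stray voxels
    (e.g. residual MR-CT registration error) do not dominate the gradient
    the way they would under a plain mean-squared-error loss.
    """
    r = pred - target
    quadratic = 0.5 * r ** 2
    linear = delta * (np.abs(r) - 0.5 * delta)
    return float(np.mean(np.where(np.abs(r) <= delta, quadratic, linear)))
```

With this single change, the network's output channel is interpreted as a continuous intensity map rather than class probabilities, which is the essence of the image-to-image adaptation described above.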
Results: The universal MR-to-synthetic-CT models achieved high gamma passing rates (98.6% at 2%/2mm), supporting MR-only radiotherapy planning. The PET-to-synthetic-CT model generated anatomical structures free of metal artifacts, enabling CT-free PET imaging. The dose prediction model attained a 92.36% gamma passing rate against clinical plans, demonstrating strong consistency with optimized plans. The anomaly detection model exhibited negligible reconstruction errors for normal masks, while anomalous masks produced significantly higher errors, facilitating reliable quality assurance.
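The contour quality-assurance step can be pictured as thresholding the autoencoder's reconstruction error per mask; a minimal sketch, in which the scoring function, the `threshold` value, and the example masks are all hypothetical stand-ins for the trained model's outputs:

```python
import numpy as np

def anomaly_score(mask, reconstructed):
    """Mean voxel-wise reconstruction error of a binary contour mask.

    A well-trained autoencoder reproduces normal anatomy closely, so the
    score stays near zero; anomalous or mislabeled contours reconstruct
    poorly and score higher.
    """
    return float(np.mean(np.abs(mask.astype(float) - reconstructed)))

def flag_contour(mask, reconstructed, threshold=0.01):
    """Flag a contour for review when its reconstruction error is high."""
    return anomaly_score(mask, reconstructed) > threshold
```

In practice the threshold would be calibrated on held-out normal contours so that routine masks pass while genuine anomalies are flagged.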
Conclusion: This study highlights the potential of repurposing existing deep learning frameworks for new clinical applications rather than developing entirely new architectures. By strategically adapting a segmentation network, we provide an efficient, cost-effective, and scalable approach to tackling various challenges in medical imaging and radiotherapy. This "No-New-Network" strategy minimizes coding requirements, accelerates model development, and promotes safer and more innovative solutions for clinical implementation.