Main Session
Oct 01
QP 27 - Radiation and Cancer Physics 13: Imaging for Treatment Monitoring

1156 - Orthogonal Projections-Guided Cascading Volumetric Reconstruction and Tumor-Tracking for Adaptive Intra-Fractional Radiotherapy

12:05pm - 12:10pm PT
Room 159

Presenter(s)

Gongsen Zhang, MD - Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, Shandong

G. Zhang1, Y. An1, H. Shu2, Z. Jiang3, P. Gao1, Y. Wang3, J. Zhu4, and L. Wang5; 1Artificial Intelligence Laboratory, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China, 2Laboratory of Image Science and Technology, Key Laboratory of Computer Network and Information Integration, Southeast University, Nanjing, China, 3Department of Radiation Oncology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China, 4Shandong Cancer Hospital, Shandong Provincial Key Medical and Health Laboratory of Pediatric Cancer Precision Radiotherapy, Jinan, China, 5Department of Radiation Oncology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China

Purpose/Objective(s): Respiration-induced anatomical motion and deformation pose significant challenges to precision radiotherapy delivery, particularly in the areas of intra-fractional adaptive beam modification and online dose reconstruction. To address these issues, we propose a cascading framework for time-varying anatomical volumetric reconstruction and tumor-tracking, guided by onboard orthogonal-view X-ray projections.

Materials/Methods: The framework employs multiple deep learning components that decompose volumetric tumor-mask inference into two subtasks: i) volumetric reconstruction from planar projections, and ii) tumor localization on the reconstructed CT. This decomposition elevates the dimensionality of the anatomical information in stages and allows the performance of each component to be regulated separately. The volumetric reconstruction network uses a conditional generative adversarial network (cGAN) embedded with convolutional block attention modules (CBAM) as its backbone, and an improved filtered back projection (FBP) of the input compensates for the highly limited anatomical information available from extremely sparse-view projections. For tumor localization, a hybrid network integrating Swin-Transformer and CNN blocks performs segmentation on the reconstructed time-varying CT; this component is enhanced by stacking full-scale fused features from the reconstruction component into its multi-channel inputs. In addition, the framework incorporates prior knowledge from planning images and clinical delineations throughout the cascade. For the deep learning development, validation, and evaluation pipeline, we enrolled 312 patients with non-small cell lung cancer (NSCLC) from our institution and a public dataset in The Cancer Imaging Archive (TCIA).
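The channel-then-spatial attention that CBAM contributes to the reconstruction backbone can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the `cbam` function, its `reduction` factor, and the random placeholder weights are assumptions for illustration, and the original 7x7 spatial convolution is simplified here to a per-pixel mix of the pooled maps.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cbam(feature, reduction=4, rng=None):
    """Apply CBAM-style channel then spatial attention to a (C, H, W) feature map.

    Weights are random placeholders; in a trained network they are learned.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    c, h, w = feature.shape

    # Channel attention: a shared two-layer MLP scores avg- and max-pooled
    # channel descriptors, and their sigmoid sum rescales each channel.
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    mlp = lambda d: w2 @ np.maximum(w1 @ d, 0.0)      # ReLU hidden layer
    avg_desc = feature.mean(axis=(1, 2))              # (C,)
    max_desc = feature.max(axis=(1, 2))               # (C,)
    ch_att = sigmoid(mlp(avg_desc) + mlp(max_desc))   # (C,)
    feature = feature * ch_att[:, None, None]

    # Spatial attention: pool across channels, then mix the two maps into a
    # per-pixel gate (the original CBAM uses a 7x7 convolution here).
    avg_map = feature.mean(axis=0)                    # (H, W)
    max_map = feature.max(axis=0)                     # (H, W)
    k = rng.standard_normal(2) * 0.1
    sp_att = sigmoid(k[0] * avg_map + k[1] * max_map) # (H, W)
    return feature * sp_att[None, :, :]

refined = cbam(np.random.default_rng(1).standard_normal((8, 16, 16)))
print(refined.shape)  # (8, 16, 16) -- attention preserves the feature shape
```

Because both attention stages only rescale the feature map, CBAM blocks can be inserted into the cGAN backbone without changing any layer's input or output dimensions.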

Results: The reconstructed CTs achieved a peak signal-to-noise ratio (PSNR) of 31.02 dB and a structural similarity index measure (SSIM) of 0.95, while tumor localization achieved a centroid deviation amplitude of 0.47 mm and a Dice similarity coefficient (DSC) of 0.96. Experiments on the input data indicated that the number of sparse-view projections, rather than their angles, governs reconstruction performance. The effectiveness of the network-setting improvements and workflow optimizations was validated by ablation experiments: the FBP enhancement of the input data and the embedded CBAM modules improved the PSNR of volumetric reconstruction by 2.94 dB and 0.14 dB, respectively, while the cascading localization and full-scale feature-fusion strategies improved the DSC of time-varying tumor-tracking by 0.06 and 0.05, respectively. The total time for the cascading time-varying reconstruction and tumor-tracking was 197.35±7.12 ms.

Conclusion: With a cascading volumetric reconstruction and tumor-tracking pipeline, our proposed deep learning framework represents a promising step toward image-guided adaptive intra-fractional radiotherapy.