Main Session
Oct 01
QP 26 - Radiation and Cancer Physics 12: AI Application in Imaging and Treatment

1153 - Toward Digital Detectors; Synthetic Image Generation for a Novel Dual-Layer kV Imager

12:20pm - 12:25pm PT
Room 154

Presenter(s)

Dianne Ferguson, PhD, DABR - Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA

D. Ferguson1, F. De Kermenguy1, Y. H. Hu1, M. Jacobson1, M. Myronakis1, R. Etemadpour1, T. C. Harris1, V. Birrer2, R. Bruegger2, P. Corral Arroyo2, R. Fueglistaller2, M. Lehmann2, and R. Berbeco3; 1Department of Radiation Oncology, Brigham and Women's Hospital, Dana Farber Cancer Institute and Harvard Medical School, Boston, MA, 2Varian Medical Systems, Baden-Dättwil, Switzerland, 3Brigham and Women's Hospital, Boston, MA

Purpose/Objective(s):

Modern radiation therapy relies on precise imaging before treatment delivery to ensure accurate tumor targeting. The most widely used imaging technologies for this purpose are kV X-ray flat-panel imagers mounted on the treatment machine. We have been developing a new generation of these devices, including the dual-layer kV imager (DLI), which simultaneously acquires two images at different average beam energies. This panel construction enables spectral image processing, including techniques that enhance or suppress the appearance of specific tissue types, potentially improving tumor visibility at patient setup. Despite the capabilities of this technology, its potential for global impact is limited by the practical and financial challenges of large-scale production and dissemination. As an alternative to a new physical imager, we evaluate a deep learning neural network that accurately generates the second-layer image, which would allow any currently installed imaging panel to function as a DLI through synthetic image generation.
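
One of the simplest spectral-processing operations enabled by two energy-separated images is weighted log subtraction, which cancels a chosen material. The sketch below is a generic illustration in Python/NumPy, not the DLI's actual processing chain; the weight w and the stand-in image arrays are assumptions for demonstration only.

```python
import numpy as np

def weighted_log_subtraction(top_layer, bottom_layer, w=0.6):
    """Generic dual-energy tissue cancellation (illustrative only).

    top_layer / bottom_layer: transmission images at lower / higher
    average energy. w is a material-cancellation weight, tuned so a
    target tissue (e.g., bone) vanishes; 0.6 is an arbitrary stand-in.
    """
    eps = 1e-6  # guard against log(0) in fully attenuated pixels
    return np.log(bottom_layer + eps) - w * np.log(top_layer + eps)

# Demo on random stand-in images normalized to (0, 1].
rng = np.random.default_rng(0)
top = rng.uniform(0.1, 1.0, size=(768, 1024))
bottom = rng.uniform(0.1, 1.0, size=(768, 1024))
soft_tissue_view = weighted_log_subtraction(top, bottom)
```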

Materials/Methods:

A dataset of 456 patient projection images was used to train a conditional generative adversarial network (cGAN) to predict the second-layer image of the DLI. The dataset spans a variety of anatomical sites and imaging protocols, chosen to evaluate the model's robustness to complex data. The trained cGAN comprises two networks: a generator and a discriminator. The generator is an encoder-decoder model with a U-Net architecture that takes the true top-layer image of the DLI and generates a synthetic bottom-layer image; the discriminator is a deep convolutional neural network that performs image classification, taking both the true and synthetic images and producing a likelihood that the synthetic image is fake. The model was trained for 100 epochs, and the synthetic images, generated from a test dataset of 250 patient projections, were evaluated using common image similarity metrics, including the mean squared error (MSE) and the structural similarity index measure (SSIM). A minimal sketch of such a generator-discriminator pairing is shown below.
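
The sketch assumes a pix2pix-style conditional GAN in PyTorch. The layer sizes, L1 loss weight, optimizer settings, and stand-in data are illustrative assumptions; the abstract specifies only a U-Net generator, a convolutional discriminator, and 100 training epochs.

```python
# Minimal pix2pix-style cGAN sketch (PyTorch); sizes and weights are
# illustrative assumptions, not the study's actual configuration.
import torch
import torch.nn as nn

class UNetGenerator(nn.Module):
    """Small encoder-decoder with a skip connection: top-layer image
    in, synthetic bottom-layer image out."""
    def __init__(self, ch=32):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(1, ch, 4, 2, 1), nn.LeakyReLU(0.2))
        self.enc2 = nn.Sequential(nn.Conv2d(ch, ch * 2, 4, 2, 1),
                                  nn.BatchNorm2d(ch * 2), nn.LeakyReLU(0.2))
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(ch * 2, ch, 4, 2, 1),
                                  nn.BatchNorm2d(ch), nn.ReLU())
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(ch * 2, 1, 4, 2, 1),
                                  nn.Sigmoid())

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        d1 = self.dec1(e2)
        return self.dec2(torch.cat([d1, e1], dim=1))  # skip connection

class Discriminator(nn.Module):
    """CNN classifier: (top-layer, candidate bottom-layer) pair in,
    per-patch real/fake logits out."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, ch, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, ch * 2, 4, 2, 1),
            nn.BatchNorm2d(ch * 2), nn.LeakyReLU(0.2),
            nn.Conv2d(ch * 2, 1, 4, 1, 1))

    def forward(self, top, bottom):
        return self.net(torch.cat([top, bottom], dim=1))

G, D = UNetGenerator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
adv_loss, l1_loss = nn.BCEWithLogitsLoss(), nn.L1Loss()

def train_step(top, bottom, lambda_l1=100.0):
    # Discriminator update: real pairs -> 1, generated pairs -> 0.
    fake = G(top)
    d_real = D(top, bottom)
    d_fake = D(top, fake.detach())
    loss_d = 0.5 * (adv_loss(d_real, torch.ones_like(d_real)) +
                    adv_loss(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator update: fool D while staying close to the true bottom
    # layer via an L1 term (standard pix2pix objective).
    d_fake = D(top, fake)
    loss_g = (adv_loss(d_fake, torch.ones_like(d_fake)) +
              lambda_l1 * l1_loss(fake, bottom))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# One step on random stand-in data (batch of 4 single-channel images).
top, bottom = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)
print(train_step(top, bottom))
```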

Results:

Comparing the generated and true bottom-layer images over the test dataset using MSE and SSIM, the generated images were found to closely reproduce the true bottom-layer images. The average MSE over the 250 image pairs was 0.0038 ± 0.0043, indicating close agreement between the generated and true pixel values, and the average SSIM was 0.993 ± 0.008, indicating strong agreement in the structural information of the image pairs.
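
As a pointer to how these metrics are typically computed, the following sketch uses scikit-image's SSIM and a NumPy MSE. Images normalized to [0, 1] and the random stand-in data are assumptions for illustration, not the study's pipeline.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def pair_metrics(true_img, synth_img):
    """MSE and SSIM for one true/synthetic bottom-layer image pair,
    assuming pixel values in [0, 1]."""
    mse = float(np.mean((true_img - synth_img) ** 2))
    return mse, ssim(true_img, synth_img, data_range=1.0)

# Aggregate mean and std over a (stand-in) test set, as in the Results.
rng = np.random.default_rng(0)
truth = [rng.uniform(0.0, 1.0, size=(256, 256)) for _ in range(5)]
synth = [np.clip(t + 0.02 * rng.standard_normal(t.shape), 0, 1)
         for t in truth]
scores = np.array([pair_metrics(t, s) for t, s in zip(truth, synth)])
print(f"MSE  = {scores[:, 0].mean():.4f} +/- {scores[:, 0].std():.4f}")
print(f"SSIM = {scores[:, 1].mean():.3f} +/- {scores[:, 1].std():.3f}")
```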

Conclusion:

A cGAN model was trained to generate the bottom-layer image of the DLI from the top-layer image. Evaluated against standard similarity metrics, the model was shown to accurately reproduce the pixel and structural information captured in the second-layer image.