Main Session
Sep 29
SS 20 - Patient Safety 1: Harnessing AI and Team Efforts to Enhance Patient Care through Workflow and Automation Improvements

218 - Improving Contouring for Radiotherapy Trainees with a Novel Real-Time Contouring Feedback Tool

11:05am - 11:15am PT
Room 160

Presenter(s)

Abraham Arenas, MD - Baylor College of Medicine, Houston, TX

A. Arenas1, R. Neuberger1, M. O. Zorigt2, T. Baatar2, S. Buukhuu2, O. Bayarsaikhan2, M. Minjgee2, S. Ahmed1, S. A. Zaid1, D. A. Hamstra1, and B. Sun1; 1Department of Radiation Oncology, Dan L. Duncan Comprehensive Cancer Center, Baylor College of Medicine, Houston, TX, 2National Cancer Center of Mongolia, Ulaanbaatar, Mongolia

Purpose/Objective(s): Accurate and timely contouring is critical for radiation oncologists and vital for safe radiotherapy delivery, especially in low- and middle-income countries (LMICs) with limited training programs. Traditional training relies on physician feedback, which demands a significant time commitment and is subject to intra- and inter-observer variability. A real-time contouring feedback tool offers expert consensus guidelines and contoured cases in an online environment: users receive instant feedback in guide mode and can assess their skills against the expert consensus in verify mode. This study evaluates the potential of this tool to improve contouring accuracy in radiotherapy trainees.

Materials/Methods: Our pilot study used head and neck contour sets. Participants included five trainees: one 4th-year US medical student and four radiation oncology trainees from an LMIC. Each contoured a head and neck organ-at-risk (OAR) set pre-intervention, recording baseline Dice Similarity Coefficient (DSC) scores. The most variable OARs were selected: L/R brachial plexus (BP), oral cavity, glottis, optic chiasm, L/R optic nerves, and L/R parotid. Trainees were then split into two groups to complete five contour sets: Group A (guide, n=3) used the real-time contouring feedback tool with guideline sets; Group B (verify, n=2) used verify mode to practice without access to the expert consensus contours. Finally, all participants completed a case in verify mode to evaluate the quality of their contours. The accuracy of contours versus the expert consensus for each OAR was assessed using DSC, and the Wilcoxon signed-rank test was used to evaluate the difference between pre- and post-training DSC scores.
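The accuracy metric used throughout, the Dice Similarity Coefficient, measures voxel-wise overlap between two binary contour masks as 2|A∩B| / (|A| + |B|). A minimal sketch is below; the function name and the toy 2-D masks are illustrative assumptions, not the study's actual implementation.

```python
import numpy as np

def dice_similarity(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """DSC between two binary masks: 2*|A intersect B| / (|A| + |B|)."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement by convention
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy 2-D slices standing in for a trainee contour and an expert contour
trainee = np.zeros((4, 4), dtype=bool)
expert = np.zeros((4, 4), dtype=bool)
trainee[1:3, 1:3] = True  # 4 voxels
expert[1:4, 1:4] = True   # 9 voxels; overlap with trainee is 4 voxels
print(round(dice_similarity(trainee, expert), 3))  # 2*4 / (4+9) -> 0.615
```

DSC ranges from 0 (no overlap) to 1 (identical contours), which is why values near 0.1 for the brachial plexus at baseline indicate near-total disagreement with the expert consensus.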

Results: Group A, with instant feedback, showed a significant increase in DSC, from a baseline mean ± SD of 0.34 ± 0.31 to 0.65 ± 0.27 post-guide, whereas Group B, using verify mode, went from a baseline of 0.46 ± 0.25 to 0.53 ± 0.23 post-verify. Representative per-structure DSC values (mean ± SD) from each group are shown in Table 1. The Wilcoxon signed-rank test demonstrated a statistically significant difference between baseline and post-guide DSC in Group A (p<0.001), whereas the verify arm (Group B) showed no statistically significant difference between baseline and final evaluation scores (p=0.09).
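The paired comparison above can be sketched with SciPy's Wilcoxon signed-rank test, which ranks the paired baseline/post-training differences rather than assuming normality. The DSC pairs below are hypothetical illustrative values, not the study's data.

```python
from scipy.stats import wilcoxon

# Hypothetical paired per-structure DSC scores (illustrative only, not study data)
baseline = [0.09, 0.10, 0.17, 0.64, 0.63, 0.30, 0.25, 0.40]
post_training = [0.47, 0.82, 0.76, 0.85, 0.82, 0.55, 0.60, 0.70]

# Two-sided test on the paired differences; every pair here improved,
# so the smaller signed-rank sum (the statistic) is 0.
stat, p = wilcoxon(baseline, post_training)
print(f"W={stat}, p={p:.4f}")
```

With small samples and no ties, SciPy uses the exact null distribution of the signed ranks, which suits the n=5-trainee, per-structure design described in the Methods.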

Conclusion: A significant improvement in DSC from baseline to post-training evaluation was observed in the real-time contouring feedback tool (guide) group but not in participants who used only verify mode. This novel software may aid trainees in accurately contouring OARs, making it a useful educational tool for trainees regardless of radiation oncology background.

Abstract 218 - Table 1. DSC (mean ± SD) for representative OARs

                Group A                        Group B
OAR         Baseline      Post-Guide       Baseline      Post-Verify
BP L        0.09 ± 0.15   0.47 ± 0.40      0.39 ± 0.08   0.43 ± 0.15
BP R        0.10 ± 0.49   0.82 ± 0.14      0.40 ± 0.07   0.30 ± 0.06
Glottis     0.17 ± 0.29   0.76 ± 0.11      0.52 ± 0.00   0.49 ± 0.29
Parotid L   0.64 ± 0.04   0.85 ± 0.05      0.73 ± 0.06   0.76 ± 0.09
Parotid R   0.63 ± 0.08   0.82 ± 0.06      0.39 ± 0.55   0.76 ± 0.07