Main Session
Sep 30
PQA 09 - Hematologic Malignancies, Health Services Research, Digital Health Innovation and Informatics

3738 - A Recent Evaluation on the Performance of LLMs on Radiation Oncology Physics Using Questions of Randomly Shuffled Options

04:00pm - 05:00pm PT
Hall F
Screen: 12
POSTER

Presenter(s)

Peilong Wang, PhD - Mayo Clinic in Arizona, Phoenix, AZ

P. Wang1, J. Holmes1, Z. Liu2, D. Chen3, T. Liu2, J. Shen1, and W. Liu1; 1Department of Radiation Oncology, Mayo Clinic, Phoenix, AZ, 2School of Computing, University of Georgia, Athens, GA, 3Department of Radiology, Mayo Clinic, Rochester, MN

Purpose/Objective(s): We present a study evaluating the performance of large language models (LLMs) in answering radiation oncology physics questions, focusing on recently released models.

Materials/Methods: A set of 100 multiple-choice radiation oncology physics questions, previously created by an experienced medical physicist following the official study guide of the American Board of Radiology, was used for this study. The answer options of the questions were randomly shuffled five times to create five "new" exam sets. Five LLMs -- OpenAI o1-preview, GPT-4o, LLaMA 3.1 (405B), Gemini 1.5 Pro, and Claude 3.5 Sonnet -- in versions released before September 30, 2024, were queried on these new exam sets through Application Programming Interface (API) services. To further evaluate their deductive reasoning ability, the correct answer options in the questions were replaced with "None of the above." Explain-first and step-by-step instruction prompts were then used to test whether this prompting strategy improved their reasoning ability. The performance of the LLMs was compared with that of medical physicists.
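The two question manipulations can be sketched in a few lines of Python. This is a minimal illustration, not the authors' code: the question schema (`stem`, `options`, `answer_index`) is an assumption, the in-place replacement of the correct option is one plausible reading of the method, and API querying and response parsing are omitted.

import random

def shuffle_options(question, rng):
    """Return a copy of a question with its answer options randomly
    shuffled, re-deriving the index of the correct option.

    `question` is assumed to look like
    {"stem": "...", "options": [...], "answer_index": 2};
    this schema is illustrative, not taken from the study.
    """
    order = list(range(len(question["options"])))
    rng.shuffle(order)
    return {
        "stem": question["stem"],
        "options": [question["options"][i] for i in order],
        # Record where the originally correct option ended up.
        "answer_index": order.index(question["answer_index"]),
    }

def replace_correct_with_none(question):
    """Replace the correct option with 'None of the above', which then
    becomes the correct answer at the same position."""
    options = list(question["options"])
    options[question["answer_index"]] = "None of the above"
    return {**question, "options": options}

# Build five "new" exam sets from the original 100-question exam.
rng = random.Random(42)   # fixed seed so the sets are reproducible
exam = []                 # placeholder: list of question dicts
exam_sets = [[shuffle_options(q, rng) for q in exam] for _ in range(5)]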

Results: All models demonstrated expert-level performance on these questions, with o1-preview even surpassing the majority vote of the medical physicists. When the correct answer options were replaced with "None of the above", all models exhibited a considerable decline in performance, suggesting room for improvement. The explain-first and step-by-step instruction prompts improved the reasoning ability of the LLaMA 3.1 (405B), Gemini 1.5 Pro, and Claude 3.5 Sonnet models. Table 1 shows the performance of the LLMs on radiation oncology physics, evaluated using questions with randomly shuffled options. The accuracy of each LLM is reported as the average score percentage across all five trials. The uncertainty associated with the reported accuracy is not included in the table due to word count limitations.
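As a minimal sketch of the scoring described above (per-trial grading followed by averaging across the five shuffled sets), assuming the same hypothetical question schema as before; `grade` and `answers_by_trial` are illustrative names, and parsing of the API responses into option indices is omitted:

def grade(exam_set, predicted_indices):
    """Percentage of questions answered correctly on one exam set.

    `predicted_indices` holds the option index each model response
    was parsed to; response parsing itself is omitted here.
    """
    hits = sum(pred == q["answer_index"]
               for pred, q in zip(predicted_indices, exam_set))
    return 100.0 * hits / len(exam_set)

# Usage (with real data in place of the placeholders above):
# scores = [grade(s, a) for s, a in zip(exam_sets, answers_by_trial)]
# mean_accuracy = sum(scores) / len(scores)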

Conclusion: These recently released LLMs demonstrated expert-level performance in answering radiation oncology physics questions, exhibiting great potential to assist in radiation oncology physics education and training. The method of randomly shuffling answer options can be used as a novel data augmentation technique for both model evaluation and pre-training in natural language processing.

Abstract 3738 - Table 1. Accuracy (%) of each LLM, averaged over the five shuffled trials.

Experiment                                 Category       o1-preview  GPT-4o  LLaMA 3.1 405B  Gemini 1.5 Pro  Claude 3.5 Sonnet  Med. Phys. (Maj. Vote)
Answer options randomly shuffled           All questions      94        90          84              80               88                  91
                                           Math-based         99        96          81              65               86                  94
"None of the above"                        All questions      72        64          52              43               54                  -
                                           Math-based         88        79          45              24               39                  -
"Explain-first and step-by-step" prompt    All questions      -         -           57              61               60                  -
                                           Math-based         -         -           47              68               48                  -