Evaluating diagnostic accuracy of large language models in neuroradiology cases using image inputs from JAMA Neurology and JAMA Clinical Challenges

Authors
Albaqshi, Ahmed ; Ko, Ji Su ; Suh, Chong Hyun ; Suh, Pae Sun ; Shim, Woo Hyun ; Heo, Hwon ; Woo, Chang-Yun ; Park, Hyungjun
Citation
 SCIENTIFIC REPORTS, Vol.15(1), 2025-11 
Article Number
 43027 
Journal Title
SCIENTIFIC REPORTS
Issue Date
2025-11
MeSH
Humans ; Language* ; Large Language Models ; Neuroimaging* / methods ; Neurology*
Keywords
Artificial intelligence ; Deep learning ; Image interpretation, computer-assisted ; Neuroimaging
Abstract
This study assesses the diagnostic performance of six LLMs (GPT-4V, GPT-4o, Gemini 1.5 Pro, Gemini 1.5 Flash, Claude 3.0, and Claude 3.5) on complex neurology cases from JAMA Neurology and JAMA, focusing on their image interpretation abilities. We selected 56 radiology cases from JAMA Neurology and JAMA (May 2015 to April 2024), rephrasing the case text and reshuffling the multiple-choice answers. Each LLM processed four input types: original quiz with images, rephrased text with images, rephrased text only, and images only. Model performance was compared with that of three neuroradiologists, and consistency across five repetitions was assessed using Fleiss' kappa. In the image-only condition, LLMs answered six specific questions regarding modality, sequence, contrast, plane, and anatomical and pathologic locations, and their accuracy was evaluated. Claude 3.5 achieved the highest accuracy (80.4%) on the original image and text inputs. Accuracy using the rephrased quiz text with images ranged from 62.5% (35/56) to 76.8% (43/56), and accuracy using the rephrased quiz text only ranged from 51.8% (29/56) to 76.8% (43/56). LLMs performed on par with first-year fellows (71.4% [40/56]) but surpassed junior faculty (51.8% [29/56]) and second-year fellows (48.2% [27/56]). All LLMs produced highly consistent answers across the five repetitions (Fleiss' kappa 0.860-1.000). In image-only tasks, LLM accuracy in identifying pathologic locations ranged from 21.5% (28/130) to 63.1% (82/130). LLMs exhibit strong diagnostic performance when given clinical text, yet their ability to interpret complex radiologic images independently is limited. Further refinement in image analysis is essential before these models can be fully integrated into radiologic workflows.
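
The repetition-consistency analysis described in the abstract (Fleiss' kappa across five runs per case) can be illustrated with a short Python sketch. This is not the authors' code: the answer matrix below is placeholder data and the use of statsmodels is an assumption; it only shows how five repeated answers per case map onto a standard Fleiss' kappa calculation, with repetitions treated as raters and answer letters as categories.

    # Minimal sketch (placeholder data, not the study's actual answers) of
    # quantifying answer consistency across five repetitions with Fleiss' kappa.
    import numpy as np
    from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

    # Hypothetical answers: 56 cases (rows) x 5 repetitions (columns, treated as raters).
    rng = np.random.default_rng(0)
    answers = rng.choice(list("ABCD"), size=(56, 5))   # random placeholder choices
    answers[:, 1:] = answers[:, [0]]                   # make repetitions mostly agree
    answers[::10, 1] = "A"                             # inject a little disagreement

    # aggregate_raters converts the cases x repetitions matrix into a
    # cases x categories count table, the input format fleiss_kappa expects.
    counts, _categories = aggregate_raters(answers)
    kappa = fleiss_kappa(counts, method="fleiss")
    print(f"Fleiss' kappa across the five repetitions: {kappa:.3f}")

With answers that are nearly identical across runs, the computed kappa approaches 1.0, which is the behavior the reported range of 0.860-1.000 reflects.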
Files in This Item:
90963.pdf
DOI
10.1038/s41598-025-06458-z
Appears in Collections:
1. College of Medicine (의과대학) > Dept. of Radiology (영상의학교실) > 1. Journal Papers
Yonsei Authors
Suh, Pae Sun(서배선)
URI
https://ir.ymlib.yonsei.ac.kr/handle/22282913/210001
