
The ethics of simplification: balancing patient autonomy, comprehension, and accuracy in AI-generated radiology reports

Authors
Lee, Hong-Seon; Song, Seung-Hyun; Park, Chaeri; Seo, Jeongrok; Kim, Won Hwa; Kim, Jaeil; Kim, Sungjun; Han, Kyunghwa; Lee, Young Han
Citation
 BMC MEDICAL ETHICS, Vol.26(1), 2025-10 
Article Number
 136 
Journal Title
BMC MEDICAL ETHICS
ISSN
 * 
Issue Date
2025-10
MeSH
Artificial Intelligence* / ethics ; Comprehension* ; Humans ; Informed Consent / ethics ; Magnetic Resonance Imaging ; Personal Autonomy* ; Radiology* / ethics ; Retrospective Studies ; Tomography, X-Ray Computed
Keywords
AI-generated radiology reports ; Large Language Models ; Readability and comprehension ; Clinical accuracy ; Patient autonomy ; Informed consent ; Ethical implications
Abstract
Background: Large language models (LLMs) such as GPT-4 are increasingly used to simplify radiology reports and improve patient comprehension. However, excessive simplification may undermine informed consent and autonomy by compromising clinical accuracy. This study investigates the ethical implications of readability thresholds in AI-generated radiology reports, identifying the minimum reading level at which clinical accuracy is preserved.

Methods: We retrospectively analyzed 500 computed tomography and magnetic resonance imaging reports from a tertiary hospital. Each report was transformed into 17 versions (reading grade levels 1-17) using GPT-4 Turbo. Readability metrics and word counts were calculated for each version. Clinical accuracy was evaluated using radiologist assessments and PubMed-BERTScore. We identified the first grade level at which a statistically significant decline in accuracy occurred, determining the lowest level that preserved both accuracy and readability. We further assessed potential clinical consequences in reports simplified to the 7th-grade level.

Results: Readability scores showed strong correlation with prompted reading levels (r = 0.80-0.84). Accuracy remained stable across grades 13-11 but declined significantly below grade 11. At the 7th-grade level, 20% of reports contained inaccuracies with potential to alter patient management, primarily due to omission, incorrect conversion, or inappropriate generalization. The 11th-grade level emerged as the current lower bound for preserving accuracy in LLM-generated radiology reports.

Conclusions: Our findings highlight an ethical tension between improving readability and maintaining clinical accuracy. While 7th-grade readability remains an ethical ideal, current AI tools cannot reliably produce accurate reports below the 11th-grade level. Ethical implementation of AI-generated reporting should include layered communication strategies and model transparency to safeguard patient autonomy and comprehension.
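The Methods paragraph pairs a readability grade estimate with a domain-adapted BERTScore comparison between each simplified version and the original report. The sketch below illustrates how such a check could be scripted; it is not the authors' code, the Flesch-Kincaid grade is only one possible readability metric, and the PubMedBERT model identifier and layer choice are assumptions not specified in this record.

```python
# Minimal sketch (not the study's implementation): score one simplified report
# version for reading grade level and semantic fidelity against the original.
# Assumptions: textstat's Flesch-Kincaid grade stands in for the paper's
# readability metrics, and a PubMedBERT checkpoint stands in for PubMed-BERTScore.
import textstat
from bert_score import score

original = "Mild degenerative changes of the lumbar spine without acute fracture."
simplified = "Your lower back shows mild wear-and-tear changes. No bone is broken."

# Readability: estimated U.S. school grade level needed to understand the text.
grade = textstat.flesch_kincaid_grade(simplified)

# Semantic fidelity: BERTScore F1 of the simplified text against the original,
# using a biomedical BERT encoder (model id and num_layers are assumptions).
_, _, f1 = score(
    [simplified],
    [original],
    model_type="microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext",
    num_layers=12,
)

print(f"Flesch-Kincaid grade: {grade:.1f}, BERTScore F1: {f1.item():.3f}")
```

In a setup like this, the grade-by-grade accuracy threshold reported in the abstract would correspond to the reading level at which the fidelity score (or radiologist rating) first drops significantly.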
Files in This Item:
90571.pdf
DOI
10.1186/s12910-025-01285-3
Appears in Collections:
1. College of Medicine (의과대학) > Dept. of Radiology (영상의학교실) > 1. Journal Papers
Yonsei Authors
Kim, Sungjun(김성준) https://orcid.org/0000-0002-7876-7901
Lee, Young Han(이영한) https://orcid.org/0000-0002-5602-391X
Lee, Hong Seon(이홍선) https://orcid.org/0000-0003-2427-2783
Han, Kyung Hwa(한경화)
URI
https://ir.ymlib.yonsei.ac.kr/handle/22282913/209316

