Susceptibility of Large Language Models to User-Driven Factors in Medical Queries

Authors
Lim, Kyung Ho; Kang, Ujin; Li, Xiang; Kim, Jin Sung; Jung, Young-Chul; Park, Sangjoon; Kim, Byung-Hoon
Citation
JOURNAL OF HEALTHCARE INFORMATICS RESEARCH, 2025-12
Journal Title
JOURNAL OF HEALTHCARE INFORMATICS RESEARCH
ISSN
2509-4971
Issue Date
2025-12
Keywords
Large language model; Natural language processing; Artificial intelligence; Clinical decision support systems; Diagnostic errors; Bias
Abstract
Large language models (LLMs) are increasingly used in healthcare; however, their reliability is shaped not only by model design but also by how queries are phrased and how complete the supplied information is. This study assesses how user-driven factors, including misinformation framing, source authority, model personas, and omission of critical clinical details, influence the diagnostic accuracy and reliability of LLM-generated medical responses. Using two public datasets (MedQA and Medbullets), we conducted two tests: (1) a perturbation test evaluating LLM persona (assistant vs. expert AI), misinformation source authority (inexperienced vs. expert), and tone (assertive vs. hedged); and (2) an ablation test omitting key clinical data. Proprietary LLMs (GPT-4o (OpenAI), Claude-3.5 Sonnet (Anthropic), Claude-3.5 Haiku (Anthropic), Gemini-1.5 Pro (Google), Gemini-1.5 Flash (Google)) and open-source LLMs (LLaMA-3 8B, LLaMA-3 Med42 8B, DeepSeek-R1 8B) were evaluated. In the perturbation test, all LLMs were susceptible to user-driven misinformation, with an assertive tone exerting the strongest overall impact, and proprietary models were more vulnerable to strong or authoritative misinformation. In the ablation test, omitting physical examination findings and laboratory results caused the largest accuracy decline. Proprietary models achieved higher baseline accuracy but showed sharper performance drops under biased or incomplete input. These findings highlight that structured prompts and a complete clinical context are essential for accurate responses: users should avoid authoritative misinformation framing and provide the full clinical picture, especially for complex and challenging queries. By clarifying the impact of user-driven biases, this study contributes insights toward the safe integration of LLMs into healthcare practice.
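As a reading aid only: the two test designs described in the abstract amount to systematically rewriting each question before it reaches the model. The Python sketch below illustrates how such perturbation and ablation variants might be generated; all template strings, field names, and helper functions are illustrative assumptions, not the authors' released code.

from itertools import product

# Hypothetical framing templates for the three perturbation factors
# named in the abstract: persona, misinformation source, and tone.
PERSONAS = {
    "assistant": "You are a helpful AI assistant.",
    "expert": "You are an expert medical AI.",
}
SOURCES = {
    "inexperienced": "A first-year medical student told me",
    "expert": "A senior attending physician told me",
}
TONES = {
    "assertive": "the answer is definitely {distractor}.",
    "hedged": "I think the answer might be {distractor}.",
}

def perturbation_variants(question: str, distractor: str):
    # Cross persona x source x tone, yielding one prompt per condition.
    for (p, persona), (s, source), (t, tone) in product(
        PERSONAS.items(), SOURCES.items(), TONES.items()
    ):
        claim = f"{source} {tone.format(distractor=distractor)}"
        yield (p, s, t), f"{persona}\n\n{question}\n\n{claim}"

def ablation_variants(case: dict):
    # Omit one clinical section at a time (e.g., history, physical
    # exam, labs), keeping the rest of the vignette intact.
    for omitted in case:
        kept = "\n".join(text for name, text in case.items() if name != omitted)
        yield omitted, kept

Under this design, each rewritten prompt would be sent to the model and scored against the gold answer, so any accuracy drop can be attributed to a specific persona, source, tone, or omitted clinical section.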
Full Text
https://link.springer.com/article/10.1007/s41666-025-00218-4
DOI
10.1007/s41666-025-00218-4
Appears in Collections:
1. College of Medicine (의과대학) > Dept. of Radiation Oncology (방사선종양학교실) > 1. Journal Papers
1. College of Medicine (의과대학) > Dept. of Psychiatry (정신과학교실) > 1. Journal Papers
1. College of Medicine (의과대학) > Dept. of Biomedical Systems Informatics (의생명시스템정보학교실) > 1. Journal Papers
Yonsei Authors
Kim, Byung Hoon (김병훈)
Kim, Jinsung (김진성) ORCID: https://orcid.org/0000-0003-1415-6471
Park, Sang Joon (박상준)
Jung, Young Chul (정영철) ORCID: https://orcid.org/0000-0002-0578-2510
URI
https://ir.ymlib.yonsei.ac.kr/handle/22282913/210031