Susceptibility of Large Language Models to User-Driven Factors in Medical Queries
Authors
Lim, Kyung Ho ; Kang, Ujin ; Li, Xiang ; Kim, Jin Sung ; Jung, Young-Chul ; Park, Sangjoon ; Kim, Byung-Hoon
Citation
JOURNAL OF HEALTHCARE INFORMATICS RESEARCH, 2025-12
Journal Title
JOURNAL OF HEALTHCARE INFORMATICS RESEARCH
ISSN
2509-4971
Issue Date
2025-12
Keywords
Large language model ; Natural language processing ; Artificial intelligence ; Clinical decision support systems ; Diagnostic errors ; Bias
Abstract
Large language models (LLMs) are increasingly used in healthcare; however, their reliability is shaped not only by model design but also by how queries are phrased and how complete the information is. This study assesses how user-driven factors, including misinformation framing, source authority, model personas, and omission of critical clinical details, influence the diagnostic accuracy and reliability of LLM-generated medical responses. Utilizing two public datasets (MedQA and Medbullets), we conducted two tests: (1) perturbation-evaluating LLM persona (assistant vs. expert AI), misinformation source authority (inexperienced vs. expert), and tone (assertive vs. hedged); and (2) ablation-omission of key clinical data. Proprietary LLMs (GPT-4o (OpenAI), Claude-35 Sonnet (Anthropic), Claude-35 Haiku (Anthropic), Gemini-15 Pro (Google), Gemini-15 Flash (Google)) and open-source LLMs (LLaMA-3 8B, LLaMA-3 Med42 8B, DeepSeek-R1 8B) were used for evaluation. Results show that in the perturbation test, all LLMs were susceptible to user-driven misinformation, with an assertive tone exerting the strongest overall impact, while proprietary models were more vulnerable to strong or authoritative misinformation. In the ablation test, omitting physical examination findings and laboratory results caused the largest accuracy decline. Proprietary models achieved higher baseline accuracy but demonstrated sharper performance drops under biased or incomplete input. These findings highlight that structured prompts and a complete clinical context are essential for accurate responses. Users should avoid authoritative misinformation framing and provide a complete clinical context, especially for complex and challenging queries. By clarifying the impact of user-driven biases, this study contributes insights to the safe integration of LLMs into healthcare practice.