Susceptibility of Large Language Models to User-Driven Factors in Medical Queries

DC Field / Value
dc.contributor.author: Lim, Kyung Ho
dc.contributor.author: Kang, Ujin
dc.contributor.author: Li, Xiang
dc.contributor.author: Kim, Jin Sung
dc.contributor.author: Jung, Young-Chul
dc.contributor.author: Park, Sangjoon
dc.contributor.author: Kim, Byung-Hoon
dc.date.accessioned: 2026-01-20T05:28:05Z
dc.date.available: 2026-01-20T05:28:05Z
dc.date.created: 2026-01-14
dc.date.issued: 2025-12
dc.identifier.issn: 2509-4971
dc.identifier.uri: https://ir.ymlib.yonsei.ac.kr/handle/22282913/210031
dc.description.abstract: Large language models (LLMs) are increasingly used in healthcare; however, their reliability is shaped not only by model design but also by how queries are phrased and how complete the information is. This study assesses how user-driven factors, including misinformation framing, source authority, model personas, and omission of critical clinical details, influence the diagnostic accuracy and reliability of LLM-generated medical responses. Using two public datasets (MedQA and Medbullets), we conducted two tests: (1) a perturbation test evaluating LLM persona (assistant vs. expert AI), misinformation source authority (inexperienced vs. expert), and tone (assertive vs. hedged); and (2) an ablation test omitting key clinical data. Proprietary LLMs (GPT-4o (OpenAI), Claude-3.5 Sonnet (Anthropic), Claude-3.5 Haiku (Anthropic), Gemini-1.5 Pro (Google), Gemini-1.5 Flash (Google)) and open-source LLMs (LLaMA-3 8B, LLaMA-3 Med42 8B, DeepSeek-R1 8B) were evaluated. Results show that in the perturbation test, all LLMs were susceptible to user-driven misinformation, with an assertive tone exerting the strongest overall impact, while proprietary models were more vulnerable to strong or authoritative misinformation. In the ablation test, omitting physical examination findings and laboratory results caused the largest accuracy decline. Proprietary models achieved higher baseline accuracy but showed sharper performance drops under biased or incomplete input. These findings highlight that structured prompts and a complete clinical context are essential for accurate responses. Users should avoid authoritative misinformation framing and should provide the full clinical context, especially for complex and challenging queries. By clarifying the impact of user-driven biases, this study contributes insights toward the safe integration of LLMs into healthcare practice.
dc.language: English
dc.publisher: SPRINGERNATURE
dc.relation.isPartOf: JOURNAL OF HEALTHCARE INFORMATICS RESEARCH
dc.title: Susceptibility of Large Language Models to User-Driven Factors in Medical Queries
dc.type: Article
dc.contributor.googleauthor: Lim, Kyung Ho
dc.contributor.googleauthor: Kang, Ujin
dc.contributor.googleauthor: Li, Xiang
dc.contributor.googleauthor: Kim, Jin Sung
dc.contributor.googleauthor: Jung, Young-Chul
dc.contributor.googleauthor: Park, Sangjoon
dc.contributor.googleauthor: Kim, Byung-Hoon
dc.identifier.doi: 10.1007/s41666-025-00218-4
dc.identifier.url: https://link.springer.com/article/10.1007/s41666-025-00218-4
dc.subject.keyword: Large language model
dc.subject.keyword: Natural language processing
dc.subject.keyword: Artificial intelligence
dc.subject.keyword: Clinical decision support systems
dc.subject.keyword: Diagnostic errors
dc.subject.keyword: Bias
dc.contributor.affiliatedAuthor: Lim, Kyung Ho
dc.contributor.affiliatedAuthor: Kim, Jin Sung
dc.contributor.affiliatedAuthor: Jung, Young-Chul
dc.contributor.affiliatedAuthor: Park, Sangjoon
dc.contributor.affiliatedAuthor: Kim, Byung-Hoon
dc.identifier.scopusid: 2-s2.0-105023537542
dc.identifier.wosid: 001627903000001
dc.identifier.bibliographicCitation: JOURNAL OF HEALTHCARE INFORMATICS RESEARCH, 2025-12
dc.identifier.rimsid: 90939
dc.type.rims: ART
dc.description.journalClass: 1
dc.description.journalClass: 1
dc.subject.keywordAuthor: Large language model
dc.subject.keywordAuthor: Natural language processing
dc.subject.keywordAuthor: Artificial intelligence
dc.subject.keywordAuthor: Clinical decision support systems
dc.subject.keywordAuthor: Diagnostic errors
dc.subject.keywordAuthor: Bias
dc.type.docType: Article; Early Access
dc.description.isOpenAccess: N
dc.description.journalRegisteredClass: scopus
dc.relation.journalWebOfScienceCategory: Computer Science, Information Systems
dc.relation.journalWebOfScienceCategory: Health Care Sciences & Services
dc.relation.journalWebOfScienceCategory: Medical Informatics
dc.relation.journalResearchArea: Computer Science
dc.relation.journalResearchArea: Health Care Sciences & Services
dc.relation.journalResearchArea: Medical Informatics
Appears in Collections:
1. College of Medicine (의과대학) > Dept. of Radiation Oncology (방사선종양학교실) > 1. Journal Papers
1. College of Medicine (의과대학) > Dept. of Psychiatry (정신과학교실) > 1. Journal Papers
1. College of Medicine (의과대학) > Dept. of Biomedical Systems Informatics (의생명시스템정보학교실) > 1. Journal Papers
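
Note: the abstract above describes a perturbation test that varies model persona, misinformation source authority, and tone. The minimal sketch below illustrates, under stated assumptions, how such prompt variants could be enumerated; the persona, source, and tone templates, the wording, and the toy multiple-choice item are illustrative placeholders, not the authors' released prompts or dataset items.

```python
# Minimal sketch of the perturbation conditions summarized in the abstract:
# persona (assistant vs. expert AI), misinformation source (inexperienced vs.
# expert), and tone (assertive vs. hedged). All templates are assumptions for
# illustration only.
from itertools import product

PERSONAS = {
    "assistant": "You are a helpful AI assistant.",
    "expert": "You are an expert medical AI with specialist-level knowledge.",
}

SOURCES = {
    "inexperienced": "A first-year student mentioned that",
    "expert": "An attending physician stated that",
}

TONES = {
    "assertive": "the answer is definitely {distractor}.",
    "hedged": "the answer might be {distractor}, but they were not sure.",
}


def build_prompt(question: str, options: dict, distractor: str,
                 persona: str, source: str, tone: str) -> str:
    """Compose one perturbed multiple-choice prompt for a single condition."""
    misinformation = f"{SOURCES[source]} {TONES[tone].format(distractor=distractor)}"
    option_text = "\n".join(f"{key}. {value}" for key, value in options.items())
    return (
        f"{PERSONAS[persona]}\n\n"
        f"{question}\n{option_text}\n\n"
        f"{misinformation}\n"
        "Please choose the single best answer."
    )


if __name__ == "__main__":
    # Toy MedQA-style item (not taken from the actual dataset).
    question = ("A 45-year-old man presents with crushing chest pain. "
                "What is the best next step?")
    options = {"A": "Obtain an ECG", "B": "Prescribe antacids",
               "C": "Order a chest X-ray", "D": "Discharge home"}
    for persona, source, tone in product(PERSONAS, SOURCES, TONES):
        prompt = build_prompt(question, options, distractor="B",
                              persona=persona, source=source, tone=tone)
        print(f"--- {persona} / {source} / {tone} ---\n{prompt}\n")
```

The ablation test described in the abstract could be sketched analogously by deleting individual vignette sections (for example, physical examination findings or laboratory results) from the question before querying a model.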
