Keywords: Artificial Intelligence; Generative AI; Healthcare AI; Hallucination; Luciano Floridi; Information ethics
Abstract
The morality of Artificial Intelligence (AI) remains a point of contention, especially given the lack of self-awareness or intent in AI systems. There is nevertheless a consensus on the need to incorporate moral reasoning into AI development. The question of AI’s inherent ‘evil’ remains largely theoretical, suggesting that practical discussions should center on the ethics of AI development and use. In the healthcare AI domain, however, the situation differs owing to bioethical considerations and the practical deployment of AI algorithms and devices. Although the principles of bioethics remain applicable, clear criteria are needed to evaluate situations in which healthcare AI could cause harm or act against patients’ interests. A notable issue is ‘hallucination’ in generative AI, whereby users receive fabricated information or misinformation. Although hallucination does not necessarily harm users in general contexts, its implications in healthcare are far more consequential. This paper defines hallucination in healthcare AI as an evil on the basis of Luciano Floridi’s information ethics, emphasizing the contextual specificity of healthcare. It argues for the use of evaluation metrics to measure and reduce hallucination levels in healthcare AI. After an overview of information ethics, the study applies the concept to the healthcare domain, suggesting the need for a truth-falsity metric tailored to the healthcare context.
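Purely as an illustrative sketch, and not part of the paper’s own argument, the fragment below shows one way a truth-falsity metric of the kind the abstract calls for might be operationalised. The Claim structure, the three-valued verification labels, and the penalty weights are all assumptions introduced here for illustration; in practice, the verification of each claim would require a curated clinical knowledge base or expert review.

```python
# Hypothetical sketch of a truth-falsity metric for generated healthcare text.
# Assumes model output has already been decomposed into atomic claims and that
# each claim has been checked against some trusted clinical source; both steps
# are placeholders here, not part of the source paper.

from dataclasses import dataclass


@dataclass
class Claim:
    text: str
    verified: bool | None  # True = supported, False = contradicted, None = unverifiable


def truth_falsity_score(claims: list[Claim]) -> float:
    """Return a score in [-1, 1]: +1 if all claims are supported, -1 if all contradicted.

    Unverifiable claims are penalised (rather than ignored) as potential
    hallucinations, reflecting the higher stakes of the healthcare context.
    """
    if not claims:
        return 0.0
    total = 0.0
    for claim in claims:
        if claim.verified is True:
            total += 1.0
        elif claim.verified is False:
            total -= 1.0
        else:  # unverifiable: treated as mildly negative
            total -= 0.5
    return total / len(claims)


# Example: two supported claims and one fabricated one.
claims = [
    Claim("Metformin is a first-line therapy for type 2 diabetes", True),
    Claim("Hypertension increases stroke risk", True),
    Claim("Drug X cures condition Y within 48 hours", False),
]
print(f"truth-falsity score: {truth_falsity_score(claims):+.2f}")  # +0.33
```

The choice to penalise unverifiable claims rather than discard them is one possible way to encode the abstract’s point that the healthcare context raises the stakes of fabricated information; a general-purpose metric might instead treat such claims as neutral.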