
Why large language models cannot possess consciousness: an integrated information theory perspective

Authors
Shin, Dong Ah; Cho, Pyung Goo; Ji, Gyu Yeul; Park, Sang Hyuk; Kim, Soo Heon; Choo, Yoo Jin; Chang, Min Cheol
Citation
JOURNAL OF YEUNGNAM MEDICAL SCIENCE, Vol.42, 2025-12
Article Number
79
Journal Title
JOURNAL OF YEUNGNAM MEDICAL SCIENCE
Issue Date
2025-12
Keywords
Artificial intelligence ; Consciousness ; Information theory ; Large language models ; Natural language processing
Abstract
Background: The question of whether large language models (LLMs) possess consciousness has been increasingly debated. Integrated information theory (IIT) offers a quantitative framework for assessing consciousness through a measure of integrated information.

Methods: This study applied IIT principles to the architecture of transformer-based LLMs, focusing on causal integration, temporal persistence, and system irreducibility. Ablation experiments were performed on Generative Pretrained Transformer 2 (GPT-2), selectively removing individual attention heads and measuring the resulting change in perplexity as a behavioral proxy for integrated information.

Results: Ablating a single attention head produced minimal or negative changes in perplexity in four of five representative sentences, indicating redundancy or noise. Only one sentence showed a substantial increase in perplexity (ΔPPL = +11.29), reflecting a localized but nonessential contribution. A comparison with biological systems showed that LLMs meet the IIT criterion of differentiation but fail the criteria of integration, causal closure, and temporal persistence. These findings confirm that LLMs are architecturally decomposable, lack persistent internal states, and do not sustain global causal irreducibility. Philosophical considerations, including Searle's Chinese Room argument, further support the view that the linguistic fluency of LLMs arises from syntactic manipulation rather than semantic understanding.

Conclusion: Current LLMs do not satisfy the structural and informational requirements of consciousness under IIT. Although capable of simulating intelligent language, LLMs remain unconscious systems with a negligible amount of integrated information, underscoring the distinction between linguistic competence and conscious experience.
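The ablation procedure described in the Methods can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes the Hugging Face `transformers` and `torch` libraries, and the helper names (`delta_ppl`, `run_ablation_demo`) are hypothetical. The heavy model-loading step is isolated in a function so the ΔPPL arithmetic can be inspected on its own.

```python
# Sketch of the paper's ablation idea: remove one GPT-2 attention head and
# compare sentence perplexity before and after. Helper names are illustrative,
# not from the paper.

def delta_ppl(baseline_ppl: float, ablated_ppl: float) -> float:
    """Change in perplexity caused by ablating a head (ΔPPL)."""
    return ablated_ppl - baseline_ppl

def run_ablation_demo(text: str, layer: int = 0, head: int = 0) -> float:
    """Compute ΔPPL for one (layer, head) ablation on GPT-2.

    Requires `transformers` and `torch`; downloads the GPT-2 weights on
    first use, so it is defined here but not called at import time.
    """
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    def perplexity(model, tokenizer, sentence):
        ids = tokenizer(sentence, return_tensors="pt").input_ids
        with torch.no_grad():
            # Passing labels=ids makes the model return the mean
            # next-token cross-entropy loss; exp(loss) is perplexity.
            loss = model(ids, labels=ids).loss
        return torch.exp(loss).item()

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
    baseline = perplexity(model, tokenizer, text)
    model.prune_heads({layer: [head]})  # ablate a single attention head
    return delta_ppl(baseline, perplexity(model, tokenizer, text))

# A ΔPPL near zero (or negative) suggests the head is redundant; a large
# positive ΔPPL, like the +11.29 reported for one sentence, indicates a
# localized contribution.
print(delta_ppl(20.0, 31.29))
```

Under IIT, a genuinely integrated system should degrade globally when any part is removed; the observation that most single-head ablations barely move perplexity is what the abstract cites as evidence of decomposability.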
Files in This Item:
91525.pdf
DOI
10.12701/jyms.2025.42.79
Appears in Collections:
1. College of Medicine (의과대학) > Dept. of Neurosurgery (신경외과학교실) > 1. Journal Papers
Yonsei Authors
Kim, Soo Heon(김수헌)
Shin, Dong A(신동아)
URI
https://ir.ymlib.yonsei.ac.kr/handle/22282913/211013

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
