Why large language models cannot possess consciousness: an integrated information theory perspective
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Shin, Dong Ah | - |
| dc.contributor.author | Cho, Pyung Goo | - |
| dc.contributor.author | Ji, Gyu Yeul | - |
| dc.contributor.author | Park, Sang Hyuk | - |
| dc.contributor.author | Kim, Soo Heon | - |
| dc.contributor.author | Choo, Yoo Jin | - |
| dc.contributor.author | Chang, Min Cheol | - |
| dc.contributor.author | 김수헌 | - |
| dc.date.accessioned | 2026-02-24T05:07:04Z | - |
| dc.date.available | 2026-02-24T05:07:04Z | - |
| dc.date.created | 2026-02-24 | - |
| dc.date.issued | 2025-12 | - |
| dc.identifier.uri | https://ir.ymlib.yonsei.ac.kr/handle/22282913/211013 | - |
| dc.description.abstract | Background: The question of whether large language models (LLMs) possess consciousness has been increasingly debated. Integrated information theory (IIT) offers a quantitative framework for assessing consciousness through a measure of integrated information. Methods: This study applied IIT principles to the architecture of transformer-based LLMs, focusing on causal integration, temporal persistence, and system irreducibility. Ablation experiments on Generative Pretrained Transformer 2 (GPT-2) were performed, selectively removing individual attention heads and measuring the resulting change in perplexity as a behavioral proxy for integrated information. Results: Ablating a single attention head produced minimal or negative changes in perplexity in four out of five representative sentences, indicating redundancy or noise. Only one sentence showed a substantial increase in perplexity (ΔPPL = +11.29), reflecting a localized but nonessential contribution. A comparison with biological systems demonstrated that LLMs meet the IIT criterion of differentiation but fail to meet the criteria of integration, causal closure, and temporal persistence. These findings indicate that LLMs are architecturally decomposable, lack persistent internal states, and do not sustain global causal irreducibility. Philosophical considerations, including Searle's Chinese Room argument, further support the view that the linguistic fluency of LLMs arises from syntactic manipulation rather than semantic understanding. Conclusion: Current LLMs do not satisfy the structural and informational requirements of consciousness under IIT. Although capable of simulating intelligent language, LLMs remain unconscious systems with a negligible amount of integrated information, underscoring the distinction between linguistic competence and conscious experience. | - |
| dc.format | application/pdf | - |
| dc.language | English | - |
| dc.publisher | YEUNGNAM UNIV COLL MEDICINE, YEUNGNAM UNIV INST MEDICAL SCIENCE | - |
| dc.relation.isPartOf | JOURNAL OF YEUNGNAM MEDICAL SCIENCE | - |
| dc.title | Why large language models cannot possess consciousness: an integrated information theory perspective | - |
| dc.type | Article | - |
| dc.contributor.googleauthor | Shin, Dong Ah | - |
| dc.contributor.googleauthor | Cho, Pyung Goo | - |
| dc.contributor.googleauthor | Ji, Gyu Yeul | - |
| dc.contributor.googleauthor | Park, Sang Hyuk | - |
| dc.contributor.googleauthor | Kim, Soo Heon | - |
| dc.contributor.googleauthor | Choo, Yoo Jin | - |
| dc.contributor.googleauthor | Chang, Min Cheol | - |
| dc.identifier.doi | 10.12701/jyms.2025.42.79 | - |
| dc.identifier.pmid | 41331515 | - |
| dc.subject.keyword | Artificial intelligence | - |
| dc.subject.keyword | Consciousness | - |
| dc.subject.keyword | Information theory | - |
| dc.subject.keyword | Large language models | - |
| dc.subject.keyword | Natural language processing | - |
| dc.contributor.affiliatedAuthor | Shin, Dong Ah | - |
| dc.contributor.affiliatedAuthor | Kim, Soo Heon | - |
| dc.identifier.scopusid | 2-s2.0-105029758675 | - |
| dc.identifier.wosid | 001671673000004 | - |
| dc.citation.volume | 42 | - |
| dc.identifier.bibliographicCitation | JOURNAL OF YEUNGNAM MEDICAL SCIENCE, Vol.42, 2025-12 | - |
| dc.identifier.rimsid | 91525 | - |
| dc.type.rims | ART | - |
| dc.description.journalClass | 1 | - |
| dc.description.journalClass | 1 | - |
| dc.subject.keywordAuthor | Artificial intelligence | - |
| dc.subject.keywordAuthor | Consciousness | - |
| dc.subject.keywordAuthor | Information theory | - |
| dc.subject.keywordAuthor | Large language models | - |
| dc.subject.keywordAuthor | Natural language processing | - |
| dc.type.docType | Article | - |
| dc.identifier.kciid | ART003284499 | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.description.journalRegisteredClass | kci | - |
| dc.relation.journalWebOfScienceCategory | Medicine, General & Internal | - |
| dc.relation.journalResearchArea | General & Internal Medicine | - |
| dc.identifier.articleno | 79 | - |
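The abstract describes the study's method: remove a single attention head from GPT-2 and measure the change in perplexity (ΔPPL) as a behavioral proxy for that head's contribution to integrated information. The idea can be sketched in miniature. The following is a hypothetical toy illustration, not the authors' code: a tiny pure-Python multi-head causal self-attention layer over a small vocabulary, in which one head's output is zeroed and the perplexity of a short token sequence is recomputed. All dimensions, weights, and names here are invented for illustration.

```python
import math
import random

# Toy sketch of attention-head ablation with perplexity as the readout.
# Hypothetical: random weights, tiny dimensions; not GPT-2 itself.
random.seed(0)
D, H, V, T = 8, 2, 5, 4           # model dim, heads, vocab size, sequence length
DH = D // H                        # per-head dimension

def mat(rows, cols):
    return [[random.uniform(-0.5, 0.5) for _ in range(cols)] for _ in range(rows)]

def mm(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

EMB = mat(V, D)                    # token embeddings
WQ, WK, WV, WO = mat(D, D), mat(D, D), mat(D, D), mat(D, D)
UNEMB = mat(D, V)                  # projection back to vocabulary logits

def perplexity(tokens, ablate_head=None):
    """Next-token perplexity of `tokens`, optionally zeroing one head."""
    X = [EMB[t] for t in tokens]                      # (T, D)
    Q, K, Vv = mm(X, WQ), mm(X, WK), mm(X, WV)
    concat = [[0.0] * D for _ in range(len(tokens))]  # concatenated head outputs
    for h in range(H):
        lo = h * DH
        for i in range(len(tokens)):
            # Causal attention: position i attends only to positions <= i.
            scores = [sum(Q[i][lo + d] * K[j][lo + d] for d in range(DH)) / math.sqrt(DH)
                      for j in range(i + 1)]
            w = softmax(scores)
            for d in range(DH):
                val = sum(w[j] * Vv[j][lo + d] for j in range(i + 1))
                # Ablation: zero this head's slice of the output.
                concat[i][lo + d] = 0.0 if h == ablate_head else val
    logits = mm(mm(concat, WO), UNEMB)                # (T, V)
    nll = 0.0
    for i in range(len(tokens) - 1):
        probs = softmax(logits[i])
        nll -= math.log(probs[tokens[i + 1]])
    return math.exp(nll / (len(tokens) - 1))

seq = [0, 1, 2, 3]
ppl_full = perplexity(seq)
ppl_ablated = perplexity(seq, ablate_head=0)
delta_ppl = ppl_ablated - ppl_full
print(f"PPL full={ppl_full:.3f}  ablated={ppl_ablated:.3f}  Delta PPL={delta_ppl:+.3f}")
```

As in the paper's findings, ΔPPL may be small or even negative (the head was redundant or noisy) or large and positive (the head made a localized contribution); in a real replication the same measurement would be run on GPT-2, where libraries such as Hugging Face Transformers expose head-masking hooks for exactly this kind of ablation.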