
Optimizing Within-Domain Gaze Estimation: Insights From a Novel Appearance-Based 2D Model

DC Field / Value
dc.contributor.author: Ugli, Sardor Abdirayimov Odil
dc.contributor.author: Bak, Se-Young
dc.contributor.author: Kim, Yubin
dc.contributor.author: Chung, Eun-Hye
dc.contributor.author: Kim, Heegoo
dc.contributor.author: Cho, Eunyoung
dc.contributor.author: Suh, Miri
dc.contributor.author: Shin, Seyoung
dc.contributor.author: Jeon, HyeongMin
dc.contributor.author: Kim, MinYoung
dc.date.accessioned: 2026-03-17T07:58:07Z
dc.date.available: 2026-03-17T07:58:07Z
dc.date.created: 2026-03-06
dc.date.issued: 2026-01
dc.identifier.uri: https://ir.ymlib.yonsei.ac.kr/handle/22282913/211397
dc.description.abstract: Appearance-based gaze estimation has emerged as a promising alternative to traditional model-based methods, effectively addressing their limitations in flexibility, cost, and adaptability to unconstrained environments. In this study, the Digital Therapeutics Research Team at Bundang CHA Medical Center developed a novel appearance-based gaze estimation algorithm, CHA-Gaze, by integrating head-pose information into the adaptive feature fusion network (AFF-Net) architecture, a widely recognized baseline in the field. To evaluate the effectiveness of CHA-Gaze, we conducted a unified validation on the MPIIFaceGaze dataset, which comprises 37,590 images from 15 participants acquired under semi-natural conditions. CHA-Gaze achieved a significantly lower mean Euclidean error of 1.88 cm, compared with 2.59 cm for AFF-Net (p < 0.001). These findings indicate that CHA-Gaze offers superior accuracy and improved robustness across varied appearances and environmental conditions. This study confirms the effectiveness of architectural refinement within appearance-based gaze estimation frameworks and highlights the potential of CHA-Gaze for real-world deployment in applications such as digital therapeutics, telehealth, and accessibility technologies. The proposed model provides a scalable, non-intrusive solution using standard webcams, making it suitable for widespread use in both clinical and consumer-grade settings.
dc.language: English
dc.publisher: Institute of Electrical and Electronics Engineers
dc.relation.isPartOf: IEEE ACCESS
dc.title: Optimizing Within-Domain Gaze Estimation: Insights From a Novel Appearance-Based 2D Model
dc.type: Article
dc.contributor.googleauthor: Ugli, Sardor Abdirayimov Odil
dc.contributor.googleauthor: Bak, Se-Young
dc.contributor.googleauthor: Kim, Yubin
dc.contributor.googleauthor: Chung, Eun-Hye
dc.contributor.googleauthor: Kim, Heegoo
dc.contributor.googleauthor: Cho, Eunyoung
dc.contributor.googleauthor: Suh, Miri
dc.contributor.googleauthor: Shin, Seyoung
dc.contributor.googleauthor: Jeon, HyeongMin
dc.contributor.googleauthor: Kim, MinYoung
dc.identifier.doi: 10.1109/ACCESS.2026.3655327
dc.relation.journalcode: J03454
dc.identifier.eissn: 2169-3536
dc.subject.keyword: Estimation
dc.subject.keyword: Feature extraction
dc.subject.keyword: Computational modeling
dc.subject.keyword: Adaptation models
dc.subject.keyword: Accuracy
dc.subject.keyword: Neurons
dc.subject.keyword: Computer architecture
dc.subject.keyword: Benchmark testing
dc.subject.keyword: Annotations
dc.subject.keyword: Vectors
dc.subject.keyword: AFF-Net
dc.subject.keyword: appearance-based gaze estimation
dc.subject.keyword: deep learning
dc.subject.keyword: eye-tracking
dc.subject.keyword: gaze estimation
dc.subject.keyword: machine learning
dc.subject.keyword: multi-task regression module
dc.contributor.affiliatedAuthor: Bak, Se-Young
dc.identifier.scopusid: 2-s2.0-105028185580
dc.identifier.wosid: 001673759200013
dc.citation.volume: 14
dc.citation.startPage: 12213
dc.citation.endPage: 12223
dc.identifier.bibliographicCitation: IEEE ACCESS, Vol.14 : 12213-12223, 2026-01
dc.identifier.rimsid: 91643
dc.type.rims: ART
dc.description.journalClass: 1
dc.subject.keywordAuthor: Estimation
dc.subject.keywordAuthor: Feature extraction
dc.subject.keywordAuthor: Computational modeling
dc.subject.keywordAuthor: Adaptation models
dc.subject.keywordAuthor: Accuracy
dc.subject.keywordAuthor: Neurons
dc.subject.keywordAuthor: Computer architecture
dc.subject.keywordAuthor: Benchmark testing
dc.subject.keywordAuthor: Annotations
dc.subject.keywordAuthor: Vectors
dc.subject.keywordAuthor: AFF-Net
dc.subject.keywordAuthor: appearance-based gaze estimation
dc.subject.keywordAuthor: deep learning
dc.subject.keywordAuthor: eye-tracking
dc.subject.keywordAuthor: gaze estimation
dc.subject.keywordAuthor: machine learning
dc.subject.keywordAuthor: multi-task regression module
dc.subject.keywordPlus: EYE GAZE
dc.subject.keywordPlus: TRACKING TECHNIQUES
dc.subject.keywordPlus: CONTROL INTERFACE
dc.subject.keywordPlus: ATTENTION
dc.subject.keywordPlus: NETWORK
dc.type.docType: Article
dc.description.isOpenAccess: Y
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
dc.relation.journalWebOfScienceCategory: Computer Science, Information Systems
dc.relation.journalWebOfScienceCategory: Engineering, Electrical & Electronic
dc.relation.journalWebOfScienceCategory: Telecommunications
dc.relation.journalResearchArea: Computer Science
dc.relation.journalResearchArea: Engineering
dc.relation.journalResearchArea: Telecommunications
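The abstract above evaluates models by mean Euclidean error in centimetres. A minimal sketch of that metric for 2D on-screen gaze points, assuming predictions and ground truth are given as (N, 2) arrays in the same physical unit (array names and values are illustrative, not from the paper):

```python
import numpy as np

def mean_euclidean_error(pred: np.ndarray, true: np.ndarray) -> float:
    """Mean Euclidean (L2) distance between predicted and true 2D gaze points.

    Both arrays have shape (N, 2) and share one physical unit (e.g. cm),
    so the result is directly comparable to errors such as 1.88 cm.
    """
    # Per-sample L2 distance along the coordinate axis, then average over samples.
    return float(np.linalg.norm(pred - true, axis=1).mean())

# Illustrative values only (not from the paper):
pred = np.array([[10.0, 5.0], [3.0, 4.0]])
true = np.array([[10.0, 7.0], [0.0, 0.0]])
print(mean_euclidean_error(pred, true))  # (2 + 5) / 2 = 3.5
```

Averaging per-sample distances (rather than taking the norm of the averaged error vector) is what makes this a mean Euclidean error; the two quantities differ whenever individual errors point in different directions.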
Appears in Collections:
1. College of Medicine (의과대학) > Others (기타) > 1. Journal Papers


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.