

Automated diagnosis of ear disease using ensemble deep learning with a big otoendoscopy image database

DC FieldValueLanguage
dc.contributor.author박해정-
dc.contributor.author최재영-
dc.contributor.author차동철-
dc.date.accessioned2019-09-20T07:52:29Z-
dc.date.available2019-09-20T07:52:29Z-
dc.date.issued2019-
dc.identifier.urihttps://ir.ymlib.yonsei.ac.kr/handle/22282913/171095-
dc.description.abstract: BACKGROUND: Ear and mastoid diseases can easily be treated through early detection and appropriate medical care. However, a shortage of specialists and relatively low diagnostic accuracy call for a new diagnostic strategy, in which deep learning may play a significant role. The current study presents a machine learning model that automatically diagnoses ear disease using a large database of otoendoscopic images acquired in the clinical environment. METHODS: A total of 10,544 otoendoscopic images were used to train nine public convolution-based deep neural networks to classify eardrum and external auditory canal features into six categories of ear diseases, covering most ear diseases (Normal, Attic retraction, Tympanic perforation, Otitis externa±myringitis, Tumor). After evaluating several optimization schemes, the two best-performing models were selected to compose an ensemble classifier by combining the classification scores of each classifier. FINDINGS: Based on accuracy and training time, transfer learning models built on Inception-V3 and ResNet101 were chosen, and the ensemble classifier using the two models yielded a significant improvement over either model alone, with an average accuracy of 93.67% under 5-fold cross-validation. Considering the substantial data-size dependency of classifier performance in transfer learning, evaluated in this study, the high accuracy of the current model is attributable to the large database. INTERPRETATION: The current study is unprecedented in terms of both disease diversity and diagnostic accuracy, which is comparable to or even better than that of an average otolaryngologist. The classifier was trained on data acquired under various conditions, making it suitable for practical environments. This study shows the usefulness of a deep learning model for the early detection and treatment of ear disease in the clinical setting.
FUND: This research was supported by the Brain Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Science and ICT (NRF-2017M3C7A1049051).
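The abstract states that the ensemble classifier was formed by "combining classification scores of each classifier" from the two transfer-learning models. A common way to do this is to average the per-class softmax scores and take the argmax; the sketch below assumes that averaging rule, and the random score arrays stand in for real Inception-V3 and ResNet101 outputs (shapes and values are illustrative, not taken from the paper).

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the class axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical per-class scores for a batch of 3 otoendoscopic images
# across 6 disease categories, one array per classifier in the ensemble.
rng = np.random.default_rng(0)
scores_inception = softmax(rng.normal(size=(3, 6)))
scores_resnet = softmax(rng.normal(size=(3, 6)))

# Score-level ensemble (assumed): average the two classifiers' class
# probabilities, then predict the highest-scoring category per image.
ensemble_scores = (scores_inception + scores_resnet) / 2.0
predictions = ensemble_scores.argmax(axis=1)
```

Averaging probabilities (rather than hard votes) lets a confident model outweigh an uncertain one, which is one plausible source of the reported gain over either model alone.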
dc.description.statementOfResponsibility: open
dc.format: application/pdf
dc.language: English
dc.publisher: Elsevier
dc.relation.isPartOf: EBioMedicine
dc.rights: CC BY-NC-ND 2.0 KR
dc.title: Automated diagnosis of ear disease using ensemble deep learning with a big otoendoscopy image database
dc.type: Article
dc.contributor.college: College of Medicine (의과대학)
dc.contributor.department: Dept. of Nuclear Medicine (핵의학교실)
dc.contributor.googleauthor: Dongchul Cha
dc.contributor.googleauthor: Chongwon Pae
dc.contributor.googleauthor: Si-Baek Seong
dc.contributor.googleauthor: Jae Young Choi
dc.contributor.googleauthor: Hae-Jeong Park
dc.identifier.doi: 10.1016/j.ebiom.2019.06.050
dc.contributor.localId: A01730
dc.contributor.localId: A04173
dc.relation.journalcode: J03279
dc.identifier.eissn: 2352-3964
dc.identifier.pmid: 31272902
dc.subject.keyword: Convolutional neural network
dc.subject.keyword: Deep learning
dc.subject.keyword: Ear disease
dc.subject.keyword: Ensemble learning
dc.subject.keyword: Otoendoscopy
dc.subject.keyword: Tympanic membrane
dc.contributor.alternativeName: Park, Hae Jeong
dc.contributor.affiliatedAuthor: 박해정
dc.contributor.affiliatedAuthor: 최재영
dc.citation.volume: 45
dc.citation.startPage: 606
dc.citation.endPage: 614
dc.identifier.bibliographicCitation: EBioMedicine, Vol.45 : 606-614, 2019
Appears in Collections:
1. College of Medicine (의과대학) > Dept. of Nuclear Medicine (핵의학교실) > 1. Journal Papers
1. College of Medicine (의과대학) > Dept. of Otorhinolaryngology (이비인후과학교실) > 1. Journal Papers


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.