
Cited 0 times.

Enhanced Respiratory Sound Classification Using Deep Learning and Multi-Channel Auscultation

DC Field | Value | Language
dc.contributor.author | Kim, Yeonkyeong | -
dc.contributor.author | Kim, Kyu Bom | -
dc.contributor.author | Leem, Ah Young | -
dc.contributor.author | Kim, Kyuseok | -
dc.contributor.author | Lee, Su Hwan | -
dc.date.accessioned | 2025-10-24T07:26:55Z | -
dc.date.available | 2025-10-24T07:26:55Z | -
dc.date.created | 2025-09-22 | -
dc.date.issued | 2025-08 | -
dc.identifier.uri | https://ir.ymlib.yonsei.ac.kr/handle/22282913/207903 | -
dc.description.abstract | Background/Objectives: Identifying and classifying abnormal lung sounds is essential for diagnosing patients with respiratory disorders. In particular, the simultaneous recording of auscultation signals from multiple clinically relevant positions offers greater diagnostic potential than traditional single-channel measurements. This study aims to improve the accuracy of respiratory sound classification by leveraging multi-channel signals and capturing positional characteristics from multiple sites in the same patient. Methods: We evaluated the performance of respiratory sound classification using multi-channel lung sound data with a deep learning model that combines a convolutional neural network (CNN) and long short-term memory (LSTM), based on mel-frequency cepstral coefficients (MFCCs). We analyzed the impact of the number and placement of channels on classification performance. Results: The results demonstrated that using four-channel recordings improved accuracy, sensitivity, specificity, precision, and F1-score by approximately 1.11, 1.15, 1.05, 1.08, and 1.13 times, respectively, compared to using three-, two-, or single-channel recordings. Conclusions: This study confirms that multi-channel data capture a richer set of features corresponding to various respiratory sound characteristics, leading to significantly improved classification performance. The proposed method holds promise for enhancing sound classification accuracy not only in clinical applications but also in broader domains such as speech and audio processing. | -
dc.language | English | -
dc.publisher | MDPI AG | -
dc.relation.isPartOf | JOURNAL OF CLINICAL MEDICINE | -
dc.title | Enhanced Respiratory Sound Classification Using Deep Learning and Multi-Channel Auscultation | -
dc.type | Article | -
dc.contributor.googleauthor | Kim, Yeonkyeong | -
dc.contributor.googleauthor | Kim, Kyu Bom | -
dc.contributor.googleauthor | Leem, Ah Young | -
dc.contributor.googleauthor | Kim, Kyuseok | -
dc.contributor.googleauthor | Lee, Su Hwan | -
dc.identifier.doi | 10.3390/jcm14155437 | -
dc.relation.journalcode | J03556 | -
dc.identifier.eissn | 2077-0383 | -
dc.identifier.pmid | 40807059 | -
dc.subject.keyword | multi-channel lung sound | -
dc.subject.keyword | deep learning | -
dc.subject.keyword | mel-frequency cepstral coefficient | -
dc.subject.keyword | abnormal respiratory sounds | -
dc.subject.keyword | clinical implication | -
dc.contributor.affiliatedAuthor | Kim, Yeonkyeong | -
dc.contributor.affiliatedAuthor | Leem, Ah Young | -
dc.contributor.affiliatedAuthor | Lee, Su Hwan | -
dc.identifier.wosid | 001548825700001 | -
dc.citation.volume | 14 | -
dc.citation.number | 15 | -
dc.identifier.bibliographicCitation | JOURNAL OF CLINICAL MEDICINE, Vol.14(15), 2025-08 | -
dc.identifier.rimsid | 89438 | -
dc.type.rims | ART | -
dc.description.journalClass | 1 | -
dc.subject.keywordAuthor | multi-channel lung sound | -
dc.subject.keywordAuthor | deep learning | -
dc.subject.keywordAuthor | mel-frequency cepstral coefficient | -
dc.subject.keywordAuthor | abnormal respiratory sounds | -
dc.subject.keywordAuthor | clinical implication | -
dc.type.docType | Article | -
dc.description.isOpenAccess | Y | -
dc.description.journalRegisteredClass | scie | -
dc.description.journalRegisteredClass | scopus | -
dc.relation.journalWebOfScienceCategory | Medicine, General & Internal | -
dc.relation.journalResearchArea | General & Internal Medicine | -
dc.identifier.articleno | 5437 | -
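The abstract describes MFCC features feeding a CNN–LSTM classifier. As a rough illustration of the feature-extraction stage only, below is a minimal MFCC sketch in plain NumPy (frame/hop sizes, sampling rate, and filterbank parameters are illustrative assumptions, not the paper's settings, and the paper's model itself is not reproduced here):

```python
import numpy as np

def mfcc(signal, sr=4000, n_fft=256, hop=128, n_mels=20, n_ceps=13):
    """Toy MFCC extractor: framing -> power spectrum -> mel filterbank -> log -> DCT."""
    # Frame the signal with a Hann window.
    frames = np.array([signal[s:s + n_fft] * np.hanning(n_fft)
                       for s in range(0, len(signal) - n_fft + 1, hop)])
    # Power spectrum of each frame.
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    # Triangular mel filterbank (HTK-style mel scale).
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    imel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    hz_pts = imel(np.linspace(0.0, mel(sr / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * hz_pts / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fb[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)   # rising edge
        fb[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)   # falling edge
    # Log mel energies, then DCT-II to decorrelate into cepstral coefficients.
    logmel = np.log(spec @ fb.T + 1e-10)
    n = logmel.shape[1]
    dct = np.cos(np.pi / n * (np.arange(n) + 0.5)[None, :] * np.arange(n_ceps)[:, None])
    return logmel @ dct.T  # shape: (num_frames, n_ceps)

# Example: one second of synthetic noise standing in for a lung-sound recording.
rng = np.random.default_rng(0)
x = rng.standard_normal(4000)
feats = mfcc(x)
print(feats.shape)  # (30, 13)
```

In a multi-channel setting like the one studied here, one such MFCC matrix per auscultation channel would typically be stacked along a channel axis before being passed to the classifier.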
Appears in Collections:
1. College of Medicine (의과대학) > Dept. of Internal Medicine (내과학교실) > 1. Journal Papers


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.