Attention-enhanced segmentation network for automated cerebral microbleed detection and burden assessment

DC Field Value Language
dc.contributor.authorCho, Kwon Hwi-
dc.contributor.authorJeon, Jonghyun-
dc.contributor.authorKim, Seonggyu-
dc.contributor.authorKim, Young Seo-
dc.contributor.authorKim, Yu-Mi-
dc.contributor.authorKim, Mi Kyung-
dc.contributor.authorShin, Min-Ho-
dc.contributor.authorChung, Insung-
dc.contributor.authorKoh, Sang Baek-
dc.contributor.authorKim, Hyeon Chang-
dc.contributor.authorPark, Chae Jung-
dc.contributor.authorLee, Jong-Min-
dc.date.accessioned2026-03-31T01:20:51Z-
dc.date.available2026-03-31T01:20:51Z-
dc.date.created2026-03-24-
dc.date.issued2026-03-
dc.identifier.issn1662-4548-
dc.identifier.urihttps://ir.ymlib.yonsei.ac.kr/handle/22282913/211638-
dc.description.abstractIntroduction: Cerebral microbleeds (CMBs) are small hemorrhagic lesions visible as hypointense foci on susceptibility-sensitive MRI and are established biomarkers of stroke risk and amyloid-related imaging abnormalities (ARIA-H) in patients receiving anti-amyloid therapy. However, automated detection remains challenging because true CMBs closely resemble veins, calcifications, and susceptibility artifacts. This visual ambiguity results in a persistent precision-recall trade-off, in which models optimized for high sensitivity tend to generate excessive false positives, while precision-focused models risk missing clinically relevant lesions. To address this limitation, we propose an attention-enhanced segmentation framework designed to suppress confounding activations while preserving lesion sensitivity.
Methods: We developed RLK-UNet with Convolutional Block Attention Modules (CBAM), a single-stage encoder-decoder architecture that redefines skip connections as context-filtered pathways. The encoder incorporates large 13×13 residual local kernel (RLK) convolutions to capture broad contextual information for distinguishing spherical microbleeds from elongated vascular structures. CBAM modules are embedded in all skip connections to selectively enhance lesion-relevant features and suppress irrelevant background responses before feature fusion. The model was trained and evaluated on a multi-site dataset of 506 T2*-GRE and SWI scans, with lesion-level detection assessed using precision, recall, F1-score, and average false positives per scan. Subject-level burden estimation was further evaluated using ARIA-H severity intervals.
Results: The proposed model achieved state-of-the-art lesion-level performance, with a precision of 0.891, recall of 0.887, F1-score of 0.887, and a markedly reduced false positive rate of 0.83 per subject. Five-fold cross-validation demonstrated stable performance with minimal variance across splits. In lesions <= 3 mm, the model maintained strong detection performance (F1-score 0.869) while effectively controlling false positives. Cross-modality evaluation between T2*-GRE and SWI confirmed robust generalization. Ablation studies verified that CBAM significantly improved precision while preserving sensitivity, and Grad-CAM visualizations demonstrated more spatially focused and clinically interpretable attention patterns. Subject-level CMB counts correlated strongly with ground truth (Spearman rho = 0.93), and severity classification aligned with ARIA-H intervals.
Conclusion: RLK-UNet with CBAM provides a robust and interpretable solution for automated CMB detection by directly addressing false-positive propagation through attention-guided skip connections. The framework achieves balanced precision and sensitivity within a single-stage architecture and demonstrates reliable subject-level burden estimation aligned with clinically meaningful ARIA-H categories. These findings support its potential application in vascular risk stratification and treatment monitoring in patients undergoing anti-amyloid therapy.-
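The CBAM mechanism described in the abstract gates each skip-connection feature map along the channel dimension and then the spatial dimensions before fusion. The sketch below is an illustrative, dependency-light NumPy version of that gating; the function name `cbam_attention`, the omission of the learned bottleneck MLP and 7×7 convolution, and all shapes are assumptions for demonstration, not the authors' implementation.

```python
import numpy as np

def cbam_attention(x):
    """Illustrative CBAM-style gating on a feature map x of shape (C, H, W).

    Channel attention: spatial average- and max-pooled descriptors are summed
    and squashed to a per-channel gate. Spatial attention: channel-wise
    average and max maps are summed and squashed to a per-pixel gate.
    The learned shared MLP and 7x7 convolution of the full module are
    omitted here to keep the sketch self-contained (hypothetical simplification).
    """
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    # --- channel attention: squeeze spatial dims to per-channel descriptors
    avg_desc = x.mean(axis=(1, 2))           # (C,)
    max_desc = x.max(axis=(1, 2))            # (C,)
    ch_gate = sigmoid(avg_desc + max_desc)   # (C,), each gate in (0, 1)
    x = x * ch_gate[:, None, None]

    # --- spatial attention: squeeze channels to a single 2-D map
    avg_map = x.mean(axis=0)                 # (H, W)
    max_map = x.max(axis=0)                  # (H, W)
    sp_gate = sigmoid(avg_map + max_map)     # (H, W), each gate in (0, 1)
    return x * sp_gate[None, :, :]
```

Because both gates lie in (0, 1), the module can only attenuate responses, never amplify them, which is how background (vein/artifact) activations are suppressed in the skip pathway while retained lesion features pass through.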
dc.languageEnglish-
dc.publisherFrontiers Research Foundation-
dc.relation.isPartOfFRONTIERS IN NEUROSCIENCE-
dc.titleAttention-enhanced segmentation network for automated cerebral microbleed detection and burden assessment-
dc.typeArticle-
dc.contributor.googleauthorCho, Kwon Hwi-
dc.contributor.googleauthorJeon, Jonghyun-
dc.contributor.googleauthorKim, Seonggyu-
dc.contributor.googleauthorKim, Young Seo-
dc.contributor.googleauthorKim, Yu-Mi-
dc.contributor.googleauthorKim, Mi Kyung-
dc.contributor.googleauthorShin, Min-Ho-
dc.contributor.googleauthorChung, Insung-
dc.contributor.googleauthorKoh, Sang Baek-
dc.contributor.googleauthorKim, Hyeon Chang-
dc.contributor.googleauthorPark, Chae Jung-
dc.contributor.googleauthorLee, Jong-Min-
dc.identifier.doi10.3389/fnins.2026.1743039-
dc.relation.journalcodeJ02867-
dc.identifier.eissn1662-453X-
dc.identifier.pmid41859233-
dc.subject.keywordARIA-H-
dc.subject.keywordattention mechanism-
dc.subject.keywordCBAM-
dc.subject.keywordcerebral microbleeds-
dc.subject.keywordsegmentation-
dc.contributor.affiliatedAuthorKim, Hyeon Chang-
dc.contributor.affiliatedAuthorPark, Chae Jung-
dc.identifier.scopusid2-s2.0-105033009742-
dc.identifier.wosid001716869200001-
dc.citation.volume20-
dc.identifier.bibliographicCitationFRONTIERS IN NEUROSCIENCE, Vol.20, 2026-03-
dc.identifier.rimsid92209-
dc.type.rimsART-
dc.description.journalClass1-
dc.subject.keywordAuthorARIA-H-
dc.subject.keywordAuthorattention mechanism-
dc.subject.keywordAuthorCBAM-
dc.subject.keywordAuthorcerebral microbleeds-
dc.subject.keywordAuthorsegmentation-
dc.subject.keywordPlusINTRACEREBRAL HEMORRHAGE-
dc.type.docTypeArticle-
dc.description.isOpenAccessY-
dc.description.journalRegisteredClassscie-
dc.description.journalRegisteredClassscopus-
dc.relation.journalWebOfScienceCategoryNeurosciences-
dc.relation.journalResearchAreaNeurosciences & Neurology-
dc.identifier.articleno1743039-
Appears in Collections:
1. College of Medicine (의과대학) > Dept. of Preventive Medicine (예방의학교실) > 1. Journal Papers
1. College of Medicine (의과대학) > Dept. of Radiology (영상의학교실) > 1. Journal Papers


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.