
Cited 4 times

Intelligent noninvasive meningioma grading with a fully automatic segmentation using interpretable multiparametric deep learning

Authors
Yohan Jun ; Yae Won Park ; Hyungseob Shin ; Yejee Shin ; Jeong Ryong Lee ; Kyunghwa Han ; Sung Soo Ahn ; Soo Mee Lim ; Dosik Hwang ; Seung-Koo Lee
Citation
EUROPEAN RADIOLOGY, Vol.33(9) : 6124-6133, 2023-09
Journal Title
EUROPEAN RADIOLOGY
ISSN
 0938-7994 
Issue Date
2023-09
MeSH
Deep Learning* ; Humans ; Magnetic Resonance Imaging / methods ; Meningeal Neoplasms* / diagnostic imaging ; Meningeal Neoplasms* / pathology ; Meningioma* / diagnostic imaging ; Meningioma* / pathology ; Neoplasm Grading ; Neuroimaging ; Retrospective Studies
Keywords
Deep learning ; Interpretable ; Magnetic resonance imaging ; Meningioma ; Grading
Abstract
Objectives: To establish a robust interpretable multiparametric deep learning (DL) model for automatic noninvasive grading of meningiomas along with segmentation.
Methods: In total, 257 patients with pathologically confirmed meningiomas (162 low-grade, 95 high-grade) who underwent preoperative brain MRI, including T2-weighted (T2) and contrast-enhanced T1-weighted (T1C) images, were included in the institutional training set. A two-stage DL grading model was constructed for segmentation and classification, based on a multiparametric three-dimensional U-Net and a ResNet. The models were validated in an external validation set of 61 patients with meningiomas (46 low-grade, 15 high-grade). The Relevance-weighted Class Activation Mapping (RCAM) method was used to interpret the DL features contributing to the predictions of the DL grading model.
Results: On external validation, the combined T1C and T2 model achieved a Dice coefficient of 0.910 in segmentation and the highest performance for meningioma grading compared with the T2-only or T1C-only models, with an area under the curve (AUC) of 0.770 (95% confidence interval: 0.644-0.895) and accuracy, sensitivity, and specificity of 72.1%, 73.3%, and 71.7%, respectively. The AUC and accuracy of the combined DL grading model were higher than those of the human readers (AUCs of 0.675-0.690 and accuracies of 65.6-68.9%, respectively). The RCAM of the DL grading model showed activation at the surface regions of meningiomas, indicating that the model recognized features at the tumor margin for grading.
Conclusions: An interpretable multiparametric DL model combining T1C and T2 can enable fully automatic grading of meningiomas along with segmentation.
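The segmentation performance above is reported as a Dice coefficient (0.910 on external validation). For context, the Dice similarity coefficient between a predicted and a reference binary mask can be computed as in this minimal NumPy sketch (illustrative only, not the authors' code; the masks and function name are hypothetical):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2 * |pred AND target| / (|pred| + |target|)."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Toy example: two small overlapping masks (2 overlapping voxels,
# 3 voxels in each mask -> Dice = 2*2 / (3+3) = 0.667).
pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_coefficient(pred, target), 3))  # → 0.667
```

A Dice value near 1.0, such as the 0.910 reported here, indicates close agreement between the automatic segmentation and the reference delineation.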
Full Text
https://link.springer.com/article/10.1007/s00330-023-09590-4
DOI
10.1007/s00330-023-09590-4
Appears in Collections:
1. College of Medicine (의과대학) > Research Institute (부설연구소) > 1. Journal Papers
1. College of Medicine (의과대학) > Dept. of Radiology (영상의학교실) > 1. Journal Papers
Yonsei Authors
Park, Yae Won (박예원) https://orcid.org/0000-0001-8907-5401
Ahn, Sung Soo (안성수) https://orcid.org/0000-0002-0503-5558
Lee, Seung Koo (이승구) https://orcid.org/0000-0001-5646-4072
Han, Kyung Hwa (한경화)
URI
https://ir.ymlib.yonsei.ac.kr/handle/22282913/197954

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
