
Fully Automated CT Quantification of Epicardial Adipose Tissue by Deep Learning: A Multicenter Study

DC Field: Value
dc.contributor.author: 장혁재
dc.date.accessioned: 2020-07-17T00:38:58Z
dc.date.available: 2020-07-17T00:38:58Z
dc.date.issued: 2019-11
dc.identifier.uri: https://ir.ymlib.yonsei.ac.kr/handle/22282913/178396
dc.description.abstract: Purpose: To evaluate the performance of deep learning for robust and fully automated quantification of epicardial adipose tissue (EAT) from multicenter cardiac CT data. Materials and methods: In this multicenter study, a convolutional neural network approach was trained to quantify EAT on non-contrast material-enhanced calcium-scoring CT scans from multiple cohorts, scanners, and protocols (n = 850). Deep learning performance was compared with the performance of three expert readers and with interobserver variability in a subset of 141 scans. The deep learning algorithm was incorporated into research software. Automated EAT progression was compared with expert measurements for 70 patients with baseline and follow-up scans. Results: Automated quantification was performed in a mean (± standard deviation) time of 1.57 seconds ± 0.49, compared with 15 minutes for experts. Deep learning provided high agreement with expert manual quantification for all scans (R = 0.974; P < .001), with no significant bias (0.53 cm3; P = .13). Manual EAT volumes measured by two experienced readers were highly correlated (R = 0.984; P < .001) but with a bias of 4.35 cm3 (P < .001). Deep learning quantifications were highly correlated with the measurements of both experts (R = 0.973 and R = 0.979; P < .001), with significant bias for reader 1 (5.11 cm3; P < .001) but not for reader 2 (0.88 cm3; P = .26). EAT progression by deep learning correlated strongly with manual EAT progression (R = 0.905; P < .001) in 70 patients, with no significant bias (0.64 cm3; P = .43), and was related to an increased noncalcified plaque burden quantified from coronary CT angiography (5.7% vs 1.8%; P = .026). Conclusion: Deep learning allows rapid, robust, and fully automated quantification of EAT from calcium scoring CT. It performs as well as an expert reader and can be implemented for routine cardiovascular risk assessment. © RSNA, 2019. See also the commentary by Schoepf and Abadia in this issue.
dc.description.statementOfResponsibility: open
dc.language: English
dc.publisher: RADIOLOGY-ARTIFICIAL INTELLIGENCE
dc.relation.isPartOf: RADIOLOGY-ARTIFICIAL INTELLIGENCE
dc.rights: CC BY-NC-ND 2.0 KR
dc.title: Fully Automated CT Quantification of Epicardial Adipose Tissue by Deep Learning: A Multicenter Study
dc.type: Article
dc.contributor.college: College of Medicine (의과대학)
dc.contributor.department: Dept. of Internal Medicine (내과학교실)
dc.contributor.googleauthor: Frederic Commandeur
dc.contributor.googleauthor: Markus Goeller
dc.contributor.googleauthor: Aryabod Razipour
dc.contributor.googleauthor: Sebastien Cadet
dc.contributor.googleauthor: Michaela M Hell
dc.contributor.googleauthor: Jacek Kwiecinski
dc.contributor.googleauthor: Xi Chen
dc.contributor.googleauthor: Hyuk-Jae Chang
dc.contributor.googleauthor: Mohamed Marwan
dc.contributor.googleauthor: Stephan Achenbach
dc.contributor.googleauthor: Daniel S Berman
dc.contributor.googleauthor: Piotr J Slomka
dc.contributor.googleauthor: Balaji K Tamarappoo
dc.contributor.googleauthor: Damini Dey
dc.identifier.doi: 10.1148/ryai.2019190045
dc.contributor.localId: A03490
dc.relation.journalcode: J03846
dc.identifier.eissn: 2638-6100
dc.identifier.pmid: 32090206
dc.identifier.url: https://pubs.rsna.org/doi/10.1148/ryai.2019190045
dc.contributor.alternativeName: Chang, Hyuck Jae
dc.contributor.affiliatedAuthor: 장혁재
dc.citation.volume: 1
dc.citation.number: 6
dc.citation.startPage: e190045
dc.identifier.bibliographicCitation: RADIOLOGY-ARTIFICIAL INTELLIGENCE, Vol.1(6) : e190045, 2019-11
dc.identifier.rimsid: 64586
dc.type.rims: ART
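
The abstract above reports EAT volumes in cm3 derived from segmented CT voxels and summarizes method agreement as a Pearson correlation (R) together with a mean bias tested by a paired comparison. The short Python sketch below illustrates how such figures can be computed from paired volume measurements. It is not the study's code: the eat_volume_cm3 helper, the variable names, the voxel spacing, and all example numbers are hypothetical.

import numpy as np
from scipy import stats

def eat_volume_cm3(mask, spacing_mm):
    """Volume of a binary segmentation mask in cm3, given voxel spacing in mm (z, y, x)."""
    voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return float(mask.sum()) * voxel_mm3 / 1000.0  # 1000 mm3 = 1 cm3

# Toy binary mask on a 3.0 x 0.68 x 0.68 mm voxel grid (hypothetical geometry).
mask = np.zeros((16, 512, 512), dtype=bool)
mask[4:8, 200:260, 200:260] = True
print(f"toy mask volume = {eat_volume_cm3(mask, (3.0, 0.68, 0.68)):.1f} cm3")

# Hypothetical paired EAT volumes (cm3): automated output vs. one expert reader.
auto_eat = np.array([92.4, 110.3, 75.8, 130.1, 88.9, 101.6])
manual_eat = np.array([91.0, 112.7, 74.2, 128.5, 90.3, 100.9])

r, p_corr = stats.pearsonr(auto_eat, manual_eat)   # correlation between the two methods
bias = float(np.mean(auto_eat - manual_eat))       # mean paired difference (bias)
_, p_bias = stats.ttest_rel(auto_eat, manual_eat)  # P value for the bias differing from 0

print(f"R = {r:.3f} (P = {p_corr:.3g}); bias = {bias:.2f} cm3 (P = {p_bias:.2f})")

With real paired measurements, r and bias correspond to the R values and cm3 biases quoted in the Results, and the paired t-test P value indicates whether the bias is significant.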
Appears in Collections:
1. College of Medicine (의과대학) > Dept. of Internal Medicine (내과학교실) > 1. Journal Papers

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.