Development of a deep neural network for generating synthetic dual-energy chest x-ray images with single x-ray exposure

DC Field: Value
dc.contributor.author: 김휘영
dc.contributor.author: 최병욱
dc.date.accessioned: 2022-08-19T06:29:33Z
dc.date.available: 2022-08-19T06:29:33Z
dc.date.issued: 2019-05
dc.identifier.issn: 0031-9155
dc.identifier.uri: https://ir.ymlib.yonsei.ac.kr/handle/22282913/189217
dc.description.abstract: Dual-energy chest radiography (DECR) is a medical imaging technology that can improve diagnostic accuracy. This technique can decompose single-energy chest radiography (SECR) images into separate bone- and soft tissue-only images. It can, however, double the radiation exposure to the patient. To address this limitation, we developed an algorithm for synthesizing DECR from an SECR image through deep learning. To predict high-resolution images, we developed a novel deep learning architecture by modifying a conventional U-net to take advantage of the high frequency-dominant information that propagates from the encoding part to the decoding part. In addition, we used the anticorrelated relationship (ACR) of DECR to improve the quality of the predicted images. For training data, 300 pairs of SECR and their corresponding DECR images were used. To test the trained model, 50 DECR images from Yonsei University Severance Hospital and 662 publicly accessible SECRs were used. To evaluate the performance of the proposed method, we compared DECR and predicted images using the structural similarity (SSIM) approach. In addition, we quantitatively evaluated image quality by calculating the modulation transfer function and the coefficient of variation. The proposed model selectively predicted the bone- and soft tissue-only CR images from an SECR image. The strategy for improving the spatial resolution by ACR was effective. Quantitative evaluation showed that the proposed method with ACR achieved relatively high SSIM (over 0.85). In addition, predicted images with the proposed ACR model achieved better image quality measures than those of U-net. In conclusion, the proposed method can obtain high-quality bone- and soft tissue-only CR images without the need for additional hardware for double x-ray exposures in clinical practice.
dc.description.statementOfResponsibility: restriction
dc.language: English
dc.publisher: IOP Publishing
dc.relation.isPartOf: PHYSICS IN MEDICINE AND BIOLOGY
dc.rights: CC BY-NC-ND 2.0 KR
dc.subject.MESH: Deep Learning*
dc.subject.MESH: Humans
dc.subject.MESH: Image Processing, Computer-Assisted / methods*
dc.subject.MESH: Radiation Exposure*
dc.subject.MESH: Radiography*
dc.subject.MESH: Thorax / diagnostic imaging*
dc.title: Development of a deep neural network for generating synthetic dual-energy chest x-ray images with single x-ray exposure
dc.type: Article
dc.contributor.college: College of Medicine (의과대학)
dc.contributor.department: Dept. of Biomedical Systems Informatics (의생명시스템정보학교실)
dc.contributor.googleauthor: Donghoon Lee
dc.contributor.googleauthor: Hwiyoung Kim
dc.contributor.googleauthor: Byungwook Choi
dc.contributor.googleauthor: Hee-Joung Kim
dc.identifier.doi: 10.1088/1361-6560/ab1cee
dc.contributor.localId: A05971
dc.contributor.localId: A04059
dc.relation.journalcode: J02523
dc.identifier.eissn: 1361-6560
dc.identifier.pmid: 31026841
dc.identifier.url: https://iopscience.iop.org/article/10.1088/1361-6560/ab1cee
dc.subject.keyword: dual-energy chest radiography
dc.subject.keyword: deep learning
dc.subject.keyword: U-net
dc.subject.keyword: anticorrelated relationship
dc.contributor.alternativeName: Kim, Hwiyoung
dc.contributor.affiliatedAuthor: 김휘영
dc.contributor.affiliatedAuthor: 최병욱
dc.citation.volume: 64
dc.citation.number: 11
dc.citation.startPage: 115017
dc.identifier.bibliographicCitation: PHYSICS IN MEDICINE AND BIOLOGY, Vol.64(11) : 115017, 2019-05
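The abstract above describes a modified U-net that passes high frequency-dominant encoder information to the decoder and predicts bone- and soft tissue-only images from a single SECR. The record does not give the paper's exact architecture or the ACR formulation, so the following is only a minimal PyTorch sketch of the general idea: a small U-net-style encoder-decoder with skip connections and two one-channel output heads, one per decomposed image. The class name DualEnergyUNet, the channel counts, and the two-head design are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' released code): a U-net-style network that
# maps a single-energy chest radiograph (1 channel) to two outputs, a bone-only
# image and a soft tissue-only image. Skip connections pass high-frequency
# encoder features to the decoder, in the spirit of the abstract.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU, preserving spatial size."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


class DualEnergyUNet(nn.Module):
    def __init__(self, base_ch=32):
        super().__init__()
        self.enc1 = conv_block(1, base_ch)
        self.enc2 = conv_block(base_ch, base_ch * 2)
        self.enc3 = conv_block(base_ch * 2, base_ch * 4)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(base_ch * 4, base_ch * 2, 2, stride=2)
        self.dec2 = conv_block(base_ch * 4, base_ch * 2)   # concat with enc2
        self.up1 = nn.ConvTranspose2d(base_ch * 2, base_ch, 2, stride=2)
        self.dec1 = conv_block(base_ch * 2, base_ch)       # concat with enc1
        # Two heads: bone-only and soft tissue-only predictions.
        self.head_bone = nn.Conv2d(base_ch, 1, kernel_size=1)
        self.head_soft = nn.Conv2d(base_ch, 1, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                  # full resolution
        e2 = self.enc2(self.pool(e1))      # 1/2 resolution
        e3 = self.enc3(self.pool(e2))      # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head_bone(d1), self.head_soft(d1)


if __name__ == "__main__":
    model = DualEnergyUNet()
    secr = torch.randn(1, 1, 256, 256)     # dummy single-energy input
    bone, soft = model(secr)
    print(bone.shape, soft.shape)          # both torch.Size([1, 1, 256, 256])
```

The paper additionally exploits an anticorrelated relationship (ACR) between the bone and soft tissue predictions to improve image quality; that term is omitted from the sketch because its exact formulation is not stated in this record.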
Appears in Collections:
1. College of Medicine (의과대학) > Dept. of Biomedical Systems Informatics (의생명시스템정보학교실) > 1. Journal Papers
1. College of Medicine (의과대학) > Dept. of Radiology (영상의학교실) > 1. Journal Papers