

Self-supervised denoising of projection data for low-dose cone-beam CT

DC Field  Value
dc.contributor.author  김성원
dc.contributor.author  김승형
dc.date.accessioned  2024-05-30T06:49:16Z
dc.date.available  2024-05-30T06:49:16Z
dc.date.issued  2023-10
dc.identifier.issn  0094-2405
dc.identifier.uri  https://ir.ymlib.yonsei.ac.kr/handle/22282913/199360
dc.description.abstract  Background: Convolutional neural networks (CNNs) have shown promising results in image denoising tasks. While most existing CNN-based methods depend on supervised learning that directly maps noisy inputs to clean targets, high-quality references are often unavailable in interventional radiology settings such as cone-beam computed tomography (CBCT). Purpose: In this paper, we propose a novel self-supervised learning method that reduces noise in projections acquired by ordinary CBCT scans. Methods: With a network that partially blinds its input, we train the denoising model by mapping the partially blinded projections to the original projections. Additionally, we incorporate noise-to-noise learning into the self-supervised learning by mapping adjacent projections to the original projections. With standard image reconstruction methods such as FDK-type algorithms, we can reconstruct high-quality CBCT images from the projections denoised by our projection-domain denoising method. Results: In the head phantom study, we measure peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) values of the proposed method along with the other denoising methods and uncorrected low-dose CBCT data for a quantitative comparison in both the projection and image domains. The PSNR and SSIM values of our self-supervised denoising approach are 27.08 and 0.839, whereas those of the uncorrected CBCT images are 15.68 and 0.103, respectively. In the retrospective study, we assess the quality of interventional patient CBCT images to evaluate the projection-domain and image-domain denoising methods. Both qualitative and quantitative results indicate that our approach can effectively produce high-quality CBCT images from low-dose projections in the absence of duplicate clean or noisy references. Conclusions: Our self-supervised learning strategy is capable of restoring anatomical information while efficiently removing noise from CBCT projection data.
dc.description.statementOfResponsibility  open
dc.language  English
dc.publisher  Published for the American Assn. of Physicists in Medicine by the American Institute of Physics.
dc.relation.isPartOf  MEDICAL PHYSICS
dc.rights  CC BY-NC-ND 2.0 KR
dc.subject.MESH  Algorithms
dc.subject.MESH  Cone-Beam Computed Tomography* / methods
dc.subject.MESH  Humans
dc.subject.MESH  Image Processing, Computer-Assisted / methods
dc.subject.MESH  Neural Networks, Computer*
dc.subject.MESH  Phantoms, Imaging
dc.subject.MESH  Retrospective Studies
dc.title  Self-supervised denoising of projection data for low-dose cone-beam CT
dc.type  Article
dc.contributor.college  College of Medicine (의과대학)
dc.contributor.department  Dept. of Radiology (영상의학교실)
dc.contributor.googleauthor  Kihwan Choi
dc.contributor.googleauthor  Seung Hyoung Kim
dc.contributor.googleauthor  Sungwon Kim
dc.identifier.doi  10.1002/mp.16421
dc.contributor.localId  A06542
dc.contributor.localId  A00663
dc.relation.journalcode  J02206
dc.identifier.eissn  2473-4209
dc.identifier.pmid  37079443
dc.subject.keyword  cone-beam CT
dc.subject.keyword  dose reduction
dc.subject.keyword  model fusion
dc.subject.keyword  projection-domain denoising
dc.subject.keyword  self-supervised learning
dc.contributor.alternativeName  Kim, Sungwon
dc.contributor.affiliatedAuthor  김성원
dc.contributor.affiliatedAuthor  김승형
dc.citation.volume  50
dc.citation.number  10
dc.citation.startPage  6319
dc.citation.endPage  6333
dc.identifier.bibliographicCitation  MEDICAL PHYSICS, Vol.50(10) : 6319-6333, 2023-10
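The Methods summary in the abstract (training a denoiser by mapping partially blinded projections back to the originals, with the loss evaluated only at the blinded pixels) can be illustrated with a minimal blind-spot masking sketch. Everything below is an assumption for illustration: the function names, the 5×5 neighbour window, and the masking fraction are not taken from the paper, and the actual network and training loop are omitted.

```python
import numpy as np

def blind_spot_inputs(proj, frac=0.01, seed=None):
    """Partially blind a projection: replace a random subset of pixels
    with randomly chosen nearby pixels (Noise2Void-style masking).
    Returns the blinded projection and the boolean mask of blinded
    positions; a denoiser would be trained to map the blinded input
    back to the original, with the loss restricted to the mask."""
    rng = np.random.default_rng(seed)
    h, w = proj.shape
    n = max(1, int(frac * h * w))
    ys = rng.integers(0, h, n)
    xs = rng.integers(0, w, n)
    # Neighbour offsets within a 5x5 window; avoid the (0, 0) offset
    # so a pixel is never "replaced" by itself (except at clipped edges).
    dy = rng.integers(-2, 3, n)
    dx = rng.integers(-2, 3, n)
    dy[(dy == 0) & (dx == 0)] = 1
    ny = np.clip(ys + dy, 0, h - 1)
    nx = np.clip(xs + dx, 0, w - 1)
    blinded = proj.copy()
    blinded[ys, xs] = proj[ny, nx]
    mask = np.zeros_like(proj, dtype=bool)
    mask[ys, xs] = True
    return blinded, mask

def masked_mse(pred, target, mask):
    """Self-supervised loss: compare prediction and original projection
    only at the blinded pixel positions."""
    return float(np.mean((pred[mask] - target[mask]) ** 2))
```

In a full training loop, `blinded` would be fed to the CNN and `masked_mse(net(blinded), proj, mask)` minimised; the abstract's noise-to-noise component would add a second term mapping adjacent projections to the current one.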
Appears in Collections:
1. College of Medicine (의과대학) > Dept. of Radiology (영상의학교실) > 1. Journal Papers

