

Unpaired-Paired Learning for Shading Correction in Cone-Beam Computed Tomography

DC Field: Value
dc.contributor.author: 이상휘
dc.date.accessioned: 2022-05-09T17:26:26Z
dc.date.available: 2022-05-09T17:26:26Z
dc.date.issued: 2022-02
dc.identifier.uri: https://ir.ymlib.yonsei.ac.kr/handle/22282913/188569
dc.description.abstract: Cone-beam computed tomography (CBCT) is widely used in dental and maxillofacial imaging. However, CBCT suffers from shading artifacts caused by several factors, including photon scattering and data truncation. This paper presents a deep-learning-based method for eliminating the shading artifacts that interfere with diagnosis and treatment. The proposed method is a two-stage generative adversarial network (GAN)-based image-to-image translation that operates on unpaired CBCT and multidetector computed tomography (MDCT) images. The first stage uses a generic GAN together with the fidelity difference between the original CBCT image and the MDCT-like image generated by the network. Although this approach is generally effective for denoising, it sometimes introduces additional artifacts that appear as bone-like structures in the output images, because the weak input fidelity between the two imaging modalities makes it difficult to preserve morphological structures in the presence of complex shading artifacts. The second stage addresses this problem: paired training data, excluding inappropriate samples, are collected from the first-stage results, and the fidelity-embedded GAN is retrained on the selected pairs. The results reveal that the proposed approach substantially reduces the shading artifacts, as well as the secondary artifacts arising from incorrect data fidelity, while preserving the morphological structures of the original CBCT image. In addition, the image corrected by the proposed method enables more accurate bone segmentation than both the original CBCT image and the image corrected by the unpaired method alone.
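The pair-selection step between the two stages described in the abstract can be sketched as follows. This is a minimal pure-Python sketch, not the authors' implementation: the stage-1 translator, the mean-absolute-difference fidelity score, and the threshold value are all hypothetical placeholders standing in for the unpaired GAN and whatever selection criterion the paper actually uses.

```python
def stage1_translate(cbct):
    """Hypothetical stand-in for the stage-1 unpaired GAN that maps a
    CBCT image to an MDCT-like image. Here it simply caps intensities
    at 0.5 to mimic an output that drifts from its input."""
    return [min(v, 0.5) for v in cbct]

def fidelity_gap(original, translated):
    """Mean absolute intensity difference between input and output,
    used as a simple proxy for the fidelity difference."""
    return sum(abs(a - b) for a, b in zip(original, translated)) / len(original)

def build_paired_set(cbct_images, max_gap=0.1):
    """Keep only stage-1 results whose fidelity gap stays below a
    threshold as (input, target) pairs for retraining the
    fidelity-embedded GAN in stage 2; discard the rest as
    inappropriate samples."""
    pairs = []
    for img in cbct_images:
        out = stage1_translate(img)
        if fidelity_gap(img, out) <= max_gap:
            pairs.append((img, out))
    return pairs
```

With the toy translator above, an image whose intensities survive translation unchanged is kept as a training pair, while one that is heavily altered (a large fidelity gap, analogous to the spurious bone-like structures mentioned in the abstract) is filtered out.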
dc.description.statementOfResponsibility: restriction
dc.language: English
dc.publisher: Institute of Electrical and Electronics Engineers
dc.relation.isPartOf: IEEE ACCESS
dc.rights: CC BY-NC-ND 2.0 KR
dc.title: Unpaired-Paired Learning for Shading Correction in Cone-Beam Computed Tomography
dc.type: Article
dc.contributor.college: College of Dentistry (치과대학)
dc.contributor.department: Dept. of Oral and Maxillofacial Surgery (구강악안면외과학교실)
dc.contributor.googleauthor: Hyoung Suk Park
dc.contributor.googleauthor: Kiwan Jeon
dc.contributor.googleauthor: Sang-Hwy Lee
dc.contributor.googleauthor: Jin Keun Seo
dc.identifier.doi: 10.1109/ACCESS.2022.3155203
dc.contributor.localId: A02839
dc.relation.journalcode: J03454
dc.identifier.eissn: 2169-3536
dc.identifier.url: https://ieeexplore.ieee.org/document/9722839
dc.subject.keyword: Generative adversarial networks
dc.subject.keyword: Computed tomography
dc.subject.keyword: Training data
dc.subject.keyword: Training
dc.subject.keyword: Bones
dc.subject.keyword: Scattering
dc.subject.keyword: Dentistry
dc.contributor.alternativeName: Lee, Sang Hwy
dc.contributor.affiliatedAuthor: 이상휘
dc.citation.volume: 10
dc.citation.startPage: 26140
dc.citation.endPage: 26148
dc.identifier.bibliographicCitation: IEEE ACCESS, Vol.10 : 26140-26148, 2022-02
Appears in Collections:
2. College of Dentistry (치과대학) > Dept. of Oral and Maxillofacial Surgery (구강악안면외과학교실) > 1. Journal Papers


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.