Unpaired-Paired Learning for Shading Correction in Cone-Beam Computed Tomography
DC Field | Value | Language |
---|---|---|
dc.contributor.author | 이상휘 | - |
dc.date.accessioned | 2022-05-09T17:26:26Z | - |
dc.date.available | 2022-05-09T17:26:26Z | - |
dc.date.issued | 2022-02 | - |
dc.identifier.uri | https://ir.ymlib.yonsei.ac.kr/handle/22282913/188569 | - |
dc.description.abstract | Cone-beam computed tomography (CBCT) is widely used in dental and maxillofacial imaging applications. However, CBCT suffers from shading artifacts owing to several factors, including photon scattering and data truncation. This paper presents a deep-learning-based method for eliminating the shading artifacts that interfere with diagnosis and treatment planning. The proposed method is a two-stage generative adversarial network (GAN)-based image-to-image translation that operates on unpaired CBCT and multidetector computed tomography (MDCT) images. The first stage uses a generic GAN together with a fidelity term on the difference between the original CBCT image and the MDCT-like image generated by the network. Although this approach is generally effective for denoising, it occasionally introduces secondary artifacts that appear as bone-like structures in the output images, because the weak input fidelity between the two imaging modalities makes it difficult to separate morphological structures from complex shading artifacts. The second stage of the proposed model addresses this problem: paired training data are collected from the first-stage results, inappropriate samples are excluded, and the fidelity-embedded GAN is retrained on the selected pairs. The results obtained in this study reveal that the proposed approach substantially reduces both the shading artifacts and the secondary artifacts arising from incorrect data fidelity while preserving the morphological structures of the original CBCT image. In addition, images corrected with the proposed method enable more accurate bone segmentation than both the original CBCT images and those corrected using the unpaired method alone. | - |
dc.description.statementOfResponsibility | restriction | - |
dc.language | English | - |
dc.publisher | Institute of Electrical and Electronics Engineers | - |
dc.relation.isPartOf | IEEE ACCESS | - |
dc.rights | CC BY-NC-ND 2.0 KR | - |
dc.title | Unpaired-Paired Learning for Shading Correction in Cone-Beam Computed Tomography | - |
dc.type | Article | - |
dc.contributor.college | College of Dentistry (치과대학) | - |
dc.contributor.department | Dept. of Oral and Maxillofacial Surgery (구강악안면외과학교실) | - |
dc.contributor.googleauthor | Hyoung Suk Park | - |
dc.contributor.googleauthor | Kiwan Jeon | - |
dc.contributor.googleauthor | Sang-Hwy Lee | - |
dc.contributor.googleauthor | Jin Keun Seo | - |
dc.identifier.doi | 10.1109/ACCESS.2022.3155203 | - |
dc.contributor.localId | A02839 | - |
dc.relation.journalcode | J03454 | - |
dc.identifier.eissn | 2169-3536 | - |
dc.identifier.url | https://ieeexplore.ieee.org/document/9722839 | - |
dc.subject.keyword | Generative adversarial networks | - |
dc.subject.keyword | Computed tomography | - |
dc.subject.keyword | Training data | - |
dc.subject.keyword | Training | - |
dc.subject.keyword | Bones | - |
dc.subject.keyword | Scattering | - |
dc.subject.keyword | Dentistry | - |
dc.contributor.alternativeName | Lee, Sang Hwy | - |
dc.contributor.affiliatedAuthor | 이상휘 | - |
dc.citation.volume | 10 | - |
dc.citation.startPage | 26140 | - |
dc.citation.endPage | 26148 | - |
dc.identifier.bibliographicCitation | IEEE ACCESS, Vol.10 : 26140-26148, 2022-02 | - |
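The abstract's second stage hinges on selecting well-behaved (CBCT, translated) pairs from the first-stage outputs before retraining. As a toy illustration only, the sketch below filters pairs by a simple normalized cross-correlation between input and output; the fidelity metric, the threshold, and the function name `select_paired_samples` are all assumptions for illustration, not the paper's actual selection criterion.

```python
import numpy as np

def select_paired_samples(cbct_images, translated_images, threshold=0.9):
    """Illustrative stage-2 data selection: keep (input, output) pairs whose
    output preserves the input's morphology. Here morphology preservation is
    scored by normalized cross-correlation -- an assumption, not the paper's
    published criterion."""
    selected = []
    for x, y in zip(cbct_images, translated_images):
        x0 = (x - x.mean()) / (x.std() + 1e-8)
        y0 = (y - y.mean()) / (y.std() + 1e-8)
        fidelity = float((x0 * y0).mean())  # correlation in [-1, 1]
        if fidelity >= threshold:
            selected.append((x, y))
    return selected

# Toy usage: a faithful translation passes; one that destroys
# the input's structure is rejected.
rng = np.random.default_rng(0)
x = rng.random((8, 8))
good = x + 0.01 * rng.random((8, 8))  # near-identical to the input
bad = rng.random((8, 8))              # independent, structure not preserved
pairs = select_paired_samples([x, x], [good, bad], threshold=0.9)
print(len(pairs))  # 1
```

In the paper this selection feeds a retraining pass of the fidelity-embedded GAN on the surviving pairs; the sketch covers only the filtering step.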