Synthetic CT reconstruction using a deep spatial pyramid convolutional framework for MR-only breast radiotherapy

DC Field: Value
dc.contributor.author: 김진성
dc.date.accessioned: 2019-10-28T02:02:57Z
dc.date.available: 2019-10-28T02:02:57Z
dc.date.issued: 2019
dc.identifier.issn: 0094-2405
dc.identifier.uri: https://ir.ymlib.yonsei.ac.kr/handle/22282913/171458
dc.description.abstract:
  PURPOSE: The superior soft-tissue contrast achieved using magnetic resonance imaging (MRI) compared to x-ray computed tomography (CT) has led to the popularization of MRI-guided radiation therapy (MR-IGRT), especially in recent years with the advent of first and second generation MRI-based therapy delivery systems for MR-IGRT. The expanding use of these systems is driving interest in MRI-only RT workflows in which MRI is the sole imaging modality used for treatment planning and dose calculations. To enable such a workflow, synthetic CT (sCT) data must be generated based on a patient's MRI data so that dose calculations may be performed using the electron density information derived from CT images. In this study, we propose a novel deep spatial pyramid convolutional framework for the MRI-to-CT image-to-image translation task and compare its performance to the well established U-Net architecture in a generative adversarial network (GAN) framework.
  METHODS: Our proposed framework utilizes atrous convolution in a method named atrous spatial pyramid pooling (ASPP) to significantly reduce the total number of parameters required to describe the model while effectively capturing rich, multi-scale structural information in a manner that is not possible in the conventional framework. The proposed framework consists of a generative model composed of stacked encoders and decoders separated by the ASPP module, where atrous convolution is applied at increasing rates in parallel to encode large-scale features. The performance of the proposed method is compared to that of the conventional GAN framework in terms of the time required to train the model and the image quality of the generated sCT as measured by the root mean square error (RMSE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR) depending on the size of the training data set. Dose calculations based on sCT data generated using the proposed architecture are also compared to clinical plans to evaluate the dosimetric accuracy of the method.
  RESULTS: Significant reductions in training time and improvements in image quality are observed at every training data set size when the proposed framework is adopted instead of the conventional framework. Over 1042 test images, values of 17.7 ± 4.3 HU, 0.9995 ± 0.0003, and 71.7 ± 2.3 are observed for the RMSE, SSIM, and PSNR metrics, respectively. Dose distributions calculated based on sCT data generated using the proposed framework demonstrate passing rates equal to or greater than 98% using the 3D gamma index with a 2%/2 mm criterion.
  CONCLUSIONS: The deep spatial pyramid convolutional framework proposed here demonstrates improved performance compared to the conventional GAN framework that has been applied to the image-to-image translation task of sCT generation. Adopting the method is a first step toward an MRI-only RT workflow that enables widespread clinical applications for MR-IGRT including online adaptive therapy.
dc.description.statementOfResponsibility: restriction
dc.language: English
dc.publisher: Published for the American Assn. of Physicists in Medicine by the American Institute of Physics.
dc.relation.isPartOf: Medical Physics
dc.rights: CC BY-NC-ND 2.0 KR
dc.title: Synthetic CT reconstruction using a deep spatial pyramid convolutional framework for MR-only breast radiotherapy
dc.type: Article
dc.contributor.college: College of Medicine (의과대학)
dc.contributor.department: Dept. of Radiation Oncology (방사선종양학교실)
dc.contributor.googleauthor: Sven Olberg
dc.contributor.googleauthor: Hao Zhang
dc.contributor.googleauthor: William R. Kennedy
dc.contributor.googleauthor: Jaehee Chun
dc.contributor.googleauthor: Vivian Rodriguez
dc.contributor.googleauthor: Imran Zoberi
dc.contributor.googleauthor: Maria A. Thomas
dc.contributor.googleauthor: Jin Sung Kim
dc.contributor.googleauthor: Sasa Mutic
dc.contributor.googleauthor: Olga L. Green
dc.contributor.googleauthor: Justin C. Park
dc.identifier.doi: 10.1002/mp.13716
dc.contributor.localId: A04548
dc.relation.journalcode: J02206
dc.identifier.eissn: 2473-4209
dc.identifier.pmid: 31309586
dc.identifier.url: https://aapm.onlinelibrary.wiley.com/doi/full/10.1002/mp.13716
dc.subject.keyword: MRI
dc.subject.keyword: MRI-guided RT
dc.subject.keyword: MRI-only RT
dc.subject.keyword: machine learning
dc.subject.keyword: synthetic CT
dc.contributor.alternativeName: Kim, Jinsung
dc.contributor.affiliatedAuthor: 김진성
dc.citation.volume: 46
dc.citation.number: 9
dc.citation.startPage: 4135
dc.citation.endPage: 4147
dc.identifier.bibliographicCitation: Medical Physics, Vol.46(9) : 4135-4147, 2019
dc.identifier.rimsid: 63916
dc.type.rims: ART
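
The record itself does not include the authors' code. As a point of reference for the abstract's description of an atrous spatial pyramid pooling (ASPP) module placed between stacked encoders and decoders, the following is a minimal sketch of such a block: parallel 3x3 convolutions at increasing dilation rates whose outputs are concatenated and fused. The channel counts, dilation rates, and PyTorch framing are illustrative assumptions, not details taken from the paper.

```python
# Minimal ASPP sketch (assumed layer sizes and rates, not the authors' model).
import torch
import torch.nn as nn


class ASPP(nn.Module):
    def __init__(self, in_channels: int, out_channels: int, rates=(1, 6, 12, 18)):
        super().__init__()
        # One 3x3 convolution per dilation rate; padding=rate preserves spatial size.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_channels, out_channels, kernel_size=3,
                          padding=rate, dilation=rate, bias=False),
                nn.BatchNorm2d(out_channels),
                nn.ReLU(inplace=True),
            )
            for rate in rates
        ])
        # 1x1 convolution to fuse the concatenated multi-scale features.
        self.project = nn.Sequential(
            nn.Conv2d(out_channels * len(rates), out_channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [branch(x) for branch in self.branches]
        return self.project(torch.cat(features, dim=1))


if __name__ == "__main__":
    # Example: 64 encoder feature maps for a 256x256 slice.
    x = torch.randn(1, 64, 256, 256)
    print(ASPP(64, 64)(x).shape)  # torch.Size([1, 64, 256, 256])
```

Dilated convolutions enlarge the receptive field without adding parameters per branch, which is consistent with the abstract's claim of capturing multi-scale structure with a smaller model. Likewise, the image-quality metrics reported in the abstract (RMSE in HU, SSIM, PSNR) could be computed for a CT / synthetic-CT slice pair roughly as sketched below, using scikit-image for SSIM and PSNR; the HU data range, array shapes, and toy data are assumptions.

```python
# Sketch of sCT image-quality scoring with the metrics named in the abstract.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def score_sct(ct: np.ndarray, sct: np.ndarray, data_range: float = 2000.0):
    """Return (RMSE in HU, SSIM, PSNR in dB) for one CT / synthetic-CT pair.

    `data_range` is an assumed HU dynamic range; the paper's exact
    normalization is not given in this record.
    """
    diff = ct.astype(np.float64) - sct.astype(np.float64)
    rmse = float(np.sqrt(np.mean(diff ** 2)))
    ssim = structural_similarity(ct, sct, data_range=data_range)
    psnr = peak_signal_noise_ratio(ct, sct, data_range=data_range)
    return rmse, ssim, psnr


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ct = rng.uniform(-1000.0, 1000.0, size=(256, 256))
    sct = ct + rng.normal(0.0, 20.0, size=ct.shape)  # toy "synthetic CT"
    print(score_sct(ct, sct))
```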
Appears in Collections:
1. College of Medicine (의과대학) > Dept. of Radiation Oncology (방사선종양학교실) > 1. Journal Papers

