Cross-Modality Image Translation From Brain 18F-FDG PET/CT Images to Fluid-Attenuated Inversion Recovery Images Using the CypixGAN Framework

DC Field                                     Value
dc.contributor.author                        윤미진
dc.contributor.author                        김동우
dc.date.accessioned                          2024-12-06T03:51:49Z
dc.date.available                            2024-12-06T03:51:49Z
dc.date.issued                               2024-09
dc.identifier.issn                           0363-9762
dc.identifier.uri                            https://ir.ymlib.yonsei.ac.kr/handle/22282913/201264
dc.description.abstract                      Purpose: PET/CT and MRI can accurately diagnose dementia but are expensive and inconvenient for patients. Therefore, we aimed to generate synthetic fluid-attenuated inversion recovery (FLAIR) images from 18F-FDG PET and CT images of the human brain using a generative adversarial network (GAN)-based deep learning framework called CypixGAN, which combines the CycleGAN framework with the L1 loss function of pix2pix. Patients and methods: Data from 143 patients who underwent PET/CT and MRI were used for training (n = 79), validating (n = 20), and testing (n = 44) the deep learning frameworks. Synthetic FLAIR images were generated using pix2pix, CycleGAN, and CypixGAN, and white matter hyperintensities (WMHs) were then segmented. The performance of CypixGAN was compared with that of the other frameworks. Results: CypixGAN outperformed pix2pix and CycleGAN in generating synthetic FLAIR images with superior visual quality. Peak signal-to-noise ratio and structural similarity index (mean ± standard deviation) estimated using CypixGAN (20.23 ± 1.31 and 0.80 ± 0.02, respectively) were significantly higher than those estimated using pix2pix (19.35 ± 1.43 and 0.79 ± 0.02, respectively) and CycleGAN (18.74 ± 1.49 and 0.78 ± 0.02, respectively) (P < 0.001). WMHs in synthetic FLAIR images generated using CypixGAN closely resembled those in ground-truth images, as indicated by low absolute percentage volume differences and high Dice similarity coefficients. Conclusions: CypixGAN generated high-quality FLAIR images owing to the preservation of spatial information despite using unpaired images. This framework may help improve the diagnostic performance and cost-effectiveness of PET/CT when MRI is unavailable.
dc.description.statementOfResponsibility     restriction
dc.language                                  English
dc.publisher                                 Lippincott
dc.relation.isPartOf                         CLINICAL NUCLEAR MEDICINE
dc.rights                                    CC BY-NC-ND 2.0 KR
dc.subject.MESH                              Aged
dc.subject.MESH                              Brain* / diagnostic imaging
dc.subject.MESH                              Deep Learning
dc.subject.MESH                              Female
dc.subject.MESH                              Fluorodeoxyglucose F18*
dc.subject.MESH                              Humans
dc.subject.MESH                              Image Processing, Computer-Assisted* / methods
dc.subject.MESH                              Magnetic Resonance Imaging
dc.subject.MESH                              Male
dc.subject.MESH                              Middle Aged
dc.subject.MESH                              Positron Emission Tomography Computed Tomography*
dc.title                                     Cross-Modality Image Translation From Brain 18F-FDG PET/CT Images to Fluid-Attenuated Inversion Recovery Images Using the CypixGAN Framework
dc.type                                      Article
dc.contributor.college                       College of Medicine (의과대학)
dc.contributor.department                    Dept. of Nuclear Medicine (핵의학교실)
dc.contributor.googleauthor                  Sangwon Lee
dc.contributor.googleauthor                  Jin Ho Jung
dc.contributor.googleauthor                  Yong Choi
dc.contributor.googleauthor                  Eunyeong Seok
dc.contributor.googleauthor                  Jiwoong Jung
dc.contributor.googleauthor                  Hyunkeong Lim
dc.contributor.googleauthor                  Dongwoo Kim
dc.contributor.googleauthor                  Mijin Yun
dc.identifier.doi                            10.1097/rlu.0000000000005441
dc.contributor.localId                       A02550
dc.relation.journalcode                      J00595
dc.identifier.eissn                          1536-0229
dc.identifier.pmid                           39325494
dc.identifier.url                            https://journals.lww.com/nuclearmed/fulltext/2024/11000/cross_modality_image_translation_from_brain.25.aspx
dc.contributor.alternativeName               Yun, Mi Jin
dc.contributor.affiliatedAuthor              윤미진
dc.citation.volume                           49
dc.citation.number                           11
dc.citation.startPage                        e557
dc.citation.endPage                          e565
dc.identifier.bibliographicCitation          CLINICAL NUCLEAR MEDICINE, Vol.49(11) : e557-e565, 2024-09
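The abstract above describes CypixGAN as the CycleGAN framework combined with the L1 loss function of pix2pix. The sketch below is a minimal, hypothetical rendering of such a combined generator objective in PyTorch; the LSGAN-style adversarial term and the weights lambda_cyc and lambda_l1 are conventional defaults assumed here, not values taken from the paper.

```python
# Minimal sketch of a CycleGAN + pix2pix-L1 generator objective. The LSGAN
# adversarial form and the weights (lambda_cyc = 10, lambda_l1 = 100) are
# assumptions; the paper's architectures and weights are not in this record.
import torch
import torch.nn.functional as F

def cypixgan_generator_loss(
    d_flair_on_fake: torch.Tensor,  # discriminator output for synthetic FLAIR
    d_pet_on_fake: torch.Tensor,    # discriminator output for synthetic PET
    real_pet: torch.Tensor,
    real_flair: torch.Tensor,
    fake_flair: torch.Tensor,       # G_pet2flair(real_pet)
    fake_pet: torch.Tensor,         # G_flair2pet(real_flair)
    cycled_pet: torch.Tensor,       # G_flair2pet(fake_flair)
    cycled_flair: torch.Tensor,     # G_pet2flair(fake_pet)
    lambda_cyc: float = 10.0,
    lambda_l1: float = 100.0,
) -> torch.Tensor:
    # CycleGAN part 1: adversarial terms push translations toward each domain.
    adv = (F.mse_loss(d_flair_on_fake, torch.ones_like(d_flair_on_fake))
           + F.mse_loss(d_pet_on_fake, torch.ones_like(d_pet_on_fake)))
    # CycleGAN part 2: cycle consistency, i.e. a round trip through both
    # generators should reproduce the original image.
    cyc = F.l1_loss(cycled_pet, real_pet) + F.l1_loss(cycled_flair, real_flair)
    # pix2pix part: direct L1 between synthetic and ground-truth images, the
    # term the abstract credits with preserving spatial information.
    l1 = F.l1_loss(fake_flair, real_flair) + F.l1_loss(fake_pet, real_pet)
    return adv + lambda_cyc * cyc + lambda_l1 * l1
```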
Appears in Collections:
1. College of Medicine (의과대학) > Dept. of Nuclear Medicine (핵의학교실) > 1. Journal Papers
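The image-quality figures reported in the abstract, peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM), can be computed per image pair as sketched below with scikit-image. The data-range handling and any averaging over the 44 test patients are assumptions, since the record does not describe the evaluation pipeline.

```python
# Sketch of per-image PSNR/SSIM computation with scikit-image; preprocessing
# and patient-level averaging are assumptions, not taken from the paper.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def image_quality(real_flair: np.ndarray, synth_flair: np.ndarray):
    # data_range must be given explicitly for floating-point images
    data_range = float(real_flair.max() - real_flair.min())
    psnr = peak_signal_noise_ratio(real_flair, synth_flair, data_range=data_range)
    ssim = structural_similarity(real_flair, synth_flair, data_range=data_range)
    return psnr, ssim
```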


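The WMH agreement measures named in the abstract, the Dice similarity coefficient and the absolute percentage volume difference, have standard definitions; a minimal NumPy sketch follows, with hypothetical function names and an assumed empty-mask convention.

```python
# Standard definitions of the two WMH agreement measures named in the
# abstract; function names and edge-case handling are assumptions.
import numpy as np

def dice_coefficient(gt_mask: np.ndarray, pred_mask: np.ndarray) -> float:
    # DSC = 2|A ∩ B| / (|A| + |B|); returns 1.0 when both masks are empty
    gt, pred = gt_mask.astype(bool), pred_mask.astype(bool)
    denom = gt.sum() + pred.sum()
    return 2.0 * np.logical_and(gt, pred).sum() / denom if denom else 1.0

def abs_percentage_volume_difference(gt_mask: np.ndarray,
                                     pred_mask: np.ndarray) -> float:
    # APVD = |V_pred - V_gt| / V_gt * 100, with volume as the voxel count;
    # assumes a non-empty ground-truth mask
    v_gt = float(gt_mask.astype(bool).sum())
    v_pred = float(pred_mask.astype(bool).sum())
    return abs(v_pred - v_gt) / v_gt * 100.0
```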