Segmentation of white matter hyperintensities on 18 F-FDG PET/CT images with a generative adversarial network

Authors
Kyeong Taek Oh ; Dongwoo Kim ; Byoung Seok Ye ; Sangwon Lee ; Mijin Yun ; Sun Kook Yoo
Citation
EUROPEAN JOURNAL OF NUCLEAR MEDICINE AND MOLECULAR IMAGING, Vol.48(11) : 3422-3431, 2021-10
Journal Title
EUROPEAN JOURNAL OF NUCLEAR MEDICINE AND MOLECULAR IMAGING
ISSN
 1619-7070 
Issue Date
2021-10
Keywords
18F-FDG PET/CT ; Feasibility study ; Generative adversarial network ; Segmentation ; White matter hyperintensities
Abstract
Purpose: White matter hyperintensities (WMH) are typically segmented on MRI because they are hardly visible on 18F-FDG PET/CT. This retrospective study was conducted to segment WMH and estimate their volumes from 18F-FDG PET with a generative adversarial network (WhyperGAN).

Methods: We selected patients whose interval between MRI and 18F-FDG PET/CT scans was within 3 months, from January 2017 to December 2018, and classified them into mild, moderate, and severe groups following the semiquantitative Fazekas rating method. For each group, 50 patients were selected; of these, 35 were randomly assigned to training and 15 to testing. WMH were automatically segmented from FLAIR MRI with manual adjustment. Patches of WMH were extracted from 18F-FDG PET and the segmented MRI. WhyperGAN was compared with H-DenseUNet, a deep learning method widely used for segmentation tasks, in terms of the Dice similarity coefficient (DSC), recall, and average volume difference (AVD). For volume estimation, the WMH volumes predicted from PET were compared with ground-truth volumes.

Results: The DSC values were associated with WMH volumes on MRI. For volumes >60 mL, the DSC was 0.751 for WhyperGAN and 0.564 for H-DenseUNet. For volumes ≤60 mL, the DSC decreased rapidly with decreasing volume (0.362 for WhyperGAN vs. 0.237 for H-DenseUNet). For recall, WhyperGAN achieved the highest value in the severe group (0.579 vs. 0.509 for H-DenseUNet). For AVD, WhyperGAN achieved the lowest score in the severe group (0.494 vs. 0.941 for H-DenseUNet). For WMH volume estimation, WhyperGAN outperformed H-DenseUNet and yielded excellent correlation coefficients (r = 0.998, 0.983, and 0.908 in the severe, moderate, and mild groups, respectively).

Conclusions: Although limited compared with visual analysis, the WhyperGAN-based method can be used to automatically segment WMH and estimate their volumes from 18F-FDG PET/CT. This would increase the usefulness of 18F-FDG PET/CT for the evaluation of WMH in patients with cognitive impairment.
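The evaluation metrics named in the abstract are standard overlap measures between a predicted and a ground-truth binary mask. The sketch below is not the authors' code; it shows one common way to compute DSC, recall, and a relative AVD with NumPy, where the definition of AVD as the relative absolute volume difference is an assumption (papers vary in how AVD is normalized).

```python
import numpy as np

def overlap_metrics(pred, gt):
    """Compute DSC, recall, and relative AVD for binary masks.

    pred, gt: array-like of 0/1 values (predicted and ground-truth masks).
    Note: AVD is defined here as |V_pred - V_gt| / V_gt, an assumed convention.
    """
    pred = np.asarray(pred).astype(bool)
    gt = np.asarray(gt).astype(bool)

    tp = np.logical_and(pred, gt).sum()      # true-positive voxels
    dsc = 2.0 * tp / (pred.sum() + gt.sum()) # Dice similarity coefficient
    recall = tp / gt.sum()                   # fraction of true WMH recovered
    avd = abs(int(pred.sum()) - int(gt.sum())) / gt.sum()
    return dsc, recall, avd

# Toy example: 4-voxel lesion, prediction recovers 2 voxels and adds 1 false positive.
gt = np.array([1, 1, 1, 1, 0, 0])
pred = np.array([1, 1, 0, 0, 1, 0])
dsc, recall, avd = overlap_metrics(pred, gt)  # dsc = 4/7, recall = 0.5, avd = 0.25
```

To convert voxel counts into the millilitre volumes reported in the paper, each count would be multiplied by the voxel volume of the scan, which is not given here.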
Full Text
https://link.springer.com/article/10.1007%2Fs00259-021-05285-4
DOI
10.1007/s00259-021-05285-4
Appears in Collections:
1. College of Medicine (의과대학) > Dept. of Medical Engineering (의학공학교실) > 1. Journal Papers
1. College of Medicine (의과대학) > Dept. of Neurology (신경과학교실) > 1. Journal Papers
1. College of Medicine (의과대학) > Dept. of Nuclear Medicine (핵의학교실) > 1. Journal Papers
Yonsei Authors
Kim, Dongwoo (김동우) https://orcid.org/0000-0002-1723-604X
Ye, Byoung Seok (예병석) https://orcid.org/0000-0003-0187-8440
Yoo, Sun Kook (유선국) https://orcid.org/0000-0002-6032-4686
Yun, Mi Jin (윤미진) https://orcid.org/0000-0002-1712-163X
URI
https://ir.ymlib.yonsei.ac.kr/handle/22282913/185433
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
