
Cited 17 times

Deep feature classification of angiomyolipoma without visible fat and renal cell carcinoma in abdominal contrast-enhanced CT images with texture image patches and hand-crafted feature concatenation

Hansang Lee; Helen Hong; Junmo Kim; Dae Chul Jung
MEDICAL PHYSICS, Vol. 45(4): 1550-1561, 2018
Angiomyolipoma/diagnostic imaging* ; Carcinoma, Renal Cell/diagnostic imaging* ; Contrast Media* ; Diagnosis, Differential ; Humans ; Image Processing, Computer-Assisted/methods* ; Kidney Neoplasms/diagnostic imaging* ; Machine Learning* ; Radiography, Abdominal ; Sensitivity and Specificity ; Tomography, X-Ray Computed*
angiomyolipoma without visible fat (AMLwvf) ; clear cell renal cell carcinoma (ccRCC) ; computed tomography (CT) ; computer-aided diagnosis (CAD) ; deep feature classification
PURPOSE: To develop an automatic deep feature classification (DFC) method for distinguishing benign angiomyolipoma without visible fat (AMLwvf) from malignant clear cell renal cell carcinoma (ccRCC) in abdominal contrast-enhanced computed tomography (CE CT) images. METHODS: A dataset of 80 abdominal CT images from 39 AMLwvf and 41 ccRCC patients was used. We proposed a DFC method for differentiating small renal masses (SRM) into AMLwvf and ccRCC using a combination of hand-crafted features, deep features, and machine learning classifiers. First, 71-dimensional hand-crafted features (HCF) of texture and shape were extracted from the SRM contours. Second, 1000-4000-dimensional deep features (DF) were extracted from an ImageNet-pretrained deep learning model applied to the SRM image patches. In DF extraction, we proposed texture image patches (TIP) to emphasize the texture information inside the mass in the DFs and to reduce mass-size variability. Finally, the two feature sets were concatenated, and a random forest (RF) classifier was trained on the concatenated features to classify the SRM type. The proposed method was tested on our dataset using leave-one-out cross-validation and evaluated with accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and area under the receiver operating characteristic curve (AUC). In the experiments, combinations of four deep learning models (AlexNet, VGGNet, GoogLeNet, and ResNet) and four input image patches (original, masked, mass-size, and texture image patches) were compared and analyzed. RESULTS: In the qualitative evaluation, we observed the change in feature distributions between the proposed and comparative methods using t-SNE.
In the quantitative evaluation, we compared the classification results and observed that (a) the proposed HCF + DF outperformed HCF-only and DF-only, (b) AlexNet generally showed the best performance among the CNN models, and (c) the proposed TIPs not only achieved competitive performance among the input patches but also steady performance regardless of CNN model. As a result, the proposed method achieved an accuracy of 76.6 ± 1.4% for HCF + DF with AlexNet and TIPs, improving accuracy by 6.6 and 8.3 percentage points over HCF-only and DF-only, respectively. CONCLUSIONS: The proposed shape features and TIPs improved the HCFs and DFs, respectively, and the feature concatenation further enhanced the quality of the features for differentiating AMLwvf from ccRCC in abdominal CE CT images.
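The classification scheme described in the abstract (concatenating hand-crafted and deep features, then training a random forest evaluated with leave-one-out cross-validation) can be sketched as follows. This is a minimal illustration with synthetic placeholder features, not the authors' code: the feature dimensions (71-D HCF, 4096-D DF, matching an AlexNet fc-layer) and the RF hyperparameters are assumptions for demonstration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 80  # 39 AMLwvf + 41 ccRCC patients, as in the paper's dataset

# Placeholder features standing in for the real extractions:
# 71-D hand-crafted texture/shape features and a ~4096-D deep
# feature vector (e.g., an AlexNet fully connected layer; assumed here).
hcf = rng.normal(size=(n, 71))
df = rng.normal(size=(n, 4096))
y = np.array([0] * 39 + [1] * 41)  # 0 = AMLwvf, 1 = ccRCC

# Feature concatenation: HCF + DF form one joint feature vector per mass.
X = np.concatenate([hcf, df], axis=1)

# Leave-one-out cross-validation: train on 79 cases, test on the held-out one.
preds = np.empty(n, dtype=int)
for train_idx, test_idx in LeaveOneOut().split(X):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    preds[test_idx] = clf.predict(X[test_idx])

print(f"LOOCV accuracy: {accuracy_score(y, preds):.3f}")
```

With real HCF and DF inputs, the same loop would also yield the per-case predictions needed to compute sensitivity, specificity, PPV, NPV, and AUC as reported in the paper.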
Appears in Collections:
1. College of Medicine (의과대학) > Dept. of Radiology (영상의학교실) > 1. Journal Papers
Yonsei Authors
Jung, Dae Chul (정대철) - ORCID: https://orcid.org/0000-0001-5769-5083

