
Cited 28 times

Radiomics machine learning study with a small sample size: Single random training-test set split may lead to unreliable results

Authors
Chansik An; Yae Won Park; Sung Soo Ahn; Kyunghwa Han; Hwiyoung Kim; Seung-Koo Lee
Citation
PLOS ONE, Vol.16(8): e0256152, 2021-08
Journal Title
PLOS ONE
Issue Date
2021-08
Abstract
This study aims to determine how randomly splitting a dataset into training and test sets affects the estimated performance of a machine learning model and its gap from the test performance under different conditions, using real-world brain tumor radiomics data. We conducted two classification tasks of different difficulty levels with magnetic resonance imaging (MRI) radiomics features: (1) a "simple" task, glioblastomas [n = 109] vs. brain metastases [n = 58], and (2) a "difficult" task, low- [n = 163] vs. high-grade [n = 95] meningiomas. Additionally, two undersampled datasets were created by randomly sampling 50% from these datasets. We repeatedly performed random training-test set splitting for each dataset to create 1,000 different training-test set pairs. For each pair, a least absolute shrinkage and selection operator (LASSO) model was trained and evaluated using various validation methods in the training set, and then tested in the test set, with the area under the curve (AUC) as the evaluation metric. The AUCs in training and testing varied among different training-test set pairs, especially with the undersampled datasets and the difficult task. The mean (±standard deviation) AUC difference between training and testing was 0.039 (±0.032) for the simple task without undersampling and 0.092 (±0.071) for the difficult task with undersampling. In one training-test set pair with the difficult task without undersampling, for example, the AUC was high in training but much lower in testing (0.882 and 0.667, respectively); in another pair with the same task, however, the AUC was low in training but much higher in testing (0.709 and 0.911, respectively). When the AUC discrepancy between training and testing, or generalization gap, was large, none of the validation methods helped sufficiently reduce it. Our results suggest that machine learning after a single random training-test set split may lead to unreliable results in radiomics studies, especially with small sample sizes.
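The experimental setup described in the abstract (repeated random training-test splits, an L1-penalized classifier, and AUC as the metric) can be illustrated with a short sketch. The Python snippet below is a minimal, hypothetical reconstruction using scikit-learn on synthetic placeholder data; it does not reproduce the paper's radiomics features, feature selection, or internal validation schemes, and the training AUC here is computed by resubstitution rather than by the validation methods compared in the study.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic placeholder standing in for an MRI radiomics feature matrix
# (e.g., the "difficult" task: low- vs. high-grade meningiomas, n = 258).
X, y = make_classification(n_samples=258, n_features=100, n_informative=10,
                           weights=[0.63, 0.37], random_state=0)

train_aucs, test_aucs = [], []
for seed in range(1000):  # 1,000 different random training-test splits
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=seed)

    # L1-penalized (LASSO-type) logistic regression as the classifier;
    # the paper's actual hyperparameter tuning is not reproduced here.
    model = make_pipeline(
        StandardScaler(),
        LogisticRegression(penalty="l1", solver="liblinear", C=1.0))
    model.fit(X_tr, y_tr)

    train_aucs.append(roc_auc_score(y_tr, model.predict_proba(X_tr)[:, 1]))
    test_aucs.append(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))

gap = np.array(train_aucs) - np.array(test_aucs)
print(f"generalization gap (train AUC - test AUC): "
      f"mean {gap.mean():.3f}, SD {gap.std():.3f}")

The distribution of the gap across the 1,000 splits is the quantity the abstract summarizes (e.g., a mean ± SD of 0.092 ± 0.071 for the difficult task with undersampling), and its spread is what makes a single random split unreliable.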
Files in This Item:
T202103874.pdf
DOI
10.1371/journal.pone.0256152
Appears in Collections:
1. College of Medicine (의과대학) > Research Institute (부설연구소) > 1. Journal Papers
1. College of Medicine (의과대학) > Dept. of Biomedical Systems Informatics (의생명시스템정보학교실) > 1. Journal Papers
1. College of Medicine (의과대학) > Dept. of Radiology (영상의학교실) > 1. Journal Papers
Yonsei Authors
Kim, Hwiyoung(김휘영)
Park, Yae Won(박예원) ORCID: https://orcid.org/0000-0001-8907-5401
Ahn, Sung Soo(안성수) ORCID: https://orcid.org/0000-0002-0503-5558
Lee, Seung Koo(이승구) ORCID: https://orcid.org/0000-0001-5646-4072
Han, Kyung Hwa(한경화)
URI
https://ir.ymlib.yonsei.ac.kr/handle/22282913/184850
