Enhanced IDOL segmentation framework using personalized hyperspace learning
Authors
Byong Su Choi; Chris J Beltran; Sven Olberg; Xiaoying Liang; Bo Lu; Jun Tan; Alessio Parisi; Janet Denbeigh; Sridhar Yaddanapudi; Jin Sung Kim; Keith M Furutani; Justin C Park; Bongyong Song
Keywords
ART; auto segmentation; deep learning; head & neck; overfitting
Abstract
Background: Adaptive radiotherapy (ART) workflows have been increasingly adopted to achieve dose escalation and tissue sparing under shifting anatomic conditions, but the necessity of recontouring and the associated time burden hinder real-time or online ART workflows. In response to this challenge, auto-segmentation approaches based on deformable image registration, atlas-based segmentation, and deep learning-based segmentation (DLS) have been developed. Despite the particular promise shown by DLS methods, implementing these approaches in a clinical setting remains a challenge, chiefly due to the difficulty of curating a data set of sufficient size and quality to achieve generalizability in a trained model.
Purpose: To address this challenge, we previously developed an intentional deep overfit learning (IDOL) framework tailored to the auto-segmentation task. However, certain limitations were identified, particularly that the personalized dataset was insufficient to effectively overfit the model. In this study, we introduce a personalized hyperspace learning (PHL)-IDOL segmentation framework capable of generating datasets that induce the model to overfit specific patient characteristics for medical image segmentation.
Methods: The PHL-IDOL model is trained in two stages. In the first, a conventional, general model is trained on a diverse set of patient data (n = 100 patients) consisting of CT images and clinical contours. In the second, the general model is tuned with a data set constructed in two steps: (a) selection of a subset of the patient data (m < n) using similarity metrics (mean square error (MSE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and the universal quality image index (UQI)); (b) adjustment of the CT images and clinical contours using a deformation vector field generated from the reference patient and the patients selected in (a). After training, the general model, the continual model, the conventional IDOL model, and the proposed PHL-IDOL model were evaluated using the volumetric Dice similarity coefficient (VDSC) and the 95th-percentile Hausdorff distance (HD95) computed for 18 structures in 20 test patients.
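To make the tuning-stage data construction concrete, the following is a minimal Python sketch of steps (a) and (b), assuming 3D CT volumes stored as NumPy arrays. The function names, the unweighted composite ranking score, and the warping helper are illustrative assumptions rather than the authors' implementation; only the four similarity metrics and the deformation-based adjustment come from the text, and in practice the metrics would need normalization or a voting scheme since their scales differ.

```python
import numpy as np
from scipy.ndimage import map_coordinates
from skimage.metrics import (
    mean_squared_error,
    peak_signal_noise_ratio,
    structural_similarity,
)

def uqi(x, y):
    """Universal quality image index (Wang & Bovik, 2002), computed globally."""
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return float(4 * cov * mx * my / ((x.var() + y.var()) * (mx**2 + my**2) + 1e-12))

def select_similar_patients(ref_ct, cohort, m):
    """Step (a): rank cohort CTs against the reference CT and keep the top m."""
    rng = float(ref_ct.max() - ref_ct.min())
    scored = []
    for pid, ct in cohort.items():
        # Illustrative composite: MSE is negated so that larger is uniformly better.
        score = (
            -mean_squared_error(ref_ct, ct)
            + peak_signal_noise_ratio(ref_ct, ct, data_range=rng)
            + structural_similarity(ref_ct, ct, data_range=rng)
            + uqi(ref_ct, ct)
        )
        scored.append((pid, score))
    scored.sort(key=lambda t: t[1], reverse=True)
    return [pid for pid, _ in scored[:m]]

def warp(volume, dvf, order=1):
    """Step (b): resample a CT (order=1) or a binary contour mask (order=0)
    along a deformation vector field dvf of shape (3, *volume.shape)."""
    coords = np.indices(volume.shape).astype(np.float64) + dvf
    return map_coordinates(volume, coords, order=order, mode="nearest")
```

Under these assumptions, the warped CT-contour pairs from the m selected patients would form the patient-specific tuning set used to intentionally overfit the general model.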
Results: Implementing the PHL-IDOL framework improved segmentation performance for each patient. The average Dice score increased from 0.81 ± 0.05 with the general model, 0.83 ± 0.04 with the continual model, and 0.83 ± 0.04 with the conventional IDOL model to 0.87 ± 0.03 with the PHL-IDOL model. Similarly, HD95 decreased from 3.06 ± 0.99 with the general model, 2.84 ± 0.69 with the continual model, and 2.79 ± 0.79 with the conventional IDOL model to 2.36 ± 0.52 with the PHL-IDOL model. Comparing the general model and the PHL-IDOL model, the standard deviations decreased by nearly half.
Conclusion: The PHL-IDOL framework applied to the auto-segmentation task achieves improved performance compared to the general DLS approach, demonstrating the promise of leveraging patient-specific prior information in a task central to online ART workflows.