

The InterVision Framework: An Enhanced Fine-Tuning Deep Learning Strategy for Auto-Segmentation in Head and Neck

DC Field: Value
dc.contributor.author: 김진성
dc.date.accessioned: 2024-12-06T03:47:16Z
dc.date.available: 2024-12-06T03:47:16Z
dc.date.issued: 2024-09
dc.identifier.uri: https://ir.ymlib.yonsei.ac.kr/handle/22282913/201232
dc.description.abstract: Adaptive radiotherapy (ART) workflows are increasingly adopted to achieve dose escalation and tissue sparing under dynamic anatomical conditions. However, recontouring and time constraints hinder the implementation of real-time ART workflows. Various auto-segmentation methods, including deformable image registration, atlas-based segmentation, and deep learning-based segmentation (DLS), have been developed to address these challenges. Despite the potential of DLS methods, clinical implementation remains difficult because large, high-quality datasets are needed to ensure model generalizability. This study introduces the InterVision framework for segmentation. The InterVision framework interpolates or creates intermediate visuals between existing images to capture patient-specific characteristics. The InterVision model is trained in two steps: (1) generating a general model from the dataset, and (2) fine-tuning the general model on the dataset generated by the InterVision framework. The InterVision framework generates intermediate images between existing patient image slices using deformable vectors, effectively capturing unique patient characteristics. By creating a more comprehensive dataset that reflects these individual characteristics, the InterVision model produces more accurate contours than general models. Models were evaluated using the volumetric Dice similarity coefficient (VDSC) and the 95th-percentile Hausdorff distance (HD95) for 18 structures in 20 test patients. The Dice score was 0.81 ± 0.05 for the general model, 0.82 ± 0.04 for the general fine-tuning model, and 0.85 ± 0.03 for the InterVision model; the Hausdorff distance was 3.06 ± 1.13, 2.81 ± 0.77, and 2.52 ± 0.50, respectively. The InterVision model thus showed the best performance of the three.
The InterVision framework presents a versatile approach adaptable to various tasks where prior information is accessible, such as ART settings. This capability is particularly valuable for accurately predicting complex organs and targets that pose challenges for traditional deep learning algorithms.
dc.description.statementOfResponsibility: open
dc.language: English
dc.publisher: MDPI
dc.relation.isPartOf: JOURNAL OF PERSONALIZED MEDICINE
dc.rights: CC BY-NC-ND 2.0 KR
dc.title: The InterVision Framework: An Enhanced Fine-Tuning Deep Learning Strategy for Auto-Segmentation in Head and Neck
dc.type: Article
dc.contributor.college: College of Medicine (의과대학)
dc.contributor.department: Dept. of Radiation Oncology (방사선종양학교실)
dc.contributor.googleauthor: Byongsu Choi
dc.contributor.googleauthor: Chris J Beltran
dc.contributor.googleauthor: Sang Kyun Yoo
dc.contributor.googleauthor: Na Hye Kwon
dc.contributor.googleauthor: Jin Sung Kim
dc.contributor.googleauthor: Justin Chunjoo Park
dc.identifier.doi: 10.3390/jpm14090979
dc.contributor.localId: A04548
dc.relation.journalcode: J04078
dc.identifier.eissn: 2075-4426
dc.identifier.pmid: 39338233
dc.subject.keyword: ART
dc.subject.keyword: Swin-Unet
dc.subject.keyword: auto-segmentation
dc.subject.keyword: deep learning
dc.subject.keyword: deform vector
dc.subject.keyword: head and neck
dc.subject.keyword: transformer
dc.contributor.alternativeName: Kim, Jinsung
dc.contributor.affiliatedAuthor: 김진성
dc.citation.volume: 14
dc.citation.number: 9
dc.citation.startPage: 979
dc.identifier.bibliographicCitation: JOURNAL OF PERSONALIZED MEDICINE, Vol.14(9) : 979, 2024-09
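The abstract reports two evaluation metrics, the volumetric Dice similarity coefficient (VDSC) and the 95th-percentile Hausdorff distance (HD95). A minimal sketch of both follows; this is illustrative only, computed on toy 1-D voxel coordinates, and is not the authors' implementation:

```python
# Hedged sketch of the two metrics named in the abstract (VDSC and HD95).
# Toy 1-D voxel coordinates are used for illustration; real use would be
# on 3-D segmentation masks with physical (mm) spacing.

def dice(pred, truth):
    """Volumetric Dice similarity coefficient between two voxel index sets."""
    p, t = set(pred), set(truth)
    if not p and not t:
        return 1.0  # both empty: perfect agreement by convention
    return 2 * len(p & t) / (len(p) + len(t))

def hd95(pred, truth):
    """95th-percentile symmetric Hausdorff distance (1-D coordinates)."""
    def directed(a, b):
        # Distance from each point of a to its nearest point of b,
        # then take the 95th-percentile value instead of the maximum.
        dists = sorted(min(abs(x - y) for y in b) for x in a)
        return dists[min(int(0.95 * (len(dists) - 1)), len(dists) - 1)]
    return max(directed(pred, truth), directed(truth, pred))

pred = [10, 11, 12, 13]
truth = [11, 12, 13, 14]
print(round(dice(pred, truth), 3))  # 0.75: overlap 3, sizes 4 + 4
print(hd95(pred, truth))            # 0: the single outlier falls above the 95th percentile
```

HD95 is preferred over the plain Hausdorff distance in segmentation evaluation because it discards the worst 5% of surface distances, making the score robust to a few stray voxels.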
Appears in Collections:
1. College of Medicine (의과대학) > Dept. of Radiation Oncology (방사선종양학교실) > 1. Journal Papers


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.