HaN-Seg: The head and neck organ-at-risk CT and MR segmentation challenge

Authors
Gašper Podobnik; Bulat Ibragimov; Elias Tappeiner; Chanwoong Lee; Jin Sung Kim; Zacharia Mesbah; Romain Modzelewski; Yihao Ma; Fan Yang; Mikołaj Rudecki; Marek Wodziński; Primož Peterlin; Primož Strojan; Tomaž Vrtovec
Citation
RADIOTHERAPY AND ONCOLOGY, Vol. 198: 110410, 2024-09
Journal Title
RADIOTHERAPY AND ONCOLOGY
ISSN
 0167-8140 
Issue Date
2024-09
MeSH
Head and Neck Neoplasms* / diagnostic imaging ; Head and Neck Neoplasms* / radiotherapy ; Humans ; Magnetic Resonance Imaging* / methods ; Organs at Risk* / radiation effects ; Radiotherapy Planning, Computer-Assisted* / methods ; Tomography, X-Ray Computed* / methods
Keywords
Computational challenge ; Computed tomography ; Deep learning ; Head and neck cancer ; Magnetic resonance ; Organs-at-risk ; Radiotherapy ; Segmentation
Abstract
Background and purpose: To promote the development of auto-segmentation methods for head and neck (HaN) radiation treatment (RT) planning that exploit the information of computed tomography (CT) and magnetic resonance (MR) imaging modalities, we organized HaN-Seg: The Head and Neck Organ-at-Risk CT and MR Segmentation Challenge.

Materials and methods: The challenge task was to automatically segment 30 organs-at-risk (OARs) of the HaN region in 14 withheld test cases given the availability of 42 publicly available training cases. Each case consisted of one contrast-enhanced CT and one T1-weighted MR image of the HaN region of the same patient, with up to 30 corresponding reference OAR delineation masks. The performance was evaluated in terms of the Dice similarity coefficient (DSC) and 95-percentile Hausdorff distance (HD95), and statistical ranking was applied for each metric by pairwise comparison of the submitted methods using the Wilcoxon signed-rank test.
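The evaluation described above can be sketched as follows. This is a generic illustration of the two metrics (DSC and HD95) and the pairwise Wilcoxon signed-rank comparison, not the challenge's official evaluation code; the function names and the surface-distance approximation via distance transforms are my own assumptions.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt
from scipy.stats import wilcoxon

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0

def _surface(mask: np.ndarray) -> np.ndarray:
    """Surface voxels: mask minus its one-voxel erosion."""
    return mask & ~binary_erosion(mask)

def hd95(pred: np.ndarray, ref: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th percentile of symmetric surface-to-surface distances (mm),
    computed via Euclidean distance transforms of the surface masks."""
    sp, sr = _surface(pred.astype(bool)), _surface(ref.astype(bool))
    d_to_ref = distance_transform_edt(~sr, sampling=spacing)   # dist to ref surface
    d_to_pred = distance_transform_edt(~sp, sampling=spacing)  # dist to pred surface
    dists = np.concatenate([d_to_ref[sp], d_to_pred[sr]])
    return float(np.percentile(dists, 95))

def outranks(scores_a, scores_b, alpha=0.05) -> bool:
    """Pairwise ranking step (sketch): team A outranks team B on a metric if
    a one-sided Wilcoxon signed-rank test on per-case scores is significant."""
    return wilcoxon(scores_a, scores_b, alternative="greater").pvalue < alpha
```

In a full ranking, `outranks` would be applied to every pair of submitted methods per metric, and the significant wins aggregated into a rank per team.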

Results: While 23 teams registered for the challenge, only seven submitted their methods for the final phase. The top-performing team achieved a DSC of 76.9% and an HD95 of 3.5 mm. All participating teams utilized architectures based on U-Net, with the winning team leveraging rigid MR to CT registration combined with network entry-level concatenation of both modalities.
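The winning design summarized above rigidly registers the MR to the CT grid and fuses the modalities at the network input. A minimal sketch of that input-assembly step, assuming registration has already produced an MR volume resampled to the CT grid (the registration itself, e.g. with SimpleITK or ANTs, is omitted; the function name and normalization choice are assumptions):

```python
import numpy as np

def make_network_input(ct: np.ndarray, mr_on_ct_grid: np.ndarray) -> np.ndarray:
    """Entry-level (input-channel) concatenation: stack the CT and the
    rigidly registered, resampled MR as two channels of one tensor."""
    assert ct.shape == mr_on_ct_grid.shape, "MR must already be on the CT grid"
    # Per-volume intensity normalization (a common, assumed preprocessing step).
    norm = lambda v: (v - v.mean()) / (v.std() + 1e-8)
    return np.stack([norm(ct), norm(mr_on_ct_grid)], axis=0)  # shape (2, D, H, W)
```

A 3D U-Net would then consume this two-channel volume directly, letting the first convolution mix CT and MR information.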

Conclusion: This challenge simulated a real-world clinical scenario by providing non-registered MR and CT images with varying fields-of-view and voxel sizes. Remarkably, the top-performing teams achieved segmentation performance surpassing the inter-observer agreement on the same dataset. These results set a benchmark for future research on this publicly available dataset and on paired multi-modal image segmentation in general.
Files in This Item:
T202504178.pdf
DOI
10.1016/j.radonc.2024.110410
Appears in Collections:
1. College of Medicine (의과대학) > Dept. of Radiation Oncology (방사선종양학교실) > 1. Journal Papers
Yonsei Authors
Kim, Jinsung (김진성), ORCID: https://orcid.org/0000-0003-1415-6471
URI
https://ir.ymlib.yonsei.ac.kr/handle/22282913/206569