

A tumor inpainting framework for MRI using automated masks based on channel-specific conditions across the volume

DC Field: Value
dc.contributor.author: Kim, Kyuri
dc.contributor.author: Cho, Hyungjoo
dc.contributor.author: Na, Yoonho
dc.contributor.author: Ye, Sung-Joon
dc.contributor.author: Lee, Jimin
dc.contributor.author: Ahn, Sung Soo
dc.contributor.author: Park, Ji Eun
dc.contributor.author: Kim, Hwiyoung
dc.date.accessioned: 2025-11-17T00:47:13Z
dc.date.available: 2025-11-17T00:47:13Z
dc.date.created: 2025-07-22
dc.date.issued: 2025-07
dc.identifier.issn: 1746-8094
dc.identifier.uri: https://ir.ymlib.yonsei.ac.kr/handle/22282913/208854
dc.description.abstract: Despite significant advancements in deep learning for medical screening, generating realistic annotation labels for specific disease groups remains a critical challenge. To address this, we propose an inpainting framework that employs realistic annotation masks, integrating channel-specific conditions across the image volume to produce high-quality paired datasets. Specifically, free-form annotation masks for 2D normal MR images were generated using a variational autoencoder (VAE) adjusted along the z-axis. Additionally, localized areas were inpainted using generative adversarial networks (GANs) with a cascaded generator featuring mask-guided boundary attention and a discriminator operating in a hyperspherical embedding space. This approach ensures the synthesis of high-coverage tumors with seamless integration into adjacent tissues. The proposed framework outperformed existing models, achieving a low FID score of 32.43, demonstrating its ability to generate highly realistic datasets. Moreover, when applied to data augmentation for downstream segmentation tasks, it improved the Dice score from 0.749 to 0.780, indicating its potential to enhance segmentation performance. These results underscore the framework's effectiveness in addressing critical challenges in building paired datasets in the medical domain. Our trained model and inference code are available at github.com/kyurikeem/Tumor-Inpainting.
dc.language: English
dc.publisher: Elsevier
dc.relation.isPartOf: BIOMEDICAL SIGNAL PROCESSING AND CONTROL
dc.title: A tumor inpainting framework for MRI using automated masks based on channel-specific conditions across the volume
dc.type: Article
dc.contributor.googleauthor: Kim, Kyuri
dc.contributor.googleauthor: Cho, Hyungjoo
dc.contributor.googleauthor: Na, Yoonho
dc.contributor.googleauthor: Ye, Sung-Joon
dc.contributor.googleauthor: Lee, Jimin
dc.contributor.googleauthor: Ahn, Sung Soo
dc.contributor.googleauthor: Park, Ji Eun
dc.contributor.googleauthor: Kim, Hwiyoung
dc.identifier.doi: 10.1016/j.bspc.2025.107579
dc.relation.journalcode: J03842
dc.identifier.url: https://www.sciencedirect.com/science/article/pii/S1746809425000904
dc.subject.keyword: Tumor inpainting
dc.subject.keyword: Medical image synthesis
dc.subject.keyword: GAN
dc.subject.keyword: MRI
dc.contributor.affiliatedAuthor: Ahn, Sung Soo
dc.contributor.affiliatedAuthor: Kim, Hwiyoung
dc.identifier.scopusid: 2-s2.0-85216866712
dc.identifier.wosid: 001423819800001
dc.citation.volume: 105
dc.identifier.bibliographicCitation: BIOMEDICAL SIGNAL PROCESSING AND CONTROL, Vol.105, 2025-07
dc.identifier.rimsid: 88049
dc.type.rims: ART
dc.description.journalClass: 1
dc.subject.keywordAuthor: Tumor inpainting
dc.subject.keywordAuthor: Medical image synthesis
dc.subject.keywordAuthor: GAN
dc.subject.keywordAuthor: MRI
dc.type.docType: Article
dc.description.isOpenAccess: N
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
dc.relation.journalWebOfScienceCategory: Engineering, Biomedical
dc.relation.journalResearchArea: Engineering
dc.identifier.articleno: 107579
Appears in Collections:
1. College of Medicine (의과대학) > Dept. of Radiology (영상의학교실) > 1. Journal Papers
1. College of Medicine (의과대학) > Dept. of Neurosurgery (신경외과학교실) > 1. Journal Papers

