HP-GAN: Harnessing pretrained networks for GAN improvement with FakeTwins and discriminator consistency
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Son, Geonhui | - |
| dc.contributor.author | Lee, Jeong Ryong | - |
| dc.contributor.author | Hwang, Dosik | - |
| dc.date.accessioned | 2026-03-16T07:17:08Z | - |
| dc.date.available | 2026-03-16T07:17:08Z | - |
| dc.date.created | 2026-03-09 | - |
| dc.date.issued | 2026-07 | - |
| dc.identifier.issn | 0893-6080 | - |
| dc.identifier.uri | https://ir.ymlib.yonsei.ac.kr/handle/22282913/211295 | - |
| dc.description.abstract | Generative Adversarial Networks (GANs) have made significant progress in enhancing the quality of image synthesis. Recent methods frequently leverage pretrained networks to compute perceptual losses or to exploit pretrained feature spaces. In this paper, we extend the capabilities of pretrained networks by incorporating innovative self-supervised learning techniques and enforcing consistency between discriminators during GAN training. Our proposed method, named HP-GAN, effectively exploits neural network priors through two primary strategies: FakeTwins and discriminator consistency. FakeTwins leverages pretrained networks as encoders to compute a self-supervised loss on generated images and applies it to train the generator, thereby enabling the generation of more diverse and higher-quality images. Additionally, we introduce a consistency mechanism between discriminators that evaluate feature maps extracted from Convolutional Neural Network (CNN) and Vision Transformer (ViT) feature networks. Discriminator consistency promotes coherent learning among the discriminators and enhances training robustness by aligning their assessments of image quality. Our extensive evaluation across seventeen datasets (including scenarios with large, small, and limited data, and covering a variety of image domains) demonstrates that HP-GAN consistently outperforms current state-of-the-art methods in terms of Fréchet Inception Distance (FID), achieving significant improvements in image diversity and quality. Code is available at: https://github.com/higun2/HP-GAN. | - |
| dc.language | English | - |
| dc.publisher | PERGAMON-ELSEVIER SCIENCE LTD | - |
| dc.relation.isPartOf | NEURAL NETWORKS | - |
| dc.title | HP-GAN: Harnessing pretrained networks for GAN improvement with FakeTwins and discriminator consistency | - |
| dc.type | Article | - |
| dc.contributor.googleauthor | Son, Geonhui | - |
| dc.contributor.googleauthor | Lee, Jeong Ryong | - |
| dc.contributor.googleauthor | Hwang, Dosik | - |
| dc.identifier.doi | 10.1016/j.neunet.2026.108666 | - |
| dc.identifier.pmid | 41633247 | - |
| dc.subject.keyword | Image generation | - |
| dc.subject.keyword | Generative adversarial network | - |
| dc.subject.keyword | Pretrained network | - |
| dc.subject.keyword | Self-supervised learning | - |
| dc.contributor.affiliatedAuthor | Hwang, Dosik | - |
| dc.identifier.scopusid | 2-s2.0-105029054729 | - |
| dc.identifier.wosid | 001683690000001 | - |
| dc.citation.volume | 199 | - |
| dc.identifier.bibliographicCitation | NEURAL NETWORKS, Vol.199, 2026-07 | - |
| dc.identifier.rimsid | 91652 | - |
| dc.type.rims | ART | - |
| dc.description.journalClass | 1 | - |
| dc.subject.keywordAuthor | Image generation | - |
| dc.subject.keywordAuthor | Generative adversarial network | - |
| dc.subject.keywordAuthor | Pretrained network | - |
| dc.subject.keywordAuthor | Self-supervised learning | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
| dc.relation.journalWebOfScienceCategory | Neurosciences | - |
| dc.relation.journalResearchArea | Computer Science | - |
| dc.relation.journalResearchArea | Neurosciences & Neurology | - |
| dc.identifier.articleno | 108666 | - |
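The abstract's "discriminator consistency" term aligns the assessments of two discriminators that score CNN- and ViT-derived feature maps of the same images. A minimal sketch of that idea, assuming the alignment is a mean-squared penalty between the two discriminators' scores (the function and variable names here are hypothetical, not taken from the paper's released code):

```python
# Illustrative sketch only, not HP-GAN's implementation: a consistency
# penalty that is small when the two discriminators agree on a batch and
# grows as their scores diverge. d_cnn_scores / d_vit_scores stand in for
# the outputs of discriminators operating on CNN and ViT feature maps.

def consistency_loss(d_cnn_scores, d_vit_scores):
    """Mean squared difference between the two discriminators' scores."""
    assert len(d_cnn_scores) == len(d_vit_scores), "scores must pair up"
    n = len(d_cnn_scores)
    return sum((a - b) ** 2 for a, b in zip(d_cnn_scores, d_vit_scores)) / n

# Agreeing discriminators incur no penalty; disagreement is penalized.
print(consistency_loss([0.9, 0.1], [0.9, 0.1]))  # 0.0
print(consistency_loss([1.0, 0.0], [0.0, 1.0]))  # 1.0
```

In a full GAN training loop, such a term would typically be added to each discriminator's adversarial loss with a weighting coefficient, so agreement is encouraged without overriding the real-vs-fake objective.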
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.