Investigating the effect of loss functions on single-image GAN performance


Bibliographic Details
Main Authors: Eyyup YİLDİZ, Mehmet Erkan YUKSEL, Selcuk SEVGEN
Format: Article
Language: English
Published: Bursa Technical University, 2024-12-01
Series: Journal of Innovative Science and Engineering
Subjects:
Online Access: http://jise.btu.edu.tr/en/download/article-file/3991473
Description
Summary: Loss functions are crucial in training generative adversarial networks (GANs) and in shaping the resulting outputs. These functions, specifically designed for GANs, optimize the generator and discriminator networks jointly but in opposite directions. GAN models, which typically rely on large datasets, have been successful in the field of deep learning. However, exploring the factors that influence the success of GAN models developed for limited-data problems remains an important area of research. In this study, we conducted a comprehensive investigation into the loss functions commonly used in the GAN literature, such as binary cross entropy (BCE), Wasserstein generative adversarial network (WGAN), least squares generative adversarial network (LSGAN), and hinge loss. Our research examined the impact of these loss functions on output quality and training convergence in single-image GANs. Specifically, we evaluated the performance of a single-image GAN model, SinGAN, with these loss functions in terms of image quality and diversity. Our experimental results demonstrated that, with these loss functions, the model successfully produces high-quality, diverse images from a single training image. Additionally, we found that the WGAN-GP and LSGAN-GP loss functions are more effective for single-image GAN models.
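To make the compared objectives concrete, the standard discriminator-side formulations of the four losses named in the abstract can be sketched as plain NumPy functions over raw discriminator scores (logits). This is an illustrative sketch of the textbook formulas, not the authors' SinGAN implementation; the gradient-penalty terms of the WGAN-GP and LSGAN-GP variants are omitted for brevity.

```python
import numpy as np

# d_real / d_fake: raw discriminator outputs (logits) for real and
# generated samples. All functions return the discriminator loss to minimize.

def bce_d_loss(d_real, d_fake):
    # Binary cross-entropy (standard GAN) discriminator loss on logits,
    # computed with the numerically stable softplus log(1 + e^x).
    softplus = lambda x: np.logaddexp(0.0, x)
    return np.mean(softplus(-d_real)) + np.mean(softplus(d_fake))

def wgan_d_loss(d_real, d_fake):
    # WGAN critic loss: minimize E[D(fake)] - E[D(real)]
    # (the gradient penalty of WGAN-GP would be added to this term).
    return np.mean(d_fake) - np.mean(d_real)

def lsgan_d_loss(d_real, d_fake):
    # Least-squares GAN: regress real scores toward 1, fake scores toward 0.
    return 0.5 * np.mean((d_real - 1.0) ** 2) + 0.5 * np.mean(d_fake ** 2)

def hinge_d_loss(d_real, d_fake):
    # Hinge loss: enforce a margin of 1 on both real and fake scores.
    return (np.mean(np.maximum(0.0, 1.0 - d_real))
            + np.mean(np.maximum(0.0, 1.0 + d_fake)))
```

The key practical difference is the shape of the gradient signal: BCE saturates for confident discriminators, the Wasserstein and hinge objectives keep gradients linear, and LSGAN penalizes samples quadratically by their distance from the target label.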
ISSN: 2602-4217