Least Squares Generative Adversarial Networks

5 Apr 2017 | Xudong Mao, Qing Li, Haoran Xie, Raymond Y.K. Lau, Zhen Wang, and Stephen Paul Smolley
This paper introduces Least Squares Generative Adversarial Networks (LSGANs), which replace the sigmoid cross-entropy loss used for the discriminator in regular GANs with a least squares loss. The key idea is that the least squares loss penalizes samples that lie far from the decision boundary even when they are correctly classified, which pulls generated samples toward the manifold of real data. This leads to higher-quality images and a more stable learning process.

On the theoretical side, the paper analyzes the relationship between LSGANs and f-divergences, showing that minimizing the LSGAN objective function is equivalent to minimizing the Pearson chi-squared divergence between the real and generated data distributions (under a particular choice of the target labels).

LSGANs are evaluated on five scene datasets and on a handwritten Chinese character dataset with 3740 classes. Two model architectures are proposed: one for image generation at 112x112 resolution, and a conditional one for tasks with a large number of classes. The first architecture outperforms the state-of-the-art method on the scene datasets; the second generates readable Chinese characters. The authors also compare the training stability of LSGANs and regular GANs in two experiments, finding that LSGANs generate higher-quality images and train more stably.

The paper concludes that LSGANs are a promising approach for generating realistic images, and that further research is needed to extend them to more complex datasets.
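For reference, the LSGAN objectives described above can be written as follows. This is a sketch based on the formulation in the paper, with $a$ and $b$ the target labels for fake and real data in the discriminator loss, and $c$ the value the generator wants the discriminator to assign its samples:

```latex
\min_D V(D) = \frac{1}{2}\,\mathbb{E}_{x \sim p_{\text{data}}}\!\left[(D(x) - b)^2\right]
            + \frac{1}{2}\,\mathbb{E}_{z \sim p_z}\!\left[(D(G(z)) - a)^2\right]

\min_G V(G) = \frac{1}{2}\,\mathbb{E}_{z \sim p_z}\!\left[(D(G(z)) - c)^2\right]
```

The Pearson chi-squared equivalence mentioned in the summary holds when the labels satisfy $b - c = 1$ and $b - a = 2$ (e.g. $a = -1$, $b = 1$, $c = 0$); in practice the simpler 0-1 coding $a = 0$, $b = c = 1$ is also used.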
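A minimal NumPy sketch of the least squares losses discussed above, assuming the common 0-1 label coding (fake target 0, real target 1 for the discriminator; target 1 for the generator). The function and variable names here are illustrative, not from the paper's code:

```python
import numpy as np

def d_loss_lsgan(d_real, d_fake, a=0.0, b=1.0):
    # Least squares discriminator loss: push D(x) toward the real
    # label b and D(G(z)) toward the fake label a.
    return 0.5 * np.mean((d_real - b) ** 2) + 0.5 * np.mean((d_fake - a) ** 2)

def g_loss_lsgan(d_fake, c=1.0):
    # Least squares generator loss: push D(G(z)) toward c, the value
    # the generator wants the discriminator to believe for fake data.
    return 0.5 * np.mean((d_fake - c) ** 2)

# Toy discriminator outputs; LSGAN uses raw scores, no sigmoid needed.
d_real = np.array([0.9, 1.1, 0.8])
d_fake = np.array([0.1, -0.2, 0.3])

print(d_loss_lsgan(d_real, d_fake))
print(g_loss_lsgan(d_fake))
```

Note that a fake sample scored at, say, 5.0 would incur a large generator *and* discriminator penalty here, whereas the sigmoid cross-entropy loss saturates once a sample is confidently classified; this is the "penalize samples far from the decision boundary" property the paper relies on.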