BEGAN: Boundary Equilibrium Generative Adversarial Networks

31 May 2017 | David Berthelot, Thomas Schumm, Luke Metz
BEGAN (Boundary Equilibrium Generative Adversarial Networks) is a method for training GANs that uses an auto-encoder as the discriminator and introduces an equilibrium concept to balance the generator and discriminator. Rather than matching sample distributions directly, the objective matches the distributions of auto-encoder reconstruction losses, using a loss derived from a lower bound of the Wasserstein distance. This yields an approximate convergence measure, fast and stable training, and high visual quality, even at higher resolutions, with a relatively simple model architecture and a standard training procedure.

The equilibrium between the two networks is maintained using proportional control and is governed by a hyper-parameter γ, which sets the trade-off between image diversity and visual quality.

The model is trained with Adam at a learning rate of 0.0001 on a dataset of 360K celebrity face images and is evaluated at several resolutions, showing strong image diversity and quality. Interpolations in latent space are smooth and continuous, indicating that the model generalizes rather than memorizing the training data. The convergence measure M_global is used to track training and correlates well with image fidelity. Experiments with unbalanced networks show that training remains stable and converges to meaningful results even when one network is advantaged over the other.
The paper concludes that BEGAN provides a novel approach to training GANs that balances the generator and discriminator, leading to stable training and high-quality results. It addresses several outstanding problems in GANs: measuring convergence, controlling distributional diversity, and maintaining the equilibrium between the discriminator and the generator. The equilibrium mechanism may also have applications in dynamically weighing regularization terms or other heterogeneous objectives.