31 May 2017 | David Berthelot, Thomas Schumm, Luke Metz
The paper introduces Boundary Equilibrium Generative Adversarial Networks (BEGAN), a novel approach to training auto-encoder-based GANs. BEGAN aims to balance the generator and discriminator during training, providing a new method for controlling the trade-off between image diversity and visual quality. The method uses a loss derived from the Wasserstein distance, which helps achieve fast and stable training, high visual quality, and better convergence. The authors derive an approximate convergence measure and demonstrate the effectiveness of BEGAN through experiments on celebrity face images, achieving high-resolution results with diverse and visually coherent images. The paper also discusses the robustness of the equilibrium balancing technique and compares BEGAN to other GAN variants, showing superior performance in terms of image quality and diversity.
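To make the equilibrium-balancing idea concrete, here is a minimal sketch of the per-step BEGAN objectives, the update of the balancing coefficient k_t, and the approximate convergence measure, written in the paper's notation (gamma, lambda_k, k). The reconstruction losses below are placeholder scalars standing in for the auto-encoder losses L(x) and L(G(z)) that a real training loop would compute; this is an illustrative sketch, not the authors' implementation.

```python
def began_step(L_real, L_fake, k, gamma=0.5, lambda_k=0.001):
    """One BEGAN balancing step (illustrative sketch).

    L_real:   auto-encoder reconstruction loss L(x) on real images
    L_fake:   reconstruction loss L(G(z)) on generated images
    k:        balancing coefficient k_t, kept in [0, 1]
    gamma:    diversity ratio E[L(G(z))] / E[L(x)], the knob that trades
              image diversity against visual quality
    lambda_k: learning rate for the k_t update
    """
    loss_D = L_real - k * L_fake                      # discriminator objective
    loss_G = L_fake                                   # generator objective
    # Proportional control keeps the generator/discriminator in balance.
    k_next = min(max(k + lambda_k * (gamma * L_real - L_fake), 0.0), 1.0)
    # Approximate convergence measure from the paper.
    m_global = L_real + abs(gamma * L_real - L_fake)
    return loss_D, loss_G, k_next, m_global

# Example with placeholder loss values (not from a trained model).
print(began_step(L_real=0.8, L_fake=0.5, k=0.0))
```

In this formulation gamma directly exposes the diversity/quality trade-off the summary mentions: lower values push the discriminator to auto-encode real images more faithfully (sharper but less diverse samples), while higher values favor diversity.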