21 Mar 2018 | Antreas Antoniou, Amos Storkey, Harrison Edwards
This paper introduces the Data Augmentation Generative Adversarial Network (DAGAN) for improving performance in low-data settings. DAGAN is a generative model that learns to produce additional, plausible examples from existing data, enabling better generalization in both standard classifiers and few-shot learning systems. The model is an image-conditional Generative Adversarial Network (GAN): it is trained on a source domain and then generates augmentation data for a target domain, including classes never seen during training.

In low-data regimes the DAGAN significantly improves classification accuracy, with reported gains of over 13% on Omniglot and further improvements of 0.5% and 1.8% on the EMNIST and VGG-Face datasets, respectively. It also enhances few-shot learning systems such as Matching Networks by generating relevant comparator points for each class. Notably, a DAGAN trained on Omniglot is used to augment EMNIST data, demonstrating that the learned augmentation transfers across domains.

The paper also details the DAGAN architecture, which pairs a UResNet generator with a DenseNet discriminator, and reports results on three datasets: Omniglot, EMNIST, and VGG-Face. The authors conclude that DAGAN is a flexible and effective model for data augmentation in low-data settings, improving both standard classifiers and few-shot learning systems and applying to novel, unseen classes.
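The core mechanism is easy to picture as an encoder-decoder conditioned on a real image: the generator encodes the input, concatenates a noise vector, and decodes a new sample that should remain in the same class, and those generated samples are then added to the training set. Below is a minimal PyTorch sketch of this idea; the layer sizes, the 28x28 single-channel input, and the simple encoder-decoder are assumptions standing in for the paper's UResNet generator and DenseNet discriminator, so treat it as an illustration rather than the authors' implementation.

```python
# Minimal sketch of the image-conditional augmentation idea (an assumption,
# not the paper's exact UResNet/DenseNet architecture): encode a real image,
# mix its code with Gaussian noise, and decode a new sample of the same class.
import torch
import torch.nn as nn


class ConditionalGenerator(nn.Module):
    def __init__(self, noise_dim: int = 100):
        super().__init__()
        # Encoder: project the conditioning image (1x28x28 assumed) to a code.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=4, stride=2, padding=1),   # 28 -> 14
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),  # 14 -> 7
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 256),
        )
        # Decoder: combine the image code with noise and decode a new image
        # that should stay within the conditioning image's class.
        self.decoder = nn.Sequential(
            nn.Linear(256 + noise_dim, 64 * 7 * 7),
            nn.ReLU(),
            nn.Unflatten(1, (64, 7, 7)),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),  # 7 -> 14
            nn.ReLU(),
            nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),   # 14 -> 28
            nn.Tanh(),
        )

    def forward(self, x: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        code = self.encoder(x)
        return self.decoder(torch.cat([code, z], dim=1))


# Using the generator for augmentation: sample noise vectors for each real
# image and treat the outputs as extra training examples with the same labels.
gen = ConditionalGenerator(noise_dim=100)
real_images = torch.randn(8, 1, 28, 28)   # stand-in for a small real batch
noise = torch.randn(8, 100)
augmented = gen(real_images, noise)       # shape: (8, 1, 28, 28)
```

In the paper the generator is trained adversarially, with the discriminator comparing a real same-class pair against an (input, generated) pair, so the noise learns to produce plausible within-class variation rather than arbitrary changes; the sketch above shows only the generator's data flow.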