Balancing Act: Distribution-Guided Debiasing in Diffusion Models


29 May 2024 | Rishabh Parihar*¹, Abhijnya Bhat*¹, Abhipsa Basu¹, Saswat Mallick¹, Jogendra Nath Kundu², R. Venkatesh Babu¹
Diffusion Models (DMs) have emerged as powerful generative models, widely used for data augmentation and creative applications. However, DMs can reflect biases present in their training datasets, particularly in face generation, where one demographic subgroup may be favored over others. This paper presents a method to debias DMs that requires neither additional reference data nor model retraining. The proposed method, called Distribution Guidance, enforces that generated images follow a prescribed attribute distribution by leveraging the latent features of the denoising UNet. An Attribute Distribution Predictor (ADP) is trained to map these latent features to the attribute distribution, and its predictions are used to guide sampling toward the desired distribution. The method is evaluated on single- and multiple-attribute settings, showing significant improvements over baselines for both unconditional and text-conditional DMs.
Additionally, the paper demonstrates the effectiveness of the method for training fair attribute classifiers by augmenting the training set with generated data. The code for this project is available at [GitHub](https://github.com/rishubhparihar/Balancing-Act-Diffusion-Models).
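To make the idea concrete, here is a minimal sketch of one distribution-guidance step in the spirit described above. Everything here is an assumption for illustration, not the paper's implementation: the `AttributeDistributionPredictor` class, the pooled-feature input, the KL objective, and the guidance sign/scale are all hypothetical choices. The sketch predicts per-sample attribute probabilities from denoiser latents, compares the resulting batch-level distribution to a reference distribution (e.g. uniform over a protected attribute), and returns a gradient on the noisy sample that nudges the batch toward that reference.

```python
import torch
import torch.nn.functional as F


class AttributeDistributionPredictor(torch.nn.Module):
    """Hypothetical ADP: maps pooled UNet latent features to attribute logits."""

    def __init__(self, feat_dim: int, n_classes: int):
        super().__init__()
        self.head = torch.nn.Linear(feat_dim, n_classes)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return self.head(h)


def distribution_guidance_grad(x_t, features, adp, ref_dist, scale=1.0):
    """One guidance step (illustrative): steer the batch's predicted attribute
    distribution toward `ref_dist`.

    x_t      -- noisy samples at the current timestep, requires_grad=True
    features -- denoiser latents that are a differentiable function of x_t
    adp      -- an AttributeDistributionPredictor
    ref_dist -- target attribute distribution, shape (K,), sums to 1
    """
    probs = adp(features).softmax(dim=-1)   # (B, K) per-sample attribute probs
    batch_dist = probs.mean(dim=0)          # empirical distribution over the batch
    # KL(ref || batch): penalize mismatch between batch and reference distribution
    loss = F.kl_div(batch_dist.log(), ref_dist, reduction="sum")
    grad = torch.autograd.grad(loss, x_t)[0]
    return -scale * grad                    # direction that reduces the mismatch
```

In an actual sampler, the returned gradient would be added to the denoising update at each step, analogous to classifier guidance; here the features and network shapes are placeholders chosen only so the sketch runs end to end.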