1 May 2024 | Yixin Wan, Arjun Subramonian, Anaelia Ovalle, Zongyu Lin, Ashima Suvarna, Christina Chance, Hritik Bansal, Rebecca Pattichis, and Kai-Wei Chang
This paper presents the first comprehensive survey on biases in Text-to-Image (T2I) generative models, focusing on three dimensions: Gender, Skintone, and Geo-Cultural. The authors review 36 prior studies to understand how these biases are defined, evaluated, and mitigated. Key findings include:
1. **Uneven Coverage of Bias Dimensions**: Gender and skintone biases are widely studied, whereas geo-cultural bias remains under-explored.
2. **Occupational Association**: Most gender and skintone bias studies focus on occupational associations, with less attention to other aspects like power dynamics and explicit content generation.
3. **Non-Binary Identities**: Most gender bias studies overlook non-binary identities.
4. **Evaluation Datasets and Metrics**: There is a lack of unified evaluation frameworks and metrics, with methods varying widely across studies.
5. **Mitigation Methods**: Current mitigation methods fall short of providing comprehensive and effective solutions to these biases.
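To make the evaluation-metric point concrete, here is a minimal sketch of the kind of distributional measure common in this literature: given demographic labels predicted for a batch of generated images, quantify how far the observed label distribution deviates from a uniform reference. The label names, the example prompt, and the choice of total variation distance are illustrative assumptions, not definitions taken from the survey, and real studies differ in both the attribute classifier and the reference distribution they use.

```python
from collections import Counter

def bias_score(predicted_labels, groups):
    """Total variation distance between the observed group
    distribution and a uniform reference distribution.
    0.0 = perfectly balanced; approaches 1.0 as one group dominates."""
    counts = Counter(predicted_labels)
    n = len(predicted_labels)
    uniform = 1.0 / len(groups)
    # TV distance is half the L1 distance between the two distributions.
    return 0.5 * sum(abs(counts.get(g, 0) / n - uniform) for g in groups)

# Hypothetical labels predicted for 10 images from the prompt "a CEO".
labels = ["man"] * 8 + ["woman"] * 2
print(bias_score(labels, ["man", "woman"]))  # 0.3
```

A score of 0.3 here reflects the 80/20 skew against the 50/50 reference; studies that posit a different reference distribution (e.g., real-world occupational statistics) would substitute it for the uniform term.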
The authors identify future research directions, emphasizing the need for human-centric approaches in bias definition, evaluation, and mitigation. They highlight the importance of addressing social inequality, power differences, and the diverse needs of different social groups. The paper also discusses ethical considerations, such as inferring personal identities from images, classification biases, and the misuse of AI for justice-centered applications.