1 May 2024 | Yixin Wan, Arjun Subramonian, Anaelia Ovalle, Zongyu Lin, Ashima Suvarna, Christina Chance, Hritik Bansal, Rebecca Pattichis, and Kai-Wei Chang
This paper presents the first comprehensive survey of bias in text-to-image (T2I) generation models, focusing on three dimensions: gender, skin tone, and geo-cultural bias. The authors review existing studies, highlighting key findings and open challenges in defining, evaluating, and mitigating bias in T2I systems. They identify several limitations of current work, including under-researched geo-cultural bias, limited exploration of non-binary identities, a lack of unified evaluation frameworks, and mitigation methods that fail to address biases comprehensively. The survey emphasizes the need for human-centric approaches to bias definition, evaluation, and mitigation, with the goal of building fair and trustworthy T2I technologies. The authors also discuss the social risks of bias in T2I models, such as reinforcement of stereotypes, under-representation of marginalized groups, and potential harm in real-world applications, and they propose future research directions, including more inclusive bias definitions, robust evaluation metrics, and adaptive mitigation strategies. The study underscores the importance of addressing bias in T2I systems to ensure equitable and ethical AI development.
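Many of the evaluation approaches the survey covers share a common shape: generate a batch of images from a neutral prompt, label a perceived attribute in each image, and compare the resulting attribute distribution against a reference. As a minimal illustrative sketch of that idea (not a metric defined in the paper; the function name, labels, and example numbers are all hypothetical), the following Python computes the total-variation distance between observed and reference proportions:

```python
from collections import Counter

def distribution_bias(labels, reference):
    """Total-variation distance between the observed attribute
    distribution in generated images and a reference distribution.

    labels: one attribute label per generated image (e.g., from
            human annotators or an attribute classifier).
    reference: dict mapping each attribute value to its target
               proportion (e.g., uniform over values).
    """
    counts = Counter(labels)
    total = sum(counts.values())
    observed = {k: counts.get(k, 0) / total for k in reference}
    # 0.0 = observed matches the reference; 1.0 = maximal skew.
    return 0.5 * sum(abs(observed[k] - reference[k]) for k in reference)

# Hypothetical example: 100 images generated for "a photo of a CEO",
# labeled by perceived gender presentation.
labels = ["masc"] * 87 + ["fem"] * 13
print(distribution_bias(labels, {"masc": 0.5, "fem": 0.5}))  # 0.37
```

Sketches like this also surface the challenges the survey raises: the choice of reference distribution encodes a normative assumption, discrete labels poorly capture non-binary identities, and classifier-assigned labels can themselves be biased.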