OpenBias: Open-set Bias Detection in Text-to-Image Generative Models

5 Aug 2024 | Moreno D'Inca, Elia Peruzzo, Massimiliano Mancini, Dejia Xu, Vidit Goel, Xingqian Xu, Zhangyang Wang, Humphrey Shi, Nicu Sebe
OpenBias is a novel pipeline for open-set bias detection in text-to-image generative models: it identifies and quantifies biases without relying on a predefined set of biases. The pipeline consists of three stages. First, a Large Language Model (LLM) proposes potential biases based on a set of captions; second, the target generative model generates images from those captions; third, a Visual Question Answering (VQA) model assesses the presence and extent of each proposed bias in the generated images. OpenBias was tested on Stable Diffusion 1.5, 2, and XL, demonstrating agreement with closed-set bias detection methods and with human judgment.
The method discovers both well-known and novel biases, such as "person gender," "person race," "cake type," and "laptop brand." It also highlights the importance of context-aware bias detection, since biases can vary significantly with the context of the caption. Because its components are modular, OpenBias provides a flexible framework that enables bias detection across domains. The results show that it detects biases in both context-aware and context-free scenarios, with high alignment between its bias severity scores and human judgments. The study emphasizes the need for more inclusive, open-set bias detection frameworks to address biases in generative models.
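The three-stage pipeline can be sketched as follows. This is a minimal illustration, not the paper's implementation: the LLM, image generator, and VQA model are replaced by hard-coded stubs, and the imbalance score is a simple deviation-from-uniform measure standing in for the paper's actual severity metric.

```python
from collections import Counter

# Hypothetical stubs for the three real components (an LLM, a
# text-to-image model, and a VQA model); OpenBias plugs actual
# models into these roles.
def propose_biases(caption):
    # Stage 1: an LLM proposes candidate biases and their possible
    # classes for a caption. Hard-coded here for illustration.
    return {"person gender": ["male", "female"]}

def generate_images(caption, n=10):
    # Stage 2: the target generative model produces n images from
    # the caption. Placeholder identifiers stand in for images.
    return [f"image_{i}" for i in range(n)]

def vqa_answer(image, question, choices):
    # Stage 3: a VQA model answers a question about each image.
    # This stub deterministically skews toward the first class to
    # mimic a biased generator (8 of 10 images get choices[0]).
    idx = int(image.split("_")[1])
    return choices[0] if idx < 8 else choices[1]

def bias_severity(caption, n_images=10):
    """Score each proposed bias by how unevenly its classes appear
    across images generated from the caption (0 = balanced)."""
    scores = {}
    for bias, classes in propose_biases(caption).items():
        question = f"What is the {bias} shown in the image?"
        answers = [vqa_answer(img, question, classes)
                   for img in generate_images(caption, n_images)]
        counts = Counter(answers)
        freqs = [counts[c] / n_images for c in classes]
        # Simple imbalance measure: max deviation from uniform.
        scores[bias] = max(abs(f - 1 / len(classes)) for f in freqs)
    return scores

print(bias_severity("A chef cooking in a kitchen"))
```

With the skewed stub above, the single proposed bias receives a nonzero severity score, illustrating how an uneven class distribution over generated images is read as evidence of bias.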