From Melting Pots to Misrepresentations: Exploring Harms in Generative AI

May 11-16, 2024 | SANJANA GAUTAM, PRANAV NARAYANAN VENKIT, SOUROJIT GHOSH
The paper "From Melting Pots to Misrepresentations: Exploring Harms in Generative AI" examines the social harms caused by generative AI models, particularly their tendency to misrepresent human identities and reinforce stereotypes. The authors show how models such as Gemini and GPT, despite their versatility, often encode biases that favor dominant demographics, marginalizing racial and ethnic minorities.

The paper discusses the ethical implications of these biases and calls for a more community-centered approach to AI development. It presents a framework for understanding bias and its harms, covering stereotyping, erasure, quality-of-service disparities, dehumanization, and disparagement. The authors also raise open questions about the ethical redesign of generative models, emphasizing transparency, accountability, and notions of fairness that extend beyond Western perspectives. Addressing these biases, they contend, is essential to preventing discriminatory outcomes and ensuring equitable representation.

Rather than judging AI systems by technical performance alone, the authors argue for examining their harms through the lens of social justice and ethics: generative systems should be developed with a human-centered approach that weighs ethical implications throughout the design process. The paper concludes by calling for ongoing research and dialogue to address the challenges posed by generative AI and to promote more equitable and inclusive AI systems.