From Melting Pots to Misrepresentations: Exploring Harms in Generative AI


2024 | Sanjana Gautam*, Pennsylvania State University, USA; Pranav Narayanan Venkit*, Pennsylvania State University, USA; Sourojit Ghosh*, University of Washington, USA
The paper "From Melting Pots to Misrepresentations: Exploring Harms in Generative AI" by Sanjana Gautam, Pranav Narayanan Venkit, and Sourojit Ghosh examines the social and ethical implications of advanced generative models like Gemini and GPT. Despite their widespread adoption across various sectors, these models have been criticized for discriminatory tendencies, particularly in favoring 'majority' demographics and marginalizing racial and ethnic groups. The authors highlight the need to address the biases and harms embedded in these models, which can lead to stereotyping, distortion, and neglect of marginalized communities. They present a critical review of existing research and pose open-ended questions to guide future studies, emphasizing the importance of a community-centered, human-centric approach to ethical redesign. The paper also discusses specific examples, such as the incident in which Gemini refused to generate images of non-white individuals, and explores the broader societal impacts of these biases. The authors advocate for more transparent and equitable development processes, including detailed documentation of training datasets and attention to power asymmetries and community representation. The paper concludes by calling for ongoing research and sustained ethical consideration to address the biases and harms in generative AI systems.