25 Jan 2022 | NINAREH MEHRABI, FRED MORSTATTER, NRIPSUTA SAXENA, KRISTINA LERMAN, and ARAM GALSTYAN, USC-ISI
This survey examines the challenges of bias and fairness in machine learning, emphasizing why these issues must be addressed in AI systems that make critical decisions. Such systems are increasingly deployed in sensitive domains, including hiring, criminal justice, and healthcare, where biased outcomes can have serious consequences; one prominent example is the COMPAS tool used in criminal justice, which was found to be biased against African-American offenders. The survey reviews real-world cases of biased AI applications and discusses how bias arising from data, algorithms, and user interactions can lead to unfair outcomes. It then presents the definitions of fairness proposed in the literature and how they are operationalized, and examines how fairness manifests in different machine learning approaches. Finally, it identifies directions for future research and ways to mitigate bias, concluding that fairness is a crucial consideration in the design and engineering of AI systems, that researchers should be aware of the potential harmful effects of biased algorithms, and that tools and methodologies for assessing and mitigating bias in AI systems are needed.
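To illustrate how fairness definitions are operationalized in practice, the sketch below computes two commonly used group-fairness measures, the demographic parity difference and the equal opportunity difference, for a binary classifier with a binary protected attribute. This is a minimal illustration under simplifying assumptions; the function names, toy data, and implementation details are our own and do not come from the survey.

import numpy as np

def demographic_parity_difference(y_pred, group):
    # Difference in positive-prediction rates between group 1 and group 0.
    # A value of 0 means the predictions satisfy demographic parity.
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 1].mean() - y_pred[group == 0].mean()

def equal_opportunity_difference(y_true, y_pred, group):
    # Difference in true-positive rates (recall on the positive class)
    # between group 1 and group 0; 0 means equal opportunity holds.
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return tpr(1) - tpr(0)

# Toy example with two demographic groups (hypothetical data).
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_difference(y_pred, group))           # 0.25
print(equal_opportunity_difference(y_true, y_pred, group))    # ~0.33

A nonzero value on either measure does not by itself indicate unlawful discrimination; which measure is appropriate, and what gap is tolerable, depends on the application context the survey discusses.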