Fairness issues, current approaches, and challenges in machine learning models
31 January 2024 | Tonni Das Jui, Pablo Rivas
This article presents a systematic mapping study of 94 articles addressing fairness issues in machine learning (ML) and artificial intelligence (AI), aiming to provide a comprehensive overview of current research trends, methodologies, and challenges in ensuring fairness in ML models. The authors identify several key fairness issues, including biased training data, bias toward protected feature groups, biased decision models, lack of prediction transparency, and conflicting fairness definitions, and they discuss the factors contributing to these issues as well as the limitations of existing approaches.
The study highlights the importance of addressing fairness in ML to prevent discriminatory outcomes and ensure equitable treatment of all groups. The authors propose a taxonomy of fairness-ensuring methodologies and survey approaches to mitigating bias, spanning pre-processing, in-processing, and post-processing techniques. They also identify the most engaged countries and researchers in the field and outline future directions for research in ML fairness. The findings indicate that fairness in ML is a complex and evolving area of research that requires ongoing effort to develop effective and equitable solutions.
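To make the fairness issues discussed above concrete, here is a minimal sketch of one widely used group-fairness metric, the demographic parity difference (the gap in positive-prediction rates between protected groups). This example is not taken from the paper; the function, predictions, and group labels are hypothetical illustrations.

```python
# Illustrative sketch (not from the surveyed paper): demographic parity
# difference for binary predictions over two protected groups.
# All data below is hypothetical.

def demographic_parity_difference(preds, groups):
    """Absolute gap in positive-prediction rates between two groups.

    preds  : list of 0/1 model predictions
    groups : parallel list of group labels for a protected feature
    """
    rates = {}
    for g in set(groups):
        members = [p for p, gg in zip(preds, groups) if gg == g]
        rates[g] = sum(members) / len(members)
    vals = list(rates.values())
    return abs(vals[0] - vals[1])

# Hypothetical predictions: group 'a' receives positives at rate 0.75,
# group 'b' at rate 0.25, so the parity gap is 0.5.
preds = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b']
print(demographic_parity_difference(preds, groups))  # 0.5
```

A pre-processing approach would rebalance or reweigh the training data to shrink this gap before fitting, an in-processing approach would penalize it in the training objective, and a post-processing approach would adjust decision thresholds per group after training.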