This paper presents a systematic mapping study of 94 articles addressing fairness issues in machine learning (ML) models. The study aims to provide a comprehensive overview of the current state of research, identify key challenges, and propose future directions. The authors classify fairness issues into six main categories: biased training data, inherent bias, bias toward protected feature groups, decision model bias, lack of prediction transparency, and multiple definitions of fairness. They also categorize the methodologies used to address these issues, including pre-processing, in-processing, and post-processing techniques. The paper discusses the limitations of each approach and highlights the need for more standardized evaluation and classification of fairness methodologies. The study concludes with a discussion on the potential future directions in ML and AI fairness, emphasizing the importance of continuous research and development to ensure fair and equitable decision-making processes.
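To make the pre-processing category more concrete, the following is a minimal, illustrative sketch (not a method from the surveyed paper) of a reweighing-style intervention in the spirit of Kamiran and Calders: training samples are weighted so that the protected attribute and the label become statistically independent under the weighted distribution. The function name `reweighing_weights`, the column names, and the toy dataset are assumptions introduced purely for illustration.

```python
import numpy as np
import pandas as pd


def reweighing_weights(df: pd.DataFrame, protected_col: str, label_col: str) -> pd.Series:
    """Compute per-sample weights w(a, y) = P(A=a) * P(Y=y) / P(A=a, Y=y).

    Under these weights the protected attribute A and the label Y are
    independent, which is one common pre-processing strategy for mitigating
    bias in the training data (illustrative sketch, not the paper's method).
    """
    n = len(df)
    weights = pd.Series(1.0, index=df.index)
    for (a, y), group in df.groupby([protected_col, label_col]):
        p_a = (df[protected_col] == a).mean()   # marginal P(A = a)
        p_y = (df[label_col] == y).mean()       # marginal P(Y = y)
        p_ay = len(group) / n                   # joint    P(A = a, Y = y)
        weights.loc[group.index] = (p_a * p_y) / p_ay
    return weights


# Toy example: a small, hypothetical hiring dataset with a binary protected group.
data = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B", "B"],
    "hired": [1, 1, 0, 1, 0, 0, 0, 0],
})
data["weight"] = reweighing_weights(data, "group", "hired")
print(data)
```

The resulting weights would then be passed to any learner that accepts sample weights; in-processing and post-processing techniques, by contrast, modify the training objective or the model's predictions rather than the data itself.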