2017 | Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, Krishna P. Gummadi
The paper introduces a new notion of unfairness called "disparate mistreatment," defined in terms of misclassification rates: it occurs when misclassification rates differ across groups with different values of a sensitive attribute, such as race or gender. The authors propose intuitive measures of disparate mistreatment for decision boundary-based classifiers and show that these measures can be incorporated into the classifier's training formulation as convex-concave constraints. Experiments on synthetic and real-world datasets demonstrate that their methodology effectively avoids disparate mistreatment, often at a small cost in accuracy. The paper also discusses the trade-off between fairness and accuracy and compares the method with existing approaches, showing that it can achieve similar levels of fairness while maintaining or improving accuracy.
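The quantities the paper's constraints target are straightforward to compute from a classifier's predictions. Below is a minimal sketch (not the authors' implementation, which constrains these rates during training via convex-concave programming) of the group-conditional false positive and false negative rate gaps used to quantify disparate mistreatment; the arrays `y_true`, `y_pred`, and `group` are hypothetical placeholders for labels, predictions, and a binary sensitive attribute.

```python
# Minimal sketch (illustrative only): disparate mistreatment as the gap in
# false positive / false negative rates between two sensitive-attribute groups.
import numpy as np

def error_rates(y_true, y_pred):
    """Return (false positive rate, false negative rate) for binary labels in {0, 1}."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    negatives = y_true == 0
    positives = y_true == 1
    fpr = np.mean(y_pred[negatives] == 1) if negatives.any() else 0.0
    fnr = np.mean(y_pred[positives] == 0) if positives.any() else 0.0
    return fpr, fnr

def disparate_mistreatment(y_true, y_pred, group):
    """Absolute FPR and FNR gaps between the two values of a binary sensitive attribute."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    fpr_0, fnr_0 = error_rates(y_true[group == 0], y_pred[group == 0])
    fpr_1, fnr_1 = error_rates(y_true[group == 1], y_pred[group == 1])
    return abs(fpr_0 - fpr_1), abs(fnr_0 - fnr_1)

# Toy example: a classifier that is perfect on group 0 but always wrong on group 1.
y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 0, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
fpr_gap, fnr_gap = disparate_mistreatment(y_true, y_pred, group)
print(f"FPR gap: {fpr_gap:.2f}, FNR gap: {fnr_gap:.2f}")  # both 1.00 here
```

A classifier free of disparate mistreatment would drive both gaps toward zero; the paper enforces this (approximately) during training rather than measuring it only after the fact.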