2017 | Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, Krishna P. Gummadi
This paper introduces a new notion of fairness called "disparate mistreatment," which measures unfairness as differences in misclassification rates across groups defined by sensitive attributes (e.g., race, gender). Unlike existing notions such as disparate treatment and disparate impact, which are defined over the decisions themselves, disparate mistreatment compares error rates between groups, making it suitable when reliable ground-truth labels are available. The authors propose intuitive measures of disparate mistreatment for decision boundary-based classifiers and show how these measures can be incorporated into the training problem as convex-concave constraints. Experiments on synthetic and real-world datasets demonstrate that their method effectively avoids disparate mistreatment while maintaining reasonable accuracy.
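For concreteness, here is a sketch of the notion in the paper's notation (binary labels y in {-1, +1}, a binary sensitive attribute z in {0, 1}, prediction ŷ): a classifier avoids disparate mistreatment when the relevant group-conditional error rates coincide.

```latex
% Overall misclassification rates must match across groups:
P(\hat{y} \neq y \mid z = 0) = P(\hat{y} \neq y \mid z = 1)

% Finer-grained variants require equal false positive rates
P(\hat{y} = 1 \mid z = 0, y = -1) = P(\hat{y} = 1 \mid z = 1, y = -1)
% and equal false negative rates:
P(\hat{y} = -1 \mid z = 0, y = 1) = P(\hat{y} = -1 \mid z = 1, y = 1)
```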
The paper discusses three main fairness notions: disparate treatment, disparate impact, and disparate mistreatment. Disparate treatment occurs when decisions vary based on sensitive attributes, while disparate impact refers to outcomes that disproportionately benefit or harm certain groups. Disparate mistreatment, the focus of this work, occurs when misclassification rates differ between groups. The authors introduce a method to train classifiers that avoid disparate mistreatment by incorporating fairness constraints into the optimization process. These constraints are based on the covariance between sensitive attributes and the signed distance from misclassified examples to the decision boundary.
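A minimal sketch of that covariance proxy for the overall misclassification rate is shown below; the function name and variable conventions are illustrative rather than the authors' code, and the false positive/negative rate variants restrict the average to examples with y = -1 or y = +1.

```python
import numpy as np

def misclassification_covariance(theta, X, y, z):
    """Empirical covariance proxy for disparate mistreatment (a sketch).

    theta : (d,) weight vector of a linear decision boundary
    X     : (n, d) feature matrix (sensitive attribute excluded)
    y     : (n,) true labels in {-1, +1}
    z     : (n,) binary sensitive attribute in {0, 1}
    """
    signed_margin = y * (X @ theta)       # > 0 iff the example is correctly classified
    g = np.minimum(0.0, signed_margin)    # nonzero only for misclassified examples
    # Covariance between the sensitive attribute and the (negative) signed
    # distance of misclassified examples to the boundary.
    return np.mean((z - z.mean()) * g)
```

During training, the convex loss is minimized subject to a bound of the form |misclassification_covariance(theta, X, y, z)| <= c, a convex-concave constraint; the threshold c (an illustrative name here) controls the fairness-accuracy trade-off.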
The method is tested on synthetic data and on the ProPublica COMPAS dataset, which contains records of criminal defendants along with whether they reoffended within two years. The results show that the proposed method effectively reduces disparate mistreatment while maintaining acceptable accuracy. The method is compared with other approaches, including the post-processing method of Hardt et al., and performs well in terms of both fairness and accuracy. The authors conclude that their method provides a flexible trade-off between fairness and accuracy and can avoid disparate mistreatment and disparate treatment simultaneously. Future work includes extending the method to other measures of disparate mistreatment, such as false discovery and false omission rates.