A Reductions Approach to Fair Classification

2018 | Alekh Agarwal, Alina Beygelzimer, Miroslav Dudik, John Langford, Hanna Wallach
This paper presents a systematic approach to achieving fairness in binary classification. The key idea is to reduce fair classification to a sequence of cost-sensitive classification problems, whose solutions yield a randomized classifier with the lowest (empirical) error subject to the desired fairness constraints. The reduction works for any representation of the underlying cost-sensitive classifier and, on a range of datasets, compares favorably to prior baselines while overcoming several of their disadvantages.

The paper focuses on two well-known quantitative definitions of fairness: demographic parity and equalized odds. Demographic parity requires that the prediction be statistically independent of the protected attribute, while equalized odds requires that the prediction be conditionally independent of the protected attribute given the label. Both are expressed as linear constraints on conditional moments of the prediction, a formulation that encompasses many other fairness definitions as special cases.
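Using the paper's notation, with prediction h(X), protected attribute A, and label Y, the two definitions read as follows (in practice the algorithm enforces relaxed versions that hold up to a slack ε):

```latex
% Demographic parity: the prediction is independent of the protected attribute.
\Pr[h(X) = 1 \mid A = a] = \Pr[h(X) = 1] \qquad \text{for all } a

% Equalized odds: the prediction is independent of A conditionally on the label.
\Pr[h(X) = 1 \mid A = a,\, Y = y] = \Pr[h(X) = 1 \mid Y = y] \qquad \text{for all } a,\, y
```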
Concretely, the constrained problem is rewritten as a saddle point of a Lagrangian and treated as a game between two players: the Q-player chooses a distribution Q over classifiers to minimize the Lagrangian, which reduces to a cost-sensitive classification problem solvable by any off-the-shelf learner, while the λ-player chooses Lagrange multipliers to maximize it. The proposed algorithm, based on the exponentiated gradient method, iteratively finds an approximate equilibrium between the two players. The resulting randomized classifier provably attains the lowest empirical error achievable under the (relaxed) fairness constraints, up to the approximation quality of the equilibrium, and it does not require test-time access to the protected attribute.

The algorithm is evaluated on several datasets and performs well against other baselines, including post-processing and reweighting approaches. The paper concludes that the proposed approach is a general-purpose method for fair binary classification: it comes with provable guarantees, offers flexibility in choosing the desired accuracy-fairness tradeoff, works with any classifier representation, and handles multiple fairness definitions.
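To make the two-player dynamic concrete, here is a minimal sketch of the exponentiated-gradient reduction for demographic parity with a binary protected attribute. It is an illustration under simplifying assumptions, not the paper's reference implementation (a maintained one exists in the fairlearn library); the helper names (best_response, violations, exp_gradient) and the hyperparameter values are hypothetical.

```python
# Minimal sketch of the exponentiated-gradient reduction for demographic
# parity with a binary protected attribute a in {0, 1}. Helper names and
# hyperparameters are illustrative, not the paper's API.
import numpy as np
from sklearn.linear_model import LogisticRegression


def best_response(X, y, a, lam, eps):
    """Q-player: solve the cost-sensitive problem induced by lambda.

    Per-example costs combine the 0-1 classification loss with the
    Lagrangian terms for the two signed parity constraints
      E[h | A=1] - E[h | A=0] <= eps  and  E[h | A=0] - E[h | A=1] <= eps.
    """
    p1 = a.mean()
    p0 = 1.0 - p1
    net = lam[0] - lam[1]  # net multiplier on E[h|A=1] - E[h|A=0]
    # delta[i] = cost of predicting 1 minus cost of predicting 0 on example i
    delta = (1.0 - 2.0 * y) + np.where(a == 1, net / p1, -net / p0)
    # Cost-sensitive learning reduces to importance-weighted classification:
    # target 1 wherever predicting 1 is cheaper, weight = |cost gap|.
    # (Sketch assumes both targets occur; real code handles degenerate costs.)
    z = (delta < 0).astype(int)
    w = np.abs(delta) + 1e-12  # avoid all-zero weights
    return LogisticRegression().fit(X, z, sample_weight=w)


def violations(h, X, a, eps):
    """Signed violations of the two parity constraints for classifier h."""
    pred = h.predict(X)
    gap = pred[a == 1].mean() - pred[a == 0].mean()
    return np.array([gap - eps, -gap - eps])


def exp_gradient(X, y, a, eps=0.02, T=50, eta=2.0, B=10.0):
    """Lambda-player: exponentiated-gradient ascent over the scaled simplex
    {lam >= 0, ||lam||_1 <= B}, with one extra coordinate acting as slack.
    Returns the sequence of best responses; predicting with their uniform
    mixture gives the randomized classifier."""
    theta = np.zeros(3)  # two signed constraints + one slack coordinate
    classifiers = []
    for _ in range(T):
        lam = B * np.exp(theta) / np.exp(theta).sum()
        h = best_response(X, y, a, lam[:2], eps)
        classifiers.append(h)
        theta[:2] += eta * violations(h, X, a, eps)  # slack theta stays 0
    return classifiers
```

The design point the sketch highlights is the reduction itself: the λ-player only changes example weights and labels, so any standard learner (here, scikit-learn's logistic regression) serves as the Q-player's cost-sensitive oracle, and the predictions of the returned classifiers are averaged to form the randomized classifier.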