This paper examines the issue of bias in recidivism prediction instruments (RPIs) and discusses fairness criteria used to assess them. RPIs are used in the criminal justice system to predict the likelihood of reoffending, but their use has sparked controversy due to potential discriminatory effects. The authors argue that predictive parity and equality of false positive and false negative rates cannot be simultaneously satisfied when recidivism prevalence differs across groups. They demonstrate that disparities in false positive and false negative rates can lead to disparate impact when high-risk assessments result in stricter penalties.
The study uses data from Broward County to analyze the COMPAS RPI, which was reported to have higher false positive rates for Black defendants and higher false negative rates for White defendants. The authors show that these disparities need not reflect miscalibration of the RPI itself: they follow necessarily from the differing recidivism prevalence between groups. When an RPI satisfies predictive parity, it cannot achieve equal false positive and false negative rates across groups if recidivism prevalence differs.
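The arithmetic behind this incompatibility can be sketched directly. The paper's key identity relates the false positive rate to prevalence, positive predictive value, and false negative rate; the snippet below applies it with illustrative numbers (the prevalences, PPV, and FNR here are assumptions for demonstration, not the paper's Broward County estimates).

```python
def fpr_under_predictive_parity(p, ppv, fnr):
    """False positive rate implied by prevalence p, positive predictive
    value ppv, and false negative rate fnr, via the identity
    FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR)."""
    return (p / (1 - p)) * ((1 - ppv) / ppv) * (1 - fnr)

# Two groups sharing the same PPV (predictive parity holds) and the
# same FNR, but with different recidivism prevalence:
fpr_high_prev = fpr_under_predictive_parity(p=0.51, ppv=0.6, fnr=0.3)  # ~0.486
fpr_low_prev = fpr_under_predictive_parity(p=0.39, ppv=0.6, fnr=0.3)   # ~0.298

# The higher-prevalence group necessarily has the higher FPR; equalizing
# FPRs would force the FNRs (or PPVs) apart instead.
```

Holding PPV and FNR fixed, the FPRs cannot coincide unless the prevalences do, which is the impossibility result in quantitative form.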
The paper also explores how differences in error rates can lead to disparate impact under risk-based sentencing policies. It shows that even when an RPI satisfies predictive parity, the resulting penalties can be unequal between groups. The authors emphasize that fairness is a social and ethical concept, not a statistical one, and that the use of RPIs must be carefully evaluated to avoid discriminatory outcomes.
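How unequal error rates translate into unequal penalties can be made concrete with a simple expected-penalty calculation. The sketch below assumes a stylized risk-based policy (the penalty values and FPRs are illustrative assumptions, not figures from the paper): defendants labeled high risk receive a harsher penalty, so non-reoffenders in the group with the higher false positive rate face a higher expected penalty.

```python
def expected_penalty_nonreoffender(fpr, t_low, t_high):
    """Expected penalty for a defendant who will NOT reoffend:
    labeled high risk (penalty t_high) with probability fpr,
    low risk (penalty t_low) otherwise."""
    return fpr * t_high + (1 - fpr) * t_low

# Illustrative policy: low-risk penalty 1 unit, high-risk penalty 4 units,
# with group FPRs of 0.486 and 0.298 (hypothetical values).
penalty_a = expected_penalty_nonreoffender(0.486, t_low=1, t_high=4)
penalty_b = expected_penalty_nonreoffender(0.298, t_low=1, t_high=4)
gap = penalty_a - penalty_b  # disparate impact among non-reoffenders
```

Even though the instrument satisfies predictive parity, non-reoffenders in the higher-FPR group bear a strictly larger expected penalty whenever t_high exceeds t_low.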
The study concludes that while RPIs can be more accurate than human judgment, they must be used with caution to ensure they do not perpetuate systemic biases. The authors call for further research into how data bias affects the fairness of RPIs, and for transparent, equitable decision-making in the criminal justice system.