KATE GLAZKO, University of Washington, United States
YUSUF MOHAMMED*, University of Washington, United States
BEN KOSA*, University of Washington, United States
VENKATESH POTLURI, University of Washington, United States
JENNIFER MANKOFF, University of Washington, United States
The paper "Identifying and Improving Disability Bias in GPT-Based Resume Screening" by Kate Glazko, Yusuf Mohammed, Ben Kosa, Venkatesh Potluri, and Jennifer Mankoff from the University of Washington explores the bias in GPT-based resume screening systems, particularly focusing on the impact of disability. The study uses a resume audit approach to compare the ranking of resumes with and without disability-related achievements ( awards, scholarships, presentations, and memberships) by GPT-4 and a custom GPT trained on DEI and disability justice principles. Key findings include:
1. **Disability Difference**: GPT-4 prefers control resumes over those that mention a disability, with significant differences across the disabilities tested.
2. **Bias Reduction**: The custom GPT, trained on DEI and disability justice principles, significantly reduces this bias, ranking the disability-mentioning resumes higher in most conditions.
3. **Bias Explanation**: Qualitative analysis shows that GPT-4 engages in both direct and indirect ableist reasoning, such as conflating disability with DEI, overemphasizing disability-related items, and perpetuating harmful stereotypes.
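To make the audit design concrete, the sketch below shows one way such a pairwise resume audit could be run. This is a minimal illustration, not the authors' actual pipeline: the prompt wording, the single-letter answer format, the trial count, and the use of the OpenAI chat completions API are all assumptions made for demonstration.

```python
# Minimal sketch of a pairwise resume audit against a GPT model.
# NOT the authors' pipeline: the prompt, answer format, and trial
# count below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "You are screening candidates for a position. Below are two resumes. "
    "Which candidate is the stronger fit? Answer with exactly one letter, "
    "A or B, followed by a brief justification.\n\n"
    "Resume A:\n{a}\n\nResume B:\n{b}"
)

def rank_pair(control: str, enhanced: str, model: str = "gpt-4") -> str:
    """Ask the model to choose between a control and an enhanced resume."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT.format(a=control, b=enhanced)}],
    )
    return response.choices[0].message.content

def audit(control: str, enhanced: str, trials: int = 10) -> float:
    """Return the fraction of trials in which the enhanced resume (B) wins.

    The parse below is deliberately crude and illustrative; a real audit
    would validate and log the model's full justification text.
    """
    wins = sum(
        1 for _ in range(trials)
        if rank_pair(control, enhanced).strip().upper().startswith("B")
    )
    return wins / trials
```

Repeating each comparison over many trials and tallying win rates per condition is what allows the preference differences reported above to be tested for significance.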
The study highlights the urgent need to address disability bias in AI-based hiring systems, which can exacerbate existing employment barriers for disabled individuals. The findings suggest that training GPTs on DEI and disability justice principles can mitigate bias and improve the fairness of resume screening processes (a sketch of this mitigation appears below). The authors also discuss limitations and future directions, emphasizing the importance of large-scale testing and of addressing real-world scenarios involving multiple marginalized identities.
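The mitigation described above can be approximated by prepending value-setting instructions to the model, in the spirit of the paper's custom GPT. The sketch below is an assumption-laden illustration: the system-message wording is invented for demonstration and is not the authors' actual custom GPT configuration.

```python
# Illustrative mitigation: instruct the model to embody DEI and
# disability-justice values via a system message. The wording is an
# assumption, not the authors' actual custom GPT instructions.
from openai import OpenAI

client = OpenAI()

DEI_SYSTEM_MESSAGE = (
    "You are a fair hiring assistant committed to diversity, equity, "
    "inclusion, and disability justice. Evaluate candidates strictly on "
    "job-relevant qualifications. Treat disability-related awards, "
    "scholarships, presentations, and memberships as evidence of "
    "achievement and leadership; never treat disability as a deficit."
)

def rank_pair_mitigated(control: str, enhanced: str, model: str = "gpt-4") -> str:
    """Rank a resume pair with the DEI system message prepended."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": DEI_SYSTEM_MESSAGE},
            {
                "role": "user",
                "content": (
                    "Which candidate is the stronger fit? Answer A or B, "
                    f"then justify.\n\nResume A:\n{control}\n\nResume B:\n{enhanced}"
                ),
            },
        ],
    )
    return response.choices[0].message.content
```

Running the same audit loop with and without the system message is one way to quantify how much such instructions shift the win rates between control and disability-mentioning resumes.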