June 3-6, 2024 | KATE GLAZKO, YUSUF MOHAMMED, BEN KOSA, VENKATESH POTLURI, JENNIFER MANKOFF
This study investigates and mitigates disability bias in GPT-based resume screening. Researchers from the University of Washington conducted a resume audit study in which ChatGPT (GPT-4) ranked resumes, comparing a standard resume (the control CV) against versions enhanced with disability-related items such as leadership awards, scholarships, and organizational memberships. They found that GPT-4 exhibited bias against resumes that included disability-related information, with the extent of bias varying by disability type. However, instructing a custom GPT on principles of DEI (Diversity, Equity, and Inclusion) and disability justice significantly reduced this bias.
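The audit setup can be pictured with a short sketch. This is not the authors' protocol: the prompt wording, function name, and parameters below are invented for illustration; only the general shape (GPT-4 ranking a control CV against an enhanced CV for a job) comes from the study.

```python
# Minimal sketch of one pairwise resume-audit trial (hypothetical prompt
# wording; the paper's exact protocol is not reproduced here).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def rank_pair(job_description: str, control_cv: str, enhanced_cv: str) -> str:
    """Ask GPT-4 to rank a control CV against a disability-enhanced CV."""
    prompt = (
        f"Job description:\n{job_description}\n\n"
        f"Candidate A resume:\n{control_cv}\n\n"
        f"Candidate B resume:\n{enhanced_cv}\n\n"
        "Rank the two candidates for this job and briefly justify the ranking."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# An audit repeats such trials many times, swapping candidate order,
# so that rank frequencies rather than single outputs are compared.
```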
The study also includes a qualitative analysis of the ableist reasoning GPT-4 used to justify its biased rankings, such as conflating disability with DEI work, viewing disabled candidates primarily through a DEI lens, and perpetuating harmful stereotypes. GPT-4 often described disabled candidates as having less experience, a narrow research focus, or poor social skills, all ableist assumptions that can negatively affect hiring decisions.
The study highlights the importance of addressing bias in AI systems themselves, not only through human oversight. It suggests that instructing GPTs with DEI and disability justice principles can help reduce bias in resume screening. The researchers also note that GPT-4's justifications are likely influenced by training data containing real-world biased statements, underscoring the need for further research into human bias as well.
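As a rough illustration of this mitigation, the same ranking call can be prefixed with a system message stating DEI and disability-justice principles, analogous in spirit to the custom GPT's configuration. The instruction text below is invented for illustration and is not the wording used in the study.

```python
# Hypothetical instruction in the spirit of the paper's custom GPT;
# the study's actual instruction text is not reproduced here.
from openai import OpenAI

DEI_INSTRUCTIONS = (
    "You are a fair resume screener. Evaluate candidates only on "
    "job-relevant qualifications. Disability-related awards, scholarships, "
    "and memberships are evidence of leadership and merit; do not treat "
    "them as signs of less experience, narrow focus, or poor social skills."
)

client = OpenAI()

def rank_pair_debiased(ranking_prompt: str) -> str:
    """Same pairwise ranking call as above, with a DEI system message."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": DEI_INSTRUCTIONS},
            {"role": "user", "content": ranking_prompt},
        ],
    )
    return response.choices[0].message.content
```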
The study's findings demonstrate that GPT-4's bias against disabled candidates can be mitigated through targeted instruction, although the DA-GPT (Disability-Aware GPT) still did not fully rectify all biases. The researchers recommend further work to address these biases and improve the fairness of AI-based hiring systems, and they emphasize the importance of understanding and addressing human bias, which can shape both AI and human decision-making. The study underscores the need for continued research into disability bias in AI and the development of more equitable hiring practices.