25 Jan 2024 | Abeba Birhane, Ryan Steed, Victor Ojewale, Briana Vecchione, Inioluwa Deborah Raji
The paper "AI Auditing: The Broken Bus on the Road to AI Accountability" by Abeba Birhane, Ryan Steed, Victor Ojewale, Briana Vecchione, and Ioni Luwa Raji explores the challenges and effectiveness of AI audits in achieving meaningful accountability. The authors taxonomize current AI audit practices conducted by various stakeholders, including regulators, law firms, civil society, journalism, academia, and consulting agencies. They assess the impact of these audits and find that only a subset translates to desired accountability outcomes. The paper identifies practices necessary for effective AI audits, emphasizing the connections between audit design, methodology, and institutional context.
The introduction highlights the widespread risks associated with AI systems, such as functional failures, disparate performance, and privacy violations. It defines an AI audit as any independent evaluation of an AI system for the purpose of accountability. Unlike in more mature audit industries, however, AI audits do not consistently lead to concrete outcomes such as influencing corporate behavior or shaping broader policy.
The background section defines an AI audit in detail and distinguishes the different types of auditors, including internal and external auditors. It also outlines the four stages of the audit process: harms discovery, standards identification, performance analysis, and audit communication and advocacy.
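To make the performance-analysis stage concrete, here is a minimal, hypothetical sketch (not taken from the paper) of one check an auditor might run: computing an accuracy metric per demographic group and flagging gaps that exceed a chosen threshold, i.e., the "disparate performance" the introduction mentions. The predictions, group labels, and 0.05 threshold are illustrative assumptions.

```python
# Hypothetical sketch of the "performance analysis" stage of an AI audit:
# compare a model's accuracy across demographic groups and flag large gaps.
# The toy data and the 0.05 gap threshold are illustrative assumptions,
# not from the paper.

from collections import defaultdict

def disparate_performance(y_true, y_pred, groups, gap_threshold=0.05):
    """Return per-group accuracy, the max accuracy gap, and a flag."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(truth == pred)
    accuracy = {g: correct[g] / total[g] for g in total}
    gap = max(accuracy.values()) - min(accuracy.values())
    return accuracy, gap, gap > gap_threshold

# Toy audit data: group A is classified perfectly, group B is not.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

per_group, gap, flagged = disparate_performance(y_true, y_pred, groups)
print(per_group, f"gap={gap:.2f}", "flagged" if flagged else "ok")
# {'A': 1.0, 'B': 0.5} gap=0.50 flagged
```

In a real audit this measurement would sit inside the larger process the paper describes: the harm would first be scoped (harms discovery), the metric and threshold justified against a standard (standards identification), and the finding then communicated to affected parties or regulators (audit communication and advocacy).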
The methods section describes the authors' extensive literature review, spanning both academic research and non-academic audit practice across several domains. They collected and analyzed a sample of 341 academic audit studies and characterized key features of audit practice in journalism, civil society, government, consulting agencies, and corporate settings.
The results section is divided into two parts: academic audits and non-academic audits. The authors categorize academic studies into product/model/algorithm audits, data audits, ecosystem audits, and meta-commentary & critique. They find that while academic audits can be influential, their immediate consequences are often unclear. For non-academic audits, the authors examine the context, methodology, and impact of audits conducted by law firms, consulting agencies, journalists, civil society organizations, and government bodies. These audits vary widely in effectiveness, with journalistic audits often producing the most significant outcomes, including policy changes and legislative action.
Overall, the paper emphasizes the need for a more systematic and effective approach to AI audits to achieve meaningful accountability in the AI ecosystem.