14 Mar 2024 | Victor Ojewale, Ryan Steed, Briana Vecchione, Abeba Birhane, Inioluwa Deborah Raji
The paper "Towards AI Accountability Infrastructure: Gaps and Opportunities in AI Audit Tooling" by Victor Ojewale, Ryan Steed, Briana Vecchione, Abeba Birhane, and Ioni Luwa Raji explores the current landscape of AI audit tools and their effectiveness in supporting the accountability goals of AI auditing. The authors conducted a landscape analysis of 390 AI audit tools and interviewed 35 practitioners from various organizations to understand the challenges and needs in AI auditing. They found that while there are many tools available for evaluating AI systems, these tools often fall short of supporting the full scope of accountability goals. Key areas identified for future tool development include harms discovery, advocacy, and data access. The study highlights the need for more comprehensive infrastructure that goes beyond evaluation to ensure meaningful accountability in AI auditing. The authors also discuss the challenges practitioners face, such as accessing high-quality data, applying consistent standards, and ensuring audit integrity. They conclude with recommendations for researchers, policymakers, and practitioners to improve the tools and practices in AI auditing.The paper "Towards AI Accountability Infrastructure: Gaps and Opportunities in AI Audit Tooling" by Victor Ojewale, Ryan Steed, Briana Vecchione, Abeba Birhane, and Ioni Luwa Raji explores the current landscape of AI audit tools and their effectiveness in supporting the accountability goals of AI auditing. The authors conducted a landscape analysis of 390 AI audit tools and interviewed 35 practitioners from various organizations to understand the challenges and needs in AI auditing. They found that while there are many tools available for evaluating AI systems, these tools often fall short of supporting the full scope of accountability goals. Key areas identified for future tool development include harms discovery, advocacy, and data access. The study highlights the need for more comprehensive infrastructure that goes beyond evaluation to ensure meaningful accountability in AI auditing. The authors also discuss the challenges practitioners face, such as accessing high-quality data, applying consistent standards, and ensuring audit integrity. They conclude with recommendations for researchers, policymakers, and practitioners to improve the tools and practices in AI auditing.