Towards AI Accountability Infrastructure: Gaps and Opportunities in AI Audit Tooling

March 15, 2024 | Victor Ojewale, Ryan Steed, Briana Vecchione, Abeba Birhane, Inioluwa Deborah Raji
This paper explores the current state of AI audit tools and identifies gaps and opportunities for strengthening AI accountability. The authors interviewed 35 AI audit practitioners and analyzed 390 AI audit tools to understand the challenges and needs in the field. They found that although many tools exist for evaluating AI systems, these tools often fall short of supporting the broader goals of AI accountability, such as harms discovery, advocacy, and data access.

The study highlights the need for more comprehensive tools that support the full range of AI audit activities, including auditor independence, peer review, standardization, and advocacy. The authors note that many practitioners struggle to access high-quality data, apply consistent standards, and ensure audit integrity, and they recommend that the field move beyond evaluation-focused tools toward more comprehensive infrastructure for AI accountability. The study also identifies challenges in data collection and transparency, as well as legal risks associated with external data collection, and suggests that future research focus on developing tools and processes that facilitate data access, ensure data quality, and support independent auditing. It concludes that the field needs more standardized, context-specific, and holistic evaluation frameworks, along with more inclusive and transparent AI audit practices.