Open Problems in Technical AI Governance

20 Jul 2024 | Anka Reuel, Ben Bucknall, Stephen Casper, Tim Fist, Lisa Soder, Onni Aarne, Lewis Hammond, Lujain Ibrahim, Alan Chan, Peter Wills, Markus Anderljung, Ben Garfinkel, Lennart Heim, Andrew Trask, Gabriel Mukobi, Rylan Schaeffer, Mauricio Baker, Sara Hooker, Irene Solaiman, Alexandra Sasha Luccioni, Nitarshan Rajkumar, Nicolas Moës, Neel Guha, Jessica Newman, Yoshua Bengio, Tobin South, Alex Pentland, Jeffrey Ladish, Sanmi Koyejo, Mykel J. Kochenderfer, Robert Trager
This paper explores open problems in technical AI governance (TAIG): technical analysis and tools that support effective AI governance. TAIG aims to address challenges in identifying areas requiring intervention, assessing governance actions, and enhancing governance options through enforcement, incentivization, and compliance mechanisms.

The paper presents a taxonomy of TAIG organized along two dimensions: capacities (actions useful for governance) and targets (key elements in the AI value chain), and outlines open problems within each category, including questions for future research. AI governance involves the processes and structures for making, implementing, and enforcing decisions related to AI. TAIG contributes to AI governance by identifying areas for intervention, informing governance decisions, and enhancing governance options. For example, deployment evaluations can help identify the need for policy interventions, while designing robust models can prevent downstream misuse.

The paper also discusses the limitations of current TAIG tools and the pitfalls of techno-solutionism, emphasizing that technical and non-technical approaches must be combined carefully, and highlighting complementary sociotechnical methods for AI safety and governance. Finally, it identifies several open problems in TAIG, including data identification, infrastructure for analyzing large datasets, attribution of model behavior to data, compute governance, model evaluations, and deployment impact assessments. These problems require further technical advances to ensure robust and effective AI governance.