Computing Power and the Governance of Artificial Intelligence

February 14, 2024 | Girish Sastry, Lennart Heim, Haydn Belfield, Markus Anderljung, Miles Brundage, Julian Hazell, Cullen O'Keefe, Gillian K. Hadfield, Richard Ngo, Konstantin Pilz, George Gor, Emma Bluemke, Sarah Shoker, Janet Egan, Robert F. Trager, Shahar Avin, Adrian Weller, Yoshua Bengio, Diane Coyle
Computing power, or "compute," is essential for developing and deploying artificial intelligence (AI). Governments and companies are using compute to govern AI, such as investing in domestic compute capacity, controlling compute flow to competing countries, and subsidizing compute access. However, these efforts are only the beginning of how compute can be used to govern AI. Compute is a particularly effective point of intervention because it is detectable, excludable, and quantifiable, and is produced via a concentrated supply chain. These characteristics, along with the importance of compute for cutting-edge AI models, suggest that governing compute can contribute to achieving common policy objectives, such as ensuring the safety and beneficial use of AI. Policymakers could use compute to increase regulatory visibility into AI, allocate resources to promote beneficial outcomes, and enforce restrictions against irresponsible or malicious AI development. However, compute-based policies and technologies vary in readiness for implementation, with some being piloted and others hindered by the need for fundamental research. Naive or poorly scoped approaches to compute governance carry significant risks in areas like privacy, economic impacts, and centralization of power. The paper suggests guardrails to minimize these risks. Compute governance is attractive for policymakers because of its detectability, excludability, quantifiability, and supply chain concentration. These features make compute a valuable tool for AI governance. Compute can enhance three key areas of governance: visibility, allocation, and enforcement. However, compute governance is not the whole story of AI governance. Other approaches are likely needed to address small-scale uses of compute that could pose major risks. Compute governance can pose risks to privacy and other critical values, and policymakers have limited experience in managing its unintended consequences. To mitigate these risks, the paper recommends implementing key safeguards, such as focusing on governance of industrial-scale compute and incorporating privacy-preserving practices and technology. The paper discusses a range of policy options and considerations available to different governing entities with decision-making authority. It also provides several illustrative policy mechanisms for visibility, allocation, and enforcement. The paper also discusses the risks of compute governance and possible mitigations, including unintended consequences, risks from centralization and concentration of power, and the need for guardrails. The paper concludes that compute governance is an important tool for AI governance, but it is not the whole story. The paper also highlights the need for a holistic theory and appraisal of compute governance.Computing power, or "compute," is essential for developing and deploying artificial intelligence (AI). Governments and companies are using compute to govern AI, such as investing in domestic compute capacity, controlling compute flow to competing countries, and subsidizing compute access. However, these efforts are only the beginning of how compute can be used to govern AI. Compute is a particularly effective point of intervention because it is detectable, excludable, and quantifiable, and is produced via a concentrated supply chain. 
These characteristics, along with the importance of compute for cutting-edge AI models, suggest that governing compute can contribute to achieving common policy objectives, such as ensuring the safety and beneficial use of AI. Policymakers could use compute to increase regulatory visibility into AI, allocate resources to promote beneficial outcomes, and enforce restrictions against irresponsible or malicious AI development. However, compute-based policies and technologies vary in readiness for implementation, with some being piloted and others hindered by the need for fundamental research. Naive or poorly scoped approaches to compute governance carry significant risks in areas like privacy, economic impacts, and centralization of power. The paper suggests guardrails to minimize these risks. Compute governance is attractive for policymakers because of its detectability, excludability, quantifiability, and supply chain concentration. These features make compute a valuable tool for AI governance. Compute can enhance three key areas of governance: visibility, allocation, and enforcement. However, compute governance is not the whole story of AI governance. Other approaches are likely needed to address small-scale uses of compute that could pose major risks. Compute governance can pose risks to privacy and other critical values, and policymakers have limited experience in managing its unintended consequences. To mitigate these risks, the paper recommends implementing key safeguards, such as focusing on governance of industrial-scale compute and incorporating privacy-preserving practices and technology. The paper discusses a range of policy options and considerations available to different governing entities with decision-making authority. It also provides several illustrative policy mechanisms for visibility, allocation, and enforcement. The paper also discusses the risks of compute governance and possible mitigations, including unintended consequences, risks from centralization and concentration of power, and the need for guardrails. The paper concludes that compute governance is an important tool for AI governance, but it is not the whole story. The paper also highlights the need for a holistic theory and appraisal of compute governance.
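Because compute is quantifiable, visibility and enforcement mechanisms of this kind often hinge on training-compute thresholds. As a minimal illustrative sketch (not code from the paper), the snippet below estimates a training run's compute with the widely used C ≈ 6·N·D approximation, where N is the model's parameter count and D is the number of training tokens, and checks the estimate against an example reporting threshold. The 10^26-operation figure mirrors the reporting trigger in the 2023 US Executive Order on AI; the model figures are hypothetical.

```python
# Illustrative sketch: estimating training compute with the common
# C ~ 6 * N * D approximation (C in FLOP, N = parameter count,
# D = training tokens) and checking it against an example threshold.
# The threshold mirrors the 1e26-operation reporting trigger in the
# 2023 US Executive Order on AI; all model figures are hypothetical.

EXAMPLE_THRESHOLD_FLOP = 1e26  # illustrative reporting trigger


def estimate_training_flop(num_parameters: float, num_tokens: float) -> float:
    """Rough training-compute estimate via the 6*N*D rule of thumb."""
    return 6 * num_parameters * num_tokens


def exceeds_threshold(flop: float, threshold: float = EXAMPLE_THRESHOLD_FLOP) -> bool:
    """Would this run trip the example reporting threshold?"""
    return flop >= threshold


if __name__ == "__main__":
    # Hypothetical frontier-scale run: 1 trillion parameters, 20 trillion tokens.
    flop = estimate_training_flop(num_parameters=1e12, num_tokens=2e13)
    print(f"Estimated training compute: {flop:.2e} FLOP")
    print("Exceeds example threshold:", exceeds_threshold(flop))
```

A threshold check of this sort is one way a rule could operationalize "industrial-scale compute," targeting only the largest training runs while leaving small-scale research and everyday compute use untouched.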