AI-Powered Autonomous Weapons Risk Geopolitical Instability and Threaten AI Research

2024 | Riley Simmons-Edler, Ryan P. Badman, Shayne Longpre, Kanaka Rajan
The paper examines the risks and implications of developing and deploying machine-learning-driven Autonomous Weapon Systems (AWS). It notes that AWS have already begun to replace human soldiers in a range of battlefield roles, lowering the political cost of offensive warfare, and argues that this shift carries significant geopolitical risks. Because AWS reduce the human cost of waging war for the aggressor, they weaken the casualty-based deterrence that normally restrains states from aggression, making conflicts, particularly "low-intensity" conflicts, more frequent. The authors further warn that AWS can facilitate terrorism, assassinations, and attacks on civilians, and can undermine internal stability and representative government.

The paper emphasizes the need for transparency and caution in AWS development and deployment, particularly with respect to international law and ethical considerations. It calls for a ban on human-independent AWS and for clear guidelines on the levels of autonomy acceptable in weapon systems, and it advocates greater transparency and oversight of AWS capabilities and their use, including independent watchdogs and journalists able to report on AWS operations.

The paper also addresses the potential proliferation of AWS and the threat it poses to academic research. It argues that attempts to restrict access to ML hardware and expertise are ineffective and risk hindering progress in the field; protecting trained AWS models and the relevant datasets is a more feasible goal. The authors recommend that academic institutions subject military funding to the same level of oversight as industry funding, to address ethical concerns and preserve academic independence.

Overall, the paper aims to raise awareness among the public, ML researchers, and policymakers about the near-future risks posed by full or near-full autonomy in military technology, and it offers regulatory suggestions to mitigate those risks.