8 Dec 2021 | Laura Weidinger, John Mellor, Maribeth Rauh, Conor Griffin, Jonathan Uesato, Po-Sen Huang, Myra Cheng, Mia Glaese, Borja Balle, Atossa Kasirzadeh, Zac Kenton, Sasha Brown, Will Hawkins, Tom Stepleton, Courtney Biles, Abeba Birhane, Julia Haas, Laura Rimell, Lisa Anne Hendricks, William Isaac, Sean Legassick, Geoffrey Irving and Iason Gabriel
This paper aims to structure the risk landscape associated with large-scale Language Models (LMs) to foster responsible innovation. It identifies and analyzes a wide range of established and anticipated risks, drawing on multidisciplinary literature from computer science, linguistics, and the social sciences. The paper outlines six specific risk areas: Discrimination, Exclusion, and Toxicity; Information Hazards; Misinformation Harms; Malicious Uses; Human-Computer Interaction Harms; and Automation, Access, and Environmental Harms. Each risk area is detailed with examples and considerations, highlighting the mechanisms by which these risks arise and the potential harms they can cause. The paper also discusses potential mitigation approaches and organizational responsibilities, emphasizing the need for inclusive dialogue and collaboration in addressing these risks. The overall goal is to support responsible decision-making, contribute to public discourse, and guide mitigation efforts in the development and use of LMs.