29 May 2024 | Francisco Eiras¹, Aleksandar Petrov¹, Bertie Vidgen², Christian Schroeder de Witt¹, Fabio Pizzati¹, Katherine Elkins³, Supratik Mukhopadhyay⁴, Adel Bibi¹, Aaron Purewal⁵, Botos Csaba¹, Fabro Steibel⁶, Fazel Keshkar⁷, Fazl Barez⁷, Genevieve Smith⁸, Gianluca Guadagni⁹, Jon Chun³, Jordi Cabot¹⁰,¹¹, Joseph Marvin Imperial¹²,¹³, Juan A. Nolazco-Flores¹⁴, Lori Landay¹⁵, Matthew Jackson¹, Philip H.S. Torr¹, Trevor Darrell⁸, Yong Suk Lee¹⁶, and Jakob Foerster¹
This paper explores the risks and opportunities of open-source generative AI (Gen AI) models, emphasizing the need for responsible development and deployment. The authors argue that the benefits of open-source Gen AI outweigh its risks, advocating the open-sourcing of model weights, training data, and evaluation data. They propose a three-stage framework for Gen AI development (near, mid, and long-term) to analyze the risks and opportunities associated with open-source models. The near-term stage is characterized by early use and exploration of current technology, the mid-term by widespread adoption and scaling, and the long-term by technological advancements that enable substantially greater AI capabilities.
The paper discusses the current governance landscape of open-source Gen AI, highlighting regulatory developments such as the EU AI Act, Biden's Executive Order on AI, and China's Gen AI legislation. These regulations aim to address risks associated with open-source Gen AI, including dual-use applications and runaway technological progress. The authors also present an openness taxonomy for Gen AI, categorizing models as fully closed, semi-open, or fully open based on the availability of their components. Applying this taxonomy to 45 high-impact LLMs, they find a pronounced skew towards closedness: most models withhold their training data and safety-evaluation code even when weights are released.
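The taxonomy described above can be sketched as a simple classification rule. This is an illustrative simplification, not the paper's exact scheme: the component names below (weights, training data, training code, safety evaluations) and the rule that any mix of open and closed components yields "semi-open" are assumptions for the sketch; the paper's taxonomy is more granular.

```python
from dataclasses import dataclass, astuple

@dataclass
class GenAIModel:
    """Availability flags for a few illustrative model components.

    The component list is an assumption for this sketch; the paper's
    taxonomy distinguishes more components than shown here.
    """
    weights_open: bool
    training_data_open: bool
    training_code_open: bool
    safety_evals_open: bool

def openness_class(model: GenAIModel) -> str:
    """Map component availability onto the three coarse openness classes."""
    components = astuple(model)
    if all(components):
        return "fully open"
    if any(components):
        return "semi-open"
    return "fully closed"

# A typical "open-weights" release: weights are public, but training data
# and safety-evaluation code are withheld, so it is only semi-open.
weights_only = GenAIModel(True, False, False, False)
print(openness_class(weights_only))  # semi-open
```

Under this rule, the skew the authors report shows up as many models landing in "semi-open" or "fully closed" because training data and safety-evaluation components remain unavailable.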
The paper evaluates the risks and opportunities of open-source Gen AI in the near to mid-term and long-term stages. In the near to mid-term, open-source models offer benefits such as promoting research and innovation, improving affordability, and enabling flexibility and customization. However, they also pose risks, including the potential for misuse and the difficulty of ensuring safety and security. In the long-term, open-source Gen AI could help reduce existential risks associated with AI, such as those posed by Artificial General Intelligence (AGI). The authors conclude that open-source Gen AI should be encouraged, with appropriate legislation and regulation to ensure responsible development and deployment.