July 2019 | Roy Schwartz*, Jesse Dodge*, Noah A. Smith, Oren Etzioni
Green AI is an emerging focus at the Allen Institute for AI. The paper advocates for making efficiency an evaluation criterion for AI research alongside accuracy. It proposes reporting the financial cost or "price tag" of developing, training, and running models to provide baselines for investigating increasingly efficient methods. The goal is to make AI both greener and more inclusive, enabling any inspired undergraduate with a laptop to write high-quality research papers.
Since 2012, the field of AI has seen remarkable progress across many capabilities, achieved through increasingly large and computationally intensive deep learning models. The computational cost of training such models has grown exponentially, doubling every few months. This trend carries a significant environmental impact, as the carbon footprint of AI research is surprisingly large. There is an irony here: deep learning was inspired by the human brain, which is remarkably energy efficient.
Red AI refers to AI research that seeks state-of-the-art accuracy through the use of massive computational power. This approach yields diminishing returns, as the computational cost of research is increasing exponentially, far outpacing Moore's Law. The paper argues that greater weight should be given to efficiency metrics in AI research.
Green AI refers to AI research that yields novel results without increasing computational cost, and ideally while reducing it. The paper proposes reporting the total number of floating-point operations (FPO) required to generate a result as a measure of efficiency. FPO provides an estimate of the amount of work performed by a computational process and is strongly correlated with the running time of the model.
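To make the FPO measure concrete, here is a minimal sketch (not from the paper) of how one might estimate the FPO of training a small fully-connected network. It uses the common approximation that a dense layer with m inputs and n outputs costs about 2·m·n operations per example (one multiply and one add per weight), and that the backward pass roughly doubles the forward cost; the function names and the factor of 3 are illustrative assumptions.

```python
def dense_layer_fpo(in_features: int, out_features: int) -> int:
    """Approximate FPO for one forward pass through a dense layer:
    one multiply and one add per weight."""
    return 2 * in_features * out_features


def network_fpo(layer_sizes, num_examples: int, num_epochs: int) -> int:
    """Rough total training FPO for a fully-connected network.

    The forward-pass cost is multiplied by ~3 as a common rule of
    thumb to account for the backward pass (an assumption, not a
    figure from the paper), then scaled by examples and epochs.
    """
    forward = sum(dense_layer_fpo(m, n)
                  for m, n in zip(layer_sizes, layer_sizes[1:]))
    return 3 * forward * num_examples * num_epochs


# Example: a 784 -> 256 -> 10 classifier trained for 10 epochs
# on 60,000 examples.
total = network_fpo([784, 256, 10], num_examples=60_000, num_epochs=10)
print(f"~{total:.2e} FPO")
```

Reporting a single number like this alongside accuracy is the kind of lightweight efficiency disclosure the paper advocates.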
The paper also advocates for recognizing efficiency as an official contribution at major AI conferences and encourages researchers to report the budget/accuracy curve observed during training. Additionally, it highlights the importance of releasing pretrained models publicly to save others the cost of retraining them.
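A budget/accuracy curve can be recorded with very little machinery: after each unit of training, log the cumulative compute spent alongside validation accuracy, so the whole curve can be reported rather than a single final score. The sketch below assumes caller-supplied `train_epoch` and `evaluate` callables and a known per-epoch FPO cost; these names are hypothetical, not an API from the paper.

```python
def record_budget_accuracy_curve(train_epoch, evaluate,
                                 num_epochs: int, fpo_per_epoch: int):
    """Train for num_epochs, returning a list of
    (cumulative FPO, validation accuracy) pairs.

    train_epoch() runs one epoch of training; evaluate() returns
    the current validation accuracy. Both are assumptions about
    the caller's training loop, not library calls.
    """
    curve = []
    cumulative_fpo = 0
    for _ in range(num_epochs):
        train_epoch()
        cumulative_fpo += fpo_per_epoch
        curve.append((cumulative_fpo, evaluate()))
    return curve
```

Plotting or tabulating the returned pairs gives the budget/accuracy curve the paper asks authors to report, making visible how much accuracy each additional unit of compute buys.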
The paper concludes that Green AI has the potential to make AI more inclusive and environmentally friendly, while also moving towards a more cognitively plausible direction. It highlights several important research directions and open questions in the field of Green AI.