Evaluating Large Language Models Trained on Code

14 Jul 2021 | Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, Wojciech Zaremba
The paper introduces Codex, a GPT language model fine-tuned on publicly available code from GitHub, and evaluates its Python code-writing capabilities. The model powers GitHub Copilot and is evaluated on HumanEval, a dataset that measures functional correctness for synthesizing programs from docstrings. Codex solves 28.8% of the problems, significantly outperforming GPT-3 (0%) and GPT-J (11.4%). The authors find that repeated sampling from the model is an effective strategy for producing working solutions, solving 70.2% of problems when 100 samples are drawn per problem. The paper also discusses Codex's limitations, including difficulty with docstrings that describe long chains of operations and with binding operations to variables. Finally, the authors discuss the broader impacts of deploying powerful code generation technologies, covering safety, security, and economic implications.
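
The functional-correctness numbers above are reported as pass@k: a problem counts as solved if any of k generated samples passes its unit tests. Below is a minimal sketch of the unbiased pass@k estimator computed from n samples per problem with c of them correct, i.e. 1 − C(n−c, k)/C(n, k); the function name, NumPy usage, and example values are illustrative choices, not taken from the paper's code verbatim.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimate of pass@k for a single problem.

    n: total samples generated for the problem
    c: number of samples that pass all unit tests
    k: number of samples the user is allowed to draw
    """
    if n - c < k:
        # Too few incorrect samples: every size-k subset contains a correct one.
        return 1.0
    # 1 - C(n - c, k) / C(n, k), evaluated as a numerically stable running product.
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Illustrative example: 100 samples for one problem, 30 of which pass the tests.
print(pass_at_k(n=100, c=30, k=1))   # ~0.30
print(pass_at_k(n=100, c=30, k=10))  # close to 1.0
```

The dataset-level score is the mean of this per-problem estimate, which is why drawing many samples per problem (e.g. 100) raises the fraction of problems with at least one working solution well above the single-sample rate.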