25 May 2023 | Jacky Liang, Wenlong Huang, Fei Xia, Peng Xu, Karol Hausman, Brian Ichter, Pete Florence, Andy Zeng
The paper "Code as Policies: Language Model Programs for Embodied Control" by Jacky Liang et al. explores the use of large language models (LLMs) trained on code-completion to generate robot policy code from natural language commands. The authors demonstrate that these code-writing LLMs can be repurposed to write robot policy code, which can process perception outputs, parameterize control primitive APIs, and chain classic logic structures. By providing example commands and corresponding policy code, LLMs can autonomously generate new policy code for new commands. This approach, called *Code as Policies* (CaP), enables robots to perform spatial-geometric reasoning, generalize to new instructions, and prescribe precise values based on context. The paper introduces a robot-centric formulation of language model-generated programs (LMPs) and presents a new benchmark to evaluate future language models in robotics code-generation tasks. The authors also discuss the benefits and limitations of CaP, including its ability to handle cross-embodied plans and its reliance on specific perception and control APIs. Experiments across multiple real robot platforms show the effectiveness of CaP in various tasks, such as drawing shapes, pick-and-place operations, and mobile manipulation.
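The few-shot pattern the summary describes (example commands paired with policy code, which the LLM extends to new commands) can be sketched roughly as follows. The API names here (`get_obj_pos`, `put_first_on_second`) are hypothetical stand-ins for the paper's perception and control primitives, not the actual CaP interface:

```python
# Sketch of the few-shot prompting pattern behind Code as Policies.
# The in-context examples pair natural language instructions with
# policy code that calls (hypothetical) perception/control APIs.
FEW_SHOT_PROMPT = """\
# instruction: move the red block to the left of the bowl.
target = get_obj_pos('bowl') + [-0.1, 0]
put_first_on_second('red block', target)

# instruction: stack the green block on the yellow block.
put_first_on_second('green block', get_obj_pos('yellow block'))
"""

def build_prompt(instruction: str) -> str:
    """Append a new instruction; a code-completion LLM would then
    generate the corresponding policy code as its completion."""
    return FEW_SHOT_PROMPT + f"\n# instruction: {instruction}\n"

prompt = build_prompt("put the blue block behind the bowl")
# The generated completion would be executed against the robot's real
# perception and control APIs (e.g. via Python's exec).
```

The key idea is that no task-specific training is needed: generalization to new commands comes entirely from the in-context examples and the LLM's code-completion ability.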