Scarecrows in Oz: The Use of Large Language Models in HRI

January 2024 | TOM WILLIAMS, Colorado School of Mines, USA; CYNTHIA MATUSZEK, University of Maryland, Baltimore County, USA; ROSS MEAD, Semio, Inc., USA; NICK DEPALMA, Plus One Robotics, USA
The article "Scarecrows in Oz: The Use of Large Language Models in HRI" by Tom Williams, Cynthia Matuszek, Ross Mead, and Nick DePalma explores the potential and challenges of integrating Large Language Models (LLMs) into Human-Robot Interaction (HRI). The authors argue that LLMs can be used as "Scarecrows": black-box modules that enable rapid prototyping and full-pipeline demonstrations without attending to ethical, safety, or control considerations. Scarecrows can serve as useful placeholders in language-capable robot architectures, but this approach has limitations and risks, and Scarecrows ultimately need to be replaced or supplemented by more robust, theoretically motivated solutions.

The article weighs the opportunities and risks of using LLMs in HRI. Opportunities include the ability to quickly build and test new capabilities; risks include a lack of trustworthiness, bias, and potentially harmful persuasive power. To address these risks, the authors emphasize the importance of transparent reporting guidelines, covering model specification, the motivation for using an LLM, ethical concerns, and the path to deployment.

The article concludes by advocating a balanced approach that leverages the capabilities of LLMs while maintaining transparency, ethical standards, and scientific reproducibility, and it calls for the development of guidelines and standards to ensure responsible and effective use of LLMs in HRI.
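The "Scarecrow as placeholder" idea described above is essentially an architectural pattern: the pipeline depends on a component interface, an LLM stands in behind that interface during prototyping, and a more robust, inspectable component can be swapped in later. A minimal sketch of that pattern is shown below; all names (`DialogueComponent`, `ScarecrowLLM`, `llm_call`, etc.) are hypothetical illustrations, not code from the article.

```python
from abc import ABC, abstractmethod
from typing import Callable, Dict


class DialogueComponent(ABC):
    """Interface for one stage of a language-capable robot pipeline."""

    @abstractmethod
    def process(self, utterance: str) -> str:
        ...


class ScarecrowLLM(DialogueComponent):
    """Black-box LLM placeholder (a 'Scarecrow'): stands in for a
    theoretically motivated component while prototyping."""

    def __init__(self, llm_call: Callable[[str], str]):
        # llm_call is any callable that sends a prompt to an LLM and
        # returns its text response (e.g. a wrapper around a hosted API).
        self.llm_call = llm_call

    def process(self, utterance: str) -> str:
        prompt = f"Interpret this request to a robot: {utterance}"
        return self.llm_call(prompt)


class RuleBasedResolver(DialogueComponent):
    """A transparent, inspectable component that can replace the
    Scarecrow once the capability is properly understood."""

    def __init__(self, rules: Dict[str, str]):
        self.rules = rules

    def process(self, utterance: str) -> str:
        return self.rules.get(utterance.lower(), "unknown request")


def run_pipeline(component: DialogueComponent, utterance: str) -> str:
    # The pipeline depends only on the interface, so the Scarecrow can
    # be swapped out without changing downstream code.
    return component.process(utterance)
```

Because both components satisfy the same interface, replacing the Scarecrow with the rule-based resolver is a one-line change at the call site, which is the swappability the authors argue Scarecrows should be designed for.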