Scarecrows in Oz: The Use of Large Language Models in HRI


January 2024 | TOM WILLIAMS, CYNTHIA MATUSZEK, ROSS MEAD, NICK DEPALMA
The article "Scarecrows in Oz: The Use of Large Language Models in HRI" explores the use of Large Language Models (LLMs) in Human-Robot Interaction (HRI). While direct deployment of LLMs on robots may be problematic due to ethical, safety, and control concerns, LLMs can serve as "Scarecrows"—black-box modules that enable quick, full-pipeline solutions in HRI. These Scarecrows are not ideal solutions but can act as placeholders, similar to the "Wizard of Oz" (WoZ) approach. The authors argue that LLMs can provide useful capabilities for HRI, even if they are not perfect or theoretically sound. They also highlight the potential risks of using LLMs, including issues with accuracy, morality, and intentionality, which are critical in HRI. The article suggests that LLMs can be used as temporary solutions or as robust, principled solutions depending on the context. The authors also propose reporting guidelines for the use of LLMs in HRI, similar to those for WoZ techniques. The use of LLMs in HRI is a rapidly evolving area, and the authors emphasize the need for careful consideration of the ethical, practical, and scientific implications of their use. The article concludes that while LLMs offer significant potential, their use must be balanced with a consideration of the risks and the need for transparency and ethical design.The article "Scarecrows in Oz: The Use of Large Language Models in HRI" explores the use of Large Language Models (LLMs) in Human-Robot Interaction (HRI). While direct deployment of LLMs on robots may be problematic due to ethical, safety, and control concerns, LLMs can serve as "Scarecrows"—black-box modules that enable quick, full-pipeline solutions in HRI. These Scarecrows are not ideal solutions but can act as placeholders, similar to the "Wizard of Oz" (WoZ) approach. The authors argue that LLMs can provide useful capabilities for HRI, even if they are not perfect or theoretically sound. They also highlight the potential risks of using LLMs, including issues with accuracy, morality, and intentionality, which are critical in HRI. The article suggests that LLMs can be used as temporary solutions or as robust, principled solutions depending on the context. The authors also propose reporting guidelines for the use of LLMs in HRI, similar to those for WoZ techniques. The use of LLMs in HRI is a rapidly evolving area, and the authors emphasize the need for careful consideration of the ethical, practical, and scientific implications of their use. The article concludes that while LLMs offer significant potential, their use must be balanced with a consideration of the risks and the need for transparency and ethical design.