Yell At Your Robot 🎤 Improving On-the-Fly from Language Corrections

19 Mar 2024 | Lucy Xiaoyang Shi, Zheyuan Hu, Tony Z. Zhao, Archit Sharma, Karl Pertsch, Jianlan Luo, Sergey Levine, Chelsea Finn
The paper "Yell At Your Robot" by Lucy Xiaoyang Shi et al. introduces a system that enables robots to improve their performance on complex, long-horizon tasks through real-time verbal corrections from humans. The system, called YAY Robot, combines a high-level policy that generates language instructions with a low-level policy that executes these instructions. Key contributions include:

1. **Real-Time Adaptation**: The system can adapt to diverse, contextual language commands in real time, allowing users to provide corrective feedback through natural language.
2. **Continuous Improvement**: The high-level policy can be continuously fine-tuned using human corrections, leading to significant performance improvements over time.
3. **Hierarchical Setup**: The hierarchical setup allows the robot to reuse primitive skills and handle complex, long-horizon tasks more effectively.
4. **Natural Language Feedback**: Human feedback is gathered naturally during the robot's daily operations, making it accessible and intuitive for end-users.

The paper evaluates YAY Robot on three bi-manual manipulation tasks: packing items into a bag, preparing trail mix, and cleaning a plate. Results show that YAY Robot achieves a 20% improvement in success rates compared to the base policy, with real-time language corrections enhancing both task success and autonomous performance. The system also handles compounding errors and out-of-distribution scenarios better than the base policy.

The authors highlight limitations of their approach, such as the dependence on a performant low-level policy, and note room for further gains in low-level policy performance. They suggest that future research could explore methods to handle non-verbal communication and integrate pretrained vision-language models more effectively.
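The core interaction described above can be sketched as a simple control loop: at each step, a verbal correction (when present) overrides the high-level policy's language instruction, and the (observation, correction) pair is logged so the high-level policy can later be fine-tuned on it. The sketch below is a minimal illustration under those assumptions; the class and function names are hypothetical, not the paper's API.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple, Optional

@dataclass
class HierarchicalLoop:
    """Illustrative sketch of a YAY-Robot-style control loop.

    high_level: maps an observation to a language instruction.
    low_level:  maps (observation, instruction) to a robot action.
    Both stand in for learned policies in the actual system.
    """
    high_level: Callable[[str], str]
    low_level: Callable[[str, str], str]
    # Corrections are stored for later fine-tuning of the high-level policy.
    corrections: List[Tuple[str, str]] = field(default_factory=list)

    def step(self, obs: str, human_correction: Optional[str] = None):
        if human_correction is not None:
            # A verbal correction overrides the high-level policy's output
            # and is logged as supervision for continuous improvement.
            instruction = human_correction
            self.corrections.append((obs, instruction))
        else:
            instruction = self.high_level(obs)
        return instruction, self.low_level(obs, instruction)

# Toy stand-ins for the learned policies:
loop = HierarchicalLoop(
    high_level=lambda obs: "pick up the bag",
    low_level=lambda obs, instr: f"action_for({instr})",
)
print(loop.step("frame_0"))                        # autonomous: high-level instruction
print(loop.step("frame_1", "open the bag wider"))  # human correction takes over
```

In the autonomous case the robot follows its own instruction; when a user yells a correction, it is executed immediately and kept as a training example, which is how real-time feedback turns into lasting improvement.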