6 Mar 2024 | Xuxin Cheng*, Yandong Ji*, Junming Chen, Ruihan Yang, Ge Yang, Xiaolong Wang
The paper "Expressive Whole-Body Control for Humanoid Robots" by Xuxin Cheng, Yandong Ji, Junming Chen, Ruihan Yang, Ge Yang, and Xiaolong Wang from UC San Diego introduces a novel approach called Expressive Whole-Body Control (ExBody) to enable humanoid robots to generate rich, diverse, and expressive motions in real-world scenarios. The authors leverage large-scale human motion capture data from the graphics community and use Reinforcement Learning (RL) to train a whole-body control policy on a human-sized robot, specifically the Unitree H1. The key challenge addressed is the gap between the degrees of freedom (DoFs) and physical capabilities of the robot and the extensive DoFs in human motion capture data.
ExBody trains the robot's upper body to imitate reference motions while relaxing the imitation constraint on the legs, requiring them only to follow a commanded velocity robustly. This decomposition allows the robot to perform a variety of tasks such as walking on different terrains, dancing, and interacting with humans. The policy is trained in simulation and then deployed on the physical robot via Sim2Real transfer, where it exhibits robust and expressive behavior.
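To make this decomposition concrete, here is a minimal Python sketch of how such a reward might be split into an upper-body expression term and a root velocity-tracking term. The function, field names, and weights below are illustrative assumptions, not the paper's exact reward formulation.

```python
import numpy as np

def exbody_style_reward(robot_state, ref_motion, command):
    """Illustrative reward split: imitate the upper body, track a velocity
    command with the legs. Names and weights are hypothetical."""
    # Expression goal: match retargeted reference joint angles for the upper body.
    upper_err = np.sum((robot_state["upper_joint_pos"]
                        - ref_motion["upper_joint_pos"]) ** 2)
    r_expression = np.exp(-0.5 * upper_err)

    # Root movement goal: track commanded planar velocity and yaw rate
    # instead of imitating the reference legs exactly.
    vel_err = np.sum((robot_state["root_lin_vel"][:2] - command["lin_vel"]) ** 2)
    yaw_err = (robot_state["root_ang_vel"][2] - command["yaw_rate"]) ** 2
    r_root = np.exp(-4.0 * vel_err) + 0.5 * np.exp(-4.0 * yaw_err)

    # Weighted sum; regularization terms (energy, joint limits, etc.) omitted.
    return 1.0 * r_expression + 1.0 * r_root
```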
The paper includes extensive evaluations in both simulation and real-world settings, showing that ExBody can track root movement goals and expression goals effectively. The authors compare their method with baselines and adversarial methods, demonstrating superior performance in terms of linear velocity tracking and robustness. The study also highlights the importance of learning from large datasets and the benefits of random state initialization in policy learning.
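Random state initialization typically means that each training episode starts from a randomly sampled frame of a reference clip rather than always from a default standing pose, exposing the policy to mid-motion states from the start. The sketch below illustrates the idea; `env.set_state`, the clip layout, and all field names are hypothetical and not the authors' implementation.

```python
import numpy as np

def reset_with_random_state_init(env, motion_dataset, rng=None):
    """Start an episode from a random frame of a random reference clip
    (hypothetical data layout and simulator API)."""
    rng = rng or np.random.default_rng()
    clip = motion_dataset[rng.integers(len(motion_dataset))]
    frame_idx = int(rng.integers(len(clip["frames"])))
    frame = clip["frames"][frame_idx]
    # Initialize the simulated robot at the sampled reference pose and
    # velocities, so imitation rewards are well-defined from the first step.
    env.set_state(joint_pos=frame["joint_pos"],
                  joint_vel=frame["joint_vel"],
                  root_pose=frame["root_pose"],
                  root_vel=frame["root_vel"])
    return clip, frame_idx
```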
The paper concludes by discussing the limitations of the approach, such as the loss of information during motion retargeting and the need for a reliable protective system to handle potential falls. Overall, the work paves the way for more versatile and reliable humanoid robots capable of performing a wide range of tasks.