6 Mar 2024 | Xuxin Cheng*, Yandong Ji*, Junming Chen, Ruihan Yang, Ge Yang, Xiaolong Wang
This paper presents a method for enabling human-sized humanoid robots to generate expressive, diverse, and realistic whole-body motions. The proposed approach, called Expressive Whole-Body Control (ExBody), combines large-scale human motion capture data with deep reinforcement learning (RL) to train a policy that can control a humanoid robot in the real world. The method focuses on learning a goal-conditioned motor policy that can track both root movement goals and expression goals, allowing the robot to perform a wide range of motions, including walking, dancing, and handshaking.
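To make the goal-conditioned setup concrete, below is a minimal sketch of how a policy observation could combine proprioception with the two goal streams (root movement goals and expression goals). The dimensions, field names, and goal encodings here are illustrative assumptions, not the paper's exact interface.

```python
import numpy as np

# Hypothetical dimensions for a humanoid with 19 actuated joints,
# used only to illustrate how a goal-conditioned observation might be assembled.
NUM_JOINTS = 19

def build_observation(joint_pos, joint_vel, base_ang_vel, gravity_vec,
                      root_goal, expression_goal):
    """Concatenate proprioception with the two goal streams.

    root_goal:       e.g. [target vx, vy, yaw rate, base height] (4,)
    expression_goal: reference joint angles to imitate (NUM_JOINTS,)
    """
    return np.concatenate([
        joint_pos,        # current joint angles
        joint_vel,        # current joint velocities
        base_ang_vel,     # base angular velocity (e.g. from an IMU)
        gravity_vec,      # projected gravity as an orientation proxy
        root_goal,        # root movement goal
        expression_goal,  # expression (pose imitation) goal
    ])

# Example: zeroed proprioception, a forward-walking root goal,
# and a nominal reference pose as the expression goal.
obs = build_observation(
    joint_pos=np.zeros(NUM_JOINTS),
    joint_vel=np.zeros(NUM_JOINTS),
    base_ang_vel=np.zeros(3),
    gravity_vec=np.array([0.0, 0.0, -1.0]),
    root_goal=np.array([0.5, 0.0, 0.0, 0.8]),   # 0.5 m/s forward, 0.8 m base height
    expression_goal=np.zeros(NUM_JOINTS),
)
print(obs.shape)
```

The policy then maps this observation to joint-level actions, so the same network can be steered at test time by swapping in different root and expression goals.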
ExBody addresses the challenge of transferring motion data from human motion capture datasets to real-world robots by relaxing the imitation constraints on the lower body while encouraging the upper body to imitate reference motions. This approach allows the robot to maintain robustness and compliance, enabling it to interact naturally with humans and perform complex tasks in diverse environments.
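The split between strict upper-body imitation and relaxed lower-body constraints can be illustrated with a simple reward sketch: the upper body is rewarded for matching reference joint angles, while the legs are only rewarded for tracking the root movement goal rather than the reference leg motion. The joint partition, reward scales, and exponential kernels below are assumptions for illustration, not the paper's exact reward terms.

```python
import numpy as np

# Hypothetical joint index split; the actual partition depends on the robot.
LOWER_IDX = np.arange(0, 10)    # e.g. leg joints (not strictly imitated)
UPPER_IDX = np.arange(10, 19)   # e.g. torso and arm joints

def expression_reward(joint_pos, ref_joint_pos, sigma=0.5):
    """Dense imitation reward applied only to the upper-body joints."""
    err = joint_pos[UPPER_IDX] - ref_joint_pos[UPPER_IDX]
    return np.exp(-np.sum(err ** 2) / sigma)

def root_tracking_reward(base_lin_vel, target_lin_vel, sigma=0.25):
    """The lower body is rewarded for tracking the commanded root velocity,
    not for matching the reference leg motion joint-by-joint."""
    err = base_lin_vel[:2] - target_lin_vel[:2]
    return np.exp(-np.sum(err ** 2) / sigma)

# Example: the current pose deviates from the reference only in the legs,
# so the expression reward stays high, while the root reward reflects
# how closely the commanded velocity is followed.
joint_pos = np.zeros(19)
ref = np.zeros(19)
ref[LOWER_IDX] = 0.3  # reference legs differ; ignored by expression_reward
r_expr = expression_reward(joint_pos, ref)
r_root = root_tracking_reward(np.array([0.45, 0.0, 0.0]), np.array([0.5, 0.0]))
print(r_expr, r_root)
```

This decoupling is what lets the legs deviate from the (often physically infeasible) retargeted reference in order to keep balance, while the arms and torso still reproduce the expressive content of the motion.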
The method is evaluated in both simulation and real-world scenarios, demonstrating its effectiveness in generating expressive and robust motions. The results show that ExBody outperforms other approaches in tracking performance and robustness, particularly for complex motions that require precise control of the upper body. The paper also discusses the limitations of the approach, including the challenges of retargeting human motion data to real robots and the need for reliable protective systems to prevent damage during falls.
Overall, the work contributes to the field of humanoid robotics by providing a new approach for learning expressive whole-body control that can be applied to real-world robots. The method has the potential to enable more natural and intuitive interactions between robots and humans, opening up new possibilities for applications in areas such as service robotics, entertainment, and human-robot collaboration.