GraspXL: Generating Grasping Motions for Diverse Objects at Scale

12 Jul 2024 | Hui Zhang, Sammy Christen, Zicong Fan, Otmar Hilliges, and Jie Song
GraspXL is a policy learning framework that generates grasping motions for diverse objects at scale without requiring any 3D hand-object interaction data. It addresses the challenge of synthesizing grasping motions for large numbers of unseen objects, achieving a success rate of 82.2% across more than 500,000 objects. The framework unifies motion generation across multiple motion objectives, diverse object shapes, and different dexterous hand morphologies, using a learning curriculum and objective-driven guidance to produce stable grasps that satisfy several objectives simultaneously. It also handles reconstructed and generated objects and transfers to different dexterous hands, including Shadow, Allegro, and Faive.

Evaluated on the PartNet and ShapeNet datasets, GraspXL outperforms existing methods, achieving higher success rates and reducing objective errors by 30-50% compared to SynH2R. It remains effective on large-scale object datasets such as Objaverse, as well as on generated and reconstructed objects, demonstrating robustness to reconstruction noise and generalization across robotic hands. The authors attribute this performance to the policy's ability to handle multiple objectives simultaneously and to its efficient real-time inference. The code, models, and dataset are publicly available for further research.
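To make the idea of an objective-conditioned grasping policy concrete, the following is a minimal sketch of how such a policy might be structured: a network that maps an encoded object shape feature, a motion-objective vector, and the hand's proprioceptive state to joint targets, which a physics simulator then tracks. All names and dimensions here (ObjectiveConditionedPolicy, obj_feat_dim, etc.) are hypothetical illustrations under assumed interfaces, not the authors' actual GraspXL implementation.

```python
# Hypothetical sketch of an objective-conditioned grasping policy.
# Architecture, observation space, and training details are illustrative;
# GraspXL's actual method is described in the paper.
import torch
import torch.nn as nn

class ObjectiveConditionedPolicy(nn.Module):
    def __init__(self, obj_feat_dim=128, objective_dim=6,
                 proprio_dim=48, action_dim=28):
        super().__init__()
        # Fuse object shape features, the motion objective, and hand state
        # into a single input, then regress hand control targets.
        self.net = nn.Sequential(
            nn.Linear(obj_feat_dim + objective_dim + proprio_dim, 256),
            nn.ELU(),
            nn.Linear(256, 256),
            nn.ELU(),
            nn.Linear(256, action_dim),  # e.g., PD joint targets / wrist pose deltas
        )

    def forward(self, obj_feat, objective, proprio):
        x = torch.cat([obj_feat, objective, proprio], dim=-1)
        return self.net(x)

# One policy step inside a simulated rollout (schematic):
policy = ObjectiveConditionedPolicy()
obj_feat = torch.randn(1, 128)   # e.g., encoded object point cloud
objective = torch.randn(1, 6)    # e.g., desired wrist direction + contact region code
proprio = torch.randn(1, 48)     # current hand joint angles and velocities
action = policy(obj_feat, objective, proprio)  # control targets for the hand
```

In such a setup, the same policy weights serve different motion objectives simply by changing the objective vector at inference time, which is one plausible way a single policy could cover the multiple objectives the paper describes.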