12 Jul 2024 | Hui Zhang, Sammy Christen, Zicong Fan, Otmar Hilliges, and Jie Song
GraspXL is a policy learning framework that generates grasping motions for a wide variety of objects, motion objectives, and hand morphologies. The method does not rely on 3D hand-object interaction data during training and can robustly generalize to grasp a broad range of unseen objects. It achieves a success rate of 82.2% on over 500,000 unseen objects, adhering to multiple objectives such as graspable areas, heading directions, wrist rotations, and hand positions. GraspXL uses a reinforcement learning paradigm and physics simulation to handle varying object shapes and multiple objectives. It introduces an objective-driven guidance technique and a learning curriculum to enable stable grasping while satisfying multiple objectives. The method is evaluated on PartNet and ShapeNet datasets, demonstrating superior performance and broad generalization capabilities across different object sizes, reconstructed and generated objects, and various robotic hands. The framework's effectiveness is validated through quantitative and qualitative evaluations, and the code, models, and dataset are available for further research.
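The summary says the policy must satisfy several objectives at once (graspable area, heading direction, wrist rotation, hand position). As a rough illustration of how such objectives could be folded into a reinforcement-learning reward, here is a minimal sketch; the `GraspObjective` container, the `objective_reward` function, its field names, and the exponential weighting are all assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np
from dataclasses import dataclass


@dataclass
class GraspObjective:
    """Hypothetical per-grasp objective; all field names are illustrative."""
    target_position: np.ndarray    # desired wrist position, shape (3,)
    heading_direction: np.ndarray  # desired palm heading, unit vector (3,)
    wrist_rotation: float          # desired wrist roll angle in radians
    graspable_center: np.ndarray   # center of the allowed contact region (3,)


def objective_reward(wrist_pos, palm_dir, wrist_roll, contact_centroid,
                     obj: GraspObjective, weights=(1.0, 0.5, 0.25, 1.0)):
    """Weighted sum of per-objective terms; exponential kernels keep each
    term in (0, 1]. A sketch only, not the paper's reward design."""
    w_pos, w_dir, w_rot, w_area = weights
    r_pos = np.exp(-np.linalg.norm(wrist_pos - obj.target_position))
    r_dir = np.exp(-(1.0 - float(palm_dir @ obj.heading_direction)))
    r_rot = np.exp(-abs(wrist_roll - obj.wrist_rotation))
    r_area = np.exp(-np.linalg.norm(contact_centroid - obj.graspable_center))
    return w_pos * r_pos + w_dir * r_dir + w_rot * r_rot + w_area * r_area


# Example: score one simulated hand state against a grasp objective.
obj = GraspObjective(
    target_position=np.array([0.0, 0.1, 0.3]),
    heading_direction=np.array([0.0, 0.0, -1.0]),
    wrist_rotation=0.0,
    graspable_center=np.array([0.0, 0.1, 0.25]),
)
r = objective_reward(
    wrist_pos=np.array([0.02, 0.12, 0.28]),
    palm_dir=np.array([0.0, 0.0, -1.0]),
    wrist_roll=0.1,
    contact_centroid=np.array([0.01, 0.1, 0.26]),
    obj=obj,
)
print(f"objective reward: {r:.3f}")
```

A per-term weighted sum like this is one common way to balance competing grasp objectives during training; the paper's objective-driven guidance and curriculum would shape how these terms are scheduled, which the sketch does not attempt to reproduce.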