GAMMA: Generalizable Articulation Modeling and Manipulation for Articulated Objects


1 Mar 2024 | Qiaojun Yu, Junbo Wang, Wenhai Liu, Ce Hao, Liu Liu, Lin Shao, Weiming Wang, Cewu Lu
This paper proposes GAMMA, a framework for generalizable articulation modeling and manipulation of articulated objects. Articulated objects such as cabinets and doors are common in daily life but are challenging to manipulate because of their diverse shapes, categories, and kinematic constraints. Prior methods focus on specific joint types and lack generalizability to unseen objects. GAMMA learns articulation modeling and grasp-pose affordance from diverse articulated objects, and uses adaptive manipulation to iteratively reduce modeling errors and improve performance. The framework is trained on the PartNet-Mobility dataset and evaluated both in SAPIEN simulation and in real-world experiments with a Franka robot. Results show that GAMMA significantly outperforms state-of-the-art methods on unseen and cross-category articulated objects.

Given an observation of the object, GAMMA segments it into articulated parts, estimates the joint parameters of each part, and generates grasp poses on the movable part. It then performs physics-guided adaptive manipulation, iteratively updating the joint parameters during the interaction to correct modeling errors and improve performance. The paper compares GAMMA with baselines including ANCSH, reinforcement learning, Where2Act, and VAT-Mart, and shows that GAMMA outperforms them on both modeling and manipulation tasks. Real-world experiments further show that GAMMA generalizes to real-world point clouds and successfully manipulates articulated objects, demonstrating strong generalizability and cross-category success in both simulation and the real world.
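To make the adaptive-manipulation idea concrete, below is a minimal sketch of how joint parameters could be re-estimated from the gripper trajectory observed during interaction and used to recompute the pull direction. It assumes the standard revolute/prismatic joint types found in PartNet-Mobility objects; the function names, the least-squares fitting approach, and the overall structure are illustrative assumptions, not code from the GAMMA paper or its released implementation.

```python
# Hypothetical sketch of physics-guided adaptive manipulation:
# while interacting with the object, re-fit the joint parameters from the
# gripper trajectory observed so far, then recompute the pull direction.
# Names and fitting choices are illustrative, not from the GAMMA codebase.

import numpy as np


def fit_prismatic_axis(traj: np.ndarray) -> np.ndarray:
    """Fit a prismatic joint axis as the dominant direction of the
    gripper trajectory (first principal component of centered positions)."""
    centered = traj - traj.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[0]
    return axis / np.linalg.norm(axis)


def fit_revolute_axis(traj: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Fit a revolute joint: find the plane of motion, then fit a circle
    to the trajectory projected onto that plane.
    Returns (axis direction, a point on the axis)."""
    mean = traj.mean(axis=0)
    centered = traj - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[2] / np.linalg.norm(vt[2])       # normal of the motion plane
    basis = vt[:2]                             # (2, 3) in-plane directions
    pts_2d = centered @ basis.T                # (N, 2) projected trajectory
    # Algebraic circle fit: x^2 + y^2 = 2*a*x + 2*b*y + c.
    A = np.c_[2 * pts_2d, np.ones(len(pts_2d))]
    rhs = (pts_2d ** 2).sum(axis=1)
    (a, b, _), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    pivot = mean + np.array([a, b]) @ basis    # circle center lifted to 3D
    return axis, pivot


def next_pull_direction(traj: np.ndarray, joint_type: str) -> np.ndarray:
    """Recompute the manipulation direction from the trajectory so far."""
    if joint_type == "prismatic":
        d = fit_prismatic_axis(traj)
        # Keep the sign consistent with the motion achieved so far.
        if d @ (traj[-1] - traj[0]) < 0:
            d = -d
        return d
    axis, pivot = fit_revolute_axis(traj)
    # Tangent of the circular motion at the latest gripper position.
    radial = traj[-1] - pivot
    tangent = np.cross(axis, radial)
    if tangent @ (traj[-1] - traj[-2]) < 0:
        tangent = -tangent
    return tangent / np.linalg.norm(tangent)
```

In this reading, each control step appends the current gripper position to `traj`, re-fits the joint, and pulls along the updated direction, so early estimation errors in the joint axis or pivot are progressively corrected as more of the part's motion is observed.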