6 Jun 2024 | Yijia Weng, Bowen Wen, Jonathan Tremblay, Valts Blukis, Dieter Fox, Leonidas Guibas, Stan Birchfield
The paper addresses the challenge of constructing digital twins of unknown articulated objects from two RGBD scans taken at different articulation states. The method proceeds in two stages: it first reconstructs the object's shape at each state using a Neural Object Field, and then recovers the articulation model, including part segmentation and joint parameters. By explicitly modeling point-level correspondences and leveraging cues from images, 3D reconstructions, and kinematics, the method achieves more accurate and stable results than prior work. It handles multiple movable parts and does not rely on object shape priors. The approach is evaluated on challenging synthetic and real-world scenes, demonstrating its effectiveness and robustness. The paper also introduces a novel synthetic dataset of objects with more than one joint, further validating the method's generalizability.
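To make the articulation-recovery stage concrete, here is a minimal, hedged sketch (not the paper's implementation) of how a revolute joint could be estimated once point-level correspondences for one movable part are available between the two states: fit the part's rigid motion with the classical Kabsch/Procrustes algorithm, then read off the rotation axis, a point on the axis, and the joint angle. All function and variable names below are illustrative assumptions, not identifiers from the paper's code.

```python
# Hedged sketch: recover a revolute joint from correspondences between two
# articulation states. Not the paper's method; names are illustrative.
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src -> dst (both Nx3)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)                # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T                                 # proper rotation, det = +1
    t = dst_c - R @ src_c
    return R, t

def revolute_joint_from_transform(R, t):
    """Joint axis direction, a point on the axis, and rotation angle from (R, t)."""
    angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    # The rotation axis is the eigenvector of R with eigenvalue 1.
    w, V = np.linalg.eig(R)
    axis = np.real(V[:, np.argmin(np.abs(w - 1.0))])
    axis /= np.linalg.norm(axis)
    # A point on the axis satisfies (I - R) p = t; solve in least squares,
    # which resolves the component orthogonal to the axis.
    p, *_ = np.linalg.lstsq(np.eye(3) - R, t, rcond=None)
    return axis, p, angle

if __name__ == "__main__":
    # Toy check: a part rotated 30 degrees about the z-axis through the origin.
    rng = np.random.default_rng(0)
    pts0 = rng.normal(size=(500, 3))
    theta = np.deg2rad(30.0)
    Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
    pts1 = pts0 @ Rz.T
    R, t = estimate_rigid_transform(pts0, pts1)
    axis, point, angle = revolute_joint_from_transform(R, t)
    print(np.round(axis, 3), np.round(np.rad2deg(angle), 1))  # ~±[0 0 1], ~30.0
```

In practice, the paper's contribution lies in obtaining reliable dense correspondences and part segmentations from the two scans; the joint-fitting step above is only the standard geometric post-processing one might apply once per segmented part.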