Neural Implicit Representation for Building Digital Twins of Unknown Articulated Objects

6 Jun 2024 | Yijia Weng, Bowen Wen, Jonathan Tremblay, Valts Blukis, Dieter Fox, Leonidas Guibas, Stan Birchfield
This paper presents a neural implicit representation method for building digital twins of unknown articulated objects from two RGBD scans captured at different articulation states. The method decomposes the problem into two stages: first, reconstructing the object-level shape at each state using a Signed Distance Function (SDF) representation; second, recovering the articulation model, including part segmentation and joint parameters. By explicitly modeling point-level correspondences and exploiting cues from images, 3D reconstructions, and kinematics, the method achieves more accurate and stable results than prior work, and it handles multiple movable parts without relying on priors about object shape or structure.

The method is evaluated on both synthetic and real-world data, including the PARIS dataset and a newly introduced synthetic multi-part dataset. It outperforms existing state-of-the-art methods in accuracy and stability, particularly on complex articulated objects with multiple joints, and it remains robust across different initializations without assuming any category or articulation priors. Using only multi-view scans at two articulation states, it generalizes to complex unknown objects with multiple movable parts. The key contributions are: a framework for reconstructing both the geometry and the articulation model of unknown articulated objects; a decoupling of the problem into object shape reconstruction and articulation model reasoning; and extensive evaluation on synthetic and real-world data showing consistent, stable performance.
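The recovered articulation model describes each movable part by its joint parameters, e.g. an axis, a pivot point, and a motion magnitude for a revolute joint. As an illustrative sketch only (not the paper's implementation, and with hypothetical function and parameter names), the rigid transform such a joint induces on a part's points can be computed with Rodrigues' rotation formula:

```python
import math

def rotate_about_joint(point, axis, pivot, angle):
    """Rotate a 3D point about a revolute joint, defined by a rotation
    axis passing through a pivot, by `angle` radians (Rodrigues' formula)."""
    # Normalize the joint axis.
    norm = math.sqrt(sum(a * a for a in axis))
    k = [a / norm for a in axis]
    # Express the point relative to the pivot.
    v = [p - c for p, c in zip(point, pivot)]
    cos_t, sin_t = math.cos(angle), math.sin(angle)
    dot = sum(ki * vi for ki, vi in zip(k, v))
    # Cross product k x v.
    cross = [k[1] * v[2] - k[2] * v[1],
             k[2] * v[0] - k[0] * v[2],
             k[0] * v[1] - k[1] * v[0]]
    # v_rot = v cos(t) + (k x v) sin(t) + k (k . v)(1 - cos(t))
    rotated = [v[i] * cos_t + cross[i] * sin_t + k[i] * dot * (1 - cos_t)
               for i in range(3)]
    # Translate back into world coordinates.
    return [r + c for r, c in zip(rotated, pivot)]

# Example: a point 1 m from a hinge along x, swung 90 degrees about
# the vertical (z) axis, ends up 1 m along y.
p = rotate_about_joint([1.0, 0.0, 0.0], axis=[0.0, 0.0, 1.0],
                       pivot=[0.0, 0.0, 0.0], angle=math.pi / 2)
```

Given estimated part segmentation, applying such a per-part transform is what lets the method check point-level correspondences between the two articulation states.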