3DMatch: Learning Local Geometric Descriptors from RGB-D Reconstructions
9 Apr 2017 | Andy Zeng, Shuran Song, Matthias Nießner, Matthew Fisher, Jianxiong Xiao, Thomas Funkhouser
3DMatch is a data-driven model that learns a local volumetric patch descriptor for establishing correspondences between partial 3D data, addressing the challenges posed by noisy, low-resolution, and incomplete 3D scan data. To amass training data, the model leverages self-supervised feature learning from correspondences found in existing RGB-D reconstructions. Experiments show that 3DMatch outperforms state-of-the-art methods at matching local geometry in new scenes, and that it generalizes to instance-level object model alignment and mesh surface correspondence. Code, data, benchmarks, and pre-trained models are available online. The paper also discusses related work, the procedure for learning from reconstructions, the network architecture, and evaluation results, including keypoint matching, geometric registration, and generalization to different tasks and spatial scales.
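To make the descriptor-matching idea concrete, here is a minimal sketch, not the authors' exact network: local voxel patches (the paper uses 30×30×30 TSDF crops around keypoints) are encoded by a small 3D ConvNet into descriptor vectors, and correspondences are proposed by L2 nearest-neighbor search in descriptor space. The layer choices, descriptor dimension, and the names PatchEncoder and match_descriptors are illustrative assumptions.

```python
# Hypothetical sketch of correspondence matching with a learned volumetric
# patch descriptor; the architecture is illustrative, not the paper's exact one.
import torch
import torch.nn as nn

class PatchEncoder(nn.Module):
    """Toy 3D ConvNet mapping a voxel patch to a fixed-length descriptor."""
    def __init__(self, desc_dim: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=3), nn.ReLU(),   # 30^3 -> 28^3
            nn.MaxPool3d(2),                              # 28^3 -> 14^3
            nn.Conv3d(32, 64, kernel_size=3), nn.ReLU(),  # 14^3 -> 12^3
            nn.AdaptiveAvgPool3d(1),                      # global pooling
            nn.Flatten(),
            nn.Linear(64, desc_dim),
        )

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (N, 1, D, D, D) voxel grids, e.g. TSDF crops around keypoints
        return self.net(patches)

def match_descriptors(desc_a: torch.Tensor, desc_b: torch.Tensor) -> torch.Tensor:
    """For each descriptor in desc_a, return the index of its L2 nearest
    neighbor in desc_b -- the candidate correspondence."""
    dists = torch.cdist(desc_a, desc_b)  # (Na, Nb) pairwise L2 distances
    return dists.argmin(dim=1)

if __name__ == "__main__":
    encoder = PatchEncoder().eval()
    # Random stand-ins for 30^3 local voxel patches from two partial scans
    patches_a = torch.randn(8, 1, 30, 30, 30)
    patches_b = torch.randn(8, 1, 30, 30, 30)
    with torch.no_grad():
        matches = match_descriptors(encoder(patches_a), encoder(patches_b))
    print(matches)  # candidate match index in scan B for each keypoint in scan A
```

In practice such candidate matches would be filtered with a robust estimator such as RANSAC before computing a rigid alignment, which is the standard geometric-registration pipeline the paper evaluates against.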