DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation

16 Jan 2019 | Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, Steven Lovegrove
DeepSDF is a learned continuous Signed Distance Function (SDF) representation for shape modeling that enables high-quality shape representation, interpolation, and completion from partial and noisy 3D data. Whereas a classical SDF describes a single shape, DeepSDF represents an entire class of shapes with one network. The representation is a continuous volumetric field: the magnitude of the field at a point gives its distance to the nearest surface, and the sign indicates whether the point lies inside or outside the shape. The surface itself is implicitly defined as the zero-level set of the learned function.

The model conditions the SDF on a latent code, so each shape in the class corresponds to a point in a continuous latent space. Training uses a probabilistic auto-decoder: latent codes are optimized jointly with the decoder weights rather than predicted by an encoder, and at test time a code for a new, possibly partial or noisy, observation is recovered by optimizing it against the trained decoder. This lets DeepSDF produce continuous surfaces with complex topologies and achieve state-of-the-art results in shape reconstruction and completion.

DeepSDF is also memory efficient: about 7.4 MB suffices to represent an entire class of shapes, less than half the footprint of a single uncompressed 512³ 3D bitmap. The model is trained on synthetic data from ShapeNet and evaluated on shape representation, completion, and interpolation tasks. Its robustness to partial and noisy inputs, together with its compact memory usage, makes it a promising approach for 3D shape learning and reconstruction.
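To make the latent-conditioned SDF and the auto-decoder training concrete, below is a minimal PyTorch-style sketch. It is not the authors' released architecture: the layer sizes, clamping threshold, latent regularization weight, and the synthetic batch are illustrative assumptions. Only the overall pattern follows the description above: a decoder maps a per-shape latent code plus a 3D query point to a signed distance, and the latent codes are optimized jointly with the network weights.

```python
import torch
import torch.nn as nn

# Hypothetical latent-conditioned SDF decoder in the spirit of DeepSDF.
# It maps a latent code z (one per shape) concatenated with a 3D query point x
# to a scalar signed distance. Layer sizes are illustrative, not the paper's.
class SDFDecoder(nn.Module):
    def __init__(self, latent_dim=256, hidden_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1), nn.Tanh(),  # SDF values scaled to [-1, 1]
        )

    def forward(self, latent, points):
        # latent: (B, latent_dim), points: (B, 3) -> predicted SDF: (B, 1)
        return self.net(torch.cat([latent, points], dim=-1))


# Clamped L1 loss: clamping to +/- delta focuses capacity near the surface.
def clamped_l1(pred, target, delta=0.1):
    return (pred.clamp(-delta, delta) - target.clamp(-delta, delta)).abs().mean()


# Auto-decoder training sketch (one illustrative step on a fake batch):
# each training shape owns a latent code, optimized jointly with the decoder.
num_shapes, latent_dim = 1000, 256
decoder = SDFDecoder(latent_dim)
latents = nn.Embedding(num_shapes, latent_dim)  # one code per training shape
nn.init.normal_(latents.weight, std=0.01)
optim = torch.optim.Adam(
    list(decoder.parameters()) + list(latents.parameters()), lr=1e-4
)

shape_idx = torch.randint(0, num_shapes, (4096,))   # which shape each sample belongs to
points = torch.rand(4096, 3) * 2 - 1                # query points in [-1, 1]^3
sdf_gt = torch.rand(4096, 1) * 2 - 1                # placeholder SDF samples

pred = decoder(latents(shape_idx), points)
loss = clamped_l1(pred, sdf_gt) + 1e-4 * latents(shape_idx).pow(2).mean()  # L2 prior on codes
loss.backward()
optim.step()
```

The small L2 penalty on the codes stands in for the Gaussian prior in the probabilistic auto-decoder formulation; at test time the decoder weights would be frozen and only a fresh latent code optimized against the observed (possibly partial) SDF samples, which is what enables shape completion.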