25 Mar 2024 | Mulin Yu, Tao Lu, Linning Xu, Lihan Jiang, Yuanbo Xiangli, and Bo Dai
GSDF: 3DGS Meets SDF for Improved Rendering and Reconstruction
This paper introduces GSDF, a novel dual-branch architecture that combines the benefits of a flexible and efficient 3D Gaussian Splatting (3DGS) representation with neural Signed Distance Fields (SDF). The core idea is to leverage and enhance the strengths of each branch while alleviating their limitations through mutual guidance and joint supervision. The authors show that this design unlocks the potential for more accurate and detailed surface reconstruction, while at the same time benefiting 3DGS rendering with structures that are better aligned with the underlying geometry.
The paper frames the long-standing challenges of rendering and reconstruction in computer vision and computer graphics, and presents a solution that combines the strengths of 3DGS and SDF. Evaluated on diverse scenes, the method shows significant improvements in both rendering and reconstruction quality. The paper also examines the limitations of existing methods and how the dual-branch framework of GSDF addresses them.
The proposed method consists of two branches: a GS-branch for rendering and an SDF-branch for surface reconstruction. The GS-branch renders depth maps that guide the ray sampling of the SDF-branch, while the SDF-branch guides the density control of the GS-branch, growing Gaussian primitives in near-surface regions and pruning them in regions far from the surface. The two branches are also aligned on geometric properties (depth and normal) to encourage more coherent physical alignment between Gaussian primitives and surfaces.
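The SDF-guided density control can be sketched roughly as follows. This is an illustrative toy version, not the paper's implementation: the thresholds, function names, and the unit-sphere SDF are assumptions made for demonstration.

```python
import numpy as np

def sdf_guided_density_control(centers, sdf_fn, grow_tau=0.01, prune_tau=0.1):
    """Illustrative SDF-guided density control for Gaussian primitives.

    centers   : (N, 3) array of Gaussian centers
    sdf_fn    : callable mapping (N, 3) points to (N,) signed distances
    grow_tau  : grow primitives whose |SDF| is below this (near-surface)
    prune_tau : prune primitives whose |SDF| exceeds this (far from surface)
    """
    d = np.abs(sdf_fn(centers))      # unsigned distance to the surface
    grow_mask = d < grow_tau         # densify near the surface
    prune_mask = d > prune_tau       # remove floaters far from it
    keep_mask = ~prune_mask
    return grow_mask, keep_mask

# Toy example: random centers tested against a unit-sphere SDF.
rng = np.random.default_rng(0)
centers = rng.uniform(-1.5, 1.5, size=(1000, 3))
sphere_sdf = lambda p: np.linalg.norm(p, axis=-1) - 1.0
grow, keep = sdf_guided_density_control(centers, sphere_sdf)
```

Because the grow threshold is stricter than the prune threshold, every primitive flagged for growth is necessarily kept, so the two decisions never conflict.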
The paper also details the training strategy and loss design. The GS-branch is supervised by rendering losses between the rendered RGB images and the ground truth. The SDF-branch is supervised by a rendering loss together with Eikonal and curvature regularization terms. The mutual geometry supervision comprises depth and normal consistency losses applied to both branches.
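A minimal sketch of the regularization and consistency terms described above, under common formulations (the exact weights and loss forms used in the paper are not given here, so these are assumptions): the Eikonal penalty drives SDF gradients toward unit norm, depth consistency is an L1 difference between the two branches' depths, and normal consistency is one minus cosine similarity.

```python
import numpy as np

def eikonal_loss(grads):
    """Eikonal penalty: SDF gradients should have unit norm (assumed L2 form)."""
    return np.mean((np.linalg.norm(grads, axis=-1) - 1.0) ** 2)

def depth_consistency_loss(depth_gs, depth_sdf):
    """Mutual depth supervision between the two branches (assumed L1 form)."""
    return np.mean(np.abs(depth_gs - depth_sdf))

def normal_consistency_loss(n_gs, n_sdf):
    """Mutual normal supervision: 1 - cosine similarity of unit normals."""
    cos = np.sum(n_gs * n_sdf, axis=-1)
    return np.mean(1.0 - cos)

# Sanity check: unit-norm gradients give zero Eikonal loss, and identical
# depth/normal predictions give zero consistency loss.
grads = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
depth = np.ones(4)
normals = np.tile(np.array([0.0, 0.0, 1.0]), (4, 1))
total = (eikonal_loss(grads)
         + depth_consistency_loss(depth, depth)
         + normal_consistency_loss(normals, normals))
```

In practice these terms would be weighted and summed with the per-branch rendering losses to form the joint training objective.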
The paper presents extensive experiments that demonstrate the effectiveness of the proposed method. The results show that GSDF achieves superior rendering and reconstruction quality compared to existing methods. The method is also efficient and can be adapted to work with other existing or future models. The paper concludes that the proposed method has the potential to achieve enhanced rendering and reconstruction quality while maintaining efficiency in both training and inference.