10 Jun 2024 | Danpeng Chen, Hai Li, Weicai Ye, Yifan Wang, Weijian Xie, Shangjin Zhai, Nan Wang, Haomin Liu, Hujun Bao, Guofeng Zhang
The paper introduces PGSR (Planar-based Gaussian Splatting for Surface Reconstruction), a novel method for efficient and high-fidelity surface reconstruction from multi-view RGB images. PGSR addresses a key limitation of 3D Gaussian Splatting (3DGS), whose unstructured Gaussians render well but align poorly with true surfaces, by introducing an unbiased depth rendering method and incorporating single-view and multi-view geometric regularizations. The key contributions include:
1. **Unbiased Depth Rendering**: PGSR flattens 3D Gaussians into planes and renders plane-to-camera distance and normal maps, which are then converted into depth maps. Because each pixel's depth comes from intersecting its viewing ray with the rendered plane, rather than from alpha-blended expected depth, the estimate is unbiased and aligns well with the actual surface.
2. **Geometric Regularization**: Single-view and multi-view regularizations are introduced to optimize the plane parameters and enforce global geometric consistency. Single-view regularization compares rendered normals against normals computed from neighboring depths under a local plane assumption, while multi-view regularization enforces photometric and geometric consistency across views.
3. **Exposure Compensation**: A camera exposure compensation model is proposed to handle large illumination variations, enhancing reconstruction accuracy.
4. **Performance**: PGSR achieves fast training and rendering while maintaining high-fidelity rendering and accurate geometric reconstruction. Experiments on the MipNeRF360, DTU, and Tanks and Temples (TnT) datasets demonstrate superior performance compared to state-of-the-art methods.
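The depth conversion in point 1 can be sketched as follows. Under the local plane assumption, a pixel ray `K^{-1} [u, v, 1]^T` meets a plane with camera-frame unit normal `n` and plane-to-camera distance `d` at depth `d / (n · K^{-1} p)`. This is a minimal numpy sketch of that conversion; the function name and array layouts are assumptions, not the paper's implementation.

```python
import numpy as np

def unbiased_depth_from_plane_maps(distance_map, normal_map, K):
    """Convert rendered plane-distance and normal maps to a depth map.

    distance_map: (H, W) plane-to-camera-center distances
    normal_map:   (H, W, 3) unit normals in camera coordinates
    K:            (3, 3) camera intrinsics
    """
    H, W = distance_map.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Homogeneous pixel coordinates [u, v, 1] for every pixel.
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(np.float64)
    # Back-projected ray directions K^{-1} p, shape (H, W, 3).
    rays = pix @ np.linalg.inv(K).T
    # Per-pixel dot product n . (K^{-1} p); guard against division by zero.
    denom = np.sum(normal_map * rays, axis=-1)
    denom = np.where(np.abs(denom) < 1e-8, 1e-8, denom)
    # Depth of the ray-plane intersection: d / (n . K^{-1} p).
    return distance_map / denom
```

For a fronto-parallel plane (normal along the optical axis) this reduces to a constant depth equal to the plane distance, which is a quick sanity check of the formula.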
The paper also discusses limitations and future work, including the need to improve reconstruction in regions with limited viewpoints and on reflective surfaces. Overall, PGSR provides a robust and efficient solution for surface reconstruction in computer vision applications.
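The exposure compensation in point 3 can be illustrated with a per-image affine model: scale and shift the rendered image before comparing it with the captured photo, so illumination changes between views do not corrupt the geometry. In the paper the per-image coefficients are learned during optimization; this sketch instead fits them per frame in closed form via least squares, and the function names are assumptions.

```python
import numpy as np

def fit_affine_exposure(rendered, captured):
    """Least-squares fit of a per-image exposure (gain g, offset b),
    minimizing || g * rendered + b - captured ||^2 over all pixels."""
    x = rendered.ravel()
    A = np.stack([x, np.ones_like(x)], axis=1)
    (g, b), *_ = np.linalg.lstsq(A, captured.ravel(), rcond=None)
    return g, b

def compensate(rendered, g, b):
    """Apply the affine exposure model before the photometric loss."""
    return g * rendered + b
```

When the captured image really is an affine re-exposure of the rendering, the fit recovers the gain and offset exactly, and the compensated rendering matches the photo.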