10 Jun 2024 | Danpeng Chen, Hai Li, Weicai Ye, Yifan Wang, Weijian Xie, Shangjin Zhai, Nan Wang, Haomin Liu, Hujun Bao, Guofeng Zhang
PGSR: Planar-based Gaussian Splatting for Efficient and High-Fidelity Surface Reconstruction
This paper proposes a novel method, PGSR, for efficient and high-fidelity surface reconstruction from multi-view RGB images without geometric priors. PGSR improves upon 3D Gaussian Splatting (3DGS) by introducing an unbiased depth rendering method that directly renders the distance from the camera to the Gaussian plane together with the corresponding normal map, then divides the distance by the dot product of the normal and the viewing ray to obtain unbiased depth. This ensures accurate geometric reconstruction and multi-view consistency. PGSR also incorporates single-view and multi-view geometric regularization, as well as a camera exposure compensation model to handle large illumination variations. Experiments on indoor and outdoor scenes show that PGSR achieves fast training and rendering while maintaining high-fidelity rendering and geometric reconstruction, outperforming 3DGS-based and NeRF-based methods. PGSR's code is publicly available, and more information can be found on the project page (https://zju3dv.github.io/pgsr/).
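To make the exposure compensation idea concrete: one common form is a learnable per-image affine correction applied to the rendered color before the RGB loss, so that brightness differences across views are absorbed by the exposure model rather than baked into the geometry. The sketch below assumes this affine form and PyTorch tensor conventions; the parameterization and names are illustrative, not PGSR's exact implementation.

```python
import torch
import torch.nn as nn

class ExposureCompensation(nn.Module):
    """Per-image affine exposure model (illustrative sketch, not PGSR's
    exact parameterization): corrected = exp(log_scale) * rendered + offset."""
    def __init__(self, num_images: int):
        super().__init__()
        # One (scale, offset) pair per training image, initialized to identity.
        self.log_scale = nn.Parameter(torch.zeros(num_images))
        self.offset = nn.Parameter(torch.zeros(num_images))

    def forward(self, rendered: torch.Tensor, image_idx: int) -> torch.Tensor:
        # Apply the exposure correction before comparing with the ground truth.
        return torch.exp(self.log_scale[image_idx]) * rendered + self.offset[image_idx]

# Hypothetical use inside a training loop:
# compensated = exposure(rendered_rgb, idx)
# loss_rgb = (compensated - gt_rgb).abs().mean()
```

Because the affine parameters are optimized jointly with the Gaussians, illumination variation between images is explained by the exposure model instead of corrupting the reconstructed surface.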
PGSR compresses 3D Gaussians into flat planes and renders distance and normal maps, which are then transformed into unbiased depth maps. Single-view and multi-view geometric regularization ensure high precision in global geometry, and an exposure-compensated RGB loss further improves reconstruction accuracy. PGSR achieves state-of-the-art reconstruction accuracy while retaining the high rendering quality and speed of 3DGS; training is nearly 100 times faster than state-of-the-art NeRF-based methods.
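The flattening step can be sketched as a simple regularizer that drives each Gaussian's smallest scale axis toward zero, so the Gaussian degenerates into a plane whose normal is that shortest axis. A minimal sketch under assumed tensor layout (not taken from the PGSR codebase):

```python
import torch

def flattening_loss(scales: torch.Tensor) -> torch.Tensor:
    """Encourage each 3D Gaussian to collapse into a plane.

    scales: (N, 3) tensor of per-Gaussian axis lengths. Driving the smallest
    axis toward zero flattens the Gaussian; the direction of that axis then
    serves as the plane normal used for the distance and normal maps.
    """
    min_scale, _ = scales.min(dim=1)
    return min_scale.abs().mean()
```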
PGSR's unbiased depth rendering method ensures that the rendered depth is consistent with the actual surface. The normal and distance maps of the plane are rendered first and then converted into a depth map: the plane parameters of the Gaussians along each ray are alpha-blended to obtain per-pixel plane parameters, and the depth is defined by the intersection of the viewing ray with that plane, which depends on each Gaussian's position and rotation. Because the distance map is divided by the dot product of the blended normal and the ray direction (rather than alpha-blending per-Gaussian depths directly), the resulting depth is unbiased and falls exactly on the estimated plane.
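Concretely, for a pixel with homogeneous coordinate p, the depth is the ray-plane intersection d = D / (nᵀ K⁻¹ p), where D and n are the alpha-blended camera-to-plane distance and normal. A minimal sketch of this conversion, with shapes and names assumed for illustration rather than taken from PGSR's API:

```python
import torch

def unbiased_depth(distance_map: torch.Tensor,
                   normal_map: torch.Tensor,
                   K_inv: torch.Tensor,
                   pixel_coords: torch.Tensor) -> torch.Tensor:
    """Convert alpha-blended plane distance and normal maps into depth.

    distance_map: (H, W) accumulated camera-to-plane distances
    normal_map:   (H, W, 3) accumulated plane normals (camera frame)
    K_inv:        (3, 3) inverse camera intrinsics
    pixel_coords: (H, W, 3) homogeneous pixel coordinates (u, v, 1)

    Depth is the ray-plane intersection d = D / (n^T K^{-1} p), so the
    recovered 3D point always lies on the blended plane.
    """
    rays = pixel_coords @ K_inv.T                # (H, W, 3) ray directions
    denom = (normal_map * rays).sum(dim=-1)      # n^T K^{-1} p, per pixel
    # Normals are assumed oriented so the denominator is positive; the clamp
    # only guards against division by zero at grazing angles.
    return distance_map / denom.clamp(min=1e-8)
```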
PGSR also introduces single-view and multi-view geometric regularization terms to optimize these geometric parameters, achieving globally consistent, high-precision geometric reconstruction. The method is validated on the MipNeRF360, DTU, and Tanks and Temples datasets, where PGSR achieves the highest reconstruction accuracy and rendering quality among current state-of-the-art methods. Thanks to its high-precision depth estimation, the method also enables immersive, high-fidelity virtual reality applications.
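One way to realize the single-view term, consistent with the description above, is to require that normals derived from local depth gradients agree with the rendered normal map, which encourages locally planar geometry. The sketch below reuses the quantities from `unbiased_depth`; function and variable names are hypothetical, and PGSR's actual loss additionally weights and combines terms in ways not shown here.

```python
import torch
import torch.nn.functional as F

def single_view_normal_loss(depth: torch.Tensor,
                            normal_map: torch.Tensor,
                            K_inv: torch.Tensor,
                            pixel_coords: torch.Tensor) -> torch.Tensor:
    """Single-view geometric regularization (sketch): normals implied by
    local depth gradients should agree with the rendered normal map."""
    # Back-project pixels to 3D points in the camera frame.
    points = depth.unsqueeze(-1) * (pixel_coords @ K_inv.T)      # (H, W, 3)
    # Finite-difference tangent vectors from neighboring points.
    dx = points[:, 1:, :] - points[:, :-1, :]                    # (H, W-1, 3)
    dy = points[1:, :, :] - points[:-1, :, :]                    # (H-1, W, 3)
    # Cross product of tangents gives a depth-derived normal per pixel.
    n_depth = F.normalize(torch.cross(dx[:-1], dy[:, :-1], dim=-1), dim=-1)
    # Penalize disagreement with the rendered normals; the absolute value
    # makes the loss robust to the sign ambiguity of the cross product.
    agreement = (n_depth * normal_map[:-1, :-1]).sum(dim=-1).abs()
    return (1.0 - agreement).mean()
```

The multi-view term follows the same spirit across cameras: patches are warped between a reference view and its neighbors via the per-pixel plane (a homography), and photometric and geometric inconsistencies between the views are penalized.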