11 Jul 2024 | Jiawei Zhang, Jiahe Li, Xiaohan Yu, Lei Huang, Lin Gu, Jin Zheng, and Xiao Bai
CoR-GS: Sparse-View 3D Gaussian Splatting via Co-Regularization
This paper introduces a co-regularization approach to improve sparse-view 3D Gaussian Splatting (3DGS). The key idea is to identify and suppress inaccurate reconstructions by analyzing point disagreement and rendering disagreement between two simultaneously trained 3D Gaussian radiance fields. Point disagreement refers to differences in the positions of the two fields' Gaussians, while rendering disagreement refers to differences in the pixels they render. Both disagreements are negatively correlated with reconstruction accuracy, which lets the method flag inaccurate reconstructions without any ground-truth supervision.
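The two disagreement measures can be sketched as follows. This is a simplified illustration, not the paper's exact formulation: point disagreement is approximated here by a symmetric Chamfer-style nearest-neighbor distance between the two fields' Gaussian centers, and rendering disagreement by a per-pixel L1 difference between the two renders.

```python
import numpy as np

def point_disagreement(pos_a, pos_b):
    """Symmetric mean nearest-neighbor distance between two sets of
    Gaussian centers (a Chamfer-style proxy; the paper's exact metric
    may differ). pos_a: (N, 3), pos_b: (M, 3)."""
    # Pairwise Euclidean distances between every center in A and B.
    d = np.linalg.norm(pos_a[:, None, :] - pos_b[None, :, :], axis=-1)
    # Average each set's distance to its nearest match in the other set.
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

def rendering_disagreement(img_a, img_b):
    """Per-pixel L1 difference between the two fields' renders of the
    same viewpoint. img_a, img_b: (H, W, 3) arrays in [0, 1]."""
    return np.abs(img_a - img_b).mean(axis=-1)  # (H, W) disagreement map
```

Identical fields yield zero disagreement under both measures; as the two reconstructions drift apart in geometry or appearance, the scores grow.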
The proposed method, CoR-GS, consists of two components: co-pruning and pseudo-view co-regularization. Co-pruning identifies and removes Gaussians at inaccurate positions, i.e., those with no close counterpart in the other field. Pseudo-view co-regularization suppresses rendering disagreement on sampled pseudo-views by treating pixels with high rendering disagreement as inaccurately rendered.
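The two components above can be sketched in a few lines. The matching rule, the threshold `tau`, and the mask threshold `eps` are illustrative assumptions here; the paper's actual pruning criterion and co-regularization loss may differ in detail.

```python
import numpy as np

def co_prune(pos_a, pos_b, tau):
    """Co-pruning sketch: keep only Gaussians in field A whose nearest
    neighbor in field B lies within distance tau (mirror the call with
    arguments swapped to prune field B). tau is an assumed threshold."""
    d = np.linalg.norm(pos_a[:, None, :] - pos_b[None, :, :], axis=-1)
    keep = d.min(axis=1) < tau  # A-Gaussians with a close match in B
    return pos_a[keep]

def coreg_mask(img_a, img_b, eps):
    """Pseudo-view co-regularization sketch: mask out pixels whose
    rendering disagreement exceeds eps, so the pseudo-view loss only
    supervises pixels the two fields agree on."""
    return np.abs(img_a - img_b).mean(axis=-1) < eps  # (H, W) bool mask
```

In training, the mask would gate a photometric loss between the two fields' pseudo-view renders, so high-disagreement (likely inaccurate) pixels contribute no gradient.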
Experiments on the LLFF, Mip-NeRF360, DTU, and Blender datasets show that CoR-GS effectively regularizes scene geometry, reconstructs compact representations, and achieves state-of-the-art novel-view synthesis quality under sparse training views. CoR-GS is more efficient than competing methods and achieves better PSNR, SSIM, and AVGE scores. It also remains effective when reconstructing full 360-degree unbounded scenes from sparse training views.