Pixel-GS: Density Control with Pixel-aware Gradient for 3D Gaussian Splatting

22 Mar 2024 | Zheng Zhang, Wenbo Hu, Yixing Lao, Tong He, Hengshuang Zhao
Pixel-GS is a method for 3D Gaussian Splatting (3DGS) that improves the reconstruction of scenes with an insufficient number of initial points by introducing pixel-aware gradient control for densification. Vanilla 3DGS depends heavily on the quality of the initial point cloud: in regions where points are sparse, densification fails to trigger, producing blurring and needle-like artifacts.

Pixel-GS addresses this by changing the growth condition of Gaussians. Instead of averaging the view-space positional gradient uniformly over the views in which a Gaussian is visible, it weights each view's gradient by the number of pixels the Gaussian covers in that view, as sketched below. Large Gaussians visible from many viewpoints therefore accumulate larger average gradients and are more readily densified, promoting point growth precisely in areas with insufficient initial points. In addition, Pixel-GS scales the gradient field according to each Gaussian's distance to the camera, suppressing the growth of "floaters" near the camera (see the second sketch below).

Extensive experiments on the Mip-NeRF 360 and Tanks & Temples datasets show that Pixel-GS achieves state-of-the-art rendering quality while maintaining real-time performance, outperforming existing methods both quantitatively and qualitatively. The method is also more robust to the quality of the initial point cloud, as demonstrated by experiments in which a large proportion of the initial points was discarded. Overall, Pixel-GS improves the accuracy and detail of reconstructions, particularly in areas with sparse initial points, while preserving efficient rendering speeds.
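To make the mechanism concrete, here is a minimal PyTorch-style sketch of pixel-weighted gradient accumulation for the densification statistic. The names (`accumulate_pixel_aware`, `densification_mask`) and the exact tensor layout are illustrative assumptions, not the authors' code; the threshold default mirrors the densification gradient threshold (0.0002) used in the original 3DGS implementation.

```python
import torch

def accumulate_pixel_aware(grad_accum, pixel_accum, grad_norms, pixel_counts):
    """Accumulate the densification statistic for one rendered view.

    grad_norms:   (N,) L2 norm of each Gaussian's view-space positional gradient
    pixel_counts: (N,) number of pixels each Gaussian covered in this view
    Vanilla 3DGS adds grad_norms and counts views; here each view's gradient
    is weighted by how many pixels the Gaussian actually touched.
    """
    grad_accum += pixel_counts * grad_norms
    pixel_accum += pixel_counts
    return grad_accum, pixel_accum

def densification_mask(grad_accum, pixel_accum, tau=0.0002):
    """Gaussians whose pixel-weighted average gradient exceeds tau become
    candidates for splitting/cloning."""
    avg_grad = grad_accum / pixel_accum.clamp(min=1.0)
    return avg_grad > tau
```

Because a Gaussian covering many pixels contributes proportionally more to its own average, large under-reconstructed Gaussians cross the threshold sooner than they would under uniform per-view averaging.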
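The floater-suppression idea can be sketched the same way: scale each Gaussian's gradient contribution by its normalized distance to the camera before accumulation, so that near-camera Gaussians densify less aggressively. The clamped linear scaling and all names below are assumptions for illustration; the paper's exact scaling function may differ.

```python
import torch

def depth_scaled_grads(grad_norms, cam_depths, scene_extent):
    """Shrink densification gradients for Gaussians close to the camera.

    cam_depths:   (N,) each Gaussian's depth in the current camera frame
    scene_extent: scalar normalization (e.g. the scene's spatial radius)
    Gaussians beyond scene_extent keep their full gradient (scale clamped
    to 1); near-camera Gaussians are damped, discouraging "floaters".
    The clamped linear form is an assumed, illustrative choice.
    """
    scale = (cam_depths / scene_extent).clamp(max=1.0)
    return grad_norms * scale
```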