Taming 3DGS: High-Quality Radiance Fields with Limited Resources

21 Jun 2024 | Saswat Subhajyoti Mallick, Rahul Goel, Bernhard Kerbl, Francisco Vicente Carrasco, Markus Steinberger, Fernando De La Torre
The paper "Taming 3DGS: High-Quality Radiance Fields with Limited Resources" addresses the challenges of training and rendering 3D Gaussian Splatting (3DGS) models on constrained devices. While 3DGS offers fast, high-fidelity rendering, it suffers from excessive memory consumption and unpredictable training times, making it unsuitable for applications that require fixed-size models or run on limited computational resources.

To tackle these issues, the authors propose a guided, purely constructive densification process that controls the number of Gaussians added during training and is designed to converge to an exact, user-specified Gaussian count, reducing redundancy and improving efficiency. The method uses score-based sampling to guide densification, so that new Gaussians are added where they are most needed, such as regions with high positional gradients or other important regions of the scene. In addition, the paper introduces optimizations that speed up training, including a faster solution for gradient computation and attribute updates and an efficient parallelization scheme for backpropagation. Together, these enhancements significantly reduce training time and resource usage.

The evaluation shows that the proposed method matches the quality metrics of 3DGS while reducing model size and training time by 4-5x, and surpasses 3DGS quality when given more generous budgets. The method opens new possibilities for novel-view synthesis in constrained environments, such as mobile and edge devices, by enabling high-quality, efficient 3D scene reconstruction.
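The core idea of budgeted, score-based densification can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it assumes a per-Gaussian importance score (e.g., accumulated positional gradients) is already available, and the function name `sample_densification_targets` is hypothetical.

```python
import numpy as np

def sample_densification_targets(scores, budget, rng=None):
    """Choose which Gaussians to densify under a fixed budget.

    scores: per-Gaussian importance values (assumed here to come from
        accumulated positional gradients or a similar saliency measure).
    budget: exact number of new Gaussians allowed this round, so the
        total model size stays predictable.
    Sampling proportionally to score concentrates new Gaussians in
    under-reconstructed regions.
    """
    rng = np.random.default_rng() if rng is None else rng
    probs = scores / scores.sum()
    # Sample without replacement so each Gaussian is picked at most once.
    return rng.choice(len(scores), size=budget, replace=False, p=probs)

# Toy example: 6 Gaussians, densify exactly 2 of them.
scores = np.array([0.1, 5.0, 0.2, 3.0, 0.05, 0.4])
chosen = sample_densification_targets(scores, budget=2)
```

Because the budget is an explicit parameter, the final Gaussian count is known in advance, which is what makes the approach suitable for fixed-size or memory-constrained deployments.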