The paper introduces DOGS (Distributed-Oriented Gaussian Splatting), a method that accelerates the training of 3D Gaussian Splatting (3DGS) for large-scale 3D reconstruction. 3DGS is a novel view synthesis (NVS) technique that represents a scene as a set of anisotropic 3D Gaussians, offering faster rendering and higher visual fidelity than earlier methods such as NeRF. However, 3DGS has been limited by its high memory requirements and long training times on large scenes.
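As background, the original 3DGS paper keeps each Gaussian's covariance valid during optimization by factoring it into a rotation and per-axis scales. A minimal sketch of that parameterization follows; the function name and argument layout are illustrative, not taken from any particular codebase:

```python
import numpy as np

def covariance_from_params(quat, log_scale):
    """Covariance of one anisotropic 3D Gaussian, following the
    factorization used in the original 3DGS paper:
        Sigma = R S S^T R^T,
    where R comes from a unit quaternion and S is a diagonal matrix of
    per-axis scales (kept positive via an exp activation). The helper
    name and inputs here are illustrative only."""
    w, x, y, z = quat / np.linalg.norm(quat)  # normalize the quaternion
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    S = np.diag(np.exp(log_scale))
    return R @ S @ S.T @ R.T  # symmetric positive semi-definite by construction
```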
DOGS addresses these challenges by splitting the scene into multiple overlapping blocks and training each block in a distributed fashion across compute nodes. The method uses the Alternating Direction Method of Multipliers (ADMM) to keep the 3D Gaussians shared between blocks consistent. During training, a global 3DGS model is maintained on a master node while local 3DGS models are trained on slave nodes; the local models are periodically averaged to update the global model, which enforces both convergence and consistency. This distributed approach reduces training time by more than 6x while preserving state-of-the-art rendering quality.
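A minimal sketch of such a consensus round, written in the style of standard consensus ADMM (Boyd et al., 2011), is shown below. The function names and the flat-vector parameter layout are hypothetical simplifications, not the DOGS API, and unlike DOGS this toy version treats every Gaussian as shared by every block rather than only those in block overlaps:

```python
import numpy as np

def consensus_round(local_params, duals):
    """One global consensus ADMM round over the block-local copies
    x_k of the shared Gaussian parameters and their scaled duals u_k."""
    # z-update (master node): average the local copies plus scaled duals.
    z = np.mean([x + u for x, u in zip(local_params, duals)], axis=0)
    # u-update (slave nodes): accumulate each block's consensus residual.
    duals = [u + x - z for x, u in zip(local_params, duals)]
    return z, duals

def local_penalty(x_k, z, u_k, rho):
    """Proximal term a block adds to its rendering loss during local
    training: (rho / 2) * ||x_k - z + u_k||^2. Minimizing it pulls the
    block's shared Gaussians toward the global model."""
    return 0.5 * rho * np.sum((x_k - z + u_k) ** 2)
```

Averaging the local copies together with the scaled duals is not arbitrary: for equally weighted blocks it is the closed-form minimizer of the z-step of the consensus objective, which is what makes the simple average a principled way to update the global model.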
The paper evaluates DOGS on large-scale urban datasets, demonstrating significant improvements in both training efficiency and rendering quality over competing methods. It also employs adaptive penalty parameters and over-relaxation to speed up convergence. Ablation studies confirm the effectiveness of the consensus step and of balanced scene splitting. The code for DOGS is publicly available, making it a valuable contribution to the field of 3D reconstruction.
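The adaptive penalty and over-relaxation mentioned above are standard ADMM accelerations. A sketch of the usual residual-balancing rule and the alpha-blend follows; the constants are the textbook defaults from Boyd et al. (2011), not necessarily the values used in DOGS:

```python
def update_penalty(rho, primal_res, dual_res, mu=10.0, tau=2.0):
    """Residual-balancing penalty update: grow rho when the primal
    residual dominates, shrink it when the dual residual dominates."""
    if primal_res > mu * dual_res:
        return rho * tau
    if dual_res > mu * primal_res:
        return rho / tau
    return rho

def over_relax(x_k, z_prev, alpha=1.6):
    """Over-relaxation: blend a block's iterate with the previous
    consensus variable before the z- and u-updates; alpha in
    [1.5, 1.8] typically accelerates convergence."""
    return alpha * x_k + (1.0 - alpha) * z_prev
```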