VastGaussian: Vast 3D Gaussians for Large Scene Reconstruction

27 Feb 2024 | Jiaqi Lin, Zhihao Li, Xiao Tang, Jianzhuang Liu, Shiyong Liu, Jiayue Liu, Yangdi Lu, Xiaofei Wu, Songcen Xu, Youliang Yan, Wenming Yang
VastGaussian is a novel method for high-quality large-scale scene reconstruction and real-time rendering based on 3D Gaussian Splatting. It addresses the challenges of scaling 3D Gaussian Splatting to large scenes, including limited video memory, long optimization time, and appearance variations. The method introduces a progressive data partitioning strategy to divide a large scene into multiple cells, which are then optimized independently and merged into a complete scene. This approach allows for efficient optimization and seamless merging, reducing the number of 3D Gaussians needed and improving reconstruction quality. Additionally, a decoupled appearance modeling technique is introduced to suppress floaters caused by appearance variations, enabling consistent rendering across different views.

The method outperforms existing NeRF-based methods in terms of reconstruction quality and rendering speed, achieving state-of-the-art results on multiple large scene datasets. The approach enables fast optimization and high-fidelity real-time rendering, making it suitable for applications such as autonomous driving, aerial surveying, and virtual reality.
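The partitioning idea can be illustrated with a minimal sketch: cameras are binned into a ground-plane grid, and each cell's bounds are expanded so neighboring cells overlap, which is what allows independently optimized cells to merge seamlessly. This is a simplified, hypothetical version (the function name, grid size, and `expand` fraction are assumptions); the paper's full progressive strategy also assigns scene points to cells and applies visibility-based culling, which is omitted here.

```python
import numpy as np

def partition_cameras(cam_positions, grid=(2, 2), expand=0.2):
    """Sketch of camera-to-cell assignment with overlapping boundaries.

    cam_positions: (N, 3) array of camera centers.
    grid: number of cells along each ground-plane axis (hypothetical).
    expand: fraction of the cell size to pad on each side so that
            adjacent cells share cameras near their borders.
    Returns a list of index arrays, one per cell.
    """
    xy = np.asarray(cam_positions)[:, :2]          # project onto the ground plane
    lo, hi = xy.min(axis=0), xy.max(axis=0)
    size = (hi - lo) / np.array(grid, dtype=float)
    cells = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            c_lo = lo + size * np.array([i, j])    # cell's base bounds
            c_hi = c_lo + size
            pad = size * expand                    # expand for overlap
            mask = np.all((xy >= c_lo - pad) & (xy <= c_hi + pad), axis=1)
            cells.append(np.nonzero(mask)[0])      # camera indices in this cell
    return cells
```

Because every camera inside the overall bounding box falls in at least one expanded cell, each cell can be trained independently and the trained Gaussians later merged over the shared regions.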
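Decoupled appearance modeling can likewise be sketched: a per-image embedding is combined with a downsampled rendering, a tiny network predicts a pixel-wise adjustment map, and the adjusted image is what gets supervised during optimization. Since the adjustment branch is discarded at inference, appearance differences between photos (exposure, lighting) are absorbed by the embedding instead of being explained by floaters in the scene. The shapes, weight names, and the 1x1-conv network below are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def apply_appearance(rendered, embedding, w1, w2, scale=4):
    """Minimal sketch of decoupled appearance modeling (hypothetical shapes).

    rendered:  (H, W, 3) rendered image.
    embedding: per-image appearance vector (learned, one per training photo).
    w1, w2:    weights of a tiny per-pixel MLP (stands in for the paper's CNN).
    scale:     downsampling factor for the adjustment map.
    """
    H, W, _ = rendered.shape
    small = rendered[::scale, ::scale]                  # downsample the rendering
    h, w, _ = small.shape
    emb = np.broadcast_to(embedding, (h, w, embedding.size))
    feat = np.concatenate([small, emb], axis=-1)        # fuse image + embedding
    hidden = np.maximum(feat @ w1, 0.0)                 # 1x1 conv + ReLU
    mult = 1.0 + np.tanh(hidden @ w2)                   # per-pixel multiplier map
    # nearest-neighbor upsample back to full resolution, then apply
    mult_full = mult.repeat(scale, axis=0).repeat(scale, axis=1)[:H, :W]
    return rendered * mult_full
```

During training, the loss is computed on the adjusted output so the embedding soaks up per-photo appearance variation; at render time only the raw `rendered` image is used, keeping views consistent.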