24 Mar 2024 | Jiahe Li, Jiawei Zhang, Xiao Bai, Jin Zheng, Xin Ning, Jun Zhou, Lin Gu
DNGaussian optimizes sparse-view 3D Gaussian radiance fields with a global-local depth normalization. The method introduces depth-regularized Gaussian radiance fields for real-time, high-quality few-shot novel view synthesis at low computational cost. To counter the geometry degradation that arises in sparse-view settings, it combines hard and soft depth regularization, which reshapes the scene geometry without compromising color details, drawing depth supervision from a pre-trained monocular depth estimator and pairing it with a neural color renderer. Because coarse monocular depth cues tend to wash out fine structure, a global-local depth normalization rescales depth both over the whole image and within local patches, so the optimization also attends to small local depth changes and recovers more detailed geometry.

On the LLFF, DTU, and Blender benchmarks, DNGaussian achieves quality competitive with or better than state-of-the-art few-shot methods, excelling at capturing fine details, while significantly reducing memory cost, training 25× faster, and rendering over 3000× faster, enabling real-time novel view synthesis. The authors present it as the first work to analyze and address depth regularization for 3D Gaussian Splatting under coarse depth cues.
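To make the global-local normalization idea concrete, here is a minimal PyTorch-style sketch of a combined depth loss between rendered depth and a monocular depth prior. It assumes a simple patch-wise mean/std normalization with a fixed patch size and an L1 error; the function names, patch size, and loss weights are illustrative assumptions, not taken from the paper's released code.

```python
import torch
import torch.nn.functional as F


def local_normalize(depth, patch=8, eps=1e-6):
    """Patch-wise (local) normalization: each non-overlapping window is shifted
    and scaled by its own statistics, so small local depth variations are not
    drowned out by the global depth range. Assumes depth is (B, 1, H, W) with
    H and W divisible by `patch`; the patch size is illustrative."""
    B, _, H, W = depth.shape
    patches = F.unfold(depth, kernel_size=patch, stride=patch)   # (B, patch*patch, N)
    mean = patches.mean(dim=1, keepdim=True)
    std = patches.std(dim=1, keepdim=True)
    patches = (patches - mean) / (std + eps)
    return F.fold(patches, (H, W), kernel_size=patch, stride=patch)


def global_normalize(depth, eps=1e-6):
    """Whole-image normalization, keeping the overall scene layout consistent."""
    return (depth - depth.mean()) / (depth.std() + eps)


def depth_regularization(rendered_depth, mono_depth, w_local=1.0, w_global=1.0):
    """Combined global-local depth loss between the depth rendered from the
    Gaussian field and a coarse monocular depth prior (weights are placeholders)."""
    l_local = F.l1_loss(local_normalize(rendered_depth), local_normalize(mono_depth))
    l_global = F.l1_loss(global_normalize(rendered_depth), global_normalize(mono_depth))
    return w_local * l_local + w_global * l_global


# Usage sketch with dummy tensors in place of real rendered / monocular depth maps.
rendered = torch.rand(1, 1, 64, 64)
mono = torch.rand(1, 1, 64, 64)
loss = depth_regularization(rendered, mono)
```

The point of the two normalizations is complementary: the global term keeps the coarse scene-scale structure aligned with the monocular prior, while the per-patch term removes that dominant scale so the loss also penalizes errors in fine, low-amplitude depth variations.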