Deblurring 3D Gaussian Splatting

27 May 2024 | Byeonghyeon Lee, Howoong Lee, Xiangyu Sun, Usman Ali, and Eunbyung Park
This paper proposes Deblurring 3D Gaussian Splatting, a novel real-time deblurring framework for 3D Gaussian Splatting (3D-GS). The method addresses blurriness in training images, which can arise from defocus, object motion, or camera shake. Previous deblurring approaches rely on volumetric rendering, which is computationally expensive and unsuitable for real-time applications. In contrast, 3D-GS uses differentiable rasterization, enabling real-time rendering; however, it is sensitive to blurry training images because the Gaussian parameters directly model the scene.

To overcome this, the proposed method introduces a small Multi-Layer Perceptron (MLP) that manipulates the covariance of each 3D Gaussian to model scene blurriness. The MLP adjusts the rotation and scaling factors of the Gaussians, effectively simulating the blurring effects. This allows the framework to reconstruct fine and sharp details from blurry images while maintaining real-time rendering, achieving over 800 FPS with state-of-the-art rendering quality on benchmark datasets.

The framework also addresses sparse point clouds, which occur when the input images are blurry. To compensate, the method adds extra points with valid color features to the point cloud using K-nearest-neighbor interpolation. Additionally, it prunes Gaussians based on their position so that more Gaussians remain on the far plane, improving the reconstruction of distant details.

The proposed method is compared with existing deblurring approaches, including Deblur-NeRF, Sharp-NeRF, DP-NeRF, and PDRF, and achieves superior rendering quality and speed while maintaining real-time capabilities. It is also effective for both defocus blur and camera motion blur, since the Gaussian parameters can be adjusted to model each type of blurriness.
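The covariance manipulation described above can be sketched in a few lines. In 3D-GS, each Gaussian's covariance is built from a rotation quaternion and per-axis scaling factors; the paper's MLP predicts small per-Gaussian multipliers for both, so that the transformed (larger) Gaussians reproduce the blur during training while the original parameters are rendered at test time. The multiplier values below are illustrative assumptions, not outputs of the actual MLP:

```python
import numpy as np

def quat_to_rot(q):
    # normalize quaternion (w, x, y, z) and convert to a rotation matrix
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def covariance(q, s):
    # 3D-GS covariance: Sigma = (R S)(R S)^T with S = diag(s)
    M = quat_to_rot(q) @ np.diag(s)
    return M @ M.T

def blurred_covariance(q, s, delta_q, delta_s):
    # Sketch of the Deblurring 3D-GS idea: element-wise multipliers
    # (delta_q, delta_s), predicted by a small MLP in the paper, scale the
    # rotation quaternion and the scaling factors to model scene blurriness.
    return covariance(q * delta_q, s * delta_s)

# toy example with hypothetical MLP outputs
q = np.array([1.0, 0.0, 0.0, 0.0])        # identity rotation
s = np.array([0.1, 0.2, 0.3])             # per-axis scales
delta_q = np.array([1.0, 1.0, 1.0, 1.0])  # rotation unchanged
delta_s = np.array([1.5, 1.5, 1.5])       # enlarged Gaussians simulate blur

sharp = covariance(q, s)
blurry = blurred_covariance(q, s, delta_q, delta_s)
```

With a uniform 1.5x scale multiplier and no rotation change, the blurred covariance is exactly 2.25x the sharp one, which is the "fatter Gaussian" effect the MLP learns to apply only during training.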
In conclusion, Deblurring 3D Gaussian Splatting is a novel and effective approach for real-time deblurring of 3D scenes. By manipulating the covariance of 3D Gaussians, the method can reconstruct fine and sharp details from blurry images while maintaining real-time rendering capabilities. The method is evaluated on benchmark datasets and shows state-of-the-art performance in terms of rendering quality and speed.
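The K-nearest-neighbor point augmentation mentioned earlier can be sketched as follows. Blurry inputs yield sparse SfM point clouds, so extra points are added and assigned colors interpolated from their K nearest existing neighbors. The function name, the uniform sampling inside the bounding box, and the inverse-distance weighting are illustrative assumptions rather than the paper's exact procedure:

```python
import numpy as np

def densify_point_cloud(points, colors, n_new, k=3, seed=0):
    # Sample extra points inside the cloud's bounding box and give each one
    # a color interpolated (inverse-distance weighted) from its K nearest
    # existing neighbors, so the new points carry valid color features.
    rng = np.random.default_rng(seed)
    lo, hi = points.min(axis=0), points.max(axis=0)
    new_pts = rng.uniform(lo, hi, size=(n_new, 3))

    new_cols = np.empty((n_new, 3))
    for i, p in enumerate(new_pts):
        d = np.linalg.norm(points - p, axis=1)   # distances to all points
        idx = np.argsort(d)[:k]                  # K nearest neighbors
        w = 1.0 / (d[idx] + 1e-8)                # inverse-distance weights
        new_cols[i] = (w[:, None] * colors[idx]).sum(0) / w.sum()

    return np.vstack([points, new_pts]), np.vstack([colors, new_cols])
```

Because each new color is a convex combination of existing neighbor colors, the augmented cloud never introduces out-of-range color values; a spatial index (e.g., a k-d tree) would replace the brute-force distance search in practice.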