24 Mar 2024 | Cheng Peng*, Yutao Tang*, Yifan Zhou, Nengyu Wang, Xijun Liu, Deming Li, and Rama Chellappa
BAGS is a novel method for scene reconstruction and novel view synthesis that addresses image blur. It is designed to robustly optimize 3D Gaussians by introducing additional modeling capacity in 2D, which allows it to model away 3D-inconsistent blur from the scene. BAGS is robust against various types of blur, including motion blur, defocus blur, and downscaling blur, and achieves photorealistic renderings under challenging blur conditions and imaging geometry while significantly improving upon existing approaches.

BAGS consists of two parts: a Blur Proposal Network (BPN) and a coarse-to-fine optimization scheme. BPN models blur by estimating a per-pixel convolution kernel h(x) for every pixel x, where K is the kernel size; the rasterization efficiency of Gaussian Splatting makes it feasible to model h(x) as a full convolution kernel. BPN considers spatial, color, and depth variations of the scene to maximize modeling capacity. It also estimates a per-pixel scalar quality-assessing mask that indicates regions where blur occurs and controls where blur modeling takes place.

The coarse-to-fine kernel optimization scheme gradually increases the training image resolution and the estimated kernel size with additional neural network layers. This improves the stability of the joint optimization given a sparse point cloud initialization, making it fast and helping it avoid sub-optimal solutions.

BAGS is evaluated on three image blur scenarios and shows significant quantitative and visual improvements over current SoTA methods. It is also compared with other methods in terms of performance and efficiency.
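The per-pixel blur model above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: it assumes a single-channel image, a precomputed (H, W, K, K) tensor of per-pixel kernels, and a scalar mask in [0, 1] that interpolates between the blurred and sharp values; the function name and signature are hypothetical.

```python
import numpy as np

def apply_per_pixel_blur(image, kernels, mask):
    """Blend a sharp rendering with per-pixel blurred values.

    image   : (H, W) sharp rendering (single channel for simplicity).
    kernels : (H, W, K, K) per-pixel convolution kernels, each summing to 1.
    mask    : (H, W) scalar quality mask in [0, 1]; 1 = fully apply blur.
    """
    H, W = image.shape
    K = kernels.shape[-1]
    pad = K // 2
    padded = np.pad(image, pad, mode="edge")
    blurred = np.empty_like(image)
    for y in range(H):
        for x in range(W):
            # Weighted sum of the KxK neighborhood with this pixel's kernel.
            patch = padded[y:y + K, x:x + K]
            blurred[y, x] = np.sum(patch * kernels[y, x])
    # The mask interpolates per pixel between blurred and sharp values.
    return mask * blurred + (1.0 - mask) * image
```

In practice one would vectorize this (e.g. with an unfold-style operation) and let the network predict `kernels` and `mask` jointly with the 3D Gaussians; the double loop is kept here only for readability.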
The results show that BAGS achieves better performance and efficiency than other methods.
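The coarse-to-fine optimization described earlier can be sketched as a simple stage schedule. This is an illustrative sketch only: the stage count, downscale factors, and kernel sizes below are assumptions, not the paper's exact hyperparameters.

```python
def coarse_to_fine_schedule(step, total_steps, full_res=(800, 800),
                            kernel_sizes=(5, 9, 17), scales=(4, 2, 1)):
    """Pick the training resolution and kernel size for the current step.

    Training is split into len(scales) equal stages: early stages use
    downscaled images and small kernels, later stages move toward full
    resolution and larger kernels (with more network layers in the paper).
    All concrete numbers here are illustrative assumptions.
    """
    n_stages = len(scales)
    stage = min(step * n_stages // total_steps, n_stages - 1)
    h, w = full_res
    s = scales[stage]
    return (h // s, w // s), kernel_sizes[stage]
```

Starting coarse keeps the joint kernel/scene optimization stable while the point cloud is still sparse, then hands finer detail to later stages once the geometry has densified.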