2009 | Anat Levin, Yair Weiss, Fredo Durand, William T. Freeman
The paper "Understanding and Evaluating Blind Deconvolution Algorithms" by Levin et al. (2009) addresses the challenge of recovering a sharp image from a blurred one when the blur kernel is unknown. The authors analyze and evaluate recent blind deconvolution algorithms both theoretically and experimentally. They explain why the naive Maximum A Posteriori (MAP) approach fails, often favoring the no-blur explanation, and demonstrate that MAP estimation of the kernel alone, with the latent image marginalized out, can be well-constrained and accurately recover the true blur, because the kernel's support is much smaller than the image.
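A small numerical sketch of why naive MAP favors the no-blur explanation (the signal, kernel width, and prior exponent here are illustrative choices, not taken from the paper's experiments): under a sparse gradient prior \( p(x) \propto \exp(-\sum_i |\nabla x_i|^\alpha) \) with \( \alpha < 1 \), blurring a textured signal shrinks its gradients and typically lowers the total prior cost, so the pair (blurred image, delta kernel) can score better than (sharp image, true kernel).

```python
import numpy as np

# Sparse gradient prior cost: sum |grad x|^alpha with alpha < 1,
# the family commonly used to model natural-image gradient statistics.
def prior_cost(x, alpha=0.5):
    return np.sum(np.abs(np.diff(x)) ** alpha)

rng = np.random.default_rng(0)
sharp = rng.standard_normal(512)        # stand-in for a textured 1-D signal
kernel = np.ones(5) / 5.0               # box blur of width 5
blurred = np.convolve(sharp, kernel, mode="same")

# Blurring shrinks gradient magnitudes; with alpha < 1 the total cost drops,
# so the "no-blur" interpretation of the blurred signal wins the comparison.
print(prior_cost(sharp) > prior_cost(blurred))
```

This is exactly the pathology the paper attributes to joint MAP over image and kernel: the sparse prior, though a good model of sharp images on average, assigns the blurred signal a lower cost.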
The paper highlights the strong asymmetry between the dimensionalities of the unknown image \( x \) and the blur kernel \( k \): the dimensionality of \( x \) grows with the image size, while the support of \( k \) remains fixed and comparatively small. This asymmetry is what allows \( k \) to be estimated reliably with a MAP approach, even when the dimensionality of \( x \) is high.
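To make the asymmetry concrete (the sizes below are hypothetical, chosen only for illustration): even a modest image supplies orders of magnitude more measurements than there are kernel unknowns, so once \( x \) is marginalized out, estimating \( k \) alone is heavily over-constrained.

```python
# Illustrative count of unknowns vs. measurements in blind deconvolution.
image_h, image_w = 255, 255     # observed blurred image y (hypothetical size)
kernel_size = 13                # blur kernel support (hypothetical size)

measurements = image_h * image_w        # known pixel values in y
image_unknowns = image_h * image_w      # unknowns in x grow with the image
kernel_unknowns = kernel_size ** 2      # 169 unknowns in k, fixed and small

# Joint MAP over (x, k): unknowns roughly match the measurements.
# MAP over k alone: ~65k measurements constrain only 169 unknowns.
print(measurements / kernel_unknowns)   # hundreds of measurements per unknown
```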
The authors also collect motion-blurred data with ground-truth kernels and compare recent algorithms under identical settings. Their evaluation suggests that the variational Bayes approach of Fergus et al. (2006) outperforms all existing alternatives. Additionally, they find that the shift-invariant blur assumption made by most algorithms is often violated: realistic camera shake includes in-plane rotations.
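A quick geometric sketch (my own illustration, not the paper's model) of why in-plane rotation breaks shift-invariance: a rotation by a small angle \( \theta \) about the image center displaces a pixel at radius \( r \) along an arc of length roughly \( r\theta \), so the effective blur length depends on where in the image the pixel sits.

```python
import numpy as np

# Streak length left by a small in-plane rotation theta (radians) at radius r:
# each point moves ~ r * theta, so the blur kernel varies across the image.
def blur_length(r, theta):
    return r * theta

theta = np.deg2rad(0.5)           # hypothetical 0.5-degree shake rotation
center_r = 0.0
corner_r = np.hypot(320, 240)     # corner of a 640x480 image, center origin

print(blur_length(center_r, theta))   # no blur at the rotation center
print(blur_length(corner_r, theta))   # several pixels of blur at the corner
```

A single spatially uniform kernel cannot describe both locations at once, which is why algorithms built on the shift-invariant model degrade under rotational shake.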
The paper concludes by discussing the key challenges and components that make blind deconvolution possible, emphasizing that the choice of estimator matters more than the choice of prior. It suggests that future research focus on better estimators for existing priors and on blur models that relax the spatially uniform blur assumption.