04 February 2020 | Dmitry Ulyanov · Andrea Vedaldi · Victor Lempitsky
Deep Image Prior is a method that uses the structure of a generator network to capture image statistics without any training data. Instead of relying on a prior learned from large datasets, it fits a randomly initialized neural network to a single degraded image, using the network architecture itself as a handcrafted prior. With this approach, inverse problems such as denoising, super-resolution, and inpainting can be solved with results comparable to state-of-the-art learning-based methods.

The network's structure implicitly captures natural image statistics, which makes the method particularly effective when the prior must supplement information lost during degradation. Beyond restoration, the same prior can replace conventional regularizers when inverting deep network activations, yielding improved reconstructions, and it can be used for activation maximization, producing images that strongly activate specific neurons. The authors further extend the approach to flash/no-flash image pair restoration, demonstrating its applicability to more complex tasks.

Evaluated across denoising, super-resolution, and inpainting, the deep image prior achieves high-quality restoration without explicit training. Overall, it offers a powerful alternative to traditional learning-based methods, showing that the network's architecture alone encodes a surprisingly strong prior over natural images.
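The core recipe above can be sketched in a few lines: parameterize the restored image as the output of a randomly initialized convolutional network fed a fixed noise code, then minimize a data term against the degraded observation and stop early. The toy network, image sizes, and iteration count below are illustrative choices, not the paper's exact architecture or schedule:

```python
# Minimal sketch of the Deep Image Prior idea (simplified, not the
# paper's exact setup): fit a randomly initialized conv net to a single
# degraded image; the architecture itself acts as the prior.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy generator; the paper uses a much deeper hourglass encoder-decoder.
net = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1),
)

z = torch.randn(1, 3, 32, 32)                   # fixed random input code
clean = torch.rand(1, 3, 32, 32)                # stand-in "unknown" image
noisy = clean + 0.1 * torch.randn_like(clean)   # observed degraded image

opt = torch.optim.Adam(net.parameters(), lr=1e-2)
losses = []
for step in range(200):                         # early stopping: few iterations
    opt.zero_grad()
    out = net(z)
    loss = ((out - noisy) ** 2).mean()          # data term only; no learned prior
    loss.backward()
    opt.step()
    losses.append(loss.item())

restored = net(z).detach()                      # the restored image estimate
```

The key design choice is that no prior term appears in the loss: regularization comes entirely from the network's structure and from stopping the optimization before it overfits the noise.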