RePaint: Inpainting using Denoising Diffusion Probabilistic Models

31 Aug 2022 | Andreas Lugmayr, Martin Danelljan, Andres Romero, Fisher Yu, Radu Timofte, Luc Van Gool
RePaint is an inpainting method that uses Denoising Diffusion Probabilistic Models (DDPMs) to fill in missing regions of an image. It leverages a pretrained unconditional DDPM as a generative prior, allowing it to produce high-quality and diverse outputs for any mask type. Unlike traditional approaches that require mask-specific training, RePaint conditions the generation process by sampling the unmasked regions from the given image at every reverse diffusion iteration (see the sketch below). This lets the model generate semantically meaningful content without being restricted to a specific mask distribution.

The method is validated on standard and extreme masks, outperforming state-of-the-art autoregressive and GAN-based approaches for five out of six mask distributions. RePaint produces detailed, high-quality images that are semantically meaningful and visually realistic, and it handles a wide range of mask types, including extreme masks where most of the image is missing. Evaluated on the CelebA-HQ and ImageNet datasets, it shows superior realism and semantic consistency, and users preferred RePaint over competing methods in most cases. Because the underlying model is trained to generate images consistent with the training data distribution, the completed regions remain realistic and semantically meaningful, making RePaint a promising approach for image inpainting.
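The conditioning idea is simple to express in code. Below is a minimal sketch of one conditioned reverse step, assuming a pretrained DDPM object that exposes `q_sample` (forward noising of an image to a given step) and `p_sample` (one learned reverse step); these method names and signatures are hypothetical placeholders, not the authors' code or a real library API.

```python
import torch

@torch.no_grad()
def repaint_step(x_t, t, known_image, mask, ddpm):
    """One conditioned reverse-diffusion step (sketch).

    mask:        1 where pixels are known, 0 where they must be inpainted
    known_image: the original image with missing regions
    ddpm:        a pretrained unconditional DDPM; `q_sample` (forward
                 noising to a given step) and `p_sample` (one learned
                 reverse step) are assumed method names, not a real API
    """
    # Noise the known pixels of the input image forward to step t-1, so
    # they match the noise level of the reverse process at this point.
    x_known = ddpm.q_sample(known_image, t - 1)

    # Take one unconditional reverse step to propose content everywhere.
    x_unknown = ddpm.p_sample(x_t, t)

    # Stitch: keep the given pixels, use the generated ones elsewhere.
    return mask * x_known + (1 - mask) * x_unknown
```

Looping this step from t = T down to t = 1 yields the inpainted image; in the full method, the authors additionally resample (jump back to a noisier step and denoise again) so that the generated region harmonizes better with the known content.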