Free-Form Image Inpainting with Gated Convolution

22 Oct 2019 | Jiahui Yu¹, Zhe Lin², Jimei Yang², Xiaohui Shen³, Xin Lu², Thomas Huang¹
The paper presents a generative image inpainting system that uses gated convolutions to handle free-form masks and user guidance. The system is trained on millions of images without additional labeling, addressing the limitation of vanilla convolutions, which treat all input pixels as equally valid. Gated convolutions instead learn a dynamic feature selection mechanism for each channel at each spatial location, improving color consistency and inpainting quality. The paper also introduces SN-PatchGAN, a patch-based GAN loss designed for free-form image inpainting that is simple, fast, and stable to train. The system supports tasks such as removing distracting objects, modifying image layouts, clearing watermarks, and editing faces, and it is interactive, allowing users to provide sparse sketches as guidance. Experimental results on benchmark datasets demonstrate higher-quality and more flexible results than previous methods. The code, demo, and models are available online.
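The core idea of a gated convolution can be sketched in a few lines: the layer runs two convolutions over the same input, passes one through a sigmoid to produce a soft gate in (0, 1) for every channel and spatial location, and multiplies the gate into the activated features. The following NumPy sketch is illustrative only, not the authors' implementation: the naive valid-mode convolution, the function names, and the ReLU activation (the paper uses ELU) are assumptions for brevity, and real layers would use learned weights.

```python
import numpy as np

def conv2d(x, w):
    """Naive valid-mode 2D convolution.
    x: (H, W, C_in) input, w: (kh, kw, C_in, C_out) weights."""
    kh, kw, cin, cout = w.shape
    H, W, _ = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1, cout))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + kh, j:j + kw, :]
            # Contract the patch against the kernel over (kh, kw, C_in)
            out[i, j] = np.tensordot(patch, w, axes=([0, 1, 2], [0, 1, 2]))
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_conv(x, w_feat, w_gate):
    """Gated convolution: features modulated by a learned soft gate.

    Unlike a vanilla convolution, every output value is scaled by a
    per-channel, per-location gate in (0, 1), letting the network
    suppress responses from masked (invalid) regions.
    """
    features = np.maximum(conv2d(x, w_feat), 0.0)  # ReLU here; the paper uses ELU
    gate = sigmoid(conv2d(x, w_gate))              # dynamic soft mask, learned from data
    return features * gate

# Usage: a random 8x8 RGB input with two 3x3 kernels producing 4 channels.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 3))
w_feat = rng.standard_normal((3, 3, 3, 4))
w_gate = rng.standard_normal((3, 3, 3, 4))
y = gated_conv(x, w_feat, w_gate)  # shape (6, 6, 4)
```

Because the gate is computed from the input itself, the selection is dynamic: a partial-convolution-style hard mask is a special case where the gate is forced to 0 or 1, whereas here it is learned end to end.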