Free-Form Image Inpainting with Gated Convolution


22 Oct 2019 | Jiahui Yu¹, Zhe Lin², Jimei Yang², Xiaohui Shen³, Xin Lu², Thomas Huang¹
¹University of Illinois at Urbana-Champaign ²Adobe Research ³ByteDance AI Lab
We present a free-form image inpainting system based on gated convolution that completes images with arbitrary-shaped masks and optional user guidance. The system is trained end-to-end on millions of images without additional labeling. Conventional convolutions treat all input pixels as equally valid, which is a poor fit for images with holes; the proposed gated convolution instead learns a dynamic feature selection mechanism for each channel at each spatial location. This lets the model handle free-form masks and user guidance such as sketches, yielding higher-quality and more flexible results than previous methods. We also introduce SN-PatchGAN, a patch-based GAN loss that is simple, fast, and stable in training. The network is fully convolutional, supports different input resolutions, and can be tested on free-form holes at arbitrary locations. We further provide an automatic algorithm for generating free-form training masks and a method for generating user-guided sketches. On benchmark datasets including Places2 and CelebA-HQ, both automatic and user-guided inpainting results are more visually pleasing and realistic than those of previous methods, and user studies confirm that our outputs are judged more natural and accurate. The system supports a wide range of editing tasks, including removing distracting objects, modifying image layouts, clearing watermarks, and editing faces.
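To make the gating mechanism concrete, below is a minimal PyTorch sketch of a gated convolution layer: one convolution branch produces features while a parallel branch produces a per-channel, per-pixel sigmoid gate, matching the dynamic feature selection described above. The ELU activation and the two-branch layout follow common published implementations; they are a sketch, not guaranteed to match the authors' exact configuration.

```python
import torch
import torch.nn as nn

class GatedConv2d(nn.Module):
    """Gated convolution: a learned soft gate, computed per channel and
    per spatial location, modulates the features so the layer can
    down-weight invalid (hole) regions of a free-form mask."""

    def __init__(self, in_channels, out_channels, kernel_size,
                 stride=1, padding=0, dilation=1):
        super().__init__()
        # One branch produces features, the other produces the gate.
        self.feature = nn.Conv2d(in_channels, out_channels, kernel_size,
                                 stride, padding, dilation)
        self.gate = nn.Conv2d(in_channels, out_channels, kernel_size,
                              stride, padding, dilation)
        self.activation = nn.ELU()

    def forward(self, x):
        features = self.activation(self.feature(x))
        gate = torch.sigmoid(self.gate(x))  # soft validity mask in [0, 1]
        return features * gate
```

In use, layers like this would replace the vanilla convolutions of an encoder-decoder inpainting network, so the gate can learn to suppress responses inside the hole while passing context through.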
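The SN-PatchGAN loss can likewise be sketched. Assuming a fully convolutional discriminator built from spectral-normalized convolutions, with a hinge GAN loss applied at every location of its 3-D output feature map (so each receptive-field "patch" is scored separately), a minimal version looks like the following. The channel widths, layer count, and the 5-channel input (RGB plus mask plus sketch) are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

class SNPatchDiscriminator(nn.Module):
    """Fully convolutional discriminator: spectral-normalized strided
    convolutions map the input to a feature map of patch scores."""

    def __init__(self, in_channels=5, base=64):  # assumed: RGB + mask + sketch
        super().__init__()
        layers = []
        channels = [in_channels, base, base * 2, base * 4,
                    base * 4, base * 4, base * 4]
        for c_in, c_out in zip(channels[:-1], channels[1:]):
            layers += [spectral_norm(nn.Conv2d(c_in, c_out, 5,
                                               stride=2, padding=2)),
                       nn.LeakyReLU(0.2)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)  # (N, C, H', W') grid of patch scores

def sn_patchgan_d_loss(real_scores, fake_scores):
    # Hinge loss averaged over all spatial locations and channels.
    return (torch.relu(1.0 - real_scores).mean()
            + torch.relu(1.0 + fake_scores).mean())

def sn_patchgan_g_loss(fake_scores):
    return -fake_scores.mean()
```

Because the loss is averaged over the whole score map rather than a single scalar, each patch location provides its own gradient signal, which is one plausible reason the loss trains quickly and stably.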
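Finally, the automatic free-form mask generation mentioned above can be approximated by drawing random brush strokes: chained thick line segments with rounded joints. This is a hedged sketch; the function name and all sampling ranges (stroke count, vertex count, brush width, segment length) are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np
import cv2

def random_free_form_mask(height, width, max_strokes=4, max_vertices=8,
                          max_length=60.0, brush_range=(10, 30)):
    """Sample a binary hole mask (1 = hole) by drawing random brush
    strokes: chained thick line segments with rounded joints."""
    mask = np.zeros((height, width), np.float32)
    for _ in range(np.random.randint(1, max_strokes + 1)):
        x = int(np.random.randint(width))
        y = int(np.random.randint(height))
        brush = int(np.random.randint(*brush_range))
        for _ in range(np.random.randint(1, max_vertices + 1)):
            angle = np.random.uniform(0, 2 * np.pi)
            length = np.random.uniform(10, max_length)
            nx = int(np.clip(x + length * np.cos(angle), 0, width - 1))
            ny = int(np.clip(y + length * np.sin(angle), 0, height - 1))
            cv2.line(mask, (x, y), (nx, ny), 1.0, brush)
            cv2.circle(mask, (nx, ny), brush // 2, 1.0, -1)  # round the joint
            x, y = nx, ny
    return mask
```

Masks sampled this way mimic the irregular, user-drawn holes the model is expected to see at test time, which lets the network train on arbitrary hole shapes without any manual annotation.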