Context Encoders: Feature Learning by Inpainting

21 Nov 2016 | Deepak Pathak, Philipp Krähenbühl, Jeff Donahue, Trevor Darrell, Alexei A. Efros
The paper introduces Context Encoders, an unsupervised visual feature learning algorithm driven by context-based pixel prediction. Context Encoders are convolutional neural networks (CNNs) trained to generate the missing parts of an image from its surrounding context. The authors propose a joint loss that combines a reconstruction loss with an adversarial loss to improve the quality of the learned features: the reconstruction loss captures the overall structure of the missing region, while the adversarial loss pushes the predictions to be realistic and coherent with the context. The method is evaluated on classification, object detection, and semantic segmentation, where it is competitive with state-of-the-art unsupervised methods.

Additionally, the learned features are useful for semantic inpainting, either as a standalone method or as an initialization for non-parametric methods. The paper also discusses related work in unsupervised and self-supervised learning, image generation, and inpainting, highlighting the unique contributions of Context Encoders.
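The joint loss described above is a weighted sum of a masked L2 reconstruction term (computed only over the missing region) and a generator-side adversarial term. The sketch below illustrates that combination in NumPy; the function name, the weight values, and the `eps` stabilizer are choices for this example rather than code from the paper, and `d_pred` stands in for a discriminator's output probabilities on the inpainted result.

```python
import numpy as np

def joint_loss(pred, target, mask, d_pred,
               lambda_rec=0.999, lambda_adv=0.001, eps=1e-8):
    """Illustrative joint inpainting loss (assumed helper, not the paper's code).

    pred, target : arrays of the same shape (generated and ground-truth pixels)
    mask         : 1 inside the missing region, 0 elsewhere
    d_pred       : discriminator probabilities that the inpainted result is real
    """
    # Reconstruction term: L2 distance restricted to the dropped region.
    rec = np.mean((mask * (pred - target)) ** 2)
    # Generator-side adversarial term: encourage D to judge the fill-in as real.
    adv = -np.mean(np.log(d_pred + eps))
    return lambda_rec * rec + lambda_adv * adv
```

A perfect reconstruction that also fools the discriminator (`d_pred` near 1) drives the loss toward zero, while blurry or implausible fills raise one or both terms; the heavy weighting toward reconstruction keeps training stable while the adversarial term sharpens the output.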