Context Encoding for Semantic Segmentation

23 Mar 2018 | Hang Zhang, Kristin Dana, Jianping Shi, Zhongyue Zhang, Xiaogang Wang, Ambrish Tyagi, Amit Agrawal
This paper introduces the Context Encoding Module (CEM) to improve semantic segmentation by incorporating global contextual information. The CEM selectively highlights class-dependent feature maps, enhancing the network's ability to understand scene context. The module is integrated into the Fully Convolutional Network (FCN) framework and is trained with a Semantic Encoding Loss (SE-loss), which regularizes training by forcing the network to predict which object categories are present in the scene. The proposed EncNet, a pre-trained ResNet augmented with the CEM, achieves state-of-the-art results on PASCAL-Context (51.7% mIoU) and PASCAL VOC 2012 (85.9% mIoU). The CEM also improves shallow networks on image classification, reaching a 3.45% error rate on CIFAR-10. The source code for the complete system is publicly available.
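The channel-wise highlighting described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the real CEM uses a learned encoding layer with codewords to summarize context, whereas here global average pooling stands in for that summary, and the fully connected parameters `w` and `b` are hypothetical placeholders.

```python
import numpy as np

def context_encoding_attention(features, w, b):
    """Sketch of CEM-style channel attention on a (C, H, W) feature map.

    features: backbone feature maps, shape (C, H, W).
    w, b: parameters of a hypothetical fully connected layer that maps the
          pooled context vector to C per-channel scaling factors.
    """
    # Global context summary per channel. The paper learns this with an
    # encoding layer; average pooling is a simplification for illustration.
    context = features.mean(axis=(1, 2))                # shape (C,)

    # Sigmoid gate predicts a scaling factor for each channel, so
    # class-relevant feature maps can be emphasized or suppressed.
    gamma = 1.0 / (1.0 + np.exp(-(w @ context + b)))    # shape (C,)

    # Channel-wise re-weighting of the input feature maps.
    return features * gamma[:, None, None]
```

With zero weights the gate outputs sigmoid(0) = 0.5 for every channel, uniformly halving the features; a trained layer would instead learn scene-dependent scalings.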