22 Aug 2019 | Jia-Xing Zhao, Jiang-Jiang Liu, Deng-Ping Fan, Yang Cao, Ju-Feng Yang, Ming-Ming Cheng*
The paper "EGNet: Edge Guidance Network for Salient Object Detection" by Jia-Xing Zhao et al. addresses the coarse object boundaries produced by fully convolutional network (FCN)-based salient object detection (SOD) methods. To sharpen these boundaries, the authors propose an Edge Guidance Network (EGNet) that explicitly models the complementarity between salient edge information and salient object information. EGNet consists of three main steps:
1. **Progressive Fusion of Salient Object Features**: The network extracts multi-resolution salient object features using a U-Net architecture.
2. **Integration of Local and Global Information for Salient Edge Features**: A non-local module integrates local edge information from Conv2-2 and global location information to generate salient edge features.
3. **One-to-One Guidance Module**: This module couples the salient edge features with salient object features at various resolutions to leverage their complementary information.
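The one-to-one guidance idea in step 3 can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function name `one_to_one_guidance` and the nearest-neighbor resize are assumptions for clarity, whereas EGNet actually fuses features through learned convolutional modules.

```python
import numpy as np

def resize_nearest(x, h, w):
    # Nearest-neighbor resize of a 2-D feature map to (h, w).
    rows = np.arange(h) * x.shape[0] // h
    cols = np.arange(w) * x.shape[1] // w
    return x[rows][:, cols]

def one_to_one_guidance(edge_feat, object_feats):
    # Hypothetical sketch: couple one salient edge map with each
    # multi-resolution salient object map by resizing the edge map
    # to the object map's resolution and fusing element-wise.
    return [obj + resize_nearest(edge_feat, *obj.shape)
            for obj in object_feats]
```

In the sketch, the same edge feature guides every resolution level (hence "one-to-one"), mirroring how EGNet pairs the single set of salient edge features with each set of salient object features.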
The proposed method is evaluated on six widely used datasets and outperforms state-of-the-art methods, achieving the best results under three evaluation metrics: F-measure, mean absolute error (MAE), and S-measure. The source code for EGNet is available at http://mmcheng.net/egnet/.
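Two of the three metrics are straightforward to compute; the sketch below shows MAE and the thresholded F-measure on a predicted saliency map versus a binary ground-truth mask. The fixed threshold and the conventional beta^2 = 0.3 weighting are common SOD practice, assumed here for illustration (S-measure, which evaluates structural similarity, is omitted as it is more involved).

```python
import numpy as np

def mae(pred, gt):
    # Mean absolute error between a saliency map in [0, 1]
    # and a binary ground-truth mask.
    return float(np.mean(np.abs(pred - gt)))

def f_measure(pred, gt, thresh=0.5, beta2=0.3):
    # F-measure at a single threshold, with the beta^2 = 0.3
    # weighting commonly used in salient object detection.
    binary = pred >= thresh
    tp = np.logical_and(binary, gt == 1).sum()
    precision = tp / max(binary.sum(), 1)
    recall = tp / max((gt == 1).sum(), 1)
    if precision + recall == 0:
        return 0.0
    return (1 + beta2) * precision * recall / (beta2 * precision + recall)
```

Benchmarks often report the maximum F-measure over many thresholds rather than a single fixed one; this sketch uses one threshold to keep the computation transparent.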