Saliency Detection by Multi-Context Deep Learning

Rui Zhao, Wanli Ouyang, Hongsheng Li, Xiaogang Wang
This paper proposes a multi-context deep learning framework for salient object detection. The framework jointly models global and local context in a unified architecture: the global context captures saliency over the entire image, while the local context refines saliency predictions in specific regions. To improve training, different pre-training strategies are investigated, and a task-specific pre-training scheme tailored to saliency detection is designed. The framework is flexible and can incorporate contemporary deep models from the ImageNet image-classification challenge, such as AlexNet, Clarifai, OverFeat, and GoogLeNet, whose effectiveness for saliency detection is evaluated.
Extensive experiments on five public datasets show significant improvements over state-of-the-art methods. The multi-context model outperforms the single-context model on all datasets, raising the F-measure by around 5% on the PASCAL dataset, and the task-specific pre-training strategy yields a further significant gain. The ability to combine global and local context enables more accurate and robust saliency detection, particularly in complex scenes with confusing backgrounds.
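The joint global-local modeling can be pictured as two feature streams fused before a final prediction. The following is a hypothetical toy sketch of such late fusion, not the authors' implementation: in the paper both streams are deep CNN branches trained jointly, whereas here each stream is reduced to a plain feature vector and a linear scoring layer.

```python
import math

def multi_context_saliency(global_feat, local_feat, w_global, w_local, bias=0.0):
    """Toy late-fusion sketch (hypothetical): concatenate a global-context
    feature vector (from the whole image) with a local-context feature
    vector (from a window around the candidate region), apply a linear
    scoring layer, and squash with a sigmoid to get a saliency score
    in (0, 1). All weights here are illustrative placeholders."""
    fused = list(global_feat) + list(local_feat)   # concatenation = joint context
    weights = list(w_global) + list(w_local)
    score = sum(f * w for f, w in zip(fused, weights)) + bias
    return 1.0 / (1.0 + math.exp(-score))          # sigmoid -> saliency probability
```

In this toy form, a region that looks salient in both contexts receives a high fused score, while disagreement between the streams pulls the score toward 0.5.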
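For reference, the F-measure reported in such comparisons is the weighted harmonic mean of precision and recall; salient-object benchmarks conventionally set beta-squared to 0.3 to emphasize precision. A minimal sketch of this standard metric (assumed convention, not code from the paper):

```python
def f_measure(pred, gt, beta_sq=0.3):
    """F-measure on binarized saliency maps, flattened to 0/1 label lists.
    beta_sq = 0.3 is the convention in salient-object-detection benchmarks."""
    tp = sum(1 for p, g in zip(pred, gt) if p and g)   # true positives
    precision = tp / max(sum(pred), 1)
    recall = tp / max(sum(gt), 1)
    if precision + recall == 0:
        return 0.0
    return (1 + beta_sq) * precision * recall / (beta_sq * precision + recall)
```

A perfect prediction gives 1.0; because beta_sq < 1, errors that hurt precision lower the score more than errors that hurt recall.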