24 Feb 2014 | Pierre Sermanet, David Eigen, Xiang Zhang, Michael Mathieu, Rob Fergus, Yann LeCun
The paper "OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks" presents an integrated framework for using Convolutional Networks (ConvNets) for classification, localization, and detection tasks. The authors introduce a novel deep learning approach to localization by learning to predict object boundaries and accumulate bounding boxes to increase detection confidence. They demonstrate that different tasks can be learned simultaneously using a single shared network. The framework won the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and achieved competitive results for detection and classification tasks. Post-competition work further improved the detection task, and a feature extractor called OverFeat was released.
The paper discusses the advantages and disadvantages of ConvNets, highlighting their end-to-end training capabilities and high computational requirements. It shows how a multi-scale, sliding-window approach can be implemented efficiently within a ConvNet, and introduces a method for accumulating predicted bounding boxes to improve detection accuracy.
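To make the sliding-window idea concrete, here is a minimal PyTorch-style sketch (a toy network, not the paper's actual architecture or layer sizes) of the key trick: the fully connected classifier layers are expressed as convolutions, so the same weights applied to an image larger than the training crop produce a spatial grid of class-score vectors, one per window, while overlapping windows share their convolutional computation.

```python
import torch
import torch.nn as nn

# Toy sketch (not the actual OverFeat architecture): the "fully connected"
# classifier layers are written as convolutions, so the same weights can be
# applied to inputs larger than the training crop in a single forward pass.
features = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=7, stride=2), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=3),
)
classifier = nn.Sequential(
    nn.Conv2d(64, 512, kernel_size=6), nn.ReLU(),  # FC layer expressed as a 6x6 conv
    nn.Conv2d(512, 1000, kernel_size=1),           # final FC layer as a 1x1 conv
)

small = torch.randn(1, 3, 42, 42)  # "training-sized" crop for this toy network
large = torch.randn(1, 3, 78, 78)  # larger test image, processed in one pass
print(classifier(features(small)).shape)  # torch.Size([1, 1000, 1, 1]) - one window
print(classifier(features(large)).shape)  # torch.Size([1, 1000, 7, 7]) - a grid of windows
```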
Experiments on the ILSVRC 2012 and 2013 datasets show state-of-the-art results in localization and detection tasks. The classification architecture is similar to the best ILSVRC12 architecture but with improvements in network design and inference steps. The paper details the model design, training process, and feature extraction methods, including multi-scale classification and localization.
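As an illustration of how the per-window scores from multi-scale inference might be reduced to a single image-level prediction, the sketch below uses assumed shapes and a simplified aggregation rule (roughly in the spirit of the paper: a spatial max per class within each scale, then an average across scales):

```python
import torch

def aggregate_scores(score_maps):
    """Combine per-scale class score maps into one image-level prediction.

    score_maps: list of tensors of shape (num_classes, H_s, W_s), one per
    scale (and flip). Simplified rule: take the spatial max per class within
    each scale, then average the resulting class vectors across scales.
    """
    per_scale = [m.flatten(1).max(dim=1).values for m in score_maps]  # each (C,)
    return torch.stack(per_scale).mean(dim=0)                         # (C,)

# Hypothetical output grids from three scales of a 1000-class network.
maps = [torch.randn(1000, h, w) for h, w in [(2, 2), (3, 5), (5, 7)]]
top5 = aggregate_scores(maps).topk(5).indices
```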
The localization task involves predicting an object bounding box at each spatial location and scale, while the detection task requires finding every object in an image and penalizes false positives. The authors combine the individual predictions across locations and scales using a greedy merge strategy to produce the final high-confidence bounding boxes.
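The sketch below gives one possible reading of such a greedy merge in plain Python. The compatibility and merge rules used here (center distance minus intersection area, confidence-weighted coordinate averaging) are illustrative stand-ins for the paper's match_score and box_merge functions, and the stopping threshold is an assumed tunable parameter.

```python
def merge_boxes(boxes, stop_threshold):
    """Greedy bounding-box accumulation, sketched for a single class.

    boxes: list of (x1, y1, x2, y2, confidence) tuples predicted at different
    locations and scales. Repeatedly merge the most compatible pair until no
    remaining pair is compatible enough.
    """
    def intersection(a, b):
        w = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
        h = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
        return w * h

    def match_score(a, b):  # lower is better: nearby centers, large overlap
        dist = (abs((a[0] + a[2]) - (b[0] + b[2]))
                + abs((a[1] + a[3]) - (b[1] + b[3]))) / 2
        return dist - intersection(a, b)

    def merge(a, b):  # confidence-weighted average of coordinates
        wa, wb = a[4], b[4]
        coords = [(wa * ca + wb * cb) / (wa + wb) for ca, cb in zip(a[:4], b[:4])]
        return (*coords, wa + wb)  # merged box keeps the combined confidence

    boxes = list(boxes)
    while len(boxes) > 1:
        score, i, j = min((match_score(a, b), i, j)
                          for i, a in enumerate(boxes)
                          for j, b in enumerate(boxes) if i < j)
        if score > stop_threshold:
            break
        merged = merge(boxes[i], boxes[j])
        boxes = [b for k, b in enumerate(boxes) if k not in (i, j)] + [merged]
    return boxes
```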
The paper concludes by discussing potential improvements, such as backpropagating through the entire network when training the localizer, directly optimizing the intersection-over-union (IoU) criterion rather than an L2 loss, and exploring alternate parameterizations of the bounding boxes.