GhostNet: More Features from Cheap Operations


13 Mar 2020 | Kai Han, Yunhe Wang, Qi Tian*, Jianyuan Guo, Chunjing Xu, Chang Xu
The paper introduces a novel Ghost module designed to generate more feature maps from cheap operations, addressing the redundancy in the feature maps of successful CNNs. The Ghost module splits an ordinary convolutional layer into two parts: a primary convolution that generates intrinsic feature maps, and a series of cheap linear transformations that create additional "ghost" feature maps from these intrinsic maps. This approach reduces computational cost and parameter count while preserving or improving recognition performance. The paper then proposes the GhostNet architecture, built from Ghost bottlenecks, which achieves higher accuracy (e.g., 75.7% top-1) on the ImageNet ILSVRC-2012 dataset than MobileNetV3 at similar computational cost. Experiments on various benchmarks demonstrate the effectiveness and efficiency of the Ghost module and GhostNet.
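The two-part structure described above can be sketched in plain NumPy. This is a minimal illustration, not the paper's implementation: the shapes, the 1x1 primary convolution, and the choice of a depthwise 3x3 kernel as the cheap linear transformation are assumptions for clarity (the paper treats the cheap operation as any inexpensive per-channel linear map).

```python
import numpy as np

def ghost_module(x, primary_w, cheap_w):
    """Sketch of a Ghost module (shapes assumed for illustration).

    x         : input feature maps, shape (c_in, H, W)
    primary_w : 1x1 primary-convolution weights, shape (m, c_in)
    cheap_w   : per-channel 3x3 kernels for the cheap ops, shape (m, 3, 3)
    Returns 2*m feature maps: m intrinsic maps + m ghost maps.
    """
    c_in, H, W = x.shape
    m = primary_w.shape[0]
    # Primary convolution (1x1): each intrinsic map is a linear mix of inputs.
    intrinsic = np.tensordot(primary_w, x, axes=([1], [0]))  # (m, H, W)
    # Cheap operation: depthwise 3x3 convolution applied to each intrinsic map.
    pad = np.pad(intrinsic, ((0, 0), (1, 1), (1, 1)))
    ghost = np.zeros_like(intrinsic)
    for i in range(m):
        for dy in range(3):
            for dx in range(3):
                ghost[i] += cheap_w[i, dy, dx] * pad[i, dy:dy + H, dx:dx + W]
    # The module output concatenates intrinsic and ghost maps.
    return np.concatenate([intrinsic, ghost], axis=0)  # (2*m, H, W)
```

The point of the design is visible in the cost: the expensive dense mixing across channels happens only for the m intrinsic maps, while the remaining maps come from per-channel 3x3 filters, which are far cheaper than a full convolution producing 2*m maps directly.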