13 Mar 2020 | Kai Han, Yunhe Wang, Qi Tian, Jianyuan Guo, Chunjing Xu, Chang Xu
GhostNet: More Features from Cheap Operations
This paper proposes a novel Ghost module to generate more feature maps from cheap operations. The Ghost module is designed to generate ghost feature maps by applying a series of linear transformations to a set of intrinsic feature maps. The Ghost module can be used as a plug-and-play component to upgrade existing convolutional neural networks. Ghost bottlenecks are designed to stack Ghost modules, from which the lightweight GhostNet can be easily established. Experiments conducted on benchmarks demonstrate that the proposed Ghost module is an impressive alternative to convolution layers in baseline models, and that GhostNet can achieve higher recognition performance (e.g. 75.7% top-1 accuracy) than MobileNetV3 with similar computational cost on the ImageNet ILSVRC2012 classification dataset.
The Ghost module splits the original convolutional layer into two parts. The first part involves ordinary convolutions, but their total number is rigorously controlled. Given the intrinsic feature maps from the first part, a series of simple linear operations are then applied to generate more feature maps. Without changing the size of the output feature maps, the overall number of parameters and the computational complexity of the Ghost module are reduced compared with those of a vanilla convolutional layer. Based on the Ghost module, we establish an efficient neural architecture, namely GhostNet. We first replace original convolutional layers in benchmark neural architectures to demonstrate the effectiveness of Ghost modules, and then verify the superiority of our GhostNets on several benchmark visual datasets. Experimental results show that the proposed Ghost module is able to decrease the computational cost of generic convolutional layers while preserving similar recognition performance, and that GhostNets can surpass state-of-the-art efficient deep models such as MobileNetV3 on various tasks with fast inference on mobile devices.
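As a concrete illustration of this two-part split, below is a minimal PyTorch sketch of a Ghost module: a slim primary convolution produces the intrinsic feature maps, and a cheap depthwise convolution generates the ghost maps, which are then concatenated with the intrinsic ones. The class name `GhostModule` and the hyper-parameters `ratio` (how many output maps per intrinsic map) and `dw_size` (kernel size of the cheap operation) are illustrative assumptions, not the authors' exact code.

```python
import math

import torch
import torch.nn as nn


class GhostModule(nn.Module):
    """Minimal sketch: intrinsic maps from a slim ordinary convolution,
    ghost maps from a cheap depthwise convolution, then concatenation."""

    def __init__(self, in_channels, out_channels, kernel_size=1,
                 ratio=2, dw_size=3, stride=1):
        super().__init__()
        self.out_channels = out_channels
        # Intrinsic maps: the ordinary conv's width is rigorously controlled.
        init_channels = math.ceil(out_channels / ratio)
        # Ghost maps: generated from the intrinsic maps by cheap operations.
        new_channels = init_channels * (ratio - 1)

        # Part 1: ordinary convolution with a reduced number of filters.
        self.primary_conv = nn.Sequential(
            nn.Conv2d(in_channels, init_channels, kernel_size, stride,
                      kernel_size // 2, bias=False),
            nn.BatchNorm2d(init_channels),
            nn.ReLU(inplace=True),
        )
        # Part 2: cheap linear operations, realized here as a depthwise
        # convolution (one simple linear transform per intrinsic map).
        self.cheap_operation = nn.Sequential(
            nn.Conv2d(init_channels, new_channels, dw_size, 1,
                      dw_size // 2, groups=init_channels, bias=False),
            nn.BatchNorm2d(new_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        intrinsic = self.primary_conv(x)
        ghosts = self.cheap_operation(intrinsic)
        out = torch.cat([intrinsic, ghosts], dim=1)
        # Trim in case ceil() produced a few extra channels.
        return out[:, :self.out_channels]


# Drop-in replacement for an ordinary 3x3 convolution layer:
x = torch.randn(1, 16, 32, 32)
layer = GhostModule(16, 64, kernel_size=3)
print(layer(x).shape)  # torch.Size([1, 64, 32, 32])
```

With `ratio=2`, roughly half of the output channels come from the ordinary convolution and half from the cheap depthwise operation, which is where the parameter and FLOPs savings over a vanilla convolution of the same output width come from.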