ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design

30 Jul 2018 | Ningning Ma*1,2, Xiangyu Zhang*1, Hai-Tao Zheng2, Jian Sun1
ShuffleNet V2 is an efficient convolutional neural network (CNN) architecture that improves both speed and accuracy. The paper argues that efficient CNN design should be guided by direct metrics, such as actual speed measured on the target platform, rather than by indirect metrics such as FLOPs. Through controlled experiments on two platforms (GPU and ARM), the authors derive four practical guidelines for efficient network design:

1. Equal input and output channel widths minimize memory access cost (MAC).
2. Excessive group convolution increases MAC.
3. Network fragmentation reduces the degree of parallelism.
4. Element-wise operations are non-negligible.

Based on these guidelines, the authors design ShuffleNet V2, which is more efficient and more accurate than previous networks. It uses a channel split operation to maintain a large number of equally wide channels without dense convolution or too many groups; it also reduces element-wise operations and enables feature reuse. On the ImageNet 2012 classification dataset, ShuffleNet V2 achieves a superior speed-accuracy trade-off, outperforming networks such as MobileNet V2, ShuffleNet V1, and Xception, and it also performs well on COCO object detection. The paper concludes that efficient CNN architecture design should consider direct metrics like speed, not just FLOPs, and presents both practical guidelines and a novel architecture, ShuffleNet V2, validated through comprehensive experiments.
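The first guideline can be checked with simple arithmetic. For a 1x1 convolution on an h x w feature map with c1 input and c2 output channels, the paper gives MAC = hw(c1 + c2) + c1*c2; holding FLOPs (proportional to hw*c1*c2) fixed, MAC is smallest when c1 = c2. A minimal sketch (the function name `mac_1x1` is ours, not from the paper):

```python
def mac_1x1(h, w, c1, c2):
    """Memory access cost of a 1x1 conv: input map + output map + weights."""
    return h * w * (c1 + c2) + c1 * c2

h, w = 56, 56
# Three channel configurations with identical FLOPs (c1 * c2 = 16384 each):
for c1, c2 in [(128, 128), (64, 256), (32, 512)]:
    print(f"c1={c1:3d} c2={c2:3d}  MAC={mac_1x1(h, w, c1, c2)}")
# The balanced configuration (128, 128) yields the smallest MAC.
```

The more unbalanced the channel ratio, the larger the memory traffic for the same compute, which is why the paper recommends keeping channel widths equal.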
The authors hope this work will inspire future research in platform-aware and practical network architecture design.
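The channel split and channel shuffle operations described above can be sketched with plain tensor reshaping. The following is a minimal NumPy illustration, not the paper's implementation: `channel_split` divides the channels into an identity branch and a branch to be transformed, and `channel_shuffle` interleaves channels across groups so information mixes between branches after concatenation.

```python
import numpy as np

def channel_split(x):
    """Split channels into two halves (identity branch, transform branch)."""
    c = x.shape[1] // 2
    return x[:, :c], x[:, c:]

def channel_shuffle(x, groups):
    """Interleave channels across groups: reshape, swap axes, flatten back."""
    n, c, h, w = x.shape
    x = x.reshape(n, groups, c // groups, h, w)
    x = x.transpose(0, 2, 1, 3, 4)  # swap group axis and per-group channel axis
    return x.reshape(n, c, h, w)

x = np.arange(2 * 8 * 4 * 4, dtype=np.float32).reshape(2, 8, 4, 4)
left, right = channel_split(x)                     # two branches of 4 channels
merged = np.concatenate([left, right], axis=1)     # concat after (omitted) convs
mixed = channel_shuffle(merged, groups=2)          # channels reordered 0,4,1,5,...
```

Because split, concatenation, and shuffle are pure memory reorderings, they avoid both the dense convolutions and the element-wise additions that the guidelines identify as costly.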