FBNet: Hardware-Aware Efficient ConvNet Design via Differentiable Neural Architecture Search

24 May 2019 | Bichen Wu, Xiaoliang Dai, Peizhao Zhang, Yanghan Wang, Fei Sun, Yiming Wu, Yuandong Tian, Peter Vajda, Yangqing Jia, Kurt Keutzer
The paper introduces a differentiable neural architecture search (DNAS) framework and uses it to design FBNets (Facebook-Berkeley-Nets), a family of efficient ConvNets for mobile devices. DNAS relaxes the search space so that architectures can be optimized with gradient-based methods over a stochastic supernet, avoiding the need to train each candidate architecture separately. This cuts the computational cost dramatically compared with earlier approaches, which often rely on reinforcement learning and require training thousands of architectures: the FBNet search takes only 216 GPU-hours, an estimated 420x less than MnasNet.

The resulting FBNets achieve state-of-the-art accuracy-efficiency trade-offs, outperforming both manually and automatically designed models. FBNet-B, for instance, reaches 74.1% top-1 accuracy with 295M FLOPs and 23.1 ms latency on a Samsung S8 phone, making it 2.4x smaller and 1.5x faster than MobileNetV2-1.3. FBNets also adapt better to different input resolutions and channel scalings, achieving 1.5% to 6.4% higher accuracy than correspondingly scaled MobileNetV2 models. Because the search optimizes latency measured on the target device, FBNets can be specialized per device: a model searched for the iPhone X achieves a 1.4x speedup on it compared to a Samsung-optimized model. The paper supports these claims with detailed experimental results and comparisons with state-of-the-art models, highlighting the effectiveness and efficiency of the proposed DNAS framework.
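To make the search mechanism concrete, the sketch below illustrates the core DNAS idea in PyTorch: each supernet layer holds several candidate blocks, a Gumbel-Softmax over learnable architecture parameters mixes their outputs, and the layer's expected latency (from a per-block lookup table benchmarked on the target device) enters the loss as CE · α·log(LAT)^β, as in the paper. This is a minimal sketch under stated assumptions, not the authors' implementation; the candidate blocks, latency values, and the hyperparameters alpha and beta here are illustrative.

```python
# Minimal DNAS sketch (assumed PyTorch code, not the authors' release).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CandidateLayer(nn.Module):
    """One supernet layer: a Gumbel-Softmax-weighted mix of candidate blocks.

    The architecture parameters `theta` are trained by gradient descent
    alongside the block weights, which is what makes the search differentiable.
    """
    def __init__(self, blocks, latencies_us):
        super().__init__()
        self.blocks = nn.ModuleList(blocks)
        # Per-block latency (microseconds) from a lookup table benchmarked
        # on the target device; the values used below are made up.
        self.register_buffer("latencies", torch.tensor(latencies_us))
        self.theta = nn.Parameter(torch.zeros(len(blocks)))

    def forward(self, x, tau=5.0):
        # Soft one-hot sample over blocks; tau is annealed toward 0 during
        # the search so the mix converges to a single block per layer.
        m = F.gumbel_softmax(self.theta, tau=tau)
        out = sum(w * blk(x) for w, blk in zip(m, self.blocks))
        lat = (m * self.latencies).sum()  # expected latency of this layer
        return out, lat

# Toy search step with three hypothetical candidate blocks for one layer.
layer = CandidateLayer(
    blocks=[
        nn.Conv2d(8, 8, 3, padding=1),   # 3x3 conv candidate
        nn.Conv2d(8, 8, 5, padding=2),   # 5x5 conv candidate
        nn.Identity(),                   # skip-connection candidate
    ],
    latencies_us=[120.0, 230.0, 10.0],
)
head = nn.Linear(8, 10)
opt = torch.optim.SGD(list(layer.parameters()) + list(head.parameters()), lr=0.1)

x = torch.randn(4, 8, 16, 16)
y = torch.randint(0, 10, (4,))
feats, lat = layer(x)
logits = head(feats.mean(dim=(2, 3)))    # global average pooling
alpha, beta = 0.2, 0.6                   # assumed trade-off hyperparameters
# Latency-aware objective from the paper: CE(a, w) * alpha * log(LAT(a))^beta.
loss = F.cross_entropy(logits, y) * alpha * torch.log(lat).pow(beta)
loss.backward()                          # gradients flow to weights AND theta
opt.step()
```

After the search converges, architectures are sampled from the learned block distribution and trained from scratch, which is how the final FBNet models are obtained in the paper.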