Searching for MobileNetV3

20 Nov 2019 | Andrew Howard, Weijun Wang, Mark Sandler, Yukun Zhu, Grace Chu, Liang-Chieh Chen, Vijay Vasudevan, Bo Chen, Quoc V. Le, Mingxing Tan, Hartwig Adam
The paper introduces MobileNetV3, the next generation of mobile neural networks, built by combining complementary search techniques with novel architecture design. MobileNetV3 is tuned for mobile devices through hardware-aware neural architecture search (NAS) combined with the NetAdapt algorithm, yielding two new models: MobileNetV3-Large and MobileNetV3-Small, targeted at high- and low-resource use cases respectively. These models are then adapted to object detection and semantic segmentation; for segmentation, the paper proposes a new efficient decoder, Lite Reduced Atrous Spatial Pyramid Pooling (LR-ASPP), which achieves state-of-the-art results.

Compared to MobileNetV2, MobileNetV3-Large is 3.2% more accurate on ImageNet classification while reducing latency by 20%, and MobileNetV3-Small is 6.6% more accurate at comparable latency. On COCO detection, MobileNetV3-Large is 25% faster than MobileNetV2, and for Cityscapes segmentation, MobileNetV3-Large with LR-ASPP is 34% faster than MobileNetV2 with R-ASPP.

Beyond the architectures themselves, the paper explores how automated search algorithms and manual network design can complement one another. It introduces efficient nonlinearities such as h-swish and modifies computationally expensive layers to reduce cost. Evaluated on classification, detection, and segmentation, both models outperform their predecessors in accuracy and efficiency, with MobileNetV3-Small proving particularly effective in low-resource scenarios.
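As a rough illustration of the h-swish nonlinearity mentioned above (a minimal NumPy sketch, not the paper's reference implementation), h-swish replaces the sigmoid inside swish with a piecewise-linear approximation built from ReLU6, which is cheap to compute on mobile hardware:

```python
import numpy as np

def relu6(x):
    # ReLU6 clamps activations to the range [0, 6]
    return np.minimum(np.maximum(x, 0.0), 6.0)

def h_swish(x):
    # h-swish(x) = x * ReLU6(x + 3) / 6
    # A piecewise-linear approximation of swish (x * sigmoid(x))
    # that avoids computing a sigmoid
    return x * relu6(x + 3.0) / 6.0
```

For inputs below -3 the output is exactly 0, above +3 it is exactly x, and in between it smoothly interpolates, closely tracking swish while using only additions, multiplications, and clamps.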
The models are designed to be efficient and compatible with mobile platforms, making them suitable for a wide range of applications.