Auto-DeepLab: Hierarchical Neural Architecture Search for Semantic Image Segmentation

6 Apr 2019 | Chenxi Liu, Liang-Chieh Chen, Florian Schroff, Hartwig Adam, Wei Hua, Alan Yuille, Li Fei-Fei
Auto-DeepLab is a hierarchical neural architecture search (NAS) method for semantic image segmentation. The paper proposes a two-level search space that covers both the cell-level structure (the operations inside a repeated building block) and the network-level structure (how spatial resolution changes from layer to layer), enabling a more efficient and effective search for segmentation-specific architectures. Both levels are handled with a continuous relaxation of the architecture, so the search can be carried out by gradient descent; a full search on Cityscapes takes only about 3 P100 GPU days.

The resulting architecture attains state-of-the-art performance on Cityscapes, PASCAL VOC 2012, and ADE20K without ImageNet pretraining. On Cityscapes it outperforms models such as FRRN-B and GridNet and performs comparably with ImageNet-pretrained models; on PASCAL VOC 2012 and ADE20K it achieves strong results while relying on less pretraining data. The paper also analyzes the search process in detail, including the continuous relaxation of the two-level search space and the decoding of the final discrete architecture, and shows that the searched networks significantly improve semantic segmentation performance.
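The continuous relaxation at the heart of the search can be sketched with a small PyTorch-style example. This is only an illustrative sketch, not the authors' implementation: the candidate operation set is reduced, the placement of the architecture parameters is simplified, and the names (`make_ops`, `MixedOp`, `NetworkLevelMix`, `decode_edge`) are hypothetical. The idea shown is the one described in the paper: cell-level parameters (alpha) softly weight candidate operations on an edge, network-level parameters (beta) softly weight contributions from adjacent spatial resolutions, and both are trained by gradient descent together with the ordinary network weights.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A reduced set of candidate operations for the cell-level search space
# (the actual Auto-DeepLab operation set is larger).
def make_ops(channels):
    return nn.ModuleList([
        nn.Identity(),                                            # skip connection
        nn.Conv2d(channels, channels, 3, padding=1, bias=False),  # 3x3 conv
        nn.Conv2d(channels, channels, 3, padding=2, dilation=2,
                  bias=False),                                    # 3x3 atrous conv
        nn.AvgPool2d(3, stride=1, padding=1),                     # average pooling
    ])

class MixedOp(nn.Module):
    """Continuous relaxation of one cell edge: the output is a
    softmax-weighted sum of all candidate operations, with alpha
    acting as the cell-level architecture parameters."""
    def __init__(self, channels):
        super().__init__()
        self.ops = make_ops(channels)
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

class NetworkLevelMix(nn.Module):
    """Continuous relaxation at the network level: a layer at one spatial
    resolution mixes features arriving from the finer, same, and coarser
    resolutions, weighted by softmax-normalized beta parameters."""
    def __init__(self, channels):
        super().__init__()
        self.beta = nn.Parameter(1e-3 * torch.randn(3))
        self.from_finer = nn.Conv2d(channels, channels, 1, stride=2, bias=False)
        self.same = MixedOp(channels)
        self.from_coarser = nn.Conv2d(channels, channels, 1, bias=False)

    def forward(self, finer, same, coarser):
        w = F.softmax(self.beta, dim=0)
        up = F.interpolate(self.from_coarser(coarser),
                           size=same.shape[-2:], mode='bilinear',
                           align_corners=False)
        return (w[0] * self.from_finer(finer)
                + w[1] * self.same(same)
                + w[2] * up)

def decode_edge(mixed_op):
    """Decode one edge of the discrete architecture by keeping the
    operation with the largest alpha weight."""
    return int(mixed_op.alpha.argmax())

# Toy usage: features at three adjacent resolutions with matching channels.
C = 8
finer = torch.randn(1, C, 64, 64)
same = torch.randn(1, C, 32, 32)
coarser = torch.randn(1, C, 16, 16)
layer = NetworkLevelMix(C)
out = layer(finer, same, coarser)   # shape: (1, 8, 32, 32)
```

In the paper itself the architecture parameters are updated on a held-out partition of the training data while the network weights are updated on the rest, and the final network-level structure is decoded with a Viterbi-style dynamic program over the beta weights rather than a simple per-layer argmax; the per-edge argmax above only illustrates the cell-level decoding.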