DC-NAS: Divide-and-Conquer Neural Architecture Search for Multi-Modal Classification

2024 | Xinyan Liang, Pinhan Fu, Qian Guo, Keyin Zheng, Yuhua Qian*
The paper introduces DC-NAS (Divide-and-Conquer Neural Architecture Search), an efficient evolutionary method for multi-modal classification (MMC). DC-NAS addresses the high cost of training and evaluating large models in traditional NAS-MMC methods by dividing the population into sub-populations, each trained on a subset of the data.

The main contributions of DC-NAS are:

1. **Efficient training**: By training sub-populations on smaller datasets, DC-NAS reduces training time.
2. **Knowledge exchange**: Sub-populations trained on partial data exchange knowledge via dedicated knowledge bases to improve performance.
3. **State-of-the-art performance**: DC-NAS achieves competitive classification accuracy with reduced training time and fewer model parameters than existing methods.

The paper evaluates DC-NAS on three popular multi-modal tasks: multi-label movie genre classification, action recognition with RGB and body joints, and dynamic hand gesture recognition. Experimental results show that DC-NAS matches or outperforms state-of-the-art methods in classification accuracy, training efficiency, and model size. Ablation studies confirm that both the divide-and-conquer evolution and the knowledge-transfer components are crucial to its success.
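The divide-and-conquer loop described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual algorithm: the bit-string encoding, the toy `fitness` function, and all names (`dc_nas_sketch`, `knowledge_base`) are hypothetical stand-ins; real DC-NAS trains multi-modal fusion architectures on data shards and uses specialized knowledge bases for the exchange step.

```python
import random

def fitness(arch, shard):
    # Toy proxy fitness: stands in for training/evaluating an architecture
    # on one data shard. Real DC-NAS trains a fusion model here.
    return sum(arch) + 0.1 * shard

def dc_nas_sketch(pop_size=12, n_subpops=3, arch_len=5, generations=10, seed=0):
    rng = random.Random(seed)
    # Random initial population of bit-string "architectures" (hypothetical encoding).
    population = [[rng.randint(0, 1) for _ in range(arch_len)]
                  for _ in range(pop_size)]
    # Divide: split the population into sub-populations, one per data shard.
    subpops = [population[i::n_subpops] for i in range(n_subpops)]
    knowledge_base = []  # shared elite pool for cross-shard knowledge exchange

    for _ in range(generations):
        for shard, subpop in enumerate(subpops):
            # Conquer: each sub-population evolves against its own data shard.
            subpop.sort(key=lambda a: fitness(a, shard), reverse=True)
            elite = subpop[0]
            knowledge_base.append(list(elite))
            # Replace the weaker half with mutated copies of the elite.
            for i in range(len(subpop) // 2, len(subpop)):
                child = list(elite)
                child[rng.randrange(arch_len)] ^= 1  # flip one bit
                subpop[i] = child
        # Knowledge exchange: inject the globally best elite into every sub-population.
        best_elite = max(knowledge_base, key=sum)
        for subpop in subpops:
            subpop[-1] = list(best_elite)

    # Return the best architecture found across all shards.
    return max((a for sp in subpops for a in sp), key=sum)

best = dc_nas_sketch()
```

Because each sub-population evaluates candidates on only a shard of the data, per-generation cost drops roughly in proportion to the number of sub-populations, while the elite pool lets discoveries made on one shard propagate to the others.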