12 Jan 2020 | Wei-Yu Chen, Yen-Cheng Liu, Zsolt Kira, Yu-Chiang Frank Wang & Jia-Bin Huang
This paper investigates the challenges and limitations of few-shot classification, the task of recognizing new classes from only a few labeled examples. The authors present a consistent comparative analysis of several representative few-shot classification algorithms and show that, with deeper network backbones, the performance differences among methods shrink substantially on datasets with limited domain differences. They introduce a modified baseline method that performs competitively with state-of-the-art meta-learning algorithms on the mini-ImageNet and CUB datasets. They also propose a new experimental setting for evaluating the cross-domain generalization ability of few-shot classification algorithms, and find that sophisticated algorithms do not outperform the baselines in this more realistic cross-domain scenario. The study emphasizes the importance of reducing intra-class variation, especially with shallow feature backbones, and shows that domain shift is a critical factor in few-shot learning. A detailed empirical study, including implementation details and results, is provided to support these findings and to foster future research in the field.
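The modified baseline's main idea, reducing intra-class variation, is commonly realized by replacing the ordinary linear classification layer with a cosine-similarity head. The sketch below is a minimal, hypothetical PyTorch illustration of that idea, not the authors' exact implementation; the class name `CosineClassifier`, the scaling temperature, and the dimensions are all illustrative assumptions.

```python
# Minimal sketch (assumption, not the paper's code) of a cosine-similarity
# classification head of the kind used to reduce intra-class variation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineClassifier(nn.Module):
    """Scores each feature by cosine similarity to a learned per-class weight
    vector, instead of an unconstrained linear layer."""

    def __init__(self, feat_dim: int, num_classes: int, scale: float = 10.0):
        super().__init__()
        # One learnable "prototype" weight vector per class (illustrative init).
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        # Temperature that sharpens the softmax over similarities.
        self.scale = scale

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # Normalizing both features and class weights turns the dot product
        # into a cosine similarity in [-1, 1].
        features = F.normalize(features, dim=-1)
        weight = F.normalize(self.weight, dim=-1)
        return self.scale * features @ weight.t()

# Usage sketch: attach this head to a frozen or pre-trained backbone and
# fine-tune it on the few labeled support examples of the novel classes.
head = CosineClassifier(feat_dim=512, num_classes=5)   # e.g., a 5-way task
logits = head(torch.randn(8, 512))                     # 8 query embeddings
print(logits.shape)                                    # torch.Size([8, 5])
```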