12 Jan 2020 | Wei-Yu Chen, Yen-Cheng Liu, Zsolt Kira, Yu-Chiang Frank Wang & Jia-Bin Huang
This paper investigates the performance of few-shot classification algorithms under different settings, focusing on the impact of backbone depth, domain shifts, and data augmentation. The authors present a consistent comparative analysis of several representative few-shot classification algorithms, showing that deeper backbones significantly reduce performance differences among methods on datasets with limited domain differences. They also introduce a modified baseline method that achieves competitive performance with state-of-the-art methods on the mini-ImageNet and CUB datasets. Additionally, they propose a new experimental setting to evaluate cross-domain generalization ability for few-shot classification algorithms.
The study reveals that reducing intra-class variation is important when using shallow backbones but less critical when using deeper ones. In a realistic cross-domain evaluation setting, a baseline method with standard fine-tuning practices performs favorably against other state-of-the-art few-shot learning algorithms. The authors also show that current few-shot classification algorithms fail to address domain shifts between base and novel classes, highlighting the importance of learning to adapt to domain differences.
The paper compares several representative few-shot learning algorithms, including initialization-based, metric learning-based, and hallucination-based methods. It also discusses the effects of increasing network depth, domain differences between base and novel classes, and further adaptation steps on the performance of few-shot classification algorithms. The results show that deeper backbones reduce performance gaps among methods, while domain shifts can significantly affect performance. The authors also demonstrate that further adaptation improves the performance of some meta-learning methods but may harm others, especially in scenarios with little domain difference.
The study concludes that the standard evaluation setting for few-shot classification has limitations, and that learning to adapt to domain differences in the meta-training stage is an important direction for future research. The authors make their source code publicly available to foster future progress in the field.