Zero-Shot Learning - A Comprehensive Evaluation of the Good, the Bad and the Ugly

23 Sep 2020 | Yongqin Xian, Student Member, IEEE, Christoph H. Lampert, Bernt Schiele, Fellow, IEEE, and Zeynep Akata, Member, IEEE
This paper provides a comprehensive evaluation of zero-shot learning (ZSL) methods, addressing both the good and the bad aspects of the field. The authors define a new benchmark by unifying evaluation protocols and data splits across publicly available datasets, and they introduce a new, publicly available dataset, Animals with Attributes 2 (AWA2). A significant number of state-of-the-art methods are compared and analyzed in both the classic zero-shot and the generalized zero-shot settings, and the limitations of current ZSL methods are discussed along with suggestions for improving them.

The authors evaluate three aspects of ZSL: methods, datasets, and evaluation protocols. They emphasize the importance of tuning hyperparameters on a disjoint validation set and of reporting per-class averaged top-1 accuracy, so that sparsely populated or rare classes carry the same weight as frequent ones. They also argue that training classes should be included in the search space at test time, which yields the more practical generalized zero-shot setting.

The paper reviews a range of ZSL approaches, including linear and nonlinear compatibility learning frameworks, attribute-based methods, and hybrid models. It also covers transductive ZSL settings, in which unlabeled images from unseen classes are available during training.
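The linear compatibility frameworks mentioned above typically score an image against a class embedding with a bilinear form F(x, y) = θ(x)ᵀ W φ(y), where θ(x) is an image feature (for example a ResNet embedding) and φ(y) is a class embedding such as an attribute vector. The sketch below illustrates only this scoring rule; the names and random inputs are placeholders, and it is not the implementation of any particular method from the paper.

```python
import numpy as np

def compatibility_scores(theta_x, W, phi_Y):
    """Bilinear compatibility F(x, y) = theta(x)^T W phi(y) for every candidate class.

    theta_x : (d,)   image feature, e.g. a ResNet embedding
    W       : (d, a) learned compatibility matrix
    phi_Y   : (C, a) one attribute/embedding vector per candidate class
    Returns a length-C vector of scores.
    """
    return phi_Y @ (W.T @ theta_x)

def predict(theta_x, W, phi_Y, class_ids):
    """Zero-shot prediction: pick the class whose embedding is most compatible."""
    scores = compatibility_scores(theta_x, W, phi_Y)
    return class_ids[int(np.argmax(scores))]

# Toy usage with random numbers (placeholders, not real features or attributes).
rng = np.random.default_rng(0)
d, a, C = 2048, 85, 10              # ResNet-style feature size, AWA-style attribute count
W = rng.normal(size=(d, a))
theta_x = rng.normal(size=d)
phi_Y = rng.normal(size=(C, a))
print(predict(theta_x, W, phi_Y, np.arange(C)))
```

In the generalized zero-shot setting, the candidate classes in phi_Y would include both seen and unseen classes rather than unseen classes only.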
The evaluation covers the coarse-grained aPY, AWA1, and AWA2 datasets, the fine-grained SUN and CUB datasets, and the large-scale ImageNet dataset. The authors propose new dataset splits that ensure the test classes do not overlap with the classes used to pre-train the ResNet feature extractor, which is crucial for preserving the zero-shot assumption.

The paper reports detailed results and comparisons of the evaluated methods, including reproducibility checks, robustness evaluations, and visualizations of method rankings. It concludes by calling for more practical and comprehensive evaluations of ZSL methods.
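The per-class averaged top-1 accuracy behind these comparisons can be summarized in a few lines. The snippet below is a minimal sketch with made-up labels, not code from the paper's benchmark; the harmonic-mean helper reflects the measure typically used to combine seen- and unseen-class accuracies in the generalized zero-shot setting.

```python
import numpy as np

def per_class_top1_accuracy(y_true, y_pred, classes):
    """Average of per-class accuracies, so rare classes count as much as frequent ones."""
    accs = []
    for c in classes:
        mask = (y_true == c)
        if mask.any():
            accs.append(np.mean(y_pred[mask] == c))
    return float(np.mean(accs))

def gzsl_harmonic_mean(acc_seen, acc_unseen):
    """Harmonic mean of seen- and unseen-class accuracies for generalized ZSL."""
    if acc_seen + acc_unseen == 0:
        return 0.0
    return 2 * acc_seen * acc_unseen / (acc_seen + acc_unseen)

# Toy usage with made-up labels.
y_true = np.array([0, 0, 0, 1, 2])
y_pred = np.array([0, 0, 1, 1, 2])
print(per_class_top1_accuracy(y_true, y_pred, classes=[0, 1, 2]))  # (2/3 + 1 + 1) / 3
print(gzsl_harmonic_mean(0.7, 0.4))
```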