19 Jun 2017 | Jake Snell, Kevin Swersky, Richard S. Zemel
The paper introduces *prototypical networks* for few-shot learning, a task where a classifier must adapt to new classes with only a few examples. Prototypical networks learn a metric space where classification is performed by computing distances to prototype representations of each class. The approach is simpler and more efficient than recent meta-learning algorithms, achieving state-of-the-art results on benchmark datasets like Omniglot and miniImageNet. The authors analyze the impact of design choices, such as the distance metric and episode composition, and extend the method to zero-shot learning, achieving top performance on the Caltech-UCSD Birds (CUB) dataset. The paper also discusses related work and highlights the effectiveness of prototypical networks in handling limited data and novel classes.
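The core idea — average each class's embedded support examples into a prototype, then classify queries by nearest prototype — can be sketched as follows. This is a minimal NumPy illustration, not the authors' code; the embedding function (a learned CNN in the paper) is assumed to have already been applied, and the function names are hypothetical:

```python
import numpy as np

def compute_prototypes(support_emb, support_labels, n_classes):
    # Prototype for class k = mean of its embedded support points.
    return np.stack([
        support_emb[support_labels == k].mean(axis=0)
        for k in range(n_classes)
    ])

def classify_queries(query_emb, prototypes):
    # Squared Euclidean distance from each query to each prototype
    # (the distance the paper finds works best), via broadcasting.
    d = ((query_emb[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    # Predicted class = index of the nearest prototype.
    return d.argmin(axis=1)
```

In training, the paper turns the distances into a softmax over classes and minimizes the negative log-probability of the true class over randomly sampled episodes; the snippet above shows only the inference step.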