7 Nov 2020 | Timothy Hospedales, Antreas Antoniou, Paul Micaelli, Amos Storkey
The paper "Meta-Learning in Neural Networks: A Survey" by Timothy Hospedales, Antreas Antoniou, Paul Micaelli, and Amos Storkey provides an extensive overview of the field of meta-learning, also known as learning-to-learn. Meta-learning aims to improve the learning algorithm itself, given experience from multiple learning episodes, addressing challenges such as data and computation bottlenecks and generalization.

The introduction highlights the limitations of conventional machine learning approaches and the benefits of meta-learning, such as improved data efficiency and closer alignment with how humans and animals learn. The background section formally defines meta-learning and traces its historical context, while the related-fields section clarifies how meta-learning differs from techniques such as transfer learning, domain adaptation, and hyperparameter optimization.

The authors propose a new taxonomy that categorizes meta-learning methods along three axes: meta-representation (what is meta-learned), meta-optimizer (how it is learned), and meta-objective (why it is learned). The survey section breaks down the existing literature according to this taxonomy, covering the various meta-representations, meta-optimizers, and meta-objectives found in prior work. The paper then reviews promising applications of meta-learning, including few-shot learning, reinforcement learning, and neural architecture search, and closes by discussing outstanding challenges and future research directions.
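The "learning from multiple episodes" idea at the heart of the survey has a bilevel structure: an inner loop adapts a model to each task, and an outer loop updates shared meta-parameters so that adaptation works well on held-out data. The toy sketch below illustrates this with a first-order MAML-style loop on synthetic 1-D regression tasks; the task distribution, learning rates, and helper names are illustrative assumptions, not the survey's notation.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task():
    # Each "episode" is a linear regression task y = a * x with a random slope,
    # split into a support (train) set and a query (validation) set.
    a = rng.uniform(-2.0, 2.0)
    x_tr, x_te = rng.normal(size=5), rng.normal(size=5)
    return (x_tr, a * x_tr), (x_te, a * x_te)

def inner_update(w, x, y, lr=0.1):
    # Inner loop: one gradient step on the task's squared error,
    # starting from the meta-learned initialization w.
    grad = 2.0 * np.mean((w * x - y) * x)
    return w - lr * grad

w = 0.0  # meta-parameter: the shared initialization (the "meta-representation")
for step in range(500):
    (x_tr, y_tr), (x_te, y_te) = make_task()
    w_task = inner_update(w, x_tr, y_tr)                     # adapt to the episode
    # Outer loop: first-order approximation of the meta-gradient, i.e. the
    # gradient of the *post-adaptation* query loss (the "meta-objective").
    meta_grad = 2.0 * np.mean((w_task * x_te - y_te) * x_te)
    w -= 0.01 * meta_grad                                    # meta-optimizer: plain SGD
```

In the survey's taxonomy, this sketch meta-learns an initialization (meta-representation), uses gradient descent as the meta-optimizer, and takes post-adaptation validation loss as the meta-objective; other choices along each axis yield the method families the survey catalogs.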