Generalizing from a Few Examples: A Survey on Few-Shot Learning

March 2020

YAQING WANG, Hong Kong University of Science and Technology and Baidu Research
QUANMING YAO*, 4Paradigm Inc.
JAMES T. KWOK, Hong Kong University of Science and Technology
LIONEL M. NI, Hong Kong University of Science and Technology
This paper provides a comprehensive survey of Few-Shot Learning (FSL), a machine learning paradigm that enables rapid generalization from a small number of examples. FSL is particularly useful when data is scarce, as it leverages prior knowledge to improve learning performance. The paper defines FSL as a machine learning problem in which the training data contains only a limited number of examples with supervised information, and distinguishes it from related problems such as weakly supervised learning, imbalanced learning, transfer learning, and meta-learning. The core challenge in FSL is the unreliability of the empirical risk minimizer, which is analyzed through the error decomposition of supervised learning.

To address this challenge, FSL methods are categorized from three perspectives: (i) data, which uses prior knowledge to augment the training data; (ii) model, which uses prior knowledge to reduce the hypothesis space; and (iii) algorithm, which uses prior knowledge to guide the search for the best hypothesis. The paper reviews existing FSL methods, discusses their pros and cons, and proposes promising future directions in terms of problem setup, techniques, applications, and theories. It also highlights the importance of FSL in applications such as robotics, drug discovery, and learning from rare cases. The survey concludes that FSL is a critical area of research that bridges the gap between AI and human learning capabilities.
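The error decomposition referred to above can be sketched as follows. This is the standard decomposition from statistical learning theory; the notation here is illustrative and may differ slightly from the paper's:

```latex
% R(h): expected risk of hypothesis h
% h^*: the best possible hypothesis overall
% \hat{h}: the best hypothesis within the chosen hypothesis space \mathcal{H}
% h_I: the empirical risk minimizer learned from I training samples
\mathbb{E}\bigl[R(h_I) - R(h^*)\bigr]
  = \underbrace{\mathbb{E}\bigl[R(\hat{h}) - R(h^*)\bigr]}_{\text{approximation error}}
  + \underbrace{\mathbb{E}\bigl[R(h_I) - R(\hat{h})\bigr]}_{\text{estimation error}}
```

With few examples, the estimation error can be large, which is why the empirical risk minimizer $h_I$ becomes unreliable; the three perspectives (data, model, algorithm) each attack this term using prior knowledge.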