META-LEARNING WITH DIFFERENTIABLE CLOSED-FORM SOLVERS

24 Jul 2019 | Luca Bertinetto, João Henriques, Philip H.S. Torr, Andrea Vedaldi
This paper introduces a novel approach to few-shot learning that uses differentiable closed-form solvers as the base learner inside a meta-learning framework. The central idea is to teach a deep network to use standard machine learning tools, such as ridge regression, as part of its own internal model, so that it can adapt quickly to novel data. Doing so requires back-propagating errors through the steps of the solver itself.

Ridge regression is attractive here because it admits a closed-form solution, W = X^T (X X^T + λI)^{-1} Y, where X holds the support-set embeddings and Y the labels. Applying the Woodbury identity turns the required matrix inversion into an n × n system, where n is the number of support examples; since n is tiny in the few-shot regime, the scarcity of examples works to the method's advantage rather than against it. Two base learners are proposed: R2-D2, which applies the closed-form ridge regression solution directly, and LR-D2, which solves logistic regression iteratively via Newton's method (iteratively reweighted least squares), with the same identity applied at every step.

The paper situates this approach within related meta-learning work based on nearest neighbours (e.g., prototypical networks), learned gradient descent (e.g., MAML), and memory-augmented models. In contrast to these, the proposed base learners are simple and fast-converging, which affords more flexibility in the embedding network while keeping per-episode adaptation cheap.

Experiments on three few-shot learning benchmarks, Omniglot, miniImageNet, and CIFAR-FS, demonstrate the effectiveness of the proposed methods: accuracy is competitive with or superior to the state of the art, with R2-D2 outperforming the alternatives in several settings. The methods are also efficient, running faster than MAML and at a speed comparable to prototypical networks, which makes them well suited to online adaptation.
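To make the adaptation step concrete, here is a minimal PyTorch sketch of a differentiable ridge-regression base learner in the Woodbury form. The embedding dimension, episode sizes, regularization value, and all variable names are illustrative assumptions, not the paper's exact configuration (the paper additionally meta-learns a scale and bias to calibrate the outputs).

```python
import torch

def ridge_solver(X, Y, lam):
    """Closed-form ridge regression, solved in the space of examples.

    X: (n, d) support-set embeddings; Y: (n, c) one-hot labels.
    With n << d (the few-shot regime), the Woodbury identity lets us
    invert an (n x n) matrix instead of a (d x d) one:
        W = X^T (X X^T + lam * I)^{-1} Y
    Every step is differentiable, so meta-gradients flow through it.
    """
    n = X.shape[0]
    G = X @ X.T + lam * torch.eye(n, dtype=X.dtype)   # (n, n) regularized Gram matrix
    W = X.T @ torch.linalg.solve(G, Y)                # (d, c) classifier weights
    return W

# Hypothetical episode: 5-way 1-shot with 512-dim embeddings.
torch.manual_seed(0)
X = torch.randn(5, 512, requires_grad=True)   # support embeddings (from a CNN)
Y = torch.eye(5)                              # one-hot support labels
W = ridge_solver(X, Y, lam=1.0)

# Query predictions; the meta-loss on these drives the embedding network.
Q = torch.randn(15, 512)                      # query embeddings
logits = Q @ W
loss = torch.nn.functional.cross_entropy(logits, torch.arange(5).repeat(3))
loss.backward()                               # gradients reach X through the solver
print(X.grad.shape)                           # torch.Size([5, 512])
```

Because the linear system solved per episode is only n × n, the solver is cheap both to evaluate and to back-propagate through, which is what makes it practical inside the outer meta-learning loop.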
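For the iterative case, the sketch below illustrates a logistic-regression base learner in the spirit of LR-D2: Newton's method (iteratively reweighted least squares) with each step rewritten through the Woodbury identity. The labels in {0, 1}, the fixed step count, and the absence of numerical safeguards are simplifying assumptions for illustration, not the paper's exact implementation.

```python
import torch

def logistic_newton_solver(X, y, lam, steps=5):
    """Binary L2-regularized logistic regression via Newton's method.

    X: (n, d) support embeddings; y: (n,) labels in {0, 1}.
    Each Newton step is a weighted least-squares problem; the Woodbury
    identity turns it into an (n x n) system:
        w = X^T (X X^T + lam * S^{-1})^{-1} z,
    where S = diag(mu * (1 - mu)) and z = Xw + S^{-1}(y - mu).
    """
    n, d = X.shape
    w = torch.zeros(d, dtype=X.dtype)
    for _ in range(steps):
        f = X @ w                        # (n,) current logits
        mu = torch.sigmoid(f)            # (n,) predicted probabilities
        s = mu * (1 - mu)                # diagonal of the Hessian weighting S
        z = f + (y - mu) / s             # working targets of the IRLS step
        A = X @ X.T + lam * torch.diag(1.0 / s)   # (n, n) system matrix
        w = X.T @ torch.linalg.solve(A, z)        # (d,) updated weights
    return w

# Hypothetical 1-vs-rest episode: 10 support examples, 64-dim embeddings.
torch.manual_seed(0)
X = torch.randn(10, 64)
y = (torch.rand(10) > 0.5).to(X.dtype)
w = logistic_newton_solver(X, y, lam=1.0)
print(((torch.sigmoid(X @ w) > 0.5).to(X.dtype) == y).float().mean())
```

Since logistic regression has no closed form, a handful of Newton iterations are unrolled and differentiated through; each iteration costs only an n × n solve, so the few-shot setting again keeps the inner loop inexpensive.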