28 Sep 2017 | Zhenguo Li, Fengwei Zhou, Fei Chen, Hang Li
Meta-SGD is a meta-learner that learns quickly and accurately from few examples in both supervised learning and reinforcement learning. Compared with the LSTM-based meta-learner, it is conceptually simpler, easier to implement, and more efficient to train; compared with MAML, which meta-learns only the initialization, it has higher capacity because it learns the initialization, update direction, and learning rate of the learner in a single meta-learning process. The resulting meta-learner is SGD-like: it can initialize and adapt any differentiable learner in just one step, and it performs well on few-shot regression, classification, and reinforcement learning tasks. The paper reviews related work in meta-learning, compares Meta-SGD against competing meta-learners on a range of few-shot learning benchmarks, and reports results showing that Meta-SGD learns from a few examples and adapts to new tasks efficiently, making it a promising approach for few-shot learning.
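To make the adaptation rule concrete, below is a minimal sketch in JAX of the Meta-SGD update θ' = θ − α ∘ ∇L_support(θ), where both the initialization θ and the per-parameter coefficient vector α (which jointly encodes the update direction and learning rate) are meta-learned by descending the loss of the adapted learner on a query set. The toy linear-regression task, the function names (`meta_sgd_adapt`, `meta_loss`), and the fixed outer learning rate `beta` are illustrative assumptions, not the paper's implementation.

```python
import jax
import jax.numpy as jnp

# Hypothetical toy learner: y = w * x + b with squared loss.
def loss(params, x, y):
    w, b = params
    pred = w * x + b
    return jnp.mean((pred - y) ** 2)

def meta_sgd_adapt(theta, alpha, x_s, y_s):
    # One Meta-SGD adaptation step on the support set:
    # theta' = theta - alpha * grad L_support(theta),
    # with alpha a learned per-parameter coefficient.
    grads = jax.grad(loss)(theta, x_s, y_s)
    return tuple(t - a * g for t, a, g in zip(theta, alpha, grads))

def meta_loss(meta_params, x_s, y_s, x_q, y_q):
    # Meta-objective: loss of the adapted learner on the query set.
    theta, alpha = meta_params
    theta_prime = meta_sgd_adapt(theta, alpha, x_s, y_s)
    return loss(theta_prime, x_q, y_q)

# Meta-parameters: initialization theta = (w, b) and per-parameter rates alpha.
meta_params = ((jnp.array(0.0), jnp.array(0.0)),
               (jnp.array(0.1), jnp.array(0.1)))
beta = 0.01  # outer (meta) learning rate; assumed fixed hyperparameter

# One hypothetical few-shot task drawn from y = 2x + 1.
x_s, y_s = jnp.array([0., 1.]), jnp.array([1., 3.])
x_q, y_q = jnp.array([2., 3.]), jnp.array([5., 7.])

# One meta-update: gradient of the meta-objective w.r.t. both theta and alpha.
g_theta, g_alpha = jax.grad(meta_loss)(meta_params, x_s, y_s, x_q, y_q)
theta, alpha = meta_params
meta_params = (tuple(t - beta * g for t, g in zip(theta, g_theta)),
               tuple(a - beta * g for a, g in zip(alpha, g_alpha)))
```

In the paper, α is a vector of the same shape as θ and the meta-gradient is averaged over a batch of sampled tasks; the single-task, two-parameter example above only illustrates the shape of the computation, with the adapted parameters produced in one step as described in the summary.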