Meta Networks

8 Jun 2017 | Tsendsuren Munkhdalai, Hong Yu
This paper introduces a meta-learning method called Meta Networks (MetaNet), which aims to achieve rapid generalization to new concepts from limited training data while preserving performance on previously learned tasks. MetaNet learns meta-level knowledge across tasks and shifts its inductive biases via fast parameterization for rapid generalization.

The model consists of a base learner and a meta learner that operate in separate spaces (a meta space and a task space). The meta learner works in the abstract meta space to support continual learning and to acquire meta knowledge across tasks, while the base learner operates within each individual task. MetaNet's weights evolve at different time scales: standard slow weights are updated by a conventional learning algorithm, task-level fast weights are updated within each task, and example-level fast weights are generated for specific input examples. An external memory further supports rapid learning and generalization.

The paper evaluates MetaNet on the Omniglot and Mini-ImageNet benchmarks, where it achieves near human-level performance and outperforms baseline approaches by up to 6% accuracy. The experiments also demonstrate appealing properties of MetaNet, including generalization and continual learning capabilities, and show that MetaNet can effectively parameterize neural networks with fixed slow weights and supports meta-level continual learning up to a certain point. The paper concludes with future directions, such as exploring more robust meta information and developing synaptic weights that can maintain higher-order information.
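To make the fast-weight mechanism concrete, below is a minimal PyTorch-style sketch of the general idea: a meta-learner component maps per-example information to example-level fast weights, which are stored in an external memory keyed by support-example embeddings and retrieved by soft attention; the base learner then combines slow and fast weights in each layer. The module names, dimensions, and the two-layer weight generator are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AugmentedLinear(nn.Module):
    """Linear layer whose output combines slow (learned) weights with
    externally supplied example-level fast weights."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.slow = nn.Linear(in_dim, out_dim)

    def forward(self, x, fast_weight=None):
        out = F.relu(self.slow(x))
        if fast_weight is not None:
            # fast_weight: (out_dim, in_dim), generated by the meta learner
            out = out + F.relu(F.linear(x, fast_weight))
        return out

class FastWeightGenerator(nn.Module):
    """Meta-learner component: maps a feature vector for one support example
    to a fast weight matrix for the base learner (hypothetical sizes)."""
    def __init__(self, feat_dim, in_dim, out_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim * in_dim))
        self.in_dim, self.out_dim = in_dim, out_dim

    def forward(self, feat):                           # feat: (feat_dim,)
        return self.net(feat).view(self.out_dim, self.in_dim)

def read_memory(query, keys, values):
    """Soft-attention read of stored fast weights.
    query:  (d,)   embedding of the new example
    keys:   (n, d) embeddings of the support examples
    values: (n, out_dim, in_dim) fast weights stored per support example
    """
    attn = F.softmax(keys @ query, dim=0)              # (n,)
    return torch.einsum('n,noi->oi', attn, values)     # (out_dim, in_dim)

# Illustrative usage with toy dimensions
layer = AugmentedLinear(in_dim=8, out_dim=4)
gen = FastWeightGenerator(feat_dim=8, in_dim=8, out_dim=4)

support_x = torch.randn(5, 8)                          # 5 support examples
keys = support_x                                       # toy embeddings as memory keys
# In the paper the generator consumes loss-gradient information;
# raw support inputs are used here purely as a stand-in.
values = torch.stack([gen(x) for x in support_x])      # (5, 4, 8) stored fast weights

query_x = torch.randn(8)                               # a new example
fast_w = read_memory(query_x, keys, values)            # retrieve its fast weights
out = layer(query_x.unsqueeze(0), fast_weight=fast_w)  # slow + fast forward pass
```

The retrieval step shows why example-level fast weights help: the base learner's fixed slow weights are re-parameterized on the fly, per input, from what the meta learner has written into memory during the support phase.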