META-LEARNING WITH LATENT EMBEDDING OPTIMIZATION


26 Mar 2019 | Andrei A. Rusu, Dushyant Rao, Jakub Sygnowski, Oriol Vinyals, Razvan Pascanu, Simon Osindero & Raia Hadsell
Latent Embedding Optimization (LEO) is a meta-learning approach that addresses the challenges of few-shot learning and fast adaptation by learning a low-dimensional latent embedding of model parameters. This decouples gradient-based adaptation from the high-dimensional parameter space, so adaptation can operate more effectively in the latent space. LEO achieves state-of-the-art performance on the miniImageNet and tieredImageNet benchmarks, demonstrating its effectiveness in capturing uncertainty and adapting models efficiently.

The approach learns a stochastic latent space conditioned on the input data, from which model parameters are generated. This allows for data-dependent initialization and optimization in the latent space, leading to better generalization and adaptation. LEO also incorporates a relation network that accounts for pairwise relationships between classes, improving the model's ability to handle ambiguous tasks. The method is evaluated on both regression and classification tasks, showing its versatility in few-shot learning scenarios. The results indicate that LEO outperforms existing methods, particularly when data is limited, by leveraging the latent space for more efficient and effective adaptation.
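The core idea of adapting in latent space rather than parameter space can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: the dimensions, the linear decoder, and the deterministic (non-stochastic) latent code are simplifying assumptions, and the encoder and relation network are omitted. A latent code z is decoded into classifier weights, and inner-loop gradient steps are taken on z itself, with the gradient chained back through the decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): a 5-way task, a 4-dim
# latent code, and 8-dim input features. The decoder is a fixed linear
# map from the latent code to flattened classifier weights.
n_way, latent_dim, feat_dim = 5, 4, 8
decoder = rng.normal(scale=0.1, size=(n_way * feat_dim, latent_dim))

# Toy 1-shot support set: one example per class.
x_support = rng.normal(size=(n_way, feat_dim))
y_support = np.arange(n_way)

def softmax(logits):
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def loss_and_grad_z(z):
    """Cross-entropy loss of the decoded classifier, with the gradient
    taken w.r.t. the latent code z rather than the weights themselves."""
    w = (decoder @ z).reshape(n_way, feat_dim)   # decode weights from z
    probs = softmax(x_support @ w.T)             # (n_way, n_way) class probs
    loss = -np.log(probs[np.arange(n_way), y_support]).mean()
    # Standard softmax cross-entropy gradient w.r.t. the logits ...
    d_logits = probs.copy()
    d_logits[np.arange(n_way), y_support] -= 1.0
    d_logits /= n_way
    # ... then w.r.t. the weights, then chained back through the decoder.
    d_w = d_logits.T @ x_support                 # (n_way, feat_dim)
    d_z = decoder.T @ d_w.reshape(-1)            # gradient in latent space
    return loss, d_z

# Inner-loop adaptation: gradient steps on z only; the weights are
# regenerated from z at every step, never updated directly.
z = rng.normal(size=latent_dim)
lr = 2.0  # arbitrary step size for this toy problem
losses = []
for _ in range(20):
    loss, d_z = loss_and_grad_z(z)
    losses.append(loss)
    z -= lr * d_z

print(f"support loss before adaptation: {losses[0]:.3f}, after: {losses[-1]:.3f}")
```

Because the decoded weights are linear in z, the low-dimensional latent gradient steps still reduce the support-set loss, which is the mechanism the summary describes: fast adaptation happens in a compact space while the full parameters are merely generated from it.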