Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning

2016 | Yarin Gal, Zoubin Ghahramani
This paper presents a theoretical framework that interprets dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes (GPs). The authors show that the dropout objective minimizes the Kullback-Leibler divergence between an approximating distribution and the posterior of a deep GP, so a dropout NN can be viewed as a Bayesian approximation of such a model. This interpretation allows model uncertainty to be extracted from existing dropout NNs without sacrificing either computational complexity or test accuracy. Extensive experiments on regression and classification tasks demonstrate considerable improvements in predictive log-likelihood and RMSE compared to state-of-the-art methods. Additionally, the authors explore the use of dropout's uncertainty in reinforcement learning, showing its potential to improve performance. The results highlight the importance of model uncertainty in deep learning and provide a practical approach to incorporating it into existing models.
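
To illustrate how uncertainty is extracted in practice, here is a minimal sketch of Monte Carlo dropout in PyTorch (not the authors' reference code): dropout is kept active at test time and the network is run several times on the same input, with the spread of the outputs used as an uncertainty estimate. The network architecture, dropout rate, number of samples, and the helper name `mc_dropout_predict` are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

# Small regression network with dropout after each hidden layer.
# Layer sizes and the dropout probability are illustrative choices.
model = nn.Sequential(
    nn.Linear(1, 50),
    nn.ReLU(),
    nn.Dropout(p=0.1),
    nn.Linear(50, 50),
    nn.ReLU(),
    nn.Dropout(p=0.1),
    nn.Linear(50, 1),
)

def mc_dropout_predict(model, x, n_samples=100):
    """Run several stochastic forward passes with dropout left active
    and return the mean and variance of the predictions across passes."""
    model.train()  # keep dropout "on" at test time (Monte Carlo dropout)
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    # The paper additionally adds an observation-noise term (the inverse
    # model precision) to the sample variance; it is omitted here.
    return samples.mean(dim=0), samples.var(dim=0)

x_test = torch.linspace(-3, 3, steps=200).unsqueeze(1)
mean, var = mc_dropout_predict(model, x_test)
```

Because the stochastic forward passes reuse the already-trained dropout network, this estimate comes essentially for free: no change to the model or the training procedure is required, only repeated evaluation at test time.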