15 Jul 2015 | José Miguel Hernández-Lobato, Ryan P. Adams
The paper introduces Probabilistic Backpropagation (PBP), a novel method for scalable Bayesian learning of neural networks. PBP addresses the limitations of traditional backpropagation, such as the need for hyperparameter tuning, the lack of calibrated probabilistic predictions, and overfitting. Unlike existing Bayesian techniques that struggle with large datasets and network sizes, PBP is designed to be fast and efficient. It works by propagating probabilities forward through the network to compute the marginal likelihood of each target, then propagating gradients of that marginal likelihood backward to update a Gaussian approximation to the posterior over the weights. Experiments on ten real-world datasets demonstrate that PBP is significantly faster than other techniques while offering competitive predictive performance and accurate estimates of the posterior variance on the network weights. The method is particularly useful for large-scale applications where traditional Bayesian approaches are impractical due to computational constraints.
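To make the forward "propagation of probabilities" concrete, here is a minimal sketch of moment propagation through one linear layer and a ReLU, assuming a factorized Gaussian posterior over the weights and a simplified scaling without bias terms (the paper normalizes pre-activations by the square root of the input dimension plus one). The function names `propagate_layer` and `relu_moments` and all dimensions are illustrative, not from the paper, and the full method additionally differentiates the resulting marginal likelihood to update the posterior parameters.

```python
import numpy as np
from scipy.stats import norm

def propagate_layer(m_in, v_in, M_w, V_w):
    """Push input mean/variance through a linear layer whose weights have a
    factorized Gaussian posterior (means M_w, variances V_w).
    Simplified sketch: no bias term; pre-activations are scaled by 1/sqrt(d)."""
    d = m_in.shape[0]
    scale = 1.0 / np.sqrt(d)
    # Moments of z = W a / sqrt(d) when W and a are independent Gaussians.
    m_z = scale * (M_w @ m_in)
    v_z = scale**2 * ((M_w**2) @ v_in + V_w @ (m_in**2) + V_w @ v_in)
    return m_z, v_z

def relu_moments(m, v):
    """Mean and variance of max(0, z) for z ~ N(m, v) (rectified Gaussian)."""
    s = np.sqrt(v)
    alpha = m / s
    cdf, pdf = norm.cdf(alpha), norm.pdf(alpha)
    m_out = m * cdf + s * pdf
    v_out = (m**2 + v) * cdf + m * s * pdf - m_out**2
    return m_out, np.maximum(v_out, 1e-12)

# Toy usage with hypothetical dimensions: deterministic input, uncertain weights.
rng = np.random.default_rng(0)
d_in, d_hidden = 4, 8
m_in, v_in = rng.normal(size=d_in), np.zeros(d_in)
M_w = rng.normal(size=(d_hidden, d_in))
V_w = np.full((d_hidden, d_in), 0.1)   # posterior variances of the weights
m_z, v_z = propagate_layer(m_in, v_in, M_w, V_w)
m_a, v_a = relu_moments(m_z, v_z)
print(m_a, v_a)
```

Repeating this layer by layer yields a Gaussian approximation to the network output, from which the marginal likelihood of an observed target can be evaluated in closed form; PBP then backpropagates gradients of its logarithm to refine the weight means and variances.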