Can You Trust Your Model’s Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift

17 Dec 2019 | Yaniv Ovadia, Emily Fertig, Jie Ren, Zachary Nado, D Sculley, Sebastian Nowozin, Joshua V. Dillon, Balaji Lakshminarayanan, Jasper Snoek
This paper evaluates the reliability of predictive uncertainty estimates from deep learning models under dataset shift. The authors investigate how well different methods quantify uncertainty when the test distribution differs from the training distribution. They find that traditional post-hoc calibration methods (such as temperature scaling) degrade under dataset shift, while methods that marginalize over models hold up better. The study compares a range of probabilistic deep learning methods, both Bayesian and non-Bayesian, on classification tasks spanning image, text, and categorical modalities. The results show that deep ensembles most consistently provide well-calibrated, accurate predictions under shift, with dropout-based methods also performing competitively. The paper underscores the importance of evaluating predictive uncertainty under distributional shift for real-world applications where data distributions change over time, and the need for robust uncertainty estimation to support reliable model decisions in such settings.
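For concreteness, the sketch below (a minimal NumPy example, not the authors' code; the helper names `expected_calibration_error`, `ensemble_predict`, and the equal-width binning scheme are assumptions made for illustration) shows the two ingredients the summary refers to: forming a predictive distribution by averaging the softmax outputs of independently trained ensemble members, and measuring calibration on a (possibly shifted) test set with expected calibration error (ECE), one of the calibration metrics used in this line of work.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """ECE over equal-width confidence bins.

    probs:  (N, K) array of predicted class probabilities.
    labels: (N,) array of integer class labels.
    """
    confidences = probs.max(axis=1)        # top-1 confidence per example
    predictions = probs.argmax(axis=1)     # predicted class per example
    accuracies = (predictions == labels).astype(float)

    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(labels)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            # Weighted gap between average accuracy and average confidence in this bin.
            gap = abs(accuracies[in_bin].mean() - confidences[in_bin].mean())
            ece += (in_bin.sum() / n) * gap
    return ece

def ensemble_predict(member_prob_fns, x):
    """Average the softmax outputs of independently trained ensemble members.

    member_prob_fns: list of callables, each mapping inputs to (N, K) probabilities.
    """
    return np.mean([f(x) for f in member_prob_fns], axis=0)
```

MC dropout fits the same pattern: instead of a list of separately trained members, one would average the probabilities from repeated stochastic forward passes of a single model with dropout kept active at test time, then evaluate calibration of the averaged predictions on increasingly shifted test sets.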