DEEP ANOMALY DETECTION WITH OUTLIER EXPOSURE

28 Jan 2019 | Dan Hendrycks, Mantas Mazeika, Thomas Dietterich
Outlier Exposure (OE) improves deep anomaly detection by training models on an auxiliary dataset of outliers, enabling the detector to generalize to unseen anomalies. Evaluated on natural language processing tasks and on small- and large-scale vision tasks, OE yields significant gains in detection performance. It also mitigates a known failure mode of deep generative models, which can assign higher likelihoods to out-of-distribution examples than to in-distribution ones. The method is flexible and robust, although the characteristics of the auxiliary dataset influence performance. OE is applied to SVHN, CIFAR-10, CIFAR-100, Tiny ImageNet, and Places365, as well as to text datasets such as 20 Newsgroups and TREC.
Experiments show that OE improves the calibration of neural network classifiers and strengthens density estimates on out-of-distribution samples. The method is computationally efficient and can be added to existing systems with low overhead, making it effective for enhancing out-of-distribution detection across vision and natural language settings.
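In the classifier setting, the OE objective pairs the usual cross-entropy loss on in-distribution data with a term that pushes the model's predictions on auxiliary outliers toward the uniform distribution. A minimal NumPy sketch of that combined loss follows; the function names and the default weight `lam=0.5` are illustrative choices here, not taken from the paper's released code.

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax with max-subtraction for numerical stability."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def oe_loss(logits_in, labels_in, logits_out, lam=0.5):
    """Sketch of an Outlier Exposure objective:
    cross-entropy on in-distribution examples, plus lam times the
    cross-entropy between a uniform target and the model's predictions
    on auxiliary outlier examples."""
    # Standard classification loss on in-distribution data.
    p_in = softmax(logits_in)
    ce_in = -np.log(p_in[np.arange(len(labels_in)), labels_in]).mean()

    # Cross-entropy to the uniform distribution on outliers:
    # H(u, p) = -(1/k) * sum_j log p_j, averaged over the batch.
    p_out = softmax(logits_out)
    ce_uniform = -np.log(p_out).mean(axis=1).mean()

    return ce_in + lam * ce_uniform
```

Under this loss, confident predictions on outlier inputs are penalized, so a loss comparison between uniform and peaked outlier logits shows the intended behavior: the uniform case scores lower.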