Benchmarking Neural Network Robustness to Common Corruptions and Perturbations

28 Mar 2019 | Dan Hendrycks, Thomas Dietterich
This paper introduces two benchmarks for evaluating the robustness of image classifiers: IMAGENET-C and IMAGENET-P. IMAGENET-C measures robustness to common image corruptions, applying 15 corruption types drawn from noise, blur, weather, and digital categories, each at five severity levels. IMAGENET-P measures robustness to common perturbations: each example is a sequence of frames that differ by subtle, gradual changes (for example, small shifts in tilt, translation, or noise), and the benchmark scores how stable a classifier's predictions remain across the sequence.

Using these benchmarks, the authors find that relative corruption robustness changed little from AlexNet to ResNet, even though clean accuracy improved substantially over that period. They also find that certain methods, such as adversarial logit pairing, can markedly improve robustness to common perturbations. The paper surveys further approaches to improving robustness, including histogram equalization, multiscale networks, and larger feature-aggregating models; the results show that larger models and architectures with better feature aggregation tend to be more robust. The authors conclude that robustness should be a first-class concern in the development of deep learning systems, especially for safety-critical applications, and offer the benchmarks to help future research build networks that generalize robustly to real-world conditions.
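To make the IMAGENET-C scoring concrete: the paper summarizes corruption robustness with the mean Corruption Error (mCE), where a model's top-1 error on each corruption is summed over the five severity levels, normalized by AlexNet's summed error on the same corruption, and then averaged over all corruption types. Below is a minimal sketch of that computation; the error rates and the two corruption names are illustrative placeholders, not results from the paper.

```python
# Sketch of the ImageNet-C mean Corruption Error (mCE) computation.
# Error rates below are illustrative placeholders, not values from the paper.

# Top-1 error rates err[corruption][s] for severity levels s = 1..5.
model_err = {
    "gaussian_noise": [0.40, 0.50, 0.62, 0.75, 0.85],
    "motion_blur":    [0.35, 0.45, 0.58, 0.70, 0.80],
}
alexnet_err = {
    "gaussian_noise": [0.55, 0.65, 0.78, 0.88, 0.94],
    "motion_blur":    [0.50, 0.62, 0.74, 0.84, 0.90],
}

def corruption_error(model, alexnet, corruption):
    """CE_c = (sum over severities of model error) / (same sum for AlexNet)."""
    return sum(model[corruption]) / sum(alexnet[corruption])

# mCE averages CE_c over all corruption types (15 in the full benchmark).
mce = sum(
    corruption_error(model_err, alexnet_err, c) for c in model_err
) / len(model_err)
print(f"mCE: {100 * mce:.1f}%")  # lower is better; AlexNet scores 100% by construction
```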
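IMAGENET-P is scored differently: since each example is a sequence of gradually perturbed frames, the headline metric is the Flip Probability, the rate at which a model's top-1 prediction changes between consecutive frames. The sketch below assumes per-frame top-1 predictions have already been computed, and the prediction sequences are hypothetical; it covers the gradual (non-noise) perturbation case, where consecutive frames are compared.

```python
# Sketch of the ImageNet-P Flip Probability for gradual perturbation
# sequences: the fraction of consecutive-frame pairs whose top-1
# predictions differ. The prediction sequences below are placeholders.

def flip_probability(prediction_sequences):
    """Fraction of consecutive-frame pairs with differing top-1 predictions."""
    flips = 0
    pairs = 0
    for preds in prediction_sequences:
        for prev, curr in zip(preds, preds[1:]):
            flips += int(prev != curr)
            pairs += 1
    return flips / pairs

# Two hypothetical sequences of top-1 class indices over gradually
# perturbed frames (e.g., increasing tilt).
sequences = [
    [207, 207, 207, 208, 207],  # two flips: 207 -> 208 and 208 -> 207
    [62, 62, 62, 62, 62],       # perfectly stable predictions
]
print(f"Flip probability: {flip_probability(sequences):.3f}")  # 2 flips / 8 pairs
```

A robust classifier should keep its prediction stable under such imperceptible frame-to-frame changes, so lower Flip Probability is better.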