This paper establishes rigorous benchmarks for image classifier robustness, introducing two new datasets: ImageNet-C and ImageNet-P. ImageNet-C evaluates classifiers' robustness to common visual corruptions, while ImageNet-P assesses their robustness to common perturbations. The authors find negligible changes in relative corruption robustness from AlexNet to ResNet classifiers. They also identify methods that enhance corruption and perturbation robustness, including histogram equalization, multiscale architectures, and larger feature-aggregating models. Notably, an adversarial defense designed for $\ell_{\infty}$ perturbations provides substantial robustness to common perturbations. The benchmarks and findings aim to aid future research on networks that generalize robustly.
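As a rough illustration of how "relative corruption robustness" can be quantified, the sketch below computes a Corruption Error in the spirit of the ImageNet-C benchmark: a model's error summed over corruption severities, normalized by AlexNet's error on the same corruption. The error rates here are hypothetical placeholders, not numbers from the paper.

```python
def corruption_error(model_errors, alexnet_errors):
    """Corruption Error (CE) for one corruption type:
    the model's top-1 error summed across severity levels,
    normalized by AlexNet's summed error on the same corruption."""
    return sum(model_errors) / sum(alexnet_errors)

# Hypothetical top-1 error rates at severities 1-5 for one corruption type
resnet_errors = [0.30, 0.42, 0.55, 0.68, 0.80]
alexnet_errors = [0.55, 0.65, 0.75, 0.85, 0.92]

ce = corruption_error(resnet_errors, alexnet_errors)
# ce < 1 means the model is more robust than AlexNet on this corruption;
# averaging CE over all corruption types yields a mean CE (mCE).
```

Because CE is normalized by AlexNet's error, a model can have much lower absolute error yet a similar *relative* score, which is how the paper can report negligible relative gains from AlexNet to ResNet.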