29 Oct 2019 | Dan Hendrycks, Mantas Mazeika*, Saurav Kadavath*, Dawn Song
The paper "Using Self-Supervised Learning Can Improve Model Robustness and Uncertainty" by Dan Hendrycks, Mantas Mazeika, Saurav Kadavath, and Dawn Song explores the benefits of self-supervised learning (SSL) in enhancing model robustness and uncertainty estimation. The authors argue that while SSL has traditionally been viewed as a means to reduce the need for annotations, it can also significantly improve model robustness to adversarial examples, label corruptions, and common input corruptions. Additionally, SSL is found to be particularly effective in out-of-distribution (OOD) detection, even surpassing fully supervised methods on challenging, near-distribution outliers.
The study demonstrates that SSL can improve robustness without necessarily improving clean accuracy. For instance, adding an auxiliary rotation-prediction task yields substantial gains in robustness to adversarial perturbations, common corruptions, and label corruptions. For OOD detection, the self-supervised approach outperforms both standard supervised methods and other self-supervised baselines, achieving higher AUROC values across a range of anomaly datasets.
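For the OOD-detection side, one way to illustrate the idea is to treat a high rotation-prediction loss as evidence that an input is out-of-distribution and to measure separation with AUROC. The sketch below reuses the hypothetical `rotate_batch`, `model`, and `rot_head` from the training sketch above; it illustrates the general approach rather than the paper's exact scoring rule.

```python
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.metrics import roc_auc_score

@torch.no_grad()
def rotation_score(model, rot_head, x):
    """Anomaly score: mean rotation-prediction loss over the four rotations.

    Inputs the network cannot 'explain' with the learned self-supervised task
    get higher scores and are flagged as more likely out-of-distribution.
    """
    rot_x, rot_y = rotate_batch(x)                     # from the training sketch
    _, feats = model(rot_x)
    losses = F.cross_entropy(rot_head(feats), rot_y, reduction='none')
    return losses.view(4, x.size(0)).mean(dim=0).cpu().numpy()   # one score per input

def auroc(in_scores, out_scores):
    """AUROC with out-of-distribution samples treated as the positive class."""
    labels = np.concatenate([np.zeros(len(in_scores)), np.ones(len(out_scores))])
    scores = np.concatenate([in_scores, out_scores])
    return roc_auc_score(labels, scores)
```

With scores computed on an in-distribution test set and on an anomaly dataset, `auroc(in_scores, out_scores)` returns the detection AUROC, where 0.5 is chance and 1.0 is perfect separation.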
The authors conclude that SSL can be a valuable tool for improving model robustness and uncertainty estimation, suggesting that future research should focus on integrating SSL with task-specific methods to further enhance these aspects. The paper also provides detailed experimental setups and results supporting the effectiveness of SSL in these areas.