Theoretically Principled Trade-off between Robustness and Accuracy

24 Jun 2019 | Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric P. Xing, Laurent El Ghaoui, Michael I. Jordan
The paper identifies a trade-off between robustness and accuracy in the presence of adversarial examples, a critical issue in machine learning. It decomposes the robust (classification) error into the natural error and the boundary error, and derives a differentiable upper bound on these terms using the theory of classification-calibrated losses. This analysis motivates a new defense method, TRADES, which explicitly trades adversarial robustness off against natural accuracy. Evaluated on real-world datasets, TRADES achieves state-of-the-art performance on standard benchmarks; notably, it won first place in the NeurIPS 2018 Adversarial Vision Challenge, outperforming the runner-up by 11.41% in mean $\ell_2$ perturbation distance. The paper also presents the theoretical foundations and experimental results, demonstrating the effectiveness of TRADES under both black-box and white-box attacks.
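The decomposition $\mathcal{R}_{\mathrm{rob}}(f) = \mathcal{R}_{\mathrm{nat}}(f) + \mathcal{R}_{\mathrm{bdy}}(f)$ translates in practice into a training objective with a natural cross-entropy term plus a KL-divergence "boundary" term weighted by a trade-off coefficient (1/$\lambda$, often written $\beta$), where the inner maximization searches for a nearby point with maximally different predictions. Below is a minimal PyTorch-style sketch of such an objective under these assumptions; the function name `trades_loss` and the hyperparameter values are illustrative, not the authors' reference implementation.

```python
# Minimal sketch of a TRADES-style loss, assuming a PyTorch classifier
# that maps images in [0, 1] to class logits. Names and defaults here
# (trades_loss, beta=6.0, epsilon=0.031, ...) are illustrative assumptions.
import torch
import torch.nn.functional as F

def trades_loss(model, x_natural, y, step_size=0.007, epsilon=0.031,
                num_steps=10, beta=6.0):
    """Natural cross-entropy plus beta * KL boundary term (inner maximization by PGD)."""
    model.eval()
    # Start from a small random perturbation of the natural example.
    x_adv = x_natural.detach() + 0.001 * torch.randn_like(x_natural)
    p_natural = F.softmax(model(x_natural), dim=1).detach()

    # Inner maximization: push x_adv toward the decision boundary by
    # maximizing KL(f(x) || f(x')) with projected gradient steps in an l_inf ball.
    for _ in range(num_steps):
        x_adv.requires_grad_()
        loss_kl = F.kl_div(F.log_softmax(model(x_adv), dim=1),
                           p_natural, reduction='batchmean')
        grad = torch.autograd.grad(loss_kl, [x_adv])[0]
        x_adv = x_adv.detach() + step_size * torch.sign(grad.detach())
        x_adv = torch.min(torch.max(x_adv, x_natural - epsilon),
                          x_natural + epsilon).clamp(0.0, 1.0)

    model.train()
    x_adv = x_adv.detach()
    # Outer minimization: natural error term + beta * boundary (robustness) term.
    logits_natural = model(x_natural)
    loss_natural = F.cross_entropy(logits_natural, y)
    loss_robust = F.kl_div(F.log_softmax(model(x_adv), dim=1),
                           F.softmax(logits_natural, dim=1),
                           reduction='batchmean')
    return loss_natural + beta * loss_robust
```

Larger $\beta$ puts more weight on the boundary term and hence on robustness, at some cost in natural accuracy; smaller $\beta$ does the opposite, which is the trade-off the paper analyzes.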