Adversarial Training for Free!


20 Nov 2019 | Ali Shafahi, Mahyar Najibi, Amin Ghiasi, Zheng Xu, John Dickerson, Christoph Studer, Larry S. Davis, Gavin Taylor, Tom Goldstein
The paper introduces a novel adversarial training algorithm that significantly reduces the computational cost of generating adversarial examples, making it feasible for large-scale datasets like ImageNet. Traditional adversarial training, which involves generating adversarial examples for each gradient update, is highly resource-intensive. The proposed "free" adversarial training method updates both model parameters and image perturbations using a single backward pass, achieving comparable robustness to standard PGD adversarial training with much less computational overhead. This approach can be 3-30 times faster than other strong adversarial training methods and can train a robust model for ImageNet classification with 40% accuracy against PGD attacks on a single workstation with four P100 GPUs in about two days. The method maintains the same computational cost as natural training and does not compromise generalization accuracy significantly. The paper also discusses the trade-offs between robustness and generalization, showing that the proposed method achieves good robustness without substantial accuracy loss.
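
The core idea, reusing one backward pass to update both the weights and the perturbation, admits a compact implementation. The sketch below is a minimal PyTorch-style illustration, assuming each minibatch is replayed m times and the perturbation is warm-started across batches; the function name, hyperparameter defaults (epsilon, m), and input clipping are illustrative assumptions rather than the authors' released code.

    import torch
    import torch.nn.functional as F

    def free_adv_train_epoch(model, loader, optimizer, epsilon=8/255, m=4, device="cpu"):
        # One epoch of "free" adversarial training: each minibatch is replayed
        # m times, and a single backward pass per replay supplies both the
        # parameter gradient and the input gradient that grows the perturbation.
        model.train()
        delta = None  # perturbation is warm-started (reused) across minibatches
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            if delta is None or delta.shape != x.shape:
                delta = torch.zeros_like(x)  # reset on shape change (e.g. last batch)
            for _ in range(m):  # m "free" replays of the same minibatch
                delta.requires_grad_(True)
                loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
                optimizer.zero_grad()
                loss.backward()  # one backward pass: gradients for weights AND delta
                grad = delta.grad.detach()
                # ascent step on the perturbation, projected back into the L-inf ball
                delta = (delta.detach() + epsilon * grad.sign()).clamp(-epsilon, epsilon)
                # descent step on the model parameters using the same gradients
                optimizer.step()

In the paper, the total cost is kept close to natural training by reducing the number of epochs when m replays per minibatch are used, so the overall number of gradient computations stays roughly constant.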