Countering Adversarial Images using Input Transformations

25 Jan 2018 | Chuan Guo, Mayank Rana, Moustapha Cissé & Laurens van der Maaten
This paper explores strategies for defending image-classification systems against adversarial attacks by transforming inputs before feeding them to the classifier. The authors investigate several image transformations: bit-depth reduction, JPEG compression, total variance minimization, and image quilting. Experiments on the ImageNet dataset show that total variance minimization and image quilting are the most effective defenses, particularly when the convolutional network is trained on transformed images. Because these two transformations are non-differentiable and inherently random, they are difficult for an adversary to circumvent. The best defense eliminates 60% of strong gray-box attacks and 90% of strong black-box attacks. The paper also evaluates the defenses in both gray-box and black-box settings and compares them with prior work, demonstrating superior performance against a range of adversarial attack methods.
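To make the pixel-space transformations concrete, below is a minimal sketch of the two simplest ones, bit-depth reduction and a JPEG encode/decode round trip (total variance minimization and image quilting are more involved optimization procedures and are omitted). This is an illustrative reconstruction assuming NumPy and Pillow, not the authors' released implementation; the function names and default settings (3 bits, JPEG quality 75) are assumptions chosen for demonstration.

```python
import io

import numpy as np
from PIL import Image


def reduce_bit_depth(image: np.ndarray, bits: int = 3) -> np.ndarray:
    """Quantize each 8-bit channel to 2**bits levels, then rescale to [0, 255].

    Coarse quantization discards the small pixel perturbations that
    many adversarial attacks rely on. (Hypothetical helper, for illustration.)
    """
    levels = 2 ** bits
    quantized = np.floor(image.astype(np.float32) / 256.0 * levels)
    return np.clip(quantized * (255.0 / (levels - 1)), 0, 255).astype(np.uint8)


def jpeg_round_trip(image: np.ndarray, quality: int = 75) -> np.ndarray:
    """Encode and decode the image as JPEG.

    Lossy compression removes much of the high-frequency component of an
    adversarial perturbation. (Hypothetical helper, for illustration.)
    """
    buffer = io.BytesIO()
    Image.fromarray(image).save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    return np.asarray(Image.open(buffer).convert("RGB"))


if __name__ == "__main__":
    # Stand-in for a (possibly adversarial) 224x224 RGB input.
    x = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)
    x_defended = jpeg_round_trip(reduce_bit_depth(x, bits=3), quality=75)
    print(x_defended.shape, x_defended.dtype)  # (224, 224, 3) uint8
```

In the setting the paper studies, the transformed image `x_defended`, rather than the raw input, would be passed to the convolutional network, ideally one trained on similarly transformed images.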