Poisoning Attacks against Support Vector Machines


25 Mar 2013 | Battista Biggio, Blaine Nelson, Pavel Laskov
This paper presents a poisoning attack against Support Vector Machines (SVMs), in which an attacker injects specially crafted training data to increase the SVM's test error. The attack exploits the assumption, made by most learning algorithms, that training data is drawn from a natural, well-behaved distribution; in security-sensitive settings this assumption can easily be violated. The proposed attack uses a gradient ascent strategy whose gradient is derived from the properties of the SVM's optimal solution, so the attack point can be constructed directly in input space even for non-linear kernels. In practice, the method reliably finds good local maxima of the non-convex validation error surface, significantly increasing the classifier's test error.

Because the attack is built on the optimality conditions of the SVM training problem, the attacker can predict how the optimal solution changes as a crafted attack point is inserted into the training set. The method is fully kernelized, so it works with both linear and non-linear kernels. Experiments on artificial and real data show that even a single attack point can significantly increase the classification error, highlighting the vulnerability of SVMs to poisoning attacks.

The paper also discusses the implications of these results, arguing that resistance against adversarial training data should be considered in the design of learning algorithms. Compared with previous work, the proposed approach offers a more practical way to optimize the impact of data-driven attacks against kernel-based learning algorithms. The paper concludes with directions for future work, including the simultaneous optimization of multi-point attacks and the incorporation of real-world inverse feature-mapping problems.
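To make the gradient-ascent idea concrete, here is a minimal sketch in Python. It is a simplified illustration, not the paper's exact method: instead of the analytic gradient derived from the SVM's optimality conditions, it approximates the gradient of a validation hinge loss by finite differences, and the scikit-learn classifier, kernel choice, step size, and attack-point initialization are all illustrative assumptions. Labels are assumed to be in {-1, +1}.

```python
# Sketch of a single-point gradient-ascent poisoning attack on an SVM.
# NOTE: this is an illustrative approximation, not the paper's analytic gradient;
# the finite-difference loop retrains the SVM for every coordinate perturbation.
import numpy as np
from sklearn.svm import SVC


def validation_hinge_loss(clf, X_val, y_val):
    # Average hinge loss of the trained classifier on a held-out validation set.
    margins = y_val * clf.decision_function(X_val)
    return np.mean(np.maximum(0.0, 1.0 - margins))


def poison_svm(X_tr, y_tr, X_val, y_val, x_c, y_c,
               step=0.05, n_iter=50, eps=1e-3):
    """Greedily move one attack point x_c (with fixed label y_c) uphill on the
    validation loss of an SVM retrained on the poisoned training set."""
    x_c = x_c.astype(float).copy()
    y_aug = np.append(y_tr, y_c)
    for _ in range(n_iter):
        # Retrain the SVM with the current attack point injected.
        clf = SVC(kernel="rbf", C=1.0).fit(np.vstack([X_tr, x_c]), y_aug)
        base = validation_hinge_loss(clf, X_val, y_val)

        # Finite-difference estimate of d(validation loss)/d(x_c).
        grad = np.zeros_like(x_c)
        for j in range(x_c.size):
            x_pert = x_c.copy()
            x_pert[j] += eps
            clf_p = SVC(kernel="rbf", C=1.0).fit(np.vstack([X_tr, x_pert]), y_aug)
            grad[j] = (validation_hinge_loss(clf_p, X_val, y_val) - base) / eps

        # Take a normalized gradient-ascent step on the validation loss.
        norm = np.linalg.norm(grad)
        if norm < 1e-12:
            break
        x_c += step * grad / norm
    return x_c
```

Each iteration retrains the SVM with the current attack point injected and then nudges the point uphill on the validation loss; the analytic gradient in the paper, obtained from the structure of the optimal SVM solution, makes this far cheaper than the repeated retraining used in this sketch.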