3 Oct 2016 | Florian Tramèr, Fan Zhang, Ari Juels, Michael K. Reiter, Thomas Ristenpart
This paper presents model extraction attacks that steal machine learning (ML) models from cloud-based ML-as-a-service (MLaaS) platforms. The attacks exploit the fact that prediction APIs often return confidence values along with class labels, which an adversary can use to infer the model's parameters. The authors demonstrate that these attacks extract models with near-perfect fidelity for popular model classes, including logistic regression, neural networks, and decision trees. They further show that even when confidence values are omitted from model outputs, attacks based on adaptive querying and retraining can still be effective. The paper also discusses the implications of model extraction, including the potential leakage of sensitive training data and the facilitation of evasion attacks in security settings, and it examines countermeasures such as rounding or withholding confidence values. Experimental results on real-world MLaaS platforms, including BigML and Amazon Machine Learning, demonstrate the practicality of the attacks and underscore the need for careful model deployment and stronger defenses against model extraction.
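For the logistic regression case, the parameter-inference step can be illustrated with a small sketch: because the returned confidence equals sigma(w·x + b), each query yields one linear equation in the unknown weights, so roughly d+1 queries suffice to recover them. The toy target model, feature dimension, and API wrapper below are assumptions for illustration, not code or an interface from the paper.

```python
import numpy as np

# Hypothetical target: a logistic regression model with unknown weights w and bias b.
# The simulated "prediction API" returns the confidence value sigma(w.x + b) for a query x.
rng = np.random.default_rng(0)
d = 5                                  # assumed number of input features
w_true = rng.normal(size=d)
b_true = rng.normal()

def query_api(x):
    """Stand-in for an MLaaS prediction API that returns a confidence score."""
    return 1.0 / (1.0 + np.exp(-(x @ w_true + b_true)))

# Equation-solving extraction: each confidence p satisfies
#   log(p / (1 - p)) = w.x + b,
# which is linear in the unknowns (w, b). In principle d+1 queries suffice;
# a few extra queries and a least-squares solve add numerical stability.
n_queries = d + 5
X = rng.normal(size=(n_queries, d))
p = np.array([query_api(x) for x in X])
A = np.hstack([X, np.ones((n_queries, 1))])    # columns for w plus one for b
y = np.log(p / (1 - p))                        # inverse sigmoid (logit)
theta, *_ = np.linalg.lstsq(A, y, rcond=None)

w_hat, b_hat = theta[:d], theta[-1]
print("max weight error:", np.max(np.abs(w_hat - w_true)))
print("bias error:", abs(b_hat - b_true))
```

The recovered weights match the target almost exactly, which mirrors the paper's observation that confidence outputs turn extraction of such models into a simple equation-solving problem; withholding or rounding those confidences removes this direct route but, as the authors show, does not fully prevent extraction.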