Membership Inference Attacks Against Machine Learning Models

31 Mar 2017 | Reza Shokri, Marco Stronati, Congzheng Song, Vitaly Shmatikov
This paper investigates the fundamental problem of membership inference against machine learning models: given a data record and black-box query access to a model, determine whether the record was part of the model's training dataset. The authors propose to perform membership inference by training an attack model to distinguish the target model's behavior on inputs it was trained on from its behavior on inputs it has never seen. Because the attacker does not know the target's training data, the attack model is trained using shadow models: models built to mimic the target's behavior, but whose training datasets, and therefore ground-truth membership labels, are known to the attacker. The authors evaluate the technique on classification models trained by commercial "machine learning as a service" providers such as Google and Amazon, using realistic datasets and tasks. They show that these models can be highly vulnerable to membership inference, especially multi-class classifiers trained on retail transaction data. The paper also analyzes the root causes of this leakage, notably overfitting, and evaluates several mitigation strategies. The results highlight the serious privacy risk that membership inference poses for sensitive datasets, such as hospital discharge records, and identify the factors that influence how much membership information a model leaks.
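To make the shadow-model pipeline concrete, here is a minimal sketch, assuming synthetic data stands in for the attacker's shadow data (drawn from the same distribution as the target's training set) and using illustrative scikit-learn models (RandomForestClassifier for the shadows, LogisticRegression for the attack model); collapsing the paper's per-class attack models into a single model that takes the true class as a feature is a simplification:

```python
# Sketch of a shadow-model membership inference attack.
# All model and data choices are illustrative, not the paper's exact setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Synthetic stand-in for data from the target's distribution.
X, y = make_classification(n_samples=6000, n_features=20,
                           n_informative=10, n_classes=5, random_state=0)

n_shadows, shadow_size = 5, 500
attack_X, attack_y = [], []  # (prediction vector, true class) -> in/out

for i in range(n_shadows):
    # Disjoint "in" (training) and "out" (held-out) splits per shadow.
    idx = rng.choice(len(X), 2 * shadow_size, replace=False)
    in_idx, out_idx = idx[:shadow_size], idx[shadow_size:]

    shadow = RandomForestClassifier(n_estimators=50, random_state=i)
    shadow.fit(X[in_idx], y[in_idx])

    # Record the shadow's confidence vectors on members (label 1) and
    # non-members (label 0); these labeled examples train the attack model.
    for split, label in ((in_idx, 1), (out_idx, 0)):
        probs = shadow.predict_proba(X[split])
        attack_X.append(np.column_stack([probs, y[split]]))
        attack_y.append(np.full(len(split), label))

attack_X = np.vstack(attack_X)
attack_y = np.concatenate(attack_y)

# The attack model learns to tell "seen in training" from "not seen"
# purely from the shape of the prediction vector.
attack_model = LogisticRegression(max_iter=1000).fit(attack_X, attack_y)

def infer_membership(target, x, true_class):
    """Query the black-box target on a candidate record and classify
    its prediction vector as member (1) or non-member (0)."""
    probs = target.predict_proba(x.reshape(1, -1))
    features = np.column_stack([probs, [true_class]])
    return attack_model.predict(features)[0]

# Example: a "target" whose internals the attacker never sees.
target = RandomForestClassifier(n_estimators=50, random_state=99)
t_idx = rng.choice(len(X), shadow_size, replace=False)
target.fit(X[t_idx], y[t_idx])
print(infer_membership(target, X[t_idx[0]], y[t_idx[0]]))  # expect 1
```

In the paper, a separate attack model is trained for each output class, and the shadow training data can itself be synthesized by querying the target; the sketch above collapses those details for brevity.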