1 Nov 2018 | Luca Melis, Congzheng Song, Emiliano De Cristofaro, Vitaly Shmatikov
The paper "Exploiting Unintended Feature Leakage in Collaborative Learning" by Luca Melis explores the privacy risks associated with collaborative machine learning, particularly federated learning. The authors demonstrate that model updates during collaborative training can leak unintended information about participants' training data, leading to membership inference and property inference attacks. Membership inference involves inferring whether specific data points were used in the training, while property inference involves inferring properties of the training data that are not directly related to the model's main task. The paper evaluates these attacks on various datasets and tasks, showing high accuracy in inferring sensitive information such as specific locations, authorship, and even the appearance of certain individuals in photos. The authors also discuss the limitations of current defenses and propose active multi-task learning as a way for adversaries to further exploit the model's internal representations. The experiments highlight the need for more robust privacy mechanisms in collaborative learning to protect sensitive data.The paper "Exploiting Unintended Feature Leakage in Collaborative Learning" by Luca Melis explores the privacy risks associated with collaborative machine learning, particularly federated learning. The authors demonstrate that model updates during collaborative training can leak unintended information about participants' training data, leading to membership inference and property inference attacks. Membership inference involves inferring whether specific data points were used in the training, while property inference involves inferring properties of the training data that are not directly related to the model's main task. The paper evaluates these attacks on various datasets and tasks, showing high accuracy in inferring sensitive information such as specific locations, authorship, and even the appearance of certain individuals in photos. The authors also discuss the limitations of current defenses and propose active multi-task learning as a way for adversaries to further exploit the model's internal representations. The experiments highlight the need for more robust privacy mechanisms in collaborative learning to protect sensitive data.