Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning


14 Sep 2017 | Briland Hitaj, Giuseppe Ateniese, Fernando Perez-Cruz
Deep learning has become popular for its ability to perform end-to-end learning, in which features and the classifier are learned jointly rather than engineered separately, leading to improved classification accuracy. Centralized training, however, raises privacy concerns, since it requires pooling sensitive user data on a single server. Collaborative deep learning aims to address this by letting each participant train on its own data locally and share only a subset of model parameters with the others; the shared parameters can additionally be obfuscated with differential privacy (DP).
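To make the sharing step concrete, below is a minimal sketch of one participant's upload under such a protocol, assuming selective gradient sharing with Laplace noise. The function name, the share fraction, the clipping bound, and the privacy budget are illustrative choices, not values taken from the paper.

```python
# Hypothetical sketch of one collaborative-learning round from a participant's
# side: train locally, then upload only the largest gradients, clipped and
# perturbed with Laplace noise in the spirit of differentially private sharing.
import numpy as np

def noisy_gradient_upload(local_grads, share_fraction=0.1, clip=0.001, epsilon=1.0):
    """Return (indices, noisy values) for the selected subset of gradients."""
    flat = local_grads.ravel()
    k = max(1, int(share_fraction * flat.size))
    idx = np.argsort(np.abs(flat))[-k:]        # share only the k largest gradients
    clipped = np.clip(flat[idx], -clip, clip)  # bound each value's sensitivity
    noise = np.random.laplace(scale=2 * clip / epsilon, size=k)
    return idx, clipped + noise                # what gets sent to the other parties

# Example with a stand-in gradient vector from one local SGD step
grads = np.random.randn(10_000) * 1e-3
indices, values = noisy_gradient_upload(grads)
print(f"uploading {indices.size} of {grads.size} parameters")
```

The paper's central observation is that noise added at this granularity protects individual records in a participant's dataset, but not the higher-level class information that the jointly trained model ends up encoding.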
The authors show that this protection is ineffective against a new attack based on Generative Adversarial Networks (GANs). The adversary is an insider: as a regular participant it already has white-box access to the jointly trained model, and it exploits the real-time, iterative nature of collaborative learning. Using the shared model as the discriminator, it trains a local generator whose samples increasingly mimic the private training data of another participant. In the active variant of the attack, the adversary also injects these generated samples into its own training updates under a different label, coaxing the victim into releasing ever finer-grained information about the targeted class; this makes the attack more effective than earlier, passive inference attacks. The attack works in both centralized (parameter-server) and decentralized settings, applies to a range of models including convolutional neural networks (CNNs), and succeeds even when the shared parameters are obfuscated with record-level DP, provided the collaboratively trained model reaches sufficient accuracy. A minimal sketch of the adversary's loop is given at the end of this summary.

The authors emphasize that the attack does not violate the formal guarantees of DP; rather, it exposes the limits of applying record-level DP to parameter sharing in collaborative learning, where the leaked information is a representative of an entire class rather than any single record. They conclude that this form of collaborative learning is, in this respect, less privacy-friendly than centralized learning, since any participant can compromise the privacy of every other participant, and they suggest that secure aggregation protocols may offer stronger protection than parameter obfuscation alone. The paper also reviews related work on attacks against machine learning models and on privacy-preserving training, and closes by calling for more robust privacy-preserving techniques for collaborative settings.
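As a concrete illustration of the attack loop described above, here is a heavily simplified sketch from the insider's point of view, written in PyTorch. The network shapes, the number of classes, the target class, and the artificial class used for re-injection are assumptions made for the example; this is not the authors' implementation.

```python
# Simplified sketch of the GAN-based insider attack: the adversary treats the
# shared collaborative model as a frozen discriminator, trains a local generator
# to produce samples the model assigns to the victim's class, and (in the
# active variant) re-injects those samples under an artificial label.
import torch
import torch.nn as nn

n_classes = 11         # victim's 10 classes plus one artificial class (assumed)
target_class = 3       # class whose private samples the adversary wants to mimic
artificial_class = 10  # label under which generated samples are re-injected

# Stand-in for the jointly trained model the adversary downloads each round.
discriminator = nn.Sequential(
    nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, n_classes)
)
for p in discriminator.parameters():
    p.requires_grad_(False)  # the adversary only reads the shared parameters

# Local generator, trained only by the adversary.
generator = nn.Sequential(
    nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 28 * 28), nn.Tanh()
)
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
loss_fn = nn.CrossEntropyLoss()

for collaboration_round in range(5):
    # 1) In a real attack, the freshly shared parameters would be loaded into
    #    `discriminator` here.
    # 2) Train the generator to make the shared model predict the target class.
    for _ in range(100):
        z = torch.randn(64, 100)
        logits = discriminator(generator(z))
        g_loss = loss_fn(logits, torch.full((64,), target_class, dtype=torch.long))
        g_opt.zero_grad()
        g_loss.backward()
        g_opt.step()
    # 3) Active step: label fresh generated samples with the artificial class and
    #    mix them into the adversary's local training data before the next round,
    #    pushing the victim to reveal finer detail about the real target class.
    injected_samples = generator(torch.randn(64, 100)).detach()
    injected_labels = torch.full((64,), artificial_class, dtype=torch.long)
```

Run over successive collaboration rounds, a loop of this shape gradually yields samples that resemble the victim's private training data, which is the information leakage the paper demonstrates.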