Inverting Gradients - How easy is it to break privacy in federated learning?

11 Sep 2020 | Jonas Geiping, Hartmut Bauermeister, Hannah Dröge, Michael Moeller
The paper "Inverting Gradients - How easy is it to break privacy in federated learning?" by Jonas Geiping, Hartmut Bauermeister, Hannah Dröge, and Michael Moeller, explores the security of parameter gradient sharing in federated learning. Federated learning is a distributed learning paradigm where multiple users collaboratively train a neural network by sharing parameter updates (gradients) instead of raw data, aiming to protect user privacy. However, the authors demonstrate that this approach is not as secure as previously thought. The study reveals that it is possible to reconstruct images at high resolution from parameter gradients, even for trained deep networks. This is achieved by exploiting magnitude-invariant losses and optimization strategies based on adversarial attacks. The authors show that any input to a fully connected layer can be reconstructed analytically, independent of the network architecture. They also analyze the effects of architecture and parameters on the difficulty of reconstruction and conclude that averaging gradients over multiple iterations or images does not significantly enhance privacy. The paper includes a theoretical analysis and numerical reconstruction methods, demonstrating the effectiveness of their approach on various network architectures, including deep and non-smooth ones. Empirical results using modern computer vision architectures for image classification show that image recovery is feasible in realistic settings, even with deep and non-smooth architectures. The authors conclude that differential privacy remains the only provable way to guarantee security in federated learning, but it significantly reduces model accuracy, highlighting the need for further research on privacy-preserving techniques.The paper "Inverting Gradients - How easy is it to break privacy in federated learning?" by Jonas Geiping, Hartmut Bauermeister, Hannah Dröge, and Michael Moeller, explores the security of parameter gradient sharing in federated learning. Federated learning is a distributed learning paradigm where multiple users collaboratively train a neural network by sharing parameter updates (gradients) instead of raw data, aiming to protect user privacy. However, the authors demonstrate that this approach is not as secure as previously thought. The study reveals that it is possible to reconstruct images at high resolution from parameter gradients, even for trained deep networks. This is achieved by exploiting magnitude-invariant losses and optimization strategies based on adversarial attacks. The authors show that any input to a fully connected layer can be reconstructed analytically, independent of the network architecture. They also analyze the effects of architecture and parameters on the difficulty of reconstruction and conclude that averaging gradients over multiple iterations or images does not significantly enhance privacy. The paper includes a theoretical analysis and numerical reconstruction methods, demonstrating the effectiveness of their approach on various network architectures, including deep and non-smooth ones. Empirical results using modern computer vision architectures for image classification show that image recovery is feasible in realistic settings, even with deep and non-smooth architectures. The authors conclude that differential privacy remains the only provable way to guarantee security in federated learning, but it significantly reduces model accuracy, highlighting the need for further research on privacy-preserving techniques.