Inverting Gradients - How easy is it to break privacy in federated learning?

11 Sep 2020 | Jonas Geiping, Hartmut Bauermeister, Hannah Dröge, Michael Moeller
Federated learning allows collaborative training of neural networks while keeping user data private. This study shows, however, that the parameter gradients shared during training can be used to reconstruct the input data, compromising that privacy. By exploiting a magnitude-invariant loss together with adversarial optimization strategies, the researchers demonstrate that high-resolution images can be reconstructed from gradients, even in trained deep networks.

They further analyze how network architecture and parameters affect the difficulty of reconstruction, proving that the input to a fully connected layer can be reconstructed analytically regardless of the rest of the network. Practical experiments show that even averaging gradients over multiple iterations or over multiple images does not fully protect user privacy: several images can still be reconstructed from a single averaged gradient.

The findings highlight the vulnerability of federated learning to gradient inversion attacks and suggest that differential privacy remains the only method offering guaranteed protection during training. More broadly, the study shows that common assumptions about the privacy benefits of federated learning do not always hold, underlining the need for stronger privacy-preserving techniques.
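To make the attack concrete, the sketch below shows the core idea of optimization-based gradient inversion in PyTorch: starting from random noise, an input is optimized so that the gradients it induces match the observed gradients under a magnitude-invariant (cosine) objective, with a total-variation prior to keep the result image-like. This is a minimal illustration, not the authors' released implementation; the function names, the assumption of a known label, and the hyperparameters (step count, learning rate, TV weight) are illustrative choices.

```python
import torch
import torch.nn.functional as F

def cosine_gradient_loss(dummy_grads, true_grads):
    # 1 - cosine similarity between the two gradient vectors, computed over all
    # parameter tensors, so the objective ignores the gradients' overall magnitude.
    dot, dummy_norm, true_norm = 0.0, 0.0, 0.0
    for dg, tg in zip(dummy_grads, true_grads):
        dot += (dg * tg).sum()
        dummy_norm += dg.pow(2).sum()
        true_norm += tg.pow(2).sum()
    return 1.0 - dot / (dummy_norm.sqrt() * true_norm.sqrt())

def invert_gradients(model, true_grads, label, img_shape, steps=2000, tv_weight=1e-4):
    # Optimize a random input so that its gradients match the observed ones.
    x = torch.randn(1, *img_shape, requires_grad=True)
    optimizer = torch.optim.Adam([x], lr=0.1)
    for _ in range(steps):
        optimizer.zero_grad()
        model.zero_grad()
        loss = F.cross_entropy(model(x), label)
        dummy_grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        rec_loss = cosine_gradient_loss(dummy_grads, true_grads)
        # Total-variation regularizer: penalizes differences between neighboring pixels.
        tv = (x[..., :, 1:] - x[..., :, :-1]).abs().mean() + \
             (x[..., 1:, :] - x[..., :-1, :]).abs().mean()
        (rec_loss + tv_weight * tv).backward()
        optimizer.step()
    return x.detach()
```

Here `model`, `true_grads` (the gradients observed by the server), `label`, and `img_shape` are assumed to be available to the attacker; in practice the label can often be inferred from the gradients themselves.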
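The analytic result for fully connected layers can also be illustrated directly. For a layer y = Wx + b, the weight gradient is the outer product of the bias gradient and the input, dL/dW = (dL/dy) xᵀ and dL/db = dL/dy, so the input x can be recovered by dividing a row of dL/dW by the matching bias-gradient entry. The snippet below is a hypothetical illustration using PyTorch autograd; the function name and the toy verification are assumptions for demonstration only.

```python
import torch

def reconstruct_fc_input(grad_W, grad_b, eps=1e-8):
    # dL/dW = (dL/dy) x^T and dL/db = dL/dy, so any row of dL/dW divided by the
    # corresponding bias-gradient entry recovers the layer input x.
    idx = grad_b.abs().argmax()  # pick a row whose bias gradient is non-vanishing
    return grad_W[idx] / (grad_b[idx] + eps)

# Toy check: recover the input of a linear layer from its gradients alone.
layer = torch.nn.Linear(16, 4)
x = torch.randn(16)
loss = layer(x).pow(2).sum()
grad_W, grad_b = torch.autograd.grad(loss, (layer.weight, layer.bias))
x_rec = reconstruct_fc_input(grad_W, grad_b)
print(torch.allclose(x, x_rec, atol=1e-4))  # True
```

Because this derivation only uses the layer's own gradients, the reconstruction works regardless of what the rest of the network looks like, which is the point the paper makes about fully connected layers.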
[slides and audio] Inverting Gradients - How easy is it to break privacy in federated learning?