Deep Leakage from Gradients

19 Dec 2019 | Ligeng Zhu, Zhijian Liu, Song Han
Deep Leakage from Gradients (DLG) exposes a privacy risk in distributed machine learning: gradients shared between workers can reveal the private training data that produced them. This paper shows that an attacker who observes the shared gradients can recover the original data, and with higher fidelity than previous attacks, producing pixel-wise accurate images and token-wise matching texts. The attack optimizes randomly initialized dummy inputs and dummy labels so that their gradients match the real shared gradients; as the match tightens, the dummy data converges to the original training data. Experiments on vision and language tasks show that DLG can fully recover training data within a few gradient-matching iterations. To defend against DLG, strategies such as adding noise to gradients, gradient compression, and gradient pruning are evaluated; gradient pruning proves the most effective, sharply reducing the attack's success. The paper argues that the safety of sharing gradients in multi-node training systems needs to be rethought.
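The gradient-matching procedure can be sketched in a few lines of PyTorch. The sketch below is illustrative rather than the authors' reference implementation: the function name `dlg_attack`, the hyperparameters, and the soft-label cross-entropy formulation are assumptions; it assumes only a differentiable `model` and the leaked gradient tensors `true_grads` given in `model.parameters()` order.

```python
import torch
import torch.nn.functional as F

def dlg_attack(model, true_grads, input_shape, num_classes, iters=300, lr=1.0):
    """Recover a training example from leaked gradients (DLG sketch)."""
    # Dummy input and dummy (soft) label, optimized jointly.
    dummy_x = torch.randn(input_shape, requires_grad=True)
    dummy_y = torch.randn(1, num_classes, requires_grad=True)

    optimizer = torch.optim.LBFGS([dummy_x, dummy_y], lr=lr)

    for _ in range(iters):
        def closure():
            optimizer.zero_grad()
            pred = model(dummy_x)
            # Cross-entropy against the softmaxed dummy label, so the label
            # can be optimized as a continuous variable alongside the input.
            loss = torch.mean(torch.sum(
                -F.softmax(dummy_y, dim=-1) * F.log_softmax(pred, dim=-1),
                dim=-1))
            dummy_grads = torch.autograd.grad(
                loss, model.parameters(), create_graph=True)
            # Objective: make the dummy gradients match the leaked ones.
            grad_diff = sum(((dg - tg) ** 2).sum()
                            for dg, tg in zip(dummy_grads, true_grads))
            grad_diff.backward()
            return grad_diff

        optimizer.step(closure)

    return dummy_x.detach(), F.softmax(dummy_y, dim=-1).detach()
```

For testing, a victim's leaked gradients can be simulated by computing `torch.autograd.grad(criterion(model(x), y), model.parameters())` on a real example and passing the result as `true_grads`.

Of the defenses discussed, gradient pruning zeroes out the smallest-magnitude gradient entries before they are shared. A minimal sketch, with the pruning ratio as an assumed parameter:

```python
def prune_gradients(grads, prune_ratio=0.7):
    """Zero the smallest-magnitude entries of each gradient tensor
    before sharing it (illustrative defense sketch)."""
    pruned = []
    for g in grads:
        k = int(g.numel() * prune_ratio)
        if k > 0:
            # Magnitude threshold below which entries are dropped.
            threshold = g.abs().flatten().kthvalue(k).values
            g = torch.where(g.abs() <= threshold, torch.zeros_like(g), g)
        pruned.append(g)
    return pruned
```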
Understanding Deep Leakage from Gradients