Privacy Leakage on DNNs: A Survey of Model Inversion Attacks and Defenses

11 Sep 2024 | Hao Fang, Yixiang Qiu, Hongyao Yu, Wenbo Yu, Jiawei Kong, Baoli Chong, Bin Chen, Member, IEEE, Xuan Wang, Member, IEEE, Shu-Tao Xia, Member, IEEE, and Ke Xu, Fellow, IEEE
This paper provides a comprehensive survey of Model Inversion (MI) attacks and defenses on Deep Neural Networks (DNNs). MI attacks pose a significant privacy threat: by reconstructing private training data from pre-trained models, an adversary can recover sensitive information. The paper begins with an overview of early MI studies in traditional machine learning (ML) settings, followed by a detailed analysis of recent MI attacks and defenses on DNNs across multiple data modalities and learning tasks. It organizes these methods under a novel taxonomy and discusses promising research directions and potential solutions to open issues. To facilitate further research, an open-source model inversion toolbox has been implemented and made available on GitHub. The survey covers attacker capabilities, reconstructed data modalities, learning tasks, and evaluation metrics, providing a holistic view of the field.
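To make the threat concrete, a minimal sketch of a gradient-based model inversion attack is shown below: given white-box access to a trained classifier, the attacker performs gradient ascent on the input to maximize the model's confidence in a target class, yielding a class-representative reconstruction. The toy linear model, the variable names, and the hyperparameters here are assumptions for illustration, not the specific methods surveyed in the paper.

```python
import numpy as np

# Illustrative toy setup (an assumption, not a method from the survey):
# a "pre-trained" linear softmax classifier with logits = W @ x + b.
rng = np.random.default_rng(0)
n_classes, n_features = 3, 8
W = rng.normal(size=(n_classes, n_features))
b = np.zeros(n_classes)

def softmax(z):
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def invert(target_class, steps=200, lr=0.5):
    """Gradient ascent on log p(target_class | x) w.r.t. the input x.

    For a softmax model, d log p_t / dx = W[t] - sum_c p_c * W[c].
    """
    x = rng.normal(scale=0.01, size=n_features)  # start from near-zero noise
    for _ in range(steps):
        p = softmax(W @ x + b)
        grad = W[target_class] - p @ W
        x += lr * grad
        x = np.clip(x, -1.0, 1.0)  # keep the reconstruction in a valid range
    return x, softmax(W @ x + b)[target_class]

x_rec, conf = invert(target_class=1)
print("confidence in target class:", float(conf))
```

Real MI attacks on DNNs follow the same optimization principle but typically search in the latent space of a generative prior (e.g., a GAN) so that reconstructions stay on the natural data manifold, rather than optimizing raw pixels directly.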