Lightweight Privacy Protection via Adversarial Sample

26 March 2024 | Guangxu Xie, Gaopan Hou, Qingqi Pei, Haibo Huang
This paper addresses the challenge of lightweight privacy protection using adversarial samples, particularly on local devices with limited computational capabilities. Traditional adversarial sample-based privacy protections often rely on large, centralized models, which can be impractical for user devices. To overcome this, the authors propose a method that combines model pruning techniques with adversarial sample privacy protections. Specifically, they use structural pruning to reduce the parameter count of deep learning models, making them more suitable for local deployment. Two structural pruning-based adversarial sample privacy protections (SP-ASPPs) are designed: one based on the existing AttrGuard method and another based on MemGuard. These protections allow users to generate adversarial noise locally, reducing the computational burden on their devices.

The effectiveness of the proposed methods is evaluated on four datasets through extensive experiments. The results show that the pruned models maintain high accuracy, while the attack models' inference accuracy decreases significantly, demonstrating the effectiveness of the proposed SP-ASPPs. The paper also discusses the trade-offs between model size, accuracy, and defense effectiveness, providing insights into the optimal pruning ratios and noise budgets for different datasets.

Key contributions of the paper include:

- Introducing a lightweight adversarial sample privacy protection scheme that leverages structural pruning to reduce the computational burden on user devices.
- Designing two structural pruning-based adversarial sample privacy protections (SP-ASPPs) that are more suitable for local deployment.
- Demonstrating the effectiveness of the proposed methods through experiments on real-world datasets.
The paper concludes by highlighting the potential of the proposed approach to enhance privacy protection while maintaining computational efficiency, making it a valuable contribution to the field of privacy-preserving machine learning.
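The other ingredient, adversarial noise on a model's outputs, can also be illustrated in a few lines. The sketch below adds bounded random noise to a confidence vector while keeping it a valid probability distribution and preserving the predicted label, a MemGuard-style utility constraint. Random noise here is only a stand-in for the adversarially optimized noise the actual defenses compute; the `budget` parameter and rejection-sampling loop are assumptions for illustration.

```python
import numpy as np

def add_confidence_noise(conf, budget, rng):
    """Perturb a confidence vector under an L1 noise budget, keeping it
    summing to 1, non-negative, and with the top-1 label unchanged.
    Illustrative random noise, not an adversarially optimized vector."""
    label = int(np.argmax(conf))
    for _ in range(1000):                  # rejection-sample a valid noise
        noise = rng.uniform(-1, 1, size=conf.shape)
        noise -= noise.mean()              # keep the vector summing to 1
        noise *= budget / max(np.abs(noise).sum(), 1e-12)
        noisy = conf + noise
        if noisy.min() >= 0 and int(np.argmax(noisy)) == label:
            return noisy
    return conf                            # fall back to no noise

rng = np.random.default_rng(1)
conf = np.array([0.7, 0.2, 0.1])
noisy = add_confidence_noise(conf, budget=0.1, rng=rng)
```

The noise budget is the trade-off knob the paper studies: a larger budget degrades an inference attacker's accuracy more, but distorts the confidence scores the legitimate user sees.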