Lightweight Privacy Protection via Adversarial Sample

26 March 2024 | Guangxu Xie, Gaopan Hou, Qingqi Pei, Haibo Huang
This paper proposes two structural pruning-based adversarial sample privacy protection schemes (SP-ASPPs) that reduce the computational and storage cost of adversarial-sample-based privacy protection, making it practical to deploy on users' local devices. Adversarial samples are inputs perturbed so that deep learning models misclassify them; they can protect user privacy by adding crafted noise to a user's data before it is released (a minimal sketch of this kind of noise generation appears below). Traditional adversarial-sample-based privacy protection methods, however, rely on large deep learning models that are difficult to deploy on resource-constrained devices. To address this, the authors apply structural pruning, which shrinks a model's parameter count by removing entire structured components such as convolutional filters. Building on two existing adversarial-sample-based privacy protection methods, AttriGuard and MemGuard, they design two new schemes in which pruned models are deployed locally on the user's device and used to generate the adversarial noise that is added to the user's data. The schemes are evaluated on four datasets, showing that the pruned models maintain high accuracy while significantly reducing computational and storage requirements.
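As a rough illustration of how adversarial noise can protect a private attribute (not the authors' exact AttriGuard or MemGuard procedures), the PyTorch sketch below uses a single FGSM step to perturb a user's record so that an attribute-inference model misclassifies it. The model architecture, input shape, and epsilon are illustrative assumptions.

```python
# A minimal FGSM-style sketch of adversarial-noise generation for privacy.
# The stand-in inference model, input shape, and epsilon are assumptions,
# not the paper's configuration.
import torch
import torch.nn as nn

def fgsm_noise(model, x, true_label, epsilon=0.05):
    """One FGSM step: perturb x to increase the inference model's loss on
    the user's true private attribute, pushing it toward misclassification."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), true_label)
    loss.backward()
    # The noise follows the sign of the input gradient, bounded by epsilon.
    return epsilon * x.grad.sign()

# Illustrative stand-in for a (pruned) attribute-inference model.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)   # a user's data record (hypothetical shape)
y = torch.tensor([3])          # the true private attribute
protected = (x + fgsm_noise(model, x, y)).clamp(0.0, 1.0)
```

The perturbed record, rather than the raw one, is what the user would release; an attacker's inference model then sees data crafted to mislead it.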
The results show that the pruned models achieve accuracy close to the original models, with minimal degradation, while significantly reducing parameter count and computational cost, making them better suited to deployment on local devices. The inference accuracy of the attack also drops, indicating that the pruned models still provide effective privacy protection. Together, the experiments demonstrate that the proposed SP-ASPPs achieve local privacy protection with high accuracy and low computational overhead, and suggest that structural pruning is a promising way to make adversarial-sample-based privacy protection efficient and feasible; a sketch of the filter-level pruning behind such savings follows.
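The parameter and compute savings come from removing whole filters rather than individual weights. The sketch below shows one hedged example of L1-norm filter pruning on a pair of convolutional layers in PyTorch; the layer sizes, keep ratio, and importance criterion are assumptions for illustration, not the paper's setup.

```python
# A minimal sketch of structural (filter-level) pruning using an L1-norm
# importance criterion. Layer sizes and keep_ratio are illustrative.
import torch
import torch.nn as nn

def prune_conv_pair(conv1, conv2, keep_ratio=0.5):
    """Remove conv1's output filters with the smallest L1 norms and drop
    the matching input channels of conv2, shrinking both layers."""
    n_keep = max(1, int(conv1.out_channels * keep_ratio))
    # L1 norm of each output filter, summed over in-channels and kernel dims.
    importance = conv1.weight.detach().abs().sum(dim=(1, 2, 3))
    keep = importance.argsort(descending=True)[:n_keep]

    new1 = nn.Conv2d(conv1.in_channels, n_keep, conv1.kernel_size,
                     conv1.stride, conv1.padding)
    new1.weight.data = conv1.weight.data[keep].clone()
    new1.bias.data = conv1.bias.data[keep].clone()

    new2 = nn.Conv2d(n_keep, conv2.out_channels, conv2.kernel_size,
                     conv2.stride, conv2.padding)
    new2.weight.data = conv2.weight.data[:, keep].clone()
    new2.bias.data = conv2.bias.data.clone()
    return new1, new2

conv1 = nn.Conv2d(3, 16, 3, padding=1)
conv2 = nn.Conv2d(16, 32, 3, padding=1)
p1, p2 = prune_conv_pair(conv1, conv2)
out = p2(p1(torch.rand(1, 3, 32, 32)))  # the pruned pair still composes
```

Because entire filters are removed, the pruned layers are genuinely smaller dense tensors, so the savings show up directly in storage and inference cost on the device, unlike unstructured weight pruning, which leaves the tensor shapes unchanged.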