Federated Unlearning for Human Activity Recognition

17 Jan 2024 | Kongyang Chen, Dongping Zhang, Yaping Chai, Weibin Zhang, Shaowei Wang, Jiaxing Shen
This paper proposes a lightweight federated unlearning method for Human Activity Recognition (HAR) that lets a client remove its training data from the global model without compromising the privacy of other clients. The method relies on a third-party dataset that played no part in model training and uses KL divergence as the loss function, aligning the model's predicted probability distribution on the data to be forgotten with its distribution on the third-party data. A membership inference evaluation method is introduced to assess how thoroughly the data has been unlearned. Because unlearning runs only on the forgetting client, the method requires no participation from other clients and conserves communication resources.

Experiments on two HAR datasets and the MNIST dataset show that the unlearned model reaches accuracy comparable to retraining from scratch while achieving speedups of several hundred to several thousand times. The paper also analyzes how the choice of third-party data affects unlearning effectiveness and confirms that the method removes the client's data from the model while preserving overall performance. The authors position this as a pioneering federated unlearning solution for HAR, contributing to stronger privacy protection in this domain.
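To make the core idea concrete, the sketch below shows one way the KL-divergence objective could look in PyTorch. This is a minimal sketch under assumptions, not the authors' code: the function name unlearn_client_data, the data loaders, and the hyperparameters are all hypothetical, and the paper's full federated pipeline (aggregation, membership inference evaluation) is omitted.

```python
import torch
import torch.nn.functional as F


def unlearn_client_data(model, forget_loader, third_party_loader,
                        epochs=5, lr=1e-4, device="cpu"):
    """Fine-tune `model` so that its predictions on the forgotten data
    resemble its predictions on third-party data it was never trained on."""
    model.to(device)
    model.train()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)

    for _ in range(epochs):
        for (x_forget, _), (x_third, _) in zip(forget_loader, third_party_loader):
            # Keep batches the same size so the per-sample KL terms line up.
            n = min(x_forget.size(0), x_third.size(0))
            x_forget = x_forget[:n].to(device)
            x_third = x_third[:n].to(device)

            # Reference distribution: the model's behaviour on data that
            # never participated in training (no gradient flows through it).
            with torch.no_grad():
                ref_probs = F.softmax(model(x_third), dim=1)

            # Current predicted distribution on the data to be forgotten.
            log_probs = F.log_softmax(model(x_forget), dim=1)

            # KL divergence between the third-party reference distribution and
            # the model's distribution on the forgotten samples; minimizing it
            # pushes predictions on forgotten data toward "unseen" behaviour.
            loss = F.kl_div(log_probs, ref_probs, reduction="batchmean")

            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

    return model
```

In such a setup, the forgetting client would run this local step and then upload the updated parameters to the server in place of the old ones; how well the data has actually been forgotten could then be checked with the membership inference evaluation described in the paper.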