22 Jan 2024 | Xunkai Li, Yulin Zhao, Zhengyu Wu, Wentao Zhang, Rong-Hua Li, Guoren Wang
This paper proposes a new graph unlearning method called Mutual Evolution Graph Unlearning (MEGU), which simultaneously evolves the predictive and unlearning capacities of a graph model. MEGU addresses the shortcomings of existing graph unlearning strategies, which often rely on carefully designed architectures or manual interventions, leading to inefficiency and limited generalization. The method introduces a unified training framework that ensures complementary optimization of the prediction and unlearning tasks. Extensive experiments on 9 graph benchmark datasets show that MEGU achieves superior performance on feature-, node-, and edge-level unlearning tasks, with average improvements of 2.7%, 2.5%, and 3.2% over state-of-the-art baselines. MEGU is also highly efficient, reducing time and space overhead by 159.8× and 9.6×, respectively, compared to retraining the GNN from scratch, and achieving training speedups of 4.5×–7.2×. The framework demonstrates mutual evolution between the predictive and unlearning modules, improving both performance and efficiency. The paper also discusses broader challenges in graph unlearning, including data privacy, model robustness, and the need for efficient, generalizable unlearning strategies. MEGU is model-agnostic and adaptable to various graph structures, making it a promising solution for effective and general graph unlearning.
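The unified, single-stage training idea can be illustrated with a small hedged sketch (not the authors' implementation): a linear classifier stands in for the GNN, and a single joint objective combines a prediction term on retained nodes with an unlearning term that drives forgotten nodes toward uniform predictions, so both capacities evolve together rather than in separate stages. The toy data, model, and loss choices here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy node features with a linearly learnable 2-class label.
X = rng.normal(size=(100, 8))
y = (X[:, 0] > 0).astype(int)
forget = np.zeros(100, dtype=bool)
forget[:10] = True          # nodes whose influence should be removed
retain = ~forget

W = np.zeros((8, 2))        # linear stand-in for the GNN's head
lr, lam = 0.5, 1.0          # lam balances prediction vs. unlearning

for _ in range(200):
    P = softmax(X @ W)
    # Prediction term: softmax cross-entropy on retained nodes
    # (gradient w.r.t. logits is P - one_hot(y)).
    G_pred = np.zeros_like(P)
    G_pred[retain, y[retain]] = -1.0
    G_pred[retain] += P[retain]
    # Unlearning term: cross-entropy toward the uniform target on
    # forgotten nodes (gradient is P - 0.5), erasing confident
    # predictions for them while the prediction term trains jointly.
    G_unl = P[forget] - 0.5
    grad = (X[retain].T @ G_pred[retain]) / retain.sum() \
         + lam * (X[forget].T @ G_unl) / forget.sum()
    W -= lr * grad

P = softmax(X @ W)
retain_acc = (P[retain].argmax(1) == y[retain]).mean()
retain_conf = P[retain].max(1).mean()
forget_conf = P[forget].max(1).mean()   # near 0.5 means "forgotten"
```

In this sketch the two gradient terms are summed into one update, so neither objective is optimized in isolation; the trade-off between retained accuracy and forgetting is controlled by the single weight `lam`.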