Towards Effective and General Graph Unlearning via Mutual Evolution


22 Jan 2024 | Xunkai Li1*, Yulin Zhao3*, Zhengyu Wu1, Wentao Zhang4,5, Rong-Hua Li1,2, Guoren Wang1
The paper "Towards Effective and General Graph Unlearning via Mutual Evolution" addresses the challenges of data privacy and model robustness in graph-based AI applications. It introduces Mutual Evolution Graph Unlearning (MEGU), a framework that jointly optimizes the predictive and unlearning capabilities of graph neural networks (GNNs). MEGU balances unlearning performance with framework generalization, ensuring the two objectives are optimized complementarily within a unified training framework.

MEGU's key contributions include:

1. **New Perspective**: The paper analyzes the limitations of existing graph unlearning (GU) strategies through the lens of two crucial modules: the predictive module and the unlearning module.
2. **New Method**: MEGU integrates these two modules through a topology-guided, mutually boosting mechanism. The predictive module maintains predictive accuracy while adapting the original model, and the unlearning module generates predictions for non-unlearning entities while removing the influence of unlearning entities.
3. **SOTA Performance**: Extensive experiments on 9 benchmark datasets demonstrate MEGU's superior performance on feature-, node-, and edge-level unlearning tasks, with average improvements of 2.7%, 2.5%, and 3.2%, respectively, over state-of-the-art baselines. MEGU is also highly efficient to train, reducing time and space overhead by an average of 159.8x and 9.6x, respectively, compared to retraining GNNs from scratch.

The paper also provides a comprehensive review of recent GU methods, highlighting their limitations and positioning MEGU as a more effective and efficient solution.
The experimental setup includes detailed descriptions of datasets, baselines, and unlearning targets, with thorough evaluation metrics and ablation studies to validate the effectiveness and robustness of MEGU.
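The core tension the summary describes — one objective preserving predictions on retained entities while another suppresses the influence of unlearned entities — can be illustrated with a deliberately simplified sketch. To be clear, this is *not* MEGU's actual algorithm (which couples two learned modules via a topology-guided mechanism); it is a toy single-parameter-vector classifier on a 6-node graph, and all numbers, the toward-0.5 "forgetting" target, and the `lam` weighting are illustrative assumptions.

```python
import numpy as np

# Toy undirected graph: 6 nodes, two communities {0,1,2} and {3,4,5}.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
# Feature 0 is label-aligned; feature 1 is a small nuisance signal.
X = np.array([[-1.0, 0.2], [-1.0, -0.1], [-1.0, 0.1],
              [1.0, 0.3], [1.0, -0.2], [1.0, 0.1]])
y = np.array([0, 0, 0, 1, 1, 1])
unlearn = np.array([2])            # node whose influence we want removed
retain = np.array([0, 1, 3, 4, 5])

# One-hop mean aggregation with self-loops (a minimal GNN-style propagation).
A_hat = A + np.eye(6)
H = (A_hat / A_hat.sum(1, keepdims=True)) @ X

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(2)
lam = 0.5                          # weight of the unlearning objective
for _ in range(300):
    p = sigmoid(H @ w)
    # Predictive objective: logistic-loss gradient on retained nodes only.
    g_keep = H[retain].T @ (p[retain] - y[retain]) / len(retain)
    # Unlearning objective: push the unlearned node's prediction toward
    # the uninformative 0.5, i.e., strip the model's confidence about it.
    g_forget = H[unlearn].T @ (p[unlearn] - 0.5) / len(unlearn)
    w -= 0.1 * (g_keep + lam * g_forget)

p = sigmoid(H @ w)
acc = float(((p[retain] > 0.5) == y[retain]).mean())
```

The two gradient terms pull against each other through the shared parameters, which is the simplest form of the "complementary optimization" tension described above; MEGU's contribution is resolving it with two cooperating modules rather than a single weighted loss.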