Cross-Context Backdoor Attacks against Graph Prompt Learning

August 25-29, 2024 | Xiaoting Lyu, Yufei Han, Wei Wang, Hangwei Qian, Ivor Tsang, Xiangliang Zhang
This paper introduces CrossBA, the first cross-context backdoor attack against Graph Prompt Learning (GPL), which exploits vulnerabilities in GPL systems by manipulating only the pretraining phase, without requiring knowledge of downstream applications. GPL bridges the gap between pretraining and downstream tasks by training GNN encoders on unlabeled data and tailoring prompts for downstream applications. However, backdoor attacks can be introduced during pretraining, leading downstream models to misclassify backdoored inputs as attacker-chosen target labels.

CrossBA achieves this by jointly optimizing trigger graphs and prompt transformations so that the backdoor threat transfers from the pretrained encoder to downstream applications. Through extensive experiments across five cross-context scenarios and five benchmark datasets, CrossBA consistently achieves high attack success rates while preserving the functionality of downstream applications on clean inputs. The study also explores potential countermeasures against CrossBA, revealing that current defenses are insufficient to mitigate the attack. Theoretical analysis and empirical results demonstrate that CrossBA effectively enhances backdoor transferability in cross-context GPL scenarios while maintaining model utility. The paper highlights the persistent backdoor threats to GPL systems and raises concerns about the trustworthiness of GPL techniques.
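The abstract describes the attack only at a high level. To make the trigger-graph idea concrete, the following is a minimal illustrative sketch (not the authors' implementation) of how a fixed trigger subgraph could be attached to an input graph's adjacency matrix and feature matrix; all function and parameter names here are hypothetical, and in CrossBA the trigger's structure and features would themselves be optimized during pretraining.

```python
import numpy as np

def attach_trigger(adj, feats, trig_adj, trig_feats, anchor=0):
    """Attach a trigger subgraph to a clean input graph (illustrative only).

    adj:        (n, n) adjacency matrix of the clean graph
    feats:      (n, d) node feature matrix of the clean graph
    trig_adj:   (k, k) adjacency matrix of the trigger subgraph
    trig_feats: (k, d) node features of the trigger subgraph
    anchor:     index of the clean node the trigger is wired to
    """
    n, k = adj.shape[0], trig_adj.shape[0]
    # Build a block-diagonal adjacency holding both graphs.
    new_adj = np.zeros((n + k, n + k), dtype=adj.dtype)
    new_adj[:n, :n] = adj
    new_adj[n:, n:] = trig_adj
    # Connect the first trigger node to the anchor node (undirected edge).
    new_adj[anchor, n] = new_adj[n, anchor] = 1
    # Stack the trigger node features below the clean ones.
    new_feats = np.vstack([feats, trig_feats])
    return new_adj, new_feats
```

The backdoored graph produced this way would then be fed to the (compromised) GNN encoder, whose embedding the attacker steers toward the target label's region; how the trigger is optimized and how prompt transformations preserve its effect downstream are the paper's actual contributions.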