LESSON: Multi-Label Adversarial False Data Injection Attack for Deep Learning Locational Detection

29 Jan 2024 | Jiwei Tian, Chao Shen, Buhong Wang, Xiaofang Xia, Meng Zhang, Chenhao Lin, and Qian Li
This paper addresses multi-label adversarial false data injection attacks (AFDIA) against power systems, targeting both attack detection and attack localization. The authors propose LESSON, a general multi-label adversarial attack framework built on three key components: perturbing state variables, a tailored loss function design, and a change of variables. Together, these components find multi-label adversarial perturbations that satisfy physical constraints while bypassing both Bad Data Detection (BDD) and Neural Attack Location (NAL) detection mechanisms. The paper explores four typical LESSON attacks defined along two dimensions of attack objectives: the induced estimation error and the desired locational detection result. Extensive experiments on the IEEE 14-bus, 30-bus, and 118-bus test systems demonstrate the effectiveness of the proposed framework, highlighting the serious security threat it poses to large-scale power systems. The results also show that the attack success rate depends on the initial FDIA attack scale and the learning rate of the Adam optimizer, with larger scales and higher learning rates generally yielding higher success rates.
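To make the attack loop concrete, below is a minimal PyTorch sketch of how the three components could fit together. This is an illustration under stated assumptions, not the paper's implementation: the measurement matrix `H`, the locational detector `nal_model`, the bound `eps`, and all tensor shapes are hypothetical placeholders.

```python
# Hypothetical sketch of a LESSON-style attack loop (assumed DC model and shapes).
import torch

def lesson_attack(z0, H, nal_model, target_labels, eps=0.1, lr=0.01, steps=200):
    """Craft a multi-label adversarial FDIA measurement vector.

    z0:            original measurement vector, shape (m,)
    H:             DC measurement matrix, shape (m, n); perturbing state
                   variables via a = H @ dc leaves the BDD residual unchanged
    nal_model:     neural locational detector, maps (m,) -> per-label logits
    target_labels: attacker's desired multi-label output in {0, 1}
    """
    m, n = H.shape
    # Change of variables: optimize w freely; dc = eps * tanh(w) stays within
    # the physical magnitude bound [-eps, eps] by construction.
    w = torch.zeros(n, requires_grad=True)
    opt = torch.optim.Adam([w], lr=lr)
    bce = torch.nn.BCEWithLogitsLoss()

    for _ in range(steps):
        dc = eps * torch.tanh(w)      # bounded state-variable perturbation
        a = H @ dc                    # structured FDIA vector: bypasses BDD
        logits = nal_model(z0 + a)
        # Tailored multi-label loss: push the detector's locational output
        # toward the attacker's chosen pattern (one of the four attack types).
        loss = bce(logits, target_labels.float())
        opt.zero_grad()
        loss.backward()
        opt.step()

    with torch.no_grad():
        return z0 + H @ (eps * torch.tanh(w))
```

The tanh change of variables (familiar from Carlini-Wagner-style attacks) enforces the physical constraint without explicit clipping, and constructing the injection as a = H·Δc keeps the state-estimation residual intact under the DC model, which is what lets the perturbed measurements slip past BDD while the loss steers the NAL output.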