Jiwei Tian, Chao Shen, Buhong Wang, Xiaofang Xia, Meng Zhang, Chenhao Lin, and Qian Li
This paper proposes LESSON, a general multi-label adversarial attack framework for crafting adversarial false data injection attacks (AFDIA) against deep learning-based locational detection systems. The framework comprises three key designs: Perturbing State Variables, Tailored Loss Function Design, and Change of Variables. Together, these designs find multi-label adversarial perturbations, within the physical constraints of the power system, that bypass both Bad Data Detection (BDD) and Neural Attack Location (NAL). Four typical LESSON attacks are analyzed, covering two attack objectives: one concerning state estimation errors and the other concerning locational detection results. Experimental results show that the framework is highly effective, achieving success rates of up to 100% even in large-scale power systems. The study exposes the vulnerability of deep learning-based locational detection to multi-label adversarial attacks, demonstrating that the LESSON framework poses serious security risks to smart grids and underscoring the need for robust defense mechanisms and effective countermeasures.
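To make the three designs concrete, the following is a minimal PyTorch sketch of one plausible instantiation of the attack loop; it is not the paper's implementation. The tanh reparameterization (in the style of Carlini-Wagner) realizing Change of Variables, the names `z` (measurement vector), `H` (linearized measurement matrix), `nal_model` (a trained multi-label NAL network assumed to emit per-location logits), `y_target` (the desired label vector, e.g., all zeros to hide the attack locations), and the bound `eps` are all illustrative assumptions, and the loss covers only the locational-evasion objective.

```python
import torch

def lesson_sketch(z, H, nal_model, y_target, eps=0.1, steps=500, lr=1e-2):
    """Illustrative sketch (assumptions noted above), not the paper's code:
    perturb the state variables, bound the perturbation via a tanh change
    of variables, and minimize a loss that misleads the multi-label NAL."""
    n = H.shape[1]
    w = torch.zeros(n, requires_grad=True)   # unconstrained optimization variable
    opt = torch.optim.Adam([w], lr=lr)
    bce = torch.nn.BCEWithLogitsLoss()       # multi-label objective over locations

    for _ in range(steps):
        # Change of Variables: tanh keeps the state perturbation within [-eps, eps].
        c = eps * torch.tanh(w)
        # Perturbing State Variables: an injection a = H @ c is consistent with
        # the linearized power-flow model, so the BDD residual of z is unchanged
        # and BDD is bypassed by construction.
        z_adv = z + H @ c
        # Tailored loss: steer the multi-label NAL output toward y_target.
        loss = bce(nal_model(z_adv), y_target.float())
        opt.zero_grad()
        loss.backward()
        opt.step()

    return (z + H @ (eps * torch.tanh(w))).detach()
```

Because the perturbation lives in the state-variable space, the measurement residual used by BDD is provably unaffected, which is why this sketch needs no explicit BDD penalty; a full tailored loss in the spirit of the paper would add terms for the chosen attack objective (e.g., a target state estimation error).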