Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition


October 24-28, 2016 | Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, Michael K. Reiter
This paper presents a novel class of attacks on state-of-the-art face recognition systems (FRSs) that are both physically realizable and inconspicuous. The attacks allow an attacker to evade recognition or to impersonate another individual. The researchers developed a systematic method for automatically generating such attacks in the form of printed eyeglass frames; when worn by the attacker, the frames cause the system to misidentify them. The study focuses on white-box FRSs but also demonstrates how similar techniques apply in black-box scenarios and can be used to avoid face detection altogether.

The paper explores two types of attack: dodging and impersonation. In dodging, the attacker seeks to be misclassified as any person other than themselves; in impersonation, the attacker seeks to be recognized as a specific other individual. The researchers tested their method on three DNNs (DNN_A, DNN_B, and DNN_C) and found that the eyeglass frames allowed subjects to succeed at least 80% of the time when attempting dodging. For impersonation, the frames allowed one subject to impersonate Milla Jovovich 87.87% of the time, a South-Asian female to impersonate a Middle-Eastern male 88% of the time, and a Middle-Eastern male to impersonate Clive Owen 16.13% of the time.

The paper also discusses the implications of these attacks, including the difficulty of detecting them and their potential for misuse. The researchers show that the attacks can be physically realized using 3D or 2D printing technologies and that the perturbations can be made smooth and consistent so the frames look unremarkable. The study highlights the importance of understanding the vulnerabilities of FRSs and the need for robust defenses against such attacks.
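The core idea the summary describes — optimizing a perturbation that is confined to an eyeglass-shaped region of the image so that the classifier's output shifts toward a target identity — can be illustrated with a minimal sketch. The toy linear-softmax classifier, the flat `mask`, and the hyperparameters below are illustrative assumptions for this sketch; the paper itself optimizes against deep networks with additional smoothness and printability constraints.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over class logits."""
    e = np.exp(z - z.max())
    return e / e.sum()

def impersonation_attack(W, x, mask, target, steps=300, lr=0.05):
    """Gradient-based perturbation restricted to an eyeglass-shaped mask.

    W:      (classes, features) weights of a toy linear face classifier
            (a hypothetical stand-in for the paper's DNNs)
    x:      flattened face image with pixel values in [0, 1]
    mask:   binary array, 1 where the eyeglass frames cover the face
    target: class index the attacker wants to be recognized as
    """
    x_adv = x.copy()
    for _ in range(steps):
        p = softmax(W @ x_adv)
        # Gradient of log p(target) w.r.t. the input of a linear-softmax
        # model: W[target] minus the probability-weighted average row.
        grad = W[target] - p @ W
        x_adv += lr * grad * mask          # perturb only inside the frames
        x_adv = np.clip(x_adv, 0.0, 1.0)   # keep pixels physically printable
    return x_adv
```

Dodging is the same loop with the sign flipped: descend on the attacker's own class probability instead of ascending on a target's. Restricting the update to `mask` is what makes the perturbation wearable rather than an image-wide change.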
The results demonstrate that these attacks can be effective in real-world scenarios, emphasizing the need for further research and development in this area.