October 24-28, 2016, Vienna, Austria | Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, Michael K. Reiter
This paper investigates the vulnerability of facial biometric systems to attacks that are both physically realizable and inconspicuous. The authors define and study a novel class of attacks in which an attacker evades recognition or impersonates another individual by altering their physical appearance, specifically through the use of eyeglass frames. They develop a systematic method for generating such attacks, designed either to go unnoticed by human observers or to appear natural. The attacks are evaluated against state-of-the-art face-recognition algorithms in both white-box and black-box settings, and the authors also demonstrate how to evade face detection entirely. The paper emphasizes that attacks which are not readily apparent to humans are particularly pernicious and resistant to investigation. The authors conclude by discussing the implications and limitations of their approach, including the trade-offs between security and usability in deployed systems.
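The core idea behind such physically constrained attacks is to optimize an adversarial perturbation that is confined to the region an accessory can cover. The sketch below is a minimal illustration of that idea only, using a hypothetical toy linear softmax model rather than the paper's face-recognition networks: gradient ascent on the target class's probability, with updates applied only inside a fixed "frame" mask.

```python
import numpy as np

# Illustrative sketch of mask-restricted adversarial perturbation.
# All names here (toy linear model W, 64-pixel "image") are assumptions
# for demonstration, not the paper's actual models or data.
rng = np.random.default_rng(0)
n_pixels, n_classes = 64, 3
W = rng.normal(size=(n_classes, n_pixels))   # toy linear classifier weights
x0 = rng.uniform(0.0, 1.0, size=n_pixels)    # flattened "face image"
mask = np.zeros(n_pixels)
mask[:16] = 1.0                              # pixels the accessory may alter
target = 2                                   # impersonation target class

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def target_grad(x):
    # Gradient of log p(target | x) w.r.t. x for a linear softmax model.
    p = softmax(W @ x)
    return W[target] - p @ W

x = x0.copy()
for _ in range(200):
    # Ascend on the target probability, but only inside the mask,
    # and keep pixel values in the valid [0, 1] range.
    x = np.clip(x + 0.05 * mask * target_grad(x), 0.0, 1.0)

print(softmax(W @ x0)[target], softmax(W @ x)[target])  # target prob before vs. after
```

Pixels outside the mask are untouched, mirroring the constraint that only the eyeglass-frame region is attacker-controlled; the paper's method additionally optimizes for smoothness and printability, which this sketch omits.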