17 Jun 2024 | Robert Hönig, Javier Rando, Nicholas Carlini, Florian Tramèr
The paper evaluates the effectiveness of popular protections against style mimicry in generative AI, tools that aim to prevent unauthorized imitation of artists' unique styles. Despite significant media attention and over 1 million downloads, the authors find that these protections provide a false sense of security. They demonstrate that low-effort, off-the-shelf techniques, such as image upscaling, are sufficient to build robust mimicry methods that significantly degrade all existing protections. Through a user study, they show that every protection they evaluate can be easily bypassed, leaving artists vulnerable to style mimicry. The authors caution that adversarial perturbations cannot reliably protect artists from the misuse of generative AI and urge the development of alternative protective solutions.
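To make the "off-the-shelf upscaling" point concrete, here is a minimal sketch of how a publicly available diffusion upscaler could be used to purify a protected artwork before ordinary style fine-tuning. This is only an illustration under stated assumptions, not the authors' exact pipeline: the checkpoint name, resize factor, and file paths are assumptions chosen for the example.

```python
# Illustrative sketch only: one way an off-the-shelf diffusion upscaler might be
# used to "purify" a protected image before style fine-tuning. Not the authors'
# exact method; model name, resize factor, and paths are assumed for the example.
import torch
from PIL import Image
from diffusers import StableDiffusionUpscalePipeline

# Load a publicly available 4x upscaler (assumed checkpoint).
pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")

def purify(path: str) -> Image.Image:
    """Downscale a protected artwork, then regenerate detail with the upscaler.

    Downscaling discards most of the high-frequency adversarial perturbation;
    the upscaler then re-synthesizes plausible detail, producing a clean-looking
    image that can be used for standard fine-tuning.
    """
    img = Image.open(path).convert("RGB")
    small = img.resize((img.width // 4, img.height // 4), Image.LANCZOS)
    # A generic or empty prompt suffices; the goal is only image restoration.
    return pipe(prompt="", image=small).images[0]

cleaned = purify("protected_artwork.png")  # hypothetical input file
cleaned.save("purified_artwork.png")
```

The point of the sketch is that no adaptive attack or optimization is needed: a single pass through a widely available model is enough to undermine perturbation-based protections of this kind.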