July 8, 2024 | Kai-Cheng Yang*, Danishjeet Singh, and Filippo Menczer
This paper presents a systematic analysis of fake Twitter profiles that use faces created with Generative Adversarial Networks (GANs) as profile pictures. The authors identify 1,420 such accounts engaged in inauthentic activities including spamming, scamming, and coordinated message amplification. Their detection method exploits the consistent eye placement in GAN-generated faces, combined with human annotation. Applied to a random sample of 254,275 active Twitter users, it yields a prevalence estimate of between 0.021% and 0.044%, corresponding to roughly 10,000 daily active accounts. These findings highlight the growing threat posed by multimodal generative AI.

The study also shows that GAN-generated faces often exhibit telltale artifacts, such as unrealistic accessories or background elements, and the authors distill these observations into practical heuristics that social media users can apply to recognize such accounts. A dataset and code are released for further investigation, underscoring the need for continued research and improved detection methods. The findings suggest that generative AI tools, particularly GANs, are being widely used to create fake personas on social media, posing challenges to the integrity of online interactions, and point to the importance of effective detection strategies and greater AI literacy among social media users.
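The eye-placement idea rests on the fact that StyleGAN-family generators are trained on aligned face crops, so the eyes of generated faces land at nearly fixed pixel coordinates, while real profile photos vary widely. A minimal sketch of such a check follows; the canonical coordinates and tolerance are illustrative assumptions, not the authors' exact values, and eye centers are assumed to come from a separate landmark detector:

```python
# Sketch of an eye-placement consistency check (illustrative only).
# Canonical positions and tolerance below are assumed values, not the
# exact parameters used by the paper's authors.
from math import dist

REF_SIZE = 1024            # StyleGAN reference output resolution
CANONICAL_LEFT = (385, 480)   # assumed canonical left-eye center
CANONICAL_RIGHT = (640, 480)  # assumed canonical right-eye center
TOLERANCE = 15             # max pixel deviation to count as GAN-consistent

def is_gan_consistent(left_eye, right_eye, img_size):
    """Return True if the detected eye centers (pixel coords in a square
    image of side img_size) match the canonical GAN positions after
    rescaling to the reference resolution."""
    scale = REF_SIZE / img_size
    left = (left_eye[0] * scale, left_eye[1] * scale)
    right = (right_eye[0] * scale, right_eye[1] * scale)
    return (dist(left, CANONICAL_LEFT) <= TOLERANCE
            and dist(right, CANONICAL_RIGHT) <= TOLERANCE)

# A 400x400 avatar whose eyes sit exactly where a rescaled GAN face
# would place them is flagged; an off-center real photo is not.
print(is_gan_consistent((150, 188), (250, 188), 400))  # GAN-like placement
print(is_gan_consistent((120, 150), (230, 160), 400))  # irregular placement
```

In practice such a geometric filter would only be a first pass; as the summary notes, the authors combined it with human annotation, since real photos can coincidentally match the canonical alignment.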