This study examines the prevalence of, attitudes toward, and knowledge about non-consensual synthetic intimate imagery (NSII), also known as deepfake pornography, across 10 countries. Using survey data from over 16,000 respondents, the research finds that deepfake pornography is considered harmful by most people, despite limited societal awareness. Approximately 2.2% of respondents reported being victims of deepfake pornography, while 1.8% reported perpetrating such content. Despite the existence of specific legislation in some countries, NSII laws are insufficient to deter perpetration. The study suggests that digital literacy education and enforced platform policies could help prevent and reduce the harms of NSII.
The research highlights that deepfake pornography is a form of image-based sexual abuse, as it involves the creation and sharing of intimate images without consent. The harms of such abuse include negative impacts on mental health, career prospects, and social interactions, and women continue to bear the brunt of this online abuse. The study also finds that gender influences attitudes and behaviors related to deepfake pornography, with men more likely than women to create and share such content. However, the effect size is small, and men and women do not differ significantly in the degree to which they believe deepfake pornography deserves criminalization.
The study also explores the prevalence of deepfake pornography behaviors, including viewing, creating, and sharing such content. The results show that the most common behavior is viewing deepfake pornography, particularly of celebrities, while creation is rare, with only 1.8% of respondents reporting perpetration. The study further finds that the risk of being threatened with deepfake dissemination is significantly higher for men than for women.
The qualitative analysis of open-ended responses reveals that the most common theme is the condemnation of deepfake pornography, with over 43% of respondents expressing this view. Women are more likely to condemn deepfake pornography than men. The second most common theme is a fear of technology, particularly deepfakes and artificial intelligence. The third most common theme is punitive and justice attitudes, with men being slightly more likely to express these attitudes than women. The fourth most common theme is victim-blaming, with only about 4% of respondents indicating such attitudes.
The study concludes that while deepfake pornography is considered harmful by most people, the prevalence of such content is relatively low. It highlights the need for comprehensive legislation, technical tools to help victims, and user experience treatments on deepfake creation tools to deter users from creating and distributing NSII. It also emphasizes the importance of digital literacy education to help people understand the risks and consequences of deepfake pornography.