How persuasive is AI-generated propaganda?

2024 | Josh A. Goldstein, Jason Chao, Shelby Grossman, Alex Stamos, Michael Tomz
Can large language models generate persuasive propaganda? This study investigates whether AI-generated content can be as persuasive as content from real-world propaganda campaigns. In a preregistered survey experiment, the researchers compared news articles written by foreign propagandists with content generated by GPT-3, a large language model. The experiment involved 8,221 US respondents, who were asked how strongly they agreed or disagreed with thesis statements drawn from six propaganda topics.

GPT-3-generated propaganda proved highly persuasive: 43.5% of respondents agreed or strongly agreed with the thesis statements, compared to 24.4% in the control group. When a person fluent in English curated GPT-3's output or edited the prompt, the AI-generated content became as persuasive as the original propaganda. GPT-3 also performed well on measures of perceived credibility and writing style, suggesting that AI-generated content could blend into online information environments.

The authors caution that these results may represent a lower bound on the persuasive potential of large language models, since newer models may perform even better. The findings suggest that propagandists could use AI to create convincing content with minimal effort, and that human-machine teaming strategies could make that content more persuasive still. The study highlights the need for research on strategies to guard against the misuse of AI for propaganda, and on detecting AI-generated content to mitigate its impact on democratic processes.
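To make the headline comparison concrete, the sketch below shows one standard way to test a difference in agreement rates like the 43.5% vs. 24.4% reported above: a two-proportion z-test. The per-group sample sizes are hypothetical placeholders (the summary reports only the overall N of 8,221, and the paper's actual preregistered analysis is more involved), so this illustrates the kind of comparison being made, not the authors' method.

```python
import math

def two_proportion_ztest(k1: int, n1: int, k2: int, n2: int) -> tuple[float, float]:
    """Two-sided two-proportion z-test for H0: p1 == p2.

    k1, k2: number of 'agree or strongly agree' responses in each group.
    n1, n2: group sizes.
    Returns (z statistic, two-sided p-value).
    """
    p1, p2 = k1 / n1, k2 / n2
    # Pooled proportion under the null hypothesis of equal rates.
    p_pool = (k1 + k2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the normal distribution: 2*(1 - Phi(|z|)) = erfc(|z|/sqrt(2)).
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical group sizes, NOT from the paper; counts are chosen so the
# observed rates match the reported 43.5% (GPT-3 condition) and 24.4% (control).
n_treated, n_control = 1000, 1000
k_treated = round(0.435 * n_treated)
k_control = round(0.244 * n_control)

z, p = two_proportion_ztest(k_treated, n_treated, k_control, n_control)
print(f"difference in agreement rates: {k_treated/n_treated - k_control/n_control:.3f}")
print(f"z = {z:.2f}, two-sided p = {p:.2g}")
```

At these assumed sample sizes a 19-point gap yields z of roughly 9, far beyond conventional significance thresholds, which is why a difference of this magnitude is treated as substantively large rather than statistical noise.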