2024 | Almog Simchon, Matthew Edwards, and Stephan Lewandowsky
The paper "The persuasive effects of political microtargeting in the age of generative artificial intelligence" by Almog Simchon, Matthew Edwards, and Stephan Lewandowsky explores the potential misuse of large language models, such as ChatGPT, in scaling microtargeting efforts for political purposes. The authors conduct four studies to examine the effectiveness of personalized political ads tailored to individuals' personalities. The results show that personality-congruent political ads are more effective than non-personalized ads. Additionally, the studies demonstrate the feasibility of automatically generating and validating these personalized ads on a large scale. The findings highlight the potential risks of using AI and microtargeting to craft political messages that resonate with individuals based on their personality traits, emphasizing the need for ethical scrutiny and policy-oriented solutions to govern the use of AI in shaping public opinion and electoral integrity. The paper also discusses the implications of these findings for transparency, user empowerment, and the design of interventions to enhance people's ability to detect manipulation efforts.