Ethics of generative AI and manipulation: a design-oriented research agenda

Accepted: 9 January 2024 / Published online: 3 February 2024 | Michael Klenk
The article "Ethics of Generative AI and Manipulation: A Design-Oriented Research Agenda" by Michael Klenk explores the ethical implications of generative AI, focusing in particular on the risks of manipulation. Despite growing discussion of AI ethics, specific manipulation risks remain under-investigated. The article outlines essential lines of inquiry along conceptual, empirical, and design dimensions to understand and mitigate these risks, and it emphasizes that an appropriate conceptualization of manipulation is needed to ensure the responsible development of generative AI technologies.

The introduction highlights the dual nature of generative AI: while it offers promising applications in areas such as health interventions and public policy, it also poses significant risks of manipulation. The article argues that effective influence can shade into manipulation, and that the existing AI ethics literature often fails to address design questions effectively. It therefore calls for a clear understanding of manipulation to inform both design choices and regulatory requirements.

The article then turns to the design-for-values approach, which aims to integrate human values into the design process, and discusses why conceptualizing manipulation matters for guiding design and regulation. It reviews various criteria for identifying manipulation, including hidden influence, bypassing rationality, and trickery, and critiques their limitations. As a more robust alternative, it proposes the "indifference criterion," which identifies manipulation by the influencer's intention to achieve a goal while being indifferent to revealing reasons to the target.
The article concludes by emphasizing the need for further research to refine the conceptualization of manipulation and develop practical design requirements to prevent illegitimate forms of manipulation in generative AI applications.