Ethics of generative AI and manipulation: a design-oriented research agenda

3 February 2024 | Michael Klenk
This article discusses the ethical implications of generative AI and manipulation, emphasizing the need for a design-oriented research agenda. Generative AI enables large-scale, effective manipulation, raising concerns about its ethical use. The article outlines essential questions along the conceptual, empirical, and design dimensions of manipulation, highlighting the importance of a clear conceptualization of manipulation for responsible AI development, and it argues that different conceptualizations of manipulation lead to different design and regulatory requirements.

The article proposes a design-for-values approach, which aims to integrate human values into the design of new technologies, and emphasizes the role of conceptual engineering in defining manipulation. It discusses several conceptualizations of manipulation, including the hidden-influence criterion, the bypassing-rationality criterion, and the trickery criterion, and argues that the indifference criterion offers a promising approach to identifying manipulation, since it focuses on the influencer's indifference to the ideal state rather than on malicious intent. The article also highlights the need for empirical research to understand stakeholders' views on manipulation and to inform the design of non-manipulative generative AI systems. It concludes that a clear conceptualization of manipulation is essential for responsible AI development and that empirical research is needed to inform it.