The Illusion of Artificial Inclusion


CHI '24, May 11–16, 2024 | William Agnew, A. Stevie Bergman, Jennifer Chien, Mark Díaz, Seliem El-Sayed, Jaylen Pittman, Shakir Mohamed, and Kevin R. McKee
William Agnew, A. Stevie Bergman, Jennifer Chien, Mark Díaz, Seliem El-Sayed, Jaylen Pittman, Shakir Mohamed, and Kevin R. McKee. 2024. The Illusion of Artificial Inclusion. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '24), May 11–16, 2024, Honolulu, HI, USA. ACM, New York, NY, USA, 12 pages. https://doi.org/10.1145/3613904.3642703

Human participants are central to the development of AI, psychology, and user research. Recent advances in generative AI have sparked interest in replacing human participants with AI surrogates. This paper critically examines the principles and goals underlying human participation in order to chart paths for future work that truly centers and empowers participants. It surveys a set of "substitution proposals" to better understand the arguments for and against substituting human participants with modern generative AI. The scoping review indicates that the recent wave of these proposals is motivated by goals such as reducing the costs of research and development work and increasing the diversity of collected data. However, these proposals ignore, and ultimately conflict with, foundational values of work with human participants: representation, inclusion, and understanding.

The paper discusses the emergence and capabilities of large language models (LLMs) and their role in AI development and social-behavioral research. It reviews substitution proposals and their stated motivations, finding that these proposals target human participation both in AI development and in a range of scientific fields. The stated motivations include increasing the speed and scale of research, lowering costs, augmenting demographic diversity in datasets, and protecting participants from harms.

The paper then examines challenges to the replacement of human participants. Practical obstacles include the limitations of current language models, value lock-in, and the inability of LLMs to model a wide range of opinions. Beyond these, there are intrinsic challenges: substitution conflicts with the values of representation, inclusion, and understanding themselves.

The paper concludes that these values pose considerable challenges to substitution proposals, and that these challenges are different in kind from the practical obstacles. Conflicts with representation, inclusion, and understanding cannot be alleviated with better training or improved model performance alone: they are intrinsic to the models and to the proposed approach to replacement itself. A deeper reckoning with these values is needed to identify changes that could mitigate the intrinsic challenges. A first step toward representative and inclusive versions of substitution is to involve and empower participants in the agenda-setting process. After formally setting the agenda, projects can maintain an ongoing role for participants, allowing them to monitor and supervise whether their substitutes are genuinely making their perspectives present.