The Illusion of Artificial Inclusion

5 Feb 2024 | William Agnew, A. Stevie Bergman, Jennifer Chien, Mark Díaz, Seliem El-Sayed, Jaylen Pittman, Shakir Mohamed, Kevin R. McKee
The paper "The Illusion of Artificial Inclusion" by William Agnew, Mark Diaz, Seliem El-Sayed, Shakir Mohamed, and Kevin R. McKee explores the growing trend of substituting human participants in research and development with large language models (LLMs) and other generative AI systems. The authors survey several proposals for such substitutions, examining their motivations, including cost reduction, increased data diversity, and participant protection. However, they argue that these proposals conflict with foundational values of human participation: representation, inclusion, and understanding. The authors critique the practical and intrinsic challenges of using AI surrogates. Practical issues include the current limitations of LLMs in simulating human cognition, value lock-in, and the inability to model diverse perspectives. Intrinsic challenges stem from the nature of AI development and research, where representation and inclusion are essential for ensuring that the interests and experiences of participants are accurately reflected and that power is distributed equitably. Understanding, which relies on intersubjectivity between researcher and participant, is also compromised by AI surrogates. The paper concludes that while AI surrogates may offer some benefits, they cannot replace the deep engagement and empowerment that human participants provide. The authors call for a re-evaluation of how generative AI can support human participation, emphasizing the need to center and empower participants in future work.The paper "The Illusion of Artificial Inclusion" by William Agnew, Mark Diaz, Seliem El-Sayed, Shakir Mohamed, and Kevin R. McKee explores the growing trend of substituting human participants in research and development with large language models (LLMs) and other generative AI systems. The authors survey several proposals for such substitutions, examining their motivations, including cost reduction, increased data diversity, and participant protection. However, they argue that these proposals conflict with foundational values of human participation: representation, inclusion, and understanding. The authors critique the practical and intrinsic challenges of using AI surrogates. Practical issues include the current limitations of LLMs in simulating human cognition, value lock-in, and the inability to model diverse perspectives. Intrinsic challenges stem from the nature of AI development and research, where representation and inclusion are essential for ensuring that the interests and experiences of participants are accurately reflected and that power is distributed equitably. Understanding, which relies on intersubjectivity between researcher and participant, is also compromised by AI surrogates. The paper concludes that while AI surrogates may offer some benefits, they cannot replace the deep engagement and empowerment that human participants provide. The authors call for a re-evaluation of how generative AI can support human participation, emphasizing the need to center and empower participants in future work.