May 11–16, 2024 | Nikhil Sharma, Q. Vera Liao, Ziang Xiao
This study investigates the effects of LLM-powered conversational search systems on information seeking and selective exposure. The research explores whether these systems increase selective exposure and opinion polarization compared to conventional search systems, and whether opinion-biased LLMs exacerbate or mitigate these effects. Two experiments were conducted: the first compared conventional web search with LLM-powered conversational search systems, while the second examined the impact of LLMs with opinion biases that either reinforced or challenged users' existing attitudes.
The first experiment found that participants engaged in more biased information querying with LLM-powered conversational search systems, and that an opinionated LLM reinforcing their views exacerbated this bias. In the second experiment, a consonant LLM (one reinforcing users' existing views) increased selective exposure and opinion polarization, while a dissonant LLM (one challenging users' views) reduced both.
The study highlights the potential risks of LLM-powered conversational search systems in promoting selective exposure and reducing information diversity. It underscores the importance of designing these systems to minimize bias and promote diverse information seeking. The findings have implications for the development of LLMs, conversational search systems, and the policies governing these technologies.