2024 | JIAYIN WANG, WEIZHI MA, PEIJIE SUN, MIN ZHANG, JIAN-YUN NIE
This study explores user experience in large language model (LLM) interactions, focusing on understanding user intents, experiences, and concerns. Based on real-world interaction logs and human verification, the research develops a taxonomy of seven user intents. A survey of 411 participants reveals insights into user satisfaction, usage frequency, and concerns with LLMs. The results show that LLMs are widely used, with high satisfaction in text assistance tasks but lower satisfaction in creative and problem-solving contexts. Users also express concerns about hallucination, long-context processing, and multimodal ability. The findings highlight the importance of user-centric approaches in LLM development, emphasizing the need for models that are not only technically advanced but also aligned with human needs and beneficial in real-world human-AI collaboration. To this end, the study identifies six future research directions, including user intent modeling, personalized services, tool utilization, and trustworthiness, to improve real-world user experiences with LLMs.