March 10–14, 2024, Sheffield, United Kingdom | Ben Wang, Jiqun Liu, Jamshed Karimnazarov, Nicolas Thompson
The paper "Task Supportive and Personalized Human-Large Language Model Interaction: A User Study" by Ben Wang and colleagues explores the challenges users face when interacting with large language models (LLMs) such as ChatGPT, particularly in formulating prompts and overcoming cognitive barriers. The study integrates task context and user perceptions into human-ChatGPT interactions through prompt engineering, developing a platform with supportive functions such as perception articulation, prompt suggestion, and conversation explanation. A user study with 16 participants (8 college students and 8 crowd workers) evaluated these functions. The results show that the supportive functions help users manage expectations, reduce cognitive load, refine prompts, and increase engagement.

The research deepens understanding of how to design proactive, user-centric systems with LLMs and offers insights into evaluating human-LLM interactions, while also highlighting potential challenges for under-served users. The study suggests improvements in system evaluation metrics and proactive user interface design, and calls for addressing the unique challenges faced by crowd workers.
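The summary describes the platform's supportive functions only at a high level. As a minimal illustrative sketch (not the paper's actual implementation; all names here are hypothetical), a prompt-suggestion function might fold the user's task context and articulated perception into a refined prompt before it is sent to the LLM:

```python
def build_suggested_prompt(task_context: str,
                           user_perception: str,
                           user_query: str) -> str:
    """Combine task context and the user's stated perception into a
    refined prompt. Hypothetical sketch of a prompt-suggestion step,
    not the study's actual system."""
    parts = [
        f"Task context: {task_context}",
        f"My current understanding and expectations: {user_perception}",
        f"Request: {user_query}",
        "Please tailor your answer to the task context above "
        "and note any assumptions you make.",
    ]
    return "\n".join(parts)


# Example usage with a task resembling those in the study:
suggested = build_suggested_prompt(
    task_context="Writing a literature review on human-AI interaction",
    user_perception="I am unsure how to narrow the topic",
    user_query="Suggest three ways to scope the review",
)
print(suggested)
```

Surfacing a draft like this lets the user inspect and edit the refined prompt before sending it, which is one plausible way such a function could reduce the cognitive effort of prompt formulation.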