17 July 2024 | Kristian González Barman, Nathan Wood, Pawel Pawlowski
Large language models (LLMs) like ChatGPT offer significant opportunities but also pose risks if users are not properly trained. Current approaches focusing on transparency and explainability are insufficient to address the diverse needs of users. This paper argues for the importance of user-centric guidelines that clarify what tasks LLMs can and cannot perform, when user input is needed, and how to use them responsibly. Guidelines should focus on practical advice, such as refining prompts, verifying outputs, and understanding limitations, rather than just explaining how LLMs work. Users should be taught to use LLMs effectively, even without understanding the underlying mechanisms. While explainable AI (XAI) may help clarify how LLMs function, it is not the most effective method for guiding users. Practical guidelines are more useful as they provide actionable strategies for responsible use. The paper emphasizes the need for real-world case studies to inform these guidelines, ensuring they are grounded in practical strategies. User guidelines should be tailored to different contexts, such as education, the workplace, and expert advice, to address specific challenges. The paper concludes that user-centric guidelines are essential for reliable and responsible LLM use, helping users avoid common mistakes and fostering ethical, efficient, and safe application of LLMs.