May 11–16, 2024 | Joel Wester, Tim Schrills, Henning Pohl, Niels van Berkel
This paper investigates how users perceive different denial styles when large language models (LLMs) cannot or should not fulfill user requests. The study evaluates four denial styles: baseline, factual, diverting, and opinionated. The findings show that diverting denials are generally preferred over baseline and factual denials: they are less frustrating and rated as more useful, appropriate, and relevant than baseline denials. Opinionated denials are also perceived more positively than baseline denials on all measures. The study highlights the importance of designing LLM denials that better meet user expectations, particularly when the LLM is constrained by technical limitations or social policies. The results suggest that LLMs should provide helpful and informative denials while remaining polite. The paper offers design recommendations for LLM denials, emphasizing clear and relatable explanations as well as the use of diverting denials to redirect users toward alternative solutions, and discusses the broader implications of these findings for human-centered computing and the design of AI systems.