2024 | Elizabeth C. Stade, Shannon Wiltsey Stirman, Lyle H. Ungar, Cody L. Boland, H. Andrew Schwartz, David B. Yaden, João Sedoc, Robert J. DeRubeis, Robb Willer, Johannes C. Eichstaedt
The paper "Large language models could change the future of behavioral healthcare: a proposal for responsible development and evaluation" by Elizabeth C. Stade et al. explores the potential of large language models (LLMs) in supporting, augmenting, or automating psychotherapy. The authors highlight the growing enthusiasm in both the field and industry for using LLMs to address the insufficient capacity of mental healthcare systems and to scale personalized treatments. However, they emphasize the high stakes involved in clinical psychology, where responsible and evidence-based therapy requires nuanced expertise.
The paper provides a roadmap for the responsible application of clinical LLMs in psychotherapy, covering technical overviews, stages of integration, potential applications, and recommendations for development and evaluation. Key stages of integration include assistive, collaborative, and fully autonomous LLMs, each with increasing levels of autonomy and complexity. Potential applications include automating clinical administration tasks, measuring treatment fidelity, offering feedback on therapy worksheets, and automating aspects of supervision and training.
The authors recommend focusing on evidence-based practices, prioritizing clinical improvement rather than engagement alone, rigorous yet commonsense evaluation, interdisciplinary collaboration, and trust and usability for clinicians and patients. They also propose design criteria for effective clinical LLMs, such as detecting risks of harm, aiding in psychodiagnostic assessment, being responsive and flexible, stopping when not helpful, being fair and inclusive, being empathetic, and being transparent about being AI.
The paper concludes by discussing the potential unintended consequences of LLMs on the clinical profession and the long-term impact on clinical science, including advances in therapeutic interventions, increased access to care, and challenges to fundamental assumptions about psychotherapy. The authors emphasize the need for cautious optimism and active engagement between psychologists and technologists to ensure responsible and ethical use of LLMs in psychotherapy.