Leveraging Large Language Models for Learning Complex Legal Concepts through Storytelling

2 Jul 2024 | Hang Jiang, Xiajie Zhang, Robert Mahari, Daniel Kessler, Eric Ma, Tal August, Irene Li, Alex 'Sandy' Pentland, Yoon Kim, Deb Roy, Jad Kabbara
This paper presents a novel application of large language models (LLMs) in legal education, helping non-experts learn complex legal concepts through storytelling. The authors introduce a new dataset, LEGALSTORIES, comprising 294 legal doctrines, each accompanied by an LLM-generated story and multiple-choice questions. The dataset was created with a human-in-the-loop approach in which legal experts reviewed and refined the generated content.

The authors also conducted randomized controlled trials (RCTs) with legal novices to evaluate how well LLM-generated stories enhance comprehension of legal concepts and interest in law. The results showed that, compared to definitions alone, the stories improved comprehension and interest in law among non-native English speakers. The stories also helped participants relate legal concepts to their own lives and were associated with higher retention in follow-up assessments.

The study further compared three LLMs (LLaMA 2, GPT-3.5, and GPT-4) on generating legal stories and questions, finding that GPT-4 outperformed the others on most metrics. The authors conclude that LLMs hold significant potential for legal education and beyond, but emphasize the need for human supervision to ensure the quality and accuracy of generated content. They also highlight the importance of balancing access to justice with responsible AI practices, and acknowledge the study's limitations, including sample size and data quality.