This paper introduces PsychoGAT, a novel psychological assessment paradigm that transforms traditional self-report scales into interactive fiction games using large language models (LLMs). The core idea is to have LLMs act as both psychologist and game designer, producing engaging, personalized assessments. PsychoGAT comprises three main agents: a game designer, a game controller, and a critic. The game designer creates the game's setting and narrative outline, the game controller generates the interactive content turn by turn, and the critic refines that content to improve coherence and user experience. The framework also includes an LLM-based human simulator that plays the game in place of real participants, enabling automated psychometric evaluation.
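As a rough illustration of this agent decomposition, the sketch below wires the four roles into a single assessment loop. The class and function names (GameDesigner, GameController, Critic, HumanSimulator, call_llm, run_assessment) are illustrative placeholders rather than the paper's actual interfaces, and call_llm stands in for any chat-completion call.

```python
# A rough sketch of the three-agent loop plus human simulator described above.
# All names are illustrative placeholders, not the paper's actual API; call_llm
# returns canned text so the sketch runs without an API key.

def call_llm(prompt: str) -> str:
    return f"[LLM output for: {prompt[:40]}...]"

class GameDesigner:
    """Produces the game's setting and narrative outline for a target construct."""
    def design(self, construct: str) -> str:
        return call_llm(f"Design an interactive fiction outline to assess {construct}.")

class GameController:
    """Generates each interactive scene and the choices offered to the player."""
    def next_scene(self, outline: str, history: list[str]) -> str:
        return call_llm(f"Outline: {outline}\nHistory: {history}\nWrite the next scene with choices.")

class Critic:
    """Refines a generated scene to improve coherence and engagement."""
    def refine(self, scene: str) -> str:
        return call_llm(f"Improve the coherence and engagement of this scene:\n{scene}")

class HumanSimulator:
    """Stands in for a participant, choosing options according to a simulated profile."""
    def respond(self, scene: str) -> str:
        return call_llm(f"As the simulated player, pick one choice in:\n{scene}")

def run_assessment(construct: str, n_turns: int = 3) -> list[str]:
    designer, controller, critic, player = GameDesigner(), GameController(), Critic(), HumanSimulator()
    outline, history = designer.design(construct), []
    for _ in range(n_turns):
        scene = critic.refine(controller.next_scene(outline, history))
        history.append(player.respond(scene))
    return history  # the recorded choices are later scored against the target construct

if __name__ == "__main__":
    print(run_assessment("extraversion"))
```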
The framework turns standardized psychological assessments into interactive fiction in which the player acts as the protagonist, so that in-game decisions reflect the player's psychological state. Psychometric evaluations validated the system, demonstrating statistically significant improvements in reliability, convergent validity, and discriminant validity, and human evaluations confirmed gains in content coherence, interactivity, interest, immersion, and satisfaction.
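For readers unfamiliar with these criteria, the snippet below computes their standard forms: Cronbach's alpha for reliability and Pearson correlations against a related and an unrelated scale for convergent and discriminant validity. The data are simulated purely for illustration; the paper's exact scoring pipeline is not reproduced here.

```python
# Standard versions of the three psychometric criteria named above: Cronbach's alpha
# for reliability, Pearson correlations for convergent/discriminant validity.
# Simulated data for illustration; the paper's own scoring pipeline may differ.
import numpy as np
from scipy.stats import pearsonr

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_participants, n_items) matrix of item-level scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
game_items = rng.integers(1, 6, size=(30, 10)).astype(float)  # simulated per-item scores
total = game_items.sum(axis=1)                                 # total assessment score
related_scale = total + rng.normal(0, 3, size=30)              # scale measuring the same construct
unrelated_scale = rng.normal(0, 1, size=30)                    # scale measuring a different construct

r_conv, _ = pearsonr(total, related_scale)    # should be high (convergent validity)
r_disc, _ = pearsonr(total, unrelated_scale)  # should be near zero (discriminant validity)
print(f"alpha={cronbach_alpha(game_items):.2f}  convergent r={r_conv:.2f}  discriminant r={r_disc:.2f}")
```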
Experiments showed that PsychoGAT outperformed traditional scales and other LLM-based assessment methods in both psychometric quality and user experience. The system was evaluated on several assessment tasks, including personality testing, depression measurement, and cognitive distortion detection. The results indicate that PsychoGAT is a reliable and valid alternative to traditional self-report scales, with the potential to improve engagement and reduce resistance in psychological assessment.
The study also highlights the potential of LLMs in psychological research, particularly in creating interactive and immersive assessments. However, the system is limited to English and requires further development for broader application. The research contributes to the field of psychology by demonstrating the effectiveness of LLM-based agents in gamified psychological assessments.