This paper presents a learning analytics-based methodology for assessing collaborative writing between humans and generative artificial intelligence (GAI). The approach is grounded in evidence-centered design (ECD), which identifies assessment claims based on knowledge telling, knowledge transformation, and cognitive presence. Data from the CoAuthor writing tool serve as evidence, and epistemic network analysis (ENA) is used to infer the claims from that evidence. The findings reveal significant differences in writing processes among CoAuthor users, suggesting that this method is a viable approach for assessing human-AI collaborative writing.
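To make the ENA step concrete, the sketch below illustrates the core accumulation move of epistemic network analysis: counting co-occurrences of coded writing actions within a moving window over an author's event sequence. This is a minimal sketch; the action codes (e.g., compose_own, accept_suggestion) and the window size are illustrative assumptions, not the paper's actual coding scheme.

```python
import itertools
from collections import Counter

# Hypothetical codes for writing actions; the paper's real coding scheme
# (knowledge telling, knowledge transformation, etc.) may differ.
events = ["compose_own", "request_suggestion", "accept_suggestion",
          "revise_ai", "compose_own", "revise_own"]

def ena_accumulate(events, window=3):
    """Count pairwise co-occurrences of codes within a moving window,
    the accumulation step at the heart of ENA."""
    counts = Counter()
    for i in range(len(events)):
        recent = set(events[max(0, i - window + 1): i + 1])
        for pair in itertools.combinations(sorted(recent), 2):
            counts[pair] += 1
    return counts

print(ena_accumulate(events))
```

In full ENA, each author's co-occurrence counts would be normalized and projected into a low-dimensional space so that groups of authors (e.g., by condition) can be compared statistically.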
The study addresses the challenge of assessing writing when both a human and an AI contribute. It argues that assessments should prepare students for a world where working with AI is commonplace by emphasizing appropriate engagement with AI, the learning process itself, and opportunities for human-AI collaboration. The research draws on the ECD framework, existing theories of writing cognition, recent GAI interfaces, and learning analytics process models to propose and test an assessment method.
The study compares writing processes under three conditions: ownership (user vs. GAI), prompt type (creative vs. argumentative), and temperature (high vs. low). The results show statistically significant differences in writing processes across all three. Authors with higher ownership focused more on composing and revising their own writing, while those with greater GAI ownership relied more on AI suggestions. Creative prompts led to more exploration of AI suggestions, while argumentative prompts led authors to focus on composing and revising AI-generated text.
The study also examines the impact of GAI temperature settings on writing processes. Authors interacting with lower-temperature GAI tended to focus more on knowledge transformation, while those interacting with higher-temperature GAI engaged more in knowledge telling and exploration. These results support the hypothesis that authors with higher ownership, and those responding to argumentative prompts, would exhibit more knowledge transformation and integration.
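For readers unfamiliar with the parameter, temperature rescales a language model's output distribution before sampling: lower values make the model favor its most likely continuation, while higher values flatten the distribution and yield more varied suggestions. The following is a minimal, self-contained illustration of temperature-scaled sampling; the logits and token indices are toy values, not CoAuthor's actual configuration.

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    """Sample a token index from temperature-scaled softmax probabilities.
    Lower temperature sharpens the distribution; higher flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

logits = [2.0, 1.0, 0.5]                     # toy scores for three candidate tokens
print(sample_with_temperature(logits, 0.3))  # near-greedy: almost always token 0
print(sample_with_temperature(logits, 1.5))  # flatter: other tokens appear more often
```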
The study highlights the importance of assessing writing processes rather than just the final product. It suggests that assessments should focus on the cognitive and interactive aspects of writing with GAI to provide a more complete understanding of learning. The proposed method uses ENA to analyze writing processes and provides a framework for assessing human-AI collaborative writing. The study concludes that this approach offers a proof of concept for evidence-centered assessment of writing with GAI and suggests a path forward for innovative assessments in the age of AI.