2024 | Margaret Bearman, Joanna Tai, Phillip Dawson, David Boud & Rola Ajjawi
Developing evaluative judgement in the age of generative artificial intelligence (AI) is crucial for students to discern the quality of work, both their own and others', in an era where AI produces increasingly sophisticated outputs. This paper explores how assessment practices can help students develop evaluative judgement in the context of generative AI. It proposes three foci: (1) developing evaluative judgement of generative AI outputs; (2) developing evaluative judgement of generative AI processes; and (3) generative AI assessment of student evaluative judgements. The authors argue that existing formative assessment strategies can be adapted to help students critically evaluate AI outputs and processes, ensuring they do not rely uncritically on AI. Evaluative judgement is defined as the ability to assess the quality of work, considering both the content and the context. The paper emphasizes the importance of developing this skill in higher education, as it is a uniquely human capability that is essential in a world increasingly shaped by AI. The authors also highlight the need for educators to guide students in understanding the ethical and moral implications of using AI, as well as the limitations of AI outputs. They suggest that assessment practices, such as self-assessment, peer assessment, and feedback, can be used to develop evaluative judgement in students. The paper concludes that while AI can assist in the development of evaluative judgement, it is ultimately the responsibility of educators and learners to ensure that humans remain the arbiters of quality.