25 April 2024 | Zheng ZHANG, Le WU, Qi LIU*, Jiayu LIU, Zhenya HUANG, Yu YIN, Yan ZHUANG, Weibo GAO & Enhong CHEN
This paper explores fairness in cognitive diagnosis (CD), a key area in intelligent education. CD aims to assess students' proficiency in specific knowledge concepts, such as Geometry. However, existing CD models often overlook fairness, leading to biased outcomes influenced by sensitive attributes like gender or region. The paper addresses two questions: (1) Do CD models produce results affected by sensitive attributes? (2) How can we mitigate the impact of these attributes to ensure fair diagnosis?
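To make the setting concrete, the sketch below shows a minimal IRT-style diagnosis model of the kind studied here: a learned proficiency per student and a difficulty per exercise jointly predict the probability of a correct response. The class and parameter names (`MinimalCD`, `theta`, `beta`) are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class MinimalCD(nn.Module):
    """IRT-style cognitive diagnosis: P(correct) = sigmoid(theta_s - beta_e)."""

    def __init__(self, n_students: int, n_exercises: int):
        super().__init__()
        self.theta = nn.Embedding(n_students, 1)  # latent student proficiency
        self.beta = nn.Embedding(n_exercises, 1)  # latent exercise difficulty

    def forward(self, student_ids: torch.Tensor, exercise_ids: torch.Tensor) -> torch.Tensor:
        # Probability that each student answers each exercise correctly.
        return torch.sigmoid(self.theta(student_ids) - self.beta(exercise_ids)).squeeze(-1)
```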
Empirical studies on the PISA dataset show that several CD methods produce unfair results, with trends that vary across models. Theoretical analysis reveals that model complexity contributes to these differences. To address fairness, the paper proposes FairCD, a framework that separates student proficiency into a bias component and a fair component, training them with two orthogonal tasks so that the diagnosed proficiency is independent of sensitive attributes.
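One way to picture this decomposition, sketched below under assumed names (`fair`, `bias`): each student gets two embeddings, a fair component reported as the final diagnosis and a bias component meant to absorb attribute-related signal, with an orthogonality penalty keeping the two disentangled. This is a minimal illustration; the paper's exact parameterization may differ.

```python
import torch
import torch.nn as nn

class DecomposedProficiency(nn.Module):
    """Proficiency = fair component + bias component (illustrative)."""

    def __init__(self, n_students: int, dim: int):
        super().__init__()
        self.fair = nn.Embedding(n_students, dim)  # attribute-independent part
        self.bias = nn.Embedding(n_students, dim)  # part absorbing attribute signal

    def forward(self, student_ids: torch.Tensor) -> torch.Tensor:
        # The full proficiency drives response prediction during training;
        # only the fair part is reported as the final diagnosis.
        return self.fair(student_ids) + self.bias(student_ids)

def orthogonality_penalty(model: DecomposedProficiency, student_ids: torch.Tensor) -> torch.Tensor:
    # Penalize overlap between the two components so attribute-related
    # signal cannot leak into the fair embedding.
    f, b = model.fair(student_ids), model.bias(student_ids)
    return (f * b).sum(dim=-1).pow(2).mean()
```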
FairCD employs adversarial learning to eliminate bias, extending a technique previously applied in recommender systems to CD. However, despite adversarial learning's success elsewhere, training instability and the complexity of CD modeling can still leave biased outcomes. To overcome this, FairCD decomposes adversarial learning into two tasks: one that removes bias and another that ensures fairness. Experiments on the PISA dataset demonstrate that FairCD effectively reduces bias and yields fair diagnoses. The paper highlights the importance of fairness in CD, emphasizing that educational outcomes should not be influenced by sensitive attributes. By addressing fairness, FairCD contributes to more equitable and effective intelligent education systems.
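A plausible reading of this two-task decomposition is sketched below under assumed names: one loss fits observed responses using the full (fair + bias) proficiency, while a second loss trains an attribute classifier to recover the sensitive attribute from the bias component alone, routing attribute information away from the fair component. FairCD's actual losses and architecture may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FairCDSketch(nn.Module):
    """Toy FairCD-style model; names and losses are assumptions."""

    def __init__(self, n_students: int, n_exercises: int, n_attr_values: int, dim: int = 16):
        super().__init__()
        self.fair = nn.Embedding(n_students, dim)      # attribute-independent proficiency
        self.bias = nn.Embedding(n_students, dim)      # component absorbing attribute signal
        self.diff = nn.Embedding(n_exercises, dim)     # exercise difficulty
        self.attr_clf = nn.Linear(dim, n_attr_values)  # reads the bias component only

    def forward(self, students: torch.Tensor, exercises: torch.Tensor) -> torch.Tensor:
        prof = self.fair(students) + self.bias(students)
        return torch.sigmoid((prof - self.diff(exercises)).sum(dim=-1))

def training_step(model, students, exercises, responses, attrs):
    # Task 1: the full proficiency must explain the observed responses.
    loss_resp = F.binary_cross_entropy(model(students, exercises), responses.float())
    # Task 2: the sensitive attribute should be predictable from the bias
    # component alone, pulling attribute information out of `fair`.
    loss_attr = F.cross_entropy(model.attr_clf(model.bias(students)), attrs)
    return loss_resp + loss_attr
```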