This paper discusses the limitations of artificial intelligence (AI) in education, highlighting critical issues that need more attention in future discussions. These include the limited ways educational processes can be statistically modeled, the risk of AI perpetuating social harms for minoritized students, the losses from reorganizing education to be more "machine readable," and the ecological and environmental costs of data-intensive AI. The paper calls for slowing down and recalibrating current AI and education discussions, focusing on issues of power, resistance, and the possibility of reimagining AI in more equitable ways.
AI in education is often misunderstood as a form of genuine intelligence, when it is in fact a sophisticated form of statistical processing with limited capabilities. AI systems are designed for specific tasks and operate within predefined boundaries. They do not truly understand or "know" what they are doing, but rather generate outputs based on statistical patterns. This has implications for educational AI, which relies on data to model and predict student behavior, engagement, and outcomes.
The paper also raises concerns about the social harms of AI, including algorithmic discrimination, quality-of-service harms, and representational harms. These issues are exacerbated by the way AI systems rely on statistical categorizations of social characteristics, which can lead to misrepresentations and reinforce unjust hierarchies. Additionally, AI technologies can have negative impacts on social relations within educational settings, such as through student activity monitoring systems.
Another concern is the need to fit education around the needs of AI, which may lead to a recursive standardization, homogenization, and narrowing of education. This involves reorganizing education to be more "machine readable," which can result in empty performative acts by students and teachers to trigger appropriate algorithmic responses.
The paper also addresses the environmental costs of AI, including the significant carbon emissions and resource consumption associated with training and using AI models. These costs raise concerns about the sustainability of AI in education and the need to consider environmental impacts when adopting AI technologies.
Finally, the paper suggests that educators should play a key role in shaping discussions around AI in education, challenging industry-led visions and emphasizing the need for more equitable and human-centered approaches. This includes exploring alternative forms of AI that better fit educational needs and emphasizing the importance of human elements in learning and teaching. The paper concludes that AI should not be seen as a solution to all educational problems, but rather as a technology that requires careful consideration of its implications for education and society.