The artificial intelligence divide: Who is the most vulnerable?

2024 | Chenyue Wang, Sophie C Boerman, Anne C Kroon, Judith Möller, Claes H de Vreese
This study investigates users' artificial intelligence (AI) related competencies (i.e., AI knowledge, skills, and attitudes) and identifies vulnerable user groups in the AI-shaped online news and entertainment environment. A survey of 1088 Dutch citizens aged 16 and older identified five user groups through latent class analysis: the average users, the expert advocates, the expert skeptics, the unskilled skeptics, and the neutral unskilled. The most vulnerable groups, with the lowest levels of AI knowledge and AI skills (i.e., the unskilled skeptics and the neutral unskilled), were mostly older and had lower levels of education and privacy protection skills than the average users. The results resonate with existing findings on the digital divide and provide evidence for an emerging AI divide among users.

The findings highlight the role of privacy protection skills in the AI divide: users with higher privacy protection skills are more likely to be expert advocates and less likely to be unskilled skeptics or neutral unskilled. The AI divide is also shaped by sociodemographic factors such as gender, age, and education, with the least educated and older individuals often among the most vulnerable. The study discusses societal implications, such as the need for education programs and applications of explainable AI (XAI) to support vulnerable users, and has practical implications for policymakers, educators, and AI designers seeking to ensure equal access to AI knowledge and skills for all users. It also emphasizes the need for future research to understand users' attitudes toward AI, including the reasons behind skeptical attitudes, and to explore the AI divide in different contexts.
The study's limitations include a potential bias in the sample and the need for further research in different countries to verify the findings. Overall, the study provides important insights into the AI divide and the need for targeted interventions to support vulnerable users in the AI-shaped online communication environment.
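For readers unfamiliar with the method, the grouping step can be illustrated with a minimal sketch. The study used latent class analysis on validated survey scales; as a rough stand-in, the snippet below clusters simulated (knowledge, skills, attitude) composite scores for 1088 respondents into five groups with a plain k-means loop. All data here are synthetic, and k-means is only an analogue: proper latent class analysis is a probabilistic model for categorical indicators, not a distance-based clustering.

```python
import random

# Hypothetical illustration: the study identified five user groups among
# 1088 respondents via latent class analysis. Here we simulate three
# numeric composite scores per respondent and cluster them with k-means
# (k = 5) as a simple, non-probabilistic stand-in for that grouping step.
random.seed(0)
N, K, DIMS = 1088, 5, 3
data = [[random.gauss(0, 1) for _ in range(DIMS)] for _ in range(N)]

def dist2(a, b):
    # Squared Euclidean distance between two score vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Initialise centroids from the data, then alternate assign/update steps.
centroids = random.sample(data, K)
for _ in range(20):
    clusters = [[] for _ in range(K)]
    for point in data:
        nearest = min(range(K), key=lambda k: dist2(point, centroids[k]))
        clusters[nearest].append(point)
    for k, members in enumerate(clusters):
        if members:  # keep the old centroid if a cluster empties out
            centroids[k] = [sum(c) / len(members) for c in zip(*members)]

# Final group assignment for each respondent.
labels = [min(range(K), key=lambda k: dist2(p, centroids[k])) for p in data]
print(sorted(set(labels)))
```

In the actual study, the resulting classes were then profiled against sociodemographics and privacy protection skills to characterise the vulnerable groups.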