12 Mar 2024 | Saleh Afroogh, Ali Akbari, Evan Malone, Mohammadali Kargar, Hananeh Alambeigi
The article "Trust in AI: Progress, Challenges, and Future Directions" by Saleh Afroogh, Ali Akbari, Evan Malone, Mohammadali Kargar, and Hananeh Alambeigi explores the significance of trust in artificial intelligence (AI) systems and its impact on technology acceptance across various domains. The authors highlight that trust plays a crucial role in regulating the diffusion of AI, as distrust can hinder its adoption. They conduct a systematic literature review to investigate different types of human-machine interaction, the impact of trust on technology acceptance, and the development of trustworthy AI metrics.
Key findings include:
1. **Trust in Human-Machine Interaction**: Trust is a directional transaction between two parties; in AI, it involves the trustor's belief that the AI system will act in the trustor's best interest. The complexity and unpredictability of AI systems make establishing trust more challenging than in interpersonal relationships.
2. **Trustworthy AI Metrics**: The article proposes a taxonomy of technical (safety, accuracy, robustness) and non-technical (ethical, legal, mixed) trustworthiness metrics. It also discusses the importance of transparency, explainability, and interpretability in building trust.
3. **Trust Breakers and Makers**: The authors examine major trust-breakers such as autonomy and dignity threats, and trust makers like empathy and accountability. They propose solutions to address these issues and accelerate the transition to trustworthy AI.
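The two-branch taxonomy in point 2 can be sketched as a simple data structure. This is a hypothetical illustration of the classification described above, not code from the article; the category and metric names are taken from the summary, while the structure and the `categorize` helper are illustrative:

```python
# Hypothetical sketch of the article's trustworthiness-metric taxonomy.
# Branch and metric names follow the summary; the code itself is illustrative.
TRUSTWORTHY_AI_METRICS = {
    "technical": ["safety", "accuracy", "robustness"],
    "non_technical": ["ethical", "legal", "mixed"],
}

def categorize(metric: str) -> str:
    """Return the taxonomy branch a metric belongs to, or 'unknown'."""
    for branch, metrics in TRUSTWORTHY_AI_METRICS.items():
        if metric in metrics:
            return branch
    return "unknown"
```

A structure like this makes the article's distinction concrete: technical metrics can often be measured directly on a system, while non-technical ones require ethical or legal evaluation.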
The study emphasizes the need for a comprehensive understanding of trust in AI, including its definition, scope, and influential factors, to ensure the responsible and ethical development and use of AI technology.