Trust in AI: progress, challenges, and future directions

Bibliographic Details
Main Authors: Saleh Afroogh, Ali Akbari, Emmie Malone, Mohammadali Kargar, Hananeh Alambeigi
Format: Article
Language: English
Published: Springer Nature, 2024-11-01
Series: Humanities & Social Sciences Communications
Online Access: https://doi.org/10.1057/s41599-024-04044-8
Collection: DOAJ
Description: The increasing use of artificial intelligence (AI) systems in our daily lives through various applications, services, and products highlights the significance of trust and distrust in AI from a user perspective. AI-driven systems have significantly diffused into various aspects of our lives, serving as beneficial “tools” used by human agents. These systems are also evolving to act as co-assistants or semi-agents in specific domains, potentially influencing human thought, decision-making, and agency. Trust and distrust in AI serve as regulators and could significantly control the level of this diffusion: trust can increase, and distrust can reduce, the rate of AI adoption. Recently, a variety of studies have focused on the different dimensions of trust and distrust in AI and their relevant considerations. In this systematic literature review, after conceptualizing trust in the current AI literature, we investigate trust in different types of human–machine interaction and its impact on technology acceptance across domains. Additionally, we propose a taxonomy of technical (i.e., safety, accuracy, robustness) and non-technical axiological (i.e., ethical, legal, and mixed) trustworthiness metrics, along with some trustworthiness measurements. Moreover, we examine major trust-breakers in AI (e.g., threats to autonomy and dignity) and trust-makers, and propose future directions and probable solutions for the transition to trustworthy AI.
Record ID: doaj-art-23b4e9b6bb2049cbb2f48051b589f1e3
Institution: Kabale University
ISSN: 2662-9992
Author Affiliations:
Saleh Afroogh (Urban Information Lab, The University of Texas at Austin)
Ali Akbari (Stanford University School of Medicine)
Emmie Malone (Department of Philosophy, Lone Star College in Houston)
Mohammadali Kargar (Department of Mechanical Engineering, Texas A&M University)
Hananeh Alambeigi (Department of Industrial and Systems Engineering, Texas A&M University)