Some Circumstances Under Which It Is Rational for Human Agents Not to Trust Artificial Agents

In this paper, I argue that there are several different circumstances in which it is rational for human agents not to trust artificial agents (such as ChatGPT). I claim that artificial agents cannot, in principle, be programmed with their own self (nor a simulation of their own self) and, consequently, cannot properly understand the indexicals ‘I’ and ‘me’. It also follows that they cannot take up a first-person point of view and that they cannot be conscious. They can understand that agent so-and-so (described in objective, indexical-free terms) trusts or is entrusted, but they cannot know that they themselves are that agent (if they are) and so cannot know that they are trusted or entrusted. An artificial agent cannot know what it means for it itself to have a normative expectation, nor what it means for it itself to be responsible for performing certain actions. Artificial agents lack all of the first-person properties that human agents possess and that are epistemically important to human agents. Because of these limitations, and because artificial agents figure centrally in the trust relation defined in the Buechner–Tavani model of digital trust, there will be several different kinds of circumstances in which it would be rational for human agents not to trust artificial agents. I also examine the problem of moral luck, define a converse problem of moral luck, and argue that although neither kind of problem of moral luck arises for artificial agents (since they cannot take up a first-person point of view), human agents should not trust artificial agents that interact with them in moral luck and converse moral luck circumstances.

Bibliographic Details
Main Author: Jeff Buechner (Rutgers University-Newark, Newark, NJ 07102, USA)
Format: Article
Language: English
Published: MDPI AG, 2025-01-01
Series: Information, Vol. 16, No. 1, Article 36
ISSN: 2078-2489
DOI: 10.3390/info16010036
Subjects: artificial agent; normative expectation; zones of diffuse default trust; Buechner–Tavani model of digital trust; self-trust; self
Online Access: https://www.mdpi.com/2078-2489/16/1/36