Some Circumstances Under Which It Is Rational for Human Agents Not to Trust Artificial Agents

Bibliographic Details
Main Author: Jeff Buechner
Format: Article
Language: English
Published: MDPI AG 2025-01-01
Series: Information
Online Access: https://www.mdpi.com/2078-2489/16/1/36
Description
Summary: In this paper, I argue that there are several different circumstances in which it is rational for human agents not to trust artificial agents (such as ChatGPT). I claim that artificial agents cannot, in principle, be programmed with their own self (nor a simulation of their own self) and, consequently, cannot properly understand the indexicals ‘I’ and ‘me’. It also follows that they cannot take up a first-person point-of-view and that they cannot be conscious. They can understand that agent so-and-so (described in objective indexical-free terms) trusts or is entrusted but cannot know that *they* are that agent (if they are) and so cannot know that they are trusted or entrusted. Artificial agents cannot know what it means for *it* to have a normative expectation, nor what it means for *it* to be responsible for performing certain actions. Artificial agents lack all of the first-person properties that human agents possess, and which are epistemically important to human agents. Because of these limitations, and because artificial agents figure centrally in the trust relation defined in the Buechner–Tavani model of digital trust, there will be several different kinds of circumstances in which it would be rational for human agents not to trust artificial agents. I also examine the problem of moral luck, define a converse problem of moral luck, and argue that although each kind of problem of moral luck does not arise for artificial agents (since they cannot take up a first-person point-of-view), human agents should not trust artificial agents interacting with those human agents in moral luck and converse moral luck circumstances.
ISSN: 2078-2489