Some Circumstances Under Which It Is Rational for Human Agents Not to Trust Artificial Agents
In this paper, I argue that there are several different circumstances in which it is rational for human agents not to trust artificial agents (such as ChatGPT). I claim that artificial agents cannot, in principle, be programmed with their own self (nor a simulation of their own self) and, consequent...
Main Author: Jeff Buechner
Format: Article
Language: English
Published: MDPI AG, 2025-01-01
Series: Information
Subjects:
Online Access: https://www.mdpi.com/2078-2489/16/1/36
Similar Items
- STUDENTS’ IDEAS ABOUT TRUST IN INTERETHNIC RELATIONS
  by: A. V. Alborova, et al.
  Published: (2021-04-01)
- Determinants of Trust in Higher Education
  by: I. S. Kuznetsov
  Published: (2022-01-01)
- Students’ Trust and Their Educational Trajectory after Graduation
  by: I. S. Kuznetsov
  Published: (2023-01-01)
- The Road to Recovery: Building Physical and Emotional Trust when Engaging with Extension Clientele
  by: Colby Jordan Silvert, et al.
  Published: (2020-11-01)