Me vs. the machine? Subjective evaluations of human- and AI-generated advice

Abstract: Artificial intelligence (“AI”) has the potential to vastly improve human decision-making. In line with this, researchers have increasingly sought to understand how people view AI, often documenting skepticism and even outright aversion to these tools. In the present research, we complement these findings by documenting the performance of LLMs in the personal advice domain. In addition, we shift the focus in a new direction—exploring how interacting with AI tools, specifically large language models, impacts the user’s view of themselves. In five preregistered experiments (N = 1,722), we explore evaluations of human- and ChatGPT-generated advice along three dimensions: quality, effectiveness, and authenticity. We find that ChatGPT produces superior advice relative to the average online participant even in a domain in which people strongly prefer human-generated advice (dating and relationships). We also document a bias against ChatGPT-generated advice which is present only when participants are aware the advice was generated by ChatGPT. Novel to the present investigation, we then explore how interacting with these tools impacts self-evaluations. We manipulate the order in which people interact with these tools relative to self-generation and find that generating advice before interacting with ChatGPT advice boosts the quality ratings of the ChatGPT advice. At the same time, interacting with ChatGPT-generated advice before self-generating advice decreases self-ratings of authenticity. Taken together, we document a bias towards AI in the context of personal advice. Further, we identify an important externality in the use of these tools—they can invoke social comparisons of me vs. the machine.

Bibliographic Details
Main Authors: Merrick R. Osborne, Erica R. Bailey (U.C. Berkeley, Haas School of Business)
Format: Article
Language: English
Published: Nature Portfolio, 2025-02-01
Series: Scientific Reports
ISSN: 2045-2322
Online Access: https://doi.org/10.1038/s41598-025-86623-6