How do people react to political bias in generative artificial intelligence (AI)?

Bibliographic Details
Main Author: Uwe Messer
Format: Article
Language: English
Published: Elsevier 2025-03-01
Series: Computers in Human Behavior: Artificial Humans
Online Access: http://www.sciencedirect.com/science/article/pii/S2949882124000689
Description
Summary: Generative Artificial Intelligence (GAI), such as Large Language Models (LLMs), has a concerning tendency to generate politically biased content. This is a challenge because GAI is emerging in politically polarized societies. This research therefore investigates how people react to biased GAI content based on their pre-existing political beliefs, and how this influences the acceptance of GAI. Three experiments (N = 513) found that perceived alignment between a user's political orientation and the bias in generated content (both text and images) increases acceptance of and reliance on GAI. Participants who perceived alignment were more likely to grant GAI access to sensitive smartphone functions and to endorse its use in critical domains (e.g., loan approval, social media moderation). Because users see GAI as a social actor, they interpret perceived alignment as a sign of greater objectivity and thus grant aligned GAI access to more sensitive areas.
ISSN: 2949-8821