Generative AI in healthcare: challenges to patient agency and ethical implications


Bibliographic Details
Main Authors: Scott A. Holmes, Vanda Faria, Eric A. Moulton
Format: Article
Language: English
Published: Frontiers Media S.A. 2025-06-01
Series: Frontiers in Digital Health
Online Access: https://www.frontiersin.org/articles/10.3389/fdgth.2025.1524553/full
Description
Summary: Clinical research is no longer a monopolistic environment in which patients and participants are the sole voice of information. The introduction and acceleration of AI-based methods in healthcare are creating a complex environment in which human-derived data is no longer the sole mechanism through which researchers and clinicians explore and test their hypotheses. The concept of self-agency is intimately tied to this shift, as generative data does not encompass the same person-lived experiences as human-derived data. The lack of accountability and transparency in recognizing the data sources supporting medical and research decisions has the potential to immediately and negatively impact patient care. This commentary considers how self-agency is being confronted by the introduction and proliferation of generative AI, and discusses future directions to improve, rather than undermine, AI-fueled healthcare progress.
ISSN: 2673-253X