Preventing unrestricted and unmonitored AI experimentation in healthcare through transparency and accountability
Main Authors:
Format: Article
Language: English
Published: Nature Portfolio, 2025-01-01
Series: npj Digital Medicine
Online Access: https://doi.org/10.1038/s41746-025-01443-2
Summary: The integration of large language models (LLMs) into electronic health records offers potential benefits but raises significant ethical, legal, and operational concerns, including unconsented data use, lack of governance, and AI-related malpractice accountability. Sycophancy, feedback loop bias, and data reuse risk amplifying errors without proper oversight. To safeguard patients, especially the vulnerable, clinicians must advocate for patient-centered education, ethical practices, and robust oversight to prevent harm.
ISSN: 2398-6352