Preventing unrestricted and unmonitored AI experimentation in healthcare through transparency and accountability
Abstract: The integration of large language models (LLMs) into electronic health records offers potential benefits but raises significant ethical, legal, and operational concerns, including unconsented data use, lack of governance, and AI-related malpractice accountability. Sycophancy, feedback loop...
Main Authors: Donnella S. Comeau, Danielle S. Bitterman, Leo Anthony Celi
Format: Article
Language: English
Published: Nature Portfolio, 2025-01-01
Series: npj Digital Medicine
Online Access: https://doi.org/10.1038/s41746-025-01443-2
Similar Items
- TRANSPARENCY AND ACCOUNTABILITY IN POLITICAL FINANCING IN TURKEY
  by: Mehmet Karakaş
  Published: (2014-06-01)
- Time series predictions in unmonitored sites: a survey of machine learning techniques in water resources
  by: Jared D. Willard, et al.
  Published: (2025-01-01)
- Citizens' Attitudes Towards Local Services Accountability and Transparency:
  by: Lejla Lazović Pita, et al.
  Published: (2021-11-01)
- High-reward, high-risk technologies? An ethical and legal account of AI development in healthcare
  by: Maelenn Corfmat, et al.
  Published: (2025-01-01)
- Working with Nonprofit Organizations in Community Settings: Governance, Accountability and Transparency
  by: Elizabeth B. Bolton, et al.
  Published: (2009-07-01)