The Clinicians’ Guide to Large Language Models: A General Perspective With a Focus on Hallucinations

Large language models (LLMs) are artificial intelligence tools that have the prospect of profoundly changing how we practice all aspects of medicine. Given the incredible potential of LLMs in medicine and the interest of many health care stakeholders in implementing them into routine practice, it is essential that clinicians be aware of the basic risks associated with these models. One significant risk associated with the use of LLMs is their potential to generate hallucinations. Hallucinations (false information) generated by LLMs arise from multiple causes, including both factors related to the training dataset and the models' autoregressive nature. The implications for clinical practice range from the generation of inaccurate diagnostic and therapeutic information to the reinforcement of flawed diagnostic reasoning pathways, as well as a lack of reliability if the models are not used properly. To reduce this risk, we developed a general technical framework for approaching LLMs in general clinical practice, as well as for implementation on a larger institutional scale.

Bibliographic Details
Main Authors: Dimitri Roustan (ORCID: 0009-0008-2650-4035), François Bastardot (ORCID: 0000-0003-4060-0353)
Format: Article
Language: English
Published: JMIR Publications, 2025-01-01
Series: Interactive Journal of Medical Research, Vol. 14 (2025), Article e59823
ISSN: 1929-073X
DOI: 10.2196/59823
Online Access: https://www.i-jmr.org/2025/1/e59823