Large Language Models lack essential metacognition for reliable medical reasoning
Abstract: Large Language Models have demonstrated expert-level accuracy on medical board examinations, suggesting potential for clinical decision support systems. However, their metacognitive abilities, crucial for medical decision-making, remain largely unexplored. To address this gap, we developed...
| Main Authors | Maxime Griot, Coralie Hemptinne, Jean Vanderdonckt, Demet Yuksel |
| --- | --- |
| Format | Article |
| Language | English |
| Published | Nature Portfolio, 2025-01-01 |
| Series | Nature Communications |
| Online Access | https://doi.org/10.1038/s41467-024-55628-6 |
Similar Items
- Evolving Metacognitive Strategies in Hyperpolyglots: A Longitudinal Study of Adaptive Language Learning
  by: Angel Osle
  Published: (2024-01-01)
- Immersive Haptic Technology to Support English Language Learning Based on Metacognitive Strategies
  by: Adriana Guanuche, et al.
  Published: (2025-01-01)
- Beyond Text Generation: Assessing Large Language Models’ Ability to Reason Logically and Follow Strict Rules
  by: Zhiyong Han, et al.
  Published: (2025-01-01)
- Origanum majorana Essential Oil Lacks Mutagenic Activity in the Salmonella/Microsome and Micronucleus Assays
  by: Andrea dos Santos Dantas, et al.
  Published: (2016-01-01)
- Lack of standardization in the nomenclature of dating strokes or the desperate search for a common language
  by: Eya Khadhraoui, et al.
  Published: (2025-01-01)