AI versus human-generated multiple-choice questions for medical education: a cohort study in a high-stakes examination
Abstract: Background: The creation of high-quality multiple-choice questions (MCQs) is essential for medical education assessments but is resource-intensive and time-consuming when done by human experts. Large language models (LLMs) like ChatGPT-4o offer a promising alternative, but their efficacy rem...
Main Authors: Alex KK Law, Jerome So, Chun Tat Lui, Yu Fai Choi, Koon Ho Cheung, Kevin Kei-ching Hung, Colin Alexander Graham
Format: Article
Language: English
Published: BMC, 2025-02-01
Series: BMC Medical Education
Online Access: https://doi.org/10.1186/s12909-025-06796-6
Similar Items
- Fit for purpose? Evaluating multiple-choice question quality in E-learning for emergency primary healthcare
  by: Kate-Torunn Aas Vold, et al.
  Published: (2024-12-01)
- Multiple Choice Questions in Anaesthesia: basic sciences
  by: Kumar, Bakul
  Published: (1992)
- Teachers' Questions in Indonesian EFL Classroom
  by: Ahmadi, et al.
  Published: (2020-08-01)
- Rhetorical questions as aggressive, friendly or sarcastic/ironical questions with imposed answers
  by: Špago Džemal
  Published: (2020-12-01)
- École républicaine et questions socialement vives : la neutralité engagée ?
  by: Carole Voisin
  Published: (2022-06-01)