Challenging the curve: can ChatGPT-generated MCQs reduce grade inflation in pharmacy education

Introduction: Grade inflation in higher education poses challenges to maintaining academic standards, particularly in pharmacy education, where assessing student competency is crucial. This study investigates the impact of AI-generated multiple-choice questions (MCQs) on exam difficulty and reliability in a pharmacy management course at a Saudi university.

Methods: A quasi-experimental design compared the 2024 midterm exam, featuring ChatGPT-generated MCQs, with the 2023 exam, which used human-written questions. Both exams covered identical topics. Exam reliability was assessed using the Kuder-Richardson Formula 20 (KR-20), and item difficulty and discrimination indices were analyzed. Statistical tests, including t-tests and chi-square tests, were conducted to compare performance metrics.

Results: The 2024 exam showed higher reliability (KR-20 = 0.83) than the 2023 exam (KR-20 = 0.78). The 2024 exam included a greater proportion of moderate questions (30%) and one difficult question (3.3%), whereas 93.3% of the 2023 exam's questions were easy. The mean student score was significantly lower in 2024 (17.75 vs. 21.53, p < 0.001), and the discrimination index improved (0.35 vs. 0.25, p = 0.007), indicating better differentiation between students.

Discussion: The findings suggest that AI-generated MCQs contribute to improved exam rigor and a potential reduction in grade inflation. However, careful review of AI-generated content remains essential to ensure alignment with course objectives and accuracy.

Conclusion: AI tools such as ChatGPT offer promising opportunities to enhance assessment integrity and support fairer evaluations in pharmacy education.
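The item-analysis statistics the study relies on (KR-20 reliability, item difficulty, and an upper/lower-group discrimination index) are standard psychometric formulas. The sketch below is an illustrative implementation for dichotomously scored (0/1) MCQ data, not the paper's actual code; all function names are invented for this example.

```python
# Illustrative item-analysis sketch for a 0/1-scored MCQ exam (not the paper's code).
from typing import List


def kr20(responses: List[List[int]]) -> float:
    """Kuder-Richardson Formula 20 reliability for dichotomous item scores.

    responses: one row per student, one column per item (1 = correct, 0 = wrong).
    """
    n = len(responses)
    k = len(responses[0])
    totals = [sum(row) for row in responses]
    mean = sum(totals) / n
    var_total = sum((t - mean) ** 2 for t in totals) / n  # population variance
    # Sum of p*q over items, where p = proportion correct and q = 1 - p.
    pq = 0.0
    for j in range(k):
        p = sum(row[j] for row in responses) / n
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var_total)


def difficulty(responses: List[List[int]], item: int) -> float:
    """Difficulty index: proportion answering the item correctly (higher = easier)."""
    return sum(row[item] for row in responses) / len(responses)


def discrimination(responses: List[List[int]], item: int, frac: float = 0.27) -> float:
    """Discrimination index: upper-group minus lower-group proportion correct,
    using the conventional top and bottom 27% of students by total score."""
    ranked = sorted(responses, key=sum, reverse=True)
    g = max(1, int(len(ranked) * frac))
    upper = sum(row[item] for row in ranked[:g]) / g
    lower = sum(row[item] for row in ranked[-g:]) / g
    return upper - lower
```

Under this scheme, comparing two exams as the study did amounts to computing `kr20` on each year's response matrix and tabulating per-item difficulty and discrimination values.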

Bibliographic Details
Main Author: Dalia Almaghaslah
Format: Article
Language: English
Published: Frontiers Media S.A., 2025-01-01
Series: Frontiers in Pharmacology, vol. 16
ISSN: 1663-9812
DOI: 10.3389/fphar.2025.1516381
Record ID: doaj-art-2bdef97f7e4a4a699cfc5e700d8fdd7e
Collection: DOAJ
Institution: Kabale University
Subjects: AI; ChatGPT4; MCQ; pharmacy course; grade inflation; AI-generated MCQs
Online Access: https://www.frontiersin.org/articles/10.3389/fphar.2025.1516381/full