Leveraging ChatGPT to Produce Patient Education Materials for Common Hand Conditions

Purpose: Many adults in the United States possess basic or below-basic health literacy skills, making it essential for patient education materials (PEMs) to be presented at or below a sixth-grade reading level. We evaluated the readability of PEMs generated by ChatGPT 3.5 and 4.0 for common hand conditions. Methods: We used Chat Generative Pre-Trained Transformer (ChatGPT) 3.5 and 4.0 to generate PEMs for 50 common hand pathologies. Two standardized questions were asked to minimize variability: 1. “Please explain [Condition] to a patient at a sixth-grade reading level, including details on anatomy, symptoms, doctors' examination, and treatment (both surgical and nonsurgical).” 2. “Create a detailed patient information sheet for the general patient population at a sixth-grade reading level explaining [Condition], including points such as anatomy, symptoms, physical examination, and treatment (both surgical and nonsurgical).” Before the second question was asked, a priming phase was conducted in which ChatGPT 3.5 and 4.0 were shown a text sample written at a sixth-grade reading level and told that this was the desired output level. Eight readability tests were used to evaluate the output, and a consensus reading level was derived from all eight scores. Statistical analyses were performed using SAS 9.4. Results: Following the priming phase, ChatGPT 4.0 produced 28% of its responses at the target reading level, compared with none for ChatGPT 3.5. ChatGPT 4.0 showed superior performance across all readability metrics. Conclusions: ChatGPT 4.0 is a more effective tool than ChatGPT 3.5 for generating PEMs at a sixth-grade reading level for common hand conditions. Clinical relevance: The results suggest that, with further refinement, artificial intelligence could substantially enhance patient education and health literacy.
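
The two question templates and the priming step are quoted in the abstract above; the study itself worked in the ChatGPT 3.5 and 4.0 interfaces directly. Purely as a hedged illustration, a scripted equivalent using the OpenAI Python client might look like the sketch below. The model identifier, the client calls, and the priming sample are assumptions made for illustration, not part of the article.

    # Hedged sketch: the study used the ChatGPT interface, not the API.
    # The model name, client usage, and priming sample are illustrative assumptions;
    # the two question templates are quoted from the abstract.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    QUESTION_1 = (
        "Please explain {condition} to a patient at a sixth-grade reading level, "
        "including details on anatomy, symptoms, doctors' examination, and treatment "
        "(both surgical and nonsurgical)."
    )
    QUESTION_2 = (
        "Create a detailed patient information sheet for the general patient population "
        "at a sixth-grade reading level explaining {condition}, including points such as "
        "anatomy, symptoms, physical examination, and treatment (both surgical and nonsurgical)."
    )
    # Hypothetical priming sample; the article does not reproduce the one it used.
    PRIMING_SAMPLE = (
        "Your wrist has a small tunnel. A nerve runs through it. "
        "If the tunnel gets tight, your hand can tingle or feel numb."
    )

    def generate_pems(model: str, condition: str) -> tuple[str, str]:
        """Ask the two standardized questions for one hand condition and return both answers."""
        # Question 1 is asked with no priming.
        r1 = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": QUESTION_1.format(condition=condition)}],
        )
        # Question 2 is preceded by a priming turn showing the desired reading level.
        r2 = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "user",
                 "content": "Here is a text sample written at a sixth-grade reading level. "
                            "Please write your next answer at this level:\n" + PRIMING_SAMPLE},
                {"role": "user", "content": QUESTION_2.format(condition=condition)},
            ],
        )
        return r1.choices[0].message.content, r2.choices[0].message.content

    answer_1, answer_2 = generate_pems("gpt-4", "carpal tunnel syndrome")  # model id is an assumption

In the study's design, each of the 50 conditions would be run through both models, producing the response sets that the readability analysis then scores.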

Bibliographic Details
Main Authors: George Abdelmalek, MD; Harjot Uppal, MBA; Daniel Garcia, BS; Joseph Farshchian, MD; Arash Emami, MD; Andrew McGinniss, MD
Format: Article
Language: English
Published: Elsevier 2025-01-01
Series: Journal of Hand Surgery Global Online
Subjects: Artificial intelligence; Hand; Patient education materials
Online Access: http://www.sciencedirect.com/science/article/pii/S2589514124001956
_version_ 1832586291756138496
author George Abdelmalek, MD
Harjot Uppal, MBA
Daniel Garcia, BS
Joseph Farshchian, MD
Arash Emami, MD
Andrew McGinniss, MD
author_facet George Abdelmalek, MD
Harjot Uppal, MBA
Daniel Garcia, BS
Joseph Farshchian, MD
Arash Emami, MD
Andrew McGinniss, MD
author_sort George Abdelmalek, MD
collection DOAJ
description Purpose: Many adults in the United States possess basic or below-basic health literacy skills, making it essential for patient education materials (PEMs) to be presented at or below a sixth-grade reading level. We evaluated the readability of PEMs generated by ChatGPT 3.5 and 4.0 for common hand conditions. Methods: We used Chat Generative Pre-Trained Transformer (ChatGPT) 3.5 and 4.0 to generate PEMs for 50 common hand pathologies. Two standardized questions were asked to minimize variability: 1. “Please explain [Condition] to a patient at a sixth-grade reading level, including details on anatomy, symptoms, doctors' examination, and treatment (both surgical and nonsurgical).” 2. “Create a detailed patient information sheet for the general patient population at a sixth-grade reading level explaining [Condition], including points such as anatomy, symptoms, physical examination, and treatment (both surgical and nonsurgical).” Before the second question was asked, a priming phase was conducted in which ChatGPT 3.5 and 4.0 were shown a text sample written at a sixth-grade reading level and told that this was the desired output level. Eight readability tests were used to evaluate the output, and a consensus reading level was derived from all eight scores. Statistical analyses were performed using SAS 9.4. Results: Following the priming phase, ChatGPT 4.0 produced 28% of its responses at the target reading level, compared with none for ChatGPT 3.5. ChatGPT 4.0 showed superior performance across all readability metrics. Conclusions: ChatGPT 4.0 is a more effective tool than ChatGPT 3.5 for generating PEMs at a sixth-grade reading level for common hand conditions. Clinical relevance: The results suggest that, with further refinement, artificial intelligence could substantially enhance patient education and health literacy.
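
The consensus reading level described above was built from eight readability scores; the article does not name the individual formulas or the scoring software. As a hedged sketch only, the following Python snippet scores a generated PEM with several common grade-level formulas from the textstat package (an assumption made for illustration, not the authors' tooling) and averages them into a single consensus grade.

    # Hedged illustration: the formula set and the textstat package are assumptions;
    # the article reports eight readability scores but does not list them.
    import statistics
    import textstat

    def consensus_grade_level(pem_text: str) -> float:
        """Average several grade-level readability formulas into one consensus estimate."""
        scores = [
            textstat.flesch_kincaid_grade(pem_text),
            textstat.gunning_fog(pem_text),
            textstat.smog_index(pem_text),            # SMOG needs a reasonably long sample
            textstat.coleman_liau_index(pem_text),
            textstat.automated_readability_index(pem_text),
            textstat.linsear_write_formula(pem_text),
        ]
        return statistics.mean(scores)

    pem = (
        "Trigger finger happens when a finger gets stuck in a bent position. "
        "A tendon is like a rope that bends your finger. "
        "The rope slides through a small tunnel in your hand. "
        "If the tunnel gets tight, the finger can catch or lock."
    )
    grade = consensus_grade_level(pem)
    print(f"Consensus grade level: {grade:.1f}")
    print("Meets the sixth-grade target" if grade <= 6 else "Above the sixth-grade target")

Under the target used in the abstract, a response counts as readable when its consensus grade is at or below 6; textstat's text_standard() function, which computes its own consensus, would be an alternative to a simple mean.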
format Article
id doaj-art-b5eea9e221ea4cd9a71975af79559bde
institution Kabale University
issn 2589-5141
language English
publishDate 2025-01-01
publisher Elsevier
record_format Article
series Journal of Hand Surgery Global Online
spelling doaj-art-b5eea9e221ea4cd9a71975af79559bde | 2025-01-26T05:04:39Z | eng | Elsevier | Journal of Hand Surgery Global Online | 2589-5141 | 2025-01-01 | 7(1):37-40 | Leveraging ChatGPT to Produce Patient Education Materials for Common Hand Conditions | George Abdelmalek, MD; Harjot Uppal, MBA; Daniel Garcia, BS; Joseph Farshchian, MD; Arash Emami, MD; Andrew McGinniss, MD | All authors: Department of Orthopaedic Surgery, St. Joseph’s University Medical Center, Paterson, NJ | Corresponding author: Harjot Uppal, MBA, Department of Orthopaedic Surgery, St. Joseph’s University Medical Center, 504 Valley Road, Suite 203, Wayne, NJ 07470 | http://www.sciencedirect.com/science/article/pii/S2589514124001956 | Artificial intelligence; Hand; Patient education materials
spellingShingle George Abdelmalek, MD
Harjot Uppal, MBA
Daniel Garcia, BS
Joseph Farshchian, MD
Arash Emami, MD
Andrew McGinniss, MD
Leveraging ChatGPT to Produce Patient Education Materials for Common Hand Conditions
Journal of Hand Surgery Global Online
Artificial intelligence
Hand
Patient education materials
title Leveraging ChatGPT to Produce Patient Education Materials for Common Hand Conditions
title_full Leveraging ChatGPT to Produce Patient Education Materials for Common Hand Conditions
title_fullStr Leveraging ChatGPT to Produce Patient Education Materials for Common Hand Conditions
title_full_unstemmed Leveraging ChatGPT to Produce Patient Education Materials for Common Hand Conditions
title_short Leveraging ChatGPT to Produce Patient Education Materials for Common Hand Conditions
title_sort leveraging chatgpt to produce patient education materials for common hand conditions
topic Artificial intelligence
Hand
Patient education materials
url http://www.sciencedirect.com/science/article/pii/S2589514124001956
work_keys_str_mv AT georgeabdelmalekmd leveragingchatgpttoproducepatienteducationmaterialsforcommonhandconditions
AT harjotuppalmba leveragingchatgpttoproducepatienteducationmaterialsforcommonhandconditions
AT danielgarciabs leveragingchatgpttoproducepatienteducationmaterialsforcommonhandconditions
AT josephfarshchianmd leveragingchatgpttoproducepatienteducationmaterialsforcommonhandconditions
AT arashemamimd leveragingchatgpttoproducepatienteducationmaterialsforcommonhandconditions
AT andrewmcginnissmd leveragingchatgpttoproducepatienteducationmaterialsforcommonhandconditions