ChatGPT and oral cancer: a study on informational reliability

Abstract

Background: Artificial intelligence (AI) and large language models (LLMs) such as ChatGPT have transformed information retrieval, including in healthcare. ChatGPT, trained on diverse datasets, can provide medical advice but raises ethical and accuracy concerns. This study evaluates the accuracy of ChatGPT-3.5's answers to frequently asked questions about oral cancer, a condition in which early diagnosis is crucial for improving patient outcomes.

Methods: Twenty questions, selected from Google Trends and from questions asked by patients in the clinic, were posed to ChatGPT-3.5. The responses were rated for accuracy by medical oncologists and oral and maxillofacial radiologists. Inter-rater agreement was assessed with Fleiss' and Cohen's kappa, and the scores given by the two specialties were compared with the Mann-Whitney U test. The references provided by ChatGPT-3.5 were checked for authenticity.

Results: Of the 80 ratings given across the 20 questions, 41 (51.25%) were very good, 37 (46.25%) good, and 2 (2.50%) acceptable. Scores did not differ significantly between oral and maxillofacial radiologists and medical oncologists for any of the 20 questions. Of the 81 references cited in ChatGPT-3.5's answers, only 13 were scientific articles, 10 were fabricated, and the remainder pointed to websites.

Conclusion: ChatGPT provided reliable information about oral cancer and did not give incorrect information or suggestions; however, not all of the information it provided was based on genuine references.
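
Note: the abstract names Fleiss' and Cohen's kappa for inter-rater agreement and the Mann-Whitney U test for comparing the two specialties. The following is a minimal illustrative sketch only, not the authors' analysis code: it uses made-up ratings, assumes four raters (two per specialty) scoring the 20 answers on an ordinal scale, and relies on statsmodels, scikit-learn, and SciPy.

    # Illustrative sketch only: hypothetical ratings, not the study's data.
    # Assumed setup: 4 raters (2 oncologists, 2 radiologists), 20 answers,
    # ordinal scores where a higher value means a better rating.
    import numpy as np
    from scipy.stats import mannwhitneyu
    from sklearn.metrics import cohen_kappa_score
    from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

    rng = np.random.default_rng(0)
    # rows = 20 answers, columns = 4 raters (hypothetical scores of 3 or 4)
    ratings = rng.integers(3, 5, size=(20, 4))

    # Fleiss' kappa: overall agreement among all four raters
    counts, _ = aggregate_raters(ratings)   # answers x categories count table
    print("Fleiss' kappa:", fleiss_kappa(counts, method="fleiss"))

    # Cohen's kappa: pairwise agreement, e.g. between the two oncologists
    print("Cohen's kappa:", cohen_kappa_score(ratings[:, 0], ratings[:, 1]))

    # Mann-Whitney U: do the two specialties score the answers differently?
    oncologists = ratings[:, :2].ravel()
    radiologists = ratings[:, 2:].ravel()
    u, p = mannwhitneyu(oncologists, radiologists, alternative="two-sided")
    print(f"Mann-Whitney U = {u}, p = {p:.3f}")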

Bibliographic Details
Main Author: Mesude Çitir (Faculty of Dentistry, Department of Dentomaxillofacial Radiology, Tokat Gaziosmanpasa University)
Format: Article
Language: English
Published: BMC, 2025-01-01
Series: BMC Oral Health
ISSN: 1472-6831
Subjects: Accuracy, Artificial intelligence, ChatGPT, Oral cancer
Online Access:https://doi.org/10.1186/s12903-025-05479-4