1. Evaluating ChatGPT-3.5 in allergology: performance in the Polish Specialist Examination
Published 2024-02-01
Article

2. Performance Analysis of GPT-3 Content Extraction with the BERTScore and ROUGE Metrics
Published 2024-12-01
Subjects: “…GPT-3…”
(A minimal ROUGE/BERTScore scoring sketch appears after this list.)
Article

3
الخدمة المرجعية والرد على الاستفسارات باستخدام (ChatGpt3.5) و (Gemini) دراسة تقييمية مقارنة
Published 2024-10-01“…أظهرت النتائج وجود فروق ذات دلالة إحصائية بين ChatGPT3.5 و Gemini في الرد على الاستفسارات المرجعية لصالح Gemini؛ حيث اتسمت إجاباته بدقة وشمولية أكبر. …”
Get full text
Article -
4. Performance of ChatGPT-3.5 and ChatGPT-4 in the Taiwan National Pharmacist Licensing Examination: Comparative Evaluation Study
Published 2025-01-01
“…Background: OpenAI released ChatGPT-3.5 and GPT-4 between 2022 and 2023. GPT-3.5 has demonstrated proficiency in various examinations, particularly the United States Medical Licensing Examination. …”
Article

5. Readability and Appropriateness of Responses Generated by ChatGPT 3.5, ChatGPT 4.0, Gemini, and Microsoft Copilot for FAQs in Refractive Surgery
Published 2024-12-01
“…Results: Based on the responses generated by the LLM chatbots, 45% (n=18) of the answers given by ChatGPT 3.5 were correct, while this rate was 52.5% (n=21) for ChatGPT 4.0, 87.5% (n=35) for Gemini, and 60% (n=24) for Copilot. …”
Article

6. Performance assessment of ChatGPT 4, ChatGPT 3.5, Gemini Advanced Pro 1.5 and Bard 2.0 to problem solving in pathology in French language
Published 2025-01-01
“…Seventy questions (25 first-order single-response questions and 45 second-order multiple-response questions) were submitted in May 2023 to ChatGPT 3.5 and Bard 2.0, and in September 2024 to Gemini 1.5 and ChatGPT-4. …”
Article

7. Large language models improve the identification of emergency department visits for symptomatic kidney stones
Published 2025-01-01
Article

8. ChatGPT Conversations on Oral Cancer: Unveiling ChatGPT's Potential and Pitfalls
Published 2024-06-01
Article

9. Exploring the Impact of Large Language Models on Disease Diagnosis
Published 2025-01-01
Article

10. A Rule-Based Parser in Comparison with Statistical Neuronal Approaches in Terms of Grammar Competence
Published 2024-12-01
Article

11. Application of Conversational AI Models in Decision Making for Clinical Periodontology: Analysis and Predictive Modeling
Published 2025-01-01
“…The periodontology specialty examination accuracy of ChatGPT-4 was significantly better than that of ChatGPT-3.5 in both sessions (p < 0.05). In the spring session, ChatGPT-4 was significantly more effective in the English language (p = 0.0325), whereas no statistically significant difference between languages was found for ChatGPT-3.5. …”
Article

12. An investigative analysis – ChatGPT’s capability to excel in the Polish speciality exam in pathology
Published 2024-09-01
“…ChatGPT-3.5 achieved a performance of 45.38%, which is significantly below the minimum PES pass threshold. …”
Article

13. ChatGPT and oral cancer: a study on informational reliability
Published 2025-01-01
“…Methods: A total of 20 questions, selected from Google Trends and from questions asked by patients in the clinic, were posed to ChatGPT-3.5. …”
Article

14. A mixed methods crossover randomized controlled trial exploring the experiences, perceptions, and usability of artificial intelligence (ChatGPT) in health sciences education
Published 2024-12-01
“…Technology usability was compared between ChatGPT-3.5 and the traditional tools using questionnaires. …”
Article

15. Clinical Characteristics of Children with Acute Post-Streptococcal Glomerulonephritis and Re-Evaluation of Patients with Artificial Intelligence
Published 2024-09-01
“…Twelve questions about APSGN were directed to ChatGPT 3.5. The accuracy of the answers was evaluated by the researchers. …”
Article

16. Leveraging ChatGPT to Produce Patient Education Materials for Common Hand Conditions
Published 2025-01-01
“…We evaluate the readability of PEMs generated by ChatGPT 3.5 and 4.0 for common hand conditions. Methods: We used Chat Generative Pre-Trained Transformer (ChatGPT) 3.5 and 4.0 to generate PEMs for 50 common hand pathologies. …”
Article

17. Evaluation of Chat Generative Pre-trained Transformer and Microsoft Copilot Performance on the American Society of Surgery of the Hand Self-Assessment Examinations
Published 2025-01-01
“…Conclusions: In this study, ChatGPT-4 and Microsoft Copilot performed better on the hand surgery subspecialty examinations than ChatGPT-3.5. Microsoft Copilot was more accurate than ChatGPT-3.5 but less accurate than ChatGPT-4. …”
Article

18. Large language models for pretreatment education in pediatric radiation oncology: A comparative evaluation study
Published 2025-03-01
“…Responses were generated using GPT-3.5, GPT-4, and fine-tuned GPT-3.5, with fine-tuning based on pediatric radiotherapy guides from various institutions. …”
Article

19. TQFLL: a novel unified analytics framework for translation quality framework for large language model and human translation of allusions in multilingual corpora
Published 2025-01-01
“…The findings of the study indicate that the GPT-3.5 translated version exhibits higher quality than the Volctrans version when evaluated by a machine. …”
Article

20. ChatGPT-4 Omni’s superiority in answering multiple-choice oral radiology questions
Published 2025-02-01
“…ChatGPT-4o achieved the highest accuracy at 86.1%, followed by Google Bard at 61.8%. ChatGPT-3.5 demonstrated an accuracy rate of 43.9%, while Microsoft Copilot recorded a rate of 41.5%. …”
Article
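
Entry 2 evaluates GPT-3 content extraction with the BERTScore and ROUGE metrics. As a rough illustration only, the sketch below shows a common way to compute these scores in Python, assuming the rouge-score and bert-score packages are installed; the example strings, metric variants (ROUGE-1, ROUGE-L), and parameters are assumptions for illustration, not details taken from the cited study.

```python
# Illustrative only: score a model-generated sentence against a reference
# using ROUGE (rouge-score package) and BERTScore (bert-score package).
from rouge_score import rouge_scorer
from bert_score import score as bert_score

# Hypothetical reference/candidate pair (not data from any cited article).
reference = "ChatGPT-3.5 passed the first session of the licensing examination."
candidate = "The licensing examination's first session was passed by ChatGPT-3.5."

# ROUGE-1 and ROUGE-L F1 between the candidate and the reference.
scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
rouge = scorer.score(reference, candidate)
print({name: round(s.fmeasure, 3) for name, s in rouge.items()})

# BERTScore precision/recall/F1; each is a one-element tensor here.
P, R, F1 = bert_score([candidate], [reference], lang="en")
print(f"BERTScore F1: {F1.item():.3f}")
```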