Showing 1 - 20 of 81 results for search '"GPT-3"'
  1.
  2.
  3.

    Reference Service and Answering Inquiries Using ChatGPT-3.5 and Gemini: A Comparative Evaluation Study by عبد الرحمن صابر عبد الرحمن عمار

    Published 2024-10-01
    “…The results showed statistically significant differences between ChatGPT-3.5 and Gemini in answering reference inquiries, in favor of Gemini, whose answers were more accurate and comprehensive. …”
    Article
  4.

    Performance of ChatGPT-3.5 and ChatGPT-4 in the Taiwan National Pharmacist Licensing Examination: Comparative Evaluation Study by Ying-Mei Wang, Hung-Wei Shen, Tzeng-Ji Chen, Shu-Chiung Chiang, Ting-Guan Lin

    Published 2025-01-01
    “…Abstract: Background: OpenAI released ChatGPT-3.5 and GPT-4 between 2022 and 2023. GPT-3.5 has demonstrated proficiency in various examinations, particularly the United States Medical Licensing Examination. …”
    Article
  5.

    Readability and Appropriateness of Responses Generated by ChatGPT 3.5, ChatGPT 4.0, Gemini, and Microsoft Copilot for FAQs in Refractive Surgery by Fahri Onur Aydın, Burakhan Kürşat Aksoy, Ali Ceylan, Yusuf Berk Akbaş, Serhat Ermiş, Burçin Kepez Yıldız, Yusuf Yıldırım

    Published 2024-12-01
    “…Results: Based on the responses generated by the LLM chatbots, 45% (n=18) of the answers given by ChatGPT 3.5 were correct, while this rate was 52.5% (n=21) for ChatGPT 4.0, 87.5% (n=35) for Gemini, and 60% (n=24) for Copilot. …”
    Article
  6.

    Performance assessment of ChatGPT 4, ChatGPT 3.5, Gemini Advanced Pro 1.5 and Bard 2.0 to problem solving in pathology in French language by Georges Tarris, Laurent Martin

    Published 2025-01-01
    “…Seventy questions (25 first-order single-response questions and 45 second-order multiple-response questions) were submitted in May 2023 to ChatGPT 3.5 and Bard 2.0, and in September 2024 to Gemini 1.5 and ChatGPT-4. …”
    Article
  7.
  8.
  9.
  10.
  11.

    Application of Conversational AI Models in Decision Making for Clinical Periodontology: Analysis and Predictive Modeling by Albert Camlet, Aida Kusiak, Dariusz Świetlik

    Published 2025-01-01
    “…The periodontology specialty examination test accuracy of ChatGPT-4 was significantly better than that of ChatGPT-3.5 for both sessions (p < 0.05). For the ChatGPT-4 spring session, it was significantly more effective in the English language (p = 0.0325), whereas no statistically significant differences were found for ChatGPT-3.5. …”
    Article
  12.
  13.

    ChatGPT and oral cancer: a study on informational reliability by Mesude Çitir

    Published 2025-01-01
    “…Methods A total of 20 questions were asked to ChatGPT-3.5, selected from Google Trends and questions asked by patients in the clinic. …”
    Article
  14.
  15.

    Clinical Characteristics of Children with Acute Post-Streptococcal Glomerulonephritis and Re-Evaluation of Patients with Artificial Intelligence by Emre LEVENTOGLU, Mustafa SORAN

    Published 2024-09-01
    “…Twelve questions about APSGN were directed to ChatGPT 3.5. The accuracy of the answers was evaluated by the researchers. …”
    Article
  16.

    Leveraging ChatGPT to Produce Patient Education Materials for Common Hand Conditions by George Abdelmalek, MD, Harjot Uppal, MBA, Daniel Garcia, BS, Joseph Farshchian, MD, Arash Emami, MD, Andrew McGinniss, MD

    Published 2025-01-01
    “…We evaluate the readability of PEMs generated by ChatGPT 3.5 and 4.0 for common hand conditions. Methods: We used Chat Generative Pre-Trained Transformer (ChatGPT) 3.5 and 4.0 to generate PEMs for 50 common hand pathologies. …”
    Article
  17.

    Evaluation of Chat Generative Pre-trained Transformer and Microsoft Copilot Performance on the American Society of Surgery of the Hand Self-Assessment Examinations by Taylor R. Rakauskas, BS, Antonio Da Costa, BS, Camberly Moriconi, BS, Gurnoor Gill, BA, Jeffrey W. Kwong, MD MS, Nicolas Lee, MD

    Published 2025-01-01
    “…Conclusions: In this study, ChatGPT-4 and Microsoft Copilot performed better on the hand surgery subspecialty examinations than ChatGPT-3.5. Microsoft Copilot was more accurate than ChatGPT-3.5 but less accurate than ChatGPT-4. …”
    Article
  18.
  19.

    TQFLL: a novel unified analytics framework for translation quality framework for large language model and human translation of allusions in multilingual corpora by Li Yating, Muhammad Afzaal, Xiao Shanshan, Dina Abdel Salam El-Dakhs

    Published 2025-01-01
    “…The findings of the study indicate that the GPT-3.5 translated version exhibits higher quality than the Volctrans version when evaluated by a machine. …”
    Article
  20.

    ChatGPT-4 Omni’s superiority in answering multiple-choice oral radiology questions by Melek Tassoker

    Published 2025-02-01
    “…ChatGPT-4o achieved the highest accuracy at 86.1%, followed by Google Bard at 61.8%. ChatGPT-3.5 demonstrated an accuracy rate of 43.9%, while Microsoft Copilot recorded a rate of 41.5%. …”
    Article