JEEHP : Journal of Educational Evaluation for Health Professions

OPEN ACCESS
SEARCH
Search

Articles

Page Path
HOME > J Educ Eval Health Prof > Volume 20; 2023 > Article
Research article

Performance of ChatGPT, Bard, Claude, and Bing on the Peruvian National Licensing Medical Examination: a cross-sectional study
Betzy Clariza Torres-Zegarra1, Wagner Rios-Garcia2, Alvaro Micael Ñaña-Cordova1, Karen Fatima Arteaga-Cisneros1, Xiomara Cristina Benavente Chalco1, Marina Atena Bustamante Ordoñez1, Carlos Jesus Gutierrez Rios1, Carlos Alberto Ramos Godoy3,4, Kristell Luisa Teresa Panta Quezada4, Jesus Daniel Gutierrez-Arratia4,5, Javier Alejandro Flores-Cohaila1,4*

DOI: https://doi.org/10.3352/jeehp.2023.20.30
Published online: November 20, 2023

1Escuela de Medicina, Universidad Cientifica del Sur, Lima, Peru

2Sociedad Científica de Estudiantes de Medicina de Ica, Universidad Nacional San Luis Gonzaga, Ica, Peru

3Universidad Nacional de Cajamarca, Cajamarca, Peru

4Academic Department, USAMEDIC, Lima, Peru

5Neurogenetics Research Center, Instituto Nacional de Ciencias Neurologicas, Lima, Peru

*Corresponding email: jflorescoh@cientifica.edu.pe

Editor: Sun Huh, Hallym University, Korea

Received: October 6, 2023; Accepted: November 7, 2023

Purpose
We aimed to describe the performance of artificial intelligence chatbots (GPT-3.5, GPT-4, Bard, Claude, and Bing) on the Peruvian National Licensing Medical Examination (P-NLME) and to evaluate the educational value of the justifications they provided.
Methods
This was a cross-sectional analytical study. On July 25, 2023, each multiple-choice question (MCQ) from the P-NLME was entered 3 times into each chatbot (GPT-3.5, GPT-4, Bing, Bard, and Claude). Four medical educators then categorized the MCQs by medical area and item type and noted whether each MCQ required Peru-specific knowledge. They also assessed the educational value of the justifications provided by the 2 top performers (GPT-4 and Bing).
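As a rough sketch of this protocol, the Python snippet below submits each MCQ to a chatbot 3 times and keeps the modal answer. The query_chatbot stub, the majority-vote aggregation, and all identifiers are hypothetical illustrations: the study entered questions through the chatbots' web interfaces, and the abstract does not state how the 3 attempts were combined.

import random
from collections import Counter

OPTIONS = ["A", "B", "C", "D", "E"]

def query_chatbot(chatbot: str, question: str) -> str:
    # Hypothetical stand-in for the manual web-interface step used in
    # the study; it returns a random option so the sketch runs end to end.
    return random.choice(OPTIONS)

def answer_mcq(chatbot: str, question: str, attempts: int = 3) -> str:
    # Submit the same MCQ several times and keep the most common answer.
    # Majority voting is an assumption for illustration only.
    answers = [query_chatbot(chatbot, question) for _ in range(attempts)]
    return Counter(answers).most_common(1)[0][0]

print(answer_mcq("GPT-4", "Sample MCQ stem with options A-E"))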
Results
GPT-4 scored 86.7% and Bing 82.2%, followed by Bard and Claude; by comparison, the historical performance of Peruvian examinees was 55%. Among the factors examined, only requiring Peru-specific knowledge was associated with lower odds of a correct answer (odds ratio, 0.23; 95% confidence interval, 0.09–0.61); the remaining factors showed no association. In the assessment of the educational value of the justifications provided by GPT-4 and Bing, there were no significant differences between the 2 chatbots in certainty, usefulness, or potential use in the classroom.
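For readers unfamiliar with the statistic, the snippet below shows how an odds ratio and a Wald 95% confidence interval are computed from a 2×2 table of MCQ type (Peru-specific vs. other) by outcome (correct vs. incorrect). The counts are invented placeholders, not the study's data, and the authors' actual analysis may have used a regression model rather than this direct calculation.

import math

# Invented 2x2 table for illustration only: rows are MCQ type,
# columns are whether the chatbot answered correctly.
a, b = 12, 18   # Peru-specific MCQs: correct, incorrect
c, d = 140, 50  # other MCQs:         correct, incorrect

odds_ratio = (a * d) / (b * c)

# Wald 95% CI, computed on the log-odds-ratio scale.
se = math.sqrt(1/a + 1/b + 1/c + 1/d)
lower = math.exp(math.log(odds_ratio) - 1.96 * se)
upper = math.exp(math.log(odds_ratio) + 1.96 * se)
print(f"OR = {odds_ratio:.2f}, 95% CI ({lower:.2f}, {upper:.2f})")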
Conclusion
Among the chatbots, GPT-4 and Bing were the top performers, with Bing performing better on Peru-specific MCQs. Moreover, the educational value of the justifications provided by GPT-4 and Bing could be deemed appropriate. However, it is essential to begin addressing the educational value of these chatbots, rather than merely their performance on examinations.
