JEEHP : Journal of Educational Evaluation for Health Professions

13 "Artificial intelligence"
Review
Opportunities, challenges, and future directions of large language models, including ChatGPT in medical education: a systematic scoping review  
Xiaojun Xu, Yixiao Chen, Jing Miao
J Educ Eval Health Prof. 2024;21:6.   Published online March 15, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.6
  • 137 View
  • 177 Download
Abstract
Background
ChatGPT is a large language model (LLM) based on artificial intelligence (AI) capable of responding in multiple languages and generating nuanced and highly complex responses. While ChatGPT holds promising applications in medical education, its limitations and potential risks cannot be ignored.
Methods
A scoping review was conducted of English-language articles discussing ChatGPT in the context of medical education published after 2022. The literature search was performed in the PubMed/MEDLINE, Embase, and Web of Science databases, and information was extracted from the relevant studies that were ultimately included.
Results
ChatGPT exhibits various potential applications in medical education, such as providing personalized learning plans and materials, creating clinical practice simulation scenarios, and assisting in writing articles. However, challenges associated with academic integrity, data accuracy, and potential harm to learning were also highlighted in the literature. The paper emphasizes certain recommendations for using ChatGPT, including the establishment of guidelines. Based on the review, 3 key research areas were proposed: cultivating the ability of medical students to use ChatGPT correctly, integrating ChatGPT into teaching activities and processes, and proposing standards for the use of AI by medical students.
Conclusion
ChatGPT has the potential to transform medical education, but careful consideration is required for its full integration. To harness the full potential of ChatGPT in medical education, attention should not only be given to the capabilities of AI but also to its impact on students and teachers.
Research articles
ChatGPT (GPT-4) passed the Japanese National License Examination for Pharmacists in 2022, answering all items including those with diagrams: a descriptive study  
Hiroyasu Sato, Katsuhiko Ogasawara
J Educ Eval Health Prof. 2024;21:4.   Published online February 28, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.4
  • 664 View
  • 120 Download
Abstract
Purpose
The objective of this study was to assess the performance of ChatGPT (GPT-4) on all items, including those with diagrams, in the Japanese National License Examination for Pharmacists (JNLEP) and compare it with the previous GPT-3.5 model’s performance.
Methods
This study targeted the 107th JNLEP, conducted in 2022; all 344 items were input into the GPT-4 model. Separately, 284 items, excluding those with diagrams, were entered into the GPT-3.5 model. The answers were categorized and analyzed to determine accuracy rates by category, subject, and the presence or absence of diagrams. The accuracy rates were compared with the main passing criteria (overall accuracy rate ≥62.9%).
Results
The overall accuracy rate for all items in the 107th JNLEP in GPT-4 was 72.5%, successfully meeting all the passing criteria. For the set of items without diagrams, the accuracy rate was 80.0%, which was significantly higher than that of the GPT-3.5 model (43.5%). The GPT-4 model demonstrated an accuracy rate of 36.1% for items that included diagrams.
Conclusion
Advancements that allow GPT-4 to process images have made it possible for LLMs to answer all items in medical-related license examinations. This study’s findings confirm that ChatGPT (GPT-4) possesses sufficient knowledge to meet the passing criteria.
Information amount, accuracy, and relevance of generative artificial intelligence platforms’ answers regarding learning objectives of medical arthropodology evaluated in English and Korean queries in December 2023: a descriptive study
Hyunju Lee, Soobin Park
J Educ Eval Health Prof. 2023;20:39.   Published online December 28, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.39
  • 1,039 View
  • 138 Download
  • 1 Crossref
Abstract
Purpose
This study assessed the performance of 6 generative artificial intelligence (AI) platforms on the learning objectives of medical arthropodology in a parasitology class in Korea. We queried the platforms in both Korean and English and evaluated the amount of information, accuracy, and relevance of their answers in each language.
Methods
From December 15 to 17, 2023, 6 generative AI platforms—Bard, Bing, Claude, Clova X, GPT-4, and Wrtn—were tested on 7 medical arthropodology learning objectives in English and Korean. Clova X and Wrtn are platforms from Korean companies. Responses were evaluated using specific criteria for the English and Korean queries.
Results
Bard had abundant information but was 4th in accuracy and relevance. GPT-4, with high information content, ranked 1st in accuracy and relevance. Clova X was 4th in the amount of information but 2nd in accuracy and relevance. Bing provided less information, with moderate accuracy and relevance. Wrtn’s answers were short, with average accuracy and relevance. Claude had a reasonable amount of information but lower accuracy and relevance. The responses in English were superior in all aspects. Clova X was notably optimized for Korean, leading in relevance for Korean queries.
Conclusion
In a study of 6 generative AI platforms applied to medical arthropodology, GPT-4 excelled overall, while Clova X, a Korea-based AI product, achieved 100% relevance in Korean queries, the highest among its peers. Utilizing these AI platforms in classrooms improved the authors’ self-efficacy and interest in the subject, offering a positive experience of interacting with generative AI platforms to question and receive information.

Citations to this article as recorded by  
  • Opportunities, challenges, and future directions of large language models, including ChatGPT in medical education: a systematic scoping review
    Xiaojun Xu, Yixiao Chen, Jing Miao
    Journal of Educational Evaluation for Health Professions.2024; 21: 6.     CrossRef
Review
Application of artificial intelligence chatbots, including ChatGPT, in education, scholarly work, programming, and content generation and its prospects: a narrative review
Tae Won Kim
J Educ Eval Health Prof. 2023;20:38.   Published online December 27, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.38
  • 1,534 View
  • 289 Download
  • 1 Web of Science
  • 2 Crossref
Abstract
This study aims to explore ChatGPT’s (GPT-3.5 version) functionalities, including reinforcement learning, diverse applications, and limitations. ChatGPT is an artificial intelligence (AI) chatbot powered by OpenAI’s Generative Pre-trained Transformer (GPT) model. The chatbot’s applications span education, programming, content generation, and more, demonstrating its versatility. ChatGPT can improve education by creating assignments and offering personalized feedback, as shown by its notable performance on medical examinations, including the United States Medical Licensing Examination. However, concerns include plagiarism, reliability, and educational disparities. It aids in various research tasks, from design to writing, and has shown proficiency in summarizing and suggesting titles. Its use in scientific writing and language translation is promising, but professional oversight is needed for accuracy and originality. It assists in programming tasks like writing code, debugging, and guiding installation and updates. It offers diverse applications, from cheering up individuals to generating creative content like essays, news articles, and business plans. Unlike conventional search engines, which are keyword-based and non-interactive, ChatGPT provides interactive, generative responses and understands context, making it more akin to human conversation. ChatGPT has limitations, such as potential bias, dependence on outdated data, and revenue generation challenges. Nonetheless, ChatGPT is considered to be a transformative AI tool poised to redefine the future of generative technology. In conclusion, advancements in AI, such as ChatGPT, are altering how knowledge is acquired and applied, marking a shift from search engines to creativity engines. This transformation highlights the increasing importance of AI literacy and the ability to effectively utilize AI in various domains of life.

Citations to this article as recorded by  
  • Opportunities, challenges, and future directions of large language models, including ChatGPT in medical education: a systematic scoping review
    Xiaojun Xu, Yixiao Chen, Jing Miao
    Journal of Educational Evaluation for Health Professions.2024; 21: 6.     CrossRef
  • Artificial Intelligence: Fundamentals and Breakthrough Applications in Epilepsy
    Wesley Kerr, Sandra Acosta, Patrick Kwan, Gregory Worrell, Mohamad A. Mikati
    Epilepsy Currents.2024;[Epub]     CrossRef
Brief report
ChatGPT (GPT-3.5) as an assistant tool in microbial pathogenesis studies in Sweden: a cross-sectional comparative study  
Catharina Hultgren, Annica Lindkvist, Volkan Özenci, Sophie Curbo
J Educ Eval Health Prof. 2023;20:32.   Published online November 22, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.32
  • 788 View
  • 93 Download
  • 1 Web of Science
  • 2 Crossref
Abstract
ChatGPT (GPT-3.5) has entered higher education, and there is a need to determine how to use it effectively. This descriptive study compared the ability of GPT-3.5 and teachers to answer questions from dental students and to construct detailed intended learning outcomes. When the responses were rated on a Likert scale, GPT-3.5 answered the dental students’ questions in a similar or even more elaborate way than the answers previously provided by a teacher. GPT-3.5 was also asked to construct detailed intended learning outcomes for a course in microbial pathogenesis, and when these were rated on a Likert scale, they were largely found to be irrelevant. Since students are using GPT-3.5, it is important that instructors learn how to make the best use of it, both to advise students and to benefit from its potential.

Citations to this article as recorded by  
  • Opportunities, challenges, and future directions of large language models, including ChatGPT in medical education: a systematic scoping review
    Xiaojun Xu, Yixiao Chen, Jing Miao
    Journal of Educational Evaluation for Health Professions.2024; 21: 6.     CrossRef
  • Information amount, accuracy, and relevance of generative artificial intelligence platforms’ answers regarding learning objectives of medical arthropodology evaluated in English and Korean queries in December 2023: a descriptive study
    Hyunju Lee, Soobin Park
    Journal of Educational Evaluation for Health Professions.2023; 20: 39.     CrossRef
Research articles
Performance of ChatGPT, Bard, Claude, and Bing on the Peruvian National Licensing Medical Examination: a cross-sectional study  
Betzy Clariza Torres-Zegarra, Wagner Rios-Garcia, Alvaro Micael Ñaña-Cordova, Karen Fatima Arteaga-Cisneros, Xiomara Cristina Benavente Chalco, Marina Atena Bustamante Ordoñez, Carlos Jesus Gutierrez Rios, Carlos Alberto Ramos Godoy, Kristell Luisa Teresa Panta Quezada, Jesus Daniel Gutierrez-Arratia, Javier Alejandro Flores-Cohaila
J Educ Eval Health Prof. 2023;20:30.   Published online November 20, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.30
  • 1,183 View
  • 159 Download
  • 4 Web of Science
  • 4 Crossref
Abstract
Purpose
We aimed to describe the performance and evaluate the educational value of justifications provided by artificial intelligence chatbots, including GPT-3.5, GPT-4, Bard, Claude, and Bing, on the Peruvian National Medical Licensing Examination (P-NLME).
Methods
This was a cross-sectional analytical study. On July 25, 2023, each multiple-choice question (MCQ) from the P-NLME was entered into each chatbot (GPT-3.5, GPT-4, Bing, Bard, and Claude) 3 times. Then, 4 medical educators categorized the MCQs in terms of medical area, item type, and whether the MCQ required Peru-specific knowledge. They also assessed the educational value of the justifications provided by the 2 top performers (GPT-4 and Bing).
Results
GPT-4 scored 86.7% and Bing scored 82.2%, followed by Bard and Claude; by comparison, the historical performance of Peruvian examinees was 55%. Among the factors examined, only MCQs that required Peru-specific knowledge had lower odds of being answered correctly (odds ratio, 0.23; 95% confidence interval, 0.09–0.61), whereas the remaining factors showed no associations. In the assessment of the educational value of the justifications provided by GPT-4 and Bing, there were no significant differences between the 2 chatbots in certainty, usefulness, or potential use in the classroom.
Conclusion
Among the chatbots, GPT-4 and Bing were the top performers, with Bing performing better on Peru-specific MCQs. Moreover, the educational value of the justifications provided by GPT-4 and Bing could be deemed appropriate. However, it is essential to start addressing the educational value of these chatbots, rather than merely their performance on examinations.

Citations to this article as recorded by  
  • Performance of GPT-4V in Answering the Japanese Otolaryngology Board Certification Examination Questions: Evaluation Study
    Masao Noda, Takayoshi Ueno, Ryota Koshu, Yuji Takaso, Mari Dias Shimada, Chizu Saito, Hisashi Sugimoto, Hiroaki Fushiki, Makoto Ito, Akihiro Nomura, Tomokazu Yoshizaki
    JMIR Medical Education.2024; 10: e57054.     CrossRef
  • Response to Letter to the Editor re: “Artificial Intelligence Versus Expert Plastic Surgeon: Comparative Study Shows ChatGPT ‘Wins' Rhinoplasty Consultations: Should We Be Worried? [1]” by Durairaj et al.
    Kay Durairaj, Omer Baker
    Facial Plastic Surgery & Aesthetic Medicine.2024;[Epub]     CrossRef
  • Opportunities, challenges, and future directions of large language models, including ChatGPT in medical education: a systematic scoping review
    Xiaojun Xu, Yixiao Chen, Jing Miao
    Journal of Educational Evaluation for Health Professions.2024; 21: 6.     CrossRef
  • Information amount, accuracy, and relevance of generative artificial intelligence platforms’ answers regarding learning objectives of medical arthropodology evaluated in English and Korean queries in December 2023: a descriptive study
    Hyunju Lee, Soobin Park
    Journal of Educational Evaluation for Health Professions.2023; 20: 39.     CrossRef
Medical students’ patterns of using ChatGPT as a feedback tool and perceptions of ChatGPT in a Leadership and Communication course in Korea: a cross-sectional study  
Janghee Park
J Educ Eval Health Prof. 2023;20:29.   Published online November 10, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.29
  • 1,337 View
  • 136 Download
  • 2 Web of Science
  • 4 Crossref
Abstract
Purpose
This study aimed to analyze patterns of using ChatGPT before and after group activities and to explore medical students’ perceptions of ChatGPT as a feedback tool in the classroom.
Methods
The study included 99 2nd-year pre-medical students who participated in a “Leadership and Communication” course from March to June 2023. Students engaged in both individual and group activities related to negotiation strategies. ChatGPT was used to provide feedback on their solutions. A survey was administered to assess students’ perceptions of ChatGPT’s feedback, its use in the classroom, and the strengths and challenges of ChatGPT from May 17 to 19, 2023.
Results
The students indicated that ChatGPT’s feedback was helpful, and they revised and resubmitted their group answers in various ways after receiving it. The majority of respondents agreed with the use of ChatGPT during class. The most common response regarding the appropriate timing for using ChatGPT’s feedback was “after the first round of discussion, for revisions.” Satisfaction with ChatGPT’s feedback, including its correctness, usefulness, and ethics, differed significantly depending on whether ChatGPT was used during class, but did not differ significantly according to gender or previous experience with ChatGPT. The greatest advantages were “providing answers to questions” and “summarizing information,” and the greatest disadvantage was “producing information without supporting evidence.”
Conclusion
The students were aware of the advantages and disadvantages of ChatGPT, and they had a positive attitude toward using ChatGPT in the classroom.

Citations to this article as recorded by  
  • Opportunities, challenges, and future directions of large language models, including ChatGPT in medical education: a systematic scoping review
    Xiaojun Xu, Yixiao Chen, Jing Miao
    Journal of Educational Evaluation for Health Professions.2024; 21: 6.     CrossRef
  • Embracing ChatGPT for Medical Education: Exploring Its Impact on Doctors and Medical Students
    Yijun Wu, Yue Zheng, Baijie Feng, Yuqi Yang, Kai Kang, Ailin Zhao
    JMIR Medical Education.2024; 10: e52483.     CrossRef
  • ChatGPT and Clinical Training: Perception, Concerns, and Practice of Pharm-D Students
    Mohammed Zawiah, Fahmi Al-Ashwal, Lobna Gharaibeh, Rana Abu Farha, Karem Alzoubi, Khawla Abu Hammour, Qutaiba A Qasim, Fahd Abrah
    Journal of Multidisciplinary Healthcare.2023; Volume 16: 4099.     CrossRef
  • Information amount, accuracy, and relevance of generative artificial intelligence platforms’ answers regarding learning objectives of medical arthropodology evaluated in English and Korean queries in December 2023: a descriptive study
    Hyunju Lee, Soobin Park
    Journal of Educational Evaluation for Health Professions.2023; 20: 39.     CrossRef
Efficacy and limitations of ChatGPT as a biostatistical problem-solving tool in medical education in Serbia: a descriptive study  
Aleksandra Ignjatović, Lazar Stevanović
J Educ Eval Health Prof. 2023;20:28.   Published online October 16, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.28
  • 1,728 View
  • 166 Download
  • 1 Web of Science
  • 1 Crossref
Abstract
Purpose
This study aimed to assess the performance of ChatGPT (GPT-3.5 and GPT-4) as a study tool in solving biostatistical problems and to identify any potential drawbacks that might arise from using ChatGPT in medical education, particularly in solving practical biostatistical problems.
Methods
In this descriptive study, ChatGPT was tested to evaluate its ability to solve biostatistical problems from the Handbook of Medical Statistics by Peacock and Peacock. Tables from the problems were transformed into textual questions. Ten biostatistical problems were randomly chosen and used as text-based input for conversation with ChatGPT (versions 3.5 and 4).
Results
GPT-3.5 solved 5 practical problems on the first attempt, related to categorical data, cross-sectional studies, measuring reliability, probability properties, and the t-test, but it failed to provide correct answers regarding analysis of variance, the chi-square test, and sample size within 3 attempts. GPT-4 additionally solved a task related to the confidence interval on the first attempt and, with precise guidance and monitoring, solved all questions within 3 attempts.
Conclusion
The assessment of both versions of ChatGPT on 10 biostatistical problems revealed below-average performance, with correct response rates of 5 and 6 out of 10 on the first attempt for GPT-3.5 and GPT-4, respectively. GPT-4 succeeded in providing all correct answers within 3 attempts. These findings indicate that this tool can be wrong even when performing and presenting statistical analyses; students should be aware of ChatGPT’s limitations and be careful when incorporating this model into medical education.

Citations to this article as recorded by  
  • Can Generative AI and ChatGPT Outperform Humans on Cognitive-Demanding Problem-Solving Tasks in Science?
    Xiaoming Zhai, Matthew Nyaaba, Wenchao Ma
    Science & Education.2024;[Epub]     CrossRef
Brief report
Comparing ChatGPT’s ability to rate the degree of stereotypes and the consistency of stereotype attribution with those of medical students in New Zealand in developing a similarity rating test: a methodological study  
Chao-Cheng Lin, Zaine Akuhata-Huntington, Che-Wei Hsu
J Educ Eval Health Prof. 2023;20:17.   Published online June 12, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.17
  • 1,703 View
  • 126 Download
  • 1 Web of Science
  • 1 Crossref
Abstract
Learning about one’s implicit bias is crucial for improving one’s cultural competency and thereby reducing health inequity. To evaluate bias among medical students following a previously developed cultural training program targeting New Zealand Māori, we developed a text-based, self-evaluation tool called the Similarity Rating Test (SRT). The development process of the SRT was resource-intensive, limiting its generalizability and applicability. Here, we explored the potential of ChatGPT, an automated chatbot, to assist in the development process of the SRT by comparing ChatGPT’s and students’ evaluations of the SRT. Although the results showed neither significant equivalence nor a significant difference between ChatGPT’s and students’ ratings, ChatGPT’s ratings were more consistent than students’ ratings. The consistency rate was higher for non-stereotypical than for stereotypical statements, regardless of rater type. Further studies are warranted to validate ChatGPT’s potential for assisting in SRT development for implementation in medical education and the evaluation of ethnic stereotypes and related topics.

Citations to this article as recorded by  
  • Efficacy and limitations of ChatGPT as a biostatistical problem-solving tool in medical education in Serbia: a descriptive study
    Aleksandra Ignjatović, Lazar Stevanović
    Journal of Educational Evaluation for Health Professions.2023; 20: 28.     CrossRef
Review
Can an artificial intelligence chatbot be the author of a scholarly article?  
Ju Yoen Lee
J Educ Eval Health Prof. 2023;20:6.   Published online February 27, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.6
  • 7,509 View
  • 634 Download
  • 34 Web of Science
  • 35 Crossref
Abstract
At the end of 2022, the appearance of ChatGPT, an artificial intelligence (AI) chatbot with remarkable writing ability, caused a great sensation in academia. The chatbot turned out to be very capable, but also capable of deception, and news broke that several researchers had listed the chatbot (including its earlier version) as a co-author of their academic papers. In response, Nature and Science stated that this chatbot cannot be listed as an author in the papers they publish. Since an AI chatbot is not a human being, under the current legal system the text automatically generated by an AI chatbot cannot be a copyrighted work; thus, an AI chatbot cannot be the author of a copyrighted work. Current AI chatbots such as ChatGPT are much more advanced than search engines in that they produce original text, but they still remain at the level of a search engine in that they cannot take responsibility for their writing. For this reason, they also cannot be authors from the perspective of research ethics.

Citations to this article as recorded by  
  • Risks of abuse of large language models, like ChatGPT, in scientific publishing: Authorship, predatory publishing, and paper mills
    Graham Kendall, Jaime A. Teixeira da Silva
    Learned Publishing.2024; 37(1): 55.     CrossRef
  • Can ChatGPT be an author? A study of artificial intelligence authorship policies in top academic journals
    Brady D. Lund, K.T. Naheem
    Learned Publishing.2024; 37(1): 13.     CrossRef
  • The Role of AI in Writing an Article and Whether it Can Be a Co-author: What if it Gets Support From 2 Different AIs Like ChatGPT and Google Bard for the Same Theme?
    İlhan Bahşi, Ayşe Balat
    Journal of Craniofacial Surgery.2024; 35(1): 274.     CrossRef
  • Artificial Intelligence–Generated Scientific Literature: A Critical Appraisal
    Justyna Zybaczynska, Matthew Norris, Sunjay Modi, Jennifer Brennan, Pooja Jhaveri, Timothy J. Craig, Taha Al-Shaikhly
    The Journal of Allergy and Clinical Immunology: In Practice.2024; 12(1): 106.     CrossRef
  • Does Google’s Bard Chatbot perform better than ChatGPT on the European hand surgery exam?
    Goetsch Thibaut, Armaghan Dabbagh, Philippe Liverneaux
    International Orthopaedics.2024; 48(1): 151.     CrossRef
  • A Brief Review of the Efficacy in Artificial Intelligence and Chatbot-Generated Personalized Fitness Regimens
    Daniel K. Bays, Cole Verble, Kalyn M. Powers Verble
    Strength & Conditioning Journal.2024;[Epub]     CrossRef
  • Academic publisher guidelines on AI usage: A ChatGPT supported thematic analysis
    Mike Perkins, Jasper Roe
    F1000Research.2024; 12: 1398.     CrossRef
  • The Use of Artificial Intelligence in Writing Scientific Review Articles
    Melissa A. Kacena, Lilian I. Plotkin, Jill C. Fehrenbacher
    Current Osteoporosis Reports.2024; 22(1): 115.     CrossRef
  • Using AI to Write a Review Article Examining the Role of the Nervous System on Skeletal Homeostasis and Fracture Healing
    Murad K. Nazzal, Ashlyn J. Morris, Reginald S. Parker, Fletcher A. White, Roman M. Natoli, Jill C. Fehrenbacher, Melissa A. Kacena
    Current Osteoporosis Reports.2024; 22(1): 217.     CrossRef
  • GenAI et al.: Cocreation, Authorship, Ownership, Academic Ethics and Integrity in a Time of Generative AI
    Aras Bozkurt
    Open Praxis.2024; 16(1): 1.     CrossRef
  • An integrative decision-making framework to guide policies on regulating ChatGPT usage
    Umar Ali Bukar, Md Shohel Sayeed, Siti Fatimah Abdul Razak, Sumendra Yogarayan, Oluwatosin Ahmed Amodu
    PeerJ Computer Science.2024; 10: e1845.     CrossRef
  • Artificial Intelligence and Its Role in Medical Research
    Anurag Gola, Ambarish Das, Amar B. Gumataj, S. Amirdhavarshini, J. Venkatachalam
    Current Medical Issues.2024; 22(2): 97.     CrossRef
  • Universal skepticism of ChatGPT: a review of early literature on chat generative pre-trained transformer
    Casey Watters, Michal K. Lemanski
    Frontiers in Big Data.2023;[Epub]     CrossRef
  • The importance of human supervision in the use of ChatGPT as a support tool in scientific writing
    William Castillo-González
    Metaverse Basic and Applied Research.2023;[Epub]     CrossRef
  • ChatGPT for Future Medical and Dental Research
    Bader Fatani
    Cureus.2023;[Epub]     CrossRef
  • Chatbots in Medical Research
    Punit Sharma
    Clinical Nuclear Medicine.2023; 48(9): 838.     CrossRef
  • Potential applications of ChatGPT in dermatology
    Nicolas Kluger
    Journal of the European Academy of Dermatology and Venereology.2023;[Epub]     CrossRef
  • The emergent role of artificial intelligence, natural learning processing, and large language models in higher education and research
    Tariq Alqahtani, Hisham A. Badreldin, Mohammed Alrashed, Abdulrahman I. Alshaya, Sahar S. Alghamdi, Khalid bin Saleh, Shuroug A. Alowais, Omar A. Alshaya, Ishrat Rahman, Majed S. Al Yami, Abdulkareem M. Albekairy
    Research in Social and Administrative Pharmacy.2023; 19(8): 1236.     CrossRef
  • ChatGPT Performance on the American Urological Association Self-assessment Study Program and the Potential Influence of Artificial Intelligence in Urologic Training
    Nicholas A. Deebel, Ryan Terlecki
    Urology.2023; 177: 29.     CrossRef
  • Intelligence or artificial intelligence? More hard problems for authors of Biological Psychology, the neurosciences, and everyone else
    Thomas Ritz
    Biological Psychology.2023; 181: 108590.     CrossRef
  • The ethics of disclosing the use of artificial intelligence tools in writing scholarly manuscripts
    Mohammad Hosseini, David B Resnik, Kristi Holmes
    Research Ethics.2023; 19(4): 449.     CrossRef
  • How trustworthy is ChatGPT? The case of bibliometric analyses
    Faiza Farhat, Shahab Saquib Sohail, Dag Øivind Madsen
    Cogent Engineering.2023;[Epub]     CrossRef
  • Disclosing use of Artificial Intelligence: Promoting transparency in publishing
    Parvaiz A. Koul
    Lung India.2023; 40(5): 401.     CrossRef
  • ChatGPT in medical research: challenging time ahead
    Daideepya C Bhargava, Devendra Jadav, Vikas P Meshram, Tanuj Kanchan
    Medico-Legal Journal.2023; 91(4): 223.     CrossRef
  • Academic publisher guidelines on AI usage: A ChatGPT supported thematic analysis
    Mike Perkins, Jasper Roe
    F1000Research.2023; 12: 1398.     CrossRef
  • Ethical consideration of the use of generative artificial intelligence, including ChatGPT in writing a nursing article
    Sun Huh
    Child Health Nursing Research.2023; 29(4): 249.     CrossRef
  • ChatGPT in medical writing: A game-changer or a gimmick?
    Shital Sarah Ahaley, Ankita Pandey, Simran Kaur Juneja, Tanvi Suhane Gupta, Sujatha Vijayakumar
    Perspectives in Clinical Research.2023;[Epub]     CrossRef
  • Artificial Intelligence-Supported Systems in Anesthesiology and Its Standpoint to Date—A Review
    Fiona M. P. Pham
    Open Journal of Anesthesiology.2023; 13(07): 140.     CrossRef
  • ChatGPT as an innovative tool for increasing sales in online stores
    Michał Orzoł, Katarzyna Szopik-Depczyńska
    Procedia Computer Science.2023; 225: 3450.     CrossRef
  • Intelligent Plagiarism as a Misconduct in Academic Integrity
    Jesús Miguel Muñoz-Cantero, Eva Maria Espiñeira-Bellón
    Acta Médica Portuguesa.2023; 37(1): 1.     CrossRef
  • Follow-up of Artificial Intelligence Development and its Controlled Contribution to the Article: Step to the Authorship?
    Ekrem Solmaz
    European Journal of Therapeutics.2023;[Epub]     CrossRef
  • May Artificial Intelligence Be a Co-Author on an Academic Paper?
    Ayşe Balat, İlhan Bahşi
    European Journal of Therapeutics.2023; 29(3): e12.     CrossRef
  • Opportunities and challenges for ChatGPT and large language models in biomedicine and health
    Shubo Tian, Qiao Jin, Lana Yeganova, Po-Ting Lai, Qingqing Zhu, Xiuying Chen, Yifan Yang, Qingyu Chen, Won Kim, Donald C Comeau, Rezarta Islamaj, Aadit Kapoor, Xin Gao, Zhiyong Lu
    Briefings in Bioinformatics.2023;[Epub]     CrossRef
  • ChatGPT: "To be or not to be" ... in academic research. The human mind's analytical rigor and capacity to discriminate between AI bots' truths and hallucinations
    Aurelian Anghelescu, Ilinca Ciobanu, Constantin Munteanu, Lucia Ana Maria Anghelescu, Gelu Onose
    Balneo and PRM Research Journal.2023; 14(Vol.14, no): 614.     CrossRef
  • Editorial policies of Journal of Educational Evaluation for Health Professions on the use of generative artificial intelligence in article writing and peer review
    Sun Huh
    Journal of Educational Evaluation for Health Professions.2023; 20: 40.     CrossRef
Brief report
Are ChatGPT’s knowledge and interpretation ability comparable to those of medical students in Korea for taking a parasitology examination?: a descriptive study  
Sun Huh
J Educ Eval Health Prof. 2023;20:1.   Published online January 11, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.1
  • 11,155 View
  • 1,014 Download
  • 118 Web of Science
  • 66 Crossref
Abstract
This study aimed to compare the knowledge and interpretation ability of ChatGPT, a language model of artificial general intelligence, with those of medical students in Korea by administering a parasitology examination to both ChatGPT and medical students. The examination consisted of 79 items and was administered to ChatGPT on January 1, 2023. The examination results were analyzed in terms of ChatGPT’s overall performance score, its correct answer rate by the items’ knowledge level, and the acceptability of its explanations of the items. ChatGPT’s performance was lower than that of the medical students, and ChatGPT’s correct answer rate was not related to the items’ knowledge level. However, there was a relationship between acceptable explanations and correct answers. In conclusion, ChatGPT’s knowledge and interpretation ability for this parasitology examination were not yet comparable to those of medical students in Korea.

Citations to this article as recorded by  
  • Performance of ChatGPT on the India Undergraduate Community Medicine Examination: Cross-Sectional Study
    Aravind P Gandhi, Felista Karen Joesph, Vineeth Rajagopal, P Aparnavi, Sushma Katkuri, Sonal Dayama, Prakasini Satapathy, Mahalaqua Nazli Khatib, Shilpa Gaidhane, Quazi Syed Zahiruddin, Ashish Behera
    JMIR Formative Research.2024; 8: e49964.     CrossRef
  • Large Language Models and Artificial Intelligence: A Primer for Plastic Surgeons on the Demonstrated and Potential Applications, Promises, and Limitations of ChatGPT
    Jad Abi-Rafeh, Hong Hao Xu, Roy Kazan, Ruth Tevlin, Heather Furnas
    Aesthetic Surgery Journal.2024; 44(3): 329.     CrossRef
  • Unveiling the ChatGPT phenomenon: Evaluating the consistency and accuracy of endodontic question answers
    Ana Suárez, Víctor Díaz‐Flores García, Juan Algar, Margarita Gómez Sánchez, María Llorente de Pedro, Yolanda Freire
    International Endodontic Journal.2024; 57(1): 108.     CrossRef
  • Bob or Bot: Exploring ChatGPT's Answers to University Computer Science Assessment
    Mike Richards, Kevin Waugh, Mark Slaymaker, Marian Petre, John Woodthorpe, Daniel Gooch
    ACM Transactions on Computing Education.2024; 24(1): 1.     CrossRef
  • Evaluating ChatGPT as a self‐learning tool in medical biochemistry: A performance assessment in undergraduate medical university examination
    Krishna Mohan Surapaneni, Anusha Rajajagadeesan, Lakshmi Goudhaman, Shalini Lakshmanan, Saranya Sundaramoorthi, Dineshkumar Ravi, Kalaiselvi Rajendiran, Porchelvan Swaminathan
    Biochemistry and Molecular Biology Education.2024; 52(2): 237.     CrossRef
  • Examining the use of ChatGPT in public universities in Hong Kong: a case study of restricted access areas
    Michelle W. T. Cheng, Iris H. Y. YIM
    Discover Education.2024;[Epub]     CrossRef
  • Performance of ChatGPT on Ophthalmology-Related Questions Across Various Examination Levels: Observational Study
    Firas Haddad, Joanna S Saade
    JMIR Medical Education.2024; 10: e50842.     CrossRef
  • A comparative vignette study: Evaluating the potential role of a generative AI model in enhancing clinical decision‐making in nursing
    Mor Saban, Ilana Dubovi
    Journal of Advanced Nursing.2024;[Epub]     CrossRef
  • Comparison of the Performance of GPT-3.5 and GPT-4 With That of Medical Students on the Written German Medical Licensing Examination: Observational Study
    Annika Meyer, Janik Riese, Thomas Streichert
    JMIR Medical Education.2024; 10: e50965.     CrossRef
  • From hype to insight: Exploring ChatGPT's early footprint in education via altmetrics and bibliometrics
    Lung‐Hsiang Wong, Hyejin Park, Chee‐Kit Looi
    Journal of Computer Assisted Learning.2024;[Epub]     CrossRef
  • A scoping review of artificial intelligence in medical education: BEME Guide No. 84
    Morris Gordon, Michelle Daniel, Aderonke Ajiboye, Hussein Uraiby, Nicole Y. Xu, Rangana Bartlett, Janice Hanson, Mary Haas, Maxwell Spadafore, Ciaran Grafton-Clarke, Rayhan Yousef Gasiea, Colin Michie, Janet Corral, Brian Kwan, Diana Dolmans, Satid Thamma
    Medical Teacher.2024; : 1.     CrossRef
  • Üniversite Öğrencilerinin ChatGPT 3,5 Deneyimleri: Yapay Zekâyla Yazılmış Masal Varyantları
    Bilge GÖK, Fahri TEMİZYÜREK, Özlem BAŞ
    Korkut Ata Türkiyat Araştırmaları Dergisi.2024; (14): 1040.     CrossRef
  • Tracking ChatGPT Research: Insights From the Literature and the Web
    Omar Mubin, Fady Alnajjar, Zouheir Trabelsi, Luqman Ali, Medha Mohan Ambali Parambil, Zhao Zou
    IEEE Access.2024; 12: 30518.     CrossRef
  • Potential applications of ChatGPT in obstetrics and gynecology in Korea: a review article
    YooKyung Lee, So Yun Kim
    Obstetrics & Gynecology Science.2024; 67(2): 153.     CrossRef
  • Application of generative language models to orthopaedic practice
    Jessica Caterson, Olivia Ambler, Nicholas Cereceda-Monteoliva, Matthew Horner, Andrew Jones, Arwel Tomos Poacher
    BMJ Open.2024; 14(3): e076484.     CrossRef
  • Opportunities, challenges, and future directions of large language models, including ChatGPT in medical education: a systematic scoping review
    Xiaojun Xu, Yixiao Chen, Jing Miao
    Journal of Educational Evaluation for Health Professions.2024; 21: 6.     CrossRef
  • The advent of ChatGPT: Job Made Easy or Job Loss to Data Analysts
    Abiola Timothy Owolabi, Oluwaseyi Oluwadamilare Okunlola, Emmanuel Taiwo Adewuyi, Janet Iyabo Idowu, Olasunkanmi James Oladapo
    WSEAS TRANSACTIONS ON COMPUTERS.2024; 23: 24.     CrossRef
  • ChatGPT in dentomaxillofacial radiology education
    Hilal Peker Öztürk, Hakan Avsever, Buğra Şenel, Şükran Ayran, Mustafa Çağrı Peker, Hatice Seda Özgedik, Nurten Baysal
    Journal of Health Sciences and Medicine.2024; 7(2): 224.     CrossRef
  • Performance of ChatGPT on the Korean National Examination for Dental Hygienists
    Soo-Myoung Bae, Hye-Rim Jeon, Gyoung-Nam Kim, Seon-Hui Kwak, Hyo-Jin Lee
    Journal of Dental Hygiene Science.2024; 24(1): 62.     CrossRef
  • Medical knowledge of ChatGPT in public health, infectious diseases, COVID-19 pandemic, and vaccines: multiple choice questions examination based performance
    Sultan Ayoub Meo, Metib Alotaibi, Muhammad Zain Sultan Meo, Muhammad Omair Sultan Meo, Mashhood Hamid
    Frontiers in Public Health.2024;[Epub]     CrossRef
  • Applicability of ChatGPT in Assisting to Solve Higher Order Problems in Pathology
    Ranwir K Sinha, Asitava Deb Roy, Nikhil Kumar, Himel Mondal
    Cureus.2023;[Epub]     CrossRef
  • Issues in the 3rd year of the COVID-19 pandemic, including computer-based testing, study design, ChatGPT, journal metrics, and appreciation to reviewers
    Sun Huh
    Journal of Educational Evaluation for Health Professions.2023; 20: 5.     CrossRef
  • Emergence of the metaverse and ChatGPT in journal publishing after the COVID-19 pandemic
    Sun Huh
    Science Editing.2023; 10(1): 1.     CrossRef
  • Assessing the Capability of ChatGPT in Answering First- and Second-Order Knowledge Questions on Microbiology as per Competency-Based Medical Education Curriculum
    Dipmala Das, Nikhil Kumar, Langamba Angom Longjam, Ranwir Sinha, Asitava Deb Roy, Himel Mondal, Pratima Gupta
    Cureus.2023;[Epub]     CrossRef
  • Evaluating ChatGPT's Ability to Solve Higher-Order Questions on the Competency-Based Medical Education Curriculum in Medical Biochemistry
    Arindam Ghosh, Aritri Bir
    Cureus.2023;[Epub]     CrossRef
  • Overview of Early ChatGPT’s Presence in Medical Literature: Insights From a Hybrid Literature Review by ChatGPT and Human Experts
    Omar Temsah, Samina A Khan, Yazan Chaiah, Abdulrahman Senjab, Khalid Alhasan, Amr Jamal, Fadi Aljamaan, Khalid H Malki, Rabih Halwani, Jaffar A Al-Tawfiq, Mohamad-Hani Temsah, Ayman Al-Eyadhy
    Cureus.2023;[Epub]     CrossRef
  • ChatGPT for Future Medical and Dental Research
    Bader Fatani
    Cureus.2023;[Epub]     CrossRef
  • ChatGPT in Dentistry: A Comprehensive Review
    Hind M Alhaidry, Bader Fatani, Jenan O Alrayes, Aljowhara M Almana, Nawaf K Alfhaed
    Cureus.2023;[Epub]     CrossRef
  • Can we trust AI chatbots’ answers about disease diagnosis and patient care?
    Sun Huh
    Journal of the Korean Medical Association.2023; 66(4): 218.     CrossRef
  • Large Language Models in Medical Education: Opportunities, Challenges, and Future Directions
    Alaa Abd-alrazaq, Rawan AlSaad, Dari Alhuwail, Arfan Ahmed, Padraig Mark Healy, Syed Latifi, Sarah Aziz, Rafat Damseh, Sadam Alabed Alrazak, Javaid Sheikh
    JMIR Medical Education.2023; 9: e48291.     CrossRef
  • Early applications of ChatGPT in medical practice, education and research
    Sam Sedaghat
    Clinical Medicine.2023; 23(3): 278.     CrossRef
  • A Review of Research on Teaching and Learning Transformation under the Influence of ChatGPT Technology
    璇 师
    Advances in Education.2023; 13(05): 2617.     CrossRef
  • Performance of GPT-3.5 and GPT-4 on the Japanese Medical Licensing Examination: Comparison Study
    Soshi Takagi, Takashi Watari, Ayano Erabi, Kota Sakaguchi
    JMIR Medical Education.2023; 9: e48002.     CrossRef
  • ChatGPT’s quiz skills in different otolaryngology subspecialties: an analysis of 2576 single-choice and multiple-choice board certification preparation questions
    Cosima C. Hoch, Barbara Wollenberg, Jan-Christoffer Lüers, Samuel Knoedler, Leonard Knoedler, Konstantin Frank, Sebastian Cotofana, Michael Alfertshofer
    European Archives of Oto-Rhino-Laryngology.2023; 280(9): 4271.     CrossRef
  • Analysing the Applicability of ChatGPT, Bard, and Bing to Generate Reasoning-Based Multiple-Choice Questions in Medical Physiology
    Mayank Agarwal, Priyanka Sharma, Ayan Goswami
    Cureus.2023;[Epub]     CrossRef
  • The Intersection of ChatGPT, Clinical Medicine, and Medical Education
    Rebecca Shin-Yee Wong, Long Chiau Ming, Raja Affendi Raja Ali
    JMIR Medical Education.2023; 9: e47274.     CrossRef
  • The Role of Artificial Intelligence in Higher Education: ChatGPT Assessment for Anatomy Course
    Tarık TALAN, Yusuf KALINKARA
    Uluslararası Yönetim Bilişim Sistemleri ve Bilgisayar Bilimleri Dergisi.2023; 7(1): 33.     CrossRef
  • Comparing ChatGPT’s ability to rate the degree of stereotypes and the consistency of stereotype attribution with those of medical students in New Zealand in developing a similarity rating test: a methodological study
    Chao-Cheng Lin, Zaine Akuhata-Huntington, Che-Wei Hsu
    Journal of Educational Evaluation for Health Professions.2023; 20: 17.     CrossRef
  • Examining Real-World Medication Consultations and Drug-Herb Interactions: ChatGPT Performance Evaluation
    Hsing-Yu Hsu, Kai-Cheng Hsu, Shih-Yen Hou, Ching-Lung Wu, Yow-Wen Hsieh, Yih-Dih Cheng
    JMIR Medical Education.2023; 9: e48433.     CrossRef
  • Assessing the Efficacy of ChatGPT in Solving Questions Based on the Core Concepts in Physiology
    Arijita Banerjee, Aquil Ahmad, Payal Bhalla, Kavita Goyal
    Cureus.2023;[Epub]     CrossRef
  • ChatGPT Performs on the Chinese National Medical Licensing Examination
    Xinyi Wang, Zhenye Gong, Guoxin Wang, Jingdan Jia, Ying Xu, Jialu Zhao, Qingye Fan, Shaun Wu, Weiguo Hu, Xiaoyang Li
    Journal of Medical Systems.2023;[Epub]     CrossRef
  • Artificial intelligence and its impact on job opportunities among university students in North Lima, 2023
    Doris Ruiz-Talavera, Jaime Enrique De la Cruz-Aguero, Nereo García-Palomino, Renzo Calderón-Espinoza, William Joel Marín-Rodriguez
    ICST Transactions on Scalable Information Systems.2023;[Epub]     CrossRef
  • Revolutionizing Dental Care: A Comprehensive Review of Artificial Intelligence Applications Among Various Dental Specialties
    Najd Alzaid, Omar Ghulam, Modhi Albani, Rafa Alharbi, Mayan Othman, Hasan Taher, Saleem Albaradie, Suhael Ahmed
    Cureus.2023;[Epub]     CrossRef
  • Opportunities, Challenges, and Future Directions of Generative Artificial Intelligence in Medical Education: Scoping Review
    Carl Preiksaitis, Christian Rose
    JMIR Medical Education.2023; 9: e48785.     CrossRef
  • Exploring the impact of language models, such as ChatGPT, on student learning and assessment
    Araz Zirar
    Review of Education.2023;[Epub]     CrossRef
  • Evaluating the reliability of ChatGPT as a tool for imaging test referral: a comparative study with a clinical decision support system
    Shani Rosen, Mor Saban
    European Radiology.2023;[Epub]     CrossRef
  • Redesigning Tertiary Educational Evaluation with AI: A Task-Based Analysis of LIS Students’ Assessment on Written Tests and Utilizing ChatGPT at NSTU
    Shamima Yesmin
    Science & Technology Libraries.2023; : 1.     CrossRef
  • ChatGPT and the AI revolution: a comprehensive investigation of its multidimensional impact and potential
    Mohd Afjal
    Library Hi Tech.2023;[Epub]     CrossRef
  • The Significance of Artificial Intelligence Platforms in Anatomy Education: An Experience With ChatGPT and Google Bard
    Hasan B Ilgaz, Zehra Çelik
    Cureus.2023;[Epub]     CrossRef
  • Is ChatGPT’s Knowledge and Interpretative Ability Comparable to First Professional MBBS (Bachelor of Medicine, Bachelor of Surgery) Students of India in Taking a Medical Biochemistry Examination?
    Abhra Ghosh, Nandita Maini Jindal, Vikram K Gupta, Ekta Bansal, Navjot Kaur Bajwa, Abhishek Sett
    Cureus.2023;[Epub]     CrossRef
  • Ethical consideration of the use of generative artificial intelligence, including ChatGPT in writing a nursing article
    Sun Huh
    Child Health Nursing Research.2023; 29(4): 249.     CrossRef
  • Potential Use of ChatGPT for Patient Information in Periodontology: A Descriptive Pilot Study
    Osman Babayiğit, Zeynep Tastan Eroglu, Dilek Ozkan Sen, Fatma Ucan Yarkac
    Cureus.2023;[Epub]     CrossRef
  • Efficacy and limitations of ChatGPT as a biostatistical problem-solving tool in medical education in Serbia: a descriptive study
    Aleksandra Ignjatović, Lazar Stevanović
    Journal of Educational Evaluation for Health Professions.2023; 20: 28.     CrossRef
  • Assessing the Performance of ChatGPT in Medical Biochemistry Using Clinical Case Vignettes: Observational Study
    Krishna Mohan Surapaneni
    JMIR Medical Education.2023; 9: e47191.     CrossRef
  • A systematic review of ChatGPT use in K‐12 education
    Peng Zhang, Gemma Tur
    European Journal of Education.2023;[Epub]     CrossRef
  • Performance of ChatGPT, Bard, Claude, and Bing on the Peruvian National Licensing Medical Examination: a cross-sectional study
    Betzy Clariza Torres-Zegarra, Wagner Rios-Garcia, Alvaro Micael Ñaña-Cordova, Karen Fatima Arteaga-Cisneros, Xiomara Cristina Benavente Chalco, Marina Atena Bustamante Ordoñez, Carlos Jesus Gutierrez Rios, Carlos Alberto Ramos Godoy, Kristell Luisa Teresa
    Journal of Educational Evaluation for Health Professions.2023; 20: 30.     CrossRef
  • ChatGPT’s performance in German OB/GYN exams – paving the way for AI-enhanced medical education and clinical practice
    Maximilian Riedel, Katharina Kaefinger, Antonia Stuehrenberg, Viktoria Ritter, Niklas Amann, Anna Graf, Florian Recker, Evelyn Klein, Marion Kiechle, Fabian Riedel, Bastian Meyer
    Frontiers in Medicine.2023;[Epub]     CrossRef
  • Medical students’ patterns of using ChatGPT as a feedback tool and perceptions of ChatGPT in a Leadership and Communication course in Korea: a cross-sectional study
    Janghee Park
    Journal of Educational Evaluation for Health Professions.2023; 20: 29.     CrossRef
  • FROM TEXT TO DIAGNOSE: CHATGPT’S EFFICACY IN MEDICAL DECISION-MAKING
    Yaroslav Mykhalko, Pavlo Kish, Yelyzaveta Rubtsova, Oleksandr Kutsyn, Valentyna Koval
    Wiadomości Lekarskie.2023; 76(11): 2345.     CrossRef
  • Using ChatGPT for Clinical Practice and Medical Education: Cross-Sectional Survey of Medical Students’ and Physicians’ Perceptions
    Pasin Tangadulrat, Supinya Sono, Boonsin Tangtrakulwanich
    JMIR Medical Education.2023; 9: e50658.     CrossRef
  • Below average ChatGPT performance in medical microbiology exam compared to university students
    Malik Sallam, Khaled Al-Salahat
    Frontiers in Education.2023;[Epub]     CrossRef
  • ChatGPT: "To be or not to be" ... in academic research. The human mind's analytical rigor and capacity to discriminate between AI bots' truths and hallucinations
    Aurelian Anghelescu, Ilinca Ciobanu, Constantin Munteanu, Lucia Ana Maria Anghelescu, Gelu Onose
    Balneo and PRM Research Journal.2023; 14(Vol.14, no): 614.     CrossRef
  • ChatGPT Review: A Sophisticated Chatbot Models in Medical & Health-related Teaching and Learning
    Nur Izah Ab Razak, Muhammad Fawwaz Muhammad Yusoff, Rahmita Wirza O.K. Rahmat
    Malaysian Journal of Medicine and Health Sciences.2023; 19(s12): 98.     CrossRef
  • Application of artificial intelligence chatbots, including ChatGPT, in education, scholarly work, programming, and content generation and its prospects: a narrative review
    Tae Won Kim
    Journal of Educational Evaluation for Health Professions.2023; 20: 38.     CrossRef
  • Trends in research on ChatGPT and adoption-related issues discussed in articles: a narrative review
    Sang-Jun Kim
    Science Editing.2023; 11(1): 3.     CrossRef
  • Information amount, accuracy, and relevance of generative artificial intelligence platforms’ answers regarding learning objectives of medical arthropodology evaluated in English and Korean queries in December 2023: a descriptive study
    Hyunju Lee, Soobin Park
    Journal of Educational Evaluation for Health Professions.2023; 20: 39.     CrossRef
Review
What should medical students know about artificial intelligence in medicine?  
Seong Ho Park, Kyung-Hyun Do, Sungwon Kim, Joo Hyun Park, Young-Suk Lim
J Educ Eval Health Prof. 2019;16:18.   Published online July 3, 2019
DOI: https://doi.org/10.3352/jeehp.2019.16.18
  • 21,080 View
  • 629 Download
  • 60 Web of Science
  • 70 Crossref
Abstract
Artificial intelligence (AI) is expected to affect various fields of medicine substantially and has the potential to improve many aspects of healthcare. However, AI has also generated considerable hype. In applying AI technology to patients, medical professionals should be able to resolve any anxiety, confusion, and questions that patients and the public may have. They are also responsible for ensuring that AI becomes a technology beneficial for patient care. These responsibilities make the acquisition of sound knowledge of and experience with AI highly important for medical students. Preparing for AI does not merely mean learning information technology such as computer programming. One should acquire sufficient knowledge of basic and clinical medicine, data science, biostatistics, and evidence-based medicine. Medical students should not passively accept stories about AI in medicine in the media and on the Internet; rather, they should develop the ability to distinguish correct information from hype and spin, and even the capability to create thoroughly validated, trustworthy information for patients and the public.

Citations to this article as recorded by  
  • Radiology as a Specialty in the Era of Artificial Intelligence: A Systematic Review and Meta-analysis on Medical Students, Radiology Trainees, and Radiologists
    Amir Hassankhani, Melika Amoukhteh, Parya Valizadeh, Payam Jannatdoust, Paniz Sabeghi, Ali Gholamrezanezhad
    Academic Radiology.2024; 31(1): 306.     CrossRef
  • Strategies for Implementing Machine Learning Algorithms in the Clinical Practice of Radiology
    Allison Chae, Michael S. Yao, Hersh Sagreiya, Ari D. Goldberg, Neil Chatterjee, Matthew T. MacLean, Jeffrey Duda, Ameena Elahi, Arijitt Borthakur, Marylyn D. Ritchie, Daniel Rader, Charles E. Kahn, Walter R. Witschey, James C. Gee
    Radiology.2024;[Epub]     CrossRef
  • Towards integration of artificial intelligence into medical devices as a real-time recommender system for personalised healthcare: State-of-the-art and future prospects
    Talha Iqbal, Mehedi Masud, Bilal Amin, Conor Feely, Mary Faherty, Tim Jones, Michelle Tierney, Atif Shahzad, Patricia Vazquez
    Health Sciences Review.2024; 10: 100150.     CrossRef
  • The Knowledge of Students at Bursa Faculty of Medicine towards Artificial Intelligence: A Survey Study
    Deniz GÜVEN, Elif Güler KAZANCI, Ayşe ÖREN, Livanur SEVER, Pelin ÜNLÜ
    Journal of Bursa Faculty of Medicine.2024; 2(1): 20.     CrossRef
  • Preparing healthcare leaders of the digital age with an integrative artificial intelligence curriculum: a pilot study
    Soo Hwan Park, Roshini Pinto-Powell, Thomas Thesen, Alexander Lindqwister, Joshua Levy, Rachael Chacko, Devina Gonzalez, Connor Bridges, Adam Schwendt, Travis Byrum, Justin Fong, Shahin Shasavari, Saeed Hassanpour
    Medical Education Online.2024;[Epub]     CrossRef
  • A scoping review of artificial intelligence in medical education: BEME Guide No. 84
    Morris Gordon, Michelle Daniel, Aderonke Ajiboye, Hussein Uraiby, Nicole Y. Xu, Rangana Bartlett, Janice Hanson, Mary Haas, Maxwell Spadafore, Ciaran Grafton-Clarke, Rayhan Yousef Gasiea, Colin Michie, Janet Corral, Brian Kwan, Diana Dolmans, Satid Thamma
    Medical Teacher.2024; : 1.     CrossRef
  • Artificial Intelligence Readiness Status of Medical Faculty Students
    Büşra EMİR, Tulin YURDEM, Tulin OZEL, Toygar SAYAR, Teoman Atalay UZUN, Umit AKAR, Unal Arda COLAK
    Konuralp Tıp Dergisi.2024; 16(1): 88.     CrossRef
  • Potential applications of ChatGPT in obstetrics and gynecology in Korea: a review article
    YooKyung Lee, So Yun Kim
    Obstetrics & Gynecology Science.2024; 67(2): 153.     CrossRef
  • ChatGPT in dentomaxillofacial radiology education
    Hilal Peker Öztürk, Hakan Avsever, Buğra Şenel, Şükran Ayran, Mustafa Çağrı Peker, Hatice Seda Özgedik, Nurten Baysal
    Journal of Health Sciences and Medicine.2024; 7(2): 224.     CrossRef
  • Twelve tips for addressing ethical concerns in the implementation of artificial intelligence in medical education
    Russell Franco D’Souza, Mary Mathew, Vedprakash Mishra, Krishna Mohan Surapaneni
    Medical Education Online.2024;[Epub]     CrossRef
  • A novel adaptive cubic quasi‐Newton optimizer for deep learning based medical image analysis tasks, validated on detection of COVID‐19 and segmentation for COVID‐19 lung infection, liver tumor, and optic disc/cup
    Yan Liu, Maojun Zhang, Zhiwei Zhong, Xiangrong Zeng
    Medical Physics.2023; 50(3): 1528.     CrossRef
  • Clinical informatics training in medical school education curricula: a scoping review
    Humairah Zainal, Joshua Kuan Tan, Xin Xiaohui, Julian Thumboo, Fong Kok Yong
    Journal of the American Medical Informatics Association.2023; 30(3): 604.     CrossRef
  • Are ChatGPT’s knowledge and interpretation ability comparable to those of medical students in Korea for taking a parasitology examination?: a descriptive study
    Sun Huh
    Journal of Educational Evaluation for Health Professions.2023; 20: 1.     CrossRef
  • Exploring the views of Singapore junior doctors on medical curricula for the digital age: A case study
    Humairah Zainal, Xin Xiaohui, Julian Thumboo, Fong Kok Yong, Conor Gilligan
    PLOS ONE.2023; 18(3): e0281108.     CrossRef
  • Artificial Intelligence Teaching as Part of Medical Education: Qualitative Analysis of Expert Interviews
    Lukas Weidener, Michael Fischer
    JMIR Medical Education.2023; 9: e46428.     CrossRef
  • Investigating Students’ Perceptions towards Artificial Intelligence in Medical Education
    Ali Jasem Buabbas, Brouj Miskin, Amar Ali Alnaqi, Adel K. Ayed, Abrar Abdulmohsen Shehab, Shabbir Syed-Abdul, Mohy Uddin
    Healthcare.2023; 11(9): 1298.     CrossRef
  • Performance and risks of ChatGPT used in drug information: an exploratory real-world analysis
    Benedict Morath, Ute Chiriac, Elena Jaszkowski, Carolin Deiß, Hannah Nürnberg, Katrin Hörth, Torsten Hoppe-Tichy, Kim Green
    European Journal of Hospital Pharmacy.2023; : ejhpharm-2023-003750.     CrossRef
  • A closer look at the current knowledge and prospects of artificial intelligence integration in dentistry practice: A cross-sectional study
    Zuhal Y. Hamd, Wiam Elshami, Sausan Al Kawas, Hanan Aljuaid, Mohamed M. Abuzaid
    Heliyon.2023; 9(6): e17089.     CrossRef
  • ChatGPT and the Future of Digital Health: A Study on Healthcare Workers’ Perceptions and Expectations
    Mohamad-Hani Temsah, Fadi Aljamaan, Khalid H. Malki, Khalid Alhasan, Ibraheem Altamimi, Razan Aljarbou, Faisal Bazuhair, Abdulmajeed Alsubaihin, Naif Abdulmajeed, Fatimah S. Alshahrani, Reem Temsah, Turki Alshahrani, Lama Al-Eyadhy, Serin Mohammed Alkhate
    Healthcare.2023; 11(13): 1812.     CrossRef
  • The Impact of Artificial Intelligence on the Preference of Radiology as a Future Specialty Among Medical Students at Jazan University, Saudi Arabia: A Cross-Sectional Study
    Khalid M Hakami, Mohammed Alameer, Essa Jaawna, Abdulrahman Sudi, Bahiyyah Bahkali, Amnah Mohammed, Abdulaziz Hakami, Mohamed Salih Mahfouz, Abdulaziz H Alhazmi, Turki M Dhayihi
    Cureus.2023;[Epub]     CrossRef
  • Application of artificial intelligence in medical education: focus on the application of ChatGPT for clinical medical education
    Hyeonmi Hong, Youngjoon Kang, Youngjon Kim, Bomsol Kim
    Journal of Medicine and Life Science.2023; 20(2): 53.     CrossRef
  • Medical Students’ Perspectives on Artificial Intelligence in Radiology: The Current Understanding and Impact on Radiology as a Future Specialty Choice
    Ali Alamer
    Current Medical Imaging Formerly Current Medical Imaging Reviews.2023;[Epub]     CrossRef
  • Psychometric properties of the persian version of the Medical Artificial Intelligence Readiness Scale for Medical Students (MAIRS-MS)
    AmirAli Moodi Ghalibaf, Maryam Moghadasin, Ali Emadzadeh, Haniye Mastour
    BMC Medical Education.2023;[Epub]     CrossRef
  • Views of Veterinary Faculty Students on the Concept of Artificial Intelligence and Its Use in Veterinary Medicine Practices: An Example of XXXX University Faculty of Veterinary Medicine
    Nigar YERLİKAYA, Özgül KÜÇÜKASLAN
    Ankara Üniversitesi Veteriner Fakültesi Dergisi.2023;[Epub]     CrossRef
  • A Pilot Remote Curriculum to Enhance Resident and Medical Student Understanding of Machine Learning in Healthcare
    Seth M. Meade, Sebastian Salas-Vega, Matthew R. Nagy, Swetha J. Sundar, Michael P. Steinmetz, Edward C. Benzel, Ghaith Habboub
    World Neurosurgery.2023; 180: e142.     CrossRef
  • Medical Students’ Knowledge and Attitudes about Artificial Intelligence: A Cross-Sectional Survey
    Amber EKER, Ahmet Asım ÇALIŞKAN, Aysel ZORALİ, Bensu KAYNAK, Mehmet Erhan DERİN
    Tıp Eğitimi Dünyası.2023; 22(68): 41.     CrossRef
  • El camino a futuro de la pediatría: Nuevas oportunidades con la inteligencia artificial en la atención infantil
    Wagner Rios-Garcia, Mayli M. Condori-Orosco, Cyntia J. Huasasquiche
    Investigación e Innovación Clínica y Quirúrgica Pediátrica.2023; 1(2): 71.     CrossRef
  • Percepciones de estudiantes de Medicina sobre el impacto de la inteligencia artificial en radiología
    G. Caparrós Galán, F. Sendra Portero
    Radiología.2022; 64(6): 516.     CrossRef
  • Finding the needle by modeling the haystack: Pulmonary embolism in an emergency patient with cardiorespiratory manifestations
    Davide Luciani, Alessandro Magrini, Carlo Berzuini, Antonello Gavazzi, Paolo Canova, Tiziano Barbui, Guido Bertolini
    Expert Systems with Applications.2022; 189: 116066.     CrossRef
  • SHIFTing artificial intelligence to be responsible in healthcare: A systematic review
    Haytham Siala, Yichuan Wang
    Social Science & Medicine.2022; 296: 114782.     CrossRef
  • AUGMENTING CBME CURRICULUM WITH ARTIFICIAL INTELLIGENCE COURSES – A FUTURISTIC APPROACH.
    Yogesh Bahurupi, Ashwini A Mahadule, Prashant M Patil, Vartika Saxena
    INDIAN JOURNAL OF APPLIED RESEARCH.2022; : 46.     CrossRef
  • Artificial Intelligence in Pediatric Pathology: The Extinction of a Medical Profession or the Key to a Bright Future?
    Ananda van der Kamp, Tomas J. Waterlander, Thomas de Bel, Jeroen van der Laak, Marry M. van den Heuvel-Eibrink, Annelies M. C. Mavinkurve-Groothuis, Ronald R. de Krijger
    Pediatric and Developmental Pathology.2022; 25(4): 380.     CrossRef
  • Artificial Intelligence Education for the Health Workforce: Expert Survey of Approaches and Needs
    Kathleen Gray, John Slavotinek, Gerardo Luis Dimaguila, Dawn Choo
    JMIR Medical Education.2022; 8(2): e35223.     CrossRef
  • Advancements in Oncology with Artificial Intelligence—A Review Article
    Nikitha Vobugari, Vikranth Raja, Udhav Sethi, Kejal Gandhi, Kishore Raja, Salim R. Surani
    Cancers.2022; 14(5): 1349.     CrossRef
  • Needs, Challenges, and Applications of Artificial Intelligence in Medical Education Curriculum
    Joel Grunhut, Oge Marques, Adam T M Wyatt
    JMIR Medical Education.2022; 8(2): e35587.     CrossRef
  • Promoting Research, Awareness, and Discussion on AI in Medicine Using #MedTwitterAI: A Longitudinal Twitter Hashtag Analysis
    Faisal A. Nawaz, Austin A. Barr, Monali Y. Desai, Christos Tsagkaris, Romil Singh, Elisabeth Klager, Fabian Eibensteiner, Emil D. Parvanov, Mojca Hribersek, Maria Kletecka-Pulker, Harald Willschke, Atanas G. Atanasov
    Frontiers in Public Health.2022;[Epub]     CrossRef
  • Communication training for pharmacy students with standard patients using artificial intelligence
    Naoto Nakagawa, Keita Odanaka, Hiroshi Ohara, Shigeki Kisara
    Currents in Pharmacy Teaching and Learning.2022; 14(7): 854.     CrossRef
  • Artificial intelligence in healthcare: Should it be included in the medical curriculum? A students’ perspective
    MANISHI BANSAL, ANKUSH JINDAL
    The National Medical Journal of India.2022; 35: 56.     CrossRef
  • Undergraduate Medical Students’ and Interns’ Knowledge and Perception of Artificial Intelligence in Medicine
    Nisha Jha, Pathiyil Ravi Shankar, Mohammed Azmi Al-Betar, Rupesh Mukhia, Kabita Hada, Subish Palaian
    Advances in Medical Education and Practice.2022; 13: 927.     CrossRef
  • Perceptions of US Medical Students on Artificial Intelligence in Medicine: Mixed Methods Survey Study
    David Shalom Liu, Jake Sawyer, Alexander Luna, Jihad Aoun, Janet Wang, Lord Boachie, Safwan Halabi, Bina Joe
    JMIR Medical Education.2022; 8(4): e38325.     CrossRef
  • Artificial intelligence in medical education: a cross-sectional needs assessment
    M. Murat Civaner, Yeşim Uncu, Filiz Bulut, Esra Giounous Chalil, Abdülhamit Tatli
    BMC Medical Education.2022;[Epub]     CrossRef
  • Medical students’ perceptions of the impact of artificial intelligence in radiology
    G. Caparrós Galán, F. Sendra Portero
    Radiología (English Edition).2022; 64(6): 516.     CrossRef
  • Medical Education 4.0: A Neurology Perspective
    Zaitoon Zafar, Muhammad Umair, Filzah Faheem, Danish Bhatti , Junaid S Kalia
    Cureus.2022;[Epub]     CrossRef
  • AI in the hands of imperfect users
    Kristin M. Kostick-Quenet, Sara Gerke
    npj Digital Medicine.2022;[Epub]     CrossRef
  • Trust and medical AI: the challenges we face and the expertise needed to overcome them
    Thomas P Quinn, Manisha Senadeera, Stephan Jacobs, Simon Coghlan, Vuong Le
    Journal of the American Medical Informatics Association.2021; 28(4): 890.     CrossRef
  • Attitude of Brazilian dentists and dental students regarding the future role of artificial intelligence in oral radiology: a multicenter survey
    Ruben Pauwels, Yumi Chokyu Del Rey
    Dentomaxillofacial Radiology.2021; 50(5): 20200461.     CrossRef
  • Key Principles of Clinical Validation, Device Approval, and Insurance Coverage Decisions of Artificial Intelligence
    Seong Ho Park, Jaesoon Choi, Jeong-Sik Byeon
    Korean Journal of Radiology.2021; 22(3): 442.     CrossRef
  • Basic of machine learning and deep learning in imaging for medical physicists
    Luigi Manco, Nicola Maffei, Silvia Strolin, Sara Vichi, Luca Bottazzi, Lidia Strigari
    Physica Medica.2021; 83: 194.     CrossRef
  • Inteligencia artificial y simulación en urología
    J. Gómez Rivas, C. Toribio Vázquez, C. Ballesteros Ruiz, M. Taratkin, J.L. Marenco, G.E. Cacciamani, E. Checcucci, Z. Okhunov, D. Enikeev, F. Esperto, R. Grossmann, B. Somani, D. Veneziano
    Actas Urológicas Españolas.2021; 45(8): 524.     CrossRef
  • Regulating AI in Health Care: The Challenges of Informed User Engagement
    Olya Kudina
    Hastings Center Report.2021; 51(5): 6.     CrossRef
  • Are We Ready to Integrate Artificial Intelligence Literacy into Medical School Curriculum: Students and Faculty Survey
    Elena A Wood, Brittany L Ange, D Douglas Miller
    Journal of Medical Education and Curricular Development.2021; 8: 238212052110240.     CrossRef
  • A Conference-Friendly, Hands-on Introduction to Deep Learning for Radiology Trainees
    Walter F. Wiggins, M. Travis Caton, Kirti Magudia, Michael H. Rosenthal, Katherine P. Andriole
    Journal of Digital Imaging.2021; 34(4): 1026.     CrossRef
  • Artificial intelligence and simulation in urology
    J. Gómez Rivas, C. Toribio Vázquez, C. Ballesteros Ruiz, M. Taratkin, J.L. Marenco, G.E. Cacciamani, E. Checcucci, Z. Okhunov, D. Enikeev, F. Esperto, R. Grossmann, B. Somani, D. Veneziano
    Actas Urológicas Españolas (English Edition).2021; 45(8): 524.     CrossRef
  • Accelerating the Appropriate Adoption of Artificial Intelligence in Health Care: Protocol for a Multistepped Approach
    David Wiljer, Mohammad Salhia, Elham Dolatabadi, Azra Dhalla, Caitlin Gillan, Dalia Al-Mouaswas, Ethan Jackson, Jacqueline Waldorf, Jane Mattson, Megan Clare, Nadim Lalani, Rebecca Charow, Sarmini Balakumar, Sarah Younus, Tharshini Jeyakumar, Wanda Petean
    JMIR Research Protocols.2021; 10(10): e30940.     CrossRef
  • Artificial Intelligence in Undergraduate Medical Education: A Scoping Review
    Juehea Lee, Annie Siyu Wu, David Li, Kulamakan (Mahan) Kulasegaram
    Academic Medicine.2021; 96(11S): S62.     CrossRef
  • Artificial Intelligence Evidence-Based Current Status and Potential for Lower Limb Vascular Management
    Xenia Butova, Sergey Shayakhmetov, Maxim Fedin, Igor Zolotukhin, Sergio Gianesini
    Journal of Personalized Medicine.2021; 11(12): 1280.     CrossRef
  • Artificial Intelligence Education Programs for Health Care Professionals: Scoping Review
    Rebecca Charow, Tharshini Jeyakumar, Sarah Younus, Elham Dolatabadi, Mohammad Salhia, Dalia Al-Mouaswas, Melanie Anderson, Sarmini Balakumar, Megan Clare, Azra Dhalla, Caitlin Gillan, Shabnam Haghzare, Ethan Jackson, Nadim Lalani, Jane Mattson, Wanda Pete
    JMIR Medical Education.2021; 7(4): e31043.     CrossRef
  • The Journal Citation Indicator has arrived for Emerging Sources Citation Index journals, including the Journal of Educational Evaluation for Health Professions, in June 2021
    Sun Huh
    Journal of Educational Evaluation for Health Professions.2021; 18: 20.     CrossRef
  • Ethical Challenges of Artificial Intelligence in Health Care: A Narrative Review
    Aaron T. Hui, Shawn S. Ahn, Carolyn T. Lye, Jun Deng
    Ethics in Biology, Engineering and Medicine: An International Journal.2021; 12(1): 55.     CrossRef
  • Bayesian networks: Making the most of a history
    Rami Abbass, Usmaan Bhatti, Shad Asinger
    The Clinical Teacher.2021; 18(2): 140.     CrossRef
  • Fundamentals in Artificial Intelligence for Vascular Surgeons
    Juliette Raffort, Cédric Adam, Marion Carrier, Fabien Lareyre
    Annals of Vascular Surgery.2020; 65: 254.     CrossRef
  • Extending capabilities of artificial intelligence for decision-making and healthcare education
    Mohd Javaid, Abid Haleem, IbrahimHaleem Khan, Raju Vaishya, Abhishek Vaish
    Apollo Medicine.2020; 17(1): 53.     CrossRef
  • Artificial intelligence with multi-functional machine learning platform development for better healthcare and precision medicine
    Zeeshan Ahmed, Khalid Mohamed, Saman Zeeshan, XinQi Dong
    Database.2020;[Epub]     CrossRef
  • Artificial Intelligence Education and Tools for Medical and Health Informatics Students: Systematic Review
    A Hasan Sapci, H Aylin Sapci
    JMIR Medical Education.2020; 6(1): e19285.     CrossRef
  • Evaluation of epidemiological lectures using peer instruction: focusing on the importance of ConcepTests
    Toshiharu Mitsuhashi
    PeerJ.2020; 8: e9640.     CrossRef
  • Artificial Intelligence in Small Bowel Endoscopy: Current Perspectives and Future Directions
    Dinesh Meher, Mrinal Gogoi, Pankaj Bharali, Prajna Anirvan, Shivaram Prasad Singh
    Journal of Digestive Endoscopy.2020; 11(04): 245.     CrossRef
  • Key principles of clinical validation, device approval, and insurance coverage decisions of artificial intelligence
    Seong Ho Park, Jaesoon Choi, Jeong-Sik Byeon
    Journal of the Korean Medical Association.2020; 63(11): 696.     CrossRef
  • Artificial intelligence-based education assists medical students’ interpretation of hip fracture
    Chi-Tung Cheng, Chih-Chi Chen, Chih-Yuan Fu, Chung-Hsien Chaou, Yu-Tung Wu, Chih-Po Hsu, Chih-Chen Chang, I-Fang Chung, Chi-Hsun Hsieh, Ming-Ju Hsieh, Chien-Hung Liao
    Insights into Imaging.2020;[Epub]     CrossRef
  • Current Status and Future Direction of Artificial Intelligence in Healthcare and Medical Education
    Jin Sup Jung
    Korean Medical Education Review.2020; 22(2): 99.     CrossRef
  • Introducing Artificial Intelligence Training in Medical Education
    Ketan Paranjape, Michiel Schinkel, Rishi Nannan Panday, Josip Car, Prabath Nanayakkara
    JMIR Medical Education.2019; 5(2): e16048.     CrossRef
Research article
An expert-led and artificial intelligence system-assisted tutoring course to improve the confidence of Chinese medical interns in suturing and ligature skills: a prospective pilot study  
Ying-Ying Yang, Boaz Shulruf
J Educ Eval Health Prof. 2019;16:7.   Published online April 10, 2019
DOI: https://doi.org/10.3352/jeehp.2019.16.7
  • 18,310 View
  • 306 Download
  • 12 Web of Science
  • 16 Crossref
Abstract PDF Supplementary Material
Purpose
Lack of confidence in suturing/ligature skills, stemming from insufficient practice and assessment, is common among novice Chinese medical interns. This study aimed to improve interns' acquisition of these skills through a new intervention program.
Methods
In addition to regular clinical training, expert-led or expert-led plus artificial intelligence (AI) system tutoring courses were offered during the first 2 weeks of the surgical block. Interns voluntarily joined the regular group (n=25, no additional tutoring), the expert-led tutoring group, or the expert-led+AI tutoring group. The expert-led group received a 3-hour expert-led tutoring course with in-training formative assessments after 2 practice sessions. After a similar expert-led course, the expert-led+AI group (n=23) practiced and assessed their skills on an AI system, which automatically recorded each intern's suturing/ligature performance and evaluated it against an internal reference standard (see the sketch after this abstract). Within the expert-led+AI group, performance and confidence were compared among interns who completed 1, 2, or 3 AI practice sessions.
Results
End-of-surgical-block objective structured clinical examination (OSCE) performance and self-assessed confidence in suturing/ligature skills were highest in the expert-led+AI group. Compared with the expert-led group, the expert-led+AI group showed similar performance on the in-training assessment but greater improvement in the end-of-surgical-block OSCE. Within the expert-led+AI group, the best performance and the highest post-OSCE confidence were observed among interns who completed 3 AI practice sessions.
Conclusion
This pilot study demonstrated the potential value of incorporating an additional expert-led+AI system–assisted tutoring course into the regular surgical curriculum.
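
The abstract describes an AI system that automatically records each intern's suturing/ligature performance and evaluates it against an internal standard, but it does not explain how that comparison is made. The sketch below is a minimal, hypothetical Python illustration of such a record-and-compare step; the metric names (stitch spacing, bite symmetry, knot slippage), target values, and tolerances are invented for illustration and are not taken from the study or its AI system.

```python
from dataclasses import dataclass

@dataclass
class SutureAttempt:
    """One recorded practice attempt (all metric names are hypothetical)."""
    stitch_spacing_mm: float   # mean distance between stitches
    bite_symmetry: float       # 0-1, where 1 = perfectly symmetric entry/exit bites
    knot_slippage_mm: float    # measured slippage under a standard tension test

# Hypothetical "internal standard": target values and tolerances an
# automated scorer might compare each recorded attempt against.
REFERENCE_STANDARD = {
    "stitch_spacing_mm": (5.0, 1.5),   # (target, tolerance)
    "bite_symmetry": (1.0, 0.2),
    "knot_slippage_mm": (0.0, 1.0),
}

def score_attempt(attempt: SutureAttempt) -> dict:
    """Return per-metric pass/fail feedback and an overall score
    (fraction of metrics that fall within tolerance of the standard)."""
    feedback = {}
    for metric, (target, tolerance) in REFERENCE_STANDARD.items():
        value = getattr(attempt, metric)
        feedback[metric] = abs(value - target) <= tolerance
    overall = sum(feedback.values()) / len(feedback)
    return {"overall": overall, "per_metric": feedback}

if __name__ == "__main__":
    # Example: one intern's recorded attempt.
    attempt = SutureAttempt(stitch_spacing_mm=5.8,
                            bite_symmetry=0.85,
                            knot_slippage_mm=1.6)
    print(score_attempt(attempt))
    # Spacing and symmetry pass; slippage exceeds tolerance, so overall = 2/3.
```

A real system would presumably derive its reference values and weighting from expert-annotated recordings rather than fixed thresholds; this sketch only illustrates the comparison step mentioned in the Methods.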

Citations

Citations to this article as recorded by  
  • Automated measurement extraction for assessing simple suture quality in medical education
    Thanapon Noraset, Prawej Mahawithitwong, Wethit Dumronggittigule, Pongthep Pisarnturakit, Cherdsak Iramaneerat, Chanean Ruansetakit, Irin Chaikangwan, Nattanit Poungjantaradej, Nutcha Yodrabum
    Expert Systems with Applications.2024; 241: 122722.     CrossRef
  • Dental student application of artificial intelligence technology in detecting proximal caries lesions
    Enes Ayan, Yusuf Bayraktar, Çiğdem Çelik, Baturalp Ayhan
    Journal of Dental Education.2024;[Epub]     CrossRef
  • Development of Artificial Intelligence–Teaching Assistant System for Undergraduate Nursing Students
    Yanika Kowitlawakul, Jocelyn Jie Min Tan, Siriwan Suebnukarn, Hoang D. Nguyen, Danny Chiang Choon Poo, Joseph Chai, Devi M. Kamala, Wenru Wang
    CIN: Computers, Informatics, Nursing.2024;[Epub]     CrossRef
  • Systematic literature review on opportunities, challenges, and future research recommendations of artificial intelligence in education
    Thomas K.F. Chiu, Qi Xia, Xinyan Zhou, Ching Sing Chai, Miaoting Cheng
    Computers and Education: Artificial Intelligence.2023; 4: 100118.     CrossRef
  • The impact of Generative AI (GenAI) on practices, policies and research direction in education: a case of ChatGPT and Midjourney
    Thomas K. F. Chiu
    Interactive Learning Environments.2023; : 1.     CrossRef
  • Application value of an artificial intelligence-based diagnosis and recognition system in gastroscopy training for graduate students in gastroenterology: a preliminary study
    Peng An, Zhongqiu Wang
    Wiener Medizinische Wochenschrift.2023;[Epub]     CrossRef
  • Technological advancements in surgical laparoscopy considering artificial intelligence: a survey among surgeons in Germany
    Sebastian Lünse, Eric L. Wisotzky, Sophie Beckmann, Christoph Paasch, Richard Hunger, René Mantke
    Langenbeck's Archives of Surgery.2023;[Epub]     CrossRef
  • Artificial intelligence (AI) integration in medical education: A pan-India cross-sectional observation of acceptance and understanding among students
    Vipul Sharma, Uddhave Saini, Varun Pareek, Lokendra Sharma, Susheel Kumar
    Scripta Medica.2023; 54(4): 343.     CrossRef
  • Artificial Intelligence Methods and Artificial Intelligence-Enabled Metrics for Surgical Education: A Multidisciplinary Consensus
    S Swaroop Vedula, Ahmed Ghazi, Justin W Collins, Carla Pugh, Dimitrios Stefanidis, Ozanan Meireles, Andrew J Hung, Steven Schwaitzberg, Jeffrey S Levy, Ajit K Sachdeva
    Journal of the American College of Surgeons.2022; 234(6): 1181.     CrossRef
  • The use and future perspective of Artificial Intelligence—A survey among German surgeons
    Mathieu Pecqueux, Carina Riediger, Marius Distler, Florian Oehme, Ulrich Bork, Fiona R. Kolbinger, Oliver Schöffski, Peter van Wijngaarden, Jürgen Weitz, Johannes Schweipert, Christoph Kahlert
    Frontiers in Public Health.2022;[Epub]     CrossRef
  • TIPTA YAPAY ZEKA UYGULAMALARI
    Hatice KELEŞ
    Kırıkkale Üniversitesi Tıp Fakültesi Dergisi.2022; 24(3): 604.     CrossRef
  • Application of Artificial Intelligence in Medicine: An Overview
    Peng-ran Liu, Lin Lu, Jia-yao Zhang, Tong-tong Huo, Song-xiang Liu, Zhe-wei Ye
    Current Medical Science.2021; 41(6): 1105.     CrossRef
  • Applications and Effects of EdTech in Medical Education
    Hyeonmi Hong, Youngjon Kim
    Korean Medical Education Review.2021; 23(3): 160.     CrossRef
  • Artificial Intelligence Education and Tools for Medical and Health Informatics Students: Systematic Review
    A Hasan Sapci, H Aylin Sapci
    JMIR Medical Education.2020; 6(1): e19285.     CrossRef
  • Scientific Development of Educational Artificial Intelligence in Web of Science
    Antonio-José Moreno-Guerrero, Jesús López-Belmonte, José-Antonio Marín-Marín, Rebeca Soler-Costa
    Future Internet.2020; 12(8): 124.     CrossRef
  • An Educational Network for Surgical Education Supported by Gamification Elements: Protocol for a Randomized Controlled Trial
    Natasha Guérard-Poirier, Michèle Beniey, Léamarie Meloche-Dumas, Florence Lebel-Guay, Bojana Misheva, Myriam Abbas, Malek Dhane, Myriam Elraheb, Adam Dubrowski, Erica Patocskai
    JMIR Research Protocols.2020; 9(12): e21273.     CrossRef
