
JEEHP: Journal of Educational Evaluation for Health Professions


Most cited articles

40 Most cited articles

From articles published in the Journal of Educational Evaluation for Health Professions during the past two years (2022 onward).

Brief report
Are ChatGPT’s knowledge and interpretation ability comparable to those of medical students in Korea for taking a parasitology examination?: a descriptive study  
Sun Huh
J Educ Eval Health Prof. 2023;20:1.   Published online January 11, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.1
  • 10,493 views
  • 994 downloads
  • 99 Web of Science citations
  • 58 Crossref citations
Abstract
This study aimed to compare the knowledge and interpretation ability of ChatGPT, a language model of artificial general intelligence, with those of medical students in Korea by administering a parasitology examination to both ChatGPT and medical students. The examination consisted of 79 items and was administered to ChatGPT on January 1, 2023. The examination results were analyzed in terms of ChatGPT’s overall performance score, its correct answer rate by the items’ knowledge level, and the acceptability of its explanations of the items. ChatGPT’s performance was lower than that of the medical students, and ChatGPT’s correct answer rate was not related to the items’ knowledge level. However, there was a relationship between acceptable explanations and correct answers. In conclusion, ChatGPT’s knowledge and interpretation ability for this parasitology examination were not yet comparable to those of medical students in Korea.

Citations to this article as recorded by
  • Large Language Models and Artificial Intelligence: A Primer for Plastic Surgeons on the Demonstrated and Potential Applications, Promises, and Limitations of ChatGPT
    Jad Abi-Rafeh, Hong Hao Xu, Roy Kazan, Ruth Tevlin, Heather Furnas
    Aesthetic Surgery Journal.2024; 44(3): 329.     CrossRef
  • Unveiling the ChatGPT phenomenon: Evaluating the consistency and accuracy of endodontic question answers
    Ana Suárez, Víctor Díaz‐Flores García, Juan Algar, Margarita Gómez Sánchez, María Llorente de Pedro, Yolanda Freire
    International Endodontic Journal.2024; 57(1): 108.     CrossRef
  • Bob or Bot: Exploring ChatGPT's Answers to University Computer Science Assessment
    Mike Richards, Kevin Waugh, Mark Slaymaker, Marian Petre, John Woodthorpe, Daniel Gooch
    ACM Transactions on Computing Education.2024; 24(1): 1.     CrossRef
  • Examining the use of ChatGPT in public universities in Hong Kong: a case study of restricted access areas
    Michelle W. T. Cheng, Iris H. Y. YIM
    Discover Education.2024;[Epub]     CrossRef
  • Performance of ChatGPT on Ophthalmology-Related Questions Across Various Examination Levels: Observational Study
    Firas Haddad, Joanna S Saade
    JMIR Medical Education.2024; 10: e50842.     CrossRef
  • A comparative vignette study: Evaluating the potential role of a generative AI model in enhancing clinical decision‐making in nursing
    Mor Saban, Ilana Dubovi
    Journal of Advanced Nursing.2024;[Epub]     CrossRef
  • Comparison of the Performance of GPT-3.5 and GPT-4 With That of Medical Students on the Written German Medical Licensing Examination: Observational Study
    Annika Meyer, Janik Riese, Thomas Streichert
    JMIR Medical Education.2024; 10: e50965.     CrossRef
  • From hype to insight: Exploring ChatGPT's early footprint in education via altmetrics and bibliometrics
    Lung‐Hsiang Wong, Hyejin Park, Chee‐Kit Looi
    Journal of Computer Assisted Learning.2024;[Epub]     CrossRef
  • A scoping review of artificial intelligence in medical education: BEME Guide No. 84
    Morris Gordon, Michelle Daniel, Aderonke Ajiboye, Hussein Uraiby, Nicole Y. Xu, Rangana Bartlett, Janice Hanson, Mary Haas, Maxwell Spadafore, Ciaran Grafton-Clarke, Rayhan Yousef Gasiea, Colin Michie, Janet Corral, Brian Kwan, Diana Dolmans, Satid Thamma
    Medical Teacher.2024; : 1.     CrossRef
  • Üniversite Öğrencilerinin ChatGPT 3,5 Deneyimleri: Yapay Zekâyla Yazılmış Masal Varyantları
    Bilge GÖK, Fahri TEMİZYÜREK, Özlem BAŞ
    Korkut Ata Türkiyat Araştırmaları Dergisi.2024; (14): 1040.     CrossRef
  • Tracking ChatGPT Research: Insights From the Literature and the Web
    Omar Mubin, Fady Alnajjar, Zouheir Trabelsi, Luqman Ali, Medha Mohan Ambali Parambil, Zhao Zou
    IEEE Access.2024; 12: 30518.     CrossRef
  • Applicability of ChatGPT in Assisting to Solve Higher Order Problems in Pathology
    Ranwir K Sinha, Asitava Deb Roy, Nikhil Kumar, Himel Mondal
    Cureus.2023;[Epub]     CrossRef
  • Issues in the 3rd year of the COVID-19 pandemic, including computer-based testing, study design, ChatGPT, journal metrics, and appreciation to reviewers
    Sun Huh
    Journal of Educational Evaluation for Health Professions.2023; 20: 5.     CrossRef
  • Emergence of the metaverse and ChatGPT in journal publishing after the COVID-19 pandemic
    Sun Huh
    Science Editing.2023; 10(1): 1.     CrossRef
  • Assessing the Capability of ChatGPT in Answering First- and Second-Order Knowledge Questions on Microbiology as per Competency-Based Medical Education Curriculum
    Dipmala Das, Nikhil Kumar, Langamba Angom Longjam, Ranwir Sinha, Asitava Deb Roy, Himel Mondal, Pratima Gupta
    Cureus.2023;[Epub]     CrossRef
  • Evaluating ChatGPT's Ability to Solve Higher-Order Questions on the Competency-Based Medical Education Curriculum in Medical Biochemistry
    Arindam Ghosh, Aritri Bir
    Cureus.2023;[Epub]     CrossRef
  • Overview of Early ChatGPT’s Presence in Medical Literature: Insights From a Hybrid Literature Review by ChatGPT and Human Experts
    Omar Temsah, Samina A Khan, Yazan Chaiah, Abdulrahman Senjab, Khalid Alhasan, Amr Jamal, Fadi Aljamaan, Khalid H Malki, Rabih Halwani, Jaffar A Al-Tawfiq, Mohamad-Hani Temsah, Ayman Al-Eyadhy
    Cureus.2023;[Epub]     CrossRef
  • ChatGPT for Future Medical and Dental Research
    Bader Fatani
    Cureus.2023;[Epub]     CrossRef
  • ChatGPT in Dentistry: A Comprehensive Review
    Hind M Alhaidry, Bader Fatani, Jenan O Alrayes, Aljowhara M Almana, Nawaf K Alfhaed
    Cureus.2023;[Epub]     CrossRef
  • Can we trust AI chatbots’ answers about disease diagnosis and patient care?
    Sun Huh
    Journal of the Korean Medical Association.2023; 66(4): 218.     CrossRef
  • Large Language Models in Medical Education: Opportunities, Challenges, and Future Directions
    Alaa Abd-alrazaq, Rawan AlSaad, Dari Alhuwail, Arfan Ahmed, Padraig Mark Healy, Syed Latifi, Sarah Aziz, Rafat Damseh, Sadam Alabed Alrazak, Javaid Sheikh
    JMIR Medical Education.2023; 9: e48291.     CrossRef
  • Early applications of ChatGPT in medical practice, education and research
    Sam Sedaghat
    Clinical Medicine.2023; 23(3): 278.     CrossRef
  • A Review of Research on Teaching and Learning Transformation under the Influence of ChatGPT Technology
    璇 师
    Advances in Education.2023; 13(05): 2617.     CrossRef
  • Performance of GPT-3.5 and GPT-4 on the Japanese Medical Licensing Examination: Comparison Study
    Soshi Takagi, Takashi Watari, Ayano Erabi, Kota Sakaguchi
    JMIR Medical Education.2023; 9: e48002.     CrossRef
  • ChatGPT’s quiz skills in different otolaryngology subspecialties: an analysis of 2576 single-choice and multiple-choice board certification preparation questions
    Cosima C. Hoch, Barbara Wollenberg, Jan-Christoffer Lüers, Samuel Knoedler, Leonard Knoedler, Konstantin Frank, Sebastian Cotofana, Michael Alfertshofer
    European Archives of Oto-Rhino-Laryngology.2023; 280(9): 4271.     CrossRef
  • Analysing the Applicability of ChatGPT, Bard, and Bing to Generate Reasoning-Based Multiple-Choice Questions in Medical Physiology
    Mayank Agarwal, Priyanka Sharma, Ayan Goswami
    Cureus.2023;[Epub]     CrossRef
  • The Intersection of ChatGPT, Clinical Medicine, and Medical Education
    Rebecca Shin-Yee Wong, Long Chiau Ming, Raja Affendi Raja Ali
    JMIR Medical Education.2023; 9: e47274.     CrossRef
  • The Role of Artificial Intelligence in Higher Education: ChatGPT Assessment for Anatomy Course
    Tarık TALAN, Yusuf KALINKARA
    Uluslararası Yönetim Bilişim Sistemleri ve Bilgisayar Bilimleri Dergisi.2023; 7(1): 33.     CrossRef
  • Comparing ChatGPT’s ability to rate the degree of stereotypes and the consistency of stereotype attribution with those of medical students in New Zealand in developing a similarity rating test: a methodological study
    Chao-Cheng Lin, Zaine Akuhata-Huntington, Che-Wei Hsu
    Journal of Educational Evaluation for Health Professions.2023; 20: 17.     CrossRef
  • Examining Real-World Medication Consultations and Drug-Herb Interactions: ChatGPT Performance Evaluation
    Hsing-Yu Hsu, Kai-Cheng Hsu, Shih-Yen Hou, Ching-Lung Wu, Yow-Wen Hsieh, Yih-Dih Cheng
    JMIR Medical Education.2023; 9: e48433.     CrossRef
  • Assessing the Efficacy of ChatGPT in Solving Questions Based on the Core Concepts in Physiology
    Arijita Banerjee, Aquil Ahmad, Payal Bhalla, Kavita Goyal
    Cureus.2023;[Epub]     CrossRef
  • ChatGPT Performs on the Chinese National Medical Licensing Examination
    Xinyi Wang, Zhenye Gong, Guoxin Wang, Jingdan Jia, Ying Xu, Jialu Zhao, Qingye Fan, Shaun Wu, Weiguo Hu, Xiaoyang Li
    Journal of Medical Systems.2023;[Epub]     CrossRef
  • Artificial intelligence and its impact on job opportunities among university students in North Lima, 2023
    Doris Ruiz-Talavera, Jaime Enrique De la Cruz-Aguero, Nereo García-Palomino, Renzo Calderón-Espinoza, William Joel Marín-Rodriguez
    ICST Transactions on Scalable Information Systems.2023;[Epub]     CrossRef
  • Revolutionizing Dental Care: A Comprehensive Review of Artificial Intelligence Applications Among Various Dental Specialties
    Najd Alzaid, Omar Ghulam, Modhi Albani, Rafa Alharbi, Mayan Othman, Hasan Taher, Saleem Albaradie, Suhael Ahmed
    Cureus.2023;[Epub]     CrossRef
  • Opportunities, Challenges, and Future Directions of Generative Artificial Intelligence in Medical Education: Scoping Review
    Carl Preiksaitis, Christian Rose
    JMIR Medical Education.2023; 9: e48785.     CrossRef
  • Exploring the impact of language models, such as ChatGPT, on student learning and assessment
    Araz Zirar
    Review of Education.2023;[Epub]     CrossRef
  • Evaluating the reliability of ChatGPT as a tool for imaging test referral: a comparative study with a clinical decision support system
    Shani Rosen, Mor Saban
    European Radiology.2023;[Epub]     CrossRef
  • Redesigning Tertiary Educational Evaluation with AI: A Task-Based Analysis of LIS Students’ Assessment on Written Tests and Utilizing ChatGPT at NSTU
    Shamima Yesmin
    Science & Technology Libraries.2023; : 1.     CrossRef
  • ChatGPT and the AI revolution: a comprehensive investigation of its multidimensional impact and potential
    Mohd Afjal
    Library Hi Tech.2023;[Epub]     CrossRef
  • The Significance of Artificial Intelligence Platforms in Anatomy Education: An Experience With ChatGPT and Google Bard
    Hasan B Ilgaz, Zehra Çelik
    Cureus.2023;[Epub]     CrossRef
  • Is ChatGPT’s Knowledge and Interpretative Ability Comparable to First Professional MBBS (Bachelor of Medicine, Bachelor of Surgery) Students of India in Taking a Medical Biochemistry Examination?
    Abhra Ghosh, Nandita Maini Jindal, Vikram K Gupta, Ekta Bansal, Navjot Kaur Bajwa, Abhishek Sett
    Cureus.2023;[Epub]     CrossRef
  • Ethical consideration of the use of generative artificial intelligence, including ChatGPT in writing a nursing article
    Sun Huh
    Child Health Nursing Research.2023; 29(4): 249.     CrossRef
  • Potential Use of ChatGPT for Patient Information in Periodontology: A Descriptive Pilot Study
    Osman Babayiğit, Zeynep Tastan Eroglu, Dilek Ozkan Sen, Fatma Ucan Yarkac
    Cureus.2023;[Epub]     CrossRef
  • Efficacy and limitations of ChatGPT as a biostatistical problem-solving tool in medical education in Serbia: a descriptive study
    Aleksandra Ignjatović, Lazar Stevanović
    Journal of Educational Evaluation for Health Professions.2023; 20: 28.     CrossRef
  • Assessing the Performance of ChatGPT in Medical Biochemistry Using Clinical Case Vignettes: Observational Study
    Krishna Mohan Surapaneni
    JMIR Medical Education.2023; 9: e47191.     CrossRef
  • A systematic review of ChatGPT use in K‐12 education
    Peng Zhang, Gemma Tur
    European Journal of Education.2023;[Epub]     CrossRef
  • Performance of ChatGPT, Bard, Claude, and Bing on the Peruvian National Licensing Medical Examination: a cross-sectional study
    Betzy Clariza Torres-Zegarra, Wagner Rios-Garcia, Alvaro Micael Ñaña-Cordova, Karen Fatima Arteaga-Cisneros, Xiomara Cristina Benavente Chalco, Marina Atena Bustamante Ordoñez, Carlos Jesus Gutierrez Rios, Carlos Alberto Ramos Godoy, Kristell Luisa Teresa
    Journal of Educational Evaluation for Health Professions.2023; 20: 30.     CrossRef
  • ChatGPT’s performance in German OB/GYN exams – paving the way for AI-enhanced medical education and clinical practice
    Maximilian Riedel, Katharina Kaefinger, Antonia Stuehrenberg, Viktoria Ritter, Niklas Amann, Anna Graf, Florian Recker, Evelyn Klein, Marion Kiechle, Fabian Riedel, Bastian Meyer
    Frontiers in Medicine.2023;[Epub]     CrossRef
  • Medical students’ patterns of using ChatGPT as a feedback tool and perceptions of ChatGPT in a Leadership and Communication course in Korea: a cross-sectional study
    Janghee Park
    Journal of Educational Evaluation for Health Professions.2023; 20: 29.     CrossRef
  • Evaluating ChatGPT as a self‐learning tool in medical biochemistry: A performance assessment in undergraduate medical university examination
    Krishna Mohan Surapaneni, Anusha Rajajagadeesan, Lakshmi Goudhaman, Shalini Lakshmanan, Saranya Sundaramoorthi, Dineshkumar Ravi, Kalaiselvi Rajendiran, Porchelvan Swaminathan
    Biochemistry and Molecular Biology Education.2023;[Epub]     CrossRef
  • FROM TEXT TO DIAGNOSE: CHATGPT’S EFFICACY IN MEDICAL DECISION-MAKING
    Yaroslav Mykhalko, Pavlo Kish, Yelyzaveta Rubtsova, Oleksandr Kutsyn, Valentyna Koval
    Wiadomości Lekarskie.2023; 76(11): 2345.     CrossRef
  • Using ChatGPT for Clinical Practice and Medical Education: Cross-Sectional Survey of Medical Students’ and Physicians’ Perceptions
    Pasin Tangadulrat, Supinya Sono, Boonsin Tangtrakulwanich
    JMIR Medical Education.2023; 9: e50658.     CrossRef
  • Below average ChatGPT performance in medical microbiology exam compared to university students
    Malik Sallam, Khaled Al-Salahat
    Frontiers in Education.2023;[Epub]     CrossRef
  • ChatGPT: "To be or not to be" ... in academic research. The human mind's analytical rigor and capacity to discriminate between AI bots' truths and hallucinations
    Aurelian Anghelescu, Ilinca Ciobanu, Constantin Munteanu, Lucia Ana Maria Anghelescu, Gelu Onose
    Balneo and PRM Research Journal.2023; 14(Vol.14, no): 614.     CrossRef
  • ChatGPT Review: A Sophisticated Chatbot Models in Medical & Health-related Teaching and Learning
    Nur Izah Ab Razak, Muhammad Fawwaz Muhammad Yusoff, Rahmita Wirza O.K. Rahmat
    Malaysian Journal of Medicine and Health Sciences.2023; 19(s12): 98.     CrossRef
  • Application of artificial intelligence chatbots, including ChatGPT, in education, scholarly work, programming, and content generation and its prospects: a narrative review
    Tae Won Kim
    Journal of Educational Evaluation for Health Professions.2023; 20: 38.     CrossRef
  • Trends in research on ChatGPT and adoption-related issues discussed in articles: a narrative review
    Sang-Jun Kim
    Science Editing.2023; 11(1): 3.     CrossRef
  • Information amount, accuracy, and relevance of generative artificial intelligences’ answers to learning objectives of medical arthropodology evaluated in English and Korean queries in December 2023: a descriptive study
    Hyunju Lee, Soo Bin Park
    Journal of Educational Evaluation for Health Professions.2023; 20: 39.     CrossRef
Review
Can an artificial intelligence chatbot be the author of a scholarly article?  
Ju Yoen Lee
J Educ Eval Health Prof. 2023;20:6.   Published online February 27, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.6
  • 6,844 views
  • 612 downloads
  • 29 Web of Science citations
  • 34 Crossref citations
Abstract
At the end of 2022, the appearance of ChatGPT, an artificial intelligence (AI) chatbot with amazing writing ability, caused a great sensation in academia. The chatbot turned out to be very capable, but also capable of deception, and the news broke that several researchers had listed the chatbot (including its earlier version) as co-authors of their academic papers. In response, Nature and Science expressed their position that this chatbot cannot be listed as an author in the papers they publish. Since an AI chatbot is not a human being, in the current legal system, the text automatically generated by an AI chatbot cannot be a copyrighted work; thus, an AI chatbot cannot be an author of a copyrighted work. Current AI chatbots such as ChatGPT are much more advanced than search engines in that they produce original text, but they still remain at the level of a search engine in that they cannot take responsibility for their writing. For this reason, they also cannot be authors from the perspective of research ethics.

Citations to this article as recorded by
  • Risks of abuse of large language models, like ChatGPT, in scientific publishing: Authorship, predatory publishing, and paper mills
    Graham Kendall, Jaime A. Teixeira da Silva
    Learned Publishing.2024; 37(1): 55.     CrossRef
  • Can ChatGPT be an author? A study of artificial intelligence authorship policies in top academic journals
    Brady D. Lund, K.T. Naheem
    Learned Publishing.2024; 37(1): 13.     CrossRef
  • The Role of AI in Writing an Article and Whether it Can Be a Co-author: What if it Gets Support From 2 Different AIs Like ChatGPT and Google Bard for the Same Theme?
    İlhan Bahşi, Ayşe Balat
    Journal of Craniofacial Surgery.2024; 35(1): 274.     CrossRef
  • Artificial Intelligence–Generated Scientific Literature: A Critical Appraisal
    Justyna Zybaczynska, Matthew Norris, Sunjay Modi, Jennifer Brennan, Pooja Jhaveri, Timothy J. Craig, Taha Al-Shaikhly
    The Journal of Allergy and Clinical Immunology: In Practice.2024; 12(1): 106.     CrossRef
  • Does Google’s Bard Chatbot perform better than ChatGPT on the European hand surgery exam?
    Goetsch Thibaut, Armaghan Dabbagh, Philippe Liverneaux
    International Orthopaedics.2024; 48(1): 151.     CrossRef
  • A Brief Review of the Efficacy in Artificial Intelligence and Chatbot-Generated Personalized Fitness Regimens
    Daniel K. Bays, Cole Verble, Kalyn M. Powers Verble
    Strength & Conditioning Journal.2024;[Epub]     CrossRef
  • Academic publisher guidelines on AI usage: A ChatGPT supported thematic analysis
    Mike Perkins, Jasper Roe
    F1000Research.2024; 12: 1398.     CrossRef
  • The Use of Artificial Intelligence in Writing Scientific Review Articles
    Melissa A. Kacena, Lilian I. Plotkin, Jill C. Fehrenbacher
    Current Osteoporosis Reports.2024; 22(1): 115.     CrossRef
  • Using AI to Write a Review Article Examining the Role of the Nervous System on Skeletal Homeostasis and Fracture Healing
    Murad K. Nazzal, Ashlyn J. Morris, Reginald S. Parker, Fletcher A. White, Roman M. Natoli, Jill C. Fehrenbacher, Melissa A. Kacena
    Current Osteoporosis Reports.2024; 22(1): 217.     CrossRef
  • GenAI et al.: Cocreation, Authorship, Ownership, Academic Ethics and Integrity in a Time of Generative AI
    Aras Bozkurt
    Open Praxis.2024; 16(1): 1.     CrossRef
  • An integrative decision-making framework to guide policies on regulating ChatGPT usage
    Umar Ali Bukar, Md Shohel Sayeed, Siti Fatimah Abdul Razak, Sumendra Yogarayan, Oluwatosin Ahmed Amodu
    PeerJ Computer Science.2024; 10: e1845.     CrossRef
  • Universal skepticism of ChatGPT: a review of early literature on chat generative pre-trained transformer
    Casey Watters, Michal K. Lemanski
    Frontiers in Big Data.2023;[Epub]     CrossRef
  • The importance of human supervision in the use of ChatGPT as a support tool in scientific writing
    William Castillo-González
    Metaverse Basic and Applied Research.2023;[Epub]     CrossRef
  • ChatGPT for Future Medical and Dental Research
    Bader Fatani
    Cureus.2023;[Epub]     CrossRef
  • Chatbots in Medical Research
    Punit Sharma
    Clinical Nuclear Medicine.2023; 48(9): 838.     CrossRef
  • Potential applications of ChatGPT in dermatology
    Nicolas Kluger
    Journal of the European Academy of Dermatology and Venereology.2023;[Epub]     CrossRef
  • The emergent role of artificial intelligence, natural learning processing, and large language models in higher education and research
    Tariq Alqahtani, Hisham A. Badreldin, Mohammed Alrashed, Abdulrahman I. Alshaya, Sahar S. Alghamdi, Khalid bin Saleh, Shuroug A. Alowais, Omar A. Alshaya, Ishrat Rahman, Majed S. Al Yami, Abdulkareem M. Albekairy
    Research in Social and Administrative Pharmacy.2023; 19(8): 1236.     CrossRef
  • ChatGPT Performance on the American Urological Association Self-assessment Study Program and the Potential Influence of Artificial Intelligence in Urologic Training
    Nicholas A. Deebel, Ryan Terlecki
    Urology.2023; 177: 29.     CrossRef
  • Intelligence or artificial intelligence? More hard problems for authors of Biological Psychology, the neurosciences, and everyone else
    Thomas Ritz
    Biological Psychology.2023; 181: 108590.     CrossRef
  • The ethics of disclosing the use of artificial intelligence tools in writing scholarly manuscripts
    Mohammad Hosseini, David B Resnik, Kristi Holmes
    Research Ethics.2023; 19(4): 449.     CrossRef
  • How trustworthy is ChatGPT? The case of bibliometric analyses
    Faiza Farhat, Shahab Saquib Sohail, Dag Øivind Madsen
    Cogent Engineering.2023;[Epub]     CrossRef
  • Disclosing use of Artificial Intelligence: Promoting transparency in publishing
    Parvaiz A. Koul
    Lung India.2023; 40(5): 401.     CrossRef
  • ChatGPT in medical research: challenging time ahead
    Daideepya C Bhargava, Devendra Jadav, Vikas P Meshram, Tanuj Kanchan
    Medico-Legal Journal.2023; 91(4): 223.     CrossRef
  • Academic publisher guidelines on AI usage: A ChatGPT supported thematic analysis
    Mike Perkins, Jasper Roe
    F1000Research.2023; 12: 1398.     CrossRef
  • Ethical consideration of the use of generative artificial intelligence, including ChatGPT in writing a nursing article
    Sun Huh
    Child Health Nursing Research.2023; 29(4): 249.     CrossRef
  • ChatGPT in medical writing: A game-changer or a gimmick?
    Shital Sarah Ahaley, Ankita Pandey, Simran Kaur Juneja, Tanvi Suhane Gupta, Sujatha Vijayakumar
    Perspectives in Clinical Research.2023;[Epub]     CrossRef
  • Artificial Intelligence-Supported Systems in Anesthesiology and Its Standpoint to Date—A Review
    Fiona M. P. Pham
    Open Journal of Anesthesiology.2023; 13(07): 140.     CrossRef
  • ChatGPT as an innovative tool for increasing sales in online stores
    Michał Orzoł, Katarzyna Szopik-Depczyńska
    Procedia Computer Science.2023; 225: 3450.     CrossRef
  • Intelligent Plagiarism as a Misconduct in Academic Integrity
    Jesús Miguel Muñoz-Cantero, Eva Maria Espiñeira-Bellón
    Acta Médica Portuguesa.2023; 37(1): 1.     CrossRef
  • Follow-up of Artificial Intelligence Development and its Controlled Contribution to the Article: Step to the Authorship?
    Ekrem Solmaz
    European Journal of Therapeutics.2023;[Epub]     CrossRef
  • May Artificial Intelligence Be a Co-Author on an Academic Paper?
    Ayşe Balat, İlhan Bahşi
    European Journal of Therapeutics.2023; 29(3): e12.     CrossRef
  • Opportunities and challenges for ChatGPT and large language models in biomedicine and health
    Shubo Tian, Qiao Jin, Lana Yeganova, Po-Ting Lai, Qingqing Zhu, Xiuying Chen, Yifan Yang, Qingyu Chen, Won Kim, Donald C Comeau, Rezarta Islamaj, Aadit Kapoor, Xin Gao, Zhiyong Lu
    Briefings in Bioinformatics.2023;[Epub]     CrossRef
  • ChatGPT: "To be or not to be" ... in academic research. The human mind's analytical rigor and capacity to discriminate between AI bots' truths and hallucinations
    Aurelian Anghelescu, Ilinca Ciobanu, Constantin Munteanu, Lucia Ana Maria Anghelescu, Gelu Onose
    Balneo and PRM Research Journal.2023; 14(Vol.14, no): 614.     CrossRef
  • Editorial policies of Journal of Educational Evaluation for Health Professions on the use of generative artificial intelligence in article writing and peer review
    Sun Huh
    Journal of Educational Evaluation for Health Professions.2023; 20: 40.     CrossRef
Editorials
Application of computer-based testing in the Korean Medical Licensing Examination, the emergence of the metaverse in medical education, journal metrics and statistics, and appreciation to reviewers and volunteers
Sun Huh
J Educ Eval Health Prof. 2022;19:2.   Published online January 13, 2022
DOI: https://doi.org/10.3352/jeehp.2022.19.2
  • 7,360 views
  • 615 downloads
  • 20 Web of Science citations
  • 21 Crossref citations

Citations to this article as recorded by
  • Facing the challenges of metaverse: a systematic literature review from Social Sciences and Marketing and Communication
    Verónica Crespo-Pereira, Eva Sánchez-Amboage, Matías Membiela-Pollán
    El Profesional de la información.2023;[Epub]     CrossRef
  • Utilizing the metaverse in anatomy and physiology
    Christian Moro
    Anatomical Sciences Education.2023; 16(4): 574.     CrossRef
  • Metaverse for Healthcare: A Survey on Potential Applications, Challenges and Future Directions
    Rajeswari Chengoden, Nancy Victor, Thien Huynh-The, Gokul Yenduri, Rutvij H. Jhaveri, Mamoun Alazab, Sweta Bhattacharya, Pawan Hegde, Praveen Kumar Reddy Maddikunta, Thippa Reddy Gadekallu
    IEEE Access.2023; 11: 12765.     CrossRef
  • Issues in the 3rd year of the COVID-19 pandemic, including computer-based testing, study design, ChatGPT, journal metrics, and appreciation to reviewers
    Sun Huh
    Journal of Educational Evaluation for Health Professions.2023; 20: 5.     CrossRef
  • Beyond Your Sight Using Metaverse Immersive Vision With Technology Behaviour Model
    Poh Soon JosephNg, Xiaoxue Gong, Narinderjit Singh, Toong Hai Sam, Hua Liu, Koo Yuen Phan
    Journal of Cases on Information Technology.2023; 25(1): 1.     CrossRef
  • Metaverse applications in education: a systematic review and a cost-benefit analysis
    Mark Anthony Camilleri
    Interactive Technology and Smart Education.2023;[Epub]     CrossRef
  • Conceptions of the metaverse in higher education: A draw-a-picture analysis and surveys to investigate the perceptions of students with different motivation levels
    Gwo-Jen Hwang, Yun-Fang Tu, Hui-Chun Chu
    Computers & Education.2023; : 104868.     CrossRef
  • Presidential address: improving item validity and adopting computer-based testing, clinical skills assessments, artificial intelligence, and virtual reality in health professions licensing examinations in Korea
    Hyunjoo Pai
    Journal of Educational Evaluation for Health Professions.2023; 20: 8.     CrossRef
  • Mission and Goals of the New Editor of the Ewha Medical Journal
    Sun Huh
    The Ewha Medical Journal.2023;[Epub]     CrossRef
  • METAVERSE ORTAMINDA MUHASEBE EĞİTİMİ
    Zeynep ŞAHİN
    Uluslararası İktisadi ve İdari İncelemeler Dergisi.2023; (41): 166.     CrossRef
  • La terza dimensione dell’e-learning: il metaverso
    Annamaria Cacchione
    IUL Research.2023; 4(7): 108.     CrossRef
  • Federated Learning for the Healthcare Metaverse: Concepts, Applications, Challenges, and Future Directions
    Ali Kashif Bashir, Nancy Victor, Sweta Bhattacharya, Thien Huynh-The, Rajeswari Chengoden, Gokul Yenduri, Praveen Kumar Reddy Maddikunta, Quoc-Viet Pham, Thippa Reddy Gadekallu, Madhusanka Liyanage
    IEEE Internet of Things Journal.2023; 10(24): 21873.     CrossRef
  • Assessment of the viability of integrating virtual reality programs in practical tests for the Korean Radiological Technologists Licensing Examination: a survey study
    Hye Min Park, Eun Seong Kim, Deok Mun Kwon, Pyong Kon Cho, Seoung Hwan Kim, Ki Baek Lee, Seong Hu Kim, Moon Il Bong, Won Seok Yang, Jin Eui Kim, Gi Bong Kang, Yong Su Yoon, Jung Su Kim
    Journal of Educational Evaluation for Health Professions.2023; 20: 33.     CrossRef
  • The impact of COVID-19 pandemic on hand surgery: a FESSH perspective
    Daniel B. Herren, Frederik Verstreken, Alex Lluch, Zaf Naqui, Brigitte van der Heijden
    Journal of Hand Surgery (European Volume).2022; 47(6): 562.     CrossRef
  • Public interest in the digital transformation accelerated by the COVID-19 pandemic and perception of its future impact
    Joo-Young Park, Kangsun Lee, Doo Ryeon Chung
    The Korean Journal of Internal Medicine.2022; 37(6): 1223.     CrossRef
  • The paradigm and future value of the metaverse for the intervention of cognitive decline
    Hao Zhou, Jian-Yi Gao, Ying Chen
    Frontiers in Public Health.2022;[Epub]     CrossRef
  • Advances in Metaverse Investigation: Streams of Research and Future Agenda
    Mariapina Trunfio, Simona Rossi
    Virtual Worlds.2022; 1(2): 103.     CrossRef
  • Dynamics of Metaverse and Medicine: A Review Article
    Mrudul A Kawarase, Ashish Anjankar
    Cureus.2022;[Epub]     CrossRef
  • What the Literature on Medicine, Nursing, Public Health, Midwifery, and Dentistry Reveals: An Overview of the Rapidly Approaching Metaverse
    Muhammet DAMAR
    Journal of Metaverse.2022; 2(2): 62.     CrossRef
  • Metaverse Üzerine Kapsamlı Bir Araştırma
    Çiğdem BAKIR
    European Journal of Science and Technology.2022;[Epub]     CrossRef
  • Possibility of independent use of the yes/no Angoff and Hofstee methods for the standard setting of the Korean Medical Licensing Examination written test: a descriptive study
    Do-Hwan Kim, Ye Ji Kang, Hoon-Ki Park
    Journal of Educational Evaluation for Health Professions.2022; 19: 33.     CrossRef
Issues in the 3rd year of the COVID-19 pandemic, including computer-based testing, study design, ChatGPT, journal metrics, and appreciation to reviewers
Sun Huh
J Educ Eval Health Prof. 2023;20:5.   Published online January 31, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.5
  • 2,576 views
  • 294 downloads
  • 13 Web of Science citations
  • 13 Crossref citations

Citations to this article as recorded by
  • Seeing the forest for the trees and the changing seasons in the vast land of scholarly publishing
    Soo Jung Shin
    Science Editing.2024; 11(1): 81.     CrossRef
  • ChatGPT Utility in Healthcare Education, Research, and Practice: Systematic Review on the Promising Perspectives and Valid Concerns
    Malik Sallam
    Healthcare.2023; 11(6): 887.     CrossRef
  • Exploring Determinants of COVID-19 Vaccine Acceptance, Uptake, and Hesitancy in the Pediatric Population: A Study of Parents and Caregivers in Saudi Arabia during the Initial Vaccination Phase
    Abdullah N. Alhuzaimi, Abdullah A. Alrasheed, Ayman Al-Eyadhy, Fadi Aljamaan, Khalid Alhasan, Mohammed A. Batais, Amr Jamal, Fatimah S. Alshahrani, Shuliweeh Alenezi, Ali Alhaboob, Fahad AlZamil, Yaser Y. Bashumeel, Ahmad M. Banaeem, Abdulrahman Aldawood,
    Healthcare.2023; 11(7): 972.     CrossRef
  • ChatGPT and large language model (LLM) chatbots: The current state of acceptability and a proposal for guidelines on utilization in academic medicine
    Jin K. Kim, Michael Chua, Mandy Rickard, Armando Lorenzo
    Journal of Pediatric Urology.2023; 19(5): 598.     CrossRef
  • The Potential Usefulness of ChatGPT in Oral and Maxillofacial Radiology
    Jyoti Mago, Manoj Sharma
    Cureus.2023;[Epub]     CrossRef
  • Decoding ChatGPT: A taxonomy of existing research, current challenges, and possible future directions
    Shahab Saquib Sohail, Faiza Farhat, Yassine Himeur, Mohammad Nadeem, Dag Øivind Madsen, Yashbir Singh, Shadi Atalla, Wathiq Mansoor
    Journal of King Saud University - Computer and Information Sciences.2023; 35(8): 101675.     CrossRef
  • Universal skepticism of ChatGPT: a review of early literature on chat generative pre-trained transformer
    Casey Watters, Michal K. Lemanski
    Frontiers in Big Data.2023;[Epub]     CrossRef
  • Journal of Educational Evaluation for Health Professions received the Journal Impact Factor, 4.4 for the first time on June 28, 2023
    Sun Huh
    Journal of Educational Evaluation for Health Professions.2023; 20: 21.     CrossRef
  • ChatGPT in action: Harnessing artificial intelligence potential and addressing ethical challenges in medicine, education, and scientific research
    Madhan Jeyaraman, Swaminathan Ramasubramanian, Sangeetha Balaji, Naveen Jeyaraman, Arulkumar Nallakumarasamy, Shilpa Sharma
    World Journal of Methodology.2023; 13(4): 170.     CrossRef
  • ChatGPT in pharmacy practice: a cross-sectional exploration of Jordanian pharmacists' perception, practice, and concerns
    Khawla Abu Hammour, Hamza Alhamad, Fahmi Y. Al-Ashwal, Abdulsalam Halboup, Rana Abu Farha, Adnan Abu Hammour
    Journal of Pharmaceutical Policy and Practice.2023;[Epub]     CrossRef
  • ChatGPT: unlocking the potential of Artificial Intelligence in COVID-19 monitoring and prediction
    Alberto G. GERLI, Joan B. SORIANO, Gianfranco ALICANDRO, Michele SALVAGNO, Fabio TACCONE, Stefano CENTANNI, Carlo LA VECCHIA
    Panminerva Medica.2023;[Epub]     CrossRef
  • A systematic review and meta-analysis on ChatGPT and its utilization in medical and dental research
    Hiroj Bagde, Ashwini Dhopte, Mohammad Khursheed Alam, Rehana Basri
    Heliyon.2023; 9(12): e23050.     CrossRef
  • ChatGPT: A brief narrative review
    Bulbul Gupta, Tabish Mufti, Shahab Saquib Sohail, Dag Øivind Madsen
    Cogent Business & Management.2023;[Epub]     CrossRef
Review
Prevalence of burnout and related factors in nursing faculty members: a systematic review  
Marziyeh Hosseini, Mitra Soltanian, Camellia Torabizadeh, Zahra Hadian Shirazi
J Educ Eval Health Prof. 2022;19:16.   Published online July 14, 2022
DOI: https://doi.org/10.3352/jeehp.2022.19.16
  • 3,951 View
  • 387 Download
  • 5 Web of Science
  • 8 Crossref
Abstract PDF Supplementary Material
Purpose
The current study aimed to identify the prevalence of burnout and related factors in nursing faculty members through a systematic review of the literature.
Methods
A comprehensive search of electronic databases, including Scopus, PubMed, Web of Science, Iranmedex, and Scientific Information Database was conducted via keywords extracted from Medical Subject Headings, including burnout and nursing faculty, for studies published from database inception to April 1, 2022. The quality of the included studies in this review was assessed using the appraisal tool for cross-sectional studies.
Results
A total of 2,551 nursing faculty members were enrolled in 11 studies. The mean score of burnout in nursing faculty members based on the Maslach Burnout Inventory (MBI) was 59.28 out of 132. The burnout score in this study was presented in 3 MBI subscales: emotional exhaustion, 21.24 (standard deviation [SD]=9.70) out of 54; depersonalization, 5.88 (SD=4.20) out of 30; and personal accomplishment, 32.16 (SD=6.45) out of 48. Several factors had significant relationships with burnout in nursing faculty members, including gender, level of education, hours of work, number of classroom students taught, full-time work, job pressure, perceived stress, subjective well-being, marital status, job satisfaction, work setting satisfaction, workplace empowerment, collegial support, management style, fulfillment of self-expectation, communication style, humor, and academic position.
Conclusion
Overall, the mean burnout scores in nursing faculty members were moderate. Therefore, health policymakers and managers can reduce the likelihood of burnout in nursing faculty members by using psychosocial interventions and support.

Citations

Citations to this article as recorded by  
  • Civility and resilience practices to address chronic workplace stress in nursing academia
    Teresa M. Stephens, Cynthia M. Clark
    Teaching and Learning in Nursing.2024;[Epub]     CrossRef
  • The state of mental health, burnout, mattering and perceived wellness culture in Doctorally prepared nursing faculty with implications for action
    Bernadette Mazurek Melnyk, Lee Ann Strait, Cindy Beckett, Andreanna Pavan Hsieh, Jeffery Messinger, Randee Masciola
    Worldviews on Evidence-Based Nursing.2023; 20(2): 142.     CrossRef
  • Pressures in the Ivory Tower: An Empirical Study of Burnout Scores among Nursing Faculty
    Sheila A. Boamah, Michael Kalu, Rosain Stennett, Emily Belita, Jasmine Travers
    International Journal of Environmental Research and Public Health.2023; 20(5): 4398.     CrossRef
  • Understanding and Fostering Mental Health and Well-Being among University Faculty: A Narrative Review
    Dalal Hammoudi Halat, Abderrezzaq Soltani, Roua Dalli, Lama Alsarraj, Ahmed Malki
    Journal of Clinical Medicine.2023; 12(13): 4425.     CrossRef
  • Strategies to promote nurse educator well-being and prevent burnout: An integrative review
    Allan Lovern, Lindsay Quinlan, Stephanie Brogdon, Cora Rabe, Laura S. Bonanno
    Teaching and Learning in Nursing.2023;[Epub]     CrossRef
  • ALS Health care provider wellness
    Gregory Hansen, Sarah Burton-MacLeod, Kerri Lynn Schellenberg
    Amyotrophic Lateral Sclerosis and Frontotemporal Degeneration.2023; : 1.     CrossRef
  • Caring for faculty: results of a distance self-care program for health professions educators
    Denisse Zúñiga, Guadalupe Echeverría, Pía Nitsche, Nuria Pedrals, Attilio Rigotti, Marisol Sirhan, Klaus Puschel, Marcela Bitran
    Educación Médica.2023; : 100871.     CrossRef
  • A mixed-methods study of the effectiveness and perceptions of a course design institute for health science educators
    Julie Speer, Quincy Conley, Derek Thurber, Brittany Williams, Mitzi Wasden, Brenda Jackson
    BMC Medical Education.2022;[Epub]     CrossRef
Educational/Faculty development material
Using a virtual flipped classroom model to promote critical thinking in online graduate courses in the United States: a case presentation  
Jennifer Tomesko, Deborah Cohen, Jennifer Bridenbaugh
J Educ Eval Health Prof. 2022;19:5.   Published online February 28, 2022
DOI: https://doi.org/10.3352/jeehp.2022.19.5
  • 4,256 View
  • 452 Download
  • 3 Web of Science
  • 5 Crossref
Abstract PDF Supplementary Material
Flipped classroom models encourage student autonomy and reverse the order of traditional classroom content such as lectures and assignments. Virtual learning environments are ideal for executing flipped classroom models to improve critical thinking skills. This paper provides health professions faculty with guidance on developing a virtual flipped classroom in online graduate nutrition courses, delivered between September 2021 and January 2022 at the School of Health Professions, Rutgers, The State University of New Jersey. Examples of pre-class, live virtual face-to-face, and post-class activities are provided. Active learning, immediate feedback, and enhanced student engagement in a flipped classroom may lead to a more thorough synthesis of information and, in turn, increased critical thinking skills. This article describes how a flipped classroom model that incorporates virtual face-to-face class sessions in online graduate courses can be used to promote critical thinking skills. Health professions faculty who teach online can apply the examples discussed to their own courses.

Citations

Citations to this article as recorded by  
  • A scoping review of educational programmes on artificial intelligence (AI) available to medical imaging staff
    G. Doherty, L. McLaughlin, C. Hughes, J. McConnell, R. Bond, S. McFadden
    Radiography.2024; 30(2): 474.     CrossRef
  • Inculcating Critical Thinking Skills in Medical Students: Ways and Means
    Mandeep Kaur, Rajiv Mahajan
    International Journal of Applied & Basic Medical Research.2023; 13(2): 57.     CrossRef
  • Promoting students’ critical thinking and scientific attitudes through socio-scientific issues-based flipped classroom
    Nurfatimah Sugrah, Suyanta, Antuni Wiyarsi
    LUMAT: International Journal on Math, Science and Technology Education.2023;[Epub]     CrossRef
  • Bibliometric analysis of the global scientific production on the flipped classroom in medical education
    Gloria Katty Muñoz-Estrada, Hugo Eladio Chumpitaz Caycho, John Barja-Ore, Natalia Valverde-Espinoza, Liliana Verde-Vargas, Frank Mayta-Tovalino
    Educación Médica.2022; 23(5): 100758.     CrossRef
  • Effect of a flipped classroom course to foster medical students’ AI literacy with a focus on medical imaging: a single group pre-and post-test study
    Matthias C. Laupichler, Dariusch R. Hadizadeh, Maximilian W. M. Wintergerst, Leon von der Emde, Daniel Paech, Elizabeth A. Dick, Tobias Raupach
    BMC Medical Education.2022;[Epub]     CrossRef
Review
Medical students’ satisfaction level with e-learning during the COVID-19 pandemic and its related factors: a systematic review  
Mahbubeh Tabatabaeichehr, Samane Babaei, Mahdieh Dartomi, Peiman Alesheikh, Amir Tabatabaee, Hamed Mortazavi, Zohreh Khoshgoftar
J Educ Eval Health Prof. 2022;19:37.   Published online December 20, 2022
DOI: https://doi.org/10.3352/jeehp.2022.19.37
  • 2,081 View
  • 185 Download
  • 5 Web of Science
  • 4 Crossref
Abstract PDF Supplementary Material
Purpose
This review investigated medical students’ satisfaction level with e-learning during the coronavirus disease 2019 (COVID-19) pandemic and its related factors.
Methods
A comprehensive systematic search was performed of international literature databases, including Scopus, PubMed, Web of Science, and Persian databases such as Iranmedex and Scientific Information Database using keywords extracted from Medical Subject Headings such as “Distance learning,” “Distance education,” “Online learning,” “Online education,” and “COVID-19” from the earliest date to July 10, 2022. The quality of the studies included in this review was evaluated using the appraisal tool for cross-sectional studies (AXIS tool).
Results
A total of 15,473 medical science students were enrolled in 24 studies. The level of satisfaction with e-learning during the COVID-19 pandemic among medical science students was 51.8%. Factors such as age, gender, clinical year, experience with e-learning before COVID-19, level of study, adaptation content of course materials, interactivity, understanding of the content, active participation of the instructor in the discussion, multimedia use in teaching sessions, adequate time dedicated to the e-learning, stress perception, and convenience had significant relationships with the satisfaction of medical students with e-learning during the COVID-19 pandemic.
Conclusion
Given the inevitability of online education and e-learning, it is suggested that educational managers and policymakers draw on the various studies in this field to choose the best online education methods for medical students and thereby increase their satisfaction with e-learning.

Citations

Citations to this article as recorded by  
  • Factors affecting medical students’ satisfaction with online learning: a regression analysis of a survey
    Özlem Serpil Çakmakkaya, Elif Güzel Meydanlı, Ali Metin Kafadar, Mehmet Selman Demirci, Öner Süzer, Muhlis Cem Ar, Muhittin Onur Yaman, Kaan Can Demirbaş, Mustafa Sait Gönen
    BMC Medical Education.2024;[Epub]     CrossRef
  • A comparative study on the effectiveness of online and in-class team-based learning on student performance and perceptions in virtual simulation experiments
    Jing Shen, Hongyan Qi, Ruhuan Mei, Cencen Sun
    BMC Medical Education.2024;[Epub]     CrossRef
  • Physician Assistant Students’ Perception of Online Didactic Education: A Cross-Sectional Study
    Daniel L Anderson, Jeffrey L Alexander
    Cureus.2023;[Epub]     CrossRef
  • Mediating Role of PERMA Wellbeing in the Relationship between Insomnia and Psychological Distress among Nursing College Students
    Qian Sun, Xiangyu Zhao, Yiming Gao, Di Zhao, Meiling Qi
    Behavioral Sciences.2023; 13(9): 764.     CrossRef
Research articles
Improvement of the clinical skills of nurse anesthesia students using mini-clinical evaluation exercises in Iran: a randomized controlled study  
Ali Khalafi, Yasamin Sharbatdar, Nasrin Khajeali, Mohammad Hosein Haghighizadeh, Mahshid Vaziri
J Educ Eval Health Prof. 2023;20:12.   Published online April 6, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.12
  • 1,712 View
  • 115 Download
  • 1 Web of Science
  • 3 Crossref
Abstract PDF Supplementary Material
Purpose
The present study aimed to investigate the effect of a mini-clinical evaluation exercise (CEX) assessment on improving the clinical skills of nurse anesthesia students at Ahvaz Jundishapur University of Medical Sciences, Ahvaz, Iran.
Methods
This study started on November 1, 2022, and ended on December 1, 2022. It was conducted among 50 nurse anesthesia students divided into intervention and control groups. The intervention group’s clinical skills were evaluated 4 times using the mini-CEX method. In contrast, the same skills were evaluated in the control group based on the conventional method—that is, general supervision by the instructor during the internship and a summative evaluation based on a checklist at the end of the course. The intervention group students also filled out a questionnaire to measure their satisfaction with the mini-CEX method.
Results
The mean score of the students in both the control and intervention groups increased significantly on the post-test (P<0.0001), but the improvement in the scores of the intervention group was significantly greater compared with the control group (P<0.0001). The overall mean score for satisfaction in the intervention group was 76.3 out of a maximum of 95.
Conclusion
The findings of this study showed that using mini-CEX as a formative evaluation method to evaluate clinical skills had a significant effect on the improvement of nurse anesthesia students’ clinical skills, and they had a very favorable opinion about this evaluation method.

Citations

Citations to this article as recorded by  
  • Psychometric testing of anesthesia nursing competence scale (AnestComp)
    Samira Mahmoudi, Akram Yazdani, Fatemeh Hasanshiri
    Perioperative Care and Operating Room Management.2024; 34: 100368.     CrossRef
  • Comparing Satisfaction of Undergraduate Nursing Students: Mini-CEX vs CIM in Assessing Clinical Competence
    Somia Saghir, Anny Ashiq Ali, Kashif Khan, Uzma Bibi, Shafaat Ullah, Rafi Ullah, Zaifullah Khan, Tahir Khan
    Pakistan Journal of Health Sciences.2023; : 134.     CrossRef
  • Enhancement of the technical and non-technical skills of nurse anesthesia students using the Anesthetic List Management Assessment Tool in Iran: a quasi-experimental study
    Ali Khalafi, Maedeh Kordnejad, Vahid Saidkhani
    Journal of Educational Evaluation for Health Professions.2023; 20: 19.     CrossRef
Suggestion of more suitable study designs and the corresponding reporting guidelines in articles published in the Journal of Educational Evaluation for Health Professions from 2021 to September 2022: a descriptive study  
Soo Young Kim
J Educ Eval Health Prof. 2022;19:36.   Published online December 26, 2022
DOI: https://doi.org/10.3352/jeehp.2022.19.36
  • 1,170 View
  • 106 Download
  • 3 Web of Science
  • 3 Crossref
Abstract PDF Supplementary Material
Purpose
This study aimed to suggest more suitable study designs and the corresponding reporting guidelines for the papers published in the Journal of Educational Evaluation for Health Professions from January 2021 to September 2022.
Methods
Among 59 papers published in the Journal of Educational Evaluation for Health Professions from January 2021 to September 2022, research articles, review articles, and brief reports were selected. The following were analyzed: first, the percentage of articles describing the study design in the title, abstract, or methods; second, the proportion of articles describing reporting guidelines; third, the types of study design and corresponding reporting guidelines; and fourth, the suggestion of a more suitable study design based on the study design algorithm for medical literature on interventions, systematic reviews and other review types, and an overview of epidemiological studies.
Results
Out of 45 articles, 44 (97.8%) described their study designs. For 19 of these 44 articles, more suitable study designs were suggested, mainly for before-and-after studies, diagnostic research, and non-randomized trials. Of the 18 reporting guidelines mentioned, 8 (44.4%) were considered perfect matches. STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) was used for descriptive studies, before-and-after studies, and randomized controlled trials; however, its use in these cases should be reconsidered.
Conclusion
For some articles, more suitable study design declarations and reporting guidelines were suggested. Education and training on study designs and reporting guidelines are needed for researchers, and reporting guideline policies for descriptive studies should also be implemented.

Citations

Citations to this article as recorded by  
  • Issues in the 3rd year of the COVID-19 pandemic, including computer-based testing, study design, ChatGPT, journal metrics, and appreciation to reviewers
    Sun Huh
    Journal of Educational Evaluation for Health Professions.2023; 20: 5.     CrossRef
  • A comprehensive perspective on the interaction between gut microbiota and COVID-19 vaccines
    Ming Hong, Tin Lan, Qiuxia Li, Binfei Li, Yong Yuan, Feng Xu, Weijia Wang
    Gut Microbes.2023;[Epub]     CrossRef
  • Why do editors of local nursing society journals strive to have their journals included in MEDLINE? A case study of the Korean Journal of Women Health Nursing
    Sun Huh
    Korean Journal of Women Health Nursing.2023; 29(3): 147.     CrossRef
Physical therapy students’ perception of their ability of clinical and clinical decision-making skills enhanced after simulation-based learning courses in the United States: a repeated measures design  
Fabian Bizama, Mansoor Alameri, Kristy Jean Demers, Derrick Ferguson Campbell
J Educ Eval Health Prof. 2022;19:34.   Published online December 19, 2022
DOI: https://doi.org/10.3352/jeehp.2022.19.34
  • 2,191 View
  • 177 Download
  • 1 Web of Science
  • 3 Crossref
Abstract PDF Supplementary Material
Purpose
This study aimed to investigate physical therapy students’ perceptions of their clinical and clinical decision-making skills after a simulation-based learning course in the United States.
Methods
Survey questionnaires were administered to voluntary participants, including 44 second and third-year physical therapy students of the University of St. Augustine for Health Sciences during 2021–2022. Thirty-six questionnaire items consisted of 4 demographic items, 1 general evaluation, 21 test items for clinical decision-making skills, and 4 clinical skill items. Descriptive and inferential statistics evaluated differences in students’ perception of their ability in clinical decision-making and clinical skills, pre- and post-simulation, and post-first clinical experience during 2021–2022.
Results
Friedman test revealed a significant increase from pre- to post-simulation in perception of the ability of clinical and clinical decision-making skills total tool score (P<0.001), clinical decision-making 21-item score (P<0.001), and clinical skills score (P<0.001). No significant differences were found between post-simulation and post-first clinical experience. Post-hoc tests indicated a significant difference between pre-simulation and post-simulation (P<0.001) and between pre-simulation and post-first clinical experience (P<0.001). Forty-three students (97.6%) either strongly agreed (59.1%) or agreed (38.5%) that simulation was a valuable learning experience.
Conclusion
The above findings suggest that simulation-based learning helped students begin their first clinical experience with enhanced clinical and clinical decision-making skills.

Citations

Citations to this article as recorded by  
  • Physiotherapists' training in oncology rehabilitation from entry‐level to advanced education: A qualitative study
    Gianluca Bertoni, Valentina Conti, Marco Testa, Ilaria Coppola, Stefania Costi, Simone Battista
    Physiotherapy Research International.2024;[Epub]     CrossRef
  • Technology-mediated clinical simulation: a didactic scenario using resources for the training of rehabilitation professionals
    Cyndi Yacira Meneses Castaño, Isabel Jimenez Becerra, Paola Teresa Penagos Gomez
    Educación Médica.2023; 24(4): 100810.     CrossRef
  • Self-Efficacy with Telehealth Examination: the Doctor of Physical Therapy Student Perspective
    Derrick F. Campbell, Jean-Michel Brismee, Brad Allen, Troy Hooper, Manuel A. Domenech, Kathleen J. Manella
    Philippine Journal of Physical Therapy.2023; 2(2): 12.     CrossRef
Performance of ChatGPT, Bard, Claude, and Bing on the Peruvian National Licensing Medical Examination: a cross-sectional study
Betzy Clariza Torres-Zegarra, Wagner Rios-Garcia, Alvaro Micael Ñaña-Cordova, Karen Fatima Arteaga-Cisneros, Xiomara Cristina Benavente Chalco, Marina Atena Bustamante Ordoñez, Carlos Jesus Gutierrez Rios, Carlos Alberto Ramos Godoy, Kristell Luisa Teresa Panta Quezada, Jesus Daniel Gutierrez-Arratia, Javier Alejandro Flores-Cohaila
J Educ Eval Health Prof. 2023;20:30.   Published online November 20, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.30
  • 893 View
  • 143 Download
  • 2 Crossref
Abstract PDF Supplementary Material
Purpose
We aimed to describe the performance and evaluate the educational value of justifications provided by artificial intelligence chatbots, including GPT-3.5, GPT-4, Bard, Claude, and Bing, on the Peruvian National Medical Licensing Examination (P-NLME).
Methods
This was a cross-sectional analytical study. On July 25, 2023, each multiple-choice question (MCQ) from the P-NLME was entered into each chatbot (GPT-3.5, GPT-4, Bing, Bard, and Claude) 3 times. Then, 4 medical educators categorized the MCQs in terms of medical area, item type, and whether the MCQ required Peru-specific knowledge. They assessed the educational value of the justifications from the 2 top performers (GPT-4 and Bing).
Results
GPT-4 scored 86.7% and Bing scored 82.2%, followed by Bard and Claude, and the historical performance of Peruvian examinees was 55%. Among the factors associated with correct answers, only MCQs that required Peru-specific knowledge had lower odds (odds ratio, 0.23; 95% confidence interval, 0.09–0.61), whereas the remaining factors showed no associations. In assessing the educational value of justifications provided by GPT-4 and Bing, neither showed any significant differences in certainty, usefulness, or potential use in the classroom.
Conclusion
Among chatbots, GPT-4 and Bing were the top performers, with Bing performing better at Peru-specific MCQs. Moreover, the educational value of justifications provided by the GPT-4 and Bing could be deemed appropriate. However, it is essential to start addressing the educational value of these chatbots, rather than merely their performance on examinations.

Citations

Citations to this article as recorded by  
  • Performance of GPT-4V in answering the Japanese otolaryngology board certification examination questions: An evaluation study (Preprint)
    Masao Noda, Takayoshi Ueno, Ryota Koshu, Yuji Takaso, Mari Dias Shimada, Chizu Saito, Hisashi Sugimoto, Hiroaki Fushiki, Makoto Ito, Akihiro Nomura, Tomokazu Yoshizaki
    JMIR Medical Education.2024;[Epub]     CrossRef
  • Information amount, accuracy, and relevance of generative artificial intelligence platforms’ answers regarding learning objectives of medical arthropodology evaluated in English and Korean queries in December 2023: a descriptive study
    Hyunju Lee, Soobin Park
    Journal of Educational Evaluation for Health Professions.2023; 20: 39.     CrossRef
Medical students’ patterns of using ChatGPT as a feedback tool and perceptions of ChatGPT in a Leadership and Communication course in Korea: a cross-sectional study  
Janghee Park
J Educ Eval Health Prof. 2023;20:29.   Published online November 10, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.29
  • 1,015 View
  • 111 Download
  • 1 Web of Science
  • 2 Crossref
Abstract PDF Supplementary Material
Purpose
This study aimed to analyze patterns of using ChatGPT before and after group activities and to explore medical students’ perceptions of ChatGPT as a feedback tool in the classroom.
Methods
The study included 99 2nd-year pre-medical students who participated in a “Leadership and Communication” course from March to June 2023. Students engaged in both individual and group activities related to negotiation strategies. ChatGPT was used to provide feedback on their solutions. A survey was administered to assess students’ perceptions of ChatGPT’s feedback, its use in the classroom, and the strengths and challenges of ChatGPT from May 17 to 19, 2023.
Results
The students responded by indicating that ChatGPT’s feedback was helpful, and revised and resubmitted their group answers in various ways after receiving feedback. The majority of respondents expressed agreement with the use of ChatGPT during class. The most common response concerning the appropriate context of using ChatGPT’s feedback was “after the first round of discussion, for revisions.” There was a significant difference in satisfaction with ChatGPT’s feedback, including correctness, usefulness, and ethics, depending on whether or not ChatGPT was used during class, but there was no significant difference according to gender or whether students had previous experience with ChatGPT. The strongest advantages were “providing answers to questions” and “summarizing information,” and the worst disadvantage was “producing information without supporting evidence.”
Conclusion
The students were aware of the advantages and disadvantages of ChatGPT, and they had a positive attitude toward using ChatGPT in the classroom.

Citations

Citations to this article as recorded by  
  • ChatGPT and Clinical Training: Perception, Concerns, and Practice of Pharm-D Students
    Mohammed Zawiah, Fahmi Al-Ashwal, Lobna Gharaibeh, Rana Abu Farha, Karem Alzoubi, Khawla Abu Hammour, Qutaiba A Qasim, Fahd Abrah
    Journal of Multidisciplinary Healthcare.2023; Volume 16: 4099.     CrossRef
  • Information amount, accuracy, and relevance of generative artificial intelligence platforms’ answers regarding learning objectives of medical arthropodology evaluated in English and Korean queries in December 2023: a descriptive study
    Hyunju Lee, Soobin Park
    Journal of Educational Evaluation for Health Professions.2023; 20: 39.     CrossRef
Factors influencing the learning transfer of nursing students in a non-face-to-face educational environment during the COVID-19 pandemic in Korea: a cross-sectional study using structural equation modeling  
Geun Myun Kim, Yunsoo Kim, Seong Kwang Kim
J Educ Eval Health Prof. 2023;20:14.   Published online April 27, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.14
  • 1,261 View
  • 141 Download
  • 2 Web of Science
  • 2 Crossref
Abstract PDF Supplementary Material
Purpose
The aim of this study was to identify factors influencing the learning transfer of nursing students in a non-face-to-face educational environment through structural equation modeling and suggest ways to improve the transfer of learning.
Methods
In this cross-sectional study, data were collected via online surveys from February 9 to March 1, 2022, from 218 nursing students in Korea. Learning transfer, learning immersion, learning satisfaction, learning efficacy, self-directed learning ability, and information technology utilization ability were analyzed using IBM SPSS for Windows ver. 22.0 and AMOS ver. 22.0.
Results
The assessment of structural equation modeling showed adequate model fit, with normed χ2=1.74 (P<0.024), goodness-of-fit index=0.97, adjusted goodness-of-fit index=0.93, comparative fit index=0.98, root mean square residual=0.02, Tucker-Lewis index=0.97, normed fit index=0.96, and root mean square error of approximation=0.06. In a hypothetical model analysis, 9 out of 11 pathways of the hypothetical structural model for learning transfer in nursing students were statistically significant. Learning self-efficacy and learning immersion of nursing students directly affected learning transfer, and subjective information technology utilization ability, self-directed learning ability, and learning satisfaction were variables with indirect effects. The explanatory power of immersion, satisfaction, and self-efficacy for learning transfer was 44.4%.
Conclusion
The assessment of structural equation modeling indicated an acceptable fit. Learning transfer should be improved by developing self-directed programs that strengthen learning ability, including the use of information technology, in nursing students’ non-face-to-face learning environments.

Citations

Citations to this article as recorded by  
  • The Mediating Effect of Perceived Institutional Support on Inclusive Leadership and Academic Loyalty in Higher Education
    Olabode Gbobaniyi, Shalini Srivastava, Abiodun Kolawole Oyetunji, Chiemela Victor Amaechi, Salmia Binti Beddu, Bajpai Ankita
    Sustainability.2023; 15(17): 13195.     CrossRef
  • Transfer of Learning of New Nursing Professionals: Exploring Patterns and the Effect of Previous Work Experience
    Helena Roig-Ester, Paulina Elizabeth Robalino Guerra, Carla Quesada-Pallarès, Andreas Gegenfurtner
    Education Sciences.2023; 14(1): 52.     CrossRef
Priorities in updating training paradigms in orthopedic manual therapy: an international Delphi study  
Damian Keter, David Griswold, Kenneth Learman, Chad Cook
J Educ Eval Health Prof. 2023;20:4.   Published online January 27, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.4
  • 2,562 View
  • 260 Download
  • 2 Web of Science
  • 2 Crossref
Abstract PDF Supplementary Material
Purpose
Orthopedic manual therapy (OMT) education demonstrates significant variability between philosophies, and while the literature has offered a more comprehensive understanding of the contextual, patient-specific, and technique factors that interact to influence outcomes, most OMT training paradigms continue to emphasize the mechanical basis for OMT application. The purpose of this study was to establish consensus on modifications and adaptations to training paradigms that need to occur within OMT education to align with current evidence.
Methods
A 3-round Delphi survey instrument designed to identify foundational knowledge to include and omit from OMT education was completed by 28 educators working within high level manual therapy education programs internationally. Round 1 consisted of open-ended questions to identify content in each area. Round 2 and Round 3 allowed participants to rank the themes identified in Round 1.
Results
Consensus was reached on 25 content areas to include within OMT education, 1 content area to omit from OMT education, and 34 knowledge components that should be present in those providing OMT. Support was seen for education promoting an understanding of the complex psychological, neurophysiological, and biomechanical systems as they relate to both evaluation and treatment effects. While some concepts were more consistently supported, there was significant variability in responses, which is largely expected to be related to previous training.
Conclusion
The results of this study indicate that manual therapy educators understand evidence-based practice, as support for all 3 tiers of evidence was represented. These results should guide the development and modification of OMT training programs.

Citations

Citations to this article as recorded by  
  • A critical review of the role of manual therapy in the treatment of individuals with low back pain
    Jean-Pascal Grenier, Maria Rothmund
    Journal of Manual & Manipulative Therapy.2024; : 1.     CrossRef
  • Modernizing patient-centered manual therapy: Findings from a Delphi study on orthopaedic manual therapy application
    Damian Keter, David Griswold, Kenneth Learman, Chad Cook
    Musculoskeletal Science and Practice.2023; 65: 102777.     CrossRef
Educational/Faculty development material
Common models and approaches for the clinical educator to plan effective feedback encounters  
Cesar Orsini, Veena Rodrigues, Jorge Tricio, Margarita Rosel
J Educ Eval Health Prof. 2022;19:35.   Published online December 19, 2022
DOI: https://doi.org/10.3352/jeehp.2022.19.35
  • 3,675 View
  • 576 Download
  • 1 Web of Science
  • 2 Crossref
Abstract (PDF, Supplementary Material)
Giving constructive feedback is crucial for learners to bridge the gap between their current performance and the desired standards of competence. Giving effective feedback is a skill that can be learned, practiced, and improved. Therefore, our aim was to explore feedback models used in clinical settings and assess their transferability to different clinical feedback encounters. We identified the 6 most common and accepted feedback models: the Feedback Sandwich, the Pendleton Rules, the One-Minute Preceptor, the SET-GO model, the R2C2 (Rapport/Reaction/Content/Coach) model, and the ALOBA (Agenda Led Outcome-Based Analysis) model. We present a practical resource describing each model's structure, strengths and weaknesses, requirements for educators and learners, and the feedback encounters for which it is best suited. These feedback models represent practical frameworks that educators can adopt, but also adapt to their preferred style, combining and modifying them if necessary to suit their needs and context.

Citations

Citations to this article as recorded by  
  • Navigating power dynamics between pharmacy preceptors and learners
    Shane Tolleson, Mabel Truong, Natalie Rosario
    Exploratory Research in Clinical and Social Pharmacy.2024; 13: 100408.     CrossRef
  • Feedback conversations: First things first?
    Katharine A. Robb, Marcy E. Rosenbaum, Lauren Peters, Susan Lenoch, Donna Lancianese, Jane L. Miller
    Patient Education and Counseling.2023; 115: 107849.     CrossRef
