
JEEHP : Journal of Educational Evaluation for Health Professions

OPEN ACCESS
Most read articles

88 Most read articles

Most-read articles are drawn from the articles published since 2022 and reflect readership during the last three months.

Research article
No difference in factual or conceptual recall comprehension for tablet, laptop, and handwritten note-taking by medical students in the United States: a survey-based observational study  
Warren Wiechmann, Robert Edwards, Cheyenne Low, Alisa Wray, Megan Boysen-Osborn, Shannon Toohey
J Educ Eval Health Prof. 2022;19:8.   Published online April 26, 2022
DOI: https://doi.org/10.3352/jeehp.2022.19.8
  • 10,672 Views
  • 466 Downloads
  • 1 Web of Science citation
Abstract | PDF | Supplementary Material
Purpose
Technological advances are changing how students approach learning. The traditional method of longhand note-taking has been supplemented, and in many cases replaced, by tablet, smartphone, and laptop note-taking. It has been theorized that writing notes by hand requires more complex cognitive processing and may lead to better retention. However, few studies have investigated tablet-based note-taking, which allows typing, drawing, highlighting, and other media to be combined. We therefore sought to test the hypothesis that tablet-based note-taking would lead to recall equivalent or superior to that of written note-taking.
Methods
We allocated 68 students to longhand, laptop, or tablet note-taking groups. Participants watched and took notes on a presentation, on which they were later assessed for factual and conceptual recall. A second short distractor video was then shown, followed by a 30-minute assessment. The study took place on the University of California, Irvine campus over a single day in August 2018. Notes were analyzed for content, supplemental drawings, and other media sources.
Results
No significant difference was found in the factual or conceptual recall scores for tablet, laptop, and handwritten note-taking (P=0.61). The median word count was 131.5 for tablets, 121.0 for handwriting, and 297.0 for laptops (P=0.01). Drawings, highlighting, and other media/tools were most prevalent in the tablet group.
Conclusion
In light of conflicting research regarding the best note-taking method, our study showed that longhand note-taking is not superior to tablet or laptop note-taking. This suggests students should be encouraged to pick the note-taking method that appeals most to them. In the future, traditional note-taking may be replaced or supplemented with digital technologies that provide similar efficacy with more convenience.
Educational/Faculty development material
Common models and approaches for the clinical educator to plan effective feedback encounters  
Cesar Orsini, Veena Rodrigues, Jorge Tricio, Margarita Rosel
J Educ Eval Health Prof. 2022;19:35.   Published online December 19, 2022
DOI: https://doi.org/10.3352/jeehp.2022.19.35
  • 5,264 Views
  • 679 Downloads
  • 2 Web of Science citations
  • 3 Crossref citations
Abstract | PDF | Supplementary Material
Giving constructive feedback is crucial for learners to bridge the gap between their current performance and the desired standard of competence. Giving effective feedback is a skill that can be learned, practiced, and improved. Our aim was therefore to explore common feedback models used in clinical settings and assess their transferability across different clinical feedback encounters. We identified the 6 most common and accepted feedback models: the Feedback Sandwich, the Pendleton Rules, the One-Minute Preceptor, the SET-GO model, the R2C2 (Rapport/Reaction/Content/Coach) model, and the ALOBA (Agenda Led Outcome-based Analysis) model. We present a handy resource describing each model's structure, strengths and weaknesses, requirements for educators and learners, and the feedback encounters for which it is best suited. These feedback models represent practical frameworks that educators can adopt, but also adapt to their preferred style, combining and modifying them if necessary to suit their needs and context.

Citations to this article, as recorded by Crossref:
  • Navigating power dynamics between pharmacy preceptors and learners
    Shane Tolleson, Mabel Truong, Natalie Rosario
    Exploratory Research in Clinical and Social Pharmacy.2024; 13: 100408.     CrossRef
  • Feedback in Medical Education—Its Importance and How to Do It
    Tarik Babar, Omer A. Awan
    Academic Radiology.2024;[Epub]     CrossRef
  • Feedback conversations: First things first?
    Katharine A. Robb, Marcy E. Rosenbaum, Lauren Peters, Susan Lenoch, Donna Lancianese, Jane L. Miller
    Patient Education and Counseling.2023; 115: 107849.     CrossRef
Review
Can an artificial intelligence chatbot be the author of a scholarly article?  
Ju Yoen Lee
J Educ Eval Health Prof. 2023;20:6.   Published online February 27, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.6
  • 8,193 Views
  • 662 Downloads
  • 37 Web of Science citations
  • 42 Crossref citations
Abstract | PDF | Supplementary Material
At the end of 2022, the appearance of ChatGPT, an artificial intelligence (AI) chatbot with remarkable writing ability, caused a great sensation in academia. The chatbot turned out to be very capable, but also capable of deception, and news broke that several researchers had listed the chatbot (including its earlier version) as a co-author of their academic papers. In response, Nature and Science stated that this chatbot cannot be listed as an author in the papers they publish. Since an AI chatbot is not a human being, under the current legal system the text it automatically generates cannot be a copyrighted work; thus, an AI chatbot cannot be the author of a copyrighted work. Current AI chatbots such as ChatGPT are far more advanced than search engines in that they produce original text, but they remain at the level of a search engine in that they cannot take responsibility for their writing. For this reason, they also cannot be authors from the perspective of research ethics.

Citations to this article, as recorded by Crossref:
  • Risks of abuse of large language models, like ChatGPT, in scientific publishing: Authorship, predatory publishing, and paper mills
    Graham Kendall, Jaime A. Teixeira da Silva
    Learned Publishing.2024; 37(1): 55.     CrossRef
  • Can ChatGPT be an author? A study of artificial intelligence authorship policies in top academic journals
    Brady D. Lund, K.T. Naheem
    Learned Publishing.2024; 37(1): 13.     CrossRef
  • The Role of AI in Writing an Article and Whether it Can Be a Co-author: What if it Gets Support From 2 Different AIs Like ChatGPT and Google Bard for the Same Theme?
    İlhan Bahşi, Ayşe Balat
    Journal of Craniofacial Surgery.2024; 35(1): 274.     CrossRef
  • Artificial Intelligence–Generated Scientific Literature: A Critical Appraisal
    Justyna Zybaczynska, Matthew Norris, Sunjay Modi, Jennifer Brennan, Pooja Jhaveri, Timothy J. Craig, Taha Al-Shaikhly
    The Journal of Allergy and Clinical Immunology: In Practice.2024; 12(1): 106.     CrossRef
  • Does Google’s Bard Chatbot perform better than ChatGPT on the European hand surgery exam?
    Goetsch Thibaut, Armaghan Dabbagh, Philippe Liverneaux
    International Orthopaedics.2024; 48(1): 151.     CrossRef
  • A Brief Review of the Efficacy in Artificial Intelligence and Chatbot-Generated Personalized Fitness Regimens
    Daniel K. Bays, Cole Verble, Kalyn M. Powers Verble
    Strength & Conditioning Journal.2024;[Epub]     CrossRef
  • Academic publisher guidelines on AI usage: A ChatGPT supported thematic analysis
    Mike Perkins, Jasper Roe
    F1000Research.2024; 12: 1398.     CrossRef
  • The Use of Artificial Intelligence in Writing Scientific Review Articles
    Melissa A. Kacena, Lilian I. Plotkin, Jill C. Fehrenbacher
    Current Osteoporosis Reports.2024; 22(1): 115.     CrossRef
  • Using AI to Write a Review Article Examining the Role of the Nervous System on Skeletal Homeostasis and Fracture Healing
    Murad K. Nazzal, Ashlyn J. Morris, Reginald S. Parker, Fletcher A. White, Roman M. Natoli, Jill C. Fehrenbacher, Melissa A. Kacena
    Current Osteoporosis Reports.2024; 22(1): 217.     CrossRef
  • GenAI et al.: Cocreation, Authorship, Ownership, Academic Ethics and Integrity in a Time of Generative AI
    Aras Bozkurt
    Open Praxis.2024; 16(1): 1.     CrossRef
  • An integrative decision-making framework to guide policies on regulating ChatGPT usage
    Umar Ali Bukar, Md Shohel Sayeed, Siti Fatimah Abdul Razak, Sumendra Yogarayan, Oluwatosin Ahmed Amodu
    PeerJ Computer Science.2024; 10: e1845.     CrossRef
  • Artificial Intelligence and Its Role in Medical Research
    Anurag Gola, Ambarish Das, Amar B. Gumataj, S. Amirdhavarshini, J. Venkatachalam
    Current Medical Issues.2024; 22(2): 97.     CrossRef
  • From advancements to ethics: Assessing ChatGPT’s role in writing research paper
    Vasu Gupta, Fnu Anamika, Kinna Parikh, Meet A Patel, Rahul Jain, Rohit Jain
    Turkish Journal of Internal Medicine.2024; 6(2): 74.     CrossRef
  • Yapay Zekânın Edebiyatta Kullanım Serüveni
    Nesime Ceyhan Akça, Serap Aslan Cobutoğlu, Özlem Yeşim Özbek, Mehmet Furkan Akça
    RumeliDE Dil ve Edebiyat Araştırmaları Dergisi.2024; (39): 283.     CrossRef
  • ChatGPT's Gastrointestinal Tumor Board Tango: A limping dance partner?
    Ughur Aghamaliyev, Javad Karimbayli, Clemens Giessen-Jung, Ilmer Matthias, Kristian Unger, Dorian Andrade, Felix O. Hofmann, Maximilian Weniger, Martin K. Angele, C. Benedikt Westphalen, Jens Werner, Bernhard W. Renz
    European Journal of Cancer.2024; 205: 114100.     CrossRef
  • Gout and Gout-Related Comorbidities: Insight and Limitations from Population-Based Registers in Sweden
    Panagiota Drivelegka, Lennart TH Jacobsson, Mats Dehlin
    Gout, Urate, and Crystal Deposition Disease.2024; 2(2): 144.     CrossRef
  • Artificial intelligence in academic cardiothoracic surgery
    Adham AHMED, Irbaz HAMEED
    The Journal of Cardiovascular Surgery.2024;[Epub]     CrossRef
  • The emergence of generative artificial intelligence platforms in 2023, journal metrics, appreciation to reviewers and volunteers, and obituary
    Sun Huh
    Journal of Educational Evaluation for Health Professions.2024; 21: 9.     CrossRef
  • Universal skepticism of ChatGPT: a review of early literature on chat generative pre-trained transformer
    Casey Watters, Michal K. Lemanski
    Frontiers in Big Data.2023;[Epub]     CrossRef
  • The importance of human supervision in the use of ChatGPT as a support tool in scientific writing
    William Castillo-González
    Metaverse Basic and Applied Research.2023;[Epub]     CrossRef
  • ChatGPT for Future Medical and Dental Research
    Bader Fatani
    Cureus.2023;[Epub]     CrossRef
  • Chatbots in Medical Research
    Punit Sharma
    Clinical Nuclear Medicine.2023; 48(9): 838.     CrossRef
  • Potential applications of ChatGPT in dermatology
    Nicolas Kluger
    Journal of the European Academy of Dermatology and Venereology.2023;[Epub]     CrossRef
  • The emergent role of artificial intelligence, natural learning processing, and large language models in higher education and research
    Tariq Alqahtani, Hisham A. Badreldin, Mohammed Alrashed, Abdulrahman I. Alshaya, Sahar S. Alghamdi, Khalid bin Saleh, Shuroug A. Alowais, Omar A. Alshaya, Ishrat Rahman, Majed S. Al Yami, Abdulkareem M. Albekairy
    Research in Social and Administrative Pharmacy.2023; 19(8): 1236.     CrossRef
  • ChatGPT Performance on the American Urological Association Self-assessment Study Program and the Potential Influence of Artificial Intelligence in Urologic Training
    Nicholas A. Deebel, Ryan Terlecki
    Urology.2023; 177: 29.     CrossRef
  • Intelligence or artificial intelligence? More hard problems for authors of Biological Psychology, the neurosciences, and everyone else
    Thomas Ritz
    Biological Psychology.2023; 181: 108590.     CrossRef
  • The ethics of disclosing the use of artificial intelligence tools in writing scholarly manuscripts
    Mohammad Hosseini, David B Resnik, Kristi Holmes
    Research Ethics.2023; 19(4): 449.     CrossRef
  • How trustworthy is ChatGPT? The case of bibliometric analyses
    Faiza Farhat, Shahab Saquib Sohail, Dag Øivind Madsen
    Cogent Engineering.2023;[Epub]     CrossRef
  • Disclosing use of Artificial Intelligence: Promoting transparency in publishing
    Parvaiz A. Koul
    Lung India.2023; 40(5): 401.     CrossRef
  • ChatGPT in medical research: challenging time ahead
    Daideepya C Bhargava, Devendra Jadav, Vikas P Meshram, Tanuj Kanchan
    Medico-Legal Journal.2023; 91(4): 223.     CrossRef
  • Academic publisher guidelines on AI usage: A ChatGPT supported thematic analysis
    Mike Perkins, Jasper Roe
    F1000Research.2023; 12: 1398.     CrossRef
  • Ethical consideration of the use of generative artificial intelligence, including ChatGPT in writing a nursing article
    Sun Huh
    Child Health Nursing Research.2023; 29(4): 249.     CrossRef
  • ChatGPT in medical writing: A game-changer or a gimmick?
    Shital Sarah Ahaley, Ankita Pandey, Simran Kaur Juneja, Tanvi Suhane Gupta, Sujatha Vijayakumar
    Perspectives in Clinical Research.2023;[Epub]     CrossRef
  • Artificial Intelligence-Supported Systems in Anesthesiology and Its Standpoint to Date—A Review
    Fiona M. P. Pham
    Open Journal of Anesthesiology.2023; 13(07): 140.     CrossRef
  • ChatGPT as an innovative tool for increasing sales in online stores
    Michał Orzoł, Katarzyna Szopik-Depczyńska
    Procedia Computer Science.2023; 225: 3450.     CrossRef
  • Intelligent Plagiarism as a Misconduct in Academic Integrity
    Jesús Miguel Muñoz-Cantero, Eva Maria Espiñeira-Bellón
    Acta Médica Portuguesa.2023; 37(1): 1.     CrossRef
  • Follow-up of Artificial Intelligence Development and its Controlled Contribution to the Article: Step to the Authorship?
    Ekrem Solmaz
    European Journal of Therapeutics.2023;[Epub]     CrossRef
  • May Artificial Intelligence Be a Co-Author on an Academic Paper?
    Ayşe Balat, İlhan Bahşi
    European Journal of Therapeutics.2023; 29(3): e12.     CrossRef
  • Opportunities and challenges for ChatGPT and large language models in biomedicine and health
    Shubo Tian, Qiao Jin, Lana Yeganova, Po-Ting Lai, Qingqing Zhu, Xiuying Chen, Yifan Yang, Qingyu Chen, Won Kim, Donald C Comeau, Rezarta Islamaj, Aadit Kapoor, Xin Gao, Zhiyong Lu
    Briefings in Bioinformatics.2023;[Epub]     CrossRef
  • ChatGPT: "To be or not to be" ... in academic research. The human mind's analytical rigor and capacity to discriminate between AI bots' truths and hallucinations
    Aurelian Anghelescu, Ilinca Ciobanu, Constantin Munteanu, Lucia Ana Maria Anghelescu, Gelu Onose
    Balneo and PRM Research Journal.2023; 14(Vol.14, no): 614.     CrossRef
  • Editorial policies of Journal of Educational Evaluation for Health Professions on the use of generative artificial intelligence in article writing and peer review
    Sun Huh
    Journal of Educational Evaluation for Health Professions.2023; 20: 40.     CrossRef
  • Should We Wait for Major Frauds to Unveil to Plan an AI Use License?
    Istemihan Coban
    European Journal of Therapeutics.2023; 30(2): 198.     CrossRef
Brief report
Are ChatGPT’s knowledge and interpretation ability comparable to those of medical students in Korea for taking a parasitology examination?: a descriptive study  
Sun Huh
J Educ Eval Health Prof. 2023;20:1.   Published online January 11, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.1
  • 11,685 Views
  • 1,024 Downloads
  • 131 Web of Science citations
  • 72 Crossref citations
Abstract | PDF | Supplementary Material
This study aimed to compare the knowledge and interpretation ability of ChatGPT, an artificial intelligence (AI) language model, with those of medical students in Korea by administering the same parasitology examination to both. The examination consisted of 79 items and was administered to ChatGPT on January 1, 2023. The results were analyzed in terms of ChatGPT’s overall performance score, its correct answer rate by the items’ knowledge level, and the acceptability of its explanations of the items. ChatGPT’s performance was lower than that of the medical students, and its correct answer rate was not related to the items’ knowledge level. However, there was a relationship between acceptable explanations and correct answers. In conclusion, ChatGPT’s knowledge and interpretation ability for this parasitology examination were not yet comparable to those of medical students in Korea.

Citations to this article, as recorded by Crossref:
  • Performance of ChatGPT on the India Undergraduate Community Medicine Examination: Cross-Sectional Study
    Aravind P Gandhi, Felista Karen Joesph, Vineeth Rajagopal, P Aparnavi, Sushma Katkuri, Sonal Dayama, Prakasini Satapathy, Mahalaqua Nazli Khatib, Shilpa Gaidhane, Quazi Syed Zahiruddin, Ashish Behera
    JMIR Formative Research.2024; 8: e49964.     CrossRef
  • Unveiling the ChatGPT phenomenon: Evaluating the consistency and accuracy of endodontic question answers
    Ana Suárez, Víctor Díaz‐Flores García, Juan Algar, Margarita Gómez Sánchez, María Llorente de Pedro, Yolanda Freire
    International Endodontic Journal.2024; 57(1): 108.     CrossRef
  • Bob or Bot: Exploring ChatGPT's Answers to University Computer Science Assessment
    Mike Richards, Kevin Waugh, Mark Slaymaker, Marian Petre, John Woodthorpe, Daniel Gooch
    ACM Transactions on Computing Education.2024; 24(1): 1.     CrossRef
  • A systematic review of ChatGPT use in K‐12 education
    Peng Zhang, Gemma Tur
    European Journal of Education.2024;[Epub]     CrossRef
  • Evaluating ChatGPT as a self‐learning tool in medical biochemistry: A performance assessment in undergraduate medical university examination
    Krishna Mohan Surapaneni, Anusha Rajajagadeesan, Lakshmi Goudhaman, Shalini Lakshmanan, Saranya Sundaramoorthi, Dineshkumar Ravi, Kalaiselvi Rajendiran, Porchelvan Swaminathan
    Biochemistry and Molecular Biology Education.2024; 52(2): 237.     CrossRef
  • Examining the use of ChatGPT in public universities in Hong Kong: a case study of restricted access areas
    Michelle W. T. Cheng, Iris H. Y. YIM
    Discover Education.2024;[Epub]     CrossRef
  • Performance of ChatGPT on Ophthalmology-Related Questions Across Various Examination Levels: Observational Study
    Firas Haddad, Joanna S Saade
    JMIR Medical Education.2024; 10: e50842.     CrossRef
  • Assessment of Artificial Intelligence Platforms With Regard to Medical Microbiology Knowledge: An Analysis of ChatGPT and Gemini
    Jai Ranjan, Absar Ahmad, Monalisa Subudhi, Ajay Kumar
    Cureus.2024;[Epub]     CrossRef
  • A comparative vignette study: Evaluating the potential role of a generative AI model in enhancing clinical decision‐making in nursing
    Mor Saban, Ilana Dubovi
    Journal of Advanced Nursing.2024;[Epub]     CrossRef
  • Comparison of the Performance of GPT-3.5 and GPT-4 With That of Medical Students on the Written German Medical Licensing Examination: Observational Study
    Annika Meyer, Janik Riese, Thomas Streichert
    JMIR Medical Education.2024; 10: e50965.     CrossRef
  • From hype to insight: Exploring ChatGPT's early footprint in education via altmetrics and bibliometrics
    Lung‐Hsiang Wong, Hyejin Park, Chee‐Kit Looi
    Journal of Computer Assisted Learning.2024;[Epub]     CrossRef
  • A scoping review of artificial intelligence in medical education: BEME Guide No. 84
    Morris Gordon, Michelle Daniel, Aderonke Ajiboye, Hussein Uraiby, Nicole Y. Xu, Rangana Bartlett, Janice Hanson, Mary Haas, Maxwell Spadafore, Ciaran Grafton-Clarke, Rayhan Yousef Gasiea, Colin Michie, Janet Corral, Brian Kwan, Diana Dolmans, Satid Thamma
    Medical Teacher.2024; 46(4): 446.     CrossRef
  • Üniversite Öğrencilerinin ChatGPT 3,5 Deneyimleri: Yapay Zekâyla Yazılmış Masal Varyantları
    Bilge GÖK, Fahri TEMİZYÜREK, Özlem BAŞ
    Korkut Ata Türkiyat Araştırmaları Dergisi.2024; (14): 1040.     CrossRef
  • Tracking ChatGPT Research: Insights From the Literature and the Web
    Omar Mubin, Fady Alnajjar, Zouheir Trabelsi, Luqman Ali, Medha Mohan Ambali Parambil, Zhao Zou
    IEEE Access.2024; 12: 30518.     CrossRef
  • Potential applications of ChatGPT in obstetrics and gynecology in Korea: a review article
    YooKyung Lee, So Yun Kim
    Obstetrics & Gynecology Science.2024; 67(2): 153.     CrossRef
  • Application of generative language models to orthopaedic practice
    Jessica Caterson, Olivia Ambler, Nicholas Cereceda-Monteoliva, Matthew Horner, Andrew Jones, Arwel Tomos Poacher
    BMJ Open.2024; 14(3): e076484.     CrossRef
  • Opportunities, challenges, and future directions of large language models, including ChatGPT in medical education: a systematic scoping review
    Xiaojun Xu, Yixiao Chen, Jing Miao
    Journal of Educational Evaluation for Health Professions.2024; 21: 6.     CrossRef
  • The advent of ChatGPT: Job Made Easy or Job Loss to Data Analysts
    Abiola Timothy Owolabi, Oluwaseyi Oluwadamilare Okunlola, Emmanuel Taiwo Adewuyi, Janet Iyabo Idowu, Olasunkanmi James Oladapo
    WSEAS TRANSACTIONS ON COMPUTERS.2024; 23: 24.     CrossRef
  • ChatGPT in dentomaxillofacial radiology education
    Hilal Peker Öztürk, Hakan Avsever, Buğra Şenel, Şükran Ayran, Mustafa Çağrı Peker, Hatice Seda Özgedik, Nurten Baysal
    Journal of Health Sciences and Medicine.2024; 7(2): 224.     CrossRef
  • Performance of ChatGPT on the Korean National Examination for Dental Hygienists
    Soo-Myoung Bae, Hye-Rim Jeon, Gyoung-Nam Kim, Seon-Hui Kwak, Hyo-Jin Lee
    Journal of Dental Hygiene Science.2024; 24(1): 62.     CrossRef
  • Medical knowledge of ChatGPT in public health, infectious diseases, COVID-19 pandemic, and vaccines: multiple choice questions examination based performance
    Sultan Ayoub Meo, Metib Alotaibi, Muhammad Zain Sultan Meo, Muhammad Omair Sultan Meo, Mashhood Hamid
    Frontiers in Public Health.2024;[Epub]     CrossRef
  • Unlock the potential for Saudi Arabian higher education: a systematic review of the benefits of ChatGPT
    Eman Faisal
    Frontiers in Education.2024;[Epub]     CrossRef
  • Does the Information Quality of ChatGPT Meet the Requirements of Orthopedics and Trauma Surgery?
    Adnan Kasapovic, Thaer Ali, Mari Babasiz, Jessica Bojko, Martin Gathen, Robert Kaczmarczyk, Jonas Roos
    Cureus.2024;[Epub]     CrossRef
  • Exploring the Profile of University Assessments Flagged as Containing AI-Generated Material
    Daniel Gooch, Kevin Waugh, Mike Richards, Mark Slaymaker, John Woodthorpe
    ACM Inroads.2024; 15(2): 39.     CrossRef
  • Comparing the Performance of ChatGPT-4 and Medical Students on MCQs at Varied Levels of Bloom’s Taxonomy
    Ambadasu Bharatha, Nkemcho Ojeh, Ahbab Mohammad Fazle Rabbi, Michael Campbell, Kandamaran Krishnamurthy, Rhaheem Layne-Yarde, Alok Kumar, Dale Springer, Kenneth Connell, Md Anwarul Majumder
    Advances in Medical Education and Practice.2024; Volume 15: 393.     CrossRef
  • The emergence of generative artificial intelligence platforms in 2023, journal metrics, appreciation to reviewers and volunteers, and obituary
    Sun Huh
    Journal of Educational Evaluation for Health Professions.2024; 21: 9.     CrossRef
  • Applicability of ChatGPT in Assisting to Solve Higher Order Problems in Pathology
    Ranwir K Sinha, Asitava Deb Roy, Nikhil Kumar, Himel Mondal
    Cureus.2023;[Epub]     CrossRef
  • Issues in the 3rd year of the COVID-19 pandemic, including computer-based testing, study design, ChatGPT, journal metrics, and appreciation to reviewers
    Sun Huh
    Journal of Educational Evaluation for Health Professions.2023; 20: 5.     CrossRef
  • Emergence of the metaverse and ChatGPT in journal publishing after the COVID-19 pandemic
    Sun Huh
    Science Editing.2023; 10(1): 1.     CrossRef
  • Assessing the Capability of ChatGPT in Answering First- and Second-Order Knowledge Questions on Microbiology as per Competency-Based Medical Education Curriculum
    Dipmala Das, Nikhil Kumar, Langamba Angom Longjam, Ranwir Sinha, Asitava Deb Roy, Himel Mondal, Pratima Gupta
    Cureus.2023;[Epub]     CrossRef
  • Evaluating ChatGPT's Ability to Solve Higher-Order Questions on the Competency-Based Medical Education Curriculum in Medical Biochemistry
    Arindam Ghosh, Aritri Bir
    Cureus.2023;[Epub]     CrossRef
  • Overview of Early ChatGPT’s Presence in Medical Literature: Insights From a Hybrid Literature Review by ChatGPT and Human Experts
    Omar Temsah, Samina A Khan, Yazan Chaiah, Abdulrahman Senjab, Khalid Alhasan, Amr Jamal, Fadi Aljamaan, Khalid H Malki, Rabih Halwani, Jaffar A Al-Tawfiq, Mohamad-Hani Temsah, Ayman Al-Eyadhy
    Cureus.2023;[Epub]     CrossRef
  • ChatGPT for Future Medical and Dental Research
    Bader Fatani
    Cureus.2023;[Epub]     CrossRef
  • ChatGPT in Dentistry: A Comprehensive Review
    Hind M Alhaidry, Bader Fatani, Jenan O Alrayes, Aljowhara M Almana, Nawaf K Alfhaed
    Cureus.2023;[Epub]     CrossRef
  • Can we trust AI chatbots’ answers about disease diagnosis and patient care?
    Sun Huh
    Journal of the Korean Medical Association.2023; 66(4): 218.     CrossRef
  • Large Language Models in Medical Education: Opportunities, Challenges, and Future Directions
    Alaa Abd-alrazaq, Rawan AlSaad, Dari Alhuwail, Arfan Ahmed, Padraig Mark Healy, Syed Latifi, Sarah Aziz, Rafat Damseh, Sadam Alabed Alrazak, Javaid Sheikh
    JMIR Medical Education.2023; 9: e48291.     CrossRef
  • Early applications of ChatGPT in medical practice, education and research
    Sam Sedaghat
    Clinical Medicine.2023; 23(3): 278.     CrossRef
  • A Review of Research on Teaching and Learning Transformation under the Influence of ChatGPT Technology
    璇 师
    Advances in Education.2023; 13(05): 2617.     CrossRef
  • Performance of GPT-3.5 and GPT-4 on the Japanese Medical Licensing Examination: Comparison Study
    Soshi Takagi, Takashi Watari, Ayano Erabi, Kota Sakaguchi
    JMIR Medical Education.2023; 9: e48002.     CrossRef
  • ChatGPT’s quiz skills in different otolaryngology subspecialties: an analysis of 2576 single-choice and multiple-choice board certification preparation questions
    Cosima C. Hoch, Barbara Wollenberg, Jan-Christoffer Lüers, Samuel Knoedler, Leonard Knoedler, Konstantin Frank, Sebastian Cotofana, Michael Alfertshofer
    European Archives of Oto-Rhino-Laryngology.2023; 280(9): 4271.     CrossRef
  • Analysing the Applicability of ChatGPT, Bard, and Bing to Generate Reasoning-Based Multiple-Choice Questions in Medical Physiology
    Mayank Agarwal, Priyanka Sharma, Ayan Goswami
    Cureus.2023;[Epub]     CrossRef
  • The Intersection of ChatGPT, Clinical Medicine, and Medical Education
    Rebecca Shin-Yee Wong, Long Chiau Ming, Raja Affendi Raja Ali
    JMIR Medical Education.2023; 9: e47274.     CrossRef
  • The Role of Artificial Intelligence in Higher Education: ChatGPT Assessment for Anatomy Course
    Tarık TALAN, Yusuf KALINKARA
    Uluslararası Yönetim Bilişim Sistemleri ve Bilgisayar Bilimleri Dergisi.2023; 7(1): 33.     CrossRef
  • Comparing ChatGPT’s ability to rate the degree of stereotypes and the consistency of stereotype attribution with those of medical students in New Zealand in developing a similarity rating test: a methodological study
    Chao-Cheng Lin, Zaine Akuhata-Huntington, Che-Wei Hsu
    Journal of Educational Evaluation for Health Professions.2023; 20: 17.     CrossRef
  • Examining Real-World Medication Consultations and Drug-Herb Interactions: ChatGPT Performance Evaluation
    Hsing-Yu Hsu, Kai-Cheng Hsu, Shih-Yen Hou, Ching-Lung Wu, Yow-Wen Hsieh, Yih-Dih Cheng
    JMIR Medical Education.2023; 9: e48433.     CrossRef
  • Assessing the Efficacy of ChatGPT in Solving Questions Based on the Core Concepts in Physiology
    Arijita Banerjee, Aquil Ahmad, Payal Bhalla, Kavita Goyal
    Cureus.2023;[Epub]     CrossRef
  • ChatGPT Performs on the Chinese National Medical Licensing Examination
    Xinyi Wang, Zhenye Gong, Guoxin Wang, Jingdan Jia, Ying Xu, Jialu Zhao, Qingye Fan, Shaun Wu, Weiguo Hu, Xiaoyang Li
    Journal of Medical Systems.2023;[Epub]     CrossRef
  • Artificial intelligence and its impact on job opportunities among university students in North Lima, 2023
    Doris Ruiz-Talavera, Jaime Enrique De la Cruz-Aguero, Nereo García-Palomino, Renzo Calderón-Espinoza, William Joel Marín-Rodriguez
    ICST Transactions on Scalable Information Systems.2023;[Epub]     CrossRef
  • Revolutionizing Dental Care: A Comprehensive Review of Artificial Intelligence Applications Among Various Dental Specialties
    Najd Alzaid, Omar Ghulam, Modhi Albani, Rafa Alharbi, Mayan Othman, Hasan Taher, Saleem Albaradie, Suhael Ahmed
    Cureus.2023;[Epub]     CrossRef
  • Opportunities, Challenges, and Future Directions of Generative Artificial Intelligence in Medical Education: Scoping Review
    Carl Preiksaitis, Christian Rose
    JMIR Medical Education.2023; 9: e48785.     CrossRef
  • Exploring the impact of language models, such as ChatGPT, on student learning and assessment
    Araz Zirar
    Review of Education.2023;[Epub]     CrossRef
  • Large Language Models and Artificial Intelligence: A Primer for Plastic Surgeons on the Demonstrated and Potential Applications, Promises, and Limitations of ChatGPT
    Jad Abi-Rafeh, Hong Hao Xu, Roy Kazan, Ruth Tevlin, Heather Furnas
    Aesthetic Surgery Journal.2023;[Epub]     CrossRef
  • Evaluating the reliability of ChatGPT as a tool for imaging test referral: a comparative study with a clinical decision support system
    Shani Rosen, Mor Saban
    European Radiology.2023;[Epub]     CrossRef
  • Redesigning Tertiary Educational Evaluation with AI: A Task-Based Analysis of LIS Students’ Assessment on Written Tests and Utilizing ChatGPT at NSTU
    Shamima Yesmin
    Science & Technology Libraries.2023; : 1.     CrossRef
  • ChatGPT and the AI revolution: a comprehensive investigation of its multidimensional impact and potential
    Mohd Afjal
    Library Hi Tech.2023;[Epub]     CrossRef
  • The Significance of Artificial Intelligence Platforms in Anatomy Education: An Experience With ChatGPT and Google Bard
    Hasan B Ilgaz, Zehra Çelik
    Cureus.2023;[Epub]     CrossRef
  • Is ChatGPT’s Knowledge and Interpretative Ability Comparable to First Professional MBBS (Bachelor of Medicine, Bachelor of Surgery) Students of India in Taking a Medical Biochemistry Examination?
    Abhra Ghosh, Nandita Maini Jindal, Vikram K Gupta, Ekta Bansal, Navjot Kaur Bajwa, Abhishek Sett
    Cureus.2023;[Epub]     CrossRef
  • Ethical consideration of the use of generative artificial intelligence, including ChatGPT in writing a nursing article
    Sun Huh
    Child Health Nursing Research.2023; 29(4): 249.     CrossRef
  • Potential Use of ChatGPT for Patient Information in Periodontology: A Descriptive Pilot Study
    Osman Babayiğit, Zeynep Tastan Eroglu, Dilek Ozkan Sen, Fatma Ucan Yarkac
    Cureus.2023;[Epub]     CrossRef
  • Efficacy and limitations of ChatGPT as a biostatistical problem-solving tool in medical education in Serbia: a descriptive study
    Aleksandra Ignjatović, Lazar Stevanović
    Journal of Educational Evaluation for Health Professions.2023; 20: 28.     CrossRef
  • Assessing the Performance of ChatGPT in Medical Biochemistry Using Clinical Case Vignettes: Observational Study
    Krishna Mohan Surapaneni
    JMIR Medical Education.2023; 9: e47191.     CrossRef
  • Performance of ChatGPT, Bard, Claude, and Bing on the Peruvian National Licensing Medical Examination: a cross-sectional study
    Betzy Clariza Torres-Zegarra, Wagner Rios-Garcia, Alvaro Micael Ñaña-Cordova, Karen Fatima Arteaga-Cisneros, Xiomara Cristina Benavente Chalco, Marina Atena Bustamante Ordoñez, Carlos Jesus Gutierrez Rios, Carlos Alberto Ramos Godoy, Kristell Luisa Teresa
    Journal of Educational Evaluation for Health Professions.2023; 20: 30.     CrossRef
  • ChatGPT’s performance in German OB/GYN exams – paving the way for AI-enhanced medical education and clinical practice
    Maximilian Riedel, Katharina Kaefinger, Antonia Stuehrenberg, Viktoria Ritter, Niklas Amann, Anna Graf, Florian Recker, Evelyn Klein, Marion Kiechle, Fabian Riedel, Bastian Meyer
    Frontiers in Medicine.2023;[Epub]     CrossRef
  • Medical students’ patterns of using ChatGPT as a feedback tool and perceptions of ChatGPT in a Leadership and Communication course in Korea: a cross-sectional study
    Janghee Park
    Journal of Educational Evaluation for Health Professions.2023; 20: 29.     CrossRef
  • From Text to Diagnose: ChatGPT’s Efficacy in Medical Decision-Making
    Yaroslav Mykhalko, Pavlo Kish, Yelyzaveta Rubtsova, Oleksandr Kutsyn, Valentyna Koval
    Wiadomości Lekarskie.2023; 76(11): 2345.     CrossRef
  • Using ChatGPT for Clinical Practice and Medical Education: Cross-Sectional Survey of Medical Students’ and Physicians’ Perceptions
    Pasin Tangadulrat, Supinya Sono, Boonsin Tangtrakulwanich
    JMIR Medical Education.2023; 9: e50658.     CrossRef
  • Below average ChatGPT performance in medical microbiology exam compared to university students
    Malik Sallam, Khaled Al-Salahat
    Frontiers in Education.2023;[Epub]     CrossRef
  • ChatGPT: "To be or not to be" ... in academic research. The human mind's analytical rigor and capacity to discriminate between AI bots' truths and hallucinations
    Aurelian Anghelescu, Ilinca Ciobanu, Constantin Munteanu, Lucia Ana Maria Anghelescu, Gelu Onose
    Balneo and PRM Research Journal.2023; 14(Vol.14, no): 614.     CrossRef
  • ChatGPT Review: A Sophisticated Chatbot Models in Medical & Health-related Teaching and Learning
    Nur Izah Ab Razak, Muhammad Fawwaz Muhammad Yusoff, Rahmita Wirza O.K. Rahmat
    Malaysian Journal of Medicine and Health Sciences.2023; 19(s12): 98.     CrossRef
  • Application of artificial intelligence chatbots, including ChatGPT, in education, scholarly work, programming, and content generation and its prospects: a narrative review
    Tae Won Kim
    Journal of Educational Evaluation for Health Professions.2023; 20: 38.     CrossRef
  • Trends in research on ChatGPT and adoption-related issues discussed in articles: a narrative review
    Sang-Jun Kim
    Science Editing.2023; 11(1): 3.     CrossRef
  • Information amount, accuracy, and relevance of generative artificial intelligences’ answers to learning objectives of medical arthropodology evaluated in English and Korean queries in December 2023: a descriptive study
    Hyunju Lee, Soo Bin Park
    Journal of Educational Evaluation for Health Professions.2023; 20: 39.     CrossRef
Reviews
How to review and assess a systematic review and meta-analysis article: a methodological study (secondary publication)  
Seung-Kwon Myung
J Educ Eval Health Prof. 2023;20:24.   Published online August 27, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.24
  • 3,059 View
  • 332 Download
  • 1 Web of Science
  • 1 Crossref
Abstract  PDF  Supplementary Material
Systematic reviews and meta-analyses have become central in many research fields, particularly medicine. They offer the highest level of evidence in evidence-based medicine and support the development and revision of clinical practice guidelines, which offer recommendations for clinicians caring for patients with specific diseases and conditions. This review summarizes the concepts of systematic reviews and meta-analyses and provides guidance on reviewing and assessing such papers. A systematic review refers to a review of a research question that uses explicit and systematic methods to identify, select, and critically appraise relevant research. In contrast, a meta-analysis is a quantitative statistical analysis that combines individual results on the same research question to estimate the common or mean effect. Conducting a meta-analysis involves defining a research topic, selecting a study design, searching literature in electronic databases, selecting relevant studies, and conducting the analysis. One can assess the findings of a meta-analysis by interpreting a forest plot and a funnel plot and by examining heterogeneity. When reviewing systematic reviews and meta-analyses, several essential points must be considered, including the originality and significance of the work, the comprehensiveness of the database search, the selection of studies based on inclusion and exclusion criteria, subgroup analyses by various factors, and the interpretation of the results based on the levels of evidence. This review provides guidance to help readers read, understand, and evaluate these articles.

Citations to this article as recorded by
  • The Role of BIM in Managing Risks in Sustainability of Bridge Projects: A Systematic Review with Meta-Analysis
    Dema Munef Ahmad, László Gáspár, Zsolt Bencze, Rana Ahmad Maya
    Sustainability.2024; 16(3): 1242.     CrossRef
Application of artificial intelligence chatbots, including ChatGPT, in education, scholarly work, programming, and content generation and its prospects: a narrative review
Tae Won Kim
J Educ Eval Health Prof. 2023;20:38.   Published online December 27, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.38
  • 2,082 View
  • 351 Download
  • 2 Web of Science
  • 4 Crossref
Abstract  PDF  Supplementary Material
This study aims to explore ChatGPT’s (GPT-3.5 version) functionalities, including reinforcement learning, diverse applications, and limitations. ChatGPT is an artificial intelligence (AI) chatbot powered by OpenAI’s Generative Pre-trained Transformer (GPT) model. The chatbot’s applications span education, programming, content generation, and more, demonstrating its versatility. ChatGPT can improve education by creating assignments and offering personalized feedback, as shown by its notable performance in medical exams and the United States Medical Licensing Exam. However, concerns include plagiarism, reliability, and educational disparities. It aids in various research tasks, from design to writing, and has shown proficiency in summarizing and suggesting titles. Its use in scientific writing and language translation is promising, but professional oversight is needed for accuracy and originality. It assists in programming tasks like writing code, debugging, and guiding installation and updates. It offers diverse applications, from cheering up individuals to generating creative content like essays, news articles, and business plans. Unlike search engines, ChatGPT provides interactive, generative responses and understands context, making it more akin to human conversation, in contrast to conventional search engines’ keyword-based, non-interactive nature. ChatGPT has limitations, such as potential bias, dependence on outdated data, and revenue generation challenges. Nonetheless, ChatGPT is considered to be a transformative AI tool poised to redefine the future of generative technology. In conclusion, advancements in AI, such as ChatGPT, are altering how knowledge is acquired and applied, marking a shift from search engines to creativity engines. This transformation highlights the increasing importance of AI literacy and the ability to effectively utilize AI in various domains of life.

Citations to this article as recorded by
  • Opportunities, challenges, and future directions of large language models, including ChatGPT in medical education: a systematic scoping review
    Xiaojun Xu, Yixiao Chen, Jing Miao
    Journal of Educational Evaluation for Health Professions.2024; 21: 6.     CrossRef
  • Artificial Intelligence: Fundamentals and Breakthrough Applications in Epilepsy
    Wesley Kerr, Sandra Acosta, Patrick Kwan, Gregory Worrell, Mohamad A. Mikati
    Epilepsy Currents.2024;[Epub]     CrossRef
  • A Developed Graphical User Interface-Based on Different Generative Pre-trained Transformers Models
    Ekrem Küçük, İpek Balıkçı Çiçek, Zeynep Küçükakçalı, Cihan Yetiş, Cemil Çolak
    ODÜ Tıp Dergisi.2024; 11(1): 18.     CrossRef
  • Art or Artifact: Evaluating the Accuracy, Appeal, and Educational Value of AI-Generated Imagery in DALL·E 3 for Illustrating Congenital Heart Diseases
    Mohamad-Hani Temsah, Abdullah N. Alhuzaimi, Mohammed Almansour, Fadi Aljamaan, Khalid Alhasan, Munirah A. Batarfi, Ibraheem Altamimi, Amani Alharbi, Adel Abdulaziz Alsuhaibani, Leena Alwakeel, Abdulrahman Abdulkhaliq Alzahrani, Khaled B. Alsulaim, Amr Jam
    Journal of Medical Systems.2024;[Epub]     CrossRef
Research articles
Mentorship and self-efficacy are associated with lower burnout in physical therapists in the United States: a cross-sectional survey study  
Matthew Pugliese, Jean-Michel Brismée, Brad Allen, Sean Riley, Justin Tammany, Paul Mintken
J Educ Eval Health Prof. 2023;20:27.   Published online September 27, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.27
  • 2,981 View
  • 285 Download
Abstract  PDF  Supplementary Material
Purpose
This study investigated the prevalence of burnout in physical therapists in the United States and the relationships between burnout and education, mentorship, and self-efficacy.
Methods
This was a cross-sectional survey study. An electronic survey was distributed to practicing physical therapists across the United States over a 6-week period from December 2020 to January 2021. The survey was completed by 2,813 physical therapists from all states. The majority were female (68.72%), White or Caucasian (80.13%), and employed full-time (77.14%). Respondents completed questions on demographics, education, mentorship, self-efficacy, and burnout. The Burnout Clinical Subtypes Questionnaire 12 (BCSQ-12) and self-reports were used to quantify burnout, and the General Self-Efficacy Scale (GSES) was used to measure self-efficacy. Descriptive and inferential analyses were performed.
Results
Respondents from home health (median BCSQ-12=42.00) and skilled nursing facility settings (median BCSQ-12=42.00) displayed the highest burnout scores. Burnout was significantly lower among those who provided formal mentorship (median BCSQ-12=39.00, P=0.0001) compared to no mentorship (median BCSQ-12=41.00). Respondents who received formal mentorship (median BCSQ-12=38.00, P=0.0028) displayed significantly lower burnout than those who received no mentorship (median BCSQ-12=41.00). A moderate negative correlation (rho=-0.49) was observed between the GSES and burnout scores. A strong positive correlation was found between self-reported burnout status and burnout scores (rank-biserial r=0.61).
Conclusion
Burnout is prevalent in the physical therapy profession, as almost half of respondents (49.34%) reported burnout. Providing or receiving mentorship and higher self-efficacy were associated with lower burnout. Organizations should consider measuring burnout levels, investing in mentorship programs, and implementing strategies to improve self-efficacy.
ChatGPT (GPT-4) passed the Japanese National License Examination for Pharmacists in 2022, answering all items including those with diagrams: a descriptive study  
Hiroyasu Sato, Katsuhiko Ogasawara
J Educ Eval Health Prof. 2024;21:4.   Published online February 28, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.4
  • 1,013 View
  • 165 Download
Abstract  PDF  Supplementary Material
Purpose
The objective of this study was to assess the performance of ChatGPT (GPT-4) on all items, including those with diagrams, in the Japanese National License Examination for Pharmacists (JNLEP) and compare it with the previous GPT-3.5 model’s performance.
Methods
The 107th JNLEP, conducted in 2022, with 344 items input into the GPT-4 model, was targeted for this study. Separately, 284 items, excluding those with diagrams, were entered into the GPT-3.5 model. The answers were categorized and analyzed to determine accuracy rates based on categories, subjects, and presence or absence of diagrams. The accuracy rates were compared to the main passing criteria (overall accuracy rate ≥62.9%).
Results
The overall accuracy rate for all items in the 107th JNLEP in GPT-4 was 72.5%, successfully meeting all the passing criteria. For the set of items without diagrams, the accuracy rate was 80.0%, which was significantly higher than that of the GPT-3.5 model (43.5%). The GPT-4 model demonstrated an accuracy rate of 36.1% for items that included diagrams.
Conclusion
Advancements that allow GPT-4 to process images have made it possible for large language models (LLMs) to answer all items in medical-related license examinations. This study’s findings confirm that ChatGPT (GPT-4) possesses sufficient knowledge to meet the passing criteria.
Discovering social learning ecosystems during clinical clerkship from United States medical students’ feedback encounters: a content analysis  
Anna Therese Cianciolo, Heeyoung Han, Lydia Anne Howes, Debra Lee Klamen, Sophia Matos
J Educ Eval Health Prof. 2024;21:5.   Published online February 28, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.5
  • 864 View
  • 166 Download
Abstract  PDF  Supplementary Material
Purpose
We examined United States medical students’ self-reported feedback encounters during clerkship training to better understand in situ feedback practices. Specifically, we asked: Who do students receive feedback from, about what, when, where, and how do they use it? We explored whether curricular expectations for preceptors’ written commentary aligned with feedback as it occurs naturalistically in the workplace.
Methods
This study occurred from July 2021 to February 2022 at Southern Illinois University School of Medicine. We used qualitative survey-based experience sampling to gather students’ accounts of their feedback encounters in 8 core specialties. We analyzed the who, what, when, where, and why of 267 feedback encounters reported by 11 clerkship students over 30 weeks. Code frequencies were mapped qualitatively to explore patterns in feedback encounters.
Results
Clerkship feedback occurs in patterns apparently related to the nature of clinical work in each specialty. These patterns may be attributable to each specialty’s “social learning ecosystem”—the distinctive learning environment shaped by the social and material aspects of a given specialty’s work, which determine who preceptors are, what students do with preceptors, and what skills or attributes matter enough to preceptors to comment on.
Conclusion
Comprehensive, standardized expectations for written feedback across specialties conflict with the reality of workplace-based learning. Preceptors may be better able—and more motivated—to document student performance that occurs as a natural part of everyday work. Nurturing social learning ecosystems could facilitate workplace-based learning such that, across specialties, students acquire a comprehensive clinical skillset appropriate for graduation.
Review
Opportunities, challenges, and future directions of large language models, including ChatGPT in medical education: a systematic scoping review  
Xiaojun Xu, Yixiao Chen, Jing Miao
J Educ Eval Health Prof. 2024;21:6.   Published online March 15, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.6
  • 802 View
  • 252 Download
Abstract  PDF  Supplementary Material
Background
ChatGPT is a large language model (LLM) based on artificial intelligence (AI) capable of responding in multiple languages and generating nuanced and highly complex responses. While ChatGPT holds promising applications in medical education, its limitations and potential risks cannot be ignored.
Methods
A scoping review was conducted for English articles discussing ChatGPT in the context of medical education published after 2022. A literature search was performed using PubMed/MEDLINE, Embase, and Web of Science databases, and information was extracted from the relevant studies that were ultimately included.
Results
ChatGPT exhibits various potential applications in medical education, such as providing personalized learning plans and materials, creating clinical practice simulation scenarios, and assisting in writing articles. However, challenges associated with academic integrity, data accuracy, and potential harm to learning were also highlighted in the literature. The paper emphasizes certain recommendations for using ChatGPT, including the establishment of guidelines. Based on the review, 3 key research areas were proposed: cultivating the ability of medical students to use ChatGPT correctly, integrating ChatGPT into teaching activities and processes, and proposing standards for the use of AI by medical students.
Conclusion
ChatGPT has the potential to transform medical education, but careful consideration is required for its full integration. To harness the full potential of ChatGPT in medical education, attention should not only be given to the capabilities of AI but also to its impact on students and teachers.
Research articles
Effect of motion-graphic video-based training on the performance of operating room nurse students in cataract surgery in Iran: a randomized controlled study  
Behnaz Fatahi, Samira Fatahi, Sohrab Nosrati, Masood Bagheri
J Educ Eval Health Prof. 2023;20:34.   Published online November 28, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.34
  • 1,392 View
  • 85 Download
Abstract  PDF  Supplementary Material
Purpose
The present study was conducted to determine the effect of motion-graphic video-based training on the performance of operating room nurse students in cataract surgery using phacoemulsification at Kermanshah University of Medical Sciences in Iran.
Methods
This was a randomized controlled study conducted among 36 students training to become operating room nurses. The control group only received routine training, and the intervention group received motion-graphic video-based training on the scrub nurse’s performance in cataract surgery in addition to the educator’s training. The performance of the students in both groups as scrub nurses was measured through a researcher-made checklist in a pre-test and a post-test.
Results
The mean scores for performance in the pre-test and post-test were 17.83 and 26.44 in the control group and 18.33 and 50.94 in the intervention group, respectively, and a significant difference was identified between the mean scores of the pre- and post-test in both groups (P=0.001). The intervention also led to a significant increase in the mean performance score in the intervention group compared to the control group (P=0.001).
Conclusion
Considering the significant difference in the performance score of the intervention group compared to the control group, motion-graphic video-based training had a positive effect on the performance of operating room nurse students, and such training can be used to improve clinical training.
Medical students’ patterns of using ChatGPT as a feedback tool and perceptions of ChatGPT in a Leadership and Communication course in Korea: a cross-sectional study  
Janghee Park
J Educ Eval Health Prof. 2023;20:29.   Published online November 10, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.29
  • 1,643 View
  • 155 Download
  • 4 Web of Science
  • 4 Crossref
Abstract  PDF  Supplementary Material
Purpose
This study aimed to analyze patterns of using ChatGPT before and after group activities and to explore medical students’ perceptions of ChatGPT as a feedback tool in the classroom.
Methods
The study included 99 2nd-year pre-medical students who participated in a “Leadership and Communication” course from March to June 2023. Students engaged in both individual and group activities related to negotiation strategies. ChatGPT was used to provide feedback on their solutions. A survey was administered to assess students’ perceptions of ChatGPT’s feedback, its use in the classroom, and the strengths and challenges of ChatGPT from May 17 to 19, 2023.
Results
The students responded by indicating that ChatGPT’s feedback was helpful, and revised and resubmitted their group answers in various ways after receiving feedback. The majority of respondents expressed agreement with the use of ChatGPT during class. The most common response concerning the appropriate context of using ChatGPT’s feedback was “after the first round of discussion, for revisions.” There was a significant difference in satisfaction with ChatGPT’s feedback, including correctness, usefulness, and ethics, depending on whether or not ChatGPT was used during class, but there was no significant difference according to gender or whether students had previous experience with ChatGPT. The strongest advantages were “providing answers to questions” and “summarizing information,” and the most significant disadvantage was “producing information without supporting evidence.”
Conclusion
The students were aware of the advantages and disadvantages of ChatGPT, and they had a positive attitude toward using ChatGPT in the classroom.

Citations to this article as recorded by
  • Opportunities, challenges, and future directions of large language models, including ChatGPT in medical education: a systematic scoping review
    Xiaojun Xu, Yixiao Chen, Jing Miao
    Journal of Educational Evaluation for Health Professions.2024; 21: 6.     CrossRef
  • Embracing ChatGPT for Medical Education: Exploring Its Impact on Doctors and Medical Students
    Yijun Wu, Yue Zheng, Baijie Feng, Yuqi Yang, Kai Kang, Ailin Zhao
    JMIR Medical Education.2024; 10: e52483.     CrossRef
  • ChatGPT and Clinical Training: Perception, Concerns, and Practice of Pharm-D Students
    Mohammed Zawiah, Fahmi Al-Ashwal, Lobna Gharaibeh, Rana Abu Farha, Karem Alzoubi, Khawla Abu Hammour, Qutaiba A Qasim, Fahd Abrah
    Journal of Multidisciplinary Healthcare.2023; Volume 16: 4099.     CrossRef
  • Information amount, accuracy, and relevance of generative artificial intelligence platforms’ answers regarding learning objectives of medical arthropodology evaluated in English and Korean queries in December 2023: a descriptive study
    Hyunju Lee, Soobin Park
    Journal of Educational Evaluation for Health Professions.2023; 20: 39.     CrossRef
Priorities in updating training paradigms in orthopedic manual therapy: an international Delphi study  
Damian Keter, David Griswold, Kenneth Learman, Chad Cook
J Educ Eval Health Prof. 2023;20:4.   Published online January 27, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.4
  • 3,216 View
  • 267 Download
  • 2 Web of Science
  • 2 Crossref
Abstract  PDF  Supplementary Material
Purpose
Orthopedic manual therapy (OMT) education demonstrates significant variability between philosophies, and while the literature has offered a more comprehensive understanding of the contextual, patient-specific, and technique factors that interact to influence outcomes, most OMT training paradigms continue to emphasize the mechanical basis for OMT application. The purpose of this study was to establish consensus on the modifications and adaptations that need to occur within OMT education to align with current evidence.
Methods
A 3-round Delphi survey instrument designed to identify foundational knowledge to include and omit from OMT education was completed by 28 educators working within high level manual therapy education programs internationally. Round 1 consisted of open-ended questions to identify content in each area. Round 2 and Round 3 allowed participants to rank the themes identified in Round 1.
Results
Consensus was reached on 25 content areas to include within OMT education, 1 content area to omit from OMT education, and 34 knowledge components that should be present in those providing OMT. Support was seen for education promoting understanding of the complex psychological, neurophysiological, and biomechanical systems as they relate to both evaluation and treatment effect. While some concepts were more consistently supported, there was significant variability in responses, which is largely expected to be related to previous training.
Conclusion
The results of this study indicate manual therapy educators’ understanding of evidence-based practice, as support for all 3 tiers of evidence was represented. The results of this study should guide OMT training program development and modification.

Citations to this article as recorded by
  • A critical review of the role of manual therapy in the treatment of individuals with low back pain
    Jean-Pascal Grenier, Maria Rothmund
    Journal of Manual & Manipulative Therapy.2024; : 1.     CrossRef
  • Modernizing patient-centered manual therapy: Findings from a Delphi study on orthopaedic manual therapy application
    Damian Keter, David Griswold, Kenneth Learman, Chad Cook
    Musculoskeletal Science and Practice.2023; 65: 102777.     CrossRef
Simulation-based training using a novel Surabaya hysterectomy mannequin following video demonstration to improve abdominal hysterectomy skills of obstetrics and gynecology residents during the COVID-19 pandemic in Indonesia: a pre- and post-intervention study  
Dara Dasawulansari Syamsuri, Brahmana Askandar Tjokroprawiro, Eighty Mardiyan Kurniawati, Budi Utomo, Djoko Kuswanto
J Educ Eval Health Prof. 2022;19:11.   Published online May 17, 2022
DOI: https://doi.org/10.3352/jeehp.2022.19.11
  • 5,544 View
  • 306 Download
  • 1 Web of Science
  • 2 Crossref
Abstract  PDF  Supplementary Material
Purpose
During the coronavirus disease 2019 (COVID-19) pandemic, the number of abdominal hysterectomy procedures decreased in Indonesia. The existing commercial abdominal hysterectomy simulation model is expensive and difficult to reuse. This study compared residents’ abdominal hysterectomy skills after simulation-based training using the Surabaya hysterectomy mannequin following a video demonstration.
Methods
We randomized 3rd- and 4th-year obstetrics and gynecology residents to a video-based group (group 1), a simulation-based group (group 2), and a combination group (group 3). Abdominal hysterectomy skills were compared between before and after the educational intervention. The pre- and post-tests were scored by blinded experts using the validated Objective Structured Assessment of Technical Skills (OSATS) and Global Rating Scale (GRS).
Results
A total of 33 residents were included in the pre- and post-tests. The OSATS and GRS mean differences after the intervention were higher in group 3 than in groups 1 and 2 (OSATS: 4.64 [95% CI, 2.90–6.37] vs. 2.55 [95% CI, 2.19–2.90] vs. 3.82 [95% CI, 2.41–5.22], P=0.047; GRS: 10.00 [95% CI, 7.01–12.99] vs. 5.18 [95% CI, 3.99–6.38] vs. 7.18 [95% CI, 6.11–8.26], P=0.006). The 3rd-year residents in group 3 had greater mean differences in OSATS and GRS scores than the 4th-year residents (OSATS: 5.67 [95% CI, 2.88–8.46]; GRS: 12.83 [95% CI, 8.61–17.05] vs. OSATS: 3.40 [95% CI, 0.83–5.97]; GRS: 5.67 [95% CI, 2.80–8.54]).
Conclusion
Simulation-based training using the Surabaya hysterectomy mannequin following video demonstration can be a bridge to learning about abdominal hysterectomy for residents who had less surgical experience during the COVID-19 pandemic.

Citations to this article as recorded by
  • Improving the quality of care and patient safety in oncology, the contribution of simulation-based training: A scoping review
    Nadia Al Wachami, Mohamed Chahboune, Ibtissam Youlyouz-marfak, Mohamed Reda Mesradi, Hajar Lemriss, Abderraouf Hilali
    International Journal of Nursing Sciences.2024; 11(2): 187.     CrossRef
  • Effect of midwife competence training in primary healthcare facilities on obstetric risk early detection
    Ai Nur Zannah, Yuningsih Yuningsih, Asri Iman Sari, Eka Afdi Septiyono
    Healthcare in Low-resource Settings.2024;[Epub]     CrossRef
Review
Prevalence of burnout and related factors in nursing faculty members: a systematic review  
Marziyeh Hosseini, Mitra Soltanian, Camellia Torabizadeh, Zahra Hadian Shirazi
J Educ Eval Health Prof. 2022;19:16.   Published online July 14, 2022
DOI: https://doi.org/10.3352/jeehp.2022.19.16
  • 4,523 View
  • 428 Download
  • 5 Web of Science
  • 8 Crossref
Abstract  PDF  Supplementary Material
Purpose
The current study aimed to identify the prevalence of burnout and related factors in nursing faculty members through a systematic review of the literature.
Methods
A comprehensive search of electronic databases, including Scopus, PubMed, Web of Science, Iranmedex, and Scientific Information Database was conducted via keywords extracted from Medical Subject Headings, including burnout and nursing faculty, for studies published from database inception to April 1, 2022. The quality of the included studies in this review was assessed using the appraisal tool for cross-sectional studies.
Results
A total of 2,551 nursing faculty members were enrolled in 11 studies. The mean score of burnout in nursing faculty members based on the Maslach Burnout Inventory (MBI) was 59.28 out of 132. The burnout score in this study was presented in 3 MBI subscales: emotional exhaustion, 21.24 (standard deviation [SD]=9.70) out of 54; depersonalization, 5.88 (SD=4.20) out of 30; and personal accomplishment, 32.16 (SD=6.45) out of 48. Several factors had significant relationships with burnout in nursing faculty members, including gender, level of education, hours of work, number of classroom students taught, full-time work, job pressure, perceived stress, subjective well-being, marital status, job satisfaction, work setting satisfaction, workplace empowerment, collegial support, management style, fulfillment of self-expectation, communication style, humor, and academic position.
Conclusion
Overall, the mean burnout scores in nursing faculty members were moderate. Therefore, health policymakers and managers can reduce the likelihood of burnout in nursing faculty members by using psychosocial interventions and support.

Citations to this article as recorded by
  • Strategies to promote nurse educator well-being and prevent burnout: An integrative review
    Allan Lovern, Lindsay Quinlan, Stephanie Brogdon, Cora Rabe, Laura S. Bonanno
    Teaching and Learning in Nursing.2024; 19(2): 185.     CrossRef
  • ALS Health care provider wellness
    Gregory Hansen, Sarah Burton-MacLeod, Kerri Lynn Schellenberg
    Amyotrophic Lateral Sclerosis and Frontotemporal Degeneration.2024; 25(3-4): 299.     CrossRef
  • Cuidando al profesorado: resultados de un programa a distancia de autocuidado para educadores de profesiones de la salud
    Denisse Zúñiga, Guadalupe Echeverría, Pía Nitsche, Nuria Pedrals, Attilio Rigotti, Marisol Sirhan, Klaus Puschel, Marcela Bitran
    Educación Médica.2024; 25(1): 100871.     CrossRef
  • Civility and resilience practices to address chronic workplace stress in nursing academia
    Teresa M. Stephens, Cynthia M. Clark
    Teaching and Learning in Nursing.2024; 19(2): 119.     CrossRef
  • The state of mental health, burnout, mattering and perceived wellness culture in Doctorally prepared nursing faculty with implications for action
    Bernadette Mazurek Melnyk, Lee Ann Strait, Cindy Beckett, Andreanna Pavan Hsieh, Jeffery Messinger, Randee Masciola
    Worldviews on Evidence-Based Nursing.2023; 20(2): 142.     CrossRef
  • Pressures in the Ivory Tower: An Empirical Study of Burnout Scores among Nursing Faculty
    Sheila A. Boamah, Michael Kalu, Rosain Stennett, Emily Belita, Jasmine Travers
    International Journal of Environmental Research and Public Health.2023; 20(5): 4398.     CrossRef
  • Understanding and Fostering Mental Health and Well-Being among University Faculty: A Narrative Review
    Dalal Hammoudi Halat, Abderrezzaq Soltani, Roua Dalli, Lama Alsarraj, Ahmed Malki
    Journal of Clinical Medicine.2023; 12(13): 4425.     CrossRef
  • A mixed-methods study of the effectiveness and perceptions of a course design institute for health science educators
    Julie Speer, Quincy Conley, Derek Thurber, Brittany Williams, Mitzi Wasden, Brenda Jackson
    BMC Medical Education.2022;[Epub]     CrossRef

JEEHP : Journal of Educational Evaluation for Health Professions