
JEEHP : Journal of Educational Evaluation for Health Professions

Most read articles


Most-read articles are those viewed most frequently during the last 3 months, from among articles published since 2022.

Research article
No difference in factual or conceptual recall comprehension for tablet, laptop, and handwritten note-taking by medical students in the United States: a survey-based observational study  
Warren Wiechmann, Robert Edwards, Cheyenne Low, Alisa Wray, Megan Boysen-Osborn, Shannon Toohey
J Educ Eval Health Prof. 2022;19:8.   Published online April 26, 2022
DOI: https://doi.org/10.3352/jeehp.2022.19.8
  • 9,521 View
  • 439 Download
Abstract
Purpose
Technological advances are changing how students approach learning. Traditional longhand note-taking has been supplemented, and in some cases replaced, by note-taking on tablets, smartphones, and laptops. It has been theorized that writing notes by hand requires more complex cognitive processes and may lead to better retention. However, few studies have investigated tablet-based note-taking, which allows the incorporation of typing, drawing, highlighting, and media. We therefore sought to test the hypothesis that tablet-based note-taking would lead to equivalent or better recall than written note-taking.
Methods
We allocated 68 students to longhand, laptop, or tablet note-taking groups; they watched and took notes on a presentation and were then assessed for factual and conceptual recall. A second short distractor video was shown, followed by a 30-minute assessment. The study took place on the University of California, Irvine campus on a single day in August 2018. Notes were analyzed for content, supplemental drawings, and other media sources.
Results
No significant difference was found in the factual or conceptual recall scores for tablet, laptop, and handwritten note-taking (P=0.61). The median word count was 131.5 for tablets, 121.0 for handwriting, and 297.0 for laptops (P=0.01). The tablet group had the highest presence of drawing, highlighting, and other media/tools.
Conclusion
In light of conflicting research regarding the best note-taking method, our study showed that longhand note-taking is not superior to tablet or laptop note-taking. This suggests students should be encouraged to pick the note-taking method that appeals most to them. In the future, traditional note-taking may be replaced or supplemented with digital technologies that provide similar efficacy with more convenience.
Brief report
Are ChatGPT’s knowledge and interpretation ability comparable to those of medical students in Korea for taking a parasitology examination?: a descriptive study  
Sun Huh
J Educ Eval Health Prof. 2023;20:1.   Published online January 11, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.1
  • 10,697 View
  • 1,001 Download
  • 102 Web of Science
  • 61 Crossref
AbstractAbstract PDFSupplementary Material
This study aimed to compare the knowledge and interpretation ability of ChatGPT, an artificial intelligence language model, with those of medical students in Korea by administering a parasitology examination to both ChatGPT and the students. The examination consisted of 79 items and was administered to ChatGPT on January 1, 2023. The examination results were analyzed in terms of ChatGPT’s overall performance score, its correct answer rate by the items’ knowledge level, and the acceptability of its explanations of the items. ChatGPT’s performance was lower than that of the medical students, and ChatGPT’s correct answer rate was not related to the items’ knowledge level. However, there was a relationship between acceptable explanations and correct answers. In conclusion, ChatGPT’s knowledge and interpretation ability for this parasitology examination were not yet comparable to those of medical students in Korea.

Citations

Citations to this article as recorded by  
  • Performance of ChatGPT on the India Undergraduate Community Medicine Examination: Cross-Sectional Study
    Aravind P Gandhi, Felista Karen Joesph, Vineeth Rajagopal, P Aparnavi, Sushma Katkuri, Sonal Dayama, Prakasini Satapathy, Mahalaqua Nazli Khatib, Shilpa Gaidhane, Quazi Syed Zahiruddin, Ashish Behera
    JMIR Formative Research.2024; 8: e49964.     CrossRef
  • Large Language Models and Artificial Intelligence: A Primer for Plastic Surgeons on the Demonstrated and Potential Applications, Promises, and Limitations of ChatGPT
    Jad Abi-Rafeh, Hong Hao Xu, Roy Kazan, Ruth Tevlin, Heather Furnas
    Aesthetic Surgery Journal.2024; 44(3): 329.     CrossRef
  • Unveiling the ChatGPT phenomenon: Evaluating the consistency and accuracy of endodontic question answers
    Ana Suárez, Víctor Díaz‐Flores García, Juan Algar, Margarita Gómez Sánchez, María Llorente de Pedro, Yolanda Freire
    International Endodontic Journal.2024; 57(1): 108.     CrossRef
  • Bob or Bot: Exploring ChatGPT's Answers to University Computer Science Assessment
    Mike Richards, Kevin Waugh, Mark Slaymaker, Marian Petre, John Woodthorpe, Daniel Gooch
    ACM Transactions on Computing Education.2024; 24(1): 1.     CrossRef
  • Examining the use of ChatGPT in public universities in Hong Kong: a case study of restricted access areas
    Michelle W. T. Cheng, Iris H. Y. YIM
    Discover Education.2024;[Epub]     CrossRef
  • Performance of ChatGPT on Ophthalmology-Related Questions Across Various Examination Levels: Observational Study
    Firas Haddad, Joanna S Saade
    JMIR Medical Education.2024; 10: e50842.     CrossRef
  • A comparative vignette study: Evaluating the potential role of a generative AI model in enhancing clinical decision‐making in nursing
    Mor Saban, Ilana Dubovi
    Journal of Advanced Nursing.2024;[Epub]     CrossRef
  • Comparison of the Performance of GPT-3.5 and GPT-4 With That of Medical Students on the Written German Medical Licensing Examination: Observational Study
    Annika Meyer, Janik Riese, Thomas Streichert
    JMIR Medical Education.2024; 10: e50965.     CrossRef
  • From hype to insight: Exploring ChatGPT's early footprint in education via altmetrics and bibliometrics
    Lung‐Hsiang Wong, Hyejin Park, Chee‐Kit Looi
    Journal of Computer Assisted Learning.2024;[Epub]     CrossRef
  • A scoping review of artificial intelligence in medical education: BEME Guide No. 84
    Morris Gordon, Michelle Daniel, Aderonke Ajiboye, Hussein Uraiby, Nicole Y. Xu, Rangana Bartlett, Janice Hanson, Mary Haas, Maxwell Spadafore, Ciaran Grafton-Clarke, Rayhan Yousef Gasiea, Colin Michie, Janet Corral, Brian Kwan, Diana Dolmans, Satid Thamma
    Medical Teacher.2024: 1.     CrossRef
  • University Students’ Experiences with ChatGPT 3.5: Tale Variants Written with Artificial Intelligence [Üniversite Öğrencilerinin ChatGPT 3,5 Deneyimleri: Yapay Zekâyla Yazılmış Masal Varyantları]
    Bilge GÖK, Fahri TEMİZYÜREK, Özlem BAŞ
    Korkut Ata Türkiyat Araştırmaları Dergisi.2024; (14): 1040.     CrossRef
  • Tracking ChatGPT Research: Insights From the Literature and the Web
    Omar Mubin, Fady Alnajjar, Zouheir Trabelsi, Luqman Ali, Medha Mohan Ambali Parambil, Zhao Zou
    IEEE Access.2024; 12: 30518.     CrossRef
  • Potential applications of ChatGPT in obstetrics and gynecology in Korea: a review article
    YooKyung Lee, So Yun Kim
    Obstetrics & Gynecology Science.2024; 67(2): 153.     CrossRef
  • Application of generative language models to orthopaedic practice
    Jessica Caterson, Olivia Ambler, Nicholas Cereceda-Monteoliva, Matthew Horner, Andrew Jones, Arwel Tomos Poacher
    BMJ Open.2024; 14(3): e076484.     CrossRef
  • Applicability of ChatGPT in Assisting to Solve Higher Order Problems in Pathology
    Ranwir K Sinha, Asitava Deb Roy, Nikhil Kumar, Himel Mondal
    Cureus.2023;[Epub]     CrossRef
  • Issues in the 3rd year of the COVID-19 pandemic, including computer-based testing, study design, ChatGPT, journal metrics, and appreciation to reviewers
    Sun Huh
    Journal of Educational Evaluation for Health Professions.2023; 20: 5.     CrossRef
  • Emergence of the metaverse and ChatGPT in journal publishing after the COVID-19 pandemic
    Sun Huh
    Science Editing.2023; 10(1): 1.     CrossRef
  • Assessing the Capability of ChatGPT in Answering First- and Second-Order Knowledge Questions on Microbiology as per Competency-Based Medical Education Curriculum
    Dipmala Das, Nikhil Kumar, Langamba Angom Longjam, Ranwir Sinha, Asitava Deb Roy, Himel Mondal, Pratima Gupta
    Cureus.2023;[Epub]     CrossRef
  • Evaluating ChatGPT's Ability to Solve Higher-Order Questions on the Competency-Based Medical Education Curriculum in Medical Biochemistry
    Arindam Ghosh, Aritri Bir
    Cureus.2023;[Epub]     CrossRef
  • Overview of Early ChatGPT’s Presence in Medical Literature: Insights From a Hybrid Literature Review by ChatGPT and Human Experts
    Omar Temsah, Samina A Khan, Yazan Chaiah, Abdulrahman Senjab, Khalid Alhasan, Amr Jamal, Fadi Aljamaan, Khalid H Malki, Rabih Halwani, Jaffar A Al-Tawfiq, Mohamad-Hani Temsah, Ayman Al-Eyadhy
    Cureus.2023;[Epub]     CrossRef
  • ChatGPT for Future Medical and Dental Research
    Bader Fatani
    Cureus.2023;[Epub]     CrossRef
  • ChatGPT in Dentistry: A Comprehensive Review
    Hind M Alhaidry, Bader Fatani, Jenan O Alrayes, Aljowhara M Almana, Nawaf K Alfhaed
    Cureus.2023;[Epub]     CrossRef
  • Can we trust AI chatbots’ answers about disease diagnosis and patient care?
    Sun Huh
    Journal of the Korean Medical Association.2023; 66(4): 218.     CrossRef
  • Large Language Models in Medical Education: Opportunities, Challenges, and Future Directions
    Alaa Abd-alrazaq, Rawan AlSaad, Dari Alhuwail, Arfan Ahmed, Padraig Mark Healy, Syed Latifi, Sarah Aziz, Rafat Damseh, Sadam Alabed Alrazak, Javaid Sheikh
    JMIR Medical Education.2023; 9: e48291.     CrossRef
  • Early applications of ChatGPT in medical practice, education and research
    Sam Sedaghat
    Clinical Medicine.2023; 23(3): 278.     CrossRef
  • A Review of Research on Teaching and Learning Transformation under the Influence of ChatGPT Technology
    璇 师
    Advances in Education.2023; 13(05): 2617.     CrossRef
  • Performance of GPT-3.5 and GPT-4 on the Japanese Medical Licensing Examination: Comparison Study
    Soshi Takagi, Takashi Watari, Ayano Erabi, Kota Sakaguchi
    JMIR Medical Education.2023; 9: e48002.     CrossRef
  • ChatGPT’s quiz skills in different otolaryngology subspecialties: an analysis of 2576 single-choice and multiple-choice board certification preparation questions
    Cosima C. Hoch, Barbara Wollenberg, Jan-Christoffer Lüers, Samuel Knoedler, Leonard Knoedler, Konstantin Frank, Sebastian Cotofana, Michael Alfertshofer
    European Archives of Oto-Rhino-Laryngology.2023; 280(9): 4271.     CrossRef
  • Analysing the Applicability of ChatGPT, Bard, and Bing to Generate Reasoning-Based Multiple-Choice Questions in Medical Physiology
    Mayank Agarwal, Priyanka Sharma, Ayan Goswami
    Cureus.2023;[Epub]     CrossRef
  • The Intersection of ChatGPT, Clinical Medicine, and Medical Education
    Rebecca Shin-Yee Wong, Long Chiau Ming, Raja Affendi Raja Ali
    JMIR Medical Education.2023; 9: e47274.     CrossRef
  • The Role of Artificial Intelligence in Higher Education: ChatGPT Assessment for Anatomy Course
    Tarık TALAN, Yusuf KALINKARA
    Uluslararası Yönetim Bilişim Sistemleri ve Bilgisayar Bilimleri Dergisi.2023; 7(1): 33.     CrossRef
  • Comparing ChatGPT’s ability to rate the degree of stereotypes and the consistency of stereotype attribution with those of medical students in New Zealand in developing a similarity rating test: a methodological study
    Chao-Cheng Lin, Zaine Akuhata-Huntington, Che-Wei Hsu
    Journal of Educational Evaluation for Health Professions.2023; 20: 17.     CrossRef
  • Examining Real-World Medication Consultations and Drug-Herb Interactions: ChatGPT Performance Evaluation
    Hsing-Yu Hsu, Kai-Cheng Hsu, Shih-Yen Hou, Ching-Lung Wu, Yow-Wen Hsieh, Yih-Dih Cheng
    JMIR Medical Education.2023; 9: e48433.     CrossRef
  • Assessing the Efficacy of ChatGPT in Solving Questions Based on the Core Concepts in Physiology
    Arijita Banerjee, Aquil Ahmad, Payal Bhalla, Kavita Goyal
    Cureus.2023;[Epub]     CrossRef
  • ChatGPT Performs on the Chinese National Medical Licensing Examination
    Xinyi Wang, Zhenye Gong, Guoxin Wang, Jingdan Jia, Ying Xu, Jialu Zhao, Qingye Fan, Shaun Wu, Weiguo Hu, Xiaoyang Li
    Journal of Medical Systems.2023;[Epub]     CrossRef
  • Artificial intelligence and its impact on job opportunities among university students in North Lima, 2023
    Doris Ruiz-Talavera, Jaime Enrique De la Cruz-Aguero, Nereo García-Palomino, Renzo Calderón-Espinoza, William Joel Marín-Rodriguez
    ICST Transactions on Scalable Information Systems.2023;[Epub]     CrossRef
  • Revolutionizing Dental Care: A Comprehensive Review of Artificial Intelligence Applications Among Various Dental Specialties
    Najd Alzaid, Omar Ghulam, Modhi Albani, Rafa Alharbi, Mayan Othman, Hasan Taher, Saleem Albaradie, Suhael Ahmed
    Cureus.2023;[Epub]     CrossRef
  • Opportunities, Challenges, and Future Directions of Generative Artificial Intelligence in Medical Education: Scoping Review
    Carl Preiksaitis, Christian Rose
    JMIR Medical Education.2023; 9: e48785.     CrossRef
  • Exploring the impact of language models, such as ChatGPT, on student learning and assessment
    Araz Zirar
    Review of Education.2023;[Epub]     CrossRef
  • Evaluating the reliability of ChatGPT as a tool for imaging test referral: a comparative study with a clinical decision support system
    Shani Rosen, Mor Saban
    European Radiology.2023;[Epub]     CrossRef
  • Redesigning Tertiary Educational Evaluation with AI: A Task-Based Analysis of LIS Students’ Assessment on Written Tests and Utilizing ChatGPT at NSTU
    Shamima Yesmin
    Science & Technology Libraries.2023: 1.     CrossRef
  • ChatGPT and the AI revolution: a comprehensive investigation of its multidimensional impact and potential
    Mohd Afjal
    Library Hi Tech.2023;[Epub]     CrossRef
  • The Significance of Artificial Intelligence Platforms in Anatomy Education: An Experience With ChatGPT and Google Bard
    Hasan B Ilgaz, Zehra Çelik
    Cureus.2023;[Epub]     CrossRef
  • Is ChatGPT’s Knowledge and Interpretative Ability Comparable to First Professional MBBS (Bachelor of Medicine, Bachelor of Surgery) Students of India in Taking a Medical Biochemistry Examination?
    Abhra Ghosh, Nandita Maini Jindal, Vikram K Gupta, Ekta Bansal, Navjot Kaur Bajwa, Abhishek Sett
    Cureus.2023;[Epub]     CrossRef
  • Ethical consideration of the use of generative artificial intelligence, including ChatGPT in writing a nursing article
    Sun Huh
    Child Health Nursing Research.2023; 29(4): 249.     CrossRef
  • Potential Use of ChatGPT for Patient Information in Periodontology: A Descriptive Pilot Study
    Osman Babayiğit, Zeynep Tastan Eroglu, Dilek Ozkan Sen, Fatma Ucan Yarkac
    Cureus.2023;[Epub]     CrossRef
  • Efficacy and limitations of ChatGPT as a biostatistical problem-solving tool in medical education in Serbia: a descriptive study
    Aleksandra Ignjatović, Lazar Stevanović
    Journal of Educational Evaluation for Health Professions.2023; 20: 28.     CrossRef
  • Assessing the Performance of ChatGPT in Medical Biochemistry Using Clinical Case Vignettes: Observational Study
    Krishna Mohan Surapaneni
    JMIR Medical Education.2023; 9: e47191.     CrossRef
  • A systematic review of ChatGPT use in K‐12 education
    Peng Zhang, Gemma Tur
    European Journal of Education.2023;[Epub]     CrossRef
  • Performance of ChatGPT, Bard, Claude, and Bing on the Peruvian National Licensing Medical Examination: a cross-sectional study
    Betzy Clariza Torres-Zegarra, Wagner Rios-Garcia, Alvaro Micael Ñaña-Cordova, Karen Fatima Arteaga-Cisneros, Xiomara Cristina Benavente Chalco, Marina Atena Bustamante Ordoñez, Carlos Jesus Gutierrez Rios, Carlos Alberto Ramos Godoy, Kristell Luisa Teresa Panta Quezada, Jesus Daniel Gutierrez-Arratia, Javier Alejandro Flores-Cohaila
    Journal of Educational Evaluation for Health Professions.2023; 20: 30.     CrossRef
  • ChatGPT’s performance in German OB/GYN exams – paving the way for AI-enhanced medical education and clinical practice
    Maximilian Riedel, Katharina Kaefinger, Antonia Stuehrenberg, Viktoria Ritter, Niklas Amann, Anna Graf, Florian Recker, Evelyn Klein, Marion Kiechle, Fabian Riedel, Bastian Meyer
    Frontiers in Medicine.2023;[Epub]     CrossRef
  • Medical students’ patterns of using ChatGPT as a feedback tool and perceptions of ChatGPT in a Leadership and Communication course in Korea: a cross-sectional study
    Janghee Park
    Journal of Educational Evaluation for Health Professions.2023; 20: 29.     CrossRef
  • Evaluating ChatGPT as a self‐learning tool in medical biochemistry: A performance assessment in undergraduate medical university examination
    Krishna Mohan Surapaneni, Anusha Rajajagadeesan, Lakshmi Goudhaman, Shalini Lakshmanan, Saranya Sundaramoorthi, Dineshkumar Ravi, Kalaiselvi Rajendiran, Porchelvan Swaminathan
    Biochemistry and Molecular Biology Education.2023;[Epub]     CrossRef
  • From text to diagnose: ChatGPT’s efficacy in medical decision-making
    Yaroslav Mykhalko, Pavlo Kish, Yelyzaveta Rubtsova, Oleksandr Kutsyn, Valentyna Koval
    Wiadomości Lekarskie.2023; 76(11): 2345.     CrossRef
  • Using ChatGPT for Clinical Practice and Medical Education: Cross-Sectional Survey of Medical Students’ and Physicians’ Perceptions
    Pasin Tangadulrat, Supinya Sono, Boonsin Tangtrakulwanich
    JMIR Medical Education.2023; 9: e50658.     CrossRef
  • Below average ChatGPT performance in medical microbiology exam compared to university students
    Malik Sallam, Khaled Al-Salahat
    Frontiers in Education.2023;[Epub]     CrossRef
  • ChatGPT: "To be or not to be" ... in academic research. The human mind's analytical rigor and capacity to discriminate between AI bots' truths and hallucinations
    Aurelian Anghelescu, Ilinca Ciobanu, Constantin Munteanu, Lucia Ana Maria Anghelescu, Gelu Onose
    Balneo and PRM Research Journal.2023; 14: 614.     CrossRef
  • ChatGPT Review: A Sophisticated Chatbot Models in Medical & Health-related Teaching and Learning
    Nur Izah Ab Razak, Muhammad Fawwaz Muhammad Yusoff, Rahmita Wirza O.K. Rahmat
    Malaysian Journal of Medicine and Health Sciences.2023; 19(s12): 98.     CrossRef
  • Application of artificial intelligence chatbots, including ChatGPT, in education, scholarly work, programming, and content generation and its prospects: a narrative review
    Tae Won Kim
    Journal of Educational Evaluation for Health Professions.2023; 20: 38.     CrossRef
  • Trends in research on ChatGPT and adoption-related issues discussed in articles: a narrative review
    Sang-Jun Kim
    Science Editing.2023; 11(1): 3.     CrossRef
  • Information amount, accuracy, and relevance of generative artificial intelligence platforms’ answers regarding learning objectives of medical arthropodology evaluated in English and Korean queries in December 2023: a descriptive study
    Hyunju Lee, Soobin Park
    Journal of Educational Evaluation for Health Professions.2023; 20: 39.     CrossRef
Review
Can an artificial intelligence chatbot be the author of a scholarly article?  
Ju Yoen Lee
J Educ Eval Health Prof. 2023;20:6.   Published online February 27, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.6
  • 6,987 View
  • 622 Download
  • 33 Web of Science
  • 34 Crossref
Abstract
At the end of 2022, the appearance of ChatGPT, an artificial intelligence (AI) chatbot with amazing writing ability, caused a great sensation in academia. The chatbot turned out to be very capable, but also capable of deception, and the news broke that several researchers had listed the chatbot (including its earlier version) as co-authors of their academic papers. In response, Nature and Science expressed their position that this chatbot cannot be listed as an author in the papers they publish. Since an AI chatbot is not a human being, in the current legal system, the text automatically generated by an AI chatbot cannot be a copyrighted work; thus, an AI chatbot cannot be an author of a copyrighted work. Current AI chatbots such as ChatGPT are much more advanced than search engines in that they produce original text, but they still remain at the level of a search engine in that they cannot take responsibility for their writing. For this reason, they also cannot be authors from the perspective of research ethics.

Citations

Citations to this article as recorded by  
  • Risks of abuse of large language models, like ChatGPT, in scientific publishing: Authorship, predatory publishing, and paper mills
    Graham Kendall, Jaime A. Teixeira da Silva
    Learned Publishing.2024; 37(1): 55.     CrossRef
  • Can ChatGPT be an author? A study of artificial intelligence authorship policies in top academic journals
    Brady D. Lund, K.T. Naheem
    Learned Publishing.2024; 37(1): 13.     CrossRef
  • The Role of AI in Writing an Article and Whether it Can Be a Co-author: What if it Gets Support From 2 Different AIs Like ChatGPT and Google Bard for the Same Theme?
    İlhan Bahşi, Ayşe Balat
    Journal of Craniofacial Surgery.2024; 35(1): 274.     CrossRef
  • Artificial Intelligence–Generated Scientific Literature: A Critical Appraisal
    Justyna Zybaczynska, Matthew Norris, Sunjay Modi, Jennifer Brennan, Pooja Jhaveri, Timothy J. Craig, Taha Al-Shaikhly
    The Journal of Allergy and Clinical Immunology: In Practice.2024; 12(1): 106.     CrossRef
  • Does Google’s Bard Chatbot perform better than ChatGPT on the European hand surgery exam?
    Goetsch Thibaut, Armaghan Dabbagh, Philippe Liverneaux
    International Orthopaedics.2024; 48(1): 151.     CrossRef
  • A Brief Review of the Efficacy in Artificial Intelligence and Chatbot-Generated Personalized Fitness Regimens
    Daniel K. Bays, Cole Verble, Kalyn M. Powers Verble
    Strength & Conditioning Journal.2024;[Epub]     CrossRef
  • Academic publisher guidelines on AI usage: A ChatGPT supported thematic analysis
    Mike Perkins, Jasper Roe
    F1000Research.2024; 12: 1398.     CrossRef
  • The Use of Artificial Intelligence in Writing Scientific Review Articles
    Melissa A. Kacena, Lilian I. Plotkin, Jill C. Fehrenbacher
    Current Osteoporosis Reports.2024; 22(1): 115.     CrossRef
  • Using AI to Write a Review Article Examining the Role of the Nervous System on Skeletal Homeostasis and Fracture Healing
    Murad K. Nazzal, Ashlyn J. Morris, Reginald S. Parker, Fletcher A. White, Roman M. Natoli, Jill C. Fehrenbacher, Melissa A. Kacena
    Current Osteoporosis Reports.2024; 22(1): 217.     CrossRef
  • GenAI et al.: Cocreation, Authorship, Ownership, Academic Ethics and Integrity in a Time of Generative AI
    Aras Bozkurt
    Open Praxis.2024; 16(1): 1.     CrossRef
  • An integrative decision-making framework to guide policies on regulating ChatGPT usage
    Umar Ali Bukar, Md Shohel Sayeed, Siti Fatimah Abdul Razak, Sumendra Yogarayan, Oluwatosin Ahmed Amodu
    PeerJ Computer Science.2024; 10: e1845.     CrossRef
  • Universal skepticism of ChatGPT: a review of early literature on chat generative pre-trained transformer
    Casey Watters, Michal K. Lemanski
    Frontiers in Big Data.2023;[Epub]     CrossRef
  • The importance of human supervision in the use of ChatGPT as a support tool in scientific writing
    William Castillo-González
    Metaverse Basic and Applied Research.2023;[Epub]     CrossRef
  • ChatGPT for Future Medical and Dental Research
    Bader Fatani
    Cureus.2023;[Epub]     CrossRef
  • Chatbots in Medical Research
    Punit Sharma
    Clinical Nuclear Medicine.2023; 48(9): 838.     CrossRef
  • Potential applications of ChatGPT in dermatology
    Nicolas Kluger
    Journal of the European Academy of Dermatology and Venereology.2023;[Epub]     CrossRef
  • The emergent role of artificial intelligence, natural learning processing, and large language models in higher education and research
    Tariq Alqahtani, Hisham A. Badreldin, Mohammed Alrashed, Abdulrahman I. Alshaya, Sahar S. Alghamdi, Khalid bin Saleh, Shuroug A. Alowais, Omar A. Alshaya, Ishrat Rahman, Majed S. Al Yami, Abdulkareem M. Albekairy
    Research in Social and Administrative Pharmacy.2023; 19(8): 1236.     CrossRef
  • ChatGPT Performance on the American Urological Association Self-assessment Study Program and the Potential Influence of Artificial Intelligence in Urologic Training
    Nicholas A. Deebel, Ryan Terlecki
    Urology.2023; 177: 29.     CrossRef
  • Intelligence or artificial intelligence? More hard problems for authors of Biological Psychology, the neurosciences, and everyone else
    Thomas Ritz
    Biological Psychology.2023; 181: 108590.     CrossRef
  • The ethics of disclosing the use of artificial intelligence tools in writing scholarly manuscripts
    Mohammad Hosseini, David B Resnik, Kristi Holmes
    Research Ethics.2023; 19(4): 449.     CrossRef
  • How trustworthy is ChatGPT? The case of bibliometric analyses
    Faiza Farhat, Shahab Saquib Sohail, Dag Øivind Madsen
    Cogent Engineering.2023;[Epub]     CrossRef
  • Disclosing use of Artificial Intelligence: Promoting transparency in publishing
    Parvaiz A. Koul
    Lung India.2023; 40(5): 401.     CrossRef
  • ChatGPT in medical research: challenging time ahead
    Daideepya C Bhargava, Devendra Jadav, Vikas P Meshram, Tanuj Kanchan
    Medico-Legal Journal.2023; 91(4): 223.     CrossRef
  • Academic publisher guidelines on AI usage: A ChatGPT supported thematic analysis
    Mike Perkins, Jasper Roe
    F1000Research.2023; 12: 1398.     CrossRef
  • Ethical consideration of the use of generative artificial intelligence, including ChatGPT in writing a nursing article
    Sun Huh
    Child Health Nursing Research.2023; 29(4): 249.     CrossRef
  • ChatGPT in medical writing: A game-changer or a gimmick?
    Shital Sarah Ahaley, Ankita Pandey, Simran Kaur Juneja, Tanvi Suhane Gupta, Sujatha Vijayakumar
    Perspectives in Clinical Research.2023;[Epub]     CrossRef
  • Artificial Intelligence-Supported Systems in Anesthesiology and Its Standpoint to Date—A Review
    Fiona M. P. Pham
    Open Journal of Anesthesiology.2023; 13(07): 140.     CrossRef
  • ChatGPT as an innovative tool for increasing sales in online stores
    Michał Orzoł, Katarzyna Szopik-Depczyńska
    Procedia Computer Science.2023; 225: 3450.     CrossRef
  • Intelligent Plagiarism as a Misconduct in Academic Integrity
    Jesús Miguel Muñoz-Cantero, Eva Maria Espiñeira-Bellón
    Acta Médica Portuguesa.2023; 37(1): 1.     CrossRef
  • Follow-up of Artificial Intelligence Development and its Controlled Contribution to the Article: Step to the Authorship?
    Ekrem Solmaz
    European Journal of Therapeutics.2023;[Epub]     CrossRef
  • May Artificial Intelligence Be a Co-Author on an Academic Paper?
    Ayşe Balat, İlhan Bahşi
    European Journal of Therapeutics.2023; 29(3): e12.     CrossRef
  • Opportunities and challenges for ChatGPT and large language models in biomedicine and health
    Shubo Tian, Qiao Jin, Lana Yeganova, Po-Ting Lai, Qingqing Zhu, Xiuying Chen, Yifan Yang, Qingyu Chen, Won Kim, Donald C Comeau, Rezarta Islamaj, Aadit Kapoor, Xin Gao, Zhiyong Lu
    Briefings in Bioinformatics.2023;[Epub]     CrossRef
  • ChatGPT: "To be or not to be" ... in academic research. The human mind's analytical rigor and capacity to discriminate between AI bots' truths and hallucinations
    Aurelian Anghelescu, Ilinca Ciobanu, Constantin Munteanu, Lucia Ana Maria Anghelescu, Gelu Onose
    Balneo and PRM Research Journal.2023; 14: 614.     CrossRef
  • Editorial policies of Journal of Educational Evaluation for Health Professions on the use of generative artificial intelligence in article writing and peer review
    Sun Huh
    Journal of Educational Evaluation for Health Professions.2023; 20: 40.     CrossRef
Educational/Faculty development material
Common models and approaches for the clinical educator to plan effective feedback encounters  
Cesar Orsini, Veena Rodrigues, Jorge Tricio, Margarita Rosel
J Educ Eval Health Prof. 2022;19:35.   Published online December 19, 2022
DOI: https://doi.org/10.3352/jeehp.2022.19.35
  • 3,980 View
  • 609 Download
  • 1 Web of Science
  • 2 Crossref
Abstract
Giving constructive feedback is crucial for learners to bridge the gap between their current performance and the desired standards of competence. Giving effective feedback is a skill that can be learned, practiced, and improved. Therefore, our aim was to explore models in clinical settings and assess their transferability to different clinical feedback encounters. We identified the 6 most common and accepted feedback models: the Feedback Sandwich, the Pendleton Rules, the One-Minute Preceptor, the SET-GO model, the R2C2 (Rapport/Reaction/Content/Coach) model, and the ALOBA (Agenda Led Outcome-based Analysis) model. We present a handy resource describing each model’s structure, strengths and weaknesses, requirements for educators and learners, and the feedback encounters for which it is best suited. These feedback models represent practical frameworks for educators to adopt, but also to adapt to their preferred style, combining and modifying them if necessary to suit their needs and context.

Citations

Citations to this article as recorded by  
  • Navigating power dynamics between pharmacy preceptors and learners
    Shane Tolleson, Mabel Truong, Natalie Rosario
    Exploratory Research in Clinical and Social Pharmacy.2024; 13: 100408.     CrossRef
  • Feedback conversations: First things first?
    Katharine A. Robb, Marcy E. Rosenbaum, Lauren Peters, Susan Lenoch, Donna Lancianese, Jane L. Miller
    Patient Education and Counseling.2023; 115: 107849.     CrossRef
Brief report
Training and implementation of handheld ultrasound technology at Georgetown Public Hospital Corporation in Guyana: a virtual learning cohort study  
Michelle Bui, Adrian Fernandez, Budheshwar Ramsukh, Onika Noel, Chris Prashad, David Bayne
J Educ Eval Health Prof. 2023;20:11.   Published online April 4, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.11
  • 2,069 View
  • 85 Download
  • 1 Web of Science
  • 1 Crossref
Abstract
A virtual point-of-care ultrasound (POCUS) education program was initiated to introduce handheld ultrasound technology to Georgetown Public Hospital Corporation in Guyana, a low-resource setting. We studied ultrasound competency and participant satisfaction in a cohort of 20 physicians-in-training through the urology clinic. The program consisted of a training phase, where they learned how to use the Butterfly iQ ultrasound, and a mentored implementation phase, where they applied their skills in the clinic. The assessment was through written exams and an objective structured clinical exam (OSCE). Fourteen students completed the program. The written exam scores were 3.36/5 in the training phase and 3.57/5 in the mentored implementation phase, and all students earned 100% on the OSCE. Students expressed satisfaction with the program. Our POCUS education program demonstrates the potential to teach clinical skills in low-resource settings and the value of virtual global health partnerships in advancing POCUS and minimally invasive diagnostics.

Citations

Citations to this article as recorded by  
  • Efficacy of Handheld Ultrasound in Medical Education: A Comprehensive Systematic Review and Narrative Analysis
    Mariam Haji-Hassan, Roxana-Denisa Capraș, Sorana D. Bolboacă
    Diagnostics.2023; 13(24): 3665.     CrossRef
Reviews
Application of artificial intelligence chatbots, including ChatGPT, in education, scholarly work, programming, and content generation and its prospects: a narrative review
Tae Won Kim
J Educ Eval Health Prof. 2023;20:38.   Published online December 27, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.38
  • 1,097 View
  • 223 Download
Abstract
This study aims to explore ChatGPT’s (GPT-3.5 version) functionalities, including reinforcement learning, diverse applications, and limitations. ChatGPT is an artificial intelligence (AI) chatbot powered by OpenAI’s Generative Pre-trained Transformer (GPT) model. The chatbot’s applications span education, programming, content generation, and more, demonstrating its versatility. ChatGPT can improve education by creating assignments and offering personalized feedback, as shown by its notable performance on medical exams, including the United States Medical Licensing Examination. However, concerns include plagiarism, reliability, and educational disparities. It aids in various research tasks, from design to writing, and has shown proficiency in summarizing and suggesting titles. Its use in scientific writing and language translation is promising, but professional oversight is needed for accuracy and originality. It assists in programming tasks such as writing code, debugging, and guiding installation and updates. It offers diverse applications, from cheering up individuals to generating creative content such as essays, news articles, and business plans. Unlike conventional search engines, which are keyword-based and non-interactive, ChatGPT provides interactive, generative responses and understands context, making it more akin to human conversation. ChatGPT has limitations, such as potential bias, dependence on outdated data, and revenue generation challenges. Nonetheless, ChatGPT is considered a transformative AI tool poised to redefine the future of generative technology. In conclusion, advancements in AI, such as ChatGPT, are altering how knowledge is acquired and applied, marking a shift from search engines to creativity engines. This transformation highlights the increasing importance of AI literacy and the ability to effectively utilize AI in various domains of life.
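As a minimal illustration of the kind of programmatic, conversational use described in this review, the sketch below queries a chat model through the OpenAI Python client (v1 style). The model name, system message, and prompt are illustrative assumptions, not details taken from the article.

```python
# Minimal sketch of querying a chat model programmatically; assumes the
# OpenAI Python client (>=1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You are a helpful tutor."},
        {"role": "user", "content": "Explain sensitivity vs. specificity."},
    ],
)
print(response.choices[0].message.content)
```

Unlike a keyword search, the messages list carries the conversational context forward, which is what makes interactive follow-up questions possible.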
How to review and assess a systematic review and meta-analysis article: a methodological study (secondary publication)  
Seung-Kwon Myung
J Educ Eval Health Prof. 2023;20:24.   Published online August 27, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.24
  • 2,077 View
  • 253 Download
  • 1 Web of Science
  • 1 Crossref
Abstract
Systematic reviews and meta-analyses have become central in many research fields, particularly medicine. They offer the highest level of evidence in evidence-based medicine and support the development and revision of clinical practice guidelines, which offer recommendations for clinicians caring for patients with specific diseases and conditions. This review summarizes the concepts of systematic reviews and meta-analyses and provides guidance on reviewing and assessing such papers. A systematic review refers to a review of a research question that uses explicit and systematic methods to identify, select, and critically appraise relevant research. In contrast, a meta-analysis is a quantitative statistical analysis that combines individual results on the same research question to estimate the common or mean effect. Conducting a meta-analysis involves defining a research topic, selecting a study design, searching literature in electronic databases, selecting relevant studies, and conducting the analysis. One can assess the findings of a meta-analysis by interpreting a forest plot and a funnel plot and by examining heterogeneity. When reviewing systematic reviews and meta-analyses, several essential points must be considered, including the originality and significance of the work, the comprehensiveness of the database search, the selection of studies based on inclusion and exclusion criteria, subgroup analyses by various factors, and the interpretation of the results based on the levels of evidence. This review will provide readers with helpful guidance to help them read, understand, and evaluate these articles.
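To make the pooling and heterogeneity concepts summarized above concrete, here is a small numerical sketch of inverse-variance (fixed-effect) pooling with Cochran’s Q and the I² statistic — the quantities a reader inspects on a forest plot. The per-study effects and standard errors are invented for demonstration and do not come from any study discussed in this review.

```python
# Minimal sketch of inverse-variance (fixed-effect) pooling and I²,
# using invented effect sizes (e.g., log odds ratios) and standard errors.
import math

effects = [0.30, 0.10, 0.45, 0.22]   # hypothetical per-study effects
ses     = [0.12, 0.15, 0.20, 0.10]   # hypothetical standard errors

weights = [1 / se**2 for se in ses]              # inverse-variance weights
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))

# Cochran's Q and I² quantify between-study heterogeneity.
q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
df = len(effects) - 1
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print(f"pooled effect = {pooled:.3f} "
      f"(95% CI {pooled - 1.96*se_pooled:.3f} to {pooled + 1.96*se_pooled:.3f})")
print(f"Q = {q:.2f} on {df} df, I^2 = {i_squared:.1f}%")
```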

Citations

Citations to this article as recorded by  
  • The Role of BIM in Managing Risks in Sustainability of Bridge Projects: A Systematic Review with Meta-Analysis
    Dema Munef Ahmad, László Gáspár, Zsolt Bencze, Rana Ahmad Maya
    Sustainability.2024; 16(3): 1242.     CrossRef
Research articles
Medical students’ self-assessed efficacy and satisfaction with training on endotracheal intubation and central venous catheterization with smart glasses in Taiwan: a non-equivalent control-group pre- and post-test study  
Yu-Fan Lin, Chien-Ying Wang, Yen-Hsun Huang, Sheng-Min Lin, Ying-Ying Yang
J Educ Eval Health Prof. 2022;19:25.   Published online September 2, 2022
DOI: https://doi.org/10.3352/jeehp.2022.19.25
  • 2,848 View
  • 226 Download
  • 1 Web of Science
  • 1 Crossref
Abstract
Purpose
Endotracheal intubation and central venous catheterization are essential procedures in clinical practice. Simulation-based technology such as smart glasses has been used to facilitate medical students’ training on these procedures. We investigated medical students’ self-assessed efficacy and satisfaction regarding the practice and training of these procedures with smart glasses in Taiwan.
Methods
This observational study enrolled 145 medical students in their 5th and 6th years participating in clerkships at Taipei Veterans General Hospital between October 2020 and December 2021. Students were divided into the smart glasses group or the control group and received training at a workshop. The primary outcomes included students’ pre- and post-intervention scores for self-assessed efficacy and satisfaction with the training tool, the instructor’s teaching, and the workshop.
Results
The pre-intervention scores for self-assessed efficacy of 5th- and 6th-year medical students in endotracheal intubation and central venous catheterization procedures showed no significant difference. The post-intervention score of self-assessed efficacy in the smart glasses group was better than that of the control group. Moreover, 6th-year medical students in the smart glasses group showed higher satisfaction with the training tool, instructor’s teaching, and workshop than those in the control group.
Conclusion
Smart glasses served as a suitable simulation tool for endotracheal intubation and central venous catheterization procedures training in medical students. Medical students practicing with smart glasses showed improved self-assessed efficacy and higher satisfaction with training, especially for procedural steps in a space-limited field. Simulation training on procedural skills with smart glasses in 5th-year medical students may be adjusted to improve their satisfaction.

Citations

Citations to this article as recorded by  
  • The use of smart glasses in nursing education: A scoping review
    Charlotte Romare, Lisa Skär
    Nurse Education in Practice.2023; 73: 103824.     CrossRef
Information amount, accuracy, and relevance of generative artificial intelligence platforms’ answers regarding learning objectives of medical arthropodology evaluated in English and Korean queries in December 2023: a descriptive study
Hyunju Lee, Soobin Park
J Educ Eval Health Prof. 2023;20:39.   Published online December 28, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.39
  • 864 View
  • 127 Download
Abstract
Purpose
This study assessed the performance of 6 generative artificial intelligence (AI) platforms on the learning objectives of medical arthropodology in a parasitology class in Korea. We examined the AI platforms’ performance by querying in Korean and English to determine their information amount, accuracy, and relevance in prompts in both languages.
Methods
From December 15 to 17, 2023, 6 generative AI platforms—Bard, Bing, Claude, Clova X, GPT-4, and Wrtn—were tested on 7 medical arthropodology learning objectives in English and Korean. Clova X and Wrtn are platforms from Korean companies. Responses were evaluated using specific criteria for the English and Korean queries.
Results
Bard had abundant information but was fourth in accuracy and relevance. GPT-4, with high information content, ranked first in accuracy and relevance. Clova X was fourth in amount but second in accuracy and relevance. Bing provided less information, with moderate accuracy and relevance. Wrtn’s answers were short, with average accuracy and relevance. Claude AI had a reasonable amount of information, but lower accuracy and relevance. The responses in English were superior in all aspects. Clova X was notably optimized for Korean, leading in relevance.
Conclusion
In this study of 6 generative AI platforms applied to medical arthropodology, GPT-4 excelled overall, while Clova X, a Korea-based AI product, achieved 100% relevance in Korean queries, the highest among its peers. Using these AI platforms in the classroom improved the authors’ self-efficacy and interest in the subject, offering a positive experience of questioning generative AI platforms and receiving information.
Performance of ChatGPT, Bard, Claude, and Bing on the Peruvian National Licensing Medical Examination: a cross-sectional study
Betzy Clariza Torres-Zegarra, Wagner Rios-Garcia, Alvaro Micael Ñaña-Cordova, Karen Fatima Arteaga-Cisneros, Xiomara Cristina Benavente Chalco, Marina Atena Bustamante Ordoñez, Carlos Jesus Gutierrez Rios, Carlos Alberto Ramos Godoy, Kristell Luisa Teresa Panta Quezada, Jesus Daniel Gutierrez-Arratia, Javier Alejandro Flores-Cohaila
J Educ Eval Health Prof. 2023;20:30.   Published online November 20, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.30
  • 962 View
  • 145 Download
  • 1 Web of Science
  • 3 Crossref
Abstract
Purpose
We aimed to describe the performance and evaluate the educational value of justifications provided by artificial intelligence chatbots, including GPT-3.5, GPT-4, Bard, Claude, and Bing, on the Peruvian National Medical Licensing Examination (P-NLME).
Methods
This was a cross-sectional analytical study. On July 25, 2023, each multiple-choice question (MCQ) from the P-NLME was entered into each chatbot (GPT-3.5, GPT-4, Bing, Bard, and Claude) 3 times. Then, 4 medical educators categorized the MCQs in terms of medical area, item type, and whether the MCQ required Peru-specific knowledge. They assessed the educational value of the justifications from the 2 top performers (GPT-4 and Bing).
Results
GPT-4 scored 86.7% and Bing scored 82.2%, followed by Bard and Claude, and the historical performance of Peruvian examinees was 55%. Among the factors associated with correct answers, only MCQs that required Peru-specific knowledge had lower odds (odds ratio, 0.23; 95% confidence interval, 0.09–0.61), whereas the remaining factors showed no associations. In assessing the educational value of justifications provided by GPT-4 and Bing, neither showed any significant differences in certainty, usefulness, or potential use in the classroom.
Conclusion
Among chatbots, GPT-4 and Bing were the top performers, with Bing performing better at Peru-specific MCQs. Moreover, the educational value of justifications provided by the GPT-4 and Bing could be deemed appropriate. However, it is essential to start addressing the educational value of these chatbots, rather than merely their performance on examinations.
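For readers unfamiliar with the odds ratio reported in the Results above, the sketch below shows how an odds ratio and a Wald-type 95% confidence interval are computed from a 2×2 table. The cell counts are invented for illustration and do not reproduce the study’s data.

```python
# Hypothetical 2x2 table: correct/incorrect answers for MCQs that do or
# do not require country-specific knowledge (counts are invented).
import math

a, b = 12, 8    # country-specific items: correct, incorrect
c, d = 70, 10   # other items:            correct, incorrect

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)

log_or = math.log(odds_ratio)
lo = math.exp(log_or - 1.96 * se_log_or)
hi = math.exp(log_or + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

An odds ratio below 1, with a confidence interval excluding 1, is what supports the study’s claim that country-specific items had lower odds of being answered correctly.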

Citations

Citations to this article as recorded by  
  • Performance of GPT-4V in Answering the Japanese Otolaryngology Board Certification Examination Questions: Evaluation Study
    Masao Noda, Takayoshi Ueno, Ryota Koshu, Yuji Takaso, Mari Dias Shimada, Chizu Saito, Hisashi Sugimoto, Hiroaki Fushiki, Makoto Ito, Akihiro Nomura, Tomokazu Yoshizaki
    JMIR Medical Education.2024; 10: e57054.     CrossRef
  • Response to Letter to the Editor re: “Artificial Intelligence Versus Expert Plastic Surgeon: Comparative Study Shows ChatGPT ‘Wins' Rhinoplasty Consultations: Should We Be Worried? [1]” by Durairaj et al.
    Kay Durairaj, Omer Baker
    Facial Plastic Surgery & Aesthetic Medicine.2024;[Epub]     CrossRef
  • Information amount, accuracy, and relevance of generative artificial intelligence platforms’ answers regarding learning objectives of medical arthropodology evaluated in English and Korean queries in December 2023: a descriptive study
    Hyunju Lee, Soobin Park
    Journal of Educational Evaluation for Health Professions.2023; 20: 39.     CrossRef
Efficacy and limitations of ChatGPT as a biostatistical problem-solving tool in medical education in Serbia: a descriptive study  
Aleksandra Ignjatović, Lazar Stevanović
J Educ Eval Health Prof. 2023;20:28.   Published online October 16, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.28
  • 1,555 View
  • 159 Download
  • 1 Web of Science
  • 1 Crossref
Abstract
Purpose
This study aimed to assess the performance of ChatGPT (GPT-3.5 and GPT-4) as a study tool in solving biostatistical problems and to identify any potential drawbacks that might arise from using ChatGPT in medical education, particularly in solving practical biostatistical problems.
Methods
In this descriptive study, ChatGPT was tested on biostatistical problems from the Handbook of Medical Statistics by Peacock and Peacock. Tables from the problems were transformed into textual questions. Ten biostatistical problems were randomly chosen and used as text-based input for conversation with ChatGPT (versions 3.5 and 4).
Results
GPT-3.5 solved 5 practical problems on the first attempt, related to categorical data, cross-sectional studies, measuring reliability, probability properties, and the t-test. GPT-3.5 failed to provide correct answers regarding analysis of variance, the chi-square test, and sample size within 3 attempts. GPT-4 solved a task related to confidence intervals on the first attempt and, with precise guidance and monitoring, solved all questions within 3 attempts.
Conclusion
The assessment of both versions of ChatGPT on 10 biostatistical problems revealed below-average performance, with correct response rates of 5 and 6 out of 10 on the first attempt for GPT-3.5 and GPT-4, respectively. GPT-4 succeeded in providing all correct answers within 3 attempts. These findings indicate that this tool, even when providing and calculating different statistical analyses, can be wrong; students should be aware of ChatGPT’s limitations and be careful when incorporating this model into medical education.
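As an illustration of the type of practical problem tested (the actual items come from Peacock and Peacock and are not reproduced here), the sketch below runs an independent two-sample t-test on invented data — the kind of calculation against which a chatbot’s answer could be checked.

```python
# One example of a biostatistical problem of the type tested:
# an independent two-sample t-test on invented measurements.
from scipy import stats

group_a = [5.1, 4.8, 5.6, 5.0, 4.9, 5.3]   # hypothetical values
group_b = [4.2, 4.5, 4.0, 4.7, 4.3, 4.4]

t_stat, p_value = stats.ttest_ind(group_a, group_b)  # Student's t-test
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")
```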

Citations

Citations to this article as recorded by  
  • Can Generative AI and ChatGPT Outperform Humans on Cognitive-Demanding Problem-Solving Tasks in Science?
    Xiaoming Zhai, Matthew Nyaaba, Wenchao Ma
    Science & Education.2024;[Epub]     CrossRef
Mentorship and self-efficacy are associated with lower burnout in physical therapists in the United States: a cross-sectional survey study  
Matthew Pugliese, Jean-Michel Brismée, Brad Allen, Sean Riley, Justin Tammany, Paul Mintken
J Educ Eval Health Prof. 2023;20:27.   Published online September 27, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.27
  • 2,309 View
  • 249 Download
Abstract
Purpose
This study investigated the prevalence of burnout in physical therapists in the United States and the relationships between burnout and education, mentorship, and self-efficacy.
Methods
This was a cross-sectional survey study. An electronic survey was distributed to practicing physical therapists across the United States over a 6-week period from December 2020 to January 2021. The survey was completed by 2,813 physical therapists from all states. The majority were female (68.72%), White or Caucasian (80.13%), and employed full-time (77.14%). Respondents completed questions on demographics, education, mentorship, self-efficacy, and burnout. The Burnout Clinical Subtypes Questionnaire 12 (BCSQ-12) and self-reports were used to quantify burnout, and the General Self-Efficacy Scale (GSES) was used to measure self-efficacy. Descriptive and inferential analyses were performed.
Results
Respondents from home health (median BCSQ-12=42.00) and skilled nursing facility settings (median BCSQ-12=42.00) displayed the highest burnout scores. Burnout was significantly lower among those who provided formal mentorship (median BCSQ-12=39.00, P=0.0001) compared to no mentorship (median BCSQ-12=41.00). Respondents who received formal mentorship (median BCSQ-12=38.00, P=0.0028) displayed significantly lower burnout than those who received no mentorship (median BCSQ-12=41.00). A moderate negative correlation (rho=-0.49) was observed between GSES and burnout scores. A strong positive correlation was found between self-reported burnout status and burnout scores (rank-biserial r=0.61).
Conclusion
Burnout is prevalent in the physical therapy profession, as almost half of respondents (49.34%) reported burnout. Providing or receiving mentorship and higher self-efficacy were associated with lower burnout. Organizations should consider measuring burnout levels, investing in mentorship programs, and implementing strategies to improve self-efficacy.
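A brief sketch of the correlation analysis reported above: Spearman’s rho between self-efficacy (GSES) and burnout (BCSQ-12) scores, computed here on invented data, since the study’s individual-level responses are not public.

```python
# Sketch: Spearman correlation between self-efficacy and burnout scores,
# using invented data for illustration only.
from scipy import stats

gses   = [28, 35, 31, 25, 38, 30, 33, 27]   # hypothetical GSES scores
bcsq12 = [44, 36, 40, 47, 33, 41, 38, 45]   # hypothetical BCSQ-12 scores

rho, p = stats.spearmanr(gses, bcsq12)
print(f"rho = {rho:.2f}, P = {p:.4f}")
```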
Medical students’ patterns of using ChatGPT as a feedback tool and perceptions of ChatGPT in a Leadership and Communication course in Korea: a cross-sectional study  
Janghee Park
J Educ Eval Health Prof. 2023;20:29.   Published online November 10, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.29
  • 1,104 View
  • 116 Download
  • 1 Web of Science
  • 2 Crossref
Abstract
Purpose
This study aimed to analyze patterns of using ChatGPT before and after group activities and to explore medical students’ perceptions of ChatGPT as a feedback tool in the classroom.
Methods
The study included 99 2nd-year pre-medical students who participated in a “Leadership and Communication” course from March to June 2023. Students engaged in both individual and group activities related to negotiation strategies. ChatGPT was used to provide feedback on their solutions. A survey was administered to assess students’ perceptions of ChatGPT’s feedback, its use in the classroom, and the strengths and challenges of ChatGPT from May 17 to 19, 2023.
Results
The students responded by indicating that ChatGPT’s feedback was helpful, and revised and resubmitted their group answers in various ways after receiving feedback. The majority of respondents expressed agreement with the use of ChatGPT during class. The most common response concerning the appropriate context of using ChatGPT’s feedback was “after the first round of discussion, for revisions.” There was a significant difference in satisfaction with ChatGPT’s feedback, including correctness, usefulness, and ethics, depending on whether or not ChatGPT was used during class, but there was no significant difference according to gender or whether students had previous experience with ChatGPT. The strongest advantages were “providing answers to questions” and “summarizing information,” and the worst disadvantage was “producing information without supporting evidence.”
Conclusion
The students were aware of the advantages and disadvantages of ChatGPT, and they had a positive attitude toward using ChatGPT in the classroom.

Citations

Citations to this article as recorded by  
  • ChatGPT and Clinical Training: Perception, Concerns, and Practice of Pharm-D Students
    Mohammed Zawiah, Fahmi Al-Ashwal, Lobna Gharaibeh, Rana Abu Farha, Karem Alzoubi, Khawla Abu Hammour, Qutaiba A Qasim, Fahd Abrah
    Journal of Multidisciplinary Healthcare.2023; Volume 16: 4099.     CrossRef
  • Information amount, accuracy, and relevance of generative artificial intelligence platforms’ answers regarding learning objectives of medical arthropodology evaluated in English and Korean queries in December 2023: a descriptive study
    Hyunju Lee, Soobin Park
    Journal of Educational Evaluation for Health Professions.2023; 20: 39.     CrossRef
Effect of motion-graphic video-based training on the performance of operating room nurse students in cataract surgery in Iran: a randomized controlled study  
Behnaz Fatahi, Samira Fatahi, Sohrab Nosrati, Masood Bagheri
J Educ Eval Health Prof. 2023;20:34.   Published online November 28, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.34
  • 883 View
  • 78 Download
Abstract
Purpose
The present study was conducted to determine the effect of motion-graphic video-based training on the performance of operating room nurse students in cataract surgery using phacoemulsification at Kermanshah University of Medical Sciences in Iran.
Methods
This was a randomized controlled study conducted among 36 students training to become operating room nurses. The control group received only routine training, while the intervention group received motion-graphic video-based training on the scrub nurse’s performance in cataract surgery in addition to the educator’s training. The performance of the students in both groups as scrub nurses was measured with a researcher-made checklist in a pre-test and a post-test.
Results
The mean scores for performance in the pre-test and post-test were 17.83 and 26.44 in the control group and 18.33 and 50.94 in the intervention group, respectively, and a significant difference was identified between the mean scores of the pre- and post-test in both groups (P=0.001). The intervention also led to a significant increase in the mean performance score in the intervention group compared to the control group (P=0.001).
Conclusion
Considering the significant difference in the performance score of the intervention group compared to the control group, motion-graphic video-based training had a positive effect on the performance of operating room nurse students, and such training can be used to improve clinical training.
Review
Prevalence of burnout and related factors in nursing faculty members: a systematic review  
Marziyeh Hosseini, Mitra Soltanian, Camellia Torabizadeh, Zahra Hadian Shirazi
J Educ Eval Health Prof. 2022;19:16.   Published online July 14, 2022
DOI: https://doi.org/10.3352/jeehp.2022.19.16
  • 4,027 View
  • 388 Download
  • 5 Web of Science
  • 8 Crossref
Abstract
Purpose
The current study aimed to identify the prevalence of burnout and related factors in nursing faculty members through a systematic review of the literature.
Methods
A comprehensive search of electronic databases, including Scopus, PubMed, Web of Science, Iranmedex, and the Scientific Information Database, was conducted using keywords extracted from Medical Subject Headings, including burnout and nursing faculty, for studies published from database inception to April 1, 2022. The quality of the studies included in this review was assessed using the appraisal tool for cross-sectional studies.
Results
A total of 2,551 nursing faculty members were enrolled in 11 studies. The mean score of burnout in nursing faculty members based on the Maslach Burnout Inventory (MBI) was 59.28 out of 132. The burnout score in this study was presented in 3 MBI subscales: emotional exhaustion, 21.24 (standard deviation [SD]=9.70) out of 54; depersonalization, 5.88 (SD=4.20) out of 30; and personal accomplishment, 32.16 (SD=6.45) out of 48. Several factors had significant relationships with burnout in nursing faculty members, including gender, level of education, hours of work, number of classroom students taught, full-time work, job pressure, perceived stress, subjective well-being, marital status, job satisfaction, work setting satisfaction, workplace empowerment, collegial support, management style, fulfillment of self-expectations, communication style, humor, and academic position.
Conclusion
Overall, the mean burnout scores in nursing faculty members were moderate. Therefore, health policymakers and managers can reduce the likelihood of burnout in nursing faculty members by using psychosocial interventions and support.

Citations

Citations to this article as recorded by  
  • Civility and resilience practices to address chronic workplace stress in nursing academia
    Teresa M. Stephens, Cynthia M. Clark
    Teaching and Learning in Nursing.2024;[Epub]     CrossRef
  • The state of mental health, burnout, mattering and perceived wellness culture in Doctorally prepared nursing faculty with implications for action
    Bernadette Mazurek Melnyk, Lee Ann Strait, Cindy Beckett, Andreanna Pavan Hsieh, Jeffery Messinger, Randee Masciola
    Worldviews on Evidence-Based Nursing.2023; 20(2): 142.     CrossRef
  • Pressures in the Ivory Tower: An Empirical Study of Burnout Scores among Nursing Faculty
    Sheila A. Boamah, Michael Kalu, Rosain Stennett, Emily Belita, Jasmine Travers
    International Journal of Environmental Research and Public Health.2023; 20(5): 4398.     CrossRef
  • Understanding and Fostering Mental Health and Well-Being among University Faculty: A Narrative Review
    Dalal Hammoudi Halat, Abderrezzaq Soltani, Roua Dalli, Lama Alsarraj, Ahmed Malki
    Journal of Clinical Medicine.2023; 12(13): 4425.     CrossRef
  • Strategies to promote nurse educator well-being and prevent burnout: An integrative review
    Allan Lovern, Lindsay Quinlan, Stephanie Brogdon, Cora Rabe, Laura S. Bonanno
    Teaching and Learning in Nursing.2023;[Epub]     CrossRef
  • ALS Health care provider wellness
    Gregory Hansen, Sarah Burton-MacLeod, Kerri Lynn Schellenberg
    Amyotrophic Lateral Sclerosis and Frontotemporal Degeneration.2023; : 1.     CrossRef
  • Caring for faculty: results of a distance self-care program for health professions educators [Cuidando al profesorado: resultados de un programa a distancia de autocuidado para educadores de profesiones de la salud]
    Denisse Zúñiga, Guadalupe Echeverría, Pía Nitsche, Nuria Pedrals, Attilio Rigotti, Marisol Sirhan, Klaus Puschel, Marcela Bitran
    Educación Médica.2023; : 100871.     CrossRef
  • A mixed-methods study of the effectiveness and perceptions of a course design institute for health science educators
    Julie Speer, Quincy Conley, Derek Thurber, Brittany Williams, Mitzi Wasden, Brenda Jackson
    BMC Medical Education.2022;[Epub]     CrossRef
