Most-read articles are drawn from articles published since 2022, based on the number of views during the last three months.
Reviews
-
Application of artificial intelligence chatbots, including ChatGPT, in education, scholarly work, programming, and content generation and its prospects: a narrative review
-
Tae Won Kim
-
J Educ Eval Health Prof. 2023;20:38. Published online December 27, 2023
-
DOI: https://doi.org/10.3352/jeehp.2023.20.38
-
-
6,097
View
-
735
Download
-
7
Web of Science
-
9
Crossref
-
Abstract
- This study aims to explore ChatGPT’s (GPT-3.5 version) functionalities, including reinforcement learning, diverse applications, and limitations. ChatGPT is an artificial intelligence (AI) chatbot powered by OpenAI’s Generative Pre-trained Transformer (GPT) model. The chatbot’s applications span education, programming, content generation, and more, demonstrating its versatility. ChatGPT can improve education by creating assignments and offering personalized feedback, as shown by its notable performance in medical exams, including the United States Medical Licensing Examination. However, concerns include plagiarism, reliability, and educational disparities. It aids in various research tasks, from design to writing, and has shown proficiency in summarizing and suggesting titles. Its use in scientific writing and language translation is promising, but professional oversight is needed for accuracy and originality. It assists in programming tasks like writing code, debugging, and guiding installation and updates. It offers diverse applications, from cheering up individuals to generating creative content like essays, news articles, and business plans. ChatGPT provides interactive, generative responses and understands context, making it more akin to human conversation than to conventional search engines’ keyword-based, non-interactive retrieval. ChatGPT has limitations, such as potential bias, dependence on outdated data, and revenue generation challenges. Nonetheless, ChatGPT is considered a transformative AI tool poised to redefine the future of generative technology. In conclusion, advancements in AI, such as ChatGPT, are altering how knowledge is acquired and applied, marking a shift from search engines to creativity engines. This transformation highlights the increasing importance of AI literacy and the ability to effectively utilize AI in various domains of life.
-
Citations
Citations to this article as recorded by
- Opportunities, challenges, and future directions of large language models, including ChatGPT in medical education: a systematic scoping review
Xiaojun Xu, Yixiao Chen, Jing Miao
Journal of Educational Evaluation for Health Professions.2024; 21: 6. CrossRef
- Artificial Intelligence: Fundamentals and Breakthrough Applications in Epilepsy
Wesley Kerr, Sandra Acosta, Patrick Kwan, Gregory Worrell, Mohamad A. Mikati
Epilepsy Currents.2024;[Epub] CrossRef
- A Developed Graphical User Interface-Based on Different Generative Pre-trained Transformers Models
Ekrem Küçük, İpek Balıkçı Çiçek, Zeynep Küçükakçalı, Cihan Yetiş, Cemil Çolak
ODÜ Tıp Dergisi.2024; 11(1): 18. CrossRef
- Art or Artifact: Evaluating the Accuracy, Appeal, and Educational Value of AI-Generated Imagery in DALL·E 3 for Illustrating Congenital Heart Diseases
Mohamad-Hani Temsah, Abdullah N. Alhuzaimi, Mohammed Almansour, Fadi Aljamaan, Khalid Alhasan, Munirah A. Batarfi, Ibraheem Altamimi, Amani Alharbi, Adel Abdulaziz Alsuhaibani, Leena Alwakeel, Abdulrahman Abdulkhaliq Alzahrani, Khaled B. Alsulaim, Amr Jam
Journal of Medical Systems.2024;[Epub] CrossRef
- Authentic assessment in medical education: exploring AI integration and student-as-partners collaboration
Syeda Sadia Fatima, Nabeel Ashfaque Sheikh, Athar Osama
Postgraduate Medical Journal.2024;[Epub] CrossRef
- Comparative performance analysis of large language models: ChatGPT-3.5, ChatGPT-4 and Google Gemini in glucocorticoid-induced osteoporosis
Linjian Tong, Chaoyang Zhang, Rui Liu, Jia Yang, Zhiming Sun
Journal of Orthopaedic Surgery and Research.2024;[Epub] CrossRef
- Can AI-Generated Clinical Vignettes in Japanese Be Used Medically and Linguistically?
Yasutaka Yanagita, Daiki Yokokawa, Shun Uchida, Yu Li, Takanori Uehara, Masatomi Ikusaka
Journal of General Internal Medicine.2024;[Epub] CrossRef
- ChatGPT vs. sleep disorder specialist responses to common sleep queries: Ratings by experts and laypeople
Jiyoung Kim, Seo-Young Lee, Jee Hyun Kim, Dong-Hyeon Shin, Eun Hye Oh, Jin A Kim, Jae Wook Cho
Sleep Health.2024;[Epub] CrossRef
- Technology integration into Chinese as a foreign language learning in higher education: An integrated bibliometric analysis and systematic review (2000–2024)
Binze Xu
Language Teaching Research.2024;[Epub] CrossRef
-
How to review and assess a systematic review and meta-analysis article: a methodological study (secondary publication)
-
Seung-Kwon Myung
-
J Educ Eval Health Prof. 2023;20:24. Published online August 27, 2023
-
DOI: https://doi.org/10.3352/jeehp.2023.20.24
-
-
7,111
View
-
578
Download
-
5
Web of Science
-
3
Crossref
-
Abstract
- Systematic reviews and meta-analyses have become central in many research fields, particularly medicine. They offer the highest level of evidence in evidence-based medicine and support the development and revision of clinical practice guidelines, which offer recommendations for clinicians caring for patients with specific diseases and conditions. This review summarizes the concepts of systematic reviews and meta-analyses and provides guidance on reviewing and assessing such papers. A systematic review refers to a review of a research question that uses explicit and systematic methods to identify, select, and critically appraise relevant research. In contrast, a meta-analysis is a quantitative statistical analysis that combines individual results on the same research question to estimate the common or mean effect. Conducting a meta-analysis involves defining a research topic, selecting a study design, searching literature in electronic databases, selecting relevant studies, and conducting the analysis. One can assess the findings of a meta-analysis by interpreting a forest plot and a funnel plot and by examining heterogeneity. When reviewing systematic reviews and meta-analyses, several essential points must be considered, including the originality and significance of the work, the comprehensiveness of the database search, the selection of studies based on inclusion and exclusion criteria, subgroup analyses by various factors, and the interpretation of the results based on the levels of evidence. This review will provide readers with helpful guidance to help them read, understand, and evaluate these articles.
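The analytic steps summarized in this abstract (inverse-variance pooling and heterogeneity assessment) can be sketched in a few lines of Python; the effect sizes and variances below are invented for illustration and are not taken from any study discussed in the article:

```python
import math

def meta_fixed(effects, variances):
    """Fixed-effect (inverse-variance) pooling with Cochran's Q and I^2."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled effect
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))  # Cochran's Q
    df = len(effects) - 1
    # I^2: percentage of total variation attributable to between-study heterogeneity
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, se, i2

# Three hypothetical studies reporting log odds ratios and their variances
pooled, se, i2 = meta_fixed([0.2, 0.35, 0.1], [0.04, 0.09, 0.05])
```

A forest plot simply displays each study's effect and confidence interval alongside the pooled estimate; a high I² (conventionally above 50%) signals heterogeneity that may warrant a random-effects model or the subgroup analyses the abstract mentions.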
-
Citations
Citations to this article as recorded by
- The Role of BIM in Managing Risks in Sustainability of Bridge Projects: A Systematic Review with Meta-Analysis
Dema Munef Ahmad, László Gáspár, Zsolt Bencze, Rana Ahmad Maya
Sustainability.2024; 16(3): 1242. CrossRef
- The association between long noncoding RNA ABHD11-AS1 and malignancy prognosis: a meta-analysis
Guangyao Lin, Tao Ye, Jing Wang
BMC Cancer.2024;[Epub] CrossRef
- The impact of indoor carbon dioxide exposure on human brain activity: A systematic review and meta-analysis based on studies utilizing electroencephalogram signals
Nan Zhang, Chao Liu, Caixia Hou, Wenhao Wang, Qianhui Yuan, Weijun Gao
Building and Environment.2024; 259: 111687. CrossRef
Educational/Faculty development material
-
Common models and approaches for the clinical educator to plan effective feedback encounters
-
Cesar Orsini, Veena Rodrigues, Jorge Tricio, Margarita Rosel
-
J Educ Eval Health Prof. 2022;19:35. Published online December 19, 2022
-
DOI: https://doi.org/10.3352/jeehp.2022.19.35
-
-
8,325
View
-
912
Download
-
3
Web of Science
-
4
Crossref
-
Abstract
- Giving constructive feedback is crucial for learners to bridge the gap between their current performance and the desired standards of competence. Giving effective feedback is a skill that can be learned, practiced, and improved. Therefore, our aim was to explore feedback models used in clinical settings and assess their transferability to different clinical feedback encounters. We identified the 6 most common and accepted feedback models, including the Feedback Sandwich, the Pendleton Rules, the One-Minute Preceptor, the SET-GO model, the R2C2 (Rapport/Reaction/Content/Coach), and the ALOBA (Agenda Led Outcome-based Analysis) model. We present a handy resource describing each model’s structure, strengths and weaknesses, requirements for educators and learners, and the feedback encounters for which it is best suited. These feedback models represent practical frameworks for educators to adopt but also to adapt to their preferred style, combining and modifying them if necessary to suit their needs and context.
-
Citations
Citations to this article as recorded by
- Navigating power dynamics between pharmacy preceptors and learners
Shane Tolleson, Mabel Truong, Natalie Rosario
Exploratory Research in Clinical and Social Pharmacy.2024; 13: 100408. CrossRef
- Feedback in Medical Education—Its Importance and How to Do It
Tarik Babar, Omer A. Awan
Academic Radiology.2024;[Epub] CrossRef
- Comparison of the effects of apprenticeship training by sandwich feedback and traditional methods on final-semester operating room technology students’ perioperative competence and performance: a randomized, controlled trial
Azam Hosseinpour, Morteza Nasiri, Fatemeh Keshmiri, Tayebeh Arabzadeh, Hossein Sharafi
BMC Medical Education.2024;[Epub] CrossRef
- Feedback conversations: First things first?
Katharine A. Robb, Marcy E. Rosenbaum, Lauren Peters, Susan Lenoch, Donna Lancianese, Jane L. Miller
Patient Education and Counseling.2023; 115: 107849. CrossRef
Review
-
Opportunities, challenges, and future directions of large language models, including ChatGPT in medical education: a systematic scoping review
-
Xiaojun Xu, Yixiao Chen, Jing Miao
-
J Educ Eval Health Prof. 2024;21:6. Published online March 15, 2024
-
DOI: https://doi.org/10.3352/jeehp.2024.21.6
-
-
3,398
View
-
464
Download
-
6
Web of Science
-
8
Crossref
-
Abstract
- Background
ChatGPT is a large language model (LLM) based on artificial intelligence (AI) capable of responding in multiple languages and generating nuanced and highly complex responses. While ChatGPT holds promising applications in medical education, its limitations and potential risks cannot be ignored.
Methods
A scoping review was conducted for English articles discussing ChatGPT in the context of medical education published after 2022. A literature search was performed using PubMed/MEDLINE, Embase, and Web of Science databases, and information was extracted from the relevant studies that were ultimately included.
Results
ChatGPT exhibits various potential applications in medical education, such as providing personalized learning plans and materials, creating clinical practice simulation scenarios, and assisting in writing articles. However, challenges associated with academic integrity, data accuracy, and potential harm to learning were also highlighted in the literature. The paper emphasizes certain recommendations for using ChatGPT, including the establishment of guidelines. Based on the review, 3 key research areas were proposed: cultivating the ability of medical students to use ChatGPT correctly, integrating ChatGPT into teaching activities and processes, and proposing standards for the use of AI by medical students.
Conclusion
ChatGPT has the potential to transform medical education, but careful consideration is required for its full integration. To harness the full potential of ChatGPT in medical education, attention should not only be given to the capabilities of AI but also to its impact on students and teachers.
-
Citations
Citations to this article as recorded by
- Chatbots in neurology and neuroscience: Interactions with students, patients and neurologists
Stefano Sandrone
Brain Disorders.2024; 15: 100145. CrossRef
- ChatGPT in education: unveiling frontiers and future directions through systematic literature review and bibliometric analysis
Buddhini Amarathunga
Asian Education and Development Studies.2024;[Epub] CrossRef
- Evaluating the performance of ChatGPT-3.5 and ChatGPT-4 on the Taiwan plastic surgery board examination
Ching-Hua Hsieh, Hsiao-Yun Hsieh, Hui-Ping Lin
Heliyon.2024; 10(14): e34851. CrossRef
- Preparing for Artificial General Intelligence (AGI) in Health Professions Education: AMEE Guide No. 172
Ken Masters, Anne Herrmann-Werner, Teresa Festl-Wietek, David Taylor
Medical Teacher.2024; 46(10): 1258. CrossRef
- A Comparative Analysis of ChatGPT and Medical Faculty Graduates in Medical Specialization Exams: Uncovering the Potential of Artificial Intelligence in Medical Education
Gülcan Gencer, Kerem Gencer
Cureus.2024;[Epub] CrossRef
- Research ethics and issues regarding the use of ChatGPT-like artificial intelligence platforms by authors and reviewers: a narrative review
Sang-Jun Kim
Science Editing.2024; 11(2): 96. CrossRef
- Innovation Off the Bat: Bridging the ChatGPT Gap in Digital Competence among English as a Foreign Language Teachers
Gulsara Urazbayeva, Raisa Kussainova, Aikumis Aibergen, Assel Kaliyeva, Gulnur Kantayeva
Education Sciences.2024; 14(9): 946. CrossRef
- Exploring the perceptions of Chinese pre-service teachers on the integration of generative AI in English language teaching: Benefits, challenges, and educational implications
Ji Young Chung, Seung-Hoon Jeong
Online Journal of Communication and Media Technologies.2024; 14(4): e202457. CrossRef
Research article
-
Performance of GPT-3.5 and GPT-4 on standardized urology knowledge assessment items in the United States: a descriptive study
-
Max Samuel Yudovich, Elizaveta Makarova, Christian Michael Hague, Jay Dilip Raman
-
J Educ Eval Health Prof. 2024;21:17. Published online July 8, 2024
-
DOI: https://doi.org/10.3352/jeehp.2024.21.17
-
-
1,367
View
-
246
Download
-
1
Web of Science
-
1
Crossref
-
Abstract
- Purpose
This study aimed to evaluate the performance of Chat Generative Pre-Trained Transformer (ChatGPT) with respect to standardized urology multiple-choice items in the United States.
Methods
In total, 700 multiple-choice urology board exam-style items were submitted to GPT-3.5 and GPT-4, and responses were recorded. Items were categorized based on topic and question complexity (recall, interpretation, and problem-solving). The accuracy of GPT-3.5 and GPT-4 was compared across item types in February 2024.
Results
GPT-4 answered 44.4% of items correctly compared to 30.9% for GPT-3.5 (P<0.00001). GPT-4 (vs. GPT-3.5) had higher accuracy with urologic oncology (43.8% vs. 33.9%, P=0.03), sexual medicine (44.3% vs. 27.8%, P=0.046), and pediatric urology (47.1% vs. 27.1%, P=0.012) items. Endourology (38.0% vs. 25.7%, P=0.15), reconstruction and trauma (29.0% vs. 21.0%, P=0.41), and neurourology (49.0% vs. 33.3%, P=0.11) items did not show significant differences in performance across versions. GPT-4 also outperformed GPT-3.5 on recall (45.9% vs. 27.4%, P<0.00001) and interpretation (45.6% vs. 31.5%, P=0.0005) items, whereas the difference for the higher-complexity problem-solving items (41.8% vs. 34.5%, P=0.56) was not statistically significant.
Conclusions
ChatGPT performs relatively poorly on standardized multiple-choice urology board exam-style items, with GPT-4 outperforming GPT-3.5. The accuracy was below the proposed minimum passing standards for the American Board of Urology’s Continuing Urologic Certification knowledge reinforcement activity (60%). As artificial intelligence progresses in complexity, ChatGPT may become more capable and accurate with respect to board examination items. For now, its responses should be scrutinized.
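The version-to-version accuracy comparisons reported above can be reproduced in outline with a standard two-proportion z-test; the article does not state which test it used, so the test choice is an assumption, and the item counts below are reconstructed from the reported overall percentages of 700 items:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two proportions (pooled SE)."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)  # pooled proportion under H0: p1 == p2
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided P-value from the standard normal CDF, built from math.erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Overall accuracy: GPT-4 44.4% vs. GPT-3.5 30.9%, each over 700 items
z, p_value = two_proportion_z(round(0.444 * 700), 700, round(0.309 * 700), 700)
```

With these reconstructed counts the test gives z above 5 and a P-value well below 0.00001, consistent with the overall comparison reported in the abstract.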
-
Citations
Citations to this article as recorded by
- From GPT-3.5 to GPT-4.o: A Leap in AI’s Medical Exam Performance
Markus Kipp
Information.2024; 15(9): 543. CrossRef
Reviews
-
Can an artificial intelligence chatbot be the author of a scholarly article?
-
Ju Yoen Lee
-
J Educ Eval Health Prof. 2023;20:6. Published online February 27, 2023
-
DOI: https://doi.org/10.3352/jeehp.2023.20.6
-
-
10,125
View
-
737
Download
-
54
Web of Science
-
49
Crossref
-
Abstract
- At the end of 2022, the appearance of ChatGPT, an artificial intelligence (AI) chatbot with amazing writing ability, caused a great sensation in academia. The chatbot turned out to be very capable, but also capable of deception, and the news broke that several researchers had listed the chatbot (including its earlier version) as co-authors of their academic papers. In response, Nature and Science expressed their position that this chatbot cannot be listed as an author in the papers they publish. Since an AI chatbot is not a human being, in the current legal system, the text automatically generated by an AI chatbot cannot be a copyrighted work; thus, an AI chatbot cannot be an author of a copyrighted work. Current AI chatbots such as ChatGPT are much more advanced than search engines in that they produce original text, but they still remain at the level of a search engine in that they cannot take responsibility for their writing. For this reason, they also cannot be authors from the perspective of research ethics.
-
Citations
Citations to this article as recorded by
- Risks of abuse of large language models, like ChatGPT, in scientific publishing: Authorship, predatory publishing, and paper mills
Graham Kendall, Jaime A. Teixeira da Silva
Learned Publishing.2024; 37(1): 55. CrossRef
- Can ChatGPT be an author? A study of artificial intelligence authorship policies in top academic journals
Brady D. Lund, K.T. Naheem
Learned Publishing.2024; 37(1): 13. CrossRef
- Artificial Intelligence–Generated Scientific Literature: A Critical Appraisal
Justyna Zybaczynska, Matthew Norris, Sunjay Modi, Jennifer Brennan, Pooja Jhaveri, Timothy J. Craig, Taha Al-Shaikhly
The Journal of Allergy and Clinical Immunology: In Practice.2024; 12(1): 106. CrossRef
- Does Google’s Bard Chatbot perform better than ChatGPT on the European hand surgery exam?
Goetsch Thibaut, Armaghan Dabbagh, Philippe Liverneaux
International Orthopaedics.2024; 48(1): 151. CrossRef
- A Brief Review of the Efficacy in Artificial Intelligence and Chatbot-Generated Personalized Fitness Regimens
Daniel K. Bays, Cole Verble, Kalyn M. Powers Verble
Strength & Conditioning Journal.2024; 46(4): 485. CrossRef
- Academic publisher guidelines on AI usage: A ChatGPT supported thematic analysis
Mike Perkins, Jasper Roe
F1000Research.2024; 12: 1398. CrossRef
- The Use of Artificial Intelligence in Writing Scientific Review Articles
Melissa A. Kacena, Lilian I. Plotkin, Jill C. Fehrenbacher
Current Osteoporosis Reports.2024; 22(1): 115. CrossRef
- Using AI to Write a Review Article Examining the Role of the Nervous System on Skeletal Homeostasis and Fracture Healing
Murad K. Nazzal, Ashlyn J. Morris, Reginald S. Parker, Fletcher A. White, Roman M. Natoli, Jill C. Fehrenbacher, Melissa A. Kacena
Current Osteoporosis Reports.2024; 22(1): 217. CrossRef
- GenAI et al.: Cocreation, Authorship, Ownership, Academic Ethics and Integrity in a Time of Generative AI
Aras Bozkurt
Open Praxis.2024; 16(1): 1. CrossRef
- An integrative decision-making framework to guide policies on regulating ChatGPT usage
Umar Ali Bukar, Md Shohel Sayeed, Siti Fatimah Abdul Razak, Sumendra Yogarayan, Oluwatosin Ahmed Amodu
PeerJ Computer Science.2024; 10: e1845. CrossRef
- Artificial Intelligence and Its Role in Medical Research
Anurag Gola, Ambarish Das, Amar B. Gumataj, S. Amirdhavarshini, J. Venkatachalam
Current Medical Issues.2024; 22(2): 97. CrossRef
- From advancements to ethics: Assessing ChatGPT’s role in writing research paper
Vasu Gupta, Fnu Anamika, Kinna Parikh, Meet A Patel, Rahul Jain, Rohit Jain
Turkish Journal of Internal Medicine.2024; 6(2): 74. CrossRef
- Yapay Zekânın Edebiyatta Kullanım Serüveni
Nesime Ceyhan Akça, Serap Aslan Cobutoğlu, Özlem Yeşim Özbek, Mehmet Furkan Akça
RumeliDE Dil ve Edebiyat Araştırmaları Dergisi.2024; (39): 283. CrossRef
- ChatGPT's Gastrointestinal Tumor Board Tango: A limping dance partner?
Ughur Aghamaliyev, Javad Karimbayli, Clemens Giessen-Jung, Matthias Ilmer, Kristian Unger, Dorian Andrade, Felix O. Hofmann, Maximilian Weniger, Martin K. Angele, C. Benedikt Westphalen, Jens Werner, Bernhard W. Renz
European Journal of Cancer.2024; 205: 114100. CrossRef
- Gout and Gout-Related Comorbidities: Insight and Limitations from Population-Based Registers in Sweden
Panagiota Drivelegka, Lennart TH Jacobsson, Mats Dehlin
Gout, Urate, and Crystal Deposition Disease.2024; 2(2): 144. CrossRef
- Artificial intelligence in academic cardiothoracic surgery
Adham AHMED, Irbaz HAMEED
The Journal of Cardiovascular Surgery.2024;[Epub] CrossRef
- The emergence of generative artificial intelligence platforms in 2023, journal metrics, appreciation to reviewers and volunteers, and obituary
Sun Huh
Journal of Educational Evaluation for Health Professions.2024; 21: 9. CrossRef
- A survey of safety and trustworthiness of large language models through the lens of verification and validation
Xiaowei Huang, Wenjie Ruan, Wei Huang, Gaojie Jin, Yi Dong, Changshun Wu, Saddek Bensalem, Ronghui Mu, Yi Qi, Xingyu Zhao, Kaiwen Cai, Yanghao Zhang, Sihao Wu, Peipei Xu, Dengyu Wu, Andre Freitas, Mustafa A. Mustafa
Artificial Intelligence Review.2024;[Epub] CrossRef
- Identification of ChatGPT-Generated Abstracts Within Shoulder and Elbow Surgery Poses a Challenge for Reviewers
Ryan D. Stadler, Suleiman Y. Sudah, Michael A. Moverman, Patrick J. Denard, Xavier A. Duralde, Grant E. Garrigues, Christopher S. Klifto, Jonathan C. Levy, Surena Namdari, Joaquin Sanchez-Sotelo, Mariano E. Menendez
Arthroscopy: The Journal of Arthroscopic & Related Surgery.2024;[Epub] CrossRef
- Decision-Making Framework for the Utilization of Generative Artificial Intelligence in Education: A Case Study of ChatGPT
Umar Ali Bukar, Md. Shohel Sayeed, Siti Fatimah Abdul Razak, Sumendra Yogarayan, Radhwan Sneesl
IEEE Access.2024; 12: 95368. CrossRef
- ChatGPT or Gemini: Who Makes the Better Scientific Writing Assistant?
Hatoon S. AlSagri, Faiza Farhat, Shahab Saquib Sohail, Abdul Khader Jilani Saudagar
Journal of Academic Ethics.2024;[Epub] CrossRef
- The Syntax of Smart Writing: Artificial Intelligence Unveiled
Balaji Arumugam, Arun Murugan, Kirubakaran S., Saranya Rajamanickam
International Journal of Preventative & Evidence Based Medicine.2024; : 1. CrossRef
- Generative artificial intelligence usage by researchers at work: Effects of gender, career stage, type of workplace, and perceived barriers
Pablo Dorta-González, Alexis Jorge López-Puig, María Isabel Dorta-González, Sara M. González-Betancor
Telematics and Informatics.2024; 94: 102187. CrossRef
- Let stochastic parrots squawk: why academic journals should allow large language models to coauthor articles
Nicholas J. Abernethy
AI and Ethics.2024;[Epub] CrossRef
- Universal skepticism of ChatGPT: a review of early literature on chat generative pre-trained transformer
Casey Watters, Michal K. Lemanski
Frontiers in Big Data.2023;[Epub] CrossRef
- The importance of human supervision in the use of ChatGPT as a support tool in scientific writing
William Castillo-González
Metaverse Basic and Applied Research.2023;[Epub] CrossRef
- ChatGPT for Future Medical and Dental Research
Bader Fatani
Cureus.2023;[Epub] CrossRef
- Chatbots in Medical Research
Punit Sharma
Clinical Nuclear Medicine.2023; 48(9): 838. CrossRef
- Potential applications of ChatGPT in dermatology
Nicolas Kluger
Journal of the European Academy of Dermatology and Venereology.2023;[Epub] CrossRef
- The emergent role of artificial intelligence, natural learning processing, and large language models in higher education and research
Tariq Alqahtani, Hisham A. Badreldin, Mohammed Alrashed, Abdulrahman I. Alshaya, Sahar S. Alghamdi, Khalid bin Saleh, Shuroug A. Alowais, Omar A. Alshaya, Ishrat Rahman, Majed S. Al Yami, Abdulkareem M. Albekairy
Research in Social and Administrative Pharmacy.2023; 19(8): 1236. CrossRef
- ChatGPT Performance on the American Urological Association Self-assessment Study Program and the Potential Influence of Artificial Intelligence in Urologic Training
Nicholas A. Deebel, Ryan Terlecki
Urology.2023; 177: 29. CrossRef
- Intelligence or artificial intelligence? More hard problems for authors of Biological Psychology, the neurosciences, and everyone else
Thomas Ritz
Biological Psychology.2023; 181: 108590. CrossRef
- The ethics of disclosing the use of artificial intelligence tools in writing scholarly manuscripts
Mohammad Hosseini, David B Resnik, Kristi Holmes
Research Ethics.2023; 19(4): 449. CrossRef
- How trustworthy is ChatGPT? The case of bibliometric analyses
Faiza Farhat, Shahab Saquib Sohail, Dag Øivind Madsen
Cogent Engineering.2023;[Epub] CrossRef
- Disclosing use of Artificial Intelligence: Promoting transparency in publishing
Parvaiz A. Koul
Lung India.2023; 40(5): 401. CrossRef
- ChatGPT in medical research: challenging time ahead
Daideepya C Bhargava, Devendra Jadav, Vikas P Meshram, Tanuj Kanchan
Medico-Legal Journal.2023; 91(4): 223. CrossRef
- Academic publisher guidelines on AI usage: A ChatGPT supported thematic analysis
Mike Perkins, Jasper Roe
F1000Research.2023; 12: 1398. CrossRef
- The Role of AI in Writing an Article and Whether it Can Be a Co-author: What if it Gets Support From 2 Different AIs Like ChatGPT and Google Bard for the Same Theme?
İlhan Bahşi, Ayşe Balat
Journal of Craniofacial Surgery.2023;[Epub] CrossRef
- Ethical consideration of the use of generative artificial intelligence, including ChatGPT in writing a nursing article
Sun Huh
Child Health Nursing Research.2023; 29(4): 249. CrossRef
- ChatGPT in medical writing: A game-changer or a gimmick?
Shital Sarah Ahaley, Ankita Pandey, Simran Kaur Juneja, Tanvi Suhane Gupta, Sujatha Vijayakumar
Perspectives in Clinical Research.2023;[Epub] CrossRef
- Artificial Intelligence-Supported Systems in Anesthesiology and Its Standpoint to Date—A Review
Fiona M. P. Pham
Open Journal of Anesthesiology.2023; 13(07): 140. CrossRef
- ChatGPT as an innovative tool for increasing sales in online stores
Michał Orzoł, Katarzyna Szopik-Depczyńska
Procedia Computer Science.2023; 225: 3450. CrossRef
- Intelligent Plagiarism as a Misconduct in Academic Integrity
Jesús Miguel Muñoz-Cantero, Eva Maria Espiñeira-Bellón
Acta Médica Portuguesa.2023; 37(1): 1. CrossRef
- Follow-up of Artificial Intelligence Development and its Controlled Contribution to the Article: Step to the Authorship?
Ekrem Solmaz
European Journal of Therapeutics.2023;[Epub] CrossRef
- May Artificial Intelligence Be a Co-Author on an Academic Paper?
Ayşe Balat, İlhan Bahşi
European Journal of Therapeutics.2023; 29(3): e12. CrossRef
- Opportunities and challenges for ChatGPT and large language models in biomedicine and health
Shubo Tian, Qiao Jin, Lana Yeganova, Po-Ting Lai, Qingqing Zhu, Xiuying Chen, Yifan Yang, Qingyu Chen, Won Kim, Donald C Comeau, Rezarta Islamaj, Aadit Kapoor, Xin Gao, Zhiyong Lu
Briefings in Bioinformatics.2023;[Epub] CrossRef
- ChatGPT: "To be or not to be" ... in academic research. The human mind's analytical rigor and capacity to discriminate between AI bots' truths and hallucinations
Aurelian Anghelescu, Ilinca Ciobanu, Constantin Munteanu, Lucia Ana Maria Anghelescu, Gelu Onose
Balneo and PRM Research Journal.2023; 14(Vol.14, no): 614. CrossRef
- Editorial policies on the use of generative artificial intelligence in article writing and peer-review in the Journal of Educational Evaluation for Health Professions
Sun Huh
Journal of Educational Evaluation for Health Professions.2023; 20: 40. CrossRef
- Should We Wait for Major Frauds to Unveil to Plan an AI Use License?
Istemihan Coban
European Journal of Therapeutics.2023; 30(2): 198. CrossRef
-
Immersive simulation in nursing and midwifery education: a systematic review
-
Lahoucine Ben Yahya, Aziz Naciri, Mohamed Radid, Ghizlane Chemsi
-
J Educ Eval Health Prof. 2024;21:19. Published online August 8, 2024
-
DOI: https://doi.org/10.3352/jeehp.2024.21.19
-
-
Abstract
- Purpose
Immersive simulation is an innovative training approach in health education that enhances student learning. This study examined its impact on engagement, motivation, and academic performance in nursing and midwifery students.
Methods
A comprehensive systematic search was meticulously conducted in 4 reputable databases—Scopus, PubMed, Web of Science, and Science Direct—following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. The research protocol was pre-registered in the PROSPERO registry, ensuring transparency and rigor. The quality of the included studies was assessed using the Medical Education Research Study Quality Instrument.
Results
Out of 90 identified studies, 11 were included in the present review, involving 1,090 participants. Four out of 5 studies observed high post-test engagement scores in the intervention groups. Additionally, 5 out of 6 studies that evaluated motivation found higher post-test motivational scores in the intervention groups than in control groups using traditional approaches. Furthermore, among the 8 out of 11 studies that evaluated academic performance during immersive simulation training, 5 reported significant differences (P<0.001) in favor of the students in the intervention groups.
Conclusion
Immersive simulation, as demonstrated by this study, has a significant potential to enhance student engagement, motivation, and academic performance, surpassing traditional teaching methods. This potential underscores the urgent need for future research in various contexts to better integrate this innovative educational approach into nursing and midwifery education curricula, inspiring hope for improved teaching methods.
Brief report
-
Are ChatGPT’s knowledge and interpretation ability comparable to those of medical students in Korea for taking a parasitology examination?: a descriptive study
-
Sun Huh
-
J Educ Eval Health Prof. 2023;20:1. Published online January 11, 2023
-
DOI: https://doi.org/10.3352/jeehp.2023.20.1
-
-
13,800
View
-
1,071
Download
-
162
Web of Science
-
80
Crossref
-
Abstract
- This study aimed to compare the knowledge and interpretation ability of ChatGPT, an artificial intelligence (AI) language model, with those of medical students in Korea by administering a parasitology examination to both ChatGPT and medical students. The examination consisted of 79 items and was administered to ChatGPT on January 1, 2023. The examination results were analyzed in terms of ChatGPT’s overall performance score, its correct answer rate by the items’ knowledge level, and the acceptability of its explanations of the items. ChatGPT’s performance was lower than that of the medical students, and ChatGPT’s correct answer rate was not related to the items’ knowledge level. However, there was a relationship between acceptable explanations and correct answers. In conclusion, ChatGPT’s knowledge and interpretation ability for this parasitology examination were not yet comparable to those of medical students in Korea.
-
Citations
Citations to this article as recorded by
- Performance of ChatGPT on the India Undergraduate Community Medicine Examination: Cross-Sectional Study
Aravind P Gandhi, Felista Karen Joesph, Vineeth Rajagopal, P Aparnavi, Sushma Katkuri, Sonal Dayama, Prakasini Satapathy, Mahalaqua Nazli Khatib, Shilpa Gaidhane, Quazi Syed Zahiruddin, Ashish Behera
JMIR Formative Research.2024; 8: e49964. CrossRef
- Unveiling the ChatGPT phenomenon: Evaluating the consistency and accuracy of endodontic question answers
Ana Suárez, Víctor Díaz‐Flores García, Juan Algar, Margarita Gómez Sánchez, María Llorente de Pedro, Yolanda Freire
International Endodontic Journal.2024; 57(1): 108. CrossRef
- Bob or Bot: Exploring ChatGPT's Answers to University Computer Science Assessment
Mike Richards, Kevin Waugh, Mark Slaymaker, Marian Petre, John Woodthorpe, Daniel Gooch
ACM Transactions on Computing Education.2024; 24(1): 1. CrossRef
- A systematic review of ChatGPT use in K‐12 education
Peng Zhang, Gemma Tur
European Journal of Education.2024;[Epub] CrossRef
- Evaluating ChatGPT as a self‐learning tool in medical biochemistry: A performance assessment in undergraduate medical university examination
Krishna Mohan Surapaneni, Anusha Rajajagadeesan, Lakshmi Goudhaman, Shalini Lakshmanan, Saranya Sundaramoorthi, Dineshkumar Ravi, Kalaiselvi Rajendiran, Porchelvan Swaminathan
Biochemistry and Molecular Biology Education.2024; 52(2): 237. CrossRef
- Examining the use of ChatGPT in public universities in Hong Kong: a case study of restricted access areas
Michelle W. T. Cheng, Iris H. Y. YIM
Discover Education.2024;[Epub] CrossRef
- Performance of ChatGPT on Ophthalmology-Related Questions Across Various Examination Levels: Observational Study
Firas Haddad, Joanna S Saade
JMIR Medical Education.2024; 10: e50842. CrossRef
- Assessment of Artificial Intelligence Platforms With Regard to Medical Microbiology Knowledge: An Analysis of ChatGPT and Gemini
Jai Ranjan, Absar Ahmad, Monalisa Subudhi, Ajay Kumar
Cureus.2024;[Epub] CrossRef
- A comparative vignette study: Evaluating the potential role of a generative AI model in enhancing clinical decision‐making in nursing
Mor Saban, Ilana Dubovi
Journal of Advanced Nursing.2024;[Epub] CrossRef
- Comparison of the Performance of GPT-3.5 and GPT-4 With That of Medical Students on the Written German Medical Licensing Examination: Observational Study
Annika Meyer, Janik Riese, Thomas Streichert
JMIR Medical Education.2024; 10: e50965. CrossRef
- From hype to insight: Exploring ChatGPT's early footprint in education via altmetrics and bibliometrics
Lung‐Hsiang Wong, Hyejin Park, Chee‐Kit Looi
Journal of Computer Assisted Learning.2024; 40(4): 1428. CrossRef
- A scoping review of artificial intelligence in medical education: BEME Guide No. 84
Morris Gordon, Michelle Daniel, Aderonke Ajiboye, Hussein Uraiby, Nicole Y. Xu, Rangana Bartlett, Janice Hanson, Mary Haas, Maxwell Spadafore, Ciaran Grafton-Clarke, Rayhan Yousef Gasiea, Colin Michie, Janet Corral, Brian Kwan, Diana Dolmans, Satid Thamma
Medical Teacher.2024; 46(4): 446. CrossRef
- Üniversite Öğrencilerinin ChatGPT 3,5 Deneyimleri: Yapay Zekâyla Yazılmış Masal Varyantları
Bilge GÖK, Fahri TEMİZYÜREK, Özlem BAŞ
Korkut Ata Türkiyat Araştırmaları Dergisi.2024; (14): 1040. CrossRef
- Tracking ChatGPT Research: Insights from the literature and the web
Omar Mubin, Fady Alnajjar, Zouheir Trabelsi, Luqman Ali, Medha Mohan Ambali Parambil, Zhao Zou
IEEE Access.2024; : 1. CrossRef
- Potential applications of ChatGPT in obstetrics and gynecology in Korea: a review article
YooKyung Lee, So Yun Kim
Obstetrics & Gynecology Science.2024; 67(2): 153. CrossRef
- Application of generative language models to orthopaedic practice
Jessica Caterson, Olivia Ambler, Nicholas Cereceda-Monteoliva, Matthew Horner, Andrew Jones, Arwel Tomos Poacher
BMJ Open.2024; 14(3): e076484. CrossRef
- Opportunities, challenges, and future directions of large language models, including ChatGPT in medical education: a systematic scoping review
Xiaojun Xu, Yixiao Chen, Jing Miao
Journal of Educational Evaluation for Health Professions.2024; 21: 6. CrossRef
- The advent of ChatGPT: Job Made Easy or Job Loss to Data Analysts
Abiola Timothy Owolabi, Oluwaseyi Oluwadamilare Okunlola, Emmanuel Taiwo Adewuyi, Janet Iyabo Idowu, Olasunkanmi James Oladapo
WSEAS TRANSACTIONS ON COMPUTERS.2024; 23: 24. CrossRef
- ChatGPT in dentomaxillofacial radiology education
Hilal Peker Öztürk, Hakan Avsever, Buğra Şenel, Şükran Ayran, Mustafa Çağrı Peker, Hatice Seda Özgedik, Nurten Baysal
Journal of Health Sciences and Medicine.2024; 7(2): 224. CrossRef
- Performance of ChatGPT on the Korean National Examination for Dental Hygienists
Soo-Myoung Bae, Hye-Rim Jeon, Gyoung-Nam Kim, Seon-Hui Kwak, Hyo-Jin Lee
Journal of Dental Hygiene Science.2024; 24(1): 62. CrossRef
- Medical knowledge of ChatGPT in public health, infectious diseases, COVID-19 pandemic, and vaccines: multiple choice questions examination based performance
Sultan Ayoub Meo, Metib Alotaibi, Muhammad Zain Sultan Meo, Muhammad Omair Sultan Meo, Mashhood Hamid
Frontiers in Public Health.2024;[Epub] CrossRef
- Unlock the potential for Saudi Arabian higher education: a systematic review of the benefits of ChatGPT
Eman Faisal
Frontiers in Education.2024;[Epub] CrossRef
- Does the Information Quality of ChatGPT Meet the Requirements of Orthopedics and Trauma Surgery?
Adnan Kasapovic, Thaer Ali, Mari Babasiz, Jessica Bojko, Martin Gathen, Robert Kaczmarczyk, Jonas Roos
Cureus.2024;[Epub] CrossRef
- Exploring the Profile of University Assessments Flagged as Containing AI-Generated Material
Daniel Gooch, Kevin Waugh, Mike Richards, Mark Slaymaker, John Woodthorpe
ACM Inroads.2024; 15(2): 39. CrossRef
- Comparing the Performance of ChatGPT-4 and Medical Students on MCQs at Varied Levels of Bloom’s Taxonomy
Ambadasu Bharatha, Nkemcho Ojeh, Ahbab Mohammad Fazle Rabbi, Michael Campbell, Kandamaran Krishnamurthy, Rhaheem Layne-Yarde, Alok Kumar, Dale Springer, Kenneth Connell, Md Anwarul Majumder
Advances in Medical Education and Practice.2024; Volume 15: 393. CrossRef
- The emergence of generative artificial intelligence platforms in 2023, journal metrics, appreciation to reviewers and volunteers, and obituary
Sun Huh
Journal of Educational Evaluation for Health Professions.2024; 21: 9. CrossRef
- ChatGPT, a Friend or a Foe in Medical Education: A Review of Strengths, Challenges, and Opportunities
Mahdi Zarei, Maryam Zarei, Sina Hamzehzadeh, Sepehr Shakeri Bavil Oliyaei, Mohammad-Salar Hosseini
Shiraz E-Medical Journal.2024;[Epub] CrossRef
- Augmenting intensive care unit nursing practice with generative AI: A formative study of diagnostic synergies using simulation‐based clinical cases
Chedva Levin, Moriya Suliman, Etti Naimi, Mor Saban
Journal of Clinical Nursing.2024;[Epub] CrossRef
- Artificial intelligence chatbots for the nutrition management of diabetes and the metabolic syndrome
Farah Naja, Mandy Taktouk, Dana Matbouli, Sharfa Khaleel, Ayah Maher, Berna Uzun, Maryam Alameddine, Lara Nasreddine
European Journal of Clinical Nutrition.2024; 78(10): 887. CrossRef
- Large language models in healthcare: from a systematic review on medical examinations to a comparative analysis on fundamentals of robotic surgery online test
Andrea Moglia, Konstantinos Georgiou, Pietro Cerveri, Luca Mainardi, Richard M. Satava, Alfred Cuschieri
Artificial Intelligence Review.2024;[Epub] CrossRef
- Is ChatGPT Enhancing Youth’s Learning, Engagement and Satisfaction?
Christina Sanchita Shah, Smriti Mathur, Sushant Kr. Vishnoi
Journal of Computer Information Systems.2024; : 1. CrossRef
- Comparison of ChatGPT, Gemini, and Le Chat with physician interpretations of medical laboratory questions from an online health forum
Annika Meyer, Ari Soleman, Janik Riese, Thomas Streichert
Clinical Chemistry and Laboratory Medicine (CCLM).2024;[Epub] CrossRef
- Performance of ChatGPT-3.5 and GPT-4 in national licensing examinations for medicine, pharmacy, dentistry, and nursing: a systematic review and meta-analysis
Hye Kyung Jin, Ha Eun Lee, EunYoung Kim
BMC Medical Education.2024;[Epub] CrossRef
- Role of ChatGPT in Dentistry: A Review
Pratik Surana, Priyanka P. Ostwal, Shruti Vishal Dev, Jayesh Tiwari, Kadire Shiva Charan Yadav, Gajji Renuka
Research Journal of Pharmacy and Technology.2024; : 3489. CrossRef
- Applicability of ChatGPT in Assisting to Solve Higher Order Problems in Pathology
Ranwir K Sinha, Asitava Deb Roy, Nikhil Kumar, Himel Mondal
Cureus.2023;[Epub] CrossRef
- Issues in the 3rd year of the COVID-19 pandemic, including computer-based testing, study design, ChatGPT, journal metrics, and appreciation to reviewers
Sun Huh
Journal of Educational Evaluation for Health Professions.2023; 20: 5. CrossRef
- Emergence of the metaverse and ChatGPT in journal publishing after the COVID-19 pandemic
Sun Huh
Science Editing.2023; 10(1): 1. CrossRef
- Assessing the Capability of ChatGPT in Answering First- and Second-Order Knowledge Questions on Microbiology as per Competency-Based Medical Education Curriculum
Dipmala Das, Nikhil Kumar, Langamba Angom Longjam, Ranwir Sinha, Asitava Deb Roy, Himel Mondal, Pratima Gupta
Cureus.2023;[Epub] CrossRef
- Evaluating ChatGPT's Ability to Solve Higher-Order Questions on the Competency-Based Medical Education Curriculum in Medical Biochemistry
Arindam Ghosh, Aritri Bir
Cureus.2023;[Epub] CrossRef
- Overview of Early ChatGPT’s Presence in Medical Literature: Insights From a Hybrid Literature Review by ChatGPT and Human Experts
Omar Temsah, Samina A Khan, Yazan Chaiah, Abdulrahman Senjab, Khalid Alhasan, Amr Jamal, Fadi Aljamaan, Khalid H Malki, Rabih Halwani, Jaffar A Al-Tawfiq, Mohamad-Hani Temsah, Ayman Al-Eyadhy
Cureus.2023;[Epub] CrossRef
- ChatGPT for Future Medical and Dental Research
Bader Fatani
Cureus.2023;[Epub] CrossRef
- ChatGPT in Dentistry: A Comprehensive Review
Hind M Alhaidry, Bader Fatani, Jenan O Alrayes, Aljowhara M Almana, Nawaf K Alfhaed
Cureus.2023;[Epub] CrossRef
- Can we trust AI chatbots’ answers about disease diagnosis and patient care?
Sun Huh
Journal of the Korean Medical Association.2023; 66(4): 218. CrossRef
- Large Language Models in Medical Education: Opportunities, Challenges, and Future Directions
Alaa Abd-alrazaq, Rawan AlSaad, Dari Alhuwail, Arfan Ahmed, Padraig Mark Healy, Syed Latifi, Sarah Aziz, Rafat Damseh, Sadam Alabed Alrazak, Javaid Sheikh
JMIR Medical Education.2023; 9: e48291. CrossRef
- Early applications of ChatGPT in medical practice, education and research
Sam Sedaghat
Clinical Medicine.2023; 23(3): 278. CrossRef
- A Review of Research on Teaching and Learning Transformation under the Influence of ChatGPT Technology
璇 师
Advances in Education.2023; 13(05): 2617. CrossRef
- Performance of GPT-3.5 and GPT-4 on the Japanese Medical Licensing Examination: Comparison Study
Soshi Takagi, Takashi Watari, Ayano Erabi, Kota Sakaguchi
JMIR Medical Education.2023; 9: e48002. CrossRef
- ChatGPT’s quiz skills in different otolaryngology subspecialties: an analysis of 2576 single-choice and multiple-choice board certification preparation questions
Cosima C. Hoch, Barbara Wollenberg, Jan-Christoffer Lüers, Samuel Knoedler, Leonard Knoedler, Konstantin Frank, Sebastian Cotofana, Michael Alfertshofer
European Archives of Oto-Rhino-Laryngology.2023; 280(9): 4271. CrossRef
- Analysing the Applicability of ChatGPT, Bard, and Bing to Generate Reasoning-Based Multiple-Choice Questions in Medical Physiology
Mayank Agarwal, Priyanka Sharma, Ayan Goswami
Cureus.2023;[Epub] CrossRef
- The Intersection of ChatGPT, Clinical Medicine, and Medical Education
Rebecca Shin-Yee Wong, Long Chiau Ming, Raja Affendi Raja Ali
JMIR Medical Education.2023; 9: e47274. CrossRef
- The Role of Artificial Intelligence in Higher Education: ChatGPT Assessment for Anatomy Course
Tarık TALAN, Yusuf KALINKARA
Uluslararası Yönetim Bilişim Sistemleri ve Bilgisayar Bilimleri Dergisi.2023; 7(1): 33. CrossRef
- Comparing ChatGPT’s ability to rate the degree of stereotypes and the consistency of stereotype attribution with those of medical students in New Zealand in developing a similarity rating test: a methodological study
Chao-Cheng Lin, Zaine Akuhata-Huntington, Che-Wei Hsu
Journal of Educational Evaluation for Health Professions.2023; 20: 17. CrossRef
- Examining Real-World Medication Consultations and Drug-Herb Interactions: ChatGPT Performance Evaluation
Hsing-Yu Hsu, Kai-Cheng Hsu, Shih-Yen Hou, Ching-Lung Wu, Yow-Wen Hsieh, Yih-Dih Cheng
JMIR Medical Education.2023; 9: e48433. CrossRef
- Assessing the Efficacy of ChatGPT in Solving Questions Based on the Core Concepts in Physiology
Arijita Banerjee, Aquil Ahmad, Payal Bhalla, Kavita Goyal
Cureus.2023;[Epub] CrossRef
- ChatGPT Performs on the Chinese National Medical Licensing Examination
Xinyi Wang, Zhenye Gong, Guoxin Wang, Jingdan Jia, Ying Xu, Jialu Zhao, Qingye Fan, Shaun Wu, Weiguo Hu, Xiaoyang Li
Journal of Medical Systems.2023;[Epub] CrossRef
- Artificial intelligence and its impact on job opportunities among university students in North Lima, 2023
Doris Ruiz-Talavera, Jaime Enrique De la Cruz-Aguero, Nereo García-Palomino, Renzo Calderón-Espinoza, William Joel Marín-Rodriguez
ICST Transactions on Scalable Information Systems.2023;[Epub] CrossRef
- Revolutionizing Dental Care: A Comprehensive Review of Artificial Intelligence Applications Among Various Dental Specialties
Najd Alzaid, Omar Ghulam, Modhi Albani, Rafa Alharbi, Mayan Othman, Hasan Taher, Saleem Albaradie, Suhael Ahmed
Cureus.2023;[Epub] CrossRef
- Opportunities, Challenges, and Future Directions of Generative Artificial Intelligence in Medical Education: Scoping Review
Carl Preiksaitis, Christian Rose
JMIR Medical Education.2023; 9: e48785. CrossRef
- Exploring the impact of language models, such as ChatGPT, on student learning and assessment
Araz Zirar
Review of Education.2023;[Epub] CrossRef
- Large Language Models and Artificial Intelligence: A Primer for Plastic Surgeons on the Demonstrated and Potential Applications, Promises, and Limitations of ChatGPT
Jad Abi-Rafeh, Hong Hao Xu, Roy Kazan, Ruth Tevlin, Heather Furnas
Aesthetic Surgery Journal.2023;[Epub] CrossRef
- Evaluating the reliability of ChatGPT as a tool for imaging test referral: a comparative study with a clinical decision support system
Shani Rosen, Mor Saban
European Radiology.2023; 34(5): 2826. CrossRef
- Redesigning Tertiary Educational Evaluation with AI: A Task-Based Analysis of LIS Students’ Assessment on Written Tests and Utilizing ChatGPT at NSTU
Shamima Yesmin
Science & Technology Libraries.2023; : 1. CrossRef
- ChatGPT and the AI revolution: a comprehensive investigation of its multidimensional impact and potential
Mohd Afjal
Library Hi Tech.2023;[Epub] CrossRef
- The Significance of Artificial Intelligence Platforms in Anatomy Education: An Experience With ChatGPT and Google Bard
Hasan B Ilgaz, Zehra Çelik
Cureus.2023;[Epub] CrossRef
- Is ChatGPT’s Knowledge and Interpretative Ability Comparable to First Professional MBBS (Bachelor of Medicine, Bachelor of Surgery) Students of India in Taking a Medical Biochemistry Examination?
Abhra Ghosh, Nandita Maini Jindal, Vikram K Gupta, Ekta Bansal, Navjot Kaur Bajwa, Abhishek Sett
Cureus.2023;[Epub] CrossRef
- Ethical consideration of the use of generative artificial intelligence, including ChatGPT in writing a nursing article
Sun Huh
Child Health Nursing Research.2023; 29(4): 249. CrossRef
- Potential Use of ChatGPT for Patient Information in Periodontology: A Descriptive Pilot Study
Osman Babayiğit, Zeynep Tastan Eroglu, Dilek Ozkan Sen, Fatma Ucan Yarkac
Cureus.2023;[Epub] CrossRef
- Efficacy and limitations of ChatGPT as a biostatistical problem-solving tool in medical education in Serbia: a descriptive study
Aleksandra Ignjatović, Lazar Stevanović
Journal of Educational Evaluation for Health Professions.2023; 20: 28. CrossRef
- Assessing the Performance of ChatGPT in Medical Biochemistry Using Clinical Case Vignettes: Observational Study
Krishna Mohan Surapaneni
JMIR Medical Education.2023; 9: e47191. CrossRef
- Performance of ChatGPT, Bard, Claude, and Bing on the Peruvian National Licensing Medical Examination: a cross-sectional study
Betzy Clariza Torres-Zegarra, Wagner Rios-Garcia, Alvaro Micael Ñaña-Cordova, Karen Fatima Arteaga-Cisneros, Xiomara Cristina Benavente Chalco, Marina Atena Bustamante Ordoñez, Carlos Jesus Gutierrez Rios, Carlos Alberto Ramos Godoy, Kristell Luisa Teresa
Journal of Educational Evaluation for Health Professions.2023; 20: 30. CrossRef
- ChatGPT’s performance in German OB/GYN exams – paving the way for AI-enhanced medical education and clinical practice
Maximilian Riedel, Katharina Kaefinger, Antonia Stuehrenberg, Viktoria Ritter, Niklas Amann, Anna Graf, Florian Recker, Evelyn Klein, Marion Kiechle, Fabian Riedel, Bastian Meyer
Frontiers in Medicine.2023;[Epub] CrossRef
- Medical students’ patterns of using ChatGPT as a feedback tool and perceptions of ChatGPT in a Leadership and Communication course in Korea: a cross-sectional study
Janghee Park
Journal of Educational Evaluation for Health Professions.2023; 20: 29. CrossRef
- FROM TEXT TO DIAGNOSE: CHATGPT’S EFFICACY IN MEDICAL DECISION-MAKING
Yaroslav Mykhalko, Pavlo Kish, Yelyzaveta Rubtsova, Oleksandr Kutsyn, Valentyna Koval
Wiadomości Lekarskie.2023; 76(11): 2345. CrossRef
- Using ChatGPT for Clinical Practice and Medical Education: Cross-Sectional Survey of Medical Students’ and Physicians’ Perceptions
Pasin Tangadulrat, Supinya Sono, Boonsin Tangtrakulwanich
JMIR Medical Education.2023; 9: e50658. CrossRef
- Below average ChatGPT performance in medical microbiology exam compared to university students
Malik Sallam, Khaled Al-Salahat
Frontiers in Education.2023;[Epub] CrossRef
- ChatGPT: "To be or not to be" ... in academic research. The human mind's analytical rigor and capacity to discriminate between AI bots' truths and hallucinations
Aurelian Anghelescu, Ilinca Ciobanu, Constantin Munteanu, Lucia Ana Maria Anghelescu, Gelu Onose
Balneo and PRM Research Journal.2023; 14(Vol.14, no): 614. CrossRef
- ChatGPT Review: A Sophisticated Chatbot Models in Medical & Health-related Teaching and Learning
Nur Izah Ab Razak, Muhammad Fawwaz Muhammad Yusoff, Rahmita Wirza O.K. Rahmat
Malaysian Journal of Medicine and Health Sciences.2023; 19(s12): 98. CrossRef
- Application of artificial intelligence chatbots, including ChatGPT, in education, scholarly work, programming, and content generation and its prospects: a narrative review
Tae Won Kim
Journal of Educational Evaluation for Health Professions.2023; 20: 38. CrossRef
- Trends in research on ChatGPT and adoption-related issues discussed in articles: a narrative review
Sang-Jun Kim
Science Editing.2023; 11(1): 3. CrossRef
- Information amount, accuracy, and relevance of generative artificial intelligence platforms’ answers regarding learning objectives of medical arthropodology evaluated in English and Korean queries in December 2023: a descriptive study
Hyunju Lee, Soobin Park
Journal of Educational Evaluation for Health Professions.2023; 20: 39. CrossRef
Educational/Faculty development material
-
The 6 degrees of curriculum integration in medical education in the United States
-
Julie Youm, Jennifer Christner, Kevin Hittle, Paul Ko, Cinda Stone, Angela D. Blood, Samara Ginzburg
-
J Educ Eval Health Prof. 2024;21:15. Published online June 13, 2024
-
DOI: https://doi.org/10.3352/jeehp.2024.21.15
-
-
Abstract
PDF · Supplementary Material
- Despite explicit expectations and accreditation requirements for an integrated curriculum, there is still no widely accepted definition, set of best practices for implementation, or criteria for successful curriculum integration. To address this lack of consensus, we reviewed the literature and herein propose a definition of curriculum integration for the medical education audience. We further believe that medical education is ready to move beyond “horizontal” (1-dimensional) and “vertical” (2-dimensional) integration, and we propose a model of “6 degrees of curriculum integration” to expand the 2-dimensional concept for future designs of medical education programs and to best prepare learners to meet the needs of patients. These 6 degrees are: interdisciplinary, timing and sequencing, instruction and assessment, incorporation of basic and clinical sciences, knowledge and skills-based competency progression, and graduated responsibilities in patient care. We encourage medical educators to look beyond 2-dimensional integration toward this holistic and interconnected representation of curriculum integration.
Research articles
-
Mentorship and self-efficacy are associated with lower burnout in physical therapists in the United States: a cross-sectional survey study
-
Matthew Pugliese, Jean-Michel Brismée, Brad Allen, Sean Riley, Justin Tammany, Paul Mintken
-
J Educ Eval Health Prof. 2023;20:27. Published online September 27, 2023
-
DOI: https://doi.org/10.3352/jeehp.2023.20.27
-
-
4,710
View
-
364
Download
-
2
Web of Science
-
2
Crossref
-
Abstract
PDF · Supplementary Material
- Purpose
This study investigated the prevalence of burnout in physical therapists in the United States and the relationships between burnout and education, mentorship, and self-efficacy.
Methods
This was a cross-sectional survey study. An electronic survey was distributed to practicing physical therapists across the United States over a 6-week period from December 2020 to January 2021. The survey was completed by 2,813 physical therapists from all states. The majority were female (68.72%), White or Caucasian (80.13%), and employed full-time (77.14%). Respondents completed questions on demographics, education, mentorship, self-efficacy, and burnout. The Burnout Clinical Subtypes Questionnaire 12 (BCSQ-12) and self-reports were used to quantify burnout, and the General Self-Efficacy Scale (GSES) was used to measure self-efficacy. Descriptive and inferential analyses were performed.
Results
Respondents from home health (median BCSQ-12=42.00) and skilled nursing facility settings (median BCSQ-12=42.00) displayed the highest burnout scores. Burnout was significantly lower among those who provided formal mentorship (median BCSQ-12=39.00, P=0.0001) than among those who provided none (median BCSQ-12=41.00). Respondents who received formal mentorship (median BCSQ-12=38.00, P=0.0028) likewise displayed significantly lower burnout than those who received none (median BCSQ-12=41.00). A moderate negative correlation (rho=-0.49) was observed between GSES and burnout scores, and a strong positive correlation was found between self-reported burnout status and burnout scores (rank-biserial r=0.61).
Conclusion
Burnout is prevalent in the physical therapy profession, as almost half of respondents (49.34%) reported burnout. Providing or receiving mentorship and higher self-efficacy were associated with lower burnout. Organizations should consider measuring burnout levels, investing in mentorship programs, and implementing strategies to improve self-efficacy.
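The correlation reported in this abstract between self-efficacy (GSES) and burnout (BCSQ-12) scores is a Spearman rank correlation (rho). A dependency-free sketch of how such a coefficient is computed is shown below; the variable names and scores are hypothetical toy values, not the study's data.

```python
def average_ranks(values):
    """1-based ranks; tied values share the mean of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1  # mean of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho: the Pearson correlation computed on the ranks."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores: higher self-efficacy pairing with lower burnout.
gses = [22, 28, 30, 33, 38]  # toy self-efficacy scores
bcsq = [46, 41, 39, 36, 31]  # toy burnout scores
print(round(spearman_rho(gses, bcsq), 2))  # -1.0 for perfectly monotone toy data
```

A value such as the reported rho=-0.49 would indicate a moderate, rather than perfect, monotone association between the two scales.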
-
Citations
Citations to this article as recorded by
- Wellness and Stress Management Practices Among Healthcare Professionals and Health Professional Students
Asli C. Yalim, Katherine Daly, Monica Bailey, Denise Kay, Xiang Zhu, Mohammed Patel, Laurie C. Neely, Desiree A. Díaz, Denyi M. Canario Asencio, Karla Rosario, Melissa Cowan, Magdalena Pasarica
American Journal of Health Promotion.2024;[Epub] CrossRef - Interprofessional education to support alcohol use screening and future team-based management of stress-related disorders in vulnerable populations
Taylor Fitzpatrick-Schmidt, Scott Edwards
Frontiers in Education.2024;[Epub] CrossRef
-
Doctoral physical therapy students’ increased confidence following exploration of active video gaming systems in a problem-based learning curriculum in the United States: a pre- and post-intervention study
-
Michelle Elizabeth Wormley, Wendy Romney, Diana Veneri, Andrea Oberlander
-
J Educ Eval Health Prof. 2022;19:7. Published online April 26, 2022
-
DOI: https://doi.org/10.3352/jeehp.2022.19.7
-
-
8,797
View
-
304
Download
-
1
Web of Science
-
1
Crossref
-
Abstract
PDF · Supplementary Material
- Purpose
Active video gaming (AVG) is used in physical therapy (PT) to treat individuals with a variety of diagnoses across the lifespan. The literature supports improvements in balance, cardiovascular endurance, and motor control; however, evidence is lacking regarding the implementation of AVG in PT education. This study investigated doctoral physical therapy (DPT) students’ confidence following active exploration of AVG systems as a PT intervention in the United States.
Methods
This pretest-posttest study included 60 DPT students in 2017 (cohort 1) and 55 students in 2018 (cohort 2) enrolled in a problem-based learning curriculum. AVG systems were embedded into patient cases and 2 interactive laboratory classes across 2 consecutive semesters (April–December 2017 and April–December 2018). Participants completed a 31-question survey before the intervention and 8 months later. Students’ confidence was rated for general use, game selection, plan of care, set-up, documentation, setting, and demographics. Descriptive statistics and the Wilcoxon signed-rank test were used to compare differences in confidence pre- and post-intervention.
Results
Both cohorts showed increased confidence at the post-test, with median (interquartile range) scores as follows: cohort 1: pre-test, 57.1 (44.3–63.5); post-test, 79.1 (73.1–85.4); and cohort 2: pre-test, 61.4 (48.0–70.7); post-test, 89.3 (80.0–93.2). Cohort 2 was significantly more confident at baseline than cohort 1 (P<0.05). In cohort 1, students’ data were paired and confidence levels significantly increased in all domains: use, Z=-6.2 (P<0.01); selection, Z=-5.9 (P<0.01); plan of care, Z=-6.0 (P<0.01); set-up, Z=-5.5 (P<0.01); documentation, Z=-6.0 (P<0.01); setting, Z=-6.3 (P<0.01); and total score, Z=-6.4 (P<0.01).
Conclusion
Structured, active experiences with AVG resulted in a significant increase in students’ confidence. As technology advances in healthcare delivery, it is essential to expose students to these technologies in the classroom.
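The pre/post comparisons in this abstract rely on the Wilcoxon signed-rank test, whose normal approximation yields the Z values reported. A minimal dependency-free sketch of that statistic is shown below, with toy paired scores rather than the study's data, and without the tie/zero correction a full implementation would apply.

```python
def wilcoxon_z(pre, post):
    """Wilcoxon signed-rank z statistic (normal approximation, no tie correction)."""
    diffs = [b - a for a, b in zip(pre, post) if b != a]  # drop zero differences
    n = len(diffs)
    # Rank the absolute differences; ties share the mean rank.
    by_size = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[by_size[j + 1]]) == abs(diffs[by_size[i]]):
            j += 1
        mean_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[by_size[k]] = mean_rank
        i = j + 1
    w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)  # positive-rank sum
    mean_w = n * (n + 1) / 4
    sd_w = (n * (n + 1) * (2 * n + 1) / 24) ** 0.5
    return (w_plus - mean_w) / sd_w

# Toy paired confidence scores: every student improves post-intervention.
pre = [50, 52, 55, 60, 61, 63, 65, 70, 72, 75]
post = [58, 61, 65, 71, 73, 76, 79, 85, 88, 92]
print(round(wilcoxon_z(pre, post), 2))  # positive z: scores shifted upward
```

The study's negative Z values simply reflect the sign convention of the software used; the magnitude carries the evidence of a pre-to-post shift.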
-
Citations
Citations to this article as recorded by
- The use of artificial intelligence in crafting a novel method for teaching normal human gait
Scott W. Lowe
European Journal of Physiotherapy.2024; : 1. CrossRef
Research article
-
No difference in factual or conceptual recall comprehension for tablet, laptop, and handwritten note-taking by medical students in the United States: a survey-based observational study
-
Warren Wiechmann, Robert Edwards, Cheyenne Low, Alisa Wray, Megan Boysen-Osborn, Shannon Toohey
-
J Educ Eval Health Prof. 2022;19:8. Published online April 26, 2022
-
DOI: https://doi.org/10.3352/jeehp.2022.19.8
-
-
11,798
View
-
490
Download
-
2
Web of Science
-
1
Crossref
-
Abstract
PDF · Supplementary Material
- Purpose
Technological advances are changing how students approach learning. Traditional longhand note-taking has been supplemented and replaced by tablet, smartphone, and laptop note-taking. It has been theorized that writing notes by hand engages more complex cognitive processes and may therefore lead to better retention. However, few studies have investigated tablet-based note-taking, which allows typing, drawing, highlighting, and media to be combined. We therefore sought to test the hypothesis that tablet-based note-taking would lead to recall equivalent or superior to that achieved with written note-taking.
Methods
We allocated 68 students to longhand, laptop, or tablet note-taking groups. They watched and took notes on a presentation, on which they were later assessed for factual and conceptual recall. A short distractor video was then shown, followed by a 30-minute assessment. The study was conducted on the University of California, Irvine campus over a single day in August 2018. Notes were analyzed for content, supplemental drawings, and other media sources.
Results
No significant difference was found in the factual or conceptual recall scores for tablet, laptop, and handwritten note-taking (P=0.61). The median word count was 131.5 for tablets, 121.0 for handwriting, and 297.0 for laptops (P=0.01). The tablet group had the highest presence of drawing, highlighting, and other media/tools.
Conclusion
In light of conflicting research regarding the best note-taking method, our study showed that longhand note-taking is not superior to tablet or laptop note-taking. This suggests students should be encouraged to pick the note-taking method that appeals most to them. In the future, traditional note-taking may be replaced or supplemented with digital technologies that provide similar efficacy with more convenience.
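The abstract reports a P value for the three-way word-count comparison without naming the test here; a rank-based comparison such as the Kruskal-Wallis test is one standard choice for three independent groups with skewed counts. The sketch below computes its H statistic on toy word counts (not the study's data), without tie correction.

```python
def kruskal_h(*groups):
    """Kruskal-Wallis H statistic over k independent groups (no tie correction)."""
    pooled = [v for g in groups for v in g]
    n = len(pooled)
    order = sorted(range(n), key=lambda i: pooled[i])
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and pooled[order[j + 1]] == pooled[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1  # tied values share the mean rank
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    h, idx = 0.0, 0
    for g in groups:  # ranks are stored in pooled (group) order
        rank_sum = sum(ranks[idx:idx + len(g)])
        h += rank_sum ** 2 / len(g)
        idx += len(g)
    return 12 / (n * (n + 1)) * h - 3 * (n + 1)

# Toy note word counts per group (illustrative values only).
tablet = [120, 135, 140]
handwriting = [100, 115, 125]
laptop = [280, 300, 310]
print(round(kruskal_h(tablet, handwriting, laptop), 2))
```

A large H relative to a chi-squared distribution with k-1 degrees of freedom indicates that at least one group's distribution differs, mirroring the laptop group's much higher median word count in the study.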
-
Citations
Citations to this article as recorded by
- Typed Versus Handwritten Lecture Notes and College Student Achievement: A Meta-Analysis
Abraham E. Flanigan, Jordan Wheeler, Tiphaine Colliot, Junrong Lu, Kenneth A. Kiewra
Educational Psychology Review.2024;[Epub] CrossRef
Review
-
Attraction and achievement as 2 attributes of gamification in healthcare: an evolutionary concept analysis
-
Hyun Kyoung Kim
-
J Educ Eval Health Prof. 2024;21:10. Published online April 11, 2024
-
DOI: https://doi.org/10.3352/jeehp.2024.21.10
-
-
Abstract
PDF · Supplementary Material
- This study conducted a concept analysis of gamification in healthcare using Rogers’ evolutionary concept analysis methodology to identify its attributes and to provide guidance for its application in the healthcare field. Gamification has recently been used as a health intervention and education method, but the concept is applied inconsistently and is often poorly understood. A literature review was conducted to derive definitions, surrogate terms, antecedents, influencing factors, attributes (characteristics with dimensions and features), related concepts, consequences, implications, and hypotheses from various academic fields. A total of 56 journal articles in English and Korean were retrieved between August 2 and August 7, 2023, from databases including PubMed Central, the Institute of Electrical and Electronics Engineers, the Association for Computing Machinery Digital Library, the Research Information Sharing Service, and the Korean Studies Information Service System, using the keywords “gamification” and “healthcare,” and were then analyzed. Gamification in healthcare is defined as the application of game elements in health-related contexts to improve health outcomes. The attributes of this concept were categorized into 2 main areas: attraction and achievement. These categories encompass strategies for synchronization, enjoyable engagement, visual rewards, and goal-reinforcing frames. Through a multidisciplinary analysis of the concept’s attributes and influencing factors, this paper provides practical strategies for implementing gamification in health interventions. When developing a gamification strategy, healthcare providers can reference this analysis to ensure that game elements are used both appropriately and effectively.
Research article
-
Effect of motion-graphic video-based training on the performance of operating room nurse students in cataract surgery in Iran: a randomized controlled study
-
Behnaz Fatahi, Samira Fatahi, Sohrab Nosrati, Masood Bagheri
-
J Educ Eval Health Prof. 2023;20:34. Published online November 28, 2023
-
DOI: https://doi.org/10.3352/jeehp.2023.20.34
-
-
Abstract
PDF · Supplementary Material
- Purpose
The present study was conducted to determine the effect of motion-graphic video-based training on the performance of operating room nurse students in cataract surgery using phacoemulsification at Kermanshah University of Medical Sciences in Iran.
Methods
This was a randomized controlled study conducted among 36 students training to become operating room nurses. The control group only received routine training, and the intervention group received motion-graphic video-based training on the scrub nurse’s performance in cataract surgery in addition to the educator’s training. The performance of the students in both groups as scrub nurses was measured through a researcher-made checklist in a pre-test and a post-test.
Results
The mean scores for performance in the pre-test and post-test were 17.83 and 26.44 in the control group and 18.33 and 50.94 in the intervention group, respectively, and a significant difference was identified between the mean scores of the pre- and post-test in both groups (P=0.001). The intervention also led to a significant increase in the mean performance score in the intervention group compared to the control group (P=0.001).
Conclusion
Considering the significant difference in the performance score of the intervention group compared to the control group, motion-graphic video-based training had a positive effect on the performance of operating room nurse students, and such training can be used to improve clinical training.
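Using the group means reported in the results above, the pre-to-post gains and the between-group difference in gains can be worked out directly. This is a descriptive illustration only; the study's significance tests operate on the underlying individual scores, which are not shown here.

```python
def gain(pre_mean, post_mean):
    """Mean pre-to-post change for one group."""
    return post_mean - pre_mean

control_gain = gain(17.83, 26.44)       # control group means from the abstract
intervention_gain = gain(18.33, 50.94)  # intervention group means from the abstract
print(round(control_gain, 2),
      round(intervention_gain, 2),
      round(intervention_gain - control_gain, 2))
```

The intervention group's mean gain is roughly four times the control group's, which is consistent with the significant between-group difference the study reports.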