JEEHP : Journal of Educational Evaluation for Health Professions

63 "Medical student"
Research articles
Discovering social learning ecosystems during clinical clerkship from United States medical students’ feedback encounters: a content analysis  
Anna Therese Cianciolo, Heeyoung Han, Lydia Anne Howes, Debra Lee Klamen, Sophia Matos
J Educ Eval Health Prof. 2024;21:5.   Published online February 28, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.5
  • 258 View
  • 82 Download
Purpose
We examined United States medical students’ self-reported feedback encounters during clerkship training to better understand in situ feedback practices. Specifically, we asked: from whom do students receive feedback, about what, when, and where, and how do they use it? We explored whether curricular expectations for preceptors’ written commentary aligned with feedback as it occurs naturalistically in the workplace.
Methods
This study occurred from July 2021 to February 2022 at Southern Illinois University School of Medicine. We used qualitative survey-based experience sampling to gather students’ accounts of their feedback encounters in 8 core specialties. We analyzed the who, what, when, where, and why of 267 feedback encounters reported by 11 clerkship students over 30 weeks. Code frequencies were mapped qualitatively to explore patterns in feedback encounters.
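For illustration, the code-frequency mapping step could be tabulated in Python with pandas as sketched below; the file name and columns (feedback_encounters.csv, specialty, feedback_source) are hypothetical placeholders, not the study’s data or code.

    # Tally coded feedback encounters by specialty and feedback source.
    import pandas as pd

    encounters = pd.read_csv("feedback_encounters.csv")  # one row per coded encounter
    freq = pd.crosstab(encounters["specialty"], encounters["feedback_source"])
    print(freq)  # frequency map for inspecting patterns across specialties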
Results
Clerkship feedback occurs in patterns apparently related to the nature of clinical work in each specialty. These patterns may be attributable to each specialty’s “social learning ecosystem”—the distinctive learning environment shaped by the social and material aspects of a given specialty’s work, which determine who preceptors are, what students do with preceptors, and what skills or attributes matter enough to preceptors to comment on.
Conclusion
Comprehensive, standardized expectations for written feedback across specialties conflict with the reality of workplace-based learning. Preceptors may be better able—and more motivated—to document student performance that occurs as a natural part of everyday work. Nurturing social learning ecosystems could facilitate workplace-based learning such that, across specialties, students acquire a comprehensive clinical skillset appropriate for graduation.
Negative effects on medical students’ scores for clinical performance during the COVID-19 pandemic in Taiwan: a comparative study  
Eunice Jia-Shiow Yuan, Shiau-Shian Huang, Chia-An Hsu, Jiing-Feng Lirng, Tzu-Hao Li, Chia-Chang Huang, Ying-Ying Yang, Chung-Pin Li, Chen-Huan Chen
J Educ Eval Health Prof. 2023;20:37.   Published online December 26, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.37
  • 616 View
  • 65 Download
Purpose
Coronavirus disease 2019 (COVID-19) has heavily impacted medical clinical education in Taiwan. Medical curricula have been altered to minimize exposure and limit transmission. This study investigated the effect of COVID-19 on Taiwanese medical students’ clinical performance using online standardized evaluation systems and explored the factors influencing medical education during the pandemic.
Methods
Medical students were scored from 0 to 100 based on their clinical performance from 1/1/2018 to 6/30/2021. The students were placed into pre-COVID-19 (before 2/1/2020) and midst-COVID-19 (on and after 2/1/2020) groups. Each group was further categorized into COVID-19-affected specialties (pulmonary, infectious, and emergency medicine) and other specialties. Generalized estimating equations (GEEs) were used to compare and examine the effects of relevant variables on student performance.
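As an illustration only, a GEE of this kind could be fit in Python with statsmodels as sketched below; the file and column names are hypothetical placeholders, and the exchangeable working correlation is an assumption, since the abstract does not report one.

    # Sketch of a GEE comparing clinical scores across pandemic periods.
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    df = pd.read_csv("clinical_scores.csv")  # one row per clinical score record

    model = smf.gee(
        "score ~ period + affected_specialty + student_sex + attending_sex",
        groups="student_id",                      # scores clustered within students
        data=df,
        cov_struct=sm.cov_struct.Exchangeable(),  # assumed working correlation
        family=sm.families.Gaussian(),
    )
    result = model.fit()
    print(result.summary())  # unstandardized coefficients (B), SEs, P-values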
Results
In total, 16,944 clinical scores were obtained for COVID-19-affected specialties and other specialties. For the COVID-19-affected specialties, the midst-COVID-19 score (88.51±3.52) was significantly lower than the pre-COVID-19 score (90.14±3.55) (P<0.0001). For the other specialties, the midst-COVID-19 score (88.32±3.68) was also significantly lower than the pre-COVID-19 score (90.06±3.58) (P<0.0001). There were 1,322 students (837 males and 485 females). Male students had significantly lower scores than female students (89.33±3.68 vs. 89.99±3.66, P=0.0017). GEE analysis revealed that the COVID-19 pandemic (unstandardized beta coefficient [B]=-1.99, standard error [SE]=0.13, P<0.0001), COVID-19-affected specialties (B=0.26, SE=0.11, P=0.0184), female students (B=1.10, SE=0.20, P<0.0001), and female attending physicians (B=-0.19, SE=0.08, P=0.0145) were independently associated with students’ scores.
Conclusion
COVID-19 negatively impacted medical students' clinical performance, regardless of their specialty. Female students outperformed male students, irrespective of the pandemic.
Medical students’ patterns of using ChatGPT as a feedback tool and perceptions of ChatGPT in a Leadership and Communication course in Korea: a cross-sectional study  
Janghee Park
J Educ Eval Health Prof. 2023;20:29.   Published online November 10, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.29
  • 1,098 View
  • 116 Download
  • 1 Web of Science
  • 2 Crossref
Purpose
This study aimed to analyze patterns of using ChatGPT before and after group activities and to explore medical students’ perceptions of ChatGPT as a feedback tool in the classroom.
Methods
The study included 99 2nd-year pre-medical students who participated in a “Leadership and Communication” course from March to June 2023. Students engaged in both individual and group activities related to negotiation strategies. ChatGPT was used to provide feedback on their solutions. A survey was administered to assess students’ perceptions of ChatGPT’s feedback, its use in the classroom, and the strengths and challenges of ChatGPT from May 17 to 19, 2023.
Results
The students indicated that ChatGPT’s feedback was helpful, and they revised and resubmitted their group answers in various ways after receiving it. The majority of respondents agreed with the use of ChatGPT during class. The most common response concerning the appropriate context for using ChatGPT’s feedback was “after the first round of discussion, for revisions.” There was a significant difference in satisfaction with ChatGPT’s feedback (including its correctness, usefulness, and ethics) depending on whether ChatGPT was used during class, but no significant difference according to gender or previous experience with ChatGPT. The strongest advantages were “providing answers to questions” and “summarizing information,” and the most serious disadvantage was “producing information without supporting evidence.”
Conclusion
The students were aware of the advantages and disadvantages of ChatGPT, and they had a positive attitude toward using ChatGPT in the classroom.

Citations to this article as recorded by  
  • ChatGPT and Clinical Training: Perception, Concerns, and Practice of Pharm-D Students
    Mohammed Zawiah, Fahmi Al-Ashwal, Lobna Gharaibeh, Rana Abu Farha, Karem Alzoubi, Khawla Abu Hammour, Qutaiba A Qasim, Fahd Abrah
    Journal of Multidisciplinary Healthcare.2023; 16: 4099.     CrossRef
  • Information amount, accuracy, and relevance of generative artificial intelligence platforms’ answers regarding learning objectives of medical arthropodology evaluated in English and Korean queries in December 2023: a descriptive study
    Hyunju Lee, Soobin Park
    Journal of Educational Evaluation for Health Professions.2023; 20: 39.     CrossRef
Development and validation of the student ratings in clinical teaching scale in Australia: a methodological study  
Pin-Hsiang Huang, Anthony John O’Sullivan, Boaz Shulruf
J Educ Eval Health Prof. 2023;20:26.   Published online September 5, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.26
  • 791 View
  • 108 Download
Purpose
This study aimed to devise a valid measurement for assessing clinical students’ perceptions of teaching practices.
Methods
A new tool was developed based on a meta-analysis encompassing effective clinical teaching-learning factors. Seventy-nine items were generated using a frequency (never to always) scale. The tool was administered to year 2, 3, and 6 medical students at the University of New South Wales. Exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) were conducted to establish the tool’s construct validity and goodness of fit, and Cronbach’s α was used to assess reliability.
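A minimal sketch of the EFA and reliability step in Python follows, using the factor_analyzer and pingouin packages; the item file, the 4-factor solution, the oblimin rotation, and the 0.4 loading cutoff are assumptions for illustration, not necessarily the authors’ choices.

    # Exploratory factor analysis, then Cronbach's alpha for one factor.
    import pandas as pd
    import pingouin as pg
    from factor_analyzer import FactorAnalyzer

    items = pd.read_csv("strict_items.csv")  # one column per frequency-scaled item

    efa = FactorAnalyzer(n_factors=4, rotation="oblimin")
    efa.fit(items)
    loadings = pd.DataFrame(efa.loadings_, index=items.columns)

    # Reliability of the items loading on the first factor.
    factor1_items = loadings[loadings[0].abs() > 0.4].index
    alpha, ci = pg.cronbach_alpha(data=items[factor1_items])
    print(f"Cronbach's alpha = {alpha:.2f}")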
Results
In total, 352 students (44.2%) completed the questionnaire. The EFA identified student-centered learning, problem-solving learning, self-directed learning, and visual technology (reliability, 0.77 to 0.89). CFA showed acceptable goodness of fit (chi-square P<0.01, comparative fit index=0.930 and Tucker-Lewis index=0.917, root mean square error of approximation=0.069, standardized root mean square residual=0.06).
Conclusion
The established tool—Student Ratings in Clinical Teaching (STRICT)—is a valid and reliable tool that demonstrates how students perceive clinical teaching efficacy. STRICT measures the frequency of teaching practices to mitigate the biases of acquiescence and social desirability. Clinical teachers may use the tool to adapt their teaching practices with more active learning activities and to utilize visual technology to facilitate clinical learning efficacy. Clinical educators may apply STRICT to assess how these teaching practices are implemented in current clinical settings.
Experience of introducing an electronic health records station in an objective structured clinical examination to evaluate medical students’ communication skills in Canada: a descriptive study  
Kuan-chin Jean Chen, Ilona Bartman, Debra Pugh, David Topps, Isabelle Desjardins, Melissa Forgie, Douglas Archibald
J Educ Eval Health Prof. 2023;20:22.   Published online July 4, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.22
  • 2,646 View
  • 132 Download
Purpose
There is limited literature on the assessment of electronic medical record (EMR)-related competencies. To address this gap, this study explored the feasibility of an EMR objective structured clinical examination (OSCE) station for evaluating medical students’ communication skills, using psychometric analyses and standardized patients’ (SPs) perspectives on EMR use in an OSCE.
Methods
An OSCE station that incorporated the use of an EMR was developed and pilot-tested in March 2020. Students’ communication skills were assessed by SPs and physician examiners. Students’ scores were compared between the EMR station and 9 other stations. A psychometric analysis, including item total correlation, was done. SPs participated in a post-OSCE focus group to discuss their perception of EMRs’ effect on communication.
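For illustration, a corrected item-total correlation for the EMR station could be computed as sketched below; the score file and column names are hypothetical placeholders.

    # Corrected item-total correlation: EMR station vs. sum of other stations.
    import pandas as pd

    scores = pd.read_csv("osce_station_scores.csv")  # one column per station
    rest_total = scores.drop(columns="emr_station").sum(axis=1)
    itc = scores["emr_station"].corr(rest_total)
    print(f"Item-total correlation: {itc:.3f}")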
Results
Ninety-nine 3rd-year medical students participated in a 10-station OSCE that included the use of the EMR station. The EMR station had an acceptable item total correlation (0.217). Students who leveraged graphical displays in counseling received higher OSCE station scores from the SPs (P=0.041). The thematic analysis of SPs’ perceptions of students’ EMR use from the focus group revealed the following domains of themes: technology, communication, case design, ownership of health information, and timing of EMR usage.
Conclusion
This study demonstrated the feasibility of incorporating EMR in assessing learner communication skills in an OSCE. The EMR station had acceptable psychometric characteristics. Some medical students were able to efficiently use the EMRs as an aid in patient counseling. Teaching students how to be patient-centered even in the presence of technology may promote engagement.
What impacts students’ satisfaction the most from Medicine Student Experience Questionnaire in Australia: a validity study  
Pin-Hsiang Huang, Gary Velan, Greg Smith, Melanie Fentoullis, Sean Edward Kennedy, Karen Jane Gibson, Kerry Uebel, Boaz Shulruf
J Educ Eval Health Prof. 2023;20:2.   Published online January 18, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.2
  • 1,386 View
  • 121 Download
  • 1 Web of Science
  • 1 Crossref
Purpose
This study evaluated the validity of student feedback derived from the Medicine Student Experience Questionnaire (MedSEQ), as well as the predictors of students’ satisfaction with the Medicine program.
Methods
Data from MedSEQ administered in the University of New South Wales Medicine program in 2017, 2019, and 2021 were analyzed. Confirmatory factor analysis (CFA) and Cronbach’s α were used to assess the construct validity and reliability of MedSEQ, respectively. Hierarchical multiple linear regressions were used to identify the factors with the greatest impact on students’ overall satisfaction with the program.
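A minimal sketch of the hierarchical regression step in Python with statsmodels follows; the file and column names are hypothetical placeholders, not the study’s variables.

    # Step 1: demographics only; step 2: add the 8 MedSEQ domain scores.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("medseq.csv")

    m1 = smf.ols("overall_satisfaction ~ age + gender + year_level", data=df).fit()

    domains = " + ".join(f"domain_{i}" for i in range(1, 9))
    m2 = smf.ols(
        f"overall_satisfaction ~ age + gender + year_level + {domains}", data=df
    ).fit()

    # Variance explained by the domains over and above demographics.
    print(f"R2, demographics only: {m1.rsquared:.3f}")
    print(f"Delta R2 from domains: {m2.rsquared - m1.rsquared:.3f}")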
Results
A total of 1,719 students (34.50%) responded to MedSEQ. CFA showed good fit indices (root mean square error of approximation=0.051; comparative fit index=0.939; chi-square/degrees of freedom=6.429). All factors yielded good (α>0.7) or very good (α>0.8) levels of reliability, except the “online resources” factor, which had acceptable reliability (α=0.687). A multiple linear regression model with only demographic characteristics explained 3.8% of the variance in students’ overall satisfaction, whereas the model adding 8 domains from MedSEQ explained 40%, indicating that 36.2% of the variance was attributable to students’ experience across the 8 domains. Three domains had the strongest impact on overall satisfaction: “being cared for,” “satisfaction with teaching,” and “satisfaction with assessment” (β=0.327, 0.148, 0.148, respectively; all with P<0.001).
Conclusion
MedSEQ has good construct validity and high reliability, reflecting students’ satisfaction with the Medicine program. The key factors impacting students’ satisfaction are the perception of being cared for, quality teaching irrespective of the mode of delivery, and fair assessment tasks that enhance learning.

Citations to this article as recorded by  
  • Mental health and quality of life across 6 years of medical training: A year-by-year analysis
    Natalia de Castro Pecci Maddalena, Alessandra Lamas Granero Lucchetti, Ivana Lucia Damasio Moutinho, Oscarina da Silva Ezequiel, Giancarlo Lucchetti
    International Journal of Social Psychiatry.2024; 70(2): 298.     CrossRef
Brief report
Are ChatGPT’s knowledge and interpretation ability comparable to those of medical students in Korea for taking a parasitology examination?: a descriptive study  
Sun Huh
J Educ Eval Health Prof. 2023;20:1.   Published online January 11, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.1
  • 10,675 View
  • 1,001 Download
  • 102 Web of Science
  • 61 Crossref
This study aimed to compare the knowledge and interpretation ability of ChatGPT, a chatbot based on a large language model, with those of medical students in Korea by administering a parasitology examination to both ChatGPT and medical students. The examination consisted of 79 items and was administered to ChatGPT on January 1, 2023. The examination results were analyzed in terms of ChatGPT’s overall performance score, its correct answer rate by the items’ knowledge level, and the acceptability of its explanations of the items. ChatGPT’s performance was lower than that of the medical students, and its correct answer rate was not related to the items’ knowledge level. However, there was a relationship between acceptable explanations and correct answers. In conclusion, ChatGPT’s knowledge and interpretation ability for this parasitology examination were not yet comparable to those of medical students in Korea.

Citations to this article as recorded by  
  • Performance of ChatGPT on the India Undergraduate Community Medicine Examination: Cross-Sectional Study
    Aravind P Gandhi, Felista Karen Joesph, Vineeth Rajagopal, P Aparnavi, Sushma Katkuri, Sonal Dayama, Prakasini Satapathy, Mahalaqua Nazli Khatib, Shilpa Gaidhane, Quazi Syed Zahiruddin, Ashish Behera
    JMIR Formative Research.2024; 8: e49964.     CrossRef
  • Large Language Models and Artificial Intelligence: A Primer for Plastic Surgeons on the Demonstrated and Potential Applications, Promises, and Limitations of ChatGPT
    Jad Abi-Rafeh, Hong Hao Xu, Roy Kazan, Ruth Tevlin, Heather Furnas
    Aesthetic Surgery Journal.2024; 44(3): 329.     CrossRef
  • Unveiling the ChatGPT phenomenon: Evaluating the consistency and accuracy of endodontic question answers
    Ana Suárez, Víctor Díaz‐Flores García, Juan Algar, Margarita Gómez Sánchez, María Llorente de Pedro, Yolanda Freire
    International Endodontic Journal.2024; 57(1): 108.     CrossRef
  • Bob or Bot: Exploring ChatGPT's Answers to University Computer Science Assessment
    Mike Richards, Kevin Waugh, Mark Slaymaker, Marian Petre, John Woodthorpe, Daniel Gooch
    ACM Transactions on Computing Education.2024; 24(1): 1.     CrossRef
  • Examining the use of ChatGPT in public universities in Hong Kong: a case study of restricted access areas
    Michelle W. T. Cheng, Iris H. Y. YIM
    Discover Education.2024;[Epub]     CrossRef
  • Performance of ChatGPT on Ophthalmology-Related Questions Across Various Examination Levels: Observational Study
    Firas Haddad, Joanna S Saade
    JMIR Medical Education.2024; 10: e50842.     CrossRef
  • A comparative vignette study: Evaluating the potential role of a generative AI model in enhancing clinical decision‐making in nursing
    Mor Saban, Ilana Dubovi
    Journal of Advanced Nursing.2024;[Epub]     CrossRef
  • Comparison of the Performance of GPT-3.5 and GPT-4 With That of Medical Students on the Written German Medical Licensing Examination: Observational Study
    Annika Meyer, Janik Riese, Thomas Streichert
    JMIR Medical Education.2024; 10: e50965.     CrossRef
  • From hype to insight: Exploring ChatGPT's early footprint in education via altmetrics and bibliometrics
    Lung‐Hsiang Wong, Hyejin Park, Chee‐Kit Looi
    Journal of Computer Assisted Learning.2024;[Epub]     CrossRef
  • A scoping review of artificial intelligence in medical education: BEME Guide No. 84
    Morris Gordon, Michelle Daniel, Aderonke Ajiboye, Hussein Uraiby, Nicole Y. Xu, Rangana Bartlett, Janice Hanson, Mary Haas, Maxwell Spadafore, Ciaran Grafton-Clarke, Rayhan Yousef Gasiea, Colin Michie, Janet Corral, Brian Kwan, Diana Dolmans, Satid Thamma
    Medical Teacher.2024; : 1.     CrossRef
  • University Students' Experiences with ChatGPT 3.5: Fairy Tale Variants Written with Artificial Intelligence
    Bilge GÖK, Fahri TEMİZYÜREK, Özlem BAŞ
    Korkut Ata Türkiyat Araştırmaları Dergisi.2024; (14): 1040.     CrossRef
  • Tracking ChatGPT Research: Insights From the Literature and the Web
    Omar Mubin, Fady Alnajjar, Zouheir Trabelsi, Luqman Ali, Medha Mohan Ambali Parambil, Zhao Zou
    IEEE Access.2024; 12: 30518.     CrossRef
  • Potential applications of ChatGPT in obstetrics and gynecology in Korea: a review article
    YooKyung Lee, So Yun Kim
    Obstetrics & Gynecology Science.2024; 67(2): 153.     CrossRef
  • Application of generative language models to orthopaedic practice
    Jessica Caterson, Olivia Ambler, Nicholas Cereceda-Monteoliva, Matthew Horner, Andrew Jones, Arwel Tomos Poacher
    BMJ Open.2024; 14(3): e076484.     CrossRef
  • Applicability of ChatGPT in Assisting to Solve Higher Order Problems in Pathology
    Ranwir K Sinha, Asitava Deb Roy, Nikhil Kumar, Himel Mondal
    Cureus.2023;[Epub]     CrossRef
  • Issues in the 3rd year of the COVID-19 pandemic, including computer-based testing, study design, ChatGPT, journal metrics, and appreciation to reviewers
    Sun Huh
    Journal of Educational Evaluation for Health Professions.2023; 20: 5.     CrossRef
  • Emergence of the metaverse and ChatGPT in journal publishing after the COVID-19 pandemic
    Sun Huh
    Science Editing.2023; 10(1): 1.     CrossRef
  • Assessing the Capability of ChatGPT in Answering First- and Second-Order Knowledge Questions on Microbiology as per Competency-Based Medical Education Curriculum
    Dipmala Das, Nikhil Kumar, Langamba Angom Longjam, Ranwir Sinha, Asitava Deb Roy, Himel Mondal, Pratima Gupta
    Cureus.2023;[Epub]     CrossRef
  • Evaluating ChatGPT's Ability to Solve Higher-Order Questions on the Competency-Based Medical Education Curriculum in Medical Biochemistry
    Arindam Ghosh, Aritri Bir
    Cureus.2023;[Epub]     CrossRef
  • Overview of Early ChatGPT’s Presence in Medical Literature: Insights From a Hybrid Literature Review by ChatGPT and Human Experts
    Omar Temsah, Samina A Khan, Yazan Chaiah, Abdulrahman Senjab, Khalid Alhasan, Amr Jamal, Fadi Aljamaan, Khalid H Malki, Rabih Halwani, Jaffar A Al-Tawfiq, Mohamad-Hani Temsah, Ayman Al-Eyadhy
    Cureus.2023;[Epub]     CrossRef
  • ChatGPT for Future Medical and Dental Research
    Bader Fatani
    Cureus.2023;[Epub]     CrossRef
  • ChatGPT in Dentistry: A Comprehensive Review
    Hind M Alhaidry, Bader Fatani, Jenan O Alrayes, Aljowhara M Almana, Nawaf K Alfhaed
    Cureus.2023;[Epub]     CrossRef
  • Can we trust AI chatbots’ answers about disease diagnosis and patient care?
    Sun Huh
    Journal of the Korean Medical Association.2023; 66(4): 218.     CrossRef
  • Large Language Models in Medical Education: Opportunities, Challenges, and Future Directions
    Alaa Abd-alrazaq, Rawan AlSaad, Dari Alhuwail, Arfan Ahmed, Padraig Mark Healy, Syed Latifi, Sarah Aziz, Rafat Damseh, Sadam Alabed Alrazak, Javaid Sheikh
    JMIR Medical Education.2023; 9: e48291.     CrossRef
  • Early applications of ChatGPT in medical practice, education and research
    Sam Sedaghat
    Clinical Medicine.2023; 23(3): 278.     CrossRef
  • A Review of Research on Teaching and Learning Transformation under the Influence of ChatGPT Technology
    璇 师
    Advances in Education.2023; 13(05): 2617.     CrossRef
  • Performance of GPT-3.5 and GPT-4 on the Japanese Medical Licensing Examination: Comparison Study
    Soshi Takagi, Takashi Watari, Ayano Erabi, Kota Sakaguchi
    JMIR Medical Education.2023; 9: e48002.     CrossRef
  • ChatGPT’s quiz skills in different otolaryngology subspecialties: an analysis of 2576 single-choice and multiple-choice board certification preparation questions
    Cosima C. Hoch, Barbara Wollenberg, Jan-Christoffer Lüers, Samuel Knoedler, Leonard Knoedler, Konstantin Frank, Sebastian Cotofana, Michael Alfertshofer
    European Archives of Oto-Rhino-Laryngology.2023; 280(9): 4271.     CrossRef
  • Analysing the Applicability of ChatGPT, Bard, and Bing to Generate Reasoning-Based Multiple-Choice Questions in Medical Physiology
    Mayank Agarwal, Priyanka Sharma, Ayan Goswami
    Cureus.2023;[Epub]     CrossRef
  • The Intersection of ChatGPT, Clinical Medicine, and Medical Education
    Rebecca Shin-Yee Wong, Long Chiau Ming, Raja Affendi Raja Ali
    JMIR Medical Education.2023; 9: e47274.     CrossRef
  • The Role of Artificial Intelligence in Higher Education: ChatGPT Assessment for Anatomy Course
    Tarık TALAN, Yusuf KALINKARA
    Uluslararası Yönetim Bilişim Sistemleri ve Bilgisayar Bilimleri Dergisi.2023; 7(1): 33.     CrossRef
  • Comparing ChatGPT’s ability to rate the degree of stereotypes and the consistency of stereotype attribution with those of medical students in New Zealand in developing a similarity rating test: a methodological study
    Chao-Cheng Lin, Zaine Akuhata-Huntington, Che-Wei Hsu
    Journal of Educational Evaluation for Health Professions.2023; 20: 17.     CrossRef
  • Examining Real-World Medication Consultations and Drug-Herb Interactions: ChatGPT Performance Evaluation
    Hsing-Yu Hsu, Kai-Cheng Hsu, Shih-Yen Hou, Ching-Lung Wu, Yow-Wen Hsieh, Yih-Dih Cheng
    JMIR Medical Education.2023; 9: e48433.     CrossRef
  • Assessing the Efficacy of ChatGPT in Solving Questions Based on the Core Concepts in Physiology
    Arijita Banerjee, Aquil Ahmad, Payal Bhalla, Kavita Goyal
    Cureus.2023;[Epub]     CrossRef
  • ChatGPT Performs on the Chinese National Medical Licensing Examination
    Xinyi Wang, Zhenye Gong, Guoxin Wang, Jingdan Jia, Ying Xu, Jialu Zhao, Qingye Fan, Shaun Wu, Weiguo Hu, Xiaoyang Li
    Journal of Medical Systems.2023;[Epub]     CrossRef
  • Artificial intelligence and its impact on job opportunities among university students in North Lima, 2023
    Doris Ruiz-Talavera, Jaime Enrique De la Cruz-Aguero, Nereo García-Palomino, Renzo Calderón-Espinoza, William Joel Marín-Rodriguez
    ICST Transactions on Scalable Information Systems.2023;[Epub]     CrossRef
  • Revolutionizing Dental Care: A Comprehensive Review of Artificial Intelligence Applications Among Various Dental Specialties
    Najd Alzaid, Omar Ghulam, Modhi Albani, Rafa Alharbi, Mayan Othman, Hasan Taher, Saleem Albaradie, Suhael Ahmed
    Cureus.2023;[Epub]     CrossRef
  • Opportunities, Challenges, and Future Directions of Generative Artificial Intelligence in Medical Education: Scoping Review
    Carl Preiksaitis, Christian Rose
    JMIR Medical Education.2023; 9: e48785.     CrossRef
  • Exploring the impact of language models, such as ChatGPT, on student learning and assessment
    Araz Zirar
    Review of Education.2023;[Epub]     CrossRef
  • Evaluating the reliability of ChatGPT as a tool for imaging test referral: a comparative study with a clinical decision support system
    Shani Rosen, Mor Saban
    European Radiology.2023;[Epub]     CrossRef
  • Redesigning Tertiary Educational Evaluation with AI: A Task-Based Analysis of LIS Students’ Assessment on Written Tests and Utilizing ChatGPT at NSTU
    Shamima Yesmin
    Science & Technology Libraries.2023; : 1.     CrossRef
  • ChatGPT and the AI revolution: a comprehensive investigation of its multidimensional impact and potential
    Mohd Afjal
    Library Hi Tech.2023;[Epub]     CrossRef
  • The Significance of Artificial Intelligence Platforms in Anatomy Education: An Experience With ChatGPT and Google Bard
    Hasan B Ilgaz, Zehra Çelik
    Cureus.2023;[Epub]     CrossRef
  • Is ChatGPT’s Knowledge and Interpretative Ability Comparable to First Professional MBBS (Bachelor of Medicine, Bachelor of Surgery) Students of India in Taking a Medical Biochemistry Examination?
    Abhra Ghosh, Nandita Maini Jindal, Vikram K Gupta, Ekta Bansal, Navjot Kaur Bajwa, Abhishek Sett
    Cureus.2023;[Epub]     CrossRef
  • Ethical consideration of the use of generative artificial intelligence, including ChatGPT in writing a nursing article
    Sun Huh
    Child Health Nursing Research.2023; 29(4): 249.     CrossRef
  • Potential Use of ChatGPT for Patient Information in Periodontology: A Descriptive Pilot Study
    Osman Babayiğit, Zeynep Tastan Eroglu, Dilek Ozkan Sen, Fatma Ucan Yarkac
    Cureus.2023;[Epub]     CrossRef
  • Efficacy and limitations of ChatGPT as a biostatistical problem-solving tool in medical education in Serbia: a descriptive study
    Aleksandra Ignjatović, Lazar Stevanović
    Journal of Educational Evaluation for Health Professions.2023; 20: 28.     CrossRef
  • Assessing the Performance of ChatGPT in Medical Biochemistry Using Clinical Case Vignettes: Observational Study
    Krishna Mohan Surapaneni
    JMIR Medical Education.2023; 9: e47191.     CrossRef
  • A systematic review of ChatGPT use in K‐12 education
    Peng Zhang, Gemma Tur
    European Journal of Education.2023;[Epub]     CrossRef
  • Performance of ChatGPT, Bard, Claude, and Bing on the Peruvian National Licensing Medical Examination: a cross-sectional study
    Betzy Clariza Torres-Zegarra, Wagner Rios-Garcia, Alvaro Micael Ñaña-Cordova, Karen Fatima Arteaga-Cisneros, Xiomara Cristina Benavente Chalco, Marina Atena Bustamante Ordoñez, Carlos Jesus Gutierrez Rios, Carlos Alberto Ramos Godoy, Kristell Luisa Teresa
    Journal of Educational Evaluation for Health Professions.2023; 20: 30.     CrossRef
  • ChatGPT’s performance in German OB/GYN exams – paving the way for AI-enhanced medical education and clinical practice
    Maximilian Riedel, Katharina Kaefinger, Antonia Stuehrenberg, Viktoria Ritter, Niklas Amann, Anna Graf, Florian Recker, Evelyn Klein, Marion Kiechle, Fabian Riedel, Bastian Meyer
    Frontiers in Medicine.2023;[Epub]     CrossRef
  • Medical students’ patterns of using ChatGPT as a feedback tool and perceptions of ChatGPT in a Leadership and Communication course in Korea: a cross-sectional study
    Janghee Park
    Journal of Educational Evaluation for Health Professions.2023; 20: 29.     CrossRef
  • Evaluating ChatGPT as a self‐learning tool in medical biochemistry: A performance assessment in undergraduate medical university examination
    Krishna Mohan Surapaneni, Anusha Rajajagadeesan, Lakshmi Goudhaman, Shalini Lakshmanan, Saranya Sundaramoorthi, Dineshkumar Ravi, Kalaiselvi Rajendiran, Porchelvan Swaminathan
    Biochemistry and Molecular Biology Education.2023;[Epub]     CrossRef
  • FROM TEXT TO DIAGNOSE: CHATGPT’S EFFICACY IN MEDICAL DECISION-MAKING
    Yaroslav Mykhalko, Pavlo Kish, Yelyzaveta Rubtsova, Oleksandr Kutsyn, Valentyna Koval
    Wiadomości Lekarskie.2023; 76(11): 2345.     CrossRef
  • Using ChatGPT for Clinical Practice and Medical Education: Cross-Sectional Survey of Medical Students’ and Physicians’ Perceptions
    Pasin Tangadulrat, Supinya Sono, Boonsin Tangtrakulwanich
    JMIR Medical Education.2023; 9: e50658.     CrossRef
  • Below average ChatGPT performance in medical microbiology exam compared to university students
    Malik Sallam, Khaled Al-Salahat
    Frontiers in Education.2023;[Epub]     CrossRef
  • ChatGPT: "To be or not to be" ... in academic research. The human mind's analytical rigor and capacity to discriminate between AI bots' truths and hallucinations
    Aurelian Anghelescu, Ilinca Ciobanu, Constantin Munteanu, Lucia Ana Maria Anghelescu, Gelu Onose
    Balneo and PRM Research Journal.2023; 14(Vol.14, no): 614.     CrossRef
  • ChatGPT Review: A Sophisticated Chatbot Models in Medical & Health-related Teaching and Learning
    Nur Izah Ab Razak, Muhammad Fawwaz Muhammad Yusoff, Rahmita Wirza O.K. Rahmat
    Malaysian Journal of Medicine and Health Sciences.2023; 19(s12): 98.     CrossRef
  • Application of artificial intelligence chatbots, including ChatGPT, in education, scholarly work, programming, and content generation and its prospects: a narrative review
    Tae Won Kim
    Journal of Educational Evaluation for Health Professions.2023; 20: 38.     CrossRef
  • Trends in research on ChatGPT and adoption-related issues discussed in articles: a narrative review
    Sang-Jun Kim
    Science Editing.2023; 11(1): 3.     CrossRef
  • Information amount, accuracy, and relevance of generative artificial intelligence platforms’ answers regarding learning objectives of medical arthropodology evaluated in English and Korean queries in December 2023: a descriptive study
    Hyunju Lee, Soobin Park
    Journal of Educational Evaluation for Health Professions.2023; 20: 39.     CrossRef
Reviews
Factors associated with medical students’ scores on the National Licensing Exam in Peru: a systematic review  
Javier Alejandro Flores-Cohaila
J Educ Eval Health Prof. 2022;19:38.   Published online December 29, 2022
DOI: https://doi.org/10.3352/jeehp.2022.19.38
  • 2,805 View
  • 247 Download
  • 1 Crossref
Purpose
This study aimed to identify factors that have been studied for their associations with National Licensing Examination (ENAM) scores in Peru.
Methods
A search was conducted of literature databases and registers, including EMBASE, SciELO, Web of Science, MEDLINE, Peru’s National Register of Research Work, and Google Scholar. The following key terms were used: “ENAM” and “associated factors.” Studies in English and Spanish were included. The quality of the included studies was evaluated using the Medical Education Research Study Quality Instrument (MERSQI).
Results
In total, 38,500 participants were enrolled in 12 studies. Most (11/12) studies were cross-sectional; the exception was one case-control study. Three studies were published in peer-reviewed journals. The mean MERSQI score was 10.33. Better performance on the ENAM was associated with a higher grade point average (GPA) (n=8), an internship setting in EsSalud (n=4), and regular academic status (n=3). Other factors showed associations in individual studies, such as medical school, internship setting, age, gender, socioeconomic status, simulation tests, study resources, preparation time, learning styles, study techniques, test anxiety, and self-regulated learning strategies.
Conclusion
Performance on the ENAM is a multifactorial phenomenon. Our model gives students a locus of control over what they can do to improve their scores (i.e., implement self-regulated learning strategies) and gives faculty, health policymakers, and managers a framework for improving ENAM scores (i.e., design remediation programs to improve GPA and integrate anxiety-management courses into the curriculum).

Citations to this article as recorded by  
  • Performance of ChatGPT on the Peruvian National Licensing Medical Examination: Cross-Sectional Study
    Javier A Flores-Cohaila, Abigaíl García-Vicente, Sonia F Vizcarra-Jiménez, Janith P De la Cruz-Galán, Jesús D Gutiérrez-Arratia, Blanca Geraldine Quiroga Torres, Alvaro Taype-Rondan
    JMIR Medical Education.2023; 9: e48039.     CrossRef
Medical students’ satisfaction level with e-learning during the COVID-19 pandemic and its related factors: a systematic review  
Mahbubeh Tabatabaeichehr, Samane Babaei, Mahdieh Dartomi, Peiman Alesheikh, Amir Tabatabaee, Hamed Mortazavi, Zohreh Khoshgoftar
J Educ Eval Health Prof. 2022;19:37.   Published online December 20, 2022
DOI: https://doi.org/10.3352/jeehp.2022.19.37
  • 2,133 View
  • 190 Download
  • 5 Web of Science
  • 6 Crossref
Purpose
This review investigated medical students’ satisfaction level with e-learning during the coronavirus disease 2019 (COVID-19) pandemic and its related factors.
Methods
A comprehensive systematic search was performed of international literature databases, including Scopus, PubMed, Web of Science, and Persian databases such as Iranmedex and Scientific Information Database using keywords extracted from Medical Subject Headings such as “Distance learning,” “Distance education,” “Online learning,” “Online education,” and “COVID-19” from the earliest date to July 10, 2022. The quality of the studies included in this review was evaluated using the appraisal tool for cross-sectional studies (AXIS tool).
Results
A total of 15,473 medical science students were enrolled in 24 studies. Their overall level of satisfaction with e-learning during the COVID-19 pandemic was 51.8%. Factors such as age, gender, clinical year, experience with e-learning before COVID-19, level of study, adaptation of course material content, interactivity, understanding of the content, active participation of the instructor in the discussion, multimedia use in teaching sessions, adequate time dedicated to e-learning, stress perception, and convenience had significant relationships with medical students’ satisfaction with e-learning during the COVID-19 pandemic.
Conclusion
Given the inevitability of online education and e-learning, it is suggested that educational managers and policymakers examine the various studies in this field to choose the online education methods best suited to medical students, thereby increasing their satisfaction with e-learning.

Citations to this article as recorded by  
  • Factors affecting medical students’ satisfaction with online learning: a regression analysis of a survey
    Özlem Serpil Çakmakkaya, Elif Güzel Meydanlı, Ali Metin Kafadar, Mehmet Selman Demirci, Öner Süzer, Muhlis Cem Ar, Muhittin Onur Yaman, Kaan Can Demirbaş, Mustafa Sait Gönen
    BMC Medical Education.2024;[Epub]     CrossRef
  • A comparative study on the effectiveness of online and in-class team-based learning on student performance and perceptions in virtual simulation experiments
    Jing Shen, Hongyan Qi, Ruhuan Mei, Cencen Sun
    BMC Medical Education.2024;[Epub]     CrossRef
  • Pharmacy Students’ Attitudes Toward Distance Learning After the COVID-19 Pandemic: Cross-Sectional Study From Saudi Arabia
    Saud Alsahali, Salman Almutairi, Salem Almutairi, Saleh Almofadhi, Mohammed Anaam, Mohammed Alshammari, Suhaj Abdulsalim, Yasser Almogbel
    JMIR Formative Research.2024; 8: e54500.     CrossRef
  • Effects of the First Wave of the COVID-19 Pandemic on the Work Readiness of Undergraduate Nursing Students in China: A Mixed-Methods Study
    Lifang He, Jean Rizza Dela Cruz
    Risk Management and Healthcare Policy.2024; 17: 559.     CrossRef
  • Physician Assistant Students’ Perception of Online Didactic Education: A Cross-Sectional Study
    Daniel L Anderson, Jeffrey L Alexander
    Cureus.2023;[Epub]     CrossRef
  • Mediating Role of PERMA Wellbeing in the Relationship between Insomnia and Psychological Distress among Nursing College Students
    Qian Sun, Xiangyu Zhao, Yiming Gao, Di Zhao, Meiling Qi
    Behavioral Sciences.2023; 13(9): 764.     CrossRef
Educational/Faculty development material
Common models and approaches for the clinical educator to plan effective feedback encounters  
Cesar Orsini, Veena Rodrigues, Jorge Tricio, Margarita Rosel
J Educ Eval Health Prof. 2022;19:35.   Published online December 19, 2022
DOI: https://doi.org/10.3352/jeehp.2022.19.35
  • 3,953 View
  • 609 Download
  • 1 Web of Science
  • 2 Crossref
Giving constructive feedback is crucial for learners to bridge the gap between their current performance and the desired standards of competence. Giving effective feedback is a skill that can be learned, practiced, and improved. Therefore, our aim was to explore feedback models used in clinical settings and assess their transferability to different clinical feedback encounters. We identified the 6 most common and accepted feedback models: the Feedback Sandwich, the Pendleton Rules, the One-Minute Preceptor, the SET-GO model, the R2C2 (Rapport/Reaction/Content/Coach) model, and the ALOBA (Agenda Led Outcome-Based Analysis) model. We present a handy resource describing each model’s structure, strengths and weaknesses, requirements for educators and learners, and the feedback encounters for which it is best suited. These feedback models represent practical frameworks for educators to adopt, but also to adapt to their preferred style, combining and modifying them if necessary to suit their needs and context.

Citations to this article as recorded by  
  • Navigating power dynamics between pharmacy preceptors and learners
    Shane Tolleson, Mabel Truong, Natalie Rosario
    Exploratory Research in Clinical and Social Pharmacy.2024; 13: 100408.     CrossRef
  • Feedback conversations: First things first?
    Katharine A. Robb, Marcy E. Rosenbaum, Lauren Peters, Susan Lenoch, Donna Lancianese, Jane L. Miller
    Patient Education and Counseling.2023; 115: 107849.     CrossRef
Brief report
Self-directed learning quotient and common learning types of pre-medical students in Korea by the Multi-Dimensional Learning Strategy Test 2nd edition: a descriptive study
Sun Kim, A Ra Cho, Chul Woon Chung
J Educ Eval Health Prof. 2022;19:32.   Published online November 28, 2022
DOI: https://doi.org/10.3352/jeehp.2022.19.32
  • 1,255 View
  • 125 Download
This study aimed to determine the self-directed learning quotient and common learning types of pre-medical students by examining 4 characteristics of learning strategies: personality, motivation, emotion, and behavior. Response data were collected from 277 of 294 targeted first-year pre-medical students from 2019 to 2021, using the Multi-Dimensional Learning Strategy Test 2nd edition. The learning types, in order of frequency, were the self-directed type (44.0%), stagnant type (33.9%), latent type (14.4%), and conscientiousness type (7.6%). The self-directed learning index was distributed as follows: high (29.2%), moderate (24.6%), somewhat high (21.7%), somewhat low (14.4%), and low (10.1%). This study confirmed that many students lacked self-directed learning capabilities in their learning strategies. In addition, the difficulties experienced by each student differed, and the variables producing those difficulties were also diverse. These findings may provide insights into how to develop programs that help students increase their self-directed learning capability.
Research articles
Is online objective structured clinical examination teaching an acceptable replacement in post-COVID-19 medical education in the United Kingdom?: a descriptive study  
Vashist Motkur, Aniket Bharadwaj, Nimalesh Yogarajah
J Educ Eval Health Prof. 2022;19:30.   Published online November 7, 2022
DOI: https://doi.org/10.3352/jeehp.2022.19.30
  • 1,492 View
  • 127 Download
  • 2 Web of Science
  • 2 Crossref
Purpose
Coronavirus disease 2019 (COVID-19) restrictions resulted in an increased emphasis on virtual communication in medical education. This study assessed the acceptability of virtual teaching in an online objective structured clinical examination (OSCE) series and its role in future education.
Methods
Six surgical OSCE stations were designed, covering common surgical topics, with specific tasks testing data interpretation, clinical knowledge, and communication skills. These were delivered via Zoom to students who participated in student/patient/examiner role-play. Feedback was collected by asking students to compare online teaching with previous experiences of in-person teaching. Descriptive statistics were used for Likert response data, and thematic analysis for free-text items.
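As a sketch of the descriptive step, a single Likert item could be summarized as percentages per response option as shown below; the response file, column name, and labels are hypothetical placeholders.

    # Percentage of respondents per Likert option for one feedback item.
    import pandas as pd

    fb = pd.read_csv("osce_feedback.csv")
    levels = ["Strongly disagree", "Disagree", "Neutral", "Agree", "Strongly agree"]
    pct = (fb["prefer_online_instructions"]
           .value_counts(normalize=True)
           .reindex(levels, fill_value=0)
           .mul(100)
           .round(1))
    print(pct)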
Results
Sixty-two students provided feedback, with 81% of respondents finding online instructions preferable to paper equivalents. Furthermore, 65% and 68% found online teaching more efficient and more accessible, respectively, than in-person teaching. Only 34% found communication with each other easier online, and 40% preferred online OSCE teaching to in-person teaching. Students also provided both positive and negative free-text comments.
Conclusion
The data suggested that students were generally unwilling to see online teaching completely replace in-person teaching. The success of online teaching depended on the clinical skill being addressed; some skills were less amenable to a virtual setting. However, online OSCE teaching could play a role alongside in-person teaching.

Citations to this article as recorded by  
  • Feasibility and reliability of the pandemic-adapted online-onsite hybrid graduation OSCE in Japan
    Satoshi Hara, Kunio Ohta, Daisuke Aono, Toshikatsu Tamai, Makoto Kurachi, Kimikazu Sugimori, Hiroshi Mihara, Hiroshi Ichimura, Yasuhiko Yamamoto, Hideki Nomura
    Advances in Health Sciences Education.2023;[Epub]     CrossRef
  • Should Virtual Objective Structured Clinical Examination (OSCE) Teaching Replace or Complement Face-to-Face Teaching in the Post-COVID-19 Educational Environment: An Evaluation of an Innovative National COVID-19 Teaching Programme
    Charles Gamble, Alice Oatham, Raj Parikh
    Cureus.2023;[Epub]     CrossRef
Acceptability of the 8-case objective structured clinical examination of medical students in Korea using generalizability theory: a reliability study  
Song Yi Park, Sang-Hwa Lee, Min-Jeong Kim, Ki-Hwan Ji, Ji Ho Ryu
J Educ Eval Health Prof. 2022;19:26.   Published online September 8, 2022
DOI: https://doi.org/10.3352/jeehp.2022.19.26
  • 2,035 View
  • 207 Download
  • 1 Web of Science
  • 1 Crossref
Purpose
This study used generalizability theory (GT) to investigate whether reliability remained acceptable when the number of cases in the objective structured clinical examination (OSCE) decreased from 12 to 8.
Methods
This psychometric study analyzed the OSCE data of 439 fourth-year medical students collected in the Busan and Gyeongnam areas of South Korea from July 12 to 15, 2021. The generalizability study (G-study) considered 3 facets—students (p), cases (c), and items (i)—and used a p×(i:c) design because items were nested within cases. The acceptable generalizability (G) coefficient was set at 0.70. The G-study and decision study (D-study) were performed using G String IV ver. 6.3.8 (Papawork, Hamilton, ON, Canada).
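To make the D-study logic concrete, the relative G coefficient for a p×(i:c) design can be projected from estimated variance components as sketched below; the numeric variance components are illustrative placeholders, not the study’s estimates.

    # Relative G coefficient for a p x (i:c) design, as projected in a D-study.
    def g_coefficient(var_p, var_pc, var_pic, n_cases, n_items):
        """var(p) / (var(p) + var(pc)/nc + var(pi:c)/(nc*ni))."""
        rel_error = var_pc / n_cases + var_pic / (n_cases * n_items)
        return var_p / (var_p + rel_error)

    # Illustrative projection: more items per case raises reliability.
    for n_items in (15, 18, 21):
        g = g_coefficient(var_p=0.5, var_pc=0.05, var_pic=8.0,
                          n_cases=8, n_items=n_items)
        print(f"8 cases x {n_items} items/case -> G = {g:.2f}")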
Results
All G coefficients except for July 14 (0.69) were above 0.70. The major sources of variance components (VCs) were items nested in cases (i:c), from 51.34% to 57.70%, and residual error (pi:c), from 39.55% to 43.26%. The proportion of VCs in cases was negligible, ranging from 0% to 2.03%.
Conclusion
Although the number of cases decreased in the 2021 Busan and Gyeongnam OSCE, reliability remained acceptable. In the D-study, reliability was maintained at 0.70 or higher with more than 21 items per case across 8 cases, or more than 18 items per case across 9 cases. However, according to the G-study, increasing the number of items nested within cases, rather than the number of cases, could further improve reliability. The consortium needs to maintain a case bank with various items to implement a reliable blueprinting combination for the OSCE.

Citations to this article as recorded by  
  • Applying the Generalizability Theory to Identify the Sources of Validity Evidence for the Quality of Communication Questionnaire
    Flávia Del Castanhel, Fernanda R. Fonseca, Luciana Bonnassis Burg, Leonardo Maia Nogueira, Getúlio Rodrigues de Oliveira Filho, Suely Grosseman
    American Journal of Hospice and Palliative Medicine®.2023;[Epub]     CrossRef
Medical students’ self-assessed efficacy and satisfaction with training on endotracheal intubation and central venous catheterization with smart glasses in Taiwan: a non-equivalent control-group pre- and post-test study  
Yu-Fan Lin, Chien-Ying Wang, Yen-Hsun Huang, Sheng-Min Lin, Ying-Ying Yang
J Educ Eval Health Prof. 2022;19:25.   Published online September 2, 2022
DOI: https://doi.org/10.3352/jeehp.2022.19.25
  • 2,844 View
  • 226 Download
  • 1 Web of Science
  • 1 Crossref
Purpose
Endotracheal intubation and central venous catheterization are essential procedures in clinical practice. Simulation-based technology such as smart glasses has been used to facilitate medical students’ training on these procedures. We investigated medical students’ self-assessed efficacy and satisfaction regarding the practice and training of these procedures with smart glasses in Taiwan.
Methods
This observational study enrolled 145 medical students in their 5th and 6th years participating in clerkships at Taipei Veterans General Hospital between October 2020 and December 2021. Students were assigned to either the smart glasses group or the control group and received training at a workshop. The primary outcomes included students’ pre- and post-intervention scores for self-assessed efficacy and satisfaction with the training tool, the instructor’s teaching, and the workshop.
Results
The pre-intervention self-assessed efficacy scores of 5th- and 6th-year medical students for endotracheal intubation and central venous catheterization showed no significant difference. The post-intervention self-assessed efficacy score in the smart glasses group was higher than that of the control group. Moreover, 6th-year medical students in the smart glasses group reported higher satisfaction with the training tool, the instructor’s teaching, and the workshop than those in the control group.
Conclusion
Smart glasses served as a suitable simulation tool for training medical students in endotracheal intubation and central venous catheterization. Medical students practicing with smart glasses showed improved self-assessed efficacy and higher satisfaction with training, especially for procedural steps in a space-limited field. Simulation training on procedural skills with smart glasses for 5th-year medical students may need to be adjusted to improve their satisfaction.

Citations to this article as recorded by  
  • The use of smart glasses in nursing education: A scoping review
    Charlotte Romare, Lisa Skär
    Nurse Education in Practice.2023; 73: 103824.     CrossRef
Brief report
Educational impact of an active learning session with 6-lead mobile electrocardiography on medical students’ knowledge of cardiovascular physiology during the COVID-19 pandemic in the United States: a survey-based observational study  
Alexandra Camille Greb, Emma Altieri, Irene Masini, Emily Helena Frisch, Milton Leon Greenberg
J Educ Eval Health Prof. 2022;19:12.   Published online June 20, 2022
DOI: https://doi.org/10.3352/jeehp.2022.19.12
  • 2,554 View
  • 235 Download
  • 1 Web of Science
  • 1 Crossref
Mobile electrocardiogram (ECG) devices are valuable tools for teaching ECG interpretation. The primary purpose of this follow-up study was to determine whether an ECG active learning session could be safely and effectively conducted during the coronavirus disease 2019 (COVID-19) pandemic using a newly developed mobile 6-lead ECG device. Additionally, we examined the educational impact of these active learning sessions on students’ knowledge of cardiovascular physiology and the utility of the mobile 6-lead ECG device in a classroom setting. In this study, first-year medical students (MS1s) performed 4 active learning activities using the new mobile 6-lead ECG device. Data were collected from 42 MS1s through a quantitative survey administered in September 2020. Overall, students felt that the activity enhanced their understanding of the course material and that it was performed safely and in compliance with local COVID-19 guidelines. These results emphasize students’ preference for hands-on, small-group learning activities despite the pandemic.

Citations to this article as recorded by  
  • Medical student exam performance and perceptions of a COVID-19 pandemic-appropriate pre-clerkship medical physiology and pathophysiology curriculum
    Melissa Chang, Andrew Cuyegkeng, Joseph A. Breuer, Arina Alexeeva, Abigail R. Archibald, Javier J. Lepe, Milton L. Greenberg
    BMC Medical Education.2022;[Epub]     CrossRef
