JEEHP : Journal of Educational Evaluation for Health Professions

Search results: 19 articles for "Feedback"
Research articles
Discovering social learning ecosystems during clinical clerkship from United States medical students’ feedback encounters: a content analysis  
Anna Therese Cianciolo, Heeyoung Han, Lydia Anne Howes, Debra Lee Klamen, Sophia Matos
J Educ Eval Health Prof. 2024;21:5.   Published online February 28, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.5
  • 554 View
  • 122 Download
Purpose
We examined United States medical students’ self-reported feedback encounters during clerkship training to better understand in situ feedback practices. Specifically, we asked: Who do students receive feedback from, about what, when, where, and how do they use it? We explored whether curricular expectations for preceptors’ written commentary aligned with feedback as it occurs naturalistically in the workplace.
Methods
This study occurred from July 2021 to February 2022 at Southern Illinois University School of Medicine. We used qualitative survey-based experience sampling to gather students’ accounts of their feedback encounters in 8 core specialties. We analyzed the who, what, when, where, and why of 267 feedback encounters reported by 11 clerkship students over 30 weeks. Code frequencies were mapped qualitatively to explore patterns in feedback encounters.
Results
Clerkship feedback occurs in patterns apparently related to the nature of clinical work in each specialty. These patterns may be attributable to each specialty’s “social learning ecosystem”—the distinctive learning environment shaped by the social and material aspects of a given specialty’s work, which determine who preceptors are, what students do with preceptors, and what skills or attributes matter enough to preceptors to comment on.
Conclusion
Comprehensive, standardized expectations for written feedback across specialties conflict with the reality of workplace-based learning. Preceptors may be better able—and more motivated—to document student performance that occurs as a natural part of everyday work. Nurturing social learning ecosystems could facilitate workplace-based learning such that, across specialties, students acquire a comprehensive clinical skillset appropriate for graduation.
Medical students’ patterns of using ChatGPT as a feedback tool and perceptions of ChatGPT in a Leadership and Communication course in Korea: a cross-sectional study  
Janghee Park
J Educ Eval Health Prof. 2023;20:29.   Published online November 10, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.29
  • 1,356 View
  • 136 Download
  • 2 Web of Science
  • 4 Crossref
AbstractAbstract PDFSupplementary Material
Purpose
This study aimed to analyze patterns of using ChatGPT before and after group activities and to explore medical students’ perceptions of ChatGPT as a feedback tool in the classroom.
Methods
The study included 99 2nd-year premedical students who participated in a “Leadership and Communication” course from March to June 2023. Students engaged in both individual and group activities related to negotiation strategies, and ChatGPT was used to provide feedback on their solutions. A survey was administered from May 17 to 19, 2023, to assess students’ perceptions of ChatGPT’s feedback, its use in the classroom, and the strengths and challenges of ChatGPT.
Results
The students indicated that ChatGPT’s feedback was helpful, and they revised and resubmitted their group answers in various ways after receiving it. The majority of respondents agreed with the use of ChatGPT during class. The most common response concerning the appropriate context for using ChatGPT’s feedback was “after the first round of discussion, for revisions.” Satisfaction with ChatGPT’s feedback (including its correctness, usefulness, and ethics) differed significantly depending on whether or not ChatGPT was used during class, but it did not differ significantly by gender or by students’ previous experience with ChatGPT. The strongest perceived advantages were “providing answers to questions” and “summarizing information,” and the most serious disadvantage was “producing information without supporting evidence.”
Conclusion
The students were aware of the advantages and disadvantages of ChatGPT, and they had a positive attitude toward using ChatGPT in the classroom.

Citations to this article as recorded by
  • Opportunities, challenges, and future directions of large language models, including ChatGPT in medical education: a systematic scoping review
    Xiaojun Xu, Yixiao Chen, Jing Miao
    Journal of Educational Evaluation for Health Professions.2024; 21: 6.     CrossRef
  • Embracing ChatGPT for Medical Education: Exploring Its Impact on Doctors and Medical Students
    Yijun Wu, Yue Zheng, Baijie Feng, Yuqi Yang, Kai Kang, Ailin Zhao
    JMIR Medical Education.2024; 10: e52483.     CrossRef
  • ChatGPT and Clinical Training: Perception, Concerns, and Practice of Pharm-D Students
    Mohammed Zawiah, Fahmi Al-Ashwal, Lobna Gharaibeh, Rana Abu Farha, Karem Alzoubi, Khawla Abu Hammour, Qutaiba A Qasim, Fahd Abrah
    Journal of Multidisciplinary Healthcare.2023; Volume 16: 4099.     CrossRef
  • Information amount, accuracy, and relevance of generative artificial intelligence platforms’ answers regarding learning objectives of medical arthropodology evaluated in English and Korean queries in December 2023: a descriptive study
    Hyunju Lee, Soobin Park
    Journal of Educational Evaluation for Health Professions.2023; 20: 39.     CrossRef
Enhancement of the technical and non-technical skills of nurse anesthesia students using the Anesthetic List Management Assessment Tool in Iran: a quasi-experimental study  
Ali Khalafi, Maedeh Kordnejad, Vahid Saidkhani
J Educ Eval Health Prof. 2023;20:19.   Published online June 16, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.19
  • 1,086 View
  • 80 Download
Purpose
This study investigated the effect of evaluations based on the Anesthetic List Management Assessment Tool (ALMAT) form on improving the technical and non-technical skills of final-year nurse anesthesia students at Ahvaz Jundishapur University of Medical Sciences (AJUMS).
Methods
This was a quasi-experimental study with a pre-test and post-test design. It included 45 final-year nurse anesthesia students at AJUMS and lasted 3 months. The technical and non-technical skills of the intervention group were assessed at 4 university hospitals using formative-feedback evaluation based on the ALMAT form, from induction of anesthesia until the students reached mastery and independence. Finally, the degree of improvement in technical and non-technical skills was compared between the intervention and control groups. Statistical tests (the independent t-test, paired t-test, and Mann-Whitney test) were used to analyze the data.
Results
The rate of improvement in post-test scores of technical skills was significantly higher in the intervention group than in the control group (P<0.0001). Similarly, the students in the intervention group received significantly higher post-test scores for non-technical skills than the students in the control group (P<0.0001).
Conclusion
The findings of this study showed that the use of ALMAT as a formative-feedback evaluation method to evaluate technical and non-technical skills had a significant effect on improving these skills and was effective in helping students learn and reach mastery and independence.
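To make the comparisons named in the Methods concrete, here is a minimal sketch in Python (using NumPy and SciPy, which are our tool choices, not the authors'). It applies the listed tests to entirely hypothetical pre-test/post-test scores; the group sizes, score scale, and variable names are invented for illustration only.

```python
# Illustrative sketch only: hypothetical scores, not the study's dataset.
# Compares pre-to-post improvement in skill scores between an intervention
# and a control group using the tests named in the abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical pre/post technical-skill scores (0-100) for two groups
pre_intervention = rng.normal(60, 8, 23)
post_intervention = pre_intervention + rng.normal(15, 5, 23)   # larger gain
pre_control = rng.normal(60, 8, 22)
post_control = pre_control + rng.normal(5, 5, 22)              # smaller gain

# Paired t-test: did the intervention group improve from pre-test to post-test?
t_paired, p_paired = stats.ttest_rel(post_intervention, pre_intervention)

# Independent t-test on improvement scores: did the intervention group improve more?
gain_int = post_intervention - pre_intervention
gain_ctl = post_control - pre_control
t_ind, p_ind = stats.ttest_ind(gain_int, gain_ctl)

# Mann-Whitney U test as a non-parametric alternative
u_stat, p_mw = stats.mannwhitneyu(gain_int, gain_ctl, alternative="two-sided")

print(f"Paired t-test (intervention pre vs. post): t={t_paired:.2f}, P={p_paired:.4f}")
print(f"Independent t-test (gain, intervention vs. control): t={t_ind:.2f}, P={p_ind:.4f}")
print(f"Mann-Whitney U (gain, intervention vs. control): U={u_stat:.1f}, P={p_mw:.4f}")
```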
What impacts students’ satisfaction the most from Medicine Student Experience Questionnaire in Australia: a validity study  
Pin-Hsiang Huang, Gary Velan, Greg Smith, Melanie Fentoullis, Sean Edward Kennedy, Karen Jane Gibson, Kerry Uebel, Boaz Shulruf
J Educ Eval Health Prof. 2023;20:2.   Published online January 18, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.2
  • 1,477 View
  • 127 Download
  • 1 Web of Science
  • 1 Crossref
Purpose
This study evaluated the validity of student feedback derived from the Medicine Student Experience Questionnaire (MedSEQ), as well as the predictors of students’ satisfaction with the Medicine program.
Methods
Data from MedSEQ administered in the University of New South Wales Medicine program in 2017, 2019, and 2021 were analyzed. Confirmatory factor analysis (CFA) and Cronbach’s α were used to assess the construct validity and reliability of MedSEQ, respectively. Hierarchical multiple linear regressions were used to identify the factors that most strongly impact students’ overall satisfaction with the program.
Results
A total of 1,719 students (34.50%) responded to MedSEQ. CFA showed good fit indices (root mean square error of approximation=0.051; comparative fit index=0.939; chi-square/degrees of freedom=6.429). All factors yielded good (α>0.7) or very good (α>0.8) levels of reliability, except the “online resources” factor, which had acceptable reliability (α=0.687). A multiple linear regression model with only demographic characteristics explained 3.8% of the variance in students’ overall satisfaction, whereas the model adding 8 domains from MedSEQ explained 40%, indicating that 36.2% of the variance was attributable to students’ experience across the 8 domains. Three domains had the strongest impact on overall satisfaction: “being cared for,” “satisfaction with teaching,” and “satisfaction with assessment” (β=0.327, 0.148, 0.148, respectively; all with P<0.001).
Conclusion
MedSEQ has good construct validity and high reliability, reflecting students’ satisfaction with the Medicine program. Key factors impacting students’ satisfaction are the perception of being cared for, quality teaching irrespective of the mode of delivery, and fair assessment tasks that enhance learning.
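To illustrate the hierarchical regression logic behind these figures (R² rising from 3.8% with demographics alone to 40% after adding the 8 domains, a 36.2-point increment attributable to the domains), here is a minimal sketch using statsmodels on simulated data. The column names, coefficients, and sample size are assumptions for illustration only and are unrelated to the actual MedSEQ dataset.

```python
# Illustrative sketch only: hypothetical data, not the MedSEQ dataset.
# Fit a model with demographic predictors only, then add the 8 experience
# domains and compare R-squared values; the increase is the variance
# attributable to the domains.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 500

df = pd.DataFrame({
    "age": rng.integers(18, 30, n),
    "female": rng.integers(0, 2, n),
    # Hypothetical stand-ins for the 8 MedSEQ domains (e.g., being cared for, teaching, assessment)
    **{f"domain_{i}": rng.normal(3.5, 0.8, n) for i in range(1, 9)},
})
# Overall satisfaction driven mostly by the domains, weakly by demographics
df["satisfaction"] = (
    0.02 * df["age"] + 0.05 * df["female"]
    + 0.4 * df["domain_1"] + 0.2 * df["domain_2"] + 0.2 * df["domain_3"]
    + rng.normal(0, 0.6, n)
)

base = smf.ols("satisfaction ~ age + female", data=df).fit()
domains = " + ".join(f"domain_{i}" for i in range(1, 9))
full = smf.ols(f"satisfaction ~ age + female + {domains}", data=df).fit()

print(f"R2 (demographics only): {base.rsquared:.3f}")
print(f"R2 (demographics + 8 domains): {full.rsquared:.3f}")
print(f"Variance attributable to the domains: {full.rsquared - base.rsquared:.3f}")
```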

Citations to this article as recorded by
  • Mental health and quality of life across 6 years of medical training: A year-by-year analysis
    Natalia de Castro Pecci Maddalena, Alessandra Lamas Granero Lucchetti, Ivana Lucia Damasio Moutinho, Oscarina da Silva Ezequiel, Giancarlo Lucchetti
    International Journal of Social Psychiatry.2024; 70(2): 298.     CrossRef
Educational/Faculty development material
Common models and approaches for the clinical educator to plan effective feedback encounters  
Cesar Orsini, Veena Rodrigues, Jorge Tricio, Margarita Rosel
J Educ Eval Health Prof. 2022;19:35.   Published online December 19, 2022
DOI: https://doi.org/10.3352/jeehp.2022.19.35
  • 4,543 View
  • 635 Download
  • 2 Web of Science
  • 2 Crossref
Giving constructive feedback is crucial for learners to bridge the gap between their current performance and the desired standards of competence. Giving effective feedback is a skill that can be learned, practiced, and improved. Therefore, our aim was to explore feedback models used in clinical settings and assess their transferability to different clinical feedback encounters. We identified the 6 most common and accepted feedback models: the Feedback Sandwich, the Pendleton Rules, the One-Minute Preceptor, the SET-GO model, the R2C2 (Rapport/Reaction/Content/Coach) model, and the ALOBA (Agenda Led Outcome-based Analysis) model. We present a practical resource describing each model’s structure, strengths and weaknesses, requirements for educators and learners, and the feedback encounters for which it is best suited. These feedback models offer practical frameworks that educators can adopt and also adapt to their preferred style, combining and modifying them as necessary to suit their needs and context.

Citations to this article as recorded by
  • Navigating power dynamics between pharmacy preceptors and learners
    Shane Tolleson, Mabel Truong, Natalie Rosario
    Exploratory Research in Clinical and Social Pharmacy.2024; 13: 100408.     CrossRef
  • Feedback conversations: First things first?
    Katharine A. Robb, Marcy E. Rosenbaum, Lauren Peters, Susan Lenoch, Donna Lancianese, Jane L. Miller
    Patient Education and Counseling.2023; 115: 107849.     CrossRef
Research articles
Content validity test of a safety checklist for simulated participants in simulation-based education in the United Kingdom: a methodological study
Matthew Bradley
J Educ Eval Health Prof. 2022;19:21.   Published online August 25, 2022
DOI: https://doi.org/10.3352/jeehp.2022.19.21
  • 1,607 View
  • 157 Download
Purpose
Simulation training is an ever-growing component of healthcare education and often involves simulated participants (SPs), commonly known as actors. Simulation-based education (SBE) can sometimes endanger SPs, so we created a safety checklist for them to follow. This study describes how we developed the checklist through a quality improvement project and then evaluated feedback responses to assess whether SPs felt safe when using our checklist.
Methods
The checklist was provided to SPs working in an acute trust simulation service delivering multidisciplinary SBE over 4 months. Using multiple plan–do–study–act cycles, the checklist was refined by reflecting on SP feedback to ensure that the standards of safe simulation were met. We collected 21 responses from September to December 2021 after SPs completed an SBE event.
Results
The responses showed that 100% of SPs felt safe during SBE when using our checklist. The average “confidence in safety” rating before using the checklist was 6.8/10, which increased significantly to 9.2/10 after using the checklist (P<0.0005). The checklist was refined throughout the 4 months and implemented in adult and pediatric SBE as a standard operating procedure.
Conclusion
We recommend using our safety checklist as a standard operating procedure to improve the confidence and safety of SPs during simulation-based education.
Medical residents and attending physicians’ perceptions of feedback and teaching in the United States: a qualitative study  
Madeleine Matthiesen, Michael S. Kelly, Kristina Dzara, Arabella Simpkin Begin
J Educ Eval Health Prof. 2022;19:9.   Published online April 26, 2022
DOI: https://doi.org/10.3352/jeehp.2022.19.9
  • 8,259 View
  • 355 Download
  • 1 Web of Science
  • 1 Crossref
Purpose
Residents and attendings agree on the importance of feedback to resident education. However, while faculty report providing frequent feedback, residents often do not perceive receiving it, particularly in the context of teaching. Given the nuanced differences between feedback and teaching, we aimed to explore resident and attending perceptions of feedback and teaching in the clinical setting.
Methods
We conducted a qualitative study of internal medicine residents and attendings from December 2018 through March 2019 at the Massachusetts General Hospital to investigate perceptions of feedback in the inpatient clinical setting. Residents and faculty were recruited to participate in focus groups. Data were analyzed using thematic analysis to explore perspectives and barriers to feedback provision and identification.
Results
Five focus groups included 33 total participants in 3 attending (n=20) and 2 resident (n=13) groups. Thematic analysis of focus group transcripts identified 7 themes, which were organized into 3 thematic categories: (1) disentangling feedback and teaching, (2) delivering high-quality feedback, and (3) experiencing feedback in the group setting. Residents and attendings highlighted important themes in distinguishing feedback from teaching. They indicated that while feedback is reactive, responding to an action or behavior, teaching is proactive and oriented toward future endeavors.
Conclusion
Confusion between the critical concepts of teaching and feedback may be minimized by allowing them to each have their intended impact, either in response to prior events or aimed toward those yet to take place.

Citations to this article as recorded by
  • Resident Assessment of Clinician Educators According to Core ACGME Competencies
    Bailey A. Pope, Patricia A. Carney, Mary C. Brooks, Doug R. Rice, Ashly A. Albright, Stephanie A. C. Halvorson
    Journal of General Internal Medicine.2024; 39(3): 377.     CrossRef
Educational/Faculty development material
Using a virtual flipped classroom model to promote critical thinking in online graduate courses in the United States: a case presentation  
Jennifer Tomesko, Deborah Cohen, Jennifer Bridenbaugh
J Educ Eval Health Prof. 2022;19:5.   Published online February 28, 2022
DOI: https://doi.org/10.3352/jeehp.2022.19.5
  • 4,470 View
  • 462 Download
  • 4 Web of Science
  • 5 Crossref
Flipped classroom models encourage student autonomy and reverse the order of traditional classroom content such as lectures and assignments. Virtual learning environments are ideal for executing flipped classroom models to improve critical thinking skills. This paper provides health professions faculty with guidance on developing a virtual flipped classroom, based on online graduate nutrition courses taught between September 2021 and January 2022 at the School of Health Professions, Rutgers, The State University of New Jersey. Examples of pre-class, live virtual face-to-face, and post-class activities are provided. Active learning, immediate feedback, and enhanced student engagement in a flipped classroom may lead to a more thorough synthesis of information and, in turn, stronger critical thinking skills. This article describes how a flipped classroom model that incorporates virtual face-to-face class sessions in online graduate courses can be used to promote critical thinking skills. Health professions faculty who teach online can apply the examples discussed to their own courses.

Citations to this article as recorded by
  • A scoping review of educational programmes on artificial intelligence (AI) available to medical imaging staff
    G. Doherty, L. McLaughlin, C. Hughes, J. McConnell, R. Bond, S. McFadden
    Radiography.2024; 30(2): 474.     CrossRef
  • Inculcating Critical Thinking Skills in Medical Students: Ways and Means
    Mandeep Kaur, Rajiv Mahajan
    International Journal of Applied & Basic Medical Research.2023; 13(2): 57.     CrossRef
  • Promoting students’ critical thinking and scientific attitudes through socio-scientific issues-based flipped classroom
    Nurfatimah Sugrah, Suyanta, Antuni Wiyarsi
    LUMAT: International Journal on Math, Science and Technology Education.2023;[Epub]     CrossRef
  • Análisis bibliométrico de la producción científica mundial sobre el aula invertida en la educación médica
    Gloria Katty Muñoz-Estrada, Hugo Eladio Chumpitaz Caycho, John Barja-Ore, Natalia Valverde-Espinoza, Liliana Verde-Vargas, Frank Mayta-Tovalino
    Educación Médica.2022; 23(5): 100758.     CrossRef
  • Effect of a flipped classroom course to foster medical students’ AI literacy with a focus on medical imaging: a single group pre-and post-test study
    Matthias C. Laupichler, Dariusch R. Hadizadeh, Maximilian W. M. Wintergerst, Leon von der Emde, Daniel Paech, Elizabeth A. Dick, Tobias Raupach
    BMC Medical Education.2022;[Epub]     CrossRef
Brief Report
Potential of feedback during objective structured clinical examination to evoke an emotional response in medical students in Canada  
Dalia Limor Karol, Debra Pugh
J Educ Eval Health Prof. 2020;17:5.   Published online February 18, 2020
DOI: https://doi.org/10.3352/jeehp.2020.17.5
  • 6,528 View
  • 164 Download
  • 3 Web of Science
  • 2 Crossref
Feedback has been shown to be an important driver for learning. However, many factors, such as the emotional reactions feedback evokes, may impact its effect. This study aimed to explore medical students’ perspectives on the verbal feedback they receive during an objective structured clinical examination (OSCE); their emotional reaction to this; and its impact on their subsequent performance. To do this, medical students enrolled at 4 Canadian medical schools were invited to complete a web-based survey regarding their experiences. One hundred and fifty-eight participants completed the survey. Twenty-nine percent of respondents asserted that they had experienced emotional reactions to verbal feedback received in an OSCE setting. The most common emotional responses reported were embarrassment and anxiousness. Some students (n=20) reported that the feedback they received negatively impacted subsequent OSCE performance. This study demonstrates that feedback provided during an OSCE can evoke an emotional response in students and potentially impact subsequent performance.

Citations to this article as recorded by
  • Memory, credibility and insight: How video-based feedback promotes deeper reflection and learning in objective structured clinical exams
    Alexandra Makrides, Peter Yeates
    Medical Teacher.2022; 44(6): 664.     CrossRef
  • Objective structured clinical examination in fundamentals of nursing and obstetric care as method of verification and assessing the degree of achievement of learning outcomes
    Lucyna Sochocka, Teresa Niechwiadowicz-Czapka, Mariola Wojtal, Monika Przestrzelska, Iwona Kiersnowska, Katarzyna Szwamel
    Pielegniarstwo XXI wieku / Nursing in the 21st Century.2021; 20(3): 190.     CrossRef
Research article
Peer-assisted feedback: a successful approach for providing feedback on United States Medical Licensing Exam-style clinical skills exam notes in the United States  
Kira Nagoshi, Zareen Zaidi, Ashleigh Wright, Carolyn Stalvey
J Educ Eval Health Prof. 2019;16:29.   Published online October 8, 2019
DOI: https://doi.org/10.3352/jeehp.2019.16.29
  • 9,816 View
  • 130 Download
  • 3 Web of Science
  • 3 Crossref
Purpose
Peer-assisted learning (PAL) promotes the development of communication, facilitates improvements in clinical skills, and is a way to provide feedback to learners. We utilized PAL as a conceptual framework to explore the feasibility of peer-assisted feedback (PAF) for improving note-writing skills without requiring faculty time. The aim was to assess, based on students’ responses to a survey in the United States, whether PAL was a successful method for providing feedback on United States Medical Licensing Examination (USMLE)-style clinical skills exam notes.
Methods
The University of Florida College of Medicine administers clinical skills examinations (CSEs) that include USMLE-like note-writing. PAL, in which students support the learning of their peers, was utilized as an alternative to faculty feedback. Second-year (MS2) and third-year (MS3) medical students taking CSEs participated in faculty-run note-grading sessions immediately after testing, which included explanations of the grading rubrics and the feedback process. Students graded an anonymized peer’s notes, and the graded material was then forwarded anonymously to its student author to review. During the 2017–2018 academic year, students were surveyed on their perceived ability to provide feedback and the benefits derived from PAF, using a Likert scale (1–6) and open-ended comments.
Results
Students generally felt positive about the activity, with mean scores for items related to educational value of 4.49 for MS2s and 5.11 for MS3s (out of 6). MS3s perceived peer feedback as constructive, felt that evaluating each other’s notes was beneficial, and felt that the exercise would improve their future notes. While still positive, MS2s gave lower scores than MS3s.
Conclusion
PAF was a successful method of providing feedback on student CSE notes, especially for MS3s. MS2s commented that although they learned during the process, they might be more invested in improving their note-writing as they approach their own USMLE exam.

Citations to this article as recorded by
  • Teaching feedback skills to veterinary students by peer-assisted learning
    Aytaç ÜNSAL ADACA
    Ankara Üniversitesi Veteriner Fakültesi Dergisi.2023; 70(3): 237.     CrossRef
  • Pedagogic Exploration Into Adapting Automated Writing Evaluation and Peer Review Integrated Feedback Into Large-Sized University Writing Classes
    Wei-Yan Li, Kevin Kau, Yi-Jiun Shiung
    SAGE Open.2023;[Epub]     CrossRef
  • Benefits of semiology taught using near-peer tutoring are sustainable
    Benjamin Gripay, Thomas André, Marie De Laval, Brice Peneau, Alexandre Secourgeon, Nicolas Lerolle, Cédric Annweiler, Grégoire Justeau, Laurent Connan, Ludovic Martin, Loïc Bière
    BMC Medical Education.2022;[Epub]     CrossRef
Brief report
MEDTalks: a student-driven program to enhance undergraduate student understanding and interest in medical schools in Canada  
Jayson Azzi, Dalia Karol, Tayler Bailey, Christopher Jerome Ramnanan
J Educ Eval Health Prof. 2019;16:13.   Published online May 22, 2019
DOI: https://doi.org/10.3352/jeehp.2019.16.13
  • 13,458 View
  • 187 Download
  • 1 Web of Science
  • 1 Crossref
Given the lack of programs geared towards educating undergraduate students about medical school, the purpose of this study was to evaluate whether a medical student–driven initiative program, MEDTalks, enhanced undergraduate students’ understanding of medical school in Canada and stimulated their interest in pursuing medicine. The MEDTalks program, which ran between January and April 2018 at the University of Ottawa, consisted of 5 teaching sessions, each including large-group lectures, small-group case-based learning, physical skills tutorials, and anatomy lab demonstrations, to mimic the typical medical school curriculum. At the end of the program, undergraduate student learners were invited to complete a feedback questionnaire. Twenty-nine participants provided feedback, of whom 25 reported that MEDTalks allowed them to gain exposure to the University of Ottawa medical program; 27 said that it gave them a greater understanding of the teaching structure; and 25 responded that it increased their interest in attending medical school. The MEDTalks program successfully developed a greater understanding of medical school and helped stimulate interest in pursuing medical studies among undergraduate students.

Citations to this article as recorded by
  • Assessing the Impact of Early Undergraduate Exposure to the Medical School Curriculum
    Christiana M. Cornea, Gary Beck Dallaghan, Thomas Koonce
    Medical Science Educator.2022; 32(1): 103.     CrossRef
Research article
Learning through multiple lenses: analysis of self, peer, near-peer, and faculty assessments of a clinical history-taking task in Australia
Kylie Fitzgerald, Brett Vaughan
J Educ Eval Health Prof. 2018;15:22.   Published online September 18, 2018
DOI: https://doi.org/10.3352/jeehp.2018.15.22
  • 23,369 View
  • 286 Download
  • 4 Web of Science
  • 5 Crossref
Purpose
Peer assessment provides a framework for developing expected skills and receiving feedback appropriate to the learner’s level. Near-peer (NP) assessment may elevate expectations and motivate learning. Feedback from peers and NPs may be a sustainable way to enhance student assessment feedback. This study analysed relationships among self, peer, NP, and faculty marking of an assessment and students’ attitudes towards marking by those various groups.
Methods
A cross-sectional study design was used. Year 2 osteopathy students (n= 86) were invited to perform self and peer assessments of a clinical history-taking and communication skills assessment. NPs and faculty also marked the assessment. Year 2 students also completed a questionnaire on their attitudes to peer/NP marking. Descriptive statistics and the Spearman rho coefficient were used to evaluate relationships across marker groups.
Results
Year 2 students (n= 9), NPs (n= 3), and faculty (n= 5) were recruited. Correlations between self and peer (r= 0.38) and self and faculty (r= 0.43) marks were moderate. A weak correlation was observed between self and NP marks (r= 0.25). Perceptions of peer and NP marking varied, with over half of the cohort suggesting that peer or NP assessments should not contribute to their grade.
Conclusion
Framing peer and NP assessment as another feedback source may offer a sustainable method for enhancing feedback without overloading faculty resources. Multiple sources of feedback may assist in developing assessment literacy and calibrating students’ self-assessment capability. The small number of students recruited suggests some acceptability of peer and NP assessment; however, further work is required to increase its acceptability.
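For illustration, the following minimal sketch computes Spearman's rho between self-assigned marks and peer, near-peer, and faculty marks using SciPy. The marks are simulated and the tool choice is an assumption; only the analytic approach mirrors the Methods above.

```python
# Illustrative sketch only: hypothetical marks, not the study's data.
# Computes Spearman's rho between self-assigned marks and peer, near-peer (NP),
# and faculty marks for the same assessment.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_students = 9  # the study recruited 9 Year 2 students

# Hypothetical marks out of 100 from each marker group for the same students
faculty = rng.normal(70, 8, n_students)
self_marks = faculty + rng.normal(0, 6, n_students)
peer = faculty + rng.normal(0, 5, n_students)
near_peer = faculty + rng.normal(0, 7, n_students)

for label, other in [("peer", peer), ("near-peer", near_peer), ("faculty", faculty)]:
    rho, p = stats.spearmanr(self_marks, other)
    print(f"Self vs. {label}: rho={rho:.2f}, P={p:.3f}")
```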

Citations to this article as recorded by
  • The extent and quality of evidence for osteopathic education: A scoping review
    Andrew MacMillan, Patrick Gauthier, Luciane Alberto, Arabella Gaunt, Rachel Ives, Chris Williams, Dr Jerry Draper-Rodi
    International Journal of Osteopathic Medicine.2023; 49: 100663.     CrossRef
  • History and physical exam: a retrospective analysis of a clinical opportunity
    David McLinden, Krista Hailstone, Sue Featherston
    BMC Medical Education.2023;[Epub]     CrossRef
  • How Accurate Are Our Students? A Meta-analytic Systematic Review on Self-assessment Scoring Accuracy
    Samuel P. León, Ernesto Panadero, Inmaculada García-Martínez
    Educational Psychology Review.2023;[Epub]     CrossRef
  • Evaluating the Academic Performance of Mustansiriyah Medical College Teaching Staff vs. Final-Year Students Failure Rates
    Wassan Nori, Wisam Akram , Saad Mubarak Rasheed, Nabeeha Najatee Akram, Taqi Mohammed Jwad Taher, Mustafa Ali Kassim Kassim, Alexandru Cosmin Pantazi
    Al-Rafidain Journal of Medical Sciences ( ISSN 2789-3219 ).2023; 5(1S): S151.     CrossRef
  • History-taking level and its influencing factors among nursing undergraduates based on the virtual standardized patient testing results: Cross sectional study
    Jingrong Du, Xiaowen Zhu, Juan Wang, Jing Zheng, Xiaomin Zhang, Ziwen Wang, Kun Li
    Nurse Education Today.2022; 111: 105312.     CrossRef
Case report
Formative feedback from the first-person perspective using Google Glass in a family medicine objective structured clinical examination station in the United States  
Julie Youm, Warren Wiechmann
J Educ Eval Health Prof. 2018;15:5.   Published online March 7, 2018
DOI: https://doi.org/10.3352/jeehp.2018.15.5
  • 35,510 View
  • 338 Download
  • 3 Web of Science
Purpose
This case study explored the use of Google Glass in a clinical examination scenario to capture the first-person perspective of a standardized patient as a way to provide formative feedback on students’ communication and empathy skills ‘through the patient’s eyes.’
Methods
During a 3-year period between 2014 and 2017, third-year students enrolled in a family medicine clerkship participated in a Google Glass station during a summative clinical examination. At this station, standardized patients wore Google Glass to record an encounter focused on communication and empathy skills ‘through the patient’s eyes.’ Students completed an online survey using a 4-point Likert scale about their perspectives on Google Glass as a feedback tool (N= 255).
Results
We found that the students’ experiences with Google Glass ‘through the patient’s eyes’ were largely positive and that students felt the feedback provided by the Google Glass recording to be helpful. Although a third of the students felt that Google Glass was a distraction, the majority believed that the first-person perspective recordings provided an opportunity for feedback that did not exist before.
Conclusion
Continuing exploration of first-person perspective recordings using Google Glass to improve education on communication and empathy skills is warranted.
Brief Reports
Perceptions of pharmacy clerkship students and clinical preceptors regarding preceptors’ teaching behaviors at Gondar University in Ethiopia  
Tadesse Melaku, Akshaya Srikanth Bhagavathula, Yonas Getaye, Sewunet Admasu, Ramadan Alkalmi
J Educ Eval Health Prof. 2016;13:9.   Published online February 15, 2016
DOI: https://doi.org/10.3352/jeehp.2016.13.9
  • 31,616 View
  • 213 Download
  • 5 Web of Science
  • 4 Crossref
This study aimed to compare the perceptions of pharmacy clerkship students and clinical preceptors of preceptors’ teaching behaviors at Gondar University. A cross-sectional study was conducted among pharmacy clerkship students and preceptors during June 2014 and December 2015. A 52-item structured questionnaire was self-administered to 126 students and 23 preceptors. The responses are presented using descriptive statistics. The Mann-Whitney U test was applied to test the significance of differences between students and preceptors. The response rate was 89.4% for students and 95.6% for preceptors. Statistically significant differences were observed in the responses regarding two of the five communication skills that were examined, six of the 26 clinical skills, and five of the 21 parameters involving feedback. The mean scores of preceptors (2.6/3) and students (1.9/3) regarding instructors’ ability to answer questions were found to be significantly different (P= 0.01). Students and preceptors gave mean scores of 1.9 and 2.8, respectively, to a question regarding preceptors’ application of appropriate up-to-date knowledge to individual patients (P= 0.00). Significant differences were also noted between students and instructors regarding the degree to which preceptors encouraged students to evaluate their own performance (P= 0.01). Discrepancies were noted between students and preceptors regarding preceptors’ teaching behaviors. Preceptors rated their teaching behaviors more highly than students did. Short-term training is warranted for preceptors to improve some aspects of their teaching skills.

Citations to this article as recorded by
  • Pharmaceutical care journey: Final-year pharmacy students’ experiences of the hospital-based clinical pharmacy clerkship programme in north- east Nigeria
    Roland N Okoro, John David Ohieku, Sani Ibn Yakubu
    Pharmacy Education.2021; 21: 9.     CrossRef
  • Student perceptions of non-technical skills development during advanced pharmacy practice experiences
    Sandy Diec, Pooja H. Patel, Nephy G. Samuel, Jose J. Hernandez-Munoz
    Currents in Pharmacy Teaching and Learning.2021; 13(11): 1510.     CrossRef
  • Measuring and assessing the competencies of preceptors in health professions: a systematic scoping review
    Andrew D. Bartlett, Irene S. Um, Edward J. Luca, Ines Krass, Carl R. Schneider
    BMC Medical Education.2020;[Epub]     CrossRef
  • Pharmacy students’ provision of health promotion counseling services during a community pharmacy clerkship: a cross sectional study, Northwest Ethiopia
    Dessalegn Asmelashe Gelayee, Gashaw Binega Mekonnen
    BMC Medical Education.2018;[Epub]     CrossRef
Changing medical students’ perception of the evaluation culture: Is it possible?  
Jorie M. Colbert-Getz, Steven Baumann
J Educ Eval Health Prof. 2016;13:8.   Published online February 15, 2016
DOI: https://doi.org/10.3352/jeehp.2016.13.8
  • 28,358 View
  • 178 Download
  • 1 Web of Science
  • 1 Crossref
Student feedback is a critical component of the teacher-learner cycle. However, there is no gold standard course or clerkship evaluation form, and there is limited research on the impact of changing the evaluation process. Results from a focus group and a pre-implementation feedback survey, coupled with best practices in survey design, were used to improve all course/clerkship evaluations for the 2013-2014 academic year. In spring 2014, we asked all students at the University of Utah School of Medicine, United States of America, to complete the same feedback survey (post-implementation survey). We assessed the evaluation climate with 3 measures on the feedback survey: overall satisfaction with the evaluation process; the time students gave effort to the process; and the time students used shortcuts. Scores from these measures were compared between 2013 and 2014 with Mann-Whitney U-tests. Response rates were 79% (254) for 2013 and 52% (179) for 2014. Students’ overall satisfaction scores were significantly higher (more positive) post-implementation than pre-implementation (P<0.001). There was no change in the amount of time students gave effort to completing evaluations (P=0.981) and no change in the amount of time they used shortcuts to complete evaluations (P=0.956). We were able to change overall satisfaction with the medical school evaluation culture, but not the amount of time students gave effort to completing evaluations or the amount of time they used shortcuts. To ensure accurate evaluation results, we will need to focus our efforts on the time needed to complete course evaluations across all 4 years.
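As a rough illustration of the pre/post comparison described above, the sketch below applies a Mann-Whitney U test to hypothetical satisfaction ratings for the 2013 and 2014 cohorts; the rating scale and all generated values are assumptions, not the study's survey data.

```python
# Illustrative sketch only: hypothetical Likert ratings, not the study's survey data.
# Compares overall-satisfaction scores from the 2013 (pre-implementation) and
# 2014 (post-implementation) cohorts with a Mann-Whitney U test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical 1-5 satisfaction ratings for the two independent cohorts
satisfaction_2013 = rng.integers(1, 6, 254)  # 254 respondents pre-implementation
satisfaction_2014 = np.clip(rng.integers(1, 6, 179) + rng.integers(0, 2, 179), 1, 5)  # 179 post

u_stat, p_value = stats.mannwhitneyu(satisfaction_2013, satisfaction_2014, alternative="two-sided")
print(f"Mann-Whitney U={u_stat:.0f}, P={p_value:.4f}")
```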

Citations to this article as recorded by
  • Benefits of focus group discussions beyond online surveys in course evaluations by medical students in the United States: a qualitative study
    Katharina Brandl, Soniya V. Rabadia, Alexander Chang, Jess Mandel
    Journal of Educational Evaluation for Health Professions.2018; 15: 25.     CrossRef
