JEEHP : Journal of Educational Evaluation for Health Professions

39 "United States"
Research articles
Discovering social learning ecosystems during clinical clerkship from United States medical students’ feedback encounters: a content analysis  
Anna Therese Cianciolo, Heeyoung Han, Lydia Anne Howes, Debra Lee Klamen, Sophia Matos
J Educ Eval Health Prof. 2024;21:5.   Published online February 28, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.5
  • 520 View
  • 113 Download
Abstract
Purpose
We examined United States medical students’ self-reported feedback encounters during clerkship training to better understand in situ feedback practices. Specifically, we asked: Who do students receive feedback from, about what, when, where, and how do they use it? We explored whether curricular expectations for preceptors’ written commentary aligned with feedback as it occurs naturalistically in the workplace.
Methods
This study occurred from July 2021 to February 2022 at Southern Illinois University School of Medicine. We used qualitative survey-based experience sampling to gather students’ accounts of their feedback encounters in 8 core specialties. We analyzed the who, what, when, where, and why of 267 feedback encounters reported by 11 clerkship students over 30 weeks. Code frequencies were mapped qualitatively to explore patterns in feedback encounters.
Results
Clerkship feedback occurs in patterns apparently related to the nature of clinical work in each specialty. These patterns may be attributable to each specialty’s “social learning ecosystem”—the distinctive learning environment shaped by the social and material aspects of a given specialty’s work, which determine who preceptors are, what students do with preceptors, and what skills or attributes matter enough to preceptors to comment on.
Conclusion
Comprehensive, standardized expectations for written feedback across specialties conflict with the reality of workplace-based learning. Preceptors may be better able—and more motivated—to document student performance that occurs as a natural part of everyday work. Nurturing social learning ecosystems could facilitate workplace-based learning such that, across specialties, students acquire a comprehensive clinical skillset appropriate for graduation.
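The Methods above describe qualitatively mapping code frequencies across 267 reported feedback encounters to look for specialty-level patterns. As a rough illustration only, the sketch below tallies invented encounter records by specialty, feedback source, and topic code with pandas; the rows, code labels, and column names are hypothetical and are not drawn from the study's codebook or data.

```python
# Hypothetical sketch: tallying qualitative codes per specialty.
# All encounter rows and code labels below are invented for illustration;
# the study's actual analysis was qualitative and more nuanced.
import pandas as pd

# Each row is one reported feedback encounter: (specialty, who gave feedback, topic code)
encounters = pd.DataFrame(
    [
        ("Surgery", "attending", "technical skill"),
        ("Surgery", "resident", "technical skill"),
        ("Pediatrics", "attending", "communication"),
        ("Pediatrics", "nurse", "communication"),
        ("Psychiatry", "attending", "clinical reasoning"),
        # ... one row per encounter (267 in the actual study)
    ],
    columns=["specialty", "source", "topic"],
)

# Cross-tabulate code frequencies by specialty to explore patterns,
# analogous to the qualitative frequency mapping described above.
print(pd.crosstab(encounters["specialty"], encounters["topic"]))
print(pd.crosstab(encounters["specialty"], encounters["source"]))
```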
Development and validity evidence for the resident-led large group teaching assessment instrument in the United States: a methodological study  
Ariel Shana Frey-Vogel, Kristina Dzara, Kimberly Anne Gifford, Yoon Soo Park, Justin Berk, Allison Heinly, Darcy Wolcott, Daniel Adam Hall, Shannon Elliott Scott-Vernaglia, Katherine Anne Sparger, Erica Ye-pyng Chung
J Educ Eval Health Prof. 2024;21:3.   Published online February 23, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.3
  • 342 View
  • 102 Download
Abstract
Purpose
Despite educational mandates to assess resident teaching competence, limited instruments with validity evidence exist for this purpose. Existing instruments do not allow faculty to assess resident-led teaching in a large group format or whether teaching was interactive. This study gathers validity evidence on the use of the Resident-led Large Group Teaching Assessment Instrument (Relate), an instrument used by faculty to assess resident teaching competency. Relate comprises 23 behaviors divided into 6 elements: learning environment, goals and objectives, content of talk, promotion of understanding and retention, session management, and closure.
Methods
Messick’s unified validity framework was used for this study. Investigators used video recordings of resident-led teaching from 3 pediatric residency programs to develop Relate and a rater guidebook. Faculty were trained on instrument use through frame-of-reference training. Resident teaching at all sites was video-recorded during 2018–2019. Two trained faculty raters assessed each video. Descriptive statistics on performance were obtained. Validity evidence sources include: rater training effect (response process), reliability and variability (internal structure), and impact on Milestones assessment (relations to other variables).
Results
Forty-eight videos, from 16 residents, were analyzed. Rater training improved inter-rater reliability from 0.04 to 0.64. The Φ-coefficient reliability was 0.50. There was a significant correlation between overall Relate performance and the pediatric teaching Milestone (r=0.34, P=0.019).
Conclusion
Relate provides validity evidence with sufficient reliability to measure resident-led large-group teaching competence.
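The Results above report inter-rater reliability improving from 0.04 to 0.64 after rater training and a correlation of r=0.34 between overall Relate performance and the pediatric teaching Milestone. The sketch below, on synthetic rater scores, uses plain Pearson correlation as a simplified stand-in for both analyses; the abstract does not name the specific reliability coefficient, and the Φ-coefficient from the generalizability analysis is not reproduced here.

```python
# Illustrative sketch only: synthetic rater scores, not study data.
# Pearson correlation between the two trained raters' totals is used as a
# simplified proxy for inter-rater reliability.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_videos = 48

rater_a = rng.normal(70, 10, n_videos)                    # rater A totals (invented)
rater_b = rater_a + rng.normal(0, 8, n_videos)            # rater B totals (invented)
milestone = 0.3 * rater_a + rng.normal(0, 10, n_videos)   # Milestone ratings (invented)

irr, _ = pearsonr(rater_a, rater_b)
r_milestone, p_milestone = pearsonr((rater_a + rater_b) / 2, milestone)

print(f"proxy inter-rater reliability: {irr:.2f}")
print(f"Relate vs. Milestone: r={r_milestone:.2f}, P={p_milestone:.3f}")
```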
Mentorship and self-efficacy are associated with lower burnout in physical therapists in the United States: a cross-sectional survey study  
Matthew Pugliese, Jean-Michel Brismée, Brad Allen, Sean Riley, Justin Tammany, Paul Mintken
J Educ Eval Health Prof. 2023;20:27.   Published online September 27, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.27
  • 2,569 View
  • 262 Download
Abstract
Purpose
This study investigated the prevalence of burnout in physical therapists in the United States and the relationships between burnout and education, mentorship, and self-efficacy.
Methods
This was a cross-sectional survey study. An electronic survey was distributed to practicing physical therapists across the United States over a 6-week period from December 2020 to January 2021. The survey was completed by 2,813 physical therapists from all states. The majority were female (68.72%), White or Caucasian (80.13%), and employed full-time (77.14%). Respondents completed questions on demographics, education, mentorship, self-efficacy, and burnout. The Burnout Clinical Subtypes Questionnaire 12 (BCSQ-12) and self-reports were used to quantify burnout, and the General Self-Efficacy Scale (GSES) was used to measure self-efficacy. Descriptive and inferential analyses were performed.
Results
Respondents from home health (median BCSQ-12=42.00) and skilled nursing facility settings (median BCSQ-12=42.00) displayed the highest burnout scores. Burnout was significantly lower among those who provided formal mentorship (median BCSQ-12=39.00, P=0.0001) than among those who provided no mentorship (median BCSQ-12=41.00). Respondents who received formal mentorship (median BCSQ-12=38.00, P=0.0028) displayed significantly lower burnout than those who received no mentorship (median BCSQ-12=41.00). A moderate negative correlation (rho=-0.49) was observed between the GSES and burnout scores. A strong positive correlation was found between self-reported burnout status and burnout scores (rank-biserial r=0.61).
Conclusion
Burnout is prevalent in the physical therapy profession, as almost half of respondents (49.34%) reported burnout. Providing or receiving mentorship and higher self-efficacy were associated with lower burnout. Organizations should consider measuring burnout levels, investing in mentorship programs, and implementing strategies to improve self-efficacy.
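The Results above report median BCSQ-12 comparisons with P values and a Spearman correlation (rho=-0.49) between GSES and burnout scores. The sketch below runs analogous tests on invented data; the choice of the Mann-Whitney U-test for the group comparison is an assumption, since the abstract reports medians and P values without naming the specific test.

```python
# Illustrative sketch with synthetic data; not the survey dataset.
import numpy as np
from scipy.stats import mannwhitneyu, spearmanr

rng = np.random.default_rng(1)

bcsq_mentored = rng.normal(38, 6, 500)     # invented BCSQ-12 totals, received mentorship
bcsq_unmentored = rng.normal(41, 6, 500)   # invented BCSQ-12 totals, no mentorship

u_stat, p_val = mannwhitneyu(bcsq_mentored, bcsq_unmentored, alternative="two-sided")
print(f"Mann-Whitney U={u_stat:.0f}, P={p_val:.4g}")

gses = rng.normal(32, 4, 1000)                        # invented GSES (self-efficacy) scores
bcsq = 60 - 0.8 * gses + rng.normal(0, 5, 1000)       # invented burnout scores, negatively related
rho, p_rho = spearmanr(gses, bcsq)
print(f"Spearman rho={rho:.2f}, P={p_rho:.4g}")
```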
Doctoral physical therapy students’ increased confidence following exploration of active video gaming systems in a problem-based learning curriculum in the United States: a pre- and post-intervention study  
Michelle Elizabeth Wormley, Wendy Romney, Diana Veneri, Andrea Oberlander
J Educ Eval Health Prof. 2022;19:7.   Published online April 26, 2022
DOI: https://doi.org/10.3352/jeehp.2022.19.7
  • 7,180 View
  • 294 Download
Abstract
Purpose
Active video gaming (AVG) is used in physical therapy (PT) to treat individuals with a variety of diagnoses across the lifespan. The literature supports improvements in balance, cardiovascular endurance, and motor control; however, evidence is lacking regarding the implementation of AVG in PT education. This study investigated doctoral physical therapy (DPT) students’ confidence following active exploration of AVG systems as a PT intervention in the United States.
Methods
This pretest-posttest study included 60 DPT students in 2017 (cohort 1) and 55 students in 2018 (cohort 2) enrolled in a problem-based learning curriculum. AVG systems were embedded into patient cases and 2 interactive laboratory classes across 2 consecutive semesters (April–December 2017 and April–December 2018). Participants completed a 31-question survey before the intervention and 8 months later. Students’ confidence was rated for general use, game selection, plan of care, set-up, documentation, setting, and demographics. Descriptive statistics and the Wilcoxon signed-rank test were used to compare differences in confidence pre- and post-intervention.
Results
Both cohorts showed increased confidence at the post-test, with median (interquartile range) scores as follows: cohort 1: pre-test, 57.1 (44.3–63.5); post-test, 79.1 (73.1–85.4); and cohort 2: pre-test, 61.4 (48.0–70.7); post-test, 89.3 (80.0–93.2). Cohort 2 was significantly more confident at baseline than cohort 1 (P<0.05). In cohort 1, students’ data were paired and confidence levels significantly increased in all domains: use, Z=-6.2 (P<0.01); selection, Z=-5.9 (P<0.01); plan of care, Z=-6.0 (P<0.01); set-up, Z=-5.5 (P<0.01); documentation, Z=-6.0 (P<0.01); setting, Z=-6.3 (P<0.01); and total score, Z=-6.4 (P<0.01).
Conclusion
Structured, active experiences with AVG resulted in a significant increase in students’ confidence. As technology advances in healthcare delivery, it is essential to expose students to these technologies in the classroom.
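The Methods above name the Wilcoxon signed-rank test for the paired pre/post confidence comparison. A minimal sketch on invented confidence totals follows; note that SciPy reports the W statistic rather than the Z values quoted in the Results.

```python
# Illustrative sketch of a paired pre/post comparison with the
# Wilcoxon signed-rank test, on invented confidence scores.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(2)
n_students = 60                                # size of cohort 1, used here for illustration

pre = rng.uniform(40, 70, n_students)          # invented pre-test total confidence scores
post = pre + rng.uniform(5, 30, n_students)    # invented post-test scores, generally higher

stat, p_val = wilcoxon(pre, post)
print(f"Wilcoxon signed-rank: W={stat:.1f}, P={p_val:.4g}")
```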
Increased competency of registered dietitian nutritionists in physical examination skills after simulation-based education in the United States  
Elizabeth MacQuillan, Jennifer Ford, Kristin Baird
J Educ Eval Health Prof. 2020;17:40.   Published online December 14, 2020
DOI: https://doi.org/10.3352/jeehp.2020.17.40
  • 4,377 View
  • 144 Download
  • 1 Web of Science
  • 1 Crossref
Abstract
Purpose
This study aimed to translate simulation-based dietitian nutritionist education to clinical competency attainment in a group of practicing registered dietitian nutritionists (RDNs). Competence on a newly required clinical skill, the nutrition-focused physical exam (NFPE), was measured with a standardized instrument both before and after a simulation-based education (SBE) session.
Methods
Eighteen practicing RDNs were recruited by their employer, Spectrum Health. Following a pre-briefing session, participants completed an initial 10-minute encounter, performing NFPE on a standardized patient (SP). Next, participants completed a 90-minute SBE training session on skills within the NFPE, including hands-on practice and role play, followed by a post-training SP encounter. Video recordings of the SP encounters were scored to assess competence in 7 skill areas within the NFPE. Scores were analyzed for participants’ initial competence and change in competence.
Results
The proportions of participants with initial competence ranged from 0% to 44% across the 7 skill areas assessed. The only competency where participants initially scored in the “meets expectations” range was “approach to the patient.” When raw competence scores were assessed for changes from pre- to post-SBE training, the paired t-test indicated significant increases in all 7 competency areas following the simulation-based training (P<0.001).
Conclusion
This study showed the effectiveness of an SBE training program for increasing the competence scores of practicing RDNs on a defined clinical skill.
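The Results above report paired t-tests on competence scores before and after the SBE session. The sketch below runs a paired t-test on invented scores for 18 participants; it is illustrative only and uses none of the study's data.

```python
# Illustrative sketch of a pre/post paired t-test, on invented competence scores.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(3)
n_rdns = 18

pre_scores = rng.normal(2.0, 0.5, n_rdns)                 # invented pre-SBE competence scores
post_scores = pre_scores + rng.normal(1.0, 0.3, n_rdns)   # invented post-SBE scores, higher on average

t_stat, p_val = ttest_rel(pre_scores, post_scores)
print(f"paired t-test: t={t_stat:.2f}, P={p_val:.4g}")
```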

Citations

Citations to this article as recorded by  
  • Barriers for Liver Transplant in Patients with Alcohol-Related Hepatitis
    Gina Choi, Jihane N. Benhammou, Jung J. Yum, Elena G. Saab, Ankur P. Patel, Andrew J. Baird, Stephanie Aguirre, Douglas G. Farmer, Sammy Saab
    Journal of Clinical and Experimental Hepatology.2022; 12(1): 13.     CrossRef
Self-care perspective taking and empathy in a student-faculty book club in the United States  
Rebecca Henderson, Melanie Gross Hagen, Zareen Zaidi, Valentina Dunder, Edlira Maska, Ying Nagoshi
J Educ Eval Health Prof. 2020;17:22.   Published online July 31, 2020
DOI: https://doi.org/10.3352/jeehp.2020.17.22
  • 7,118 View
  • 172 Download
  • 7 Web of Science
  • 7 Crossref
Abstract
Purpose
We aimed to study the impact of a combined faculty-student book club on education and medical practice as a part of the informal curriculum at the University of Florida College of Medicine in the United States.
Methods
Sixteen medical students and 7 faculty members who participated in the book club were interviewed by phone, and the interviews were recorded. The interviews were then transcribed and entered into the qualitative data analysis program QSR NVivo (QSR International, Burlington, MA, USA). The transcripts were reviewed, and thematic codes were developed inductively through collaborative iteration. Based on these preliminary codes, a coding dictionary was developed and applied to all interviews within QSR NVivo to identify themes.
Results
Four main themes were identified from the interviews. The first theme, the importance of literature to the development and maintenance of empathy and perspective-taking, and the second theme, the importance of the book club in promoting mentorship, personal relationships, and professional development, were important to both student and faculty participants. The third and fourth themes, the need for the book club as a tool for self-care and the book club's role as a reminder of the world outside of school, were discussed by student book club members.
Conclusion
Our study demonstrated that an informal book club has a significant positive impact on self-care, perspective-taking, empathy, and developing a “world outside of school” for medical school students and faculty in the United States. It also helps to foster meaningful relationships between students and faculty.

Citations

Citations to this article as recorded by  
  • Student-faculty dialogue: meaningful perspective taking on campus
    Tee R. Tyler
    Social Work With Groups.2024; 47(2): 165.     CrossRef
  • Clubes de lectura: una revisión sistemática internacional de estudios (2010-2022)
    Carmen Álvarez-Álvarez, Julián Pascual Díez
    Literatura: teoría, historia, crítica.2024;[Epub]     CrossRef
  • The implementation of a required book club for medical students and faculty
    David B. Ney, Nethra Ankam, Anita Wilson, John Spandorfer
    Medical Education Online.2023;[Epub]     CrossRef
  • Cultivating critical consciousness through a Global Health Book Club
    Sarah L. Collins, Stuart J. Case, Alexandra K. Rodriguez, Acquel C. Allen, Elizabeth A. Wood
    Frontiers in Education.2023;[Epub]     CrossRef
  • Advancing book clubs as non-formal learning to facilitate critical public pedagogy in organizations
    Robin S Grenier, Jamie L Callahan, Kristi Kaeppel, Carole Elliott
    Management Learning.2022; 53(3): 483.     CrossRef
  • Not Just for Patrons: Book Club Participation as Professional Development for Librarians
    Laila M. Brown, Valerie Brett Shaindlin
    The Library Quarterly.2021; 91(4): 420.     CrossRef
  • Medical Students’ Creation of Original Poetry, Comics, and Masks to Explore Professional Identity Formation
    Johanna Shapiro, Juliet McMullin, Gabriella Miotto, Tan Nguyen, Anju Hurria, Minh Anh Nguyen
    Journal of Medical Humanities.2021; 42(4): 603.     CrossRef
Can incoming United States pediatric interns be entrusted with the essential communication skills of informed consent?  
Nicholas Sevey, Michelle Barratt, Emma Omoruyi
J Educ Eval Health Prof. 2020;17:18.   Published online June 29, 2020
DOI: https://doi.org/10.3352/jeehp.2020.17.18
  • 4,616 View
  • 126 Download
Abstract
Purpose
According to the entrustable professional activities (EPA) for entering residency defined by the Association of American Medical Colleges, incoming residents are expected to independently obtain informed consent for procedures they are likely to perform. This requires residents not only to inform their patients but also to ensure comprehension of that information. We assessed the communication skills demonstrated by 372 incoming pediatric interns at the University of Texas Health Science Center at Houston between 2007 and 2018 as they obtained informed consent for a lumbar puncture.
Methods
During a simulated case in which interns were tasked with obtaining informed consent for a lumbar puncture, a standardized patient evaluated each intern by rating 7 communication-based survey items on a 5-point Likert scale from "poor" to "excellent." We then converted the scale to a numerical system and calculated intern proficiency scores (the sum of ratings for each intern) and average item performance (the average item rating across all interns).
Results
Interns received an average score of 21.6 out of a maximum of 28, and 227 interns (61.0%) achieved proficiency by scoring 21 or better. Notable differences were observed when comparing groups before and after EPA implementation (76.97% vs. 47.0% proficient, respectively). Item-level analysis showed that interns struggled most to conduct the encounter in a warm and friendly manner and to encourage patients to ask questions (average ratings of 2.97/4 and 2.98/4, respectively). Interns excelled at treating the patient with respect and actively listening to questions (average rating of 3.16 each). Both the average intern proficiency scores and each average item rating were significantly lower following EPA implementation (P<0.001).
Conclusion
Interns demonstrated moderate proficiency in communicating informed consent, though clear opportunities for improvement exist such as demonstrating warmth and encouraging questions.
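The Methods above describe converting 5-point Likert ratings on 7 items to numbers, summing them into per-intern proficiency scores (maximum 28, with proficiency at 21 or better), and averaging each item across interns. The sketch below reproduces that arithmetic on invented ratings; the labels for the intermediate scale points are assumptions, since only the endpoints ("poor," "excellent") are given above.

```python
import numpy as np

# Assumed labels for the intermediate scale points; only the endpoints are stated above.
likert_map = {"poor": 0, "fair": 1, "good": 2, "very good": 3, "excellent": 4}

# One hypothetical intern's ratings on the 7 communication items.
example_intern = ["good", "excellent", "very good", "good", "excellent", "fair", "very good"]
example_total = sum(likert_map[r] for r in example_intern)   # out of 28
print(f"example proficiency score: {example_total} / 28")

# Synthetic numeric ratings for 372 interns x 7 items (invented, not study data).
rng = np.random.default_rng(4)
ratings = rng.integers(2, 5, size=(372, 7))

proficiency_scores = ratings.sum(axis=1)          # per-intern totals out of 28
item_averages = ratings.mean(axis=0)              # per-item averages across interns
proficient_share = (proficiency_scores >= 21).mean()

print(f"mean total: {proficiency_scores.mean():.1f} / 28")
print(f"proficient (>=21): {proficient_share:.1%}")
print("item averages:", np.round(item_averages, 2))
```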
Correlation between physician assistant students’ performance score of history taking and physical exam documentation and scores of Graduate Record Examination, clinical year grade point average, and score of Physician Assistant National Certifying Exam in the United States  
Sara Lolar, Jamie McQueen, Sara Maher
J Educ Eval Health Prof. 2020;17:16.   Published online May 27, 2020
DOI: https://doi.org/10.3352/jeehp.2020.17.16
  • 6,345 View
  • 151 Download
  • 1 Web of Science
  • 2 Crossref
Abstract
Purpose
Learning to perform and document patient history taking and the physical exam (H&P) constitutes a major component of the first-year academic education of physician assistant (PA) students at Wayne State University, USA. The H&P is a summative exercise drawing on multiple aspects of PA education, and students must master communication with patients and other health care providers. The first objective of this study was to determine whether there was a correlation between scores on the Graduate Record Examination (GRE) component tests and scores on graded H&Ps. The second objective was to determine whether proficiency with H&P documentation correlated with academic and clinical year grade point average (GPA) and Physician Assistant National Certifying Exam (PANCE) score.
Methods
Subjects included 147 PA students enrolled at Wayne State University from 2014 to 2016. PA students visited local hospitals or outpatient clinics during the academic year to perform and document patient H&Ps. Correlations between the H&P mean scores and GRE component scores, GPAs, and PANCE scores were analyzed.
Results
The subjects' mean age was 26.5 years (±6.5), and 111 (75.5%) were female. There was no correlation between the GRE component scores and the H&P mean score. The H&P score was positively correlated with GPA 1 (r=0.512, P<0.001), GPA 2 (r=0.425, P<0.001), and the PANCE score (r=0.448, P<0.001).
Conclusion
PA student skill with H&P documentation was positively related to academic performance score during PA school and achievement score on the PANCE at Wayne State University, USA.
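The Results above report correlation coefficients (r) between the H&P mean score and GPA and PANCE scores. The sketch below computes Pearson correlations on invented data; treating r as Pearson's r is an assumption, since the abstract does not name the correlation method.

```python
# Illustrative sketch of the correlation analysis described above, on invented data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(5)
n_students = 147

hp_score = rng.normal(80, 8, n_students)                        # invented H&P mean scores
gpa = 2.5 + 0.01 * hp_score + rng.normal(0, 0.2, n_students)    # invented GPA, loosely related
pance = 350 + 2.0 * hp_score + rng.normal(0, 30, n_students)    # invented PANCE scores

for name, values in [("GPA", gpa), ("PANCE", pance)]:
    r, p = pearsonr(hp_score, values)
    print(f"H&P vs {name}: r={r:.3f}, P={p:.4g}")
```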

Citations

Citations to this article as recorded by  
  • History-taking level and its influencing factors among nursing undergraduates based on the virtual standardized patient testing results: Cross sectional study
    Jingrong Du, Xiaowen Zhu, Juan Wang, Jing Zheng, Xiaomin Zhang, Ziwen Wang, Kun Li
    Nurse Education Today.2022; 111: 105312.     CrossRef
  • A Decline in Black and Dermatology Physician Assistants
    Jameka McElroy-Brooklyn, Cynthia Faires Griffith
    Journal of Physician Assistant Education.2022; 33(4): 275.     CrossRef
Use of graded responsibility and common entrustment considerations among United States emergency medicine residency programs  
Jason Lai, Benjamin Holden Schnapp, David Simon Tillman, Mary Westergaard, Jamie Hess, Aaron Kraut
J Educ Eval Health Prof. 2020;17:11.   Published online April 20, 2020
DOI: https://doi.org/10.3352/jeehp.2020.17.11
  • 5,547 View
  • 95 Download
  • 1 Web of Science
  • 2 Crossref
Abstract
Purpose
The Accreditation Council for Graduate Medical Education (ACGME) requires all residency programs to provide increasing autonomy as residents progress through training, known as graded responsibility. However, there is little guidance on how to implement graded responsibility in practice and a paucity of literature on how it is currently implemented in emergency medicine (EM). We sought to determine how EM residency programs apply graded responsibility across a variety of activities and to identify which considerations are important in affording additional responsibilities to trainees.
Methods
We conducted a cross-sectional study of EM residency programs using a 23-question survey that was distributed by email to 162 ACGME-accredited EM program directors. Seven different domains of practice were queried.
Results
We received 91 responses (56.2% response rate) to the survey. Among all domains of practice except for managing critically ill medical patients, the use of graded responsibility exceeded 50% of surveyed programs. When graded responsibility was applied, post-graduate year (PGY) level was ranked an “extremely important” or “very important” consideration between 80.9% and 100.0% of the time.
Conclusion
The majority of EM residency programs are implementing graded responsibility within most domains of practice. When decisions are made surrounding graded responsibility, programs still rely heavily on the time-based model of PGY level to determine advancement.

Citations

Citations to this article as recorded by  
  • Do you see what I see?: exploring trends in organizational culture perceptions across residency programs
    Jennifer H. Chen, Paula Costa, Aimee Gardner
    Global Surgical Education - Journal of the Association for Surgical Education.2024;[Epub]     CrossRef
  • Guiding Fellows to Independent Practice
    Maybelle Kou, Aline Baghdassarian, Kajal Khanna, Nazreen Jamal, Michele Carney, Daniel M. Fein, In Kim, Melissa L. Langhan, Jerri A. Rose, Noel S. Zuckerbraun, Cindy G. Roskind
    Pediatric Emergency Care.2022; 38(10): 517.     CrossRef
Evaluation of student perceptions with 2 interprofessional assessment tools—the Collaborative Healthcare Interdisciplinary Relationship Planning instrument and the Interprofessional Attitudes Scale—following didactic and clinical learning experiences in the United States  
Vincent Dennis, Melissa Craft, Dale Bratzler, Melody Yozzo, Denise Bender, Christi Barbee, Stephen Neely, Margaret Robinson
J Educ Eval Health Prof. 2019;16:35.   Published online November 5, 2019
DOI: https://doi.org/10.3352/jeehp.2019.16.35
  • 9,900 View
  • 219 Download
  • 10 Web of Science
  • 9 Crossref
Abstract
Purpose
This study investigated changes in students’ attitudes using 2 validated interprofessional survey instruments—the Collaborative Healthcare Interdisciplinary Relationship Planning (CHIRP) instrument and the Interprofessional Attitudes Scale (IPAS)—before and after didactic and clinical cohorts.
Methods
Students from 7 colleges/schools participated in didactic and clinical cohorts during the 2017–2018 year. Didactic cohorts experienced 2 interactive sessions 6 months apart, while clinical cohorts experienced 4 outpatient clinical sessions once monthly. For the baseline and post-cohort assessments, 865 students were randomly assigned to complete either the 14-item CHIRP or the 27-item IPAS. The Pittman test using permutations of linear ranks was used to determine differences in the score distribution between the baseline and post-cohort assessments. Pooled results were compared for the CHIRP total score and the IPAS total and subdomain scores. For each score, 3 comparisons were made simultaneously: overall baseline versus post-didactic cohort, overall baseline versus post-clinical cohort, and post-didactic cohort versus post-clinical cohort. Alpha was adjusted to 0.0167 to account for simultaneous comparisons.
Results
The baseline and post-cohort survey response rates were 62.4% and 65.9% for CHIRP and 58.7% and 58.1% for IPAS, respectively. The post-clinical cohort scores for the IPAS subdomain of teamwork, roles, and responsibilities were significantly higher than the baseline and post-didactic cohort scores. No differences were seen for the remaining IPAS subdomain scores or the CHIRP instrument total score.
Conclusion
The IPAS instrument may discern changes in student attitudes in the subdomain of teamwork, roles, and responsibilities following short-term clinical experiences involving diverse interprofessional team members.
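The Methods above describe a Pittman test based on permutations of linear ranks, with alpha adjusted to 0.0167 for 3 simultaneous comparisons. The sketch below is a generic rank-based permutation test on invented scores, not the exact Pittman procedure; it is included only to make the permutation logic and the alpha adjustment concrete.

```python
# Hypothetical sketch of a rank-based permutation test: group labels are shuffled
# to build a null distribution for the difference in mean ranks. All scores invented.
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(6)

baseline = rng.normal(55, 8, 250)        # invented baseline IPAS subdomain scores
post_clinical = rng.normal(58, 8, 230)   # invented post-clinical-cohort scores

def rank_permutation_test(a, b, n_perm=10_000):
    """Two-sided permutation test on the difference in mean ranks."""
    ranks = rankdata(np.concatenate([a, b]))
    observed = ranks[: len(a)].mean() - ranks[len(a):].mean()
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(ranks)
        diff = perm[: len(a)].mean() - perm[len(a):].mean()
        if abs(diff) >= abs(observed):
            count += 1
    return (count + 1) / (n_perm + 1)   # add 1 so the P-value is never reported as exactly 0

alpha = 0.05 / 3   # adjusted for 3 simultaneous comparisons (about 0.0167)
p_val = rank_permutation_test(baseline, post_clinical)
print(f"permutation P={p_val:.4f} (compare against alpha={alpha:.4f})")
```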

Citations

Citations to this article as recorded by  
  • Interprofessional communication skills training to improve medical students’ and nursing trainees’ error communication - quasi-experimental pilot study
    Lina Heier, Barbara Schellenberger, Anna Schippers, Sebastian Nies, Franziska Geiser, Nicole Ernstmann
    BMC Medical Education.2024;[Epub]     CrossRef
  • Development and implementation of interprofessional education activity among health professions students in Jordan: A pilot investigation
    Osama Y. Alshogran, Zaid Al-Hamdan, Alla El-Awaisi, Hana Alkhalidy, Nesreen Saadeh, Hadeel Alsqaier
    Journal of Interprofessional Care.2023; 37(4): 588.     CrossRef
  • Tools for faculty assessment of interdisciplinary competencies of healthcare students: an integrative review
    Sharon Brownie, Denise Blanchard, Isaac Amankwaa, Patrick Broman, Marrin Haggie, Carlee Logan, Amy Pearce, Kesava Sampath, Ann-Rong Yan, Patrea Andersen
    Frontiers in Medicine.2023;[Epub]     CrossRef
  • Interprofessional education tracks: One schools response to common IPE barriers
    Kim G. Adcock, Sally Earl
    Currents in Pharmacy Teaching and Learning.2023; 15(5): 528.     CrossRef
  • Interprofessional education and collaborative practice in Nigeria – Pharmacists' and pharmacy students' attitudes and perceptions of the obstacles and recommendations
    Segun J. Showande, Tolulope P. Ibirongbe
    Currents in Pharmacy Teaching and Learning.2023; 15(9): 787.     CrossRef
  • To IPAS or not to IPAS? Examining the construct validity of the Interprofessional Attitudes Scale in Hong Kong
    Fraide A. Ganotice, Amy Yin Man Chow, Kelvin Kai Hin Fan, Ui Soon Khoo, May Pui San Lam, Rebecca Po Wah Poon, Francis Hang Sang Tsoi, Michael Ning Wang, George L. Tipoe
    Journal of Interprofessional Care.2022; 36(1): 127.     CrossRef
  • Turkish adaptation of the interprofessional attitude scale (IPAS)
    Mukadder Inci Baser Kolcu, Ozlem Surel Karabilgin Ozturkcu, Giray Kolcu
    Journal of Interprofessional Care.2022; 36(5): 684.     CrossRef
  • Patient participation in interprofessional learning and collaboration with undergraduate health professional students in clinical placements: A scoping review
    Catrine Buck Jensen, Bente Norbye, Madeleine Abrandt Dahlgren, Anita Iversen
    Journal of Interprofessional Education & Practice.2022; 27: 100494.     CrossRef
  • Can interprofessional education change students’ attitudes? A case study from Lebanon
    Carine J. Sakr, Lina Fakih, Jocelyn Dejong, Nuhad Yazbick-Dumit, Hussein Soueidan, Wiam Haidar, Elias Boufarhat, Imad Bou Akl
    BMC Medical Education.2022;[Epub]     CrossRef
Effect of student-directed solicitation of evaluation forms on the timeliness of completion by preceptors in the United States  
Conrad Krawiec, Vonn Walter, Abigail Kate Myers
J Educ Eval Health Prof. 2019;16:32.   Published online October 16, 2019
DOI: https://doi.org/10.3352/jeehp.2019.16.32
  • 9,214 View
  • 123 Download
Abstract
Purpose
Summative evaluation forms assessing a student’s clinical performance are often completed by a faculty preceptor at the end of a clinical training experience. At our institution, despite the use of an electronic system, timeliness of completion has been suboptimal, potentially limiting our ability to monitor students’ progress. The aim of the present study was to determine whether a student-directed approach to summative evaluation form collection at the end of a pediatrics clerkship would enhance timeliness of completion for third-year medical students.
Methods
This was a pre- and post-intervention educational quality improvement project focused on 156 (82 pre-intervention, 74 post-intervention) third-year medical students at Penn State College of Medicine completing their 4-week pediatric clerkship. Using REDCap (Research Electronic Data Capture) informatics support, students were encouraged to direct the solicitation of their evaluation forms. The Wilcoxon rank-sum test was applied to compare the pre-intervention (May 1, 2017 to March 2, 2018) and post-intervention (April 2, 2018 to December 21, 2018) percentages of forms completed before the rotation midpoint.
Results
In total, 740 evaluation forms were submitted during the pre-intervention phase and 517 during the post-intervention phase. The percentage of forms completed before the rotation midpoint increased after implementing student-directed solicitation (9.6% vs. 39.7%, P<0.05).
Conclusion
Our clerkship relies on subjective summative evaluations to track students’ progress, deploy improvement strategies, and determine criteria for advancement; however, our preceptors struggled with timely submission. Allowing students to direct the solicitation of evaluation forms enhanced the timeliness of completion and should be considered in clerkships facing similar challenges.
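The Methods above apply the Wilcoxon rank-sum test to pre- versus post-intervention percentages of forms completed before the rotation midpoint. The sketch below assumes the unit of analysis is a per-student proportion (the abstract does not say) and uses invented values.

```python
# Illustrative sketch of a Wilcoxon rank-sum comparison on invented per-student
# proportions of evaluation forms completed before the rotation midpoint.
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(7)

pre_intervention = rng.beta(2, 12, 82)    # invented proportions, averaging roughly 10-15%
post_intervention = rng.beta(4, 6, 74)    # invented proportions, averaging roughly 40%

stat, p_val = ranksums(pre_intervention, post_intervention)
print(f"Wilcoxon rank-sum: statistic={stat:.2f}, P={p_val:.4g}")
```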
Application of an objective structured clinical examination to evaluate and monitor interns’ proficiency in hand hygiene and personal protective equipment use in the United States  
Ying Nagoshi, Lou Ann Cooper, Lynne Meyer, Kartik Cherabuddi, Julia Close, Jamie Dow, Merry Jennifer Markham, Carolyn Stalvey
J Educ Eval Health Prof. 2019;16:31.   Published online October 15, 2019
DOI: https://doi.org/10.3352/jeehp.2019.16.31
  • 9,855 View
  • 147 Download
  • 6 Web of Science
  • 6 Crossref
Abstract
Purpose
This study was conducted to determine whether an objective structured clinical examination (OSCE) could be used to evaluate and monitor hand hygiene and personal protective equipment (PPE) proficiency among medical interns in the United States.
Methods
Interns in July 2015 (N=123, cohort 1) with no experience of OSCE-based contact precaution evaluation and teaching were evaluated in early 2016 using an OSCE for hand hygiene and PPE proficiency. They performed poorly. Therefore, the new interns entering in July 2016 (N=151, cohort 2) were immediately tested at the same OSCE stations as cohort 1, and were provided with feedback and teaching. Cohort 2 was then retested at the OSCE station in early 2017. The Mann-Whitney U-test was used to compare the performance of cohort 1 and cohort 2 on checklist items. In cohort 2, performance differences between the beginning and end of the intern year were compared using the McNemar chi-square test for paired nominal data.
Results
Checklist items were scored, summed, and reported as percent correct. In cohort 2, the mean percent correct was higher on the posttest than on the pretest (92% vs. 77%, P<0.0001), and the passing rate (100% correct) was also significantly higher on the posttest (55% vs. 16%). At the end of intern year, the mean percent correct was higher in cohort 2 than in cohort 1 (95% vs. 90%, P<0.0001), and 55% of cohort 2 passed (a perfect score) compared to 24% in cohort 1 (P<0.0001).
Conclusion
An OSCE can be utilized to evaluate and monitor hand hygiene and PPE proficiency among interns in the United States.
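The Methods above name the Mann-Whitney U-test for comparing the cohorts' checklist scores and the McNemar chi-square test for the paired pretest/posttest pass-fail outcomes in cohort 2. The sketch below runs both on invented scores and counts; the McNemar statistic is computed by hand with a continuity correction rather than with a dedicated library call.

```python
# Illustrative sketch on invented data: Mann-Whitney U for between-cohort scores,
# and a hand-computed McNemar chi-square for paired pass/fail outcomes.
import numpy as np
from scipy.stats import mannwhitneyu, chi2

rng = np.random.default_rng(8)

cohort1_end = rng.normal(90, 6, 123)   # invented end-of-year percent-correct, cohort 1
cohort2_end = rng.normal(95, 4, 151)   # invented end-of-year percent-correct, cohort 2
u_stat, p_u = mannwhitneyu(cohort1_end, cohort2_end, alternative="two-sided")
print(f"Mann-Whitney U={u_stat:.0f}, P={p_u:.4g}")

# Paired nominal data for cohort 2: b = failed pretest but passed posttest,
# c = passed pretest but failed posttest (counts invented).
b, c = 62, 4
mcnemar_chi2 = (abs(b - c) - 1) ** 2 / (b + c)   # McNemar chi-square with continuity correction
p_mcnemar = chi2.sf(mcnemar_chi2, df=1)
print(f"McNemar chi-square={mcnemar_chi2:.1f}, P={p_mcnemar:.4g}")
```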

Citations

Citations to this article as recorded by  
  • Staying proper with your personal protective equipment: How to don and doff
    Cameron R. Smith, Terrie Vasilopoulos, Amanda M. Frantz, Thomas LeMaster, Ramon Andres Martinez, Amy M. Gunnett, Brenda G. Fahy
    Journal of Clinical Anesthesia.2023; 86: 111057.     CrossRef
  • Virtual Reality Medical Training for COVID-19 Swab Testing and Proper Handling of Personal Protective Equipment: Development and Usability
    Paul Zikas, Steve Kateros, Nick Lydatakis, Mike Kentros, Efstratios Geronikolakis, Manos Kamarianakis, Giannis Evangelou, Ioanna Kartsonaki, Achilles Apostolou, Tanja Birrenbach, Aristomenis K. Exadaktylos, Thomas C. Sauter, George Papapagiannakis
    Frontiers in Virtual Reality.2022;[Epub]     CrossRef
  • Effectiveness and Utility of Virtual Reality Simulation as an Educational Tool for Safe Performance of COVID-19 Diagnostics: Prospective, Randomized Pilot Trial
    Tanja Birrenbach, Josua Zbinden, George Papagiannakis, Aristomenis K Exadaktylos, Martin Müller, Wolf E Hautz, Thomas Christian Sauter
    JMIR Serious Games.2021; 9(4): e29586.     CrossRef
  • Rapid Dissemination of a COVID-19 Airway Management Simulation Using a Train-the-Trainers Curriculum
    William J. Peterson, Brendan W. Munzer, Ryan V. Tucker, Eve D. Losman, Carrie Harvey, Colman Hatton, Nana Sefa, Ben S. Bassin, Cindy H. Hsu
    Academic Medicine.2021; 96(10): 1414.     CrossRef
  • Empirical analysis comparing the tele-objective structured clinical examination and the in-person assessment in Australia
    Jonathan Zachary Felthun, Silas Taylor, Boaz Shulruf, Digby Wigram Allen
    Journal of Educational Evaluation for Health Professions.2021; 18: 23.     CrossRef
  • Comparison of students' performance of objective structured clinical examination during clinical practice
    Jihye Yu, Sukyung Lee, Miran Kim, Janghoon Lee
    Korean Journal of Medical Education.2020; 32(3): 231.     CrossRef
Peer-assisted feedback: a successful approach for providing feedback on United States Medical Licensing Exam-style clinical skills exam notes in the United States  
Kira Nagoshi, Zareen Zaidi, Ashleigh Wright, Carolyn Stalvey
J Educ Eval Health Prof. 2019;16:29.   Published online October 8, 2019
DOI: https://doi.org/10.3352/jeehp.2019.16.29
  • 9,800 View
  • 130 Download
  • 3 Web of Science
  • 3 Crossref
Abstract
Purpose
Peer-assisted learning (PAL) promotes the development of communication, facilitates improvements in clinical skills, and is a way to provide feedback to learners. We utilized PAL as a conceptual framework to explore the feasibility of peer-assisted feedback (PAF) for improving note-writing skills without requiring faculty time. The aim was to assess, based on student feedback from a survey in the United States, whether PAL was a successful method for providing feedback on United States Medical Licensing Examination (USMLE)-style clinical skills exam notes.
Methods
The University of Florida College of Medicine administers clinical skills examinations (CSEs) that include USMLE-like note-writing. PAL, in which students support the learning of their peers, was utilized as an alternative to faculty feedback. Second-year (MS2) and third-year (MS3) medical students taking CSEs participated in faculty-run note-grading sessions immediately after testing, which included explanations of the grading rubrics and the feedback process. Students graded an anonymized peer's notes. The graded material was then forwarded anonymously to its student author to review. Students were surveyed on their perceived ability to provide feedback and the benefits derived from PAF using a Likert scale (1–6) and open-ended comments during the 2017–2018 academic year.
Results
Students felt generally positively about the activity, with mean scores for items related to educational value of 4.49 for MS2s and 5.11 for MS3s (out of 6). MS3s perceived peer feedback as constructive, felt that evaluating each other’s notes was beneficial, and felt that the exercise would improve their future notes. While still positive, MS2 students gave lower scores than the MS3 students.
Conclusion
PAF was a successful method of providing feedback on student CSE notes, especially for MS3s. MS2s commented that although they learned during the process, they might be more invested in improving their note-writing as they approach their own USMLE exam.

Citations

Citations to this article as recorded by  
  • Teaching feedback skills to veterinary students by peer-assisted learning
    Aytaç ÜNSAL ADACA
    Ankara Üniversitesi Veteriner Fakültesi Dergisi.2023; 70(3): 237.     CrossRef
  • Pedagogic Exploration Into Adapting Automated Writing Evaluation and Peer Review Integrated Feedback Into Large-Sized University Writing Classes
    Wei-Yan Li, Kevin Kau, Yi-Jiun Shiung
    SAGE Open.2023;[Epub]     CrossRef
  • Benefits of semiology taught using near-peer tutoring are sustainable
    Benjamin Gripay, Thomas André, Marie De Laval, Brice Peneau, Alexandre Secourgeon, Nicolas Lerolle, Cédric Annweiler, Grégoire Justeau, Laurent Connan, Ludovic Martin, Loïc Bière
    BMC Medical Education.2022;[Epub]     CrossRef
Mismatch between the proposed ability concepts of the Graduate Record Examination and the critical thinking skills of physical therapy applicants suggested by an expert panel in the United States  
Emily Shannon Hughes
J Educ Eval Health Prof. 2019;16:24.   Published online August 27, 2019
DOI: https://doi.org/10.3352/jeehp.2019.16.24
  • 10,700 View
  • 183 Download
  • 1 Web of Science
  • 1 Crossref
Abstract
Purpose
The Graduate Record Examination (GRE) is a general examination predictive of success in US-based graduate programs. The GRE assesses students' written, mathematical, and critical thinking (CT) skills and is used for admission to approximately 85% of US physical therapist education (PTE) programs. The purpose of this study was to assess whether the CT skills measured by the GRE match those deemed by an expert panel to be the most important to assess for PTE program acceptance.
Methods
Using a modified E-Delphi approach, a 3-phase survey was distributed over 8 weeks to a panel consisting of licensed US physical therapists with expertise in CT and of PTE program directors. The CT skills isolated by the expert panel, based on Facione's Delphi report, were compared to the CT skills assessed by the GRE.
Results
The CT skills supported by the Delphi report and chosen by the expert panel for assessment prior to acceptance into US PTE programs included clarifying meaning, categorization, and analyzing arguments. Only clarifying meaning matched the CT skills from the GRE.
Conclusion
The GRE is a test for general admission to graduate programs, lacking context related to healthcare or physical therapy. The current study fails to support the GRE as an assessment tool of CT for admission to PTE programs. A context-based admission test evaluating the CT skills identified in this study should be developed for use in the admission process to predict which students will complete US PTE programs and pass the licensure exam.

Citations

Citations to this article as recorded by  
  • Correlation between physician assistant students’ performance score of history taking and physical exam documentation and scores of Graduate Record Examination, clinical year grade point average, and score of Physician Assistant National Certifying Exam in the United States
    Sara Lolar, Jamie McQueen, Sara Maher
    Journal of Educational Evaluation for Health Professions.2020; 17: 16.     CrossRef
Educational/faculty development material
Analysis of the Clinical Education Situation framework: a tool for identifying the root cause of student failure in the United States  
Katherine Myers, Kyle Covington
J Educ Eval Health Prof. 2019;16:11.   Published online May 10, 2019
DOI: https://doi.org/10.3352/jeehp.2019.16.11
  • 14,885 View
  • 259 Download
  • 1 Web of Science
  • 3 Crossref
Abstract
Doctor of physical therapy preparation requires extensive time in precepted clinical education, which involves multiple stakeholders. Student outcomes in clinical education are impacted by many factors, and, in the case of failure, it can be challenging to determine which factors played a primary role in the poor result. Using existing root-cause analysis processes, the authors developed and implemented a framework designed to identify the causes of student failure in clinical education. This framework, when applied to a specific student failure event, can be used to identify the factors that contributed to the situation and to reveal opportunities for improvement in both the clinical and academic environments. A root-cause analysis framework can help to drive change at the programmatic level, and future studies should focus on the framework’s application to a variety of clinical and didactic settings.

Citations

Citations to this article as recorded by  
  • Applying the 2022 Cardiovascular and Pulmonary Entry-Level Physical Therapist Competencies to Physical Therapist Education and Practice
    Nancy Smith, Angela Campbell, Morgan Johanson, Pamela Bartlo, Naomi Bauer, Sagan Everett
    Journal of Physical Therapy Education.2023; 37(3): 165.     CrossRef
  • Cardiovascular and Pulmonary Entry-Level Physical Therapist Competencies: Update by Academy of Cardiovascular & Pulmonary Physical Therapy Task Force
    Morgan Johanson, Pamela Bartlo, Naomi Bauer, Angela Campbell, Sagan Everett, Nancy Smith
    Cardiopulmonary Physical Therapy Journal.2023; 34(4): 183.     CrossRef
  • The situational analysis of teaching-learning in clinical education in Iran: a postmodern grounded theory study
    Soleiman Ahmady, Hamed Khani
    BMC Medical Education.2022;[Epub]     CrossRef
