JEEHP : Journal of Educational Evaluation for Health Professions

OPEN ACCESS
SEARCH
Search

Search

Page Path
HOME > Search
42 "United States"
Research articles
Reliability of a workplace-based assessment for the United States general surgical trainees’ intraoperative performance using multivariate generalizability theory: a psychometric study
Ting Sun, Stella Yun Kim, Brigitte Kristin Smith, Yoon Soo Park
J Educ Eval Health Prof. 2024;21:26.   Published online September 24, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.26
  • 521 View
  • 151 Download
Abstract
Purpose
The System for Improving and Measuring Procedure Learning (SIMPL), a smartphone-based operative assessment application, was developed to assess the intraoperative performance of surgical residents. This study aims to examine the reliability of the SIMPL assessment and determine the optimal number of procedures for a reliable assessment.
Methods
In this retrospective observational study, we analyzed data collected between 2015 and 2023 from 4,616 residents across 94 General Surgery Residency programs in the United States that utilized the SIMPL smartphone application. We employed multivariate generalizability theory and initially conducted generalizability studies to estimate the variance components associated with procedures. We then performed decision studies to estimate the reliability coefficient and the minimum number of procedures required for a reproducible assessment.
Results
We estimated that the reliability of the assessment of surgical trainees’ intraoperative autonomy and performance using SIMPL exceeded 0.70. Additionally, the optimal number of procedures required for a reproducible assessment was 10, 17, 15, and 17 for postgraduate year (PGY) 2, PGY 3, PGY 4, and PGY 5, respectively. Notably, the study highlighted that the assessment of residents in their senior years necessitated a larger number of procedures compared to those in their junior years.
Conclusion
The study demonstrated that the SIMPL assessment is sufficiently reliable for evaluating the intraoperative performance of surgical trainees. Adjusting the number of procedures to the trainees’ training stage enhances the accuracy and effectiveness of the assessment process.
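For readers less familiar with decision (D) studies, the sketch below shows, in simplified univariate form, how a reliability target such as 0.70 translates into a minimum number of procedures. The variance components are invented placeholders rather than the SIMPL estimates, and the study itself used multivariate generalizability theory, so this only illustrates the underlying arithmetic.

```python
# A minimal univariate D-study sketch (illustrative variance components, not SIMPL estimates).

def g_coefficient(var_person: float, var_residual: float, n_procedures: int) -> float:
    """Reliability (generalizability) coefficient when ratings are averaged over n procedures."""
    return var_person / (var_person + var_residual / n_procedures)

def min_procedures(var_person: float, var_residual: float, target: float = 0.70) -> int:
    """Smallest number of procedures at which the averaged rating reaches the target reliability."""
    n = 1
    while g_coefficient(var_person, var_residual, n) < target:
        n += 1
    return n

if __name__ == "__main__":
    var_person, var_residual = 0.20, 0.95  # hypothetical values for one PGY level
    print(min_procedures(var_person, var_residual))  # 12 with these made-up components
```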
Training satisfaction and future employment consideration among physician and nursing trainees at rural Veterans Affairs facilities in the United States during COVID-19: a time-series before and after study  
Heather Northcraft, Tiffany Radcliff, Anne Reid Griffin, Jia Bai, Aram Dobalian
J Educ Eval Health Prof. 2024;21:25.   Published online September 24, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.25
  • 465 View
  • 121 Download
Abstract
Purpose
The coronavirus disease 2019 (COVID-19) pandemic limited healthcare professional education and training opportunities in rural communities. Because the US Department of Veterans Affairs (VA) has robust programs to train clinicians in the United States, this study examined VA trainee perspectives regarding pandemic-related training in rural and urban areas and interest in future employment with the VA.
Methods
Survey responses were collected nationally from VA physician and nursing trainees before and after the onset of COVID-19 (2018 to 2021). Logistic regression models were used to test the associations of pandemic timing (pre-pandemic or pandemic), trainee program (physician or nursing), and their interaction with VA trainee satisfaction and trainees’ likelihood of considering future VA employment in rural and urban areas.
Results
While physician trainees at urban facilities reported decreases in overall training satisfaction and corresponding decreases in the likelihood of considering future VA employment from pre-pandemic to pandemic, rural physician trainees showed no changes in either outcome. In contrast, while nursing trainees at both urban and rural sites had decreases in training satisfaction associated with the pandemic, there was no corresponding effect on the likelihood of future employment by nurses at either urban or rural VA sites.
Conclusion
The study’s findings suggest differences in the training experiences of physicians and nurses at rural sites, as well as between physician trainees at urban and rural sites. Understanding these nuances can inform the development of targeted approaches to address the ongoing provider shortages that rural communities in the United States are facing.
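The interaction model described in the Methods can be illustrated with a toy logistic regression; the data frame, coding, and variable names below are hypothetical stand-ins, not the VA survey data or the authors' code.

```python
# A schematic sketch of a logistic regression with a pandemic-by-program interaction term.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical trainee-level records (1 = satisfied, 0 = not satisfied).
df = pd.DataFrame({
    "satisfied": [1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1],
    "pandemic":  [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1],  # 0 = pre-pandemic
    "physician": [1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0],  # 0 = nursing trainee
})

# The pandemic:physician term captures whether the pandemic effect differs by program.
model = smf.logit("satisfied ~ pandemic * physician", data=df).fit(disp=False)
print(model.summary())
```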
Performance of GPT-3.5 and GPT-4 on standardized urology knowledge assessment items in the United States: a descriptive study
Max Samuel Yudovich, Elizaveta Makarova, Christian Michael Hague, Jay Dilip Raman
J Educ Eval Health Prof. 2024;21:17.   Published online July 8, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.17
  • 1,648 View
  • 293 Download
  • 2 Web of Science
  • 2 Crossref
Abstract
Purpose
This study aimed to evaluate the performance of Chat Generative Pre-Trained Transformer (ChatGPT) with respect to standardized urology multiple-choice items in the United States.
Methods
In total, 700 multiple-choice urology board exam-style items were submitted to GPT-3.5 and GPT-4, and responses were recorded. Items were categorized based on topic and question complexity (recall, interpretation, and problem-solving). The accuracy of GPT-3.5 and GPT-4 was compared across item types in February 2024.
Results
GPT-4 answered 44.4% of items correctly compared to 30.9% for GPT-3.5 (P<0.00001). GPT-4 (vs. GPT-3.5) had higher accuracy with urologic oncology (43.8% vs. 33.9%, P=0.03), sexual medicine (44.3% vs. 27.8%, P=0.046), and pediatric urology (47.1% vs. 27.1%, P=0.012) items. Endourology (38.0% vs. 25.7%, P=0.15), reconstruction and trauma (29.0% vs. 21.0%, P=0.41), and neurourology (49.0% vs. 33.3%, P=0.11) items did not show significant differences in performance across versions. GPT-4 also outperformed GPT-3.5 on recall (45.9% vs. 27.4%, P<0.00001) and interpretation (45.6% vs. 31.5%, P=0.0005) items, but the difference for the higher-complexity problem-solving items (41.8% vs. 34.5%, P=0.56) was not significant.
Conclusions
ChatGPT performs relatively poorly on standardized multiple-choice urology board exam-style items, with GPT-4 outperforming GPT-3.5. The accuracy was below the proposed minimum passing standards for the American Board of Urology’s Continuing Urologic Certification knowledge reinforcement activity (60%). As artificial intelligence progresses in complexity, ChatGPT may become more capable and accurate with respect to board examination items. For now, its responses should be scrutinized.
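As a rough illustration of the headline comparison (44.4% vs. 30.9% of 700 items answered correctly), the snippet below runs a chi-square test on counts reconstructed from those percentages; it is not the authors' analysis code, and the reconstructed counts are approximate.

```python
# Two-proportion comparison of GPT-4 vs. GPT-3.5 accuracy on 700 items each (approximate counts).
import numpy as np
from scipy.stats import chi2_contingency

n_items = 700
correct_gpt4 = round(0.444 * n_items)   # ~311 correct
correct_gpt35 = round(0.309 * n_items)  # ~216 correct

# 2x2 table: rows = model, columns = correct / incorrect
table = np.array([
    [correct_gpt4, n_items - correct_gpt4],
    [correct_gpt35, n_items - correct_gpt35],
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.1f}, p={p:.2e}")  # p is far below 0.001 for these counts
```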

Citations

Citations to this article as recorded by  
  • From GPT-3.5 to GPT-4.o: A Leap in AI’s Medical Exam Performance
    Markus Kipp
    Information.2024; 15(9): 543.     CrossRef
  • Artificial Intelligence can Facilitate Application of Risk Stratification Algorithms to Bladder Cancer Patient Case Scenarios
    Max S Yudovich, Ahmad N Alzubaidi, Jay D Raman
    Clinical Medicine Insights: Oncology.2024;[Epub]     CrossRef
Discovering social learning ecosystems during clinical clerkship from United States medical students’ feedback encounters: a content analysis  
Anna Therese Cianciolo, Heeyoung Han, Lydia Anne Howes, Debra Lee Klamen, Sophia Matos
J Educ Eval Health Prof. 2024;21:5.   Published online February 28, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.5
  • 1,426 View
  • 269 Download
Abstract
Purpose
We examined United States medical students’ self-reported feedback encounters during clerkship training to better understand in situ feedback practices. Specifically, we asked: Who do students receive feedback from, about what, when, where, and how do they use it? We explored whether curricular expectations for preceptors’ written commentary aligned with feedback as it occurs naturalistically in the workplace.
Methods
This study occurred from July 2021 to February 2022 at Southern Illinois University School of Medicine. We used qualitative survey-based experience sampling to gather students’ accounts of their feedback encounters in 8 core specialties. We analyzed the who, what, when, where, and why of 267 feedback encounters reported by 11 clerkship students over 30 weeks. Code frequencies were mapped qualitatively to explore patterns in feedback encounters.
Results
Clerkship feedback occurs in patterns apparently related to the nature of clinical work in each specialty. These patterns may be attributable to each specialty’s “social learning ecosystem”—the distinctive learning environment shaped by the social and material aspects of a given specialty’s work, which determine who preceptors are, what students do with preceptors, and what skills or attributes matter enough to preceptors to comment on.
Conclusion
Comprehensive, standardized expectations for written feedback across specialties conflict with the reality of workplace-based learning. Preceptors may be better able—and more motivated—to document student performance that occurs as a natural part of everyday work. Nurturing social learning ecosystems could facilitate workplace-based learning such that, across specialties, students acquire a comprehensive clinical skillset appropriate for graduation.
Development and validity evidence for the resident-led large group teaching assessment instrument in the United States: a methodological study  
Ariel Shana Frey-Vogel, Kristina Dzara, Kimberly Anne Gifford, Yoon Soo Park, Justin Berk, Allison Heinly, Darcy Wolcott, Daniel Adam Hall, Shannon Elliott Scott-Vernaglia, Katherine Anne Sparger, Erica Ye-pyng Chung
J Educ Eval Health Prof. 2024;21:3.   Published online February 23, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.3
  • 1,181 View
  • 198 Download
Abstract
Purpose
Despite educational mandates to assess resident teaching competence, limited instruments with validity evidence exist for this purpose. Existing instruments do not allow faculty to assess resident-led teaching in a large group format or whether teaching was interactive. This study gathers validity evidence on the use of the Resident-led Large Group Teaching Assessment Instrument (Relate), an instrument used by faculty to assess resident teaching competency. Relate comprises 23 behaviors divided into 6 elements: learning environment, goals and objectives, content of talk, promotion of understanding and retention, session management, and closure.
Methods
Messick’s unified validity framework was used for this study. Investigators used video recordings of resident-led teaching from 3 pediatric residency programs to develop Relate and a rater guidebook. Faculty were trained on instrument use through frame-of-reference training. Resident teaching at all sites was video-recorded during 2018–2019. Two trained faculty raters assessed each video. Descriptive statistics on performance were obtained. Validity evidence sources include: rater training effect (response process), reliability and variability (internal structure), and impact on Milestones assessment (relations to other variables).
Results
Forty-eight videos, from 16 residents, were analyzed. Rater training improved inter-rater reliability from 0.04 to 0.64. The Φ-coefficient reliability was 0.50. There was a significant correlation between overall Relate performance and the pediatric teaching Milestone (r=0.34, P=0.019).
Conclusion
Relate provides validity evidence with sufficient reliability to measure resident-led large-group teaching competence.
Mentorship and self-efficacy are associated with lower burnout in physical therapists in the United States: a cross-sectional survey study  
Matthew Pugliese, Jean-Michel Brismée, Brad Allen, Sean Riley, Justin Tammany, Paul Mintken
J Educ Eval Health Prof. 2023;20:27.   Published online September 27, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.27
  • 5,210 View
  • 391 Download
  • 4 Web of Science
  • 3 Crossref
Abstract
Purpose
This study investigated the prevalence of burnout in physical therapists in the United States and the relationships between burnout and education, mentorship, and self-efficacy.
Methods
This was a cross-sectional survey study. An electronic survey was distributed to practicing physical therapists across the United States over a 6-week period from December 2020 to January 2021. The survey was completed by 2,813 physical therapists from all states. The majority were female (68.72%), White or Caucasian (80.13%), and employed full-time (77.14%). Respondents completed questions on demographics, education, mentorship, self-efficacy, and burnout. The Burnout Clinical Subtypes Questionnaire 12 (BCSQ-12) and self-reports were used to quantify burnout, and the General Self-Efficacy Scale (GSES) was used to measure self-efficacy. Descriptive and inferential analyses were performed.
Results
Respondents from home health (median BCSQ-12=42.00) and skilled nursing facility settings (median BCSQ-12=42.00) displayed the highest burnout scores. Burnout was significantly lower among those who provided formal mentorship (median BCSQ-12=39.00, P=0.0001) compared to no mentorship (median BCSQ-12=41.00). Respondents who received formal mentorship (median BCSQ-12=38.00, P=0.0028) displayed significantly lower burnout than those who received no mentorship (median BCSQ-12=41.00). A moderate negative correlation (rho=-0.49) was observed between the GSES and burnout scores. A strong positive correlation was found between self-reported burnout status and burnout scores (rank-biserial r=0.61).
Conclusion
Burnout is prevalent in the physical therapy profession, as almost half of respondents (49.34%) reported burnout. Providing or receiving mentorship and higher self-efficacy were associated with lower burnout. Organizations should consider measuring burnout levels, investing in mentorship programs, and implementing strategies to improve self-efficacy.
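The reported moderate negative correlation between self-efficacy and burnout can be illustrated with a Spearman correlation on synthetic GSES and BCSQ-12 totals; the simulated data are for demonstration only and are not the survey responses.

```python
# Spearman correlation between simulated self-efficacy (GSES) and burnout (BCSQ-12) totals.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
gses = rng.integers(10, 41, size=200)                # GSES totals range from 10 to 40
noise = rng.normal(0, 12, size=200)
bcsq12 = np.clip(60 - 0.8 * gses + noise, 12, 84)    # BCSQ-12 totals range from 12 to 84

rho, p = spearmanr(gses, bcsq12)
print(f"rho={rho:.2f}, p={p:.1e}")  # a moderate negative correlation, in the spirit of rho=-0.49
```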

Citations

Citations to this article as recorded by  
  • Wellness and Stress Management Practices Among Healthcare Professionals and Health Professional Students
    Asli C. Yalim, Katherine Daly, Monica Bailey, Denise Kay, Xiang Zhu, Mohammed Patel, Laurie C. Neely, Desiree A. Díaz, Denyi M. Canario Asencio, Karla Rosario, Melissa Cowan, Magdalena Pasarica
    American Journal of Health Promotion.2024;[Epub]     CrossRef
  • Interprofessional education to support alcohol use screening and future team-based management of stress-related disorders in vulnerable populations
    Taylor Fitzpatrick-Schmidt, Scott Edwards
    Frontiers in Education.2024;[Epub]     CrossRef
  • Final results of the National Oncology Mentorship Program 2023 and its impact on burnout and professional fulfilment
    Udit Nindra, Gowri Shivasabesan, Rhiannon Mellor, Weng Ng, Wei Chua, Deme Karikios, Bethan Richards, Jia Liu
    Internal Medicine Journal.2024;[Epub]     CrossRef
Doctoral physical therapy students’ increased confidence following exploration of active video gaming systems in a problem-based learning curriculum in the United States: a pre- and post-intervention study  
Michelle Elizabeth Wormley, Wendy Romney, Diana Veneri, Andrea Oberlander
J Educ Eval Health Prof. 2022;19:7.   Published online April 26, 2022
DOI: https://doi.org/10.3352/jeehp.2022.19.7
  • 9,140 View
  • 308 Download
  • 1 Web of Science
  • 1 Crossref
Abstract
Purpose
Active video gaming (AVG) is used in physical therapy (PT) to treat individuals with a variety of diagnoses across the lifespan. The literature supports improvements in balance, cardiovascular endurance, and motor control; however, evidence is lacking regarding the implementation of AVG in PT education. This study investigated doctoral physical therapy (DPT) students’ confidence following active exploration of AVG systems as a PT intervention in the United States.
Methods
This pretest-posttest study included 60 DPT students in 2017 (cohort 1) and 55 students in 2018 (cohort 2) enrolled in a problem-based learning curriculum. AVG systems were embedded into patient cases and 2 interactive laboratory classes across 2 consecutive semesters (April–December 2017 and April–December 2018). Participants completed a 31-question survey before the intervention and 8 months later. Students’ confidence was rated for general use, game selection, plan of care, set-up, documentation, setting, and demographics. Descriptive statistics and the Wilcoxon signed-rank test were used to compare differences in confidence pre- and post-intervention.
Results
Both cohorts showed increased confidence at the post-test, with median (interquartile range) scores as follows: cohort 1: pre-test, 57.1 (44.3–63.5); post-test, 79.1 (73.1–85.4); and cohort 2: pre-test, 61.4 (48.0–70.7); post-test, 89.3 (80.0–93.2). Cohort 2 was significantly more confident at baseline than cohort 1 (P<0.05). In cohort 1, students’ data were paired and confidence levels significantly increased in all domains: use, Z=-6.2 (P<0.01); selection, Z=-5.9 (P<0.01); plan of care, Z=-6.0 (P<0.01); set-up, Z=-5.5 (P<0.01); documentation, Z=-6.0 (P<0.01); setting, Z=-6.3 (P<0.01); and total score, Z=-6.4 (P<0.01).
Conclusion
Structured, active experiences with AVG resulted in a significant increase in students’ confidence. As technology advances in healthcare delivery, it is essential to expose students to these technologies in the classroom.
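Below is a minimal sketch of the Wilcoxon signed-rank comparison applied to each confidence domain, using simulated paired pre/post scores rather than the course data.

```python
# Wilcoxon signed-rank test on simulated paired pre- and post-intervention confidence scores.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
pre = rng.uniform(40, 70, size=60)                          # pre-test confidence (0-100 scale)
post = np.clip(pre + rng.uniform(5, 30, size=60), 0, 100)   # most students improve

stat, p = wilcoxon(pre, post)
print(f"W={stat:.0f}, p={p:.1e}")  # a consistent increase yields a very small p-value
```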

Citations

Citations to this article as recorded by  
  • The use of artificial intelligence in crafting a novel method for teaching normal human gait
    Scott W. Lowe
    European Journal of Physiotherapy.2024; : 1.     CrossRef
Increased competency of registered dietitian nutritionists in physical examination skills after simulation-based education in the United States  
Elizabeth MacQuillan, Jennifer Ford, Kristin Baird
J Educ Eval Health Prof. 2020;17:40.   Published online December 14, 2020
DOI: https://doi.org/10.3352/jeehp.2020.17.40
  • 5,365 View
  • 158 Download
  • 2 Web of Science
  • 2 Crossref
Abstract
Purpose
This study aimed to translate simulation-based dietitian nutritionist education into clinical competency attainment in a group of practicing registered dietitian nutritionists (RDNs). Competence in a newly required clinical skill, the nutrition-focused physical exam (NFPE), was measured with a standardized instrument both before and after a simulation-based education (SBE) session.
Methods
Eighteen practicing RDNs were recruited by their employer, Spectrum Health. Following a pre-briefing session, participants completed an initial 10-minute encounter, performing NFPE on a standardized patient (SP). Next, participants completed a 90-minute SBE training session on skills within the NFPE, including hands-on practice and role play, followed by a post-training SP encounter. Video recordings of the SP encounters were scored to assess competence in 7 skill areas within the NFPE. Scores were analyzed for participants’ initial competence and change in competence.
Results
The proportions of participants with initial competence ranged from 0% to 44% across the 7 skill areas assessed. The only competency where participants initially scored in the “meets expectations” range was “approach to the patient.” When raw competence scores were assessed for changes from pre- to post-SBE training, the paired t-test indicated significant increases in all 7 competency areas following the simulation-based training (P<0.001).
Conclusion
This study showed the effectiveness of an SBE training program for increasing the competence scores of practicing RDNs on a defined clinical skill.
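The pre/post comparison described above can be sketched with a paired t-test on simulated scores for 18 participants; the numbers are placeholders, not the standardized-patient ratings.

```python
# Paired t-test on simulated pre- and post-SBE competency scores for 18 RDNs.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(5)
pre = rng.uniform(1.0, 3.0, size=18)                         # pre-training competency scores
post = np.clip(pre + rng.uniform(0.5, 2.0, size=18), 1, 5)   # most participants improve

t, p = ttest_rel(post, pre)
print(f"t={t:.1f}, p={p:.1e}")
```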

Citations

Citations to this article as recorded by  
  • Evaluation of Mental Health First Aid Training and Simulated Psychosis Care Role-Plays for Pharmacy Education
    Tina X. Ung, Claire L. O’Reilly, Rebekah J. Moles, Jack C. Collins, Ricki Ng, Lily Pham, Bandana Saini, Jennifer A. Ong, Timothy F. Chen, Carl R. Schneider, Sarira El-Den
    American Journal of Pharmaceutical Education.2024; 88(11): 101288.     CrossRef
  • Barriers for Liver Transplant in Patients with Alcohol-Related Hepatitis
    Gina Choi, Jihane N. Benhammou, Jung J. Yum, Elena G. Saab, Ankur P. Patel, Andrew J. Baird, Stephanie Aguirre, Douglas G. Farmer, Sammy Saab
    Journal of Clinical and Experimental Hepatology.2022; 12(1): 13.     CrossRef
Self-care perspective taking and empathy in a student-faculty book club in the United States  
Rebecca Henderson, Melanie Gross Hagen, Zareen Zaidi, Valentina Dunder, Edlira Maska, Ying Nagoshi
J Educ Eval Health Prof. 2020;17:22.   Published online July 31, 2020
DOI: https://doi.org/10.3352/jeehp.2020.17.22
  • 8,346 View
  • 189 Download
  • 12 Web of Science
  • 11 Crossref
Abstract
Purpose
We aimed to study the impact of a combined faculty-student book club on education and medical practice as a part of the informal curriculum at the University of Florida College of Medicine in the United States.
Methods
Sixteen medical students and 7 faculty members who participated in the book club were interviewed by phone, and the interviews were recorded. The interviews were then transcribed and entered into the qualitative data analysis program QSR NVivo (QSR International, Burlington, MA, USA). The transcripts were reviewed, and thematic codes were developed inductively through collaborative iteration. Based on these preliminary codes, a coding dictionary was developed and applied to all interviews within NVivo to identify themes.
Results
Four main themes were identified from the interviews. The first theme, the importance of literature to the development and maintenance of empathy and perspective-taking, and the second theme, the importance of the book club in promoting mentorship, personal relationships, and professional development, were important to both student and faculty participants. The third and fourth themes, the need for the book club as a tool for self-care and the book club serving as a reminder of the world outside of school, were discussed by student book club members.
Conclusion
Our study demonstrated that an informal book club has a significant positive impact on self-care, perspective-taking, empathy, and developing a “world outside of school” for medical school students and faculty in the United States. It also helps to foster meaningful relationships between students and faculty.

Citations

Citations to this article as recorded by  
  • Student-faculty dialogue: meaningful perspective taking on campus
    Tee R. Tyler
    Social Work With Groups.2024; 47(2): 165.     CrossRef
  • Clubes de lectura: una revisión sistemática internacional de estudios (2010-2022)
    Carmen Álvarez-Álvarez, Julián Pascual Díez
    Literatura: teoría, historia, crítica.2024;[Epub]     CrossRef
  • An open book: A virtual book club designed to connect advanced practice registered nurses through quality improvement
    Cassandra Faye Newell, Catherine Woods
    Journal of the American Association of Nurse Practitioners.2024; 36(8): 431.     CrossRef
  • Students’ informal learning interactions in health professions education: insights from a qualitative synthesis
    Sarah Barradell, Amani Bell, Kate Thomson, Jessica Hughes
    Higher Education Research & Development.2024; : 1.     CrossRef
  • Measurement instruments for perspective-taking: A BEME scoping review: BEME Review No. 91
    Elsemarijn L. Leijenaar, Megan M. Milota, Johannes J. M. van Delden, Annet van Royen–Kerkhof
    Medical Teacher.2024; : 1.     CrossRef
  • “Showing up to the conversation”: Qualitative reflections from a diversity, equity, and inclusion book club with emergency medicine leadership
    Andreia B. Alexander, Megan Palmer, Dajanae Palmer, Katie Pettit
    Academic Emergency Medicine.2024;[Epub]     CrossRef
  • The implementation of a required book club for medical students and faculty
    David B. Ney, Nethra Ankam, Anita Wilson, John Spandorfer
    Medical Education Online.2023;[Epub]     CrossRef
  • Cultivating critical consciousness through a Global Health Book Club
    Sarah L. Collins, Stuart J. Case, Alexandra K. Rodriguez, Acquel C. Allen, Elizabeth A. Wood
    Frontiers in Education.2023;[Epub]     CrossRef
  • Advancing book clubs as non-formal learning to facilitate critical public pedagogy in organizations
    Robin S Grenier, Jamie L Callahan, Kristi Kaeppel, Carole Elliott
    Management Learning.2022; 53(3): 483.     CrossRef
  • Not Just for Patrons: Book Club Participation as Professional Development for Librarians
    Laila M. Brown, Valerie Brett Shaindlin
    The Library Quarterly.2021; 91(4): 420.     CrossRef
  • Medical Students’ Creation of Original Poetry, Comics, and Masks to Explore Professional Identity Formation
    Johanna Shapiro, Juliet McMullin, Gabriella Miotto, Tan Nguyen, Anju Hurria, Minh Anh Nguyen
    Journal of Medical Humanities.2021; 42(4): 603.     CrossRef
Can incoming United States pediatric interns be entrusted with the essential communication skills of informed consent?  
Nicholas Sevey, Michelle Barratt, Emma Omoruyi
J Educ Eval Health Prof. 2020;17:18.   Published online June 29, 2020
DOI: https://doi.org/10.3352/jeehp.2020.17.18
  • 5,012 View
  • 131 Download
Abstract
Purpose
According to the entrustable professional activities (EPA) for entering residency defined by the Association of American Medical Colleges, incoming residents are expected to independently obtain informed consent for procedures they are likely to perform. This requires residents not only to inform their patients but also to ensure comprehension of that information. We assessed the communication skills demonstrated by 372 incoming pediatric interns at the University of Texas Health Science Center at Houston between 2007 and 2018 as they obtained informed consent for a lumbar puncture.
Methods
During a simulated case in which interns were tasked with obtaining informed consent for a lumbar puncture, a standardized patient evaluated interns by rating 7 communication-based survey items on a 5-point Likert scale from “poor” to “excellent.” We then converted the scale to a numerical system and calculated intern proficiency scores (the sum of ratings for each intern) and average item performance (the average item rating across all interns).
Results
Interns received an average score of 21.6 out of a maximum of 28, and 227 interns (61.0%) achieved proficiency by scoring 21 or better. Notable differences were observed when comparing groups before and after EPA implementation (76.97% vs. 47.0% proficient, respectively). Item-level analysis showed that interns struggled most to conduct the encounter in a warm and friendly manner and to encourage patients to ask questions (average ratings of 2.97/4 and 2.98/4, respectively). Interns excelled at treating the patient with respect and actively listening to questions (average rating of 3.16 each). Both the average intern proficiency scores and the average ratings of each item were significantly lower following EPA implementation (P<0.001).
Conclusion
Interns demonstrated moderate proficiency in communicating informed consent, though clear opportunities for improvement exist such as demonstrating warmth and encouraging questions.
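The scoring rule described in the Methods (7 items, a 28-point maximum, proficiency at 21 or better) can be sketched as follows; the intermediate scale labels are assumed for illustration, since the abstract names only the “poor” and “excellent” anchors.

```python
# Hypothetical scoring sketch: 7 items converted from a 5-point scale to 0-4 points each.
LIKERT_TO_POINTS = {"poor": 0, "fair": 1, "good": 2, "very good": 3, "excellent": 4}  # assumed labels

def proficiency_score(ratings: list[str]) -> int:
    """Sum of the 7 item ratings after conversion to points (maximum 28)."""
    assert len(ratings) == 7
    return sum(LIKERT_TO_POINTS[r] for r in ratings)

ratings = ["excellent", "good", "very good", "very good", "excellent", "very good", "good"]
score = proficiency_score(ratings)
print(score, score >= 21)  # 21, True -> meets the proficiency threshold
```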
Correlation between physician assistant students’ performance score of history taking and physical exam documentation and scores of Graduate Record Examination, clinical year grade point average, and score of Physician Assistant National Certifying Exam in the United States  
Sara Lolar, Jamie McQueen, Sara Maher
J Educ Eval Health Prof. 2020;17:16.   Published online May 27, 2020
DOI: https://doi.org/10.3352/jeehp.2020.17.16
  • 7,013 View
  • 162 Download
  • 1 Web of Science
  • 2 Crossref
Abstract
Purpose
Learning to perform and document patient history taking and physical exam (H&P) is a major component of the first-year academic education of physician assistant (PA) students at Wayne State University, USA. The H&P is summative of multiple aspects of PA education, and students must master communication with patients and other health care providers. The objectives of this study were, first, to determine whether there was a correlation between scores on Graduate Record Examination (GRE) component tests and scores on graded H&Ps and, second, to identify whether proficiency in H&P documentation correlated with academic and clinical year grade point averages (GPAs) and Physician Assistant National Certifying Exam (PANCE) scores.
Methods
Subjects included 147 PA students from Wayne State University from 2014–2016. PA students visited local hospitals or outpatient clinics during the academic year to perform and document patient H&Ps. Correlations between the mean H&P scores and GRE component scores, GPAs, and PANCE scores were analyzed.
Results
The subjects’ mean age was 26.5 years (±6.5), and 111 (75.5%) were female. There was no correlation between the GRE component scores and the mean H&P score. The H&P score was positively correlated with GPA 1 (r=0.512, P<0.001), GPA 2 (r=0.425, P<0.001), and the PANCE score (r=0.448, P<0.001).
Conclusion
PA student skill with H&P documentation was positively related to academic performance score during PA school and achievement score on the PANCE at Wayne State University, USA.
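A small illustration of the Pearson correlation reported between H&P documentation scores and PANCE scores, using simulated values chosen to give a coefficient near r=0.45; these are not the Wayne State data.

```python
# Pearson correlation between simulated H&P documentation scores and PANCE scores.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
hp_score = rng.normal(85, 5, size=147)                     # mean H&P documentation scores
pance = 300 + 4 * hp_score + rng.normal(0, 40, size=147)   # PANCE scores with noise

r, p = pearsonr(hp_score, pance)
print(f"r={r:.2f}, p={p:.3g}")  # a moderate positive correlation, similar in spirit to r=0.448
```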

Citations

Citations to this article as recorded by  
  • History-taking level and its influencing factors among nursing undergraduates based on the virtual standardized patient testing results: Cross sectional study
    Jingrong Du, Xiaowen Zhu, Juan Wang, Jing Zheng, Xiaomin Zhang, Ziwen Wang, Kun Li
    Nurse Education Today.2022; 111: 105312.     CrossRef
  • A Decline in Black and Dermatology Physician Assistants
    Jameka McElroy-Brooklyn, Cynthia Faires Griffith
    Journal of Physician Assistant Education.2022;[Epub]     CrossRef
Use of graded responsibility and common entrustment considerations among United States emergency medicine residency programs  
Jason Lai, Benjamin Holden Schnapp, David Simon Tillman, Mary Westergaard, Jamie Hess, Aaron Kraut
J Educ Eval Health Prof. 2020;17:11.   Published online April 20, 2020
DOI: https://doi.org/10.3352/jeehp.2020.17.11
  • 6,282 View
  • 99 Download
  • 2 Web of Science
  • 2 Crossref
Abstract
Purpose
The Accreditation Council for Graduate Medical Education (ACGME) requires all residency programs to provide increasing autonomy as residents progress through training, known as graded responsibility. However, there is little guidance on how to implement graded responsibility in practice and a paucity of literature on how it is currently implemented in emergency medicine (EM). We sought to determine how EM residency programs apply graded responsibility across a variety of activities and to identify which considerations are important in affording additional responsibilities to trainees.
Methods
We conducted a cross-sectional study of EM residency programs using a 23-question survey that was distributed by email to 162 ACGME-accredited EM program directors. Seven different domains of practice were queried.
Results
We received 91 responses to the survey (56.2% response rate). In all domains of practice except managing critically ill medical patients, more than 50% of surveyed programs reported using graded responsibility. When graded responsibility was applied, post-graduate year (PGY) level was rated an “extremely important” or “very important” consideration between 80.9% and 100.0% of the time.
Conclusion
The majority of EM residency programs are implementing graded responsibility within most domains of practice. When decisions are made surrounding graded responsibility, programs still rely heavily on the time-based model of PGY level to determine advancement.

Citations

Citations to this article as recorded by  
  • Do you see what I see?: exploring trends in organizational culture perceptions across residency programs
    Jennifer H. Chen, Paula Costa, Aimee Gardner
    Global Surgical Education - Journal of the Association for Surgical Education.2024;[Epub]     CrossRef
  • Guiding Fellows to Independent Practice
    Maybelle Kou, Aline Baghdassarian, Kajal Khanna, Nazreen Jamal, Michele Carney, Daniel M. Fein, In Kim, Melissa L. Langhan, Jerri A. Rose, Noel S. Zuckerbraun, Cindy G. Roskind
    Pediatric Emergency Care.2022; 38(10): 517.     CrossRef
Evaluation of student perceptions with 2 interprofessional assessment tools—the Collaborative Healthcare Interdisciplinary Relationship Planning instrument and the Interprofessional Attitudes Scale—following didactic and clinical learning experiences in the United States  
Vincent Dennis, Melissa Craft, Dale Bratzler, Melody Yozzo, Denise Bender, Christi Barbee, Stephen Neely, Margaret Robinson
J Educ Eval Health Prof. 2019;16:35.   Published online November 5, 2019
DOI: https://doi.org/10.3352/jeehp.2019.16.35
  • 10,787 View
  • 231 Download
  • 11 Web of Science
  • 10 Crossref
Abstract
Purpose
This study investigated changes in students’ attitudes using 2 validated interprofessional survey instruments—the Collaborative Healthcare Interdisciplinary Relationship Planning (CHIRP) instrument and the Interprofessional Attitudes Scale (IPAS)—before and after didactic and clinical cohorts.
Methods
Students from 7 colleges/schools participated in didactic and clinical cohorts during the 2017–2018 year. Didactic cohorts experienced 2 interactive sessions 6 months apart, while clinical cohorts experienced 4 outpatient clinical sessions once monthly. For the baseline and post-cohort assessments, 865 students were randomly assigned to complete either the 14-item CHIRP or the 27-item IPAS. The Pitman test using permutations of linear ranks was used to determine differences in the score distribution between the baseline and post-cohort assessments. Pooled results were compared for the CHIRP total score and the IPAS total and subdomain scores. For each score, 3 comparisons were made simultaneously: overall baseline versus post-didactic cohort, overall baseline versus post-clinical cohort, and post-didactic cohort versus post-clinical cohort. Alpha was adjusted to 0.0167 to account for the simultaneous comparisons.
Results
The baseline and post-cohort survey response rates were 62.4% and 65.9% for CHIRP and 58.7% and 58.1% for IPAS, respectively. The post-clinical cohort scores for the IPAS subdomain of teamwork, roles, and responsibilities were significantly higher than the baseline and post-didactic cohort scores. No differences were seen for the remaining IPAS subdomain scores or the CHIRP instrument total score.
Conclusion
The IPAS instrument may discern changes in student attitudes in the subdomain of teamwork, roles, and responsibilities following short-term clinical experiences involving diverse interprofessional team members.
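A simplified sketch of a two-sample permutation test on ranks, judged against the adjusted alpha of 0.05/3 ≈ 0.0167 used for the three simultaneous comparisons, appears below; the scores are synthetic, and the implementation is a generic stand-in for the permutation procedure named in the Methods.

```python
# Generic two-sample permutation test on mean ranks (synthetic data, not the CHIRP/IPAS scores).
import numpy as np

def permutation_pvalue(a: np.ndarray, b: np.ndarray, n_perm: int = 10_000, seed: int = 0) -> float:
    """Two-sided p-value for the difference in mean ranks between groups a and b."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([a, b])
    ranks = pooled.argsort().argsort() + 1  # simple rank transform (no ties expected here)
    observed = abs(ranks[: len(a)].mean() - ranks[len(a):].mean())
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(ranks)
        count += abs(perm[: len(a)].mean() - perm[len(a):].mean()) >= observed
    return count / n_perm

rng = np.random.default_rng(3)
baseline = rng.normal(4.0, 0.5, size=120)       # e.g., baseline subdomain scores
post_clinical = rng.normal(4.2, 0.5, size=110)  # slightly higher after the clinical cohort

p = permutation_pvalue(baseline, post_clinical)
print(p, p < 0.05 / 3)  # compare against the adjusted alpha of 0.0167
```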

Citations

Citations to this article as recorded by  
  • Interprofessional communication skills training to improve medical students’ and nursing trainees’ error communication - quasi-experimental pilot study
    Lina Heier, Barbara Schellenberger, Anna Schippers, Sebastian Nies, Franziska Geiser, Nicole Ernstmann
    BMC Medical Education.2024;[Epub]     CrossRef
  • Tools for self- or peer-assessment of interprofessional competencies of healthcare students: a scoping review
    Sharon Brownie, Jia Rong Yap, Denise Blanchard, Issac Amankwaa, Amy Pearce, Kesava Kovanur Sampath, Ann-Rong Yan, Patrea Andersen, Patrick Broman
    Frontiers in Medicine.2024;[Epub]     CrossRef
  • Development and implementation of interprofessional education activity among health professions students in Jordan: A pilot investigation
    Osama Y. Alshogran, Zaid Al-Hamdan, Alla El-Awaisi, Hana Alkhalidy, Nesreen Saadeh, Hadeel Alsqaier
    Journal of Interprofessional Care.2023; 37(4): 588.     CrossRef
  • Tools for faculty assessment of interdisciplinary competencies of healthcare students: an integrative review
    Sharon Brownie, Denise Blanchard, Isaac Amankwaa, Patrick Broman, Marrin Haggie, Carlee Logan, Amy Pearce, Kesava Sampath, Ann-Rong Yan, Patrea Andersen
    Frontiers in Medicine.2023;[Epub]     CrossRef
  • Interprofessional education tracks: One schools response to common IPE barriers
    Kim G. Adcock, Sally Earl
    Currents in Pharmacy Teaching and Learning.2023; 15(5): 528.     CrossRef
  • Interprofessional education and collaborative practice in Nigeria – Pharmacists' and pharmacy students' attitudes and perceptions of the obstacles and recommendations
    Segun J. Showande, Tolulope P. Ibirongbe
    Currents in Pharmacy Teaching and Learning.2023; 15(9): 787.     CrossRef
  • To IPAS or not to IPAS? Examining the construct validity of the Interprofessional Attitudes Scale in Hong Kong
    Fraide A. Ganotice, Amy Yin Man Chow, Kelvin Kai Hin Fan, Ui Soon Khoo, May Pui San Lam, Rebecca Po Wah Poon, Francis Hang Sang Tsoi, Michael Ning Wang, George L. Tipoe
    Journal of Interprofessional Care.2022; 36(1): 127.     CrossRef
  • Turkish adaptation of the interprofessional attitude scale (IPAS)
    Mukadder Inci Baser Kolcu, Ozlem Surel Karabilgin Ozturkcu, Giray Kolcu
    Journal of Interprofessional Care.2022; 36(5): 684.     CrossRef
  • Patient participation in interprofessional learning and collaboration with undergraduate health professional students in clinical placements: A scoping review
    Catrine Buck Jensen, Bente Norbye, Madeleine Abrandt Dahlgren, Anita Iversen
    Journal of Interprofessional Education & Practice.2022; 27: 100494.     CrossRef
  • Can interprofessional education change students’ attitudes? A case study from Lebanon
    Carine J. Sakr, Lina Fakih, Jocelyn Dejong, Nuhad Yazbick-Dumit, Hussein Soueidan, Wiam Haidar, Elias Boufarhat, Imad Bou Akl
    BMC Medical Education.2022;[Epub]     CrossRef
Effect of student-directed solicitation of evaluation forms on the timeliness of completion by preceptors in the United States  
Conrad Krawiec, Vonn Walter, Abigail Kate Myers
J Educ Eval Health Prof. 2019;16:32.   Published online October 16, 2019
DOI: https://doi.org/10.3352/jeehp.2019.16.32
  • 9,723 View
  • 130 Download
Abstract
Purpose
Summative evaluation forms assessing a student’s clinical performance are often completed by a faculty preceptor at the end of a clinical training experience. At our institution, despite the use of an electronic system, timeliness of completion has been suboptimal, potentially limiting our ability to monitor students’ progress. The aim of the present study was to determine whether a student-directed approach to summative evaluation form collection at the end of a pediatrics clerkship would enhance timeliness of completion for third-year medical students.
Methods
This was a pre- and post-intervention educational quality improvement project focused on 156 (82 pre-intervention, 74 post-intervention) third-year medical students at Penn State College of Medicine completing their 4-week pediatric clerkship. Utilizing REDCap (Research Electronic Data Capture) informatics support, student-directed evaluation form solicitation was encouraged. The Wilcoxon rank-sum test was applied to compare the pre-intervention (May 1, 2017 to March 2, 2018) and post-intervention (April 2, 2018 to December 21, 2018) percentages of forms completed before the rotation midpoint.
Results
In total, 740 evaluation forms were submitted during the pre-intervention phase and 517 during the post-intervention phase. The percentage of forms completed before the rotation midpoint increased after implementing student-directed solicitation (9.6% vs. 39.7%, P<0.05).
Conclusion
Our clerkship relies on subjective summative evaluations to track students’ progress, deploy improvement strategies, and determine criteria for advancement; however, our preceptors struggled with timely submission. Allowing students to direct the solicitation of evaluation forms enhanced the timeliness of completion and should be considered in clerkships facing similar challenges.
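The pre- versus post-intervention comparison can be illustrated with a Wilcoxon rank-sum test on fabricated per-student completion percentages; these are not the clerkship records.

```python
# Wilcoxon rank-sum test on simulated percentages of forms completed before the rotation midpoint.
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(4)
pre = rng.beta(2, 12, size=82) * 100   # pre-intervention students: low early-completion percentages
post = rng.beta(5, 8, size=74) * 100   # post-intervention students: higher after student-directed solicitation

stat, p = ranksums(pre, post)
print(f"z={stat:.2f}, p={p:.2e}")
```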
Application of an objective structured clinical examination to evaluate and monitor interns’ proficiency in hand hygiene and personal protective equipment use in the United States  
Ying Nagoshi, Lou Ann Cooper, Lynne Meyer, Kartik Cherabuddi, Julia Close, Jamie Dow, Merry Jennifer Markham, Carolyn Stalvey
J Educ Eval Health Prof. 2019;16:31.   Published online October 15, 2019
DOI: https://doi.org/10.3352/jeehp.2019.16.31
  • 10,604 View
  • 151 Download
  • 6 Web of Science
  • 7 Crossref
Abstract
Purpose
This study was conducted to determine whether an objective structured clinical examination (OSCE) could be used to evaluate and monitor hand hygiene and personal protective equipment (PPE) proficiency among medical interns in the United States.
Methods
Interns in July 2015 (N=123, cohort 1) with no experience of OSCE-based contact precaution evaluation and teaching were evaluated in early 2016 using an OSCE for hand hygiene and PPE proficiency. They performed poorly. Therefore, the new interns entering in July 2016 (N=151, cohort 2) were immediately tested at the same OSCE stations as cohort 1, and were provided with feedback and teaching. Cohort 2 was then retested at the OSCE station in early 2017. The Mann-Whitney U-test was used to compare the performance of cohort 1 and cohort 2 on checklist items. In cohort 2, performance differences between the beginning and end of the intern year were compared using the McNemar chi-square test for paired nominal data.
Results
Checklist items were scored, summed, and reported as percent correct. In cohort 2, the mean percent correct was higher on the posttest than on the pretest (92% vs. 77%, P<0.0001), and the passing rate (100% correct) was also significantly higher on the posttest (55% vs. 16%). At the end of intern year, the mean percent correct was higher in cohort 2 than in cohort 1 (95% vs. 90%, P<0.0001), and 55% of cohort 2 passed (a perfect score) compared to 24% in cohort 1 (P<0.0001).
Conclusion
An OSCE can be utilized to evaluate and monitor hand hygiene and PPE proficiency among interns in the United States.
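For the paired pretest/posttest comparison in cohort 2, an exact McNemar test reduces to a binomial test on the discordant pairs; the counts below are invented for illustration and are not the study data.

```python
# Exact McNemar test on hypothetical discordant pairs among cohort 2 interns.
from scipy.stats import binomtest

fail_then_pass = 62  # failed the pretest, passed the posttest (hypothetical)
pass_then_fail = 3   # passed the pretest, failed the posttest (hypothetical)

# With p = 0.5, a heavily one-sided split of discordant pairs gives a very small p-value.
result = binomtest(pass_then_fail, fail_then_pass + pass_then_fail, p=0.5)
print(f"p={result.pvalue:.2e}")
```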

Citations

Citations to this article as recorded by  
  • Staying proper with your personal protective equipment: How to don and doff
    Cameron R. Smith, Terrie Vasilopoulos, Amanda M. Frantz, Thomas LeMaster, Ramon Andres Martinez, Amy M. Gunnett, Brenda G. Fahy
    Journal of Clinical Anesthesia.2023; 86: 111057.     CrossRef
  • Virtual Reality Medical Training for COVID-19 Swab Testing and Proper Handling of Personal Protective Equipment: Development and Usability
    Paul Zikas, Steve Kateros, Nick Lydatakis, Mike Kentros, Efstratios Geronikolakis, Manos Kamarianakis, Giannis Evangelou, Ioanna Kartsonaki, Achilles Apostolou, Tanja Birrenbach, Aristomenis K. Exadaktylos, Thomas C. Sauter, George Papapagiannakis
    Frontiers in Virtual Reality.2022;[Epub]     CrossRef
  • Effectiveness and Utility of Virtual Reality Simulation as an Educational Tool for Safe Performance of COVID-19 Diagnostics: Prospective, Randomized Pilot Trial
    Tanja Birrenbach, Josua Zbinden, George Papagiannakis, Aristomenis K Exadaktylos, Martin Müller, Wolf E Hautz, Thomas Christian Sauter
    JMIR Serious Games.2021; 9(4): e29586.     CrossRef
  • Rapid Dissemination of a COVID-19 Airway Management Simulation Using a Train-the-Trainers Curriculum
    William J. Peterson, Brendan W. Munzer, Ryan V. Tucker, Eve D. Losman, Carrie Harvey, Colman Hatton, Nana Sefa, Ben S. Bassin, Cindy H. Hsu
    Academic Medicine.2021; 96(10): 1414.     CrossRef
  • Empirical analysis comparing the tele-objective structured clinical examination and the in-person assessment in Australia
    Jonathan Zachary Felthun, Silas Taylor, Boaz Shulruf, Digby Wigram Allen
    Journal of Educational Evaluation for Health Professions.2021; 18: 23.     CrossRef
  • Employment of Objective Structured Clinical Examination Tool in the Undergraduate Medical Training
    Saurabh RamBihariLal Shrivastava, Prateek Saurabh Shrivastava
    Journal of the Scientific Society.2021; 48(3): 145.     CrossRef
  • Comparison of students' performance of objective structured clinical examination during clinical practice
    Jihye Yu, Sukyung Lee, Miran Kim, Janghoon Lee
    Korean Journal of Medical Education.2020; 32(3): 231.     CrossRef
