JEEHP : Journal of Educational Evaluation for Health Professions

Brief report
Changing medical students’ perception of the evaluation culture: Is it possible?
Jorie M. Colbert-Getz1*, Steven Baumann2

DOI: https://doi.org/10.3352/jeehp.2016.13.8
Published online: February 15, 2016

1Department of Internal Medicine Administration, University of Utah School of Medicine, Salt Lake City, Utah, United States of America

2Office of Professionalism, Evaluation, and Learning, University of Utah School of Medicine, Salt Lake City, Utah, United States of America

*Corresponding email: jorie.colbert-getz@hsc.utah.edu


Received: January 9, 2016; Accepted: February 14, 2016

© 2016, Korea Health Personnel Licensing Examination Institute

This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Student feedback is a critical component of the teacher-learner cycle. However, there is no gold standard course or clerkship evaluation form, and there is limited research on the impact of changing the evaluation process. Results from a focus group and a pre-implementation feedback survey, coupled with best practices in survey design, were used to improve all course/clerkship evaluations for academic year 2013-2014. In spring 2014 we asked all students at the University of Utah School of Medicine, United States of America to complete the same feedback survey (post-implementation survey). We assessed the evaluation climate with 3 measures on the feedback survey: overall satisfaction with the evaluation process; the amount of time students gave effort to the process; and the amount of time students used shortcuts. Scores from these measures were compared between 2013 and 2014 with Mann-Whitney U-tests. Response rates were 79% (254) for 2013 and 52% (179) for 2014. Students’ overall satisfaction score was significantly higher (more positive) post-implementation than pre-implementation (P<0.001). There was no change in the amount of time students gave effort to completing evaluations (P=0.981) and no change in the amount of time they used shortcuts to complete evaluations (P=0.956). We were able to change overall satisfaction with the medical school evaluation culture, but not the amount of time students gave effort to completing evaluations or the amount of time they used shortcuts. To ensure accurate evaluation results, we will need to focus our efforts on the time needed to complete course evaluations across all four years.
Student feedback is a critical component of the teacher-learner cycle. Most medical schools rely on course/clerkship feedback as one component to measure the effectiveness of their programs in terms of student satisfaction [1]. However, there is no gold standard course or clerkship evaluation form, and there is limited research on the impact of changing the evaluation process. In spring 2013 the University of Utah School of Medicine conducted a focus group with student representatives from each class to gather qualitative feedback on the course/clerkship evaluation process. The following questions from a prior evaluation study were used in the focus group [2]:
“In your opinion, what is the purpose of evaluation in medical education?”
“How would you define good teaching?”
“What do you think about the evaluation tools currently used at our institution?”
“How do you arrive at an overall course rating?”
“What kind of consequences would you like to see drawn from course evaluations?”
After the focus group we gauged all students’ perception of the evaluation culture with an anonymous online feedback survey. The feedback survey included seven Likert-scale questions about elements of the evaluation process (strongly agree, agree, disagree, or strongly disagree), two items about effort and use of shortcuts in completing evaluations (0%-25%, 26%-50%, 51%-75%, or 76%-100% of the time), and one open-ended question asking for specific recommendations for improving the course/clerkship evaluation process. Results from the focus group and feedback survey, coupled with best practices in survey design, were used to improve all course/clerkship evaluations. Drafts of the new course and clerkship evaluations were emailed to the student representatives who participated in the earlier focus group and to course/clerkship directors, asking for suggestions to further improve the evaluations. See Appendices 1 and 2 for the final course and clerkship evaluations, respectively.
At the University of Utah School of Medicine the MD program is four years long, with 80-100 students admitted each year. In academic year (AY) 2014 (July 2013 to June 2014) we implemented the new evaluation process. As in prior years, all evaluations were completed online with a link emailed to students from internal survey software, and all responses were anonymous. In AY 2014 we introduced an optional midpoint formative survey with four items so that course directors could review feedback and make changes before the end of a course. The midpoint survey was added because many students mentioned that the end of a course was too late to address major concerns. For both pre-clinical courses and clerkships, students completed an end-of-course/clerkship evaluation (see Appendices 1 and 2). These surveys consisted of 13 questions for pre-clinical courses and 11-18 questions for clerkships, covering the major domains deemed important by the Liaison Committee on Medical Education. Previously, the end-of-course evaluation for pre-clinical courses consisted of 30 questions and the end-of-clerkship evaluation consisted of 51 questions. We replaced all 5-point strongly agree to strongly disagree Likert-scale items with 3-point scales using ‘agree, unsure, or disagree’ options, as this provided sufficient data and helped reduce the cognitive load on students completing the surveys. In AY 2013 students completed 8 end-of-course evaluations in years 1-2 and 7 end-of-clerkship evaluations in year 3. In AY 2014 two new courses were added, and thus students completed more end-of-course evaluations in years 1-2.
Significant changes were also made to the process of evaluating teaching faculty in years 1 and 2. We decreased the number of instructors each student had to evaluate by a third and also decreased the number of evaluation items. Previously, all students evaluated each lecturer at the end of the course, and courses lasted 2-11 weeks. Surveys comprised five questions with an option for additional comments. Additionally, five students per week were selected to complete a daily survey for each lecturer in each course. In AY 2014 we omitted the daily survey, as it was redundant with the new survey for evaluating teaching faculty. Specifically, in an effort to provide more meaningful feedback, students were divided into three evaluation groups per course. Each group of 30-40 students was responsible for evaluating the instructors who taught during a specific time frame. Notifications were sent to students at the beginning of each course to inform them of their assigned evaluation group and their assigned lecturers. In the new process, students answered one question, with an option for additional comments, for each assigned lecturer. The process let students know in advance which lecturers they were to evaluate and allowed them to provide feedback closer in time to the actual lecture(s).
An anonymous ‘On-The-Fly’ survey was also designed and implemented in AY 2014. The ‘On-The-Fly’ system gives students an opportunity to report concerns and to evaluate an instructor, learning activity, or clinical experience confidentially and anonymously in real time. ‘On-The-Fly’ surveys are available to students on a secure website, and all responses go to the senior director of professionalism, learning, and evaluation, who forwards them to the appropriate teaching faculty and/or dean. Refer to Appendix 3 for the ‘On-The-Fly’ survey template.
In spring 2014 all students were asked to complete the same feedback survey that was used in spring 2013. Responses on the pre- and post-implementation surveys were compared to determine whether students’ perception of the evaluation process had changed. Specifically, we calculated the percentage of students who agreed or strongly agreed with each survey element statement and compared the pre- and post-implementation results with logistic regression. We also computed an overall course evaluation satisfaction score by summing values across the 7 survey items, where strongly disagree=1, disagree=2, agree=3, and strongly agree=4 (possible range, 7 to 28), and compared those scores pre- and post-implementation with the Mann-Whitney U-test. Mann-Whitney U-tests were also used to compare effort and shortcut ratings pre- and post-implementation.
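For illustration, the sketch below shows how this type of analysis could be run in Python with pandas, SciPy, and statsmodels; it is not the authors’ analysis code, and the data file and column names are hypothetical.

```python
# Illustrative sketch only: mirrors the analysis described above, not the
# authors' actual code. The file name and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import mannwhitneyu

# Each row is one student response; 'year' is 2013 (pre) or 2014 (post), and
# 'item1'..'item7' hold Likert responses coded 1=strongly disagree ... 4=strongly agree.
df = pd.read_csv("feedback_survey.csv")
items = [f"item{i}" for i in range(1, 8)]

# Overall satisfaction score: sum of the 7 items (possible range, 7 to 28).
df["satisfaction"] = df[items].sum(axis=1)

pre = df.loc[df["year"] == 2013, "satisfaction"]
post = df.loc[df["year"] == 2014, "satisfaction"]

# Mann-Whitney U-test comparing pre- vs post-implementation satisfaction scores.
u_stat, p_value = mannwhitneyu(pre, post, alternative="two-sided")
print(f"Satisfaction: U={u_stat:.1f}, P={p_value:.3f}")

# Logistic regression for a single element: agreement (agree/strongly agree = 1)
# predicted by implementation period; exp(coefficient) gives the odds ratio.
df["agree"] = (df["item1"] >= 3).astype(int)
df["post_period"] = (df["year"] == 2014).astype(int)
model = sm.Logit(df["agree"], sm.add_constant(df["post_period"])).fit(disp=0)
print("Odds ratio (post vs pre):", round(float(np.exp(model.params["post_period"])), 2))
```

Under these assumptions, each student’s satisfaction score falls between 7 and 28, and the exponentiated regression coefficient is an odds ratio analogous to those reported in Table 1.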
Response rates were 79% (254) for AY 2013 and 52% (179) for AY 2014. Tables 1 and 2 summarize the feedback survey responses for AY 2013 (pre-implementation) and AY 2014 (post-implementation). Students gave significantly more positive (strongly agree, agree) ratings post-implementation for all elements of the evaluation process except lecturers adjusting their lectures as a result of weaknesses identified by student feedback. Students’ overall satisfaction score was significantly higher (more positive) post-implementation (mean±SD, 20.04±3.83) than pre-implementation (mean±SD, 17.69±3.78; P<0.001). There was no change in the amount of time students said they gave effort to completing evaluations (P=0.981) and no change in the amount of time they used shortcuts to complete evaluations (P=0.956).
We were able to change the medical school evaluation culture in terms of students’ overall satisfaction and their satisfaction with all elements of the process except lecturers using feedback to improve lectures. However, there was no change in the amount of time students gave effort to completing evaluations or in the amount of time they used shortcuts to complete them, even though we substantially decreased the number of items and the number of times students completed evaluations. A limitation of these results is that they reflect the evaluation process at a single institution. Additionally, the post-implementation response rate was low. We took the low response rate as an indication that students were not strongly dissatisfied with the new course evaluation process: at our institution a response rate above 70% for a non-required survey is almost always an indication that students are greatly dissatisfied or frustrated with the component being surveyed. To ensure that educators get accurate feedback from students, we will need to focus our efforts on the time needed to complete course evaluations across all years of medical school. Future research will need to determine the usefulness of course evaluation feedback to course/clerkship directors.

Conflict of interest

No potential conflict of interest relevant to this article was reported.

The authors wish to thank the student representatives who provided in-depth feedback in the focus groups.
Audio recording of the abstract.
jeehp-13-08-abstract-recording.avi
Table 1.
Frequency and percentage of students who strongly agreed/agreed with survey items about elements of the University of Utah School of Medicine course evaluation process in academic year 2013 (pre-implementation, N=254) and 2014 (post-implementation of new process, N=175)
Element of the evaluation process | Strongly agree/agree pre-implementation | Strongly agree/agree post-implementation | P-value | Odds ratio
I feel the directors adjust courses as a result of weaknesses identified by student feedback | 165 (65.0) | 137 (78.3) | 0.005 | 1.88
I feel that lecturers adjust their lectures as a result of weaknesses identified by student feedback | 135 (53.1) | 109 (62.3) | 0.066 | Not applicable
I feel that persons other than course directors and lecturers pay attention to student feedback | 137 (53.9) | 117 (66.9) | 0.010 | 1.70
I feel that the evaluation process is transparent | 124 (48.8) | 114 (65.1) | 0.002 | 1.93
I feel that the end of course evaluation provides an effective way to evaluate courses | 147 (57.9) | 138 (78.9) | ≤0.001 | 2.77
I feel that the instructor evaluation form provides important feedback | 165 (65.0) | 145 (82.9) | ≤0.001 | 2.76
I feel that the current evaluation process protects my identity and thus allows me to be honest | 142 (55.9) | 135 (77.1) | ≤0.001 | 2.71

Values are presented as number (%).

Table 2.
Frequencies and percentages for student effort and use of shortcuts in the University of Utah School of Medicine course evaluation process in AY 2013 (pre-implementation, N=254) and AY 2014 (post-implementation of new process, N=175)
Survey item | 0%-25% of the time | 26%-50% of the time | 51%-75% of the time | 76%-100% of the time | P-value
How often are you able to give adequate thought and effort to the evaluation process? | | | | | 0.981
 Pre-implementation (AY 2013) | 41 (16.1) | 84 (33.1) | 93 (36.3) | 36 (14.2) |
 Post-implementation (AY 2014) | 23 (13.1) | 63 (36.0) | 69 (39.4) | 20 (11.4) |
How often do you use shortcuts to complete evaluations? | | | | | 0.956
 Pre-implementation (AY 2013) | 104 (40.9) | 62 (24.4) | 50 (19.7) | 38 (15.0) |
 Post-implementation (AY 2014) | 62 (35.4) | 63 (36.0) | 30 (17.1) | 20 (11.4) |

Values are presented as number (%).

AY, academic year.

Appendix 1.
Course evaluation tool used at the University of Utah School of Medicine, United States of America in academic year 2013
jeehp-13-08-app1.pdf
Appendix 2.
Clerkship evaluation tool used at the University of Utah School of Medicine, United States of America in academic year 2013
jeehp-13-08-app2.pdf
Appendix 3.
‘On-The-Fly’ evaluation tool used at the University of Utah School of Medicine, United States of America in academic year 2013
jeehp-13-08-app3.pdf
