JEEHP : Journal of Educational Evaluation for Health Professions

Research article
Medical students’ satisfaction with the Applied Basic Clinical Seminar with Scenarios for Students, a novel simulation-based learning method in Greece
Panteleimon Pantelidis1*, Nikolaos Staikoglou1, Georgios Paparoidamis1, Christos Drosos1, Stefanos Karamaroudis2, Athina Samara3, Christodoulos Keskinis4, Michail Sideris5, George Giannakoulas6, Georgios Tsoulfas7, Asterios Karagiannis8

DOI: https://doi.org/10.3352/jeehp.2016.13.13
Published online: March 24, 2016

1Medical Department, Aristotle University of Thessaloniki, Thessaloniki, Greece

2Medical Department, National and Kapodistrian University of Athens, Athens, Greece

3Medical Department, University of Thessaly, Larissa, Greece

4Medical Department, Democritus University of Thrace, Alexandroupolis, Greece

5The London Deanery HEE, Queen Mary University London, United Kingdom

6First Department of Cardiology, AHEPA Hospital, Aristotle University of Thessaloniki, Thessaloniki, Greece

7First Department of Surgery, Papageorgiou Hospital, Aristotle University of Thessaloniki, Thessaloniki, Greece

8Second Propaedeutic Department of Internal Medicine, Hippokration Hospital, Aristotle University of Thessaloniki, Thessaloniki, Greece

*Corresponding email: ppantele@auth.gr


• Received: February 23, 2016   • Accepted: March 23, 2016

© 2016, Korea Health Personnel Licensing Examination Institute

This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

  • Purpose:
    The integration of simulation-based learning (SBL) methods holds promise for improving the medical education system in Greece. The Applied Basic Clinical Seminar with Scenarios for Students (ABCS3) is a novel two-day SBL course that was designed by the Scientific Society of Hellenic Medical Students. The ABCS3 targeted undergraduate medical students and consisted of three core components: the case-based lectures, the ABCDE hands-on station, and the simulation-based clinical scenarios. The purpose of this study was to evaluate the general educational environment of the course, as well as the skills and knowledge acquired by the participants.
  • Methods:
    Two sets of questions were distributed to the participants: the Dundee Ready Educational Environment Measure (DREEM) questionnaire and an internally designed feedback questionnaire (InEv). A multiple-choice question (MCQ) examination was also distributed prior to the course and following its completion. A total of 176 participants answered the DREEM questionnaire, 56 the InEv, and 60 the MCQs.
  • Results:
    The overall DREEM score was 144.61 (±28.05) out of 200. Delegates who participated in both the case-based lecture and interactive scenario core components scored higher than those who only completed the case-based lecture session (P=0.038). The mean overall feedback score was 4.12 (±0.56) out of 5. Students scored significantly higher on the post-test than on the pre-test (P<0.001).
  • Conclusion:
    The ABCS3 was found to be an effective SBL program, as medical students reported positive opinions about their experiences and exhibited improvements in their clinical knowledge and skills.
Simulation-based learning (SBL) has been proven to be a cost-effective, easily accessible, and promising educational method in modern education [1]. Numerous studies have proposed it as an effective and safe educational tool for many educational goals, including practice in emergency medicine [1,2]. SBL provides a safe environment in which to practice skills, as it allows participants to repeatedly perform clinical tasks and processes and to debrief about their performance and possible mistakes, without any risk of harming or disturbing a patient. Large meta-analyses and reviews have demonstrated the positive impact of SBL on educational goals and patient-related outcomes [3].
However, given the traditionally oriented “see one, do one, teach one” medical education model in Greece, little is known about the potential impact of SBL on the educational environment and students’ clinical improvement. With that consideration in mind, the Scientific Society of Hellenic Medical Students (SSHMS) organized the first Applied Basic Clinical Seminar with Scenarios for Students (ABCS3), a seminar that aimed to improve students’ clinical knowledge and skills through simulation, a strategy that is still rarely implemented in the medical educational system in Greece. In this seminar, specific simulation techniques (partial task simulators, standardized patients, and high-fidelity mannequins) were used to recreate the environment of certain emergency cases and to push participants to apply their skills in stressful situations. A further goal of the seminar was to familiarize the participants with the techniques and advantages of SBL in undergraduate education, with the ultimate hope that this educational tool will become an integrated part of the undergraduate curriculum in Greece.

Although the ABCS3 was approved by the senior academic faculty, it was grounded in medical students’ need for a more contemporary and reformed medical educational structure [4]. In fact, medical students suggested most of the core parts of its curriculum, making it a mainly student-led effort. The students created a diagram of the scientific modules that would form the core parts of the seminar, specifically outlining the topics of the ten scenarios and the lectures, and they also defined the learning objectives and the evaluation tools of the seminar. The academic professors reviewed the design and made appropriate adjustments. Therein lies the critical importance of this initiative: it represented a significant innovation and an attempt at self-improvement on the part of the students, despite the current educational environment of a country in the middle of an economic crisis.

The purpose of this study was to describe the novel structure of this seminar and to assess the students’ views of this SBL course, as well as any improvement in their clinical knowledge.
The basic idea of the seminar
The ABCS3 was designed to create a simulation-based educational environment that engaged medical students with a variety of common clinical scenarios and familiarized them with fundamental skills and clinical competencies. The seminar lasted two days and consisted of three core components: the case-based lectures (Le), the first-aid and resuscitation course (named the “ABCDE hands-on station”), and the simulation-based clinical scenarios (Sc).
A total of 230 medical students from medical schools all over Greece attended the seminar from April 3 to April 4, 2015. Interested students applied online and a scientific committee selected participants based on information from their curricula vitae (year of study, grades in medical school, previously attended courses, and any publications, presentations, or conference abstracts). The attendees who were selected were then split into two groups: the full category, consisting of 60 students (in their fourth, fifth, and sixth years of study) who attended all the modules of the course, and the observer (OB) category, consisting of 170 medical students of any year of study. The OB group included delegates who attended Le, as in the full group, although they observed the Sc in a large lecture hall, via video live-streaming, without having the chance to interact.
The Le core consisted of 15 case-based lectures and took place before the Sc section, serving as an introduction that provided relevant theoretical knowledge and familiarized participants with the procedures involved in SBL.
The ABCDE hands-on station was held between the Le and Sc sections and aimed to teach the attendees the ABCDE approach, or to refresh their knowledge of it, as they would be asked to apply it in the subsequent scenarios.
The Sc core was the main part of the seminar and consisted of moulage scenarios and skill stations. Each scenario lasted 30 minutes, and the 60 participants (from the full category) were divided into 10 groups (I-X) of six people. Each group started with one scenario (I in Sc1, II in Sc2, etc.). After 30 minutes, at the sound of an electronic alarm, all 10 groups moved forward to the next scenario (I to Sc2, II to Sc3, and so on). Consequently, after 10 cycles of rotation (and 10 repetitions of each scenario by different groups), all groups (I to X) had attended all the scenarios (Sc1 to Sc10) (Table 1). Each scenario took place in an appropriately designed room with all the necessary medical equipment. Mannequins were used in Sc3, Sc5, and Sc9, while in Sc4, Sc6, Sc7, Sc8, and Sc10, trained actors (standardized patients) who were dressed appropriately presented their medical histories and simulated the appropriate symptoms and reactions, according to the specific requirements of each scenario. In the Sc1 station, the delegates practiced suturing, while in Sc2 they practiced peripheral and central venous catheter insertion. A laptop was available to visualize laboratory and imaging test results.
In each repetition, six participants and two instructors were present in the room. The instructors directed the scenario, enforced the time limits, and helped the participants function as a team, with distinct roles for each member, while following the most appropriate diagnostic and therapeutic strategy to save the patient within narrow time limits. Table 1 contains details about the topics of the Le and Sc components.
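For illustration only, the short Python sketch below (written for this description, not part of the seminar materials) generates the ten-cycle rotation schedule described above, assuming that in each cycle every group simply advances to the next scenario number and wraps around after Sc10.

```python
# Minimal sketch of the round-robin rotation described above (illustration
# only): in cycle c, group i attends scenario ((i + c) mod 10) + 1, so after
# 10 cycles every group has attended every scenario exactly once.

GROUPS = ["I", "II", "III", "IV", "V", "VI", "VII", "VIII", "IX", "X"]

def rotation_schedule(n_scenarios: int = 10):
    """Return one assignment per 30-minute cycle: group label -> scenario."""
    schedule = []
    for cycle in range(n_scenarios):
        schedule.append({
            group: f"Sc{((i + cycle) % n_scenarios) + 1}"
            for i, group in enumerate(GROUPS)
        })
    return schedule

if __name__ == "__main__":
    for cycle_no, assignment in enumerate(rotation_schedule(), start=1):
        print(f"Cycle {cycle_no}: {assignment}")
```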
Evaluation
Numerous methods and instruments for assessing undergraduate medical learning environments have been published [5,6]. One of the most widely used learning environment assessment tools is the Dundee Ready Educational Environment Measure (DREEM). The DREEM and two other instruments (the internal evaluation inventory [InEv] and a set of multiple-choice questions [MCQs]) were used to assess the students’ opinion of the course (DREEM, InEv) and the contribution of the course to their clinical knowledge (MCQs) (Fig. 1). The data were processed using SPSS version 20.0 (IBM Corp., Armonk, NY, USA).
The DREEM inventory
DREEM has been translated into and validated in many languages and has been used for many evaluation purposes such as diagnosing deficiencies in a current educational environment, comparing different groups’ experiences with a given educational environment, and comparing actual experiences of an educational environment with an ideal or expectations within a single group [7,8,9,10,11].
The DREEM inventory consists of 50 five-point Likert items, scored from 0 to 4: 4, strongly agree; 3, agree; 2, uncertain; 1, disagree; and 0, strongly disagree. Questions 4, 8, 9, 17, 25, 35, 39, 48, and 50 are negative statements and are reverse-scored. The 50-item DREEM has a maximum score of 200, which indicates the ideal educational environment, and a minimum score of 0, which would indicate a very worrying result. The interpretation of the overall score is: 0-50, very poor; 51-100, plenty of problems; 101-150, more positive than negative; and 151-200, excellent. The DREEM also has five subscales, each consisting of a set of items and with its own interpretation guide: (a) students’ perceptions of learning (SPL), with 12 items and a maximum score of 48; (b) students’ perceptions of teachers (SPT), with 11 items and a maximum score of 44; (c) students’ academic self-perceptions (SASP), with eight items and a maximum score of 32; (d) students’ perceptions of atmosphere (SPA), with 12 items and a maximum score of 48; and (e) students’ social self-perceptions (SSSP), with seven items and a maximum score of 28.
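As an illustration of the scoring rules just described (a minimal sketch assuming responses are already coded 0-4; the subscale item key is omitted), the overall DREEM score can be computed and interpreted as follows.

```python
# Sketch of DREEM overall scoring as described above (illustration only).
NEGATIVE_ITEMS = {4, 8, 9, 17, 25, 35, 39, 48, 50}  # reverse-scored statements

def score_item(item_no: int, raw: int) -> int:
    """Map a 0-4 Likert response to its scored value, reversing negative items."""
    return 4 - raw if item_no in NEGATIVE_ITEMS else raw

def overall_dreem(responses: dict[int, int]) -> int:
    """Sum the 50 scored items; range 0-200, higher = better environment."""
    assert set(responses) == set(range(1, 51)), "expects answers to all 50 items"
    return sum(score_item(i, r) for i, r in responses.items())

def interpret(total: int) -> str:
    """Interpretation bands given in the text."""
    if total <= 50:
        return "very poor"
    if total <= 100:
        return "plenty of problems"
    if total <= 150:
        return "more positive than negative"
    return "excellent"

# Example: answering 3 ("agree") to every item scores 3 on the 41 positive
# items and 1 on the nine reverse-scored items, giving 132.
example = {i: 3 for i in range(1, 51)}
print(overall_dreem(example), interpret(overall_dreem(example)))
```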
Following the completion of the course, all 230 participants were asked to fill in the validated DREEM questionnaire online, together with two extra questions (Q1: “Do you think that the course will prove to be beneficial to your clinical skills?” and Q2: “Would you suggest the course to another student?”). The reliability of the inventory was tested using Cronbach’s alpha. A descriptive analysis of the data (DREEM, Q1, and Q2) was performed. Moreover, the Mann-Whitney U test was performed to detect whether the overall and subscale DREEM scores differed by gender, school (Aristotle University of Thessaloniki vs. other universities), or category of participation (full vs. OB). The Kruskal-Wallis test was used to test for differences in DREEM scores according to year of study. Finally, correlations between the Q1, Q2, and overall DREEM scores were assessed using Kendall’s tau test.
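The analyses above were run in SPSS version 20.0; purely as an illustration, the sketch below shows how the same tests (Cronbach’s alpha, the Mann-Whitney U test, the Kruskal-Wallis test, and Kendall’s tau) could be reproduced in Python with SciPy, using synthetic placeholder data rather than the study’s responses.

```python
# Illustration of the tests described above using SciPy instead of SPSS.
# All data below are synthetic placeholders, not the study's responses.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def cronbach_alpha(items: np.ndarray) -> float:
    """Classical Cronbach's alpha for a respondents-by-items matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Reliability of a 50-item inventory (synthetic 0-4 responses, 176 students)
responses = rng.integers(0, 5, size=(176, 50))
print("Cronbach's alpha:", round(cronbach_alpha(responses), 3))

# Full vs. observer category: Mann-Whitney U test on overall DREEM scores
full_scores = rng.normal(152, 26, size=53)
ob_scores = rng.normal(141, 28, size=123)
print(stats.mannwhitneyu(full_scores, ob_scores, alternative="two-sided"))

# Year of study (six groups): Kruskal-Wallis test
scores_by_year = [rng.normal(145, 25, size=n) for n in (48, 28, 27, 38, 23, 12)]
print(stats.kruskal(*scores_by_year))

# Association between Q1 and the overall DREEM score: Kendall's tau
q1 = rng.integers(0, 5, size=176)
overall = np.concatenate([full_scores, ob_scores])
print(stats.kendalltau(q1, overall))
```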
Internal evaluation (InEv) inventory
The InEv inventory was designed as an evaluation tool by the SSHMS and applied specifically to the scenarios of the ABCS3. It consisted of four subsets of 10 questions each and one general question, involving the self-assessment of the respondent’s improvement in clinical skills (ICl). The four subsets referred to four different parameters of the 10 scenarios: adequacy of trainers (ATr), adequacy of facilities and equipment (AFa), adequacy of time (ATi), and general satisfaction (Sat) with each scenario. When applied to all 10 scenarios, this resulted in 40 questions (ATr1, ATr2, ..., ATr10, AFa1, etc.). The answers were on a five-point Likert scale, from 1 (minimum) to 5 (maximum). Five more scores were obtained: the trainers’ score (TrS, the mean of ATr1 through ATr10, maximum of 5); the facilities/equipment score (FaS, the mean of AFa1 through AFa10, maximum of 5); the time score (TiS, the mean of ATi1 through ATi10, maximum of 5); the general satisfaction score (SatS, the mean of Sat1 through Sat10, maximum of 5); and the overall score (OvS = (TrS + FaS + TiS + SatS + 4 × ICl)/8, maximum of 5) (Table 2). The TrS, TiS, FaS, and SatS represent the overall score for each of the above four parameters, while the OvS represents the overall score of the seminar, with a special emphasis on ICl, as the improvement of the participants’ clinical skills was the primary goal of the seminar. The 60 full participants were asked to fill out the questionnaire immediately after the completion of the seminar.
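To make the composite-score definitions concrete, the following sketch computes TrS, FaS, TiS, SatS, and OvS from one hypothetical set of answers (illustrative values only, not study data).

```python
# Worked illustration of the InEv composite scores defined above.
# The per-scenario answers here are hypothetical, not study data.
from statistics import mean

# Ten answers (1-5), one per scenario Sc1..Sc10, for each parameter
atr = [4, 3, 4, 4, 5, 4, 4, 4, 5, 5]   # adequacy of trainers (ATr1..ATr10)
afa = [5, 4, 4, 4, 5, 4, 4, 4, 4, 5]   # adequacy of facilities/equipment
ati = [4, 3, 4, 4, 4, 4, 4, 4, 4, 4]   # adequacy of time
sat = [4, 3, 4, 4, 5, 4, 4, 4, 4, 5]   # general satisfaction
icl = 4                                # self-assessed improvement in clinical skills

trs, fas, tis, sats = mean(atr), mean(afa), mean(ati), mean(sat)
# The overall score weights ICl four times as heavily as each mean subscore
ovs = (trs + fas + tis + sats + 4 * icl) / 8
print(f"TrS={trs:.2f} FaS={fas:.2f} TiS={tis:.2f} SatS={sats:.2f} OvS={ovs:.2f}")
```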
The reliability of the InEv inventory was tested using Cronbach’s alpha test. A descriptive analysis of the data and the results was performed. Furthermore, the independent-samples t-test, analysis of variance, the Mann-Whitney U test, and the Kruskal-Wallis test were used to determine whether gender, year of study, or school affected the ICl, TrS, FaS, TiS, SatS and OvS results. Finally, differences in the distributions of Sat1 to Sat10 were tested with the Kruskal-Wallis test, in order to identify which scenarios “generally satisfied” the participants to a significantly greater extent.
MCQs
Tests with MCQs have been broadly used as assessment tools. A norm-referenced test was designed by the scientific committee in order to assess the participants. The 60 students in the full category were asked to answer a set of 22 complex case-based questions in one hour and 20 minutes. For each question, one answer was correct and three were wrong. The same test was distributed twice, once prior to the seminar (pre-test), and again after its completion (post-test).
The descriptive analysis evaluated the characteristics of the examinees, as well as the results of the pre- and post-tests. The related-samples Wilcoxon signed rank test was used to detect improvement or deterioration in the performance between the pre-test and the post-test, and the Mann-Whitney U test and Kruskal-Wallis test were used to examine whether gender, year of study, or school significantly affected performance in the pre- and post-test.
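As an illustration of the pre/post comparison described above (a sketch with synthetic paired percentage scores, not the study data), the related-samples Wilcoxon signed-rank test can be applied as follows.

```python
# Sketch of the pre/post MCQ comparison: related-samples Wilcoxon signed-rank
# test on paired percentage scores. The arrays are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pre = np.clip(rng.normal(70, 16, size=60), 0, 100)        # pre-test scores (%)
post = np.clip(pre + rng.normal(15, 8, size=60), 0, 100)   # post-test scores (%)

w_stat, p_value = stats.wilcoxon(pre, post)  # paired test, two-sided by default
print(f"W={w_stat:.1f}, P={p_value:.4g}")

# Subgroup effects (e.g., school) on the pre- or post-test scores would use
# the Mann-Whitney U or Kruskal-Wallis tests, as described in the text.
```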
In order to ensure the privacy and confidentiality of the delegates’ personal details, we obtained signed, written consent from them prior to the seminar, which stated that the individuals agreed to fill out these three questionnaires anonymously, without revealing any private information.
DREEM
The response rate was 176/230 (76.5%), with 53 responses (30.1%) from the full group and 123 responses (69.9%) from participants in the OB category. Cronbach’s alpha was ≥0.8 for the overall DREEM score and all subscales (except SSSP), indicating acceptable reliability of the inventory. The descriptive analysis presents the results for each question, as well as for the subscales and the overall score. The overall DREEM score was 144.61 (±28.05), which may be interpreted as more positive than negative. All subscale scores were in the upper range of the second-highest interpretation category. The mean scores of the extra questions were 3.66 (±0.91) for Q1 and 3.59 (±0.63) for Q2 (Table 3). Gender did not significantly affect the overall or subscale DREEM scores (P>0.05), and school significantly influenced only the SSSP score. However, the category of participation and the year of study significantly affected several scores: full participants scored higher than OB participants on the overall DREEM score (P=0.038), the SPL subscale (P=0.041), the SASP subscale (P=0.033), and Q1 and Q2 (P=0.026 and P=0.022, respectively). Among the years of study, fourth-year students scored significantly higher on the overall DREEM than sixth-year students, which was the only significant difference in overall scores between years of study, while students in their first, fourth, fifth, and sixth years were more willing to suggest the seminar to someone else (Q2). The year of study did not affect Q1 or SPL (Table 4). Finally, Q1 and Q2 responses correlated with the overall DREEM score (Kendall’s tau test), with correlation coefficients of rτ=0.401 (P<0.001) and rτ=0.287 (P<0.001), respectively.
InEv
A total of 56 out of 60 participants (31 males and 25 females) in the full category answered the questionnaire. The reliability of the InEv inventory was tested with Cronbach’s alpha and found to be acceptable (>0.8 for the entire inventory and the TrS, FaS, and TiS subsets, but not for SatS [0.781]). With 5 as the maximum score, the results were: TrS=4.24 (±0.55), FaS=4.21 (±0.70), TiS=3.94 (±0.82), SatS=4.08 (±0.53), ICl=4.16 (±0.73), and OvS=4.12 (±0.56) (Table 2). No statistically significant differences were found according to gender, year of study, or school (P>0.05). The independent-samples Kruskal-Wallis test revealed statistically significant differences (P<0.001) in general satisfaction among Sc1 through Sc10: Sc2 scored significantly lower than all the other scenarios except Sc3 and Sc4; Sc10 had the highest score and differed significantly from all the other scenarios except Sc1, Sc5, and Sc9; and Sc5 had the second-highest score, scoring significantly higher than Sc2, Sc3, and Sc4 (Fig. 2).
MCQs
A total of 60 full participants took the MCQs. The mean pre-test score was 70.08% (±16.23), while the mean post-test score was 85.53% (±13.40). The related-samples Wilcoxon signed-rank test showed that significant improvement took place (P<0.001). This improvement in performance was also confirmed in subsamples defined by gender, year of study, and school (P<0.05 for all tests comparing the pre-test and post-test results). Furthermore, gender and year of study did not affect performance on either the pre-test or the post-test, while the students of Aristotle University of Thessaloniki scored significantly higher than those of other universities on both the pre-test (P=0.049) and the post-test (P=0.016) (Table 5).
The ABCS3 seminar included the use of mannequin-based simulators, partial task trainers, and standardized patients. Throughout the entire process, the participants had the opportunity to experience the pressure of the emergency room and to develop a diagnostic and therapeutic strategy under these challenging conditions. In this seminar, students improved their clinical knowledge and their ability to take action by diagnosing and intervening. As demonstrated by the pre- and post-seminar MCQ tests, students performed better after the completion of the seminar, and this improvement was consistent across the subsamples examined.
The participants were attracted to this form of learning and concluded that the course would be substantially beneficial to their clinical skills in the future. The DREEM and InEv questionnaires presented an analytic and comprehensive perspective on the strong and weak points of the seminar. The DREEM score was 144.61/200, which may be interpreted as more positive than negative, and students ranked the first ABCS3 at the upper limit of the second-highest category of each of the five subscales. First-, fourth-, fifth-, and sixth-year students would recommend the seminar to their colleagues. This may imply that, on the one hand, junior students are attracted by a clinical environment that differs from the strictly theoretical framework of the preclinical first year, while, in contrast, senior students recognized the value of the seminar, as it offered a different educational experience with the novel aspect of simulation and its benefits, which are not present in the school curriculum. The recognition of the simulation element becomes more obvious when the difference in DREEM scores between the full and OB categories is considered. If the OB category is regarded as the control group (having attended the Le and watched the live-streamed Sc session, but not having participated interactively in the hands-on Sc or the ABCDE hands-on station), it scored significantly lower than the full category on most of the DREEM, Q1, and Q2 scores (including the overall DREEM score), suggesting that the hands-on simulation aspect of the seminar made a significant difference. The overall score of the InEv inventory (4.12±0.56/5), as well as the subscores (TrS=4.24, FaS=4.21, TiS=3.94, SatS=4.08, and ICl=4.16), also indicated the high quality of the seminar. A comparison of the general satisfaction scores among the scenarios (Sat1-Sat10) showed that Sc10 (orthopedics scenario) and Sc5 (cardiology scenario) scored especially well, whereas Sc2 (the peripheral and central venous catheterization station), which recorded the lowest score, needs to be updated and improved.
Taking into account that evaluations of this type rarely result in the alteration of educational structures, the primary goal of the evaluation procedures for this first attempt was to use the feedback to improve the structure of the second ABCS3 [12,13]. Moreover, we will improve the evaluation tools and procedures used for the second ABCS3 and then compare the new results to those of the first ABCS3. Finally, a further goal is to contribute to the integration of simulation techniques into the medical school curriculum, in order to improve its efficacy and the students’ perception of it.
In conclusion, the ABCS3, as a novel simulation-based seminar, exhibited encouraging results and could play a significant role in improving medical education in Greece by introducing SBL techniques that have been widely implemented in many other educational systems [14,15].

Conflict of interest

No potential conflict of interest relevant to this article was reported.

Audio recording of the abstract.
jeehp-13-13-abstract-recording.avi
Supplementary file 1. Data of Dundee Ready Educational Environment Measure score.
jeehp-13-13-supple1.xlsx
Supplementary file 2. Data of difference between pretest and posttest of multiple choice questions score.
jeehp-13-13-supple2.xlsx
Supplementary file 3. Data of an internally designed feedback questionnaire score.
jeehp-13-13-supple3.xlsx
Fig. 1.
The evaluation instruments used. The Dundee Ready Educational Environment Measure (DREEM) and the internal evaluation (InEv) were used to assess the educational environment, while the multiple-choice questions (MCQs) evaluated knowledge acquired in the seminar. a) 230 participants took part in the seminar, of whom 60 were in the full category (lectures and scenarios), while 170 formed the observer (OB) category (lectures only). b) The DREEM instrument was distributed to both full and OB participants after the completion of the seminar. A total of 53 (30.1%) full and 123 (69.9%) OB participants responded. c) The InEv instrument was distributed only to the full category (56 of 60 answered). d) The MCQs were distributed to the full participants, before (pre-test) and after (post-test) the completion of the seminar, with a 100% response rate (60 replies).
jeehp-13-13f1.gif
Fig. 2.
Simulation-based clinical scenarios Sc10 and Sc5 achieved the highest general satisfaction score on the InEv inventory, while Sc2 received the lowest.
jeehp-13-13f2.gif
Table 1.
Topics of the lecture (Le) and scenario (Sc) sessions of the seminar
Session Topic
Le1 Approach to the emergency room in a nutshell
Le2 Burns
Le3 Emergencies in cardiology
Le4 Chest trauma
Le5 Serious pelvic fractures
Le6 Pre-ABCDE lecture
Le7 Abdominal trauma and gastrointestinal emergencies
Le8 Emergencies in vascular surgery
Le9 Emergencies in neurosurgery
Le10 Emergencies in neurology
Le11 Hyperkalemia
Le12 Metabolic acidosis
Le13 Shock and sepsis
Le14 Diabetic ketoacidosis and hyperosmolar nonketotic state
Le15 Drug poisoning
Sc1 Trauma and suturing
Sc2 Peripheral and central venous catheterization
Sc3 Skull and spine trauma
Sc4 Diabetic ketoacidosis with hyperkalemia
Sc5 Acute myocardial infarction
Sc6 Pulmonary embolism
Sc7 Abdominal trauma
Sc8 Differential diagnosis of abdominal pain
Sc9 Allergic shock: anaphylaxis
Sc10 Orthopedic emergency: pelvic trauma
Table 2.
The structure and the results of the internal evaluation inventory
Question / Score Mean (SD)
 Adequacy of trainers in Sc1 (ATr1) 4.44 (0.72)
 Adequacy of trainers in Sc2 (ATr2) 3.56 (1.18)
 Adequacy of trainers in Sc3 (ATr3) 4.15 (0.90)
 Adequacy of trainers in Sc4 (ATr4) 4.02 (0.86)
 Adequacy of trainers in Sc5 (ATr5) 4.45 (0.72)
 Adequacy of trainers in Sc6 (ATr6) 4.09 (1.00)
 Adequacy of trainers in Sc7 (ATr7) 4.26 (0.97)
 Adequacy of trainers in Sc8 (ATr8) 4.24 (0.97)
 Adequacy of trainers in Sc9 (ATr9) 4.39 (0.76)
 Adequacy of trainers in Sc10 (ATr10) 4.80 (0.56)
Mean adequacy of trainers score (TrS) 4.24 (0.55)
 Adequacy of facilities/equipment in Sc1 (AFa1) 4.46 (0.88)
 Adequacy of facilities/equipment in Sc2 (AFa2) 4.17 (1.24)
 Adequacy of facilities/equipment in Sc3 (AFa3) 3.70 (1.16)
 Adequacy of facilities/equipment in Sc4 (AFa4) 4.00 (1.05)
 Adequacy of facilities/equipment in Sc5 (AFa5) 4.50 (0.72)
 Adequacy of facilities/equipment in Sc6 (AFa6) 4.00 (1.01)
 Adequacy of facilities/equipment in Sc7 (AFa7) 4.11 (0.95)
 Adequacy of facilities/equipment in Sc8 (AFa8) 4.09 (1.19)
 Adequacy of facilities/equipment in Sc9 (AFa9) 4.37 (0.78)
 Adequacy of facilities/equipment in Sc10 (AFa10) 4.67 (0.61)
Mean adequacy of facilities/equipment score (FaS) 4.21 (0.70)
 Adequacy of time in Sc1 (ATi1) 3.78 (1.28)
 Adequacy of time in Sc2 (ATi2) 3.35 (1.36)
 Adequacy of time in Sc3 (ATi3) 4.00 (1.08)
 Adequacy of time in Sc4 (ATi4) 3.94 (1.02)
 Adequacy of time in Sc5 (ATi5) 4.00 (1.05)
 Adequacy of time in Sc6 (ATi6) 4.17 (0.80)
 Adequacy of time in Sc7 (ATi7) 3.87 (1.01)
 Adequacy of time in Sc8 (ATi8) 3.92 (1.24)
 Adequacy of time in Sc9 (ATi9) 4.26 (0.81)
 Adequacy of time in Sc10 (ATi10) 4.15 (0.94)
Mean adequacy of time score (TiS) 3.94 (0.82)
 General satisfaction in Sc1 (Sat1) 4.29 (0.94)
 General satisfaction in Sc2 (Sat2) 3.34 (1.20)
 General satisfaction in Sc3 (Sat3) 3.86 (0.94)
 General satisfaction in Sc4 (Sat4) 3.77 (1.01)
 General satisfaction in Sc5 (Sat5) 4.45 (0.74)
 General satisfaction in Sc6 (Sat6) 4.05 (0.88)
 General satisfaction in Sc7 (Sat7) 4.14 (0.92)
 General satisfaction in Sc8 (Sat8) 4.05 (0.98)
 General satisfaction in Sc9 (Sat9) 4.23 (0.79)
 General satisfaction in Sc10 (Sat10) 4.68 (0.66)
General satisfaction score (SatS) 4.08 (0.53)
Self-assessment of improvement of clinical skills (ICl) 4.16 (0.73)
Overall score (OvS) 4.12 (0.56)

Each question used a five-point Likert scale, with 1 as the minimum score and 5 as the maximum. The overall score was calculated as (TrS + FaS + TiS + SatS + 4 × ICl)/8. Reliability was tested with Cronbach's alpha, with results of 0.951 for the OvS, 0.829 for the TrS, 0.895 for the FaS, 0.920 for the TiS, and 0.781 for the SatS.

SD, standard deviation.

Table 3.
Results of the Dundee Ready Educational Environment Measure (DREEM) inventory, Q1 and Q2
(Subscale) and Question Mean (SD)
1 (SPL). I am encouraged to participate during teaching sessions. 2.69 (1.10)
2 (SPT). The course organizers are knowledgeable. 3.61 (0.59)
3 (SSSP). There is a good support system for registrars who get stressed. 2.27 (1.07)
4 (SSSP). I am too tired to enjoy the course. 2.36 (1.24)
5 (SASP). Learning strategies which worked for me before continue to work for me now. 2.61 (0.94)
6 (SPT). The course organizers espouse a patient centered approach to consulting. 3.10 (0.95)
7 (SPL). The teaching is often stimulating. 3.09 (0.81)
8 (SPT). The course organizers ridicule the registrars. 2.63 (1.71)
9 (SPT). The course organizers are authoritarian. 2.60 (1.60)
10 (SASP). I am confident about my passing this year. 3.19 (0.87)
11 (SPA). The atmosphere is relaxed during consultation teaching. 3.27 (0.83)
12 (SPA). This course is well timetabled. 2.67 (1.12)
13 (SPL). The teaching is registrar centered. 3.01 (0.85)
14 (SSSP). I am rarely bored on this course. 2.63 (0.99)
15 (SSSP). I have good friends on this course. 2.79 (1.08)
16 (SPL). The teaching helps to develop my competence. 3.11 (0.82)
17 (SPA). Cheating is a problem on this course. 2.46 (1.51)
18 (SPT). The course organizers have good communication skills with patients. 3.25 (0.87)
19 (SSSP). My social life is good. 3.32 (0.79)
20 (SPL). The teaching is well focused. 3.07 (0.83)
21 (SPL). I feel I am being well prepared for my profession. 2.83 (0.82)
22 (SASP). The teaching helps to develop my confidence. 3.24 (0.90)
23 (SPA). The atmosphere is relaxed during lectures. 3.27 (0.83)
24 (SPL). The teaching time is put to good use. 2.78 (0.97)
25 (SPL). The teaching over emphasizes factual learning. 2.54 (1.04)
26 (SASP). Last year's work has been a good preparation for this year's work. 2.74 (0.93)
27 (SASP). I am able to memorize all I need. 2.25 (1.18)
28 (SSSP). I seldom feel lonely. 2.64 (1.22)
29 (SPT). The course organizers are good at providing feedback to registrars. 2.43 (0.98)
30 (SPA). There are opportunities for me to develop interpersonal skills. 2.88 (0.84)
31 (SASP). I have learnt a lot about empathy in my profession. 2.91 (0.91)
32 (SPT). The course organizers provide constructive criticism here. 2.82 (0.90)
33 (SPA). I feel comfortable in teaching sessions socially. 3.32 (0.80)
34 (SPA). The atmosphere is relaxed during seminars / tutorials. 2.76 (0.90)
35 (SPA). I find the experience disappointing. 2.64 (1.57)
36 (SPA). I am able to concentrate well. 2.99 (0.89)
37 (SPT). The course organizers give clear examples. 3.20 (0.79)
38 (SPL). I am clear about the learning objectives of the course. 3.15 (0.82)
39 (SPT). The course organizers get angry in teaching sessions. 2.81 (1.58)
40 (SPT). The course organizers are well prepared for their teaching sessions. 3.37 (0.74)
41 (SASP). My problem solving skills are being well developed here. 2.76 (0.93)
42 (SPA). The enjoyment outweighs the stress of the course. 2.99 (0.84)
43 (SPA). The atmosphere motivates me as a learner. 3.15 (0.77)
44 (SPL). The teaching encourages me to be an active learner. 3.03 (0.84)
45 (SASP). Much of what I have to learn seems relevant to a career in healthcare. 3.29 (0.83)
46 (SSSP). My accommodation is pleasant. 3.36 (0.81)
47 (SPL). Long term learning is emphasized over short term learning. 2.72 (1.02)
48 (SPL). The teaching is too teacher centered. 2.43 (1.22)
49 (SPT). I feel able to ask the questions I want. 2.88 (1.07)
50 (SPA). The registrars irritate the course organizers. 2.70 (1.33)
Students' perceptions of learning (SPL) 34.45 (7.31)
Students' perceptions of teachers (SPT) 32.70 (7.65)
Students' academic self-perceptions (SASP) 22.99 (4.76)
Students' perceptions of atmosphere (SPA) 35.10 (7.44)
Students' social self-perceptions (SSSP) 19.36 (4.32)
Overall score 144.61 (28.05)
Q1. Do you think that the course will prove itself beneficial to your clinical skills? 3.66 (0.91)
Q2. Would you suggest the course to another student? 3.59 (0.63)

Reliability was measured with Cronbach's alpha, with results of 0.95 for the overall score, 0.88 for SPL, 0.84 for SPT, 0.80 for SASP, 0.83 for SPA, and 0.69 for SSSP. The maximum score of each question (including Q1 and Q2) is 4, while the maximum scores of the SPL, SPT, SASP, SPA, SSSP, and overall scales were 48, 44, 32, 48, 28, and 200, respectively.

SD, standard deviation.

Table 4.
DREEM, Q1, and Q2 scores and the impact of gender, year of study, school, and category of participation
Variable N (%) Mean (SD)
Overall SPL SPT SASP SPA SSSP Q1 Q2
Overall 176 (76.5)a) 144.61 (28.05) 34.45 (7.31) 32.70 (7.65) 22.99 (4.76) 35.10 (7.44) 19.36 (4.32) 3.66 (0.91) 3.59 (0.63)
Maximum 200 48 44 32 48 28 5 5
Interpretation (limits of score category) More positive than negative (101-150) A more positive perception (25-36) Moving in the right direction (23-33) Feeling more on the positive side (17-24) A more positive atmosphere (25-36) Not too bad (15-21)
Cronbach's alpha 0.95 0.88 0.84 0.79 0.83 0.69
Gender
 Male 86 (48.9%)b) 146.76 (26.41) 34.30 (7.03) 33.15 (7.27) 23.35 (4.46) 36.24 (6.84) 19.71 (4.02) 3.71 (0.91) 3.62 (0.64)
 Female 90 (51.1%)b) 142.57 (29.54) 34.60 (7.61) 32.28 (8.00) 22.66 (5.04) 34.00 (7.86) 19.03 (4.58) 3.61 (0.92) 3.57 (0.62)
 P-value 0.597 0.353 0.619 0.662 0.081 0.912 0.503 0.321
Year of study
 1st 48 (27.3%)b) 146.33 (24.38) 35.58 (6.36) 33.08 (7.67) 22.50 (4.11) 35.21 (6.19) 19.96 (3.90) 3.54 (0.94) 3.85 (0.46)
 2nd 28 (15.9%)b) 136.86 (32.94) 32.36 (8.02) 30.79 (8.57) 22.07 (5.78) 34.14 (9.83) 17.50 (4.88) 3.29 (0.94) 3.29 (0.66)
 3rd 27 (15.3%)b) 143.37 (27.75) 33.67 (7.15) 32.59 (6.20) 24.07 (4.51) 33.93 (6.89) 19.11 (4.85) 3.85 (0.82) 3.18 (0.62)
 4th 38 (21.6%)b) 153.08 (31.40) 35.71 (8.85) 34.95 (7.96) 23.82 (5.65) 37.34 (7.70) 21.26 (3.67) 3.92 (0.78) 3.63 (0.67)
 5th 23 (13.1%)b) 149.17 (20.12) 35.00 (5.99) 33.82 (6.66) 24.09 (2.97) 37.17 (5.46) 19.09 (3.88) 3.65 (1.07) 3.70 (0.56)
 6th 12 (6.8%)b) 123.08 (20.11) 31.58 (5.48) 26.67 (5.96) 20.00 (3.62) 28.42 (5.35) 16.42 (3.12) 3.75 (0.75) 3.83 (0.39)
 P-value 0.013 0.097 0.009 0.031 0.004 0.002 0.093 <0.001
School
 A.U.Th. 144 (81.8%)b) 143.15 (29.04) 33.98 (7.63) 32.42 (7.73) 22.78 (4.92) 34.90 (7.77) 19.08 (4.46) 3.67 (0.91) 3.58 (0.65)
 Other 32 (18.2%)b) 151.22 (22.32) 36.59 (5.27) 34.00 (7.23) 23.97 (3.89) 36.00 (5.79) 20.66 (3.37) 3.59 (0.91) 3.62 (0.49)
 P-value 0.141 0.075 0.328 0.259 0.580 0.034 0.609 0.949
Category of participation
 Full 53 (30.1%)b) 151.98 (25.87) 36.47 (6.67) 34.57 (6.58) 24.23 (4.47) 36.55 (7.02) 20.17 (4.13) 3.89 (0.87) 3.75 (0.48)
 OB 123 (69.9%)b) 141.44 (28.46) 33.59 (7.43) 31.90 (7.95) 22.46 (4.81) 34.47 (7.56) 19.02 (4.37) 3.56 (0.92) 3.52 (0.67)
 P-value 0.038 0.041 0.058 0.033 0.181 0.127 0.026 0.022

All tests conducted after the appropriate assumptions were confirmed.

a) Refers to the target (the 230 participants in the seminar).

b) Refers to the sample that answered (176 students).

A.U.Th., Aristotle University of Thessaloniki; SPL, students’ perceptions of learning; SPT, students’ perceptions of teachers; SASP, students’ academic self-perceptions; SPA, students’ perceptions of atmosphere; SSSP, students’ social self-perceptions; OB, observer.

Table 5.
Pre-test and post-test results, overall and by genders, years of study, and school
Variable N (%) Pre-test mean (SD) Post-test mean (SD) P-value
Overall 60 (100%)a) 70.08 (16.23) 85.53 (13.40) <0.001
Gender
 Male 30 (50%)b) 72.42 (17.25) 87.88 (13.36) <0.001
 Female 30 (50%)b) 67.73 (15.07) 83.18 (13.25) <0.001
 P-value 0.194 0.112
Year of study
 4th 29 (48.3%)b) 67.71 (15.90) 85.11 (14.27) <0.001
 5th 21 (35%)b) 71.21 (14.61) 84.85 (11.97) <0.001
 6th 10 (16.7%)b) 74.55 (20.57) 88.18 (14.72) 0.007
 P-value 0.493 0.554
School
 A.U.Th. 50 (83.3%)b) 71.91 (15.52) 87.64 (12.15) 0.001
 Other 10 (16.7%)b) 60.91 (17.43) 75.00 (15.04) 0.007
 P-value 0.049 0.016

The score (%) was calculated as (c/22) × 100%, with c referring to the number of correct answers. The maximum score was 100%, while the minimum was 0%. The related-samples Wilcoxon signed-rank test was used to compare pre- and post-test results. All tests were conducted after the appropriate assumptions were confirmed.

a) Refers to the target (the 60 participants of the full category).

b) Refers to the sample that answered (60 students).

A.U.Th., Aristotle University of Thessaloniki; SD, standard deviation.
