A virtual point-of-care ultrasound (POCUS) education program was initiated to introduce handheld ultrasound technology to Georgetown Public Hospital Corporation in Guyana, a low-resource setting. We studied ultrasound competency and participant satisfaction in a cohort of 20 physicians-in-training in the urology clinic. The program consisted of a training phase, in which participants learned to use the Butterfly iQ handheld ultrasound device, and a mentored implementation phase, in which they applied their skills in the clinic. Assessment was conducted through written exams and an objective structured clinical examination (OSCE). Fourteen students completed the program. Mean written exam scores were 3.36/5 in the training phase and 3.57/5 in the mentored implementation phase, and all students earned 100% on the OSCE. Students expressed satisfaction with the program. Our POCUS education program demonstrates the potential to teach clinical skills in low-resource settings and the value of virtual global health partnerships in advancing POCUS and minimally invasive diagnostics.
This study aimed to compare the knowledge and interpretation ability of ChatGPT, an artificial intelligence-based large language model, with those of medical students in Korea by administering a parasitology examination to both ChatGPT and medical students. The examination consisted of 79 items and was administered to ChatGPT on January 1, 2023. The examination results were analyzed in terms of ChatGPT’s overall performance score, its correct answer rate by the items’ knowledge level, and the acceptability of its explanations of the items. ChatGPT’s performance was lower than that of the medical students, and ChatGPT’s correct answer rate was not related to the items’ knowledge level. However, there was a relationship between acceptable explanations and correct answers. In conclusion, ChatGPT’s knowledge and interpretation ability for this parasitology examination were not yet comparable to those of medical students in Korea.
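The reported relationship between acceptable explanations and correct answers is, in essence, a 2×2 contingency analysis. A minimal sketch in Python, with invented counts (the study's item-level data are not reproduced here) and Fisher's exact test chosen for illustration rather than taken from the paper, could look like this:

```python
# Hypothetical sketch: testing whether acceptable explanations are
# associated with correct answers. The counts below are invented for
# illustration (they sum to 79 items) and are not the study's data.
from scipy.stats import fisher_exact

#        correct  incorrect
table = [[40, 6],    # explanation acceptable
         [9, 24]]    # explanation not acceptable

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio={odds_ratio:.2f}, P={p_value:.4f}")
```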
Purpose This study aimed to identify factors that have been studied for their associations with National Licensing Examination (ENAM) scores in Peru.
Methods A search was conducted of literature databases and registers, including EMBASE, SciELO, Web of Science, MEDLINE, Peru’s National Register of Research Work, and Google Scholar. The following key terms were used: “ENAM” and “associated factors.” Studies in English and Spanish were included. The quality of the included studies was evaluated using the Medical Education Research Study Quality Instrument (MERSQI).
Results In total, 38,500 participants were enrolled in 12 studies. Most studies (11/12) were cross-sectional; one was a case-control study. Three studies were published in peer-reviewed journals. The mean MERSQI score was 10.33. Better performance on the ENAM was associated with a higher grade point average (GPA) (n=8), an internship setting in EsSalud (n=4), and regular academic status (n=3). Other factors showed associations in individual studies, such as medical school, internship setting, age, gender, socioeconomic status, simulation tests, study resources, preparation time, learning styles, study techniques, test anxiety, and self-regulated learning strategies.
Conclusion Performance on the ENAM is a multifactorial phenomenon. Our model gives students a locus of control over what they can do to improve their scores (e.g., implement self-regulated learning strategies), and gives faculty, health policymakers, and managers a framework for improving ENAM scores (e.g., design remediation programs to improve GPA and integrate anxiety-management courses into the curriculum).
Purpose This study aimed to apply the yes/no Angoff and Hofstee methods to actual data from the 2022 Korean Medical Licensing Examination (KMLE) written examination to estimate cut scores for the written KMLE.
Methods Fourteen panelists gathered to derive the cut score of the 86th KMLE written examination using the yes/no Angoff method. The panelists reviewed the items individually before the meeting and shared their respective understandings of the minimally competent physician. The standard-setting process was conducted in 5 rounds over a total of 800 minutes. In addition, 2 rounds of the Hofstee method were conducted, one before starting the standard-setting process and one after the second round of yes/no Angoff.
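For readers unfamiliar with the yes/no Angoff procedure, the cut score can be computed as the sum, over items, of the proportion of panelists judging that a minimally competent examinee would answer correctly. The following is a minimal sketch with simulated ratings and an assumed 1 point per item, not the study's code:

```python
# Yes/no Angoff sketch: ratings[p][i] = 1 if panelist p judges that a
# minimally competent examinee would answer item i correctly, else 0.
# The ratings are simulated placeholders; 1 point per item is assumed.
import numpy as np

rng = np.random.default_rng(0)
n_panelists, n_items = 14, 320   # 320 = maximum score cited in the Results
ratings = rng.integers(0, 2, size=(n_panelists, n_items))

# Each item contributes the proportion of "yes" judgments;
# the cut score is the sum of those proportions.
cut_score = ratings.mean(axis=0).sum()
print(f"cut score = {cut_score:.1f} / {n_items}")
```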
Results For yes/no Angoff, the panel’s opinions gradually converged as the rounds progressed, reaching a cut score of 198 points with a final passing rate of 95.1%. The Hofstee cut score was 208 points out of a maximum of 320, with a passing rate of 92.1%, in the first round, and 204 points, with a passing rate of 93.3%, in the second round.
Conclusion The difference between the cut scores obtained through the yes/no Angoff and Hofstee methods did not exceed 2 percentage points, and both were within the range of cut scores from previous studies. In both methods, disagreement among the panelists decreased as rounds were repeated. Overall, our findings support the acceptability of the resulting cut scores and the possibility of using either method independently.
Purpose Undertaking a standard-setting exercise is a common method for setting pass/fail cut scores for high-stakes examinations. The recently introduced equal Z standard-setting method (EZ method) has been found to be a valid and effective alternative to the commonly used Angoff and Hofstee methods and their variants. The current study aimed to estimate the minimum number of panelists required to obtain acceptable and reliable cut scores using the EZ method.
Methods The primary data were extracted from 31 panelists who used the EZ method to set cut scores for a 12-station final objective structured clinical examination (OSCE) at a medical school in Taiwan. For this study, a new data set composed of 1,000 random samples for each panel size, ranging from 5 to 25 panelists, was established and analyzed. Analysis of variance was performed to measure differences in the cut scores set by the sampled groups across all sizes within each station.
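The resampling design described above can be mimicked in a few lines. In this sketch, the per-panelist cut scores are random placeholders (the EZ method's actual per-station computations are not reproduced), and 1,000 panels of each size are drawn to show how cut-score variability shrinks as panels grow:

```python
# Resampling sketch: draw 1,000 random panels of each size from a
# 31-member panel and summarize the spread of the resulting cut scores.
# The per-panelist cut scores below are invented placeholders.
import numpy as np

rng = np.random.default_rng(42)
panelist_cuts = rng.normal(60, 8, size=31)

for size in range(5, 26, 5):
    cuts = [rng.choice(panelist_cuts, size=size, replace=False).mean()
            for _ in range(1000)]
    print(f"panel size {size:2d}: mean cut={np.mean(cuts):.1f}, "
          f"SD={np.std(cuts):.2f}")
```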
Results On average, a panel of 10 or more experts yielded cut scores with confidence greater than or equal to 90%, and a panel of 15 experts yielded cut scores with confidence greater than or equal to 95%. No significant differences in cut scores associated with panel size were identified for panels of 5 or more experts.
Conclusion The EZ method was found to be valid and feasible. Less than an hour was required for 12 panelists to assess 12 OSCE stations. Calculating the cut scores required only basic statistical skills.
Purpose This study used generalizability theory (GT) to investigate whether reliability remained acceptable when the number of cases in the objective structured clinical examination (OSCE) decreased from 12 to 8.
Methods This psychometric study analyzed data from an OSCE administered to 439 fourth-year medical students in the Busan and Gyeongnam areas of South Korea from July 12 to 15, 2021. The generalizability study (G-study) considered 3 facets, namely students (p), cases (c), and items (i), and used a p×(i:c) design because items were nested within cases. The acceptable generalizability (G) coefficient was set at 0.70. The G-study and decision study (D-study) were performed using G String IV ver. 6.3.8 (Papawork, Hamilton, ON, Canada).
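As a point of reference for the D-study, the relative G coefficient of a p×(i:c) design can be computed from the estimated variance components as σ²(p) / [σ²(p) + σ²(pc)/n_c + σ²(pi:c)/(n_c·n_i)]. The sketch below uses placeholder variance components (not the study's estimates) to show how reliability responds to the numbers of cases and items per case:

```python
# D-study sketch for a p x (i:c) design. The variance components below
# are illustrative placeholders, not the study's estimates.
def g_coefficient(var_p, var_pc, var_pic, n_cases, n_items):
    """Relative G coefficient for the p x (i:c) design."""
    rel_error = var_pc / n_cases + var_pic / (n_cases * n_items)
    return var_p / (var_p + rel_error)

VAR_P, VAR_PC, VAR_PIC = 2.0, 1.5, 40.0
for n_cases in (8, 9, 12):
    for n_items in (15, 18, 21):
        g = g_coefficient(VAR_P, VAR_PC, VAR_PIC, n_cases, n_items)
        print(f"{n_cases} cases x {n_items} items/case: G={g:.2f}")
```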
Results All G coefficients were above 0.70 except that for July 14 (0.69). The major sources of variance were items nested in cases (i:c), accounting for 51.34% to 57.70%, and residual error (pi:c), accounting for 39.55% to 43.26%. The variance component (VC) for cases was negligible, ranging from 0% to 2.03%.
Conclusion Although the number of cases decreased in the 2021 Busan and Gyeongnam OSCE, reliability remained acceptable. In the D-study, reliability was maintained at 0.70 or higher with more than 21 items per case across 8 cases or more than 18 items per case across 9 cases. According to the G-study, however, increasing the number of items nested in cases, rather than the number of cases, could further improve reliability. The consortium needs to maintain a case bank with various items to implement reliable blueprinting combinations for the OSCE.
Purpose The percent Angoff (PA) method has been recommended as a reliable way to set the cutoff score of the Korean Medical Licensing Examination (KMLE) instead of a fixed cut point of 60%. The yes/no Angoff (YNA) method, which makes judgments easier for panelists, can be considered as an alternative because the KMLE has many items to evaluate. This study aimed to compare the cutoff scores and reliability obtained with the PA and YNA standard-setting methods in the KMLE.
Methods The materials were open-access PA data from the KMLE. The PA data were converted to YNA data in 5 categories, in which the probability thresholds for a “yes” decision by panelists were 50%, 60%, 70%, 80%, and 90%. SPSS was used for descriptive analysis and G String for generalizability analysis.
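The PA-to-YNA conversion described above is a simple thresholding step: a percent Angoff rating becomes "yes" when it reaches the chosen probability. A brief sketch with invented ratings (not the open-access data set itself) follows:

```python
# PA-to-YNA conversion sketch: dichotomize percent Angoff ratings at each
# candidate threshold, then compute the resulting cut score.
# Ratings are invented; 1 point per item is assumed.
import numpy as np

pa_ratings = np.array([[0.45, 0.65, 0.80, 0.95],
                       [0.55, 0.60, 0.75, 0.90]])  # panelists x items

for threshold in (0.5, 0.6, 0.7, 0.8, 0.9):
    yna = (pa_ratings >= threshold).astype(int)
    cut_score = yna.mean(axis=0).sum()
    print(f"threshold {threshold:.0%}: YNA cut score = {cut_score:.2f}")
```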
Results The PA method and the YNA method with a 60% threshold for “yes” estimated similar cutoff scores, which were deemed acceptable based on the results of the Hofstee method. In descending order, the highest reliability coefficients estimated by the generalizability test were those of the PA method and the YNA method with thresholds of 70%, 80%, 60%, and 50% for deciding “yes.” The panelists’ specialty was the main source of error variance, and the size of the error was similar regardless of the standard-setting method.
Conclusion These results showed that the PA method was more reliable than the YNA method for estimating the cutoff score of the KMLE. However, the YNA method with a 60% probability threshold for deciding “yes” can also be used as a substitute for the PA method in estimating the cutoff score of the KMLE.
Purpose Setting standards is critical in the health professions, but appropriate standard-setting methods are not always applied to cut scores in performance assessments. The aim of this study was to compare cut scores when the standard-setting approach for an objective structured clinical examination (OSCE) in medical school was changed from the norm-referenced method to the borderline group method (BGM) and borderline regression method (BRM).
Methods This was an explorative study modeling the implementation of the BGM and BRM. A total of 107 fourth-year medical students attended the OSCE on July 15, 2021, which comprised 7 stations with standardized patient (SP) encounters and 1 station involving skills performed on a manikin. Thirty-two physician examiners evaluated performance by completing a checklist and global rating scales.
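Of the two borderline approaches, the BRM lends itself to a compact illustration: the checklist score is regressed on the global rating, and the cut score is the predicted checklist score at the borderline rating. Below is a minimal sketch with invented station data; the rating scale and borderline level are assumptions, not taken from the study:

```python
# Borderline regression method (BRM) sketch with invented data.
# Global ratings are assumed to run 1-5, with 2 treated as "borderline".
import numpy as np

global_ratings = np.array([1, 2, 2, 3, 3, 3, 4, 4, 5, 5])
checklist_scores = np.array([40, 48, 52, 60, 63, 65, 72, 75, 85, 88])

slope, intercept = np.polyfit(global_ratings, checklist_scores, 1)
BORDERLINE_RATING = 2
cut_score = slope * BORDERLINE_RATING + intercept
print(f"BRM cut score = {cut_score:.1f}")
```

By contrast, the BGM simply takes the mean (or median) checklist score of the examinees rated borderline.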
Results The cut score of the norm-referenced method was lower than those of the BGM (P<0.01) and BRM (P<0.02). There was no significant difference between the BGM and BRM cut scores (P=0.40). The station with the highest standard deviation and the highest proportion of borderline examinees showed the largest difference in cut scores across standard-setting methods.
Conclusion Cut scores prefixed by the norm-referenced method, which considers neither station content nor examinee performance, can vary in appropriateness with station difficulty and content, undermining standard-setting decisions. If there is adequate consensus on the criteria for the borderline group, the BRM could be applied as a practical and defensible method to determine the cut score for an OSCE.
Purpose Improving physicians’ critical thinking abilities could have meaningful impacts on various aspects of routine medical practice, such as choosing treatment plans, making an accurate diagnosis, and reducing medical errors. The present study aimed to measure the effects of a curriculum integrating critical thinking on medical students’ skills at Tehran University of Medical Sciences, Iran.
Methods A 1-group pre-test, post-test quasi-experimental design was used to assess medical students’ critical thinking abilities as they progressed from the first week of medical school to the middle of the third year of the undergraduate medical curriculum. Fifty-six participants completed the California Critical Thinking Skills Test twice between 2016 and 2019.
Results Medical students completed the California Critical Thinking Skills Test the week before their first educational session; the post-test was conducted 6 weeks after the 2.5-year program. Of the 91 medical students (mean age, 20±2.8 years) who initially participated in the study, 56 completed both the pre- and post-tests, for a response rate of 61.5%. The analysis subscale showed the largest change. Significant changes were found in the analysis (P=0.03), evaluation (P=0.04), and inductive reasoning (P<0.0001) subscales, but not in the inference (P=0.28) and deductive reasoning (P=0.42) subscales. There was no significant difference according to gender (P=0.77).
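For a pre/post design of this kind, the subscale comparisons reduce to paired-samples tests. A hedged sketch follows, with invented scores for 56 students; the abstract does not specify the exact statistical test, so the paired t-test here is an assumption:

```python
# Paired pre/post comparison sketch with invented subscale scores for
# 56 students; not the study's data or necessarily its exact test.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(1)
pre = rng.normal(10, 2, size=56)             # pre-test analysis subscale
post = pre + rng.normal(0.8, 1.5, size=56)   # post-test with a small gain

t_stat, p_value = ttest_rel(post, pre)
print(f"t={t_stat:.2f}, P={p_value:.4f}")
```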
Conclusion The findings of this study show that a critical thinking program had a substantial effect on medical students’ analysis, inductive reasoning, and evaluation skills, but negligible effects on their inference and deductive reasoning scores.
Purpose This study investigated pharmacy students’ perceptions of various aspects of virtual objective structured clinical examinations (vOSCEs) conducted during the coronavirus disease 2019 pandemic in Malaysia.
Methods This cross-sectional study involved third- and fourth-year pharmacy students at the International Islamic University Malaysia. A validated self-administered questionnaire was distributed to students who had taken a vOSCE 1 week earlier.
Results Of the 253 students who were approached, 231 (91.3%) completed the questionnaire. More than 75% of the participants agreed that the instructions and preparations were clear and helpful in familiarizing them with the vOSCE flow, and 53.2% of the respondents were satisfied with the flow and conduct of the vOSCE. However, only approximately one-third of the respondents believed that the tasks provided in the vOSCE were more convenient, less stressful, and easier to perform than those in the conventional OSCE. Furthermore, 49.7% of the students preferred not to have a vOSCE in the future once conducting a conventional OSCE becomes feasible again. Internet connection problems hindering vOSCE performance were reported by 51.9% of the participants. Students who were interested in clinical pharmacy courses were more satisfied than other students with the preparation and operation of the vOSCE, the faculty support, and the allocated time.
Conclusion Students were satisfied with the organization and operation of the vOSCE. However, they still preferred the conventional OSCE over the vOSCE. These findings might indicate a further need to expose students to telehealthcare models.
Purpose Pediatric clerkships that utilize off-campus clinical sites ensure clinical comparability by requiring completion of patient-focused tasks. Some tasks may not be attainable (especially off-campus); thus, they are not assigned. The objective of this study was to evaluate the feasibility of providing a voluntary assignment list to third-year medical students in their pediatric clerkship.
Methods This is a retrospective single-center cross-sectional analysis of voluntary assignment completion during the 2019–2020 academic year. Third-year medical students were provided a voluntary assignment list (observe a procedure, use an interpreter phone to obtain a pediatric history, ask a preceptor to critique a clinical note, and follow up on a patient after the rotation ends). Descriptive statistics were used to assess the timing and distribution of voluntary assignment completion.
Results In total, 132 subjects (77 on the main campus, 55 off-campus) were included. Eighteen (13.6%) main-campus and 16 (12.1%) off-campus students completed at least 1 voluntary assignment. The following voluntary assignments were completed: observe a procedure (15, 11.4%), use an interpreter phone (26, 19.7%), ask a preceptor to critique a clinical note (12, 9.1%), and follow up on a patient after the rotation ends (7, 5.3%). Off-campus students completed at least 1 assignment more often (29.1%) than main-campus students (23.4%).
Conclusion Our clerkship values specific patient-focused tasks that may enhance student development but are not attainable at all clinical sites. When provided with a voluntary assignment list, 34 out of 132 students (25.8%) completed at least 1 assignment. Clerkships that utilize off-campus sites should consider this approach to optimize the pediatric educational experience.
Purpose Summative evaluation forms assessing a student’s clinical performance are often completed by a faculty preceptor at the end of a clinical training experience. At our institution, despite the use of an electronic system, timeliness of completion has been suboptimal, potentially limiting our ability to monitor students’ progress. The aim of the present study was to determine whether a student-directed approach to summative evaluation form collection at the end of a pediatrics clerkship would enhance timeliness of completion for third-year medical students.
Methods This was a pre- and post-intervention educational quality improvement project involving 156 third-year medical students (82 pre-intervention, 74 post-intervention) completing their 4-week pediatric clerkship at Penn State College of Medicine. Using REDCap (Research Electronic Data Capture) informatics support, students were encouraged to direct the solicitation of their own evaluation forms. The Wilcoxon rank-sum test was applied to compare the pre-intervention (May 1, 2017 to March 2, 2018) and post-intervention (April 2, 2018 to December 21, 2018) percentages of forms completed before the rotation midpoint.
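The Wilcoxon rank-sum test named above compares two independent samples; SciPy exposes it as ranksums. A sketch with invented per-rotation completion percentages (the grouping into rotations and the sample counts are assumptions for illustration):

```python
# Wilcoxon rank-sum sketch comparing pre- and post-intervention
# percentages of forms completed before the rotation midpoint.
# The per-rotation percentages below are invented placeholders.
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(7)
pre = rng.normal(10, 5, size=11).clip(0, 100)    # pre-intervention rotations
post = rng.normal(40, 10, size=9).clip(0, 100)   # post-intervention rotations

statistic, p_value = ranksums(pre, post)
print(f"rank-sum statistic={statistic:.2f}, P={p_value:.4f}")
```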
Results In total, 740 evaluation forms were submitted during the pre-intervention phase and 517 during the post-intervention phase. The percentage of forms completed before the rotation midpoint increased after implementing student-directed solicitation (9.6% vs. 39.7%, P<0.05).
Conclusion Our clerkship relies on subjective summative evaluations to track students’ progress, deploy improvement strategies, and determine criteria for advancement; however, our preceptors struggled with timely submission. Allowing students to direct the solicitation of evaluation forms enhanced the timeliness of completion and should be considered in clerkships facing similar challenges.
Student-led peer-assisted mock objective structured clinical examinations (MOSCEs) have been used in various settings to help students prepare for subsequent higher-stakes, faculty-run OSCEs. MOSCE participants have generally valued feedback from peers and reported benefits to learning. Our study investigated whether participation in a peer-assisted MOSCE affected subsequent OSCE performance. To determine whether mean OSCE scores differed depending on whether medical students participated in the MOSCE, we conducted a between-subjects analysis of variance, with cohort (2016 vs. 2017) and MOSCE participation (MOSCE vs. no MOSCE) as independent variables and the mean OSCE score as the dependent variable. Participation in the MOSCE had no influence on mean OSCE scores (P=0.19). There was a significant correlation between mean MOSCE scores and mean OSCE scores (Pearson r=0.52, P<0.001). Although previous studies described self-reported benefits from participation in student-led MOSCEs, participation was not associated with objective performance benefits in this study.
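The analysis just described combines a two-way between-subjects ANOVA with a Pearson correlation. A hedged sketch using statsmodels and SciPy on simulated records (not the study's data; column names are illustrative) could look like this:

```python
# Two-way between-subjects ANOVA (cohort x MOSCE participation) plus a
# Pearson correlation of MOSCE and OSCE scores, on simulated records.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "cohort": rng.choice(["2016", "2017"], size=120),
    "mosce": rng.choice(["yes", "no"], size=120),
    "osce_score": rng.normal(70, 8, size=120),
})

model = ols("osce_score ~ C(cohort) * C(mosce)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Correlation for the subset who took the MOSCE (simulated scores).
mosce_scores = rng.normal(65, 9, size=60)
osce_scores = 0.5 * mosce_scores + rng.normal(35, 6, size=60)
r, p = pearsonr(mosce_scores, osce_scores)
print(f"Pearson r={r:.2f}, P={p:.4f}")
```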
Purpose It is assumed that case-based questions require higher-order cognitive processing, whereas non-case-based questions require lower-order cognitive processing. In this study, we investigated the extent to which case-based and non-case-based questions followed this assumption, based on Bloom’s taxonomy.
Methods In this study, 4,800 questions from the Interuniversity Progress Test of Medicine were classified according to whether they were case-based and the level of Bloom’s taxonomy they involved. Lower-order questions require students to remember and/or have a basic understanding of knowledge; higher-order questions require students to apply, analyze, and/or evaluate. The phi coefficient was calculated to investigate the relationship between whether questions were case-based and the required level of cognitive processing.
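For a 2×2 table, the phi coefficient can be computed directly from the cell counts as (ad − bc)/√((a+b)(c+d)(a+c)(b+d)). In the sketch below, the case-based/non-case-based split is invented; only the row percentages are chosen to match those reported in the Results (98.1% and 33.7% of 4,800 items), so the counts are not the study's actual tabulation:

```python
# Phi coefficient sketch for item format vs. required cognitive level.
# Counts are reconstructed from the reported percentages for illustration.
import numpy as np

#                  higher-order  lower-order
table = np.array([[1030,   20],     # case-based
                  [1264, 2486]])    # non-case-based

a, b = table[0]
c, d = table[1]
phi = (a * d - b * c) / np.sqrt(
    float((a + b) * (c + d) * (a + c) * (b + d)))
print(f"phi = {phi:.2f}")   # ~0.53 with these illustrative counts
```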
Results Our results demonstrated that 98.1% of case-based questions required higher-level cognitive processing, compared with 33.7% of non-case-based questions. The phi coefficient demonstrated a significant but moderate correlation between the presence of a patient case in a question and its required level of cognitive processing (phi coefficient=0.55, P<0.001).
Conclusion Medical instructors should be aware of the association between item format (case-based versus non-case-based) and the cognitive processes they elicit in order to meet the desired balance in a test, taking the learning objectives and the test difficulty into account.
Purpose This study aimed to assess the agreement between 2 raters in evaluations of students on a prosthodontic clinical practical exam integrated with directly observed procedural skills (DOPS).
Methods A sample of 76 bachelor’s students was monitored by 2 raters, who evaluated the process and the final registered maxillomandibular relation for a completely edentulous patient on a practical exam at Mansoura Dental School, Egypt, from May 15 to June 28, 2017. Each registered relation was evaluated out of a total of 60 marks, subdivided into 3 score categories: occlusal plane orientation (OPO), vertical dimension registration (VDR), and centric relation registration (CRR). The marks for each category included an assessment of DOPS. The OPO and VDR marks of both raters were compared graphically to measure reliability using Bland-Altman analysis, and the reliability of the CRR marks was evaluated using Krippendorff’s alpha coefficient.
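Bland-Altman limits of agreement, as used above for the OPO and VDR marks, are simply the mean between-rater difference ±1.96 standard deviations of the differences. A minimal sketch with simulated marks for 76 students (not the study's data):

```python
# Bland-Altman sketch: bias and limits of agreement between two raters'
# marks, using simulated data for 76 students.
import numpy as np

rng = np.random.default_rng(5)
rater1 = rng.normal(18.1, 1.2, size=76)
rater2 = rater1 + rng.normal(0.0, 0.4, size=76)

diff = rater1 - rater2
bias = diff.mean()
half_width = 1.96 * diff.std(ddof=1)
print(f"bias={bias:.2f}, limits of agreement="
      f"({bias - half_width:.2f}, {bias + half_width:.2f})")
```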
Results The results revealed highly similar marks between raters for OPO (mean=18.1 for both raters), with close limits of agreement (0.73 and −0.78). For VDR, the mean marks were close (mean=17.4 and 17.1 for examiners 1 and 2, respectively), with close limits of agreement (2.7 and −2.2). There was a strong correlation (Krippendorff’s alpha, 0.92; 95% confidence interval, 0.79–0.99) between the raters in the evaluation of CRR.
Conclusion The 2 raters’ evaluations of a traditional clinical practical exam integrated with DOPS showed no significant differences in the assessment of candidates at the end of a clinical prosthodontic course. The limits of agreement between raters could be optimized by excluding subjective evaluation parameters and complicated cases from the examination procedure.