Purpose It is assumed that case-based questions require higher-order cognitive processing, whereas non-case-based questions require lower-order cognitive processing. In this study, we investigated the extent to which case-based and non-case-based questions conformed to this assumption, based on Bloom’s taxonomy.
Methods In this article, 4,800 questions from the Interuniversity Progress Test of Medicine were classified based on whether they were case-based and on the level of Bloom’s taxonomy that they involved. Lower-order questions require students to remember and/or have a basic understanding of knowledge. Higher-order questions require students to apply, analyze, and/or evaluate. The phi coefficient was calculated to investigate the relationship between whether a question was case-based and its required level of cognitive processing.
Results Our results demonstrated that 98.1% of case-based questions required higher-order cognitive processing, compared with 33.7% of non-case-based questions. The phi coefficient demonstrated a significant but moderate correlation between the presence of a patient case in a question and its required level of cognitive processing (phi coefficient = 0.55, P < 0.001).
Conclusion Medical instructors should be aware of the association between item format (case-based versus non-case-based) and the cognitive processes it elicits in order to achieve the desired balance in a test, taking the learning objectives and the test difficulty into account.
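As a minimal computational sketch (not part of the original study), the phi coefficient described above can be computed from a 2×2 table of item format by cognitive level. The counts below are hypothetical, chosen only to echo the reported percentages; because the actual case-based/non-case-based split of the 4,800 items is not given here, the result will not reproduce the reported phi of 0.55.

```python
import math

def phi_coefficient(a, b, c, d):
    """Phi coefficient for a 2x2 contingency table:
                      higher-order  lower-order
    case-based             a             b
    non-case-based         c             d
    """
    return (a * d - b * c) / math.sqrt(
        (a + b) * (c + d) * (a + c) * (b + d)
    )

# Hypothetical counts mirroring the reported proportions
# (98.1% of case-based and 33.7% of non-case-based items
# classified as higher-order); prints phi = 0.68, not 0.55,
# because the true group sizes are unknown.
phi = phi_coefficient(981, 19, 337, 663)
print(f"phi = {phi:.2f}")
```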
Citations to this article as recorded by
To the Point: Substituting SOAP Notes for Vignettes in Preclinical Assessment Question Stems Kristina Lindquist, Derek Meeks, Kyle Mefferd, Cheryl Vanier, Terrence W. Miller Medical Science Educator.2025;[Epub] CrossRef
Progress is impossible without change: implementing automatic item generation in medical knowledge progress testing Filipe Manuel Vidal Falcão, Daniela S.M. Pereira, José Miguel Pêgo, Patrício Costa Education and Information Technologies.2024; 29(4): 4505. CrossRef
Reliability across content areas in progress tests assessing medical knowledge: a Brazilian cross-sectional study with implications for medical education assessments Pedro Tadao Hamamoto Filho, Miriam Hashimoto, Alba Regina de Abreu Lima, Leandro Arthur Diehl, Neide Tomimura Costa, Patrícia Moretti Rehder, Samira Yarak, Maria Cristina de Andrade, Maria de Lourdes Marmorato Botta Hafner, Zilda Maria Tosta Ribeiro, Júli Sao Paulo Medical Journal.2024;[Epub] CrossRef
Development of qualified items for nursing education assessment: The progress testing experience Bruna Moreno Dias, Lúcia Marta Giunta da Silva, Pedro Tadao Hamamoto Filho, Valdes Roberto Bollela, Carmen Silvia Gabriel Nurse Education in Practice.2024; 81: 104199. CrossRef
Identifying the response process validity of clinical vignette-type multiple choice questions: An eye-tracking study Francisco Carlos Specian Junior, Thiago Martins Santos, John Sandars, Eliana Martorano Amaral, Dario Cecilio-Fernandes Medical Teacher.2023; 45(8): 845. CrossRef
Relationship between medical programme progress test performance and surgical clinical attachment timing and performance Andy Wearn, Vanshay Bindra, Bradley Patten, Benjamin P. T. Loveday Medical Teacher.2023; 45(8): 877. CrossRef
Analysis of Orthopaedic In-Training Examination Trauma Questions: 2017 to 2021 Lilah Fones, Daryl C. Osbahr, Daniel E. Davis, Andrew M. Star, Atif K. Ahmed, Arjun Saxena JAAOS: Global Research and Reviews.2023;[Epub] CrossRef
Use of Sociodemographic Information in Clinical Vignettes of Multiple-Choice Questions for Preclinical Medical Students Kelly Carey-Ewend, Amir Feinberg, Alexis Flen, Clark Williamson, Carmen Gutierrez, Samuel Cykert, Gary L. Beck Dallaghan, Kurt O. Gilliland Medical Science Educator.2023; 33(3): 659. CrossRef
What faculty write versus what students see? Perspectives on multiple-choice questions using Bloom’s taxonomy Seetha U. Monrad, Nikki L. Bibler Zaidi, Karri L. Grob, Joshua B. Kurtz, Andrew W. Tai, Michael Hortsch, Larry D. Gruppen, Sally A. Santen Medical Teacher.2021; 43(5): 575. CrossRef
Accommodations in the common first-year competitive examination for health studies (PACES): between social justice and an emerging ethics of collegiality? R. Pougnet, L. Pougnet Éthique & Santé.2020; 17(4): 250. CrossRef
Knowledge of dental faculty in gulf cooperation council states of multiple-choice questions’ item writing flaws Mawlood Kowash, Hazza Alhobeira, Iyad Hussein, Manal Al Halabi, Saif Khan Medical Education Online.2020;[Epub] CrossRef
Purpose This study aimed to assess the agreement between 2 raters in evaluations of students on a prosthodontic clinical practical exam integrated with directly observed procedural skills (DOPS).
Methods Two raters evaluated a sample of 76 bachelor’s students on a practical exam at Mansoura Dental School, Egypt, from May 15 to June 28, 2017, assessing both the procedure and the final registered maxillomandibular relation for a completely edentulous patient. Each registered relation was scored out of a total of 60 marks subdivided into 3 categories: occlusal plane orientation (OPO), vertical dimension registration (VDR), and centric relation registration (CRR). The marks for each category included an assessment of DOPS. The 2 raters’ OPO and VDR marks were compared graphically using Bland–Altman analysis to measure reliability, and the reliability of the CRR marks was evaluated with Krippendorff’s alpha coefficient.
Results The results revealed highly similar marks between raters for OPO (mean = 18.1 for both raters), with close limits of agreement (0.73 and −0.78). For VDR, the mean marks were close (17.4 and 17.1 for examiners 1 and 2, respectively), also with close limits of agreement (2.7 and −2.2). There was strong agreement between the raters in the evaluation of CRR (Krippendorff’s alpha, 0.92; 95% confidence interval, 0.79–0.99).
Conclusion The 2 raters’ evaluations of a traditional clinical practical exam integrated with DOPS showed no significant differences in the evaluations of candidates at the end of a clinical prosthodontic course. The limits of agreement between raters could be optimized by excluding subjective evaluation parameters and complicated cases from the examination procedure.
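To illustrate the two reliability measures used above, the sketch below computes Bland–Altman bias and 95% limits of agreement from two raters’ marks; the mark arrays are hypothetical, not the study’s data. Krippendorff’s alpha is shown as a commented-out call, assuming the third-party krippendorff package is installed.

```python
import numpy as np

def bland_altman(rater1, rater2):
    """Mean difference (bias) and 95% limits of agreement
    between two raters' marks for the same students."""
    diffs = np.asarray(rater1, dtype=float) - np.asarray(rater2, dtype=float)
    bias = diffs.mean()
    half_width = 1.96 * diffs.std(ddof=1)
    return bias, bias - half_width, bias + half_width

# Hypothetical OPO marks for 5 students (not the study's data)
r1 = [18, 17, 19, 18, 18]
r2 = [18, 18, 19, 17, 18]
bias, lower, upper = bland_altman(r1, r2)
print(f"bias = {bias:.2f}, limits of agreement = [{lower:.2f}, {upper:.2f}]")

# Krippendorff's alpha, assuming the third-party `krippendorff`
# package (pip install krippendorff); rows are raters, columns are units.
# import krippendorff
# alpha = krippendorff.alpha(reliability_data=[r1, r2],
#                            level_of_measurement="interval")
```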
Citations to this article as recorded by
In‐person and virtual assessment of oral radiology skills and competences by the Objective Structured Clinical Examination Fernanda R. Porto, Mateus A. Ribeiro, Luciano A. Ferreira, Rodrigo G. Oliveira, Karina L. Devito Journal of Dental Education.2023; 87(4): 505. CrossRef
Evaluation agreement between peer assessors, supervisors, and parents in assessing communication and interpersonal skills of students of pediatric dentistry Jin Asari, Maiko Fujita-Ohtani, Kuniomi Nakamura, Tomomi Nakamura, Yoshinori Inoue, Shigenari Kimoto Pediatric Dental Journal.2023; 33(2): 133. CrossRef
Improving the reliability and consistency of objective structured clinical examination (OSCE) raters’ marking poses a continual challenge in medical education. The purpose of this study was to evaluate an e-learning training module for OSCE raters who participated in the assessment of third-year medical students at the University of Ottawa, Canada. The effects of online training and those of traditional in-person (face-to-face) orientation were compared. Of the 90 physicians recruited as raters for this OSCE, 60 consented to participate (66.7%) in the study in March 2017. Of the 60 participants, 55 rated students during the OSCE, while the remaining 5 were back-up raters. The number of raters in the online training group was 41, while that in the traditional in-person training group was 19. Of those with prior OSCE experience (n = 18) who participated in the online group, 13 (68%) reported that they preferred this format to the in-person orientation. The total average time needed to complete the online module was 15 minutes. Furthermore, 89% of the participants felt the module provided clarity in the rater training process. There was no significant difference in the number of missing ratings based on the type of orientation that raters received. Our study indicates that online OSCE rater training is comparable to traditional face-to-face orientation.
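The abstract does not state which statistical test was used to compare missing ratings between the two orientation groups; as a hedged sketch only, a comparison of this kind could be run with Fisher’s exact test on a 2×2 table of group by missing-rating status. The counts below are hypothetical (consistent only with the stated group sizes of 41 and 19).

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table (not the study's data):
# rows: online vs in-person raters;
# columns: raters with missing ratings vs complete ratings.
table = [[3, 38], [2, 17]]
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, P = {p_value:.3f}")
```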
Citations to this article as recorded by
Raters and examinees training for objective structured clinical examination: comparing the effectiveness of three instructional methodologies Jefferson Garcia Guerrero, Ayidah Sanad Alqarni, Lorraine Turiano Estadilla, Lizy Sonia Benjamin, Vanitha Innocent Rani BMC Nursing.2024;[Epub] CrossRef
Assessment methods and the validity and reliability of measurement tools in online objective structured clinical examinations: a systematic scoping review Jonathan Zachary Felthun, Silas Taylor, Boaz Shulruf, Digby Wigram Allen Journal of Educational Evaluation for Health Professions.2021; 18: 11. CrossRef
Empirical analysis comparing the tele-objective structured clinical examination and the in-person assessment in Australia Jonathan Zachary Felthun, Silas Taylor, Boaz Shulruf, Digby Wigram Allen Journal of Educational Evaluation for Health Professions.2021; 18: 23. CrossRef
No observed effect of a student-led mock objective structured clinical examination on subsequent performance scores in medical students in Canada Lorenzo Madrazo, Claire Bo Lee, Meghan McConnell, Karima Khamisa, Debra Pugh Journal of Educational Evaluation for Health Professions.2019; 16: 14. CrossRef
The objective structured clinical examination as a measure of the practical training of future physicians M. M. Korda, A. H. Shulhai, N. V. Pasyaka, N. V. Petrenko, N. V. Haliyash, N. A. Bilkevich Медична освіта.2019; (3): 19. CrossRef
Purpose While a strong learning environment is critical to medical student education, the assessment of medical school learning environments has confounded researchers. Our goal was to assess the validity and utility of the Johns Hopkins Learning Environment Scale (JHLES) for preclinical students at 3 Malaysian medical schools with distinct educational and institutional models. Two schools were new international partnerships, and the third was a school-leaver program established without an international partnership.
Methods First- and second-year students responded anonymously to surveys at the end of the academic year. The surveys included the JHLES, a 28-item instrument using 5-point Likert scale response options; the Dundee Ready Educational Environment Measure (DREEM), the most widely used method for assessing learning environments internationally; a personal growth scale; and single-item global learning environment assessment variables.
Results The overall response rate was 369/429 (86%). After adjusting for the medical school year, gender, and ethnicity of the respondents, the JHLES detected differences across institutions in 4 out of 7 domains (57%), with each school having a unique domain profile, while the DREEM detected differences in 1 out of 5 categories (20%). The JHLES was more strongly correlated than the DREEM with two-thirds of the single-item variables and with the personal growth scale. The JHLES showed high internal reliability for the total score (α = 0.92) and the 7 domains (α = 0.56–0.85).
Conclusion The JHLES detected variation between learning environment domains across 3 educational settings, thereby creating unique learning environment profiles. Interpretation of these profiles may allow schools to understand how they are currently supporting trainees and to identify areas needing attention.
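The internal reliability figures above are Cronbach’s alpha values. As a minimal, self-contained sketch (using hypothetical Likert responses, not the survey data), alpha can be computed from a respondents-by-items score matrix as follows.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (respondents x items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var_sum = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Hypothetical 5-point Likert responses from 6 students to 4 items
responses = [
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 2, 3, 3],
]
print(f"alpha = {cronbach_alpha(responses):.2f}")
```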
Citations to this article as recorded by
Quality of the educational environment in Slovakia and the Czech Republic using the DREEM inventory Ľubomíra Lizáková, Daša Stupková, Lucie Libešová, Ivana Lamková, Valéria Horanská, Ľudmila Andráščíková, Alena Lochmannová Pielegniarstwo XXI wieku / Nursing in the 21st Century.2025;[Epub] CrossRef
A scoping review of the questionnaires used for the assessment of the perception of undergraduate students of the learning environment in healthcare professions education programs Banan Mukhalalati, Ola Yakti, Sara Elshami Advances in Health Sciences Education.2024; 29(4): 1501. CrossRef
Validation of the Polish version of the Johns Hopkins Learning Environment Scale–a confirmatory factor analysis Dorota Wójcik, Leszek Szalewski, Adam Bęben, Iwona Ordyniec-Kwaśnica, Robert B. Shochet Scientific Reports.2024;[Epub] CrossRef
Exploring medical students' experience of the learning environment: a mixed methods study in Saudi medical college Mohammed Almansour, Noura Abouammoh, Reem Bin Idris, Omar Abdullatif Alsuliman, Renad Abdulrahman Alhomaidi, Mohammed Hamad Alhumud, Hani A. Alghamdi BMC Medical Education.2024;[Epub] CrossRef
A multicenter cross-sectional study in China revealing the intrinsic relationship between medical students’ grade and their perceptions of the learning environment Runzhi Huang, Weijin Qian, Sujie Xie, Mei Cheng, Meiqiong Gong, Shuyuan Xian, Minghao Jin, Mengyi Zhang, Jieling Tang, Bingnan Lu, Yiting Yang, Zhenglin Liu, Mingyu Qu, Haonan Ma, Xinru Wu, Huabin Yin, Xiaonan Wang, Xin Liu, Yue Wang, Wenfang Chen, Min Li BMC Medical Education.2024;[Epub] CrossRef
Learning environment and its relationship with quality of life and burn-out among undergraduate medical students in Pakistan: a cross-sectional study Saadia Shahzad, Gohar Wajid BMJ Open.2024; 14(8): e080440. CrossRef
Validation of the Polish version of the DREEM questionnaire – a confirmatory factor analysis Dorota Wójcik, Leszek Szalewski, Adam Bęben, Iwona Ordyniec-Kwaśnica, Sue Roff BMC Medical Education.2023;[Epub] CrossRef
Association between patient care ownership and personal or environmental factors among medical trainees: a multicenter cross-sectional study Hirohisa Fujikawa, Daisuke Son, Takuya Aoki, Masato Eto BMC Medical Education.2022;[Epub] CrossRef
Measuring Students’ Perceptions of the Medical School Learning Environment: Translation, Transcultural Adaptation, and Validation of 2 Instruments to the Brazilian Portuguese Language Rodolfo F Damiano, Aline O Furtado, Betina N da Silva, Oscarina da S Ezequiel, Alessandra LG Lucchetti, Lisabeth F DiLalla, Sean Tackett, Robert B Shochet, Giancarlo Lucchetti Journal of Medical Education and Curricular Development.2020;[Epub] CrossRef
Developing an Introductory Radiology Clerkship at Perdana University Graduate School of Medicine in Kuala Lumpur, Malaysia Sarah Wallace Cater, Lakshmi Krishnan, Lars Grimm, Brian Garibaldi, Isabel Green Health Professions Education.2017; 3(2): 113. CrossRef
Trainers' perception of the learning environment and student competency: A qualitative investigation of midwifery and anesthesia training programs in Ethiopia Sharon Kibwana, Rachel Haws, Adrienne Kols, Firew Ayalew, Young-Mi Kim, Jos van Roosmalen, Jelle Stekelenburg Nurse Education Today.2017; 55: 5. CrossRef