JEEHP : Journal of Educational Evaluation for Health Professions

OPEN ACCESS
SEARCH
Search

Search

Page Path
HOME > Search
4 "Educational assessment"
Research articles
Comparison of the level of cognitive processing between case-based items and non-case-based items on the Interuniversity Progress Test of Medicine in the Netherlands  
Dario Cecilio-Fernandes, Wouter Kerdijk, Andreas Johannes Bremers, Wytze Aalders, René Anton Tio
J Educ Eval Health Prof. 2018;15:28.   Published online December 12, 2018
DOI: https://doi.org/10.3352/jeehp.2018.15.28
  • 18,913 View
  • 196 Download
  • 8 Web of Science
  • 8 Crossref
Abstract
Purpose
It is assumed that case-based questions require higher-order cognitive processing, whereas non-case-based questions require lower-order cognitive processing. In this study, we investigated the extent to which case-based and non-case-based questions followed this assumption, using Bloom’s taxonomy.
Methods
A total of 4,800 questions from the Interuniversity Progress Test of Medicine were classified according to whether they were case-based and to the level of Bloom’s taxonomy that they involved. Lower-order questions require students to remember and/or have a basic understanding of knowledge. Higher-order questions require students to apply, analyze, and/or evaluate. The phi coefficient was calculated to investigate the relationship between whether questions were case-based and the required level of cognitive processing.
Results
Our results demonstrated that 98.1% of case-based questions required higher-level cognitive processing. Of the non-case-based questions, 33.7% required higher-level cognitive processing. The phi coefficient demonstrated a significant, but moderate correlation between the presence of a patient case in a question and its required level of cognitive processing (phi coefficient= 0.55, P< 0.001).
Conclusion
Medical instructors should be aware of the association between item format (case-based versus non-case-based) and the cognitive processes items elicit, in order to achieve the desired balance in a test, taking the learning objectives and the test difficulty into account.
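For readers unfamiliar with the statistic, the phi coefficient for a 2×2 cross-classification (item format by required cognitive level) is computed directly from the four cell counts. The sketch below is a minimal Python illustration using hypothetical counts chosen only to mirror the reported percentages; it is not the authors’ analysis code, and it does not reproduce the reported value of 0.55, which depends on the actual mix of case-based and non-case-based items in the test.

```python
# Minimal sketch (not the study's code): phi coefficient for a 2x2 table of
# item format (case-based vs. non-case-based) by required cognitive level.
import math

def phi_coefficient(a, b, c, d):
    """Phi for the table:
                        higher-order   lower-order
        case-based           a              b
        non-case-based       c              d
    """
    numerator = a * d - b * c
    denominator = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return numerator / denominator

# Hypothetical counts that mirror the reported percentages
# (98.1% of case-based and 33.7% of non-case-based items higher-order).
print(round(phi_coefficient(981, 19, 337, 663), 2))
```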

Citations

Citations to this article as recorded by  
  • Progress is impossible without change: implementing automatic item generation in medical knowledge progress testing
    Filipe Manuel Vidal Falcão, Daniela S.M. Pereira, José Miguel Pêgo, Patrício Costa
    Education and Information Technologies.2024; 29(4): 4505.     CrossRef
  • Identifying the response process validity of clinical vignette-type multiple choice questions: An eye-tracking study
    Francisco Carlos Specian Junior, Thiago Martins Santos, John Sandars, Eliana Martorano Amaral, Dario Cecilio-Fernandes
    Medical Teacher.2023; 45(8): 845.     CrossRef
  • Relationship between medical programme progress test performance and surgical clinical attachment timing and performance
    Andy Wearn, Vanshay Bindra, Bradley Patten, Benjamin P. T. Loveday
    Medical Teacher.2023; 45(8): 877.     CrossRef
  • Analysis of Orthopaedic In-Training Examination Trauma Questions: 2017 to 2021
    Lilah Fones, Daryl C. Osbahr, Daniel E. Davis, Andrew M. Star, Atif K. Ahmed, Arjun Saxena
    JAAOS: Global Research and Reviews.2023;[Epub]     CrossRef
  • Use of Sociodemographic Information in Clinical Vignettes of Multiple-Choice Questions for Preclinical Medical Students
    Kelly Carey-Ewend, Amir Feinberg, Alexis Flen, Clark Williamson, Carmen Gutierrez, Samuel Cykert, Gary L. Beck Dallaghan, Kurt O. Gilliland
    Medical Science Educator.2023; 33(3): 659.     CrossRef
  • What faculty write versus what students see? Perspectives on multiple-choice questions using Bloom’s taxonomy
    Seetha U. Monrad, Nikki L. Bibler Zaidi, Karri L. Grob, Joshua B. Kurtz, Andrew W. Tai, Michael Hortsch, Larry D. Gruppen, Sally A. Santen
    Medical Teacher.2021; 43(5): 575.     CrossRef
  • Accommodations for the first-year common entrance examination for health studies (PACES): between social justice and an emerging collegial ethic?
    R. Pougnet, L. Pougnet
    Éthique & Santé.2020; 17(4): 250.     CrossRef
  • Knowledge of dental faculty in gulf cooperation council states of multiple-choice questions’ item writing flaws
    Mawlood Kowash, Hazza Alhobeira, Iyad Hussein, Manal Al Halabi, Saif Khan
    Medical Education Online.2020;[Epub]     CrossRef
Agreement between 2 raters’ evaluations of a traditional prosthodontic practical exam integrated with directly observed procedural skills in Egypt  
Ahmed Khalifa Khalifa, Salah Hegazy
J Educ Eval Health Prof. 2018;15:23.   Published online September 27, 2018
DOI: https://doi.org/10.3352/jeehp.2018.15.23
  • 23,840 View
  • 204 Download
  • 2 Web of Science
  • 2 Crossref
Abstract
Purpose
This study aimed to assess the agreement between 2 raters in evaluations of students on a prosthodontic clinical practical exam integrated with directly observed procedural skills (DOPS).
Methods
A sample of 76 bachelor’s students was monitored by 2 raters, who evaluated the process and the final registered maxillomandibular relation for a completely edentulous patient during a practical exam at Mansoura Dental School, Egypt, from May 15 to June 28, 2017. Each registered relation was evaluated out of a total of 60 marks subdivided into 3 score categories: occlusal plane orientation (OPO), vertical dimension registration (VDR), and centric relation registration (CRR). The marks for each category included an assessment of DOPS. The OPO and VDR marks of the 2 raters were compared graphically, with reliability measured through Bland-Altman analysis. The reliability of the CRR marks was evaluated with the Krippendorff alpha coefficient.
Results
The results revealed highly similar marks between raters for OPO (mean= 18.1 for both raters), with close limits of agreement (0.73 and −0.78). For VDR, the mean marks were close (17.4 and 17.1 for examiners 1 and 2, respectively), with close limits of agreement (2.7 and −2.2). There was strong agreement between the raters in the evaluation of CRR (Krippendorff alpha coefficient, 0.92; 95% confidence interval, 0.79–0.99).
Conclusion
The 2 raters’ evaluations of a traditional clinical practical exam integrated with DOPS showed no significant differences in the assessment of candidates at the end of a clinical prosthodontic course. The limits of agreement between raters could be optimized by excluding subjective evaluation parameters and complicated cases from the examination procedure.
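As background for the analysis described above, Bland-Altman limits of agreement are the mean between-rater difference (bias) plus or minus 1.96 standard deviations of the differences. The sketch below is a minimal Python illustration with made-up marks, not the study’s analysis code; Krippendorff’s alpha for the CRR marks would typically be computed with a dedicated reliability package and is omitted here.

```python
# Minimal sketch (not the study's code): Bland-Altman bias and 95% limits of
# agreement between two raters' marks on the same students.
import numpy as np

def bland_altman(rater1, rater2):
    """Return (bias, upper limit, lower limit) for paired marks."""
    diff = np.asarray(rater1, dtype=float) - np.asarray(rater2, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias + 1.96 * sd, bias - 1.96 * sd

# Hypothetical OPO marks (out of 20) from two raters for 8 students.
rater1 = [18, 17, 19, 18, 16, 18, 17, 19]
rater2 = [18, 18, 19, 17, 16, 18, 17, 18]
print(bland_altman(rater1, rater2))
```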

Citations

Citations to this article as recorded by  
  • In‐person and virtual assessment of oral radiology skills and competences by the Objective Structured Clinical Examination
    Fernanda R. Porto, Mateus A. Ribeiro, Luciano A. Ferreira, Rodrigo G. Oliveira, Karina L. Devito
    Journal of Dental Education.2023; 87(4): 505.     CrossRef
  • Evaluation agreement between peer assessors, supervisors, and parents in assessing communication and interpersonal skills of students of pediatric dentistry
    Jin Asari, Maiko Fujita-Ohtani, Kuniomi Nakamura, Tomomi Nakamura, Yoshinori Inoue, Shigenari Kimoto
    Pediatric Dental Journal.2023; 33(2): 133.     CrossRef
Brief report
The implementation and evaluation of an e-Learning training module for objective structured clinical examination raters in Canada  
Karima Khamisa, Samantha Halman, Isabelle Desjardins, Mireille St. Jean, Debra Pugh
J Educ Eval Health Prof. 2018;15:18.   Published online August 6, 2018
DOI: https://doi.org/10.3352/jeehp.2018.15.18
  • 22,879 View
  • 253 Download
  • 3 Web of Science
  • 4 Crossref
Abstract
Improving the reliability and consistency of objective structured clinical examination (OSCE) raters’ marking poses a continual challenge in medical education. The purpose of this study was to evaluate an e-Learning training module for OSCE raters who participated in the assessment of third-year medical students at the University of Ottawa, Canada. The effects of online training were compared with those of traditional in-person (face-to-face) orientation. Of the 90 physicians recruited as raters for this OSCE, 60 (67.7%) consented to participate in the study in March 2017. Of the 60 participants, 55 rated students during the OSCE, while the remaining 5 were back-up raters. The online training group included 41 raters, and the traditional in-person training group included 19. Of those with prior OSCE experience (n= 18) who participated in the online group, 13 (68%) reported that they preferred this format to the in-person orientation. The total average time needed to complete the online module was 15 minutes. Furthermore, 89% of the participants felt that the module provided clarity in the rater training process. There was no significant difference in the number of missing ratings based on the type of orientation that raters received. Our study indicates that online OSCE rater training is comparable to traditional face-to-face orientation.

Citations

Citations to this article as recorded by  
  • Assessment methods and the validity and reliability of measurement tools in online objective structured clinical examinations: a systematic scoping review
    Jonathan Zachary Felthun, Silas Taylor, Boaz Shulruf, Digby Wigram Allen
    Journal of Educational Evaluation for Health Professions.2021; 18: 11.     CrossRef
  • Empirical analysis comparing the tele-objective structured clinical examination and the in-person assessment in Australia
    Jonathan Zachary Felthun, Silas Taylor, Boaz Shulruf, Digby Wigram Allen
    Journal of Educational Evaluation for Health Professions.2021; 18: 23.     CrossRef
  • No observed effect of a student-led mock objective structured clinical examination on subsequent performance scores in medical students in Canada
    Lorenzo Madrazo, Claire Bo Lee, Meghan McConnell, Karima Khamisa, Debra Pugh
    Journal of Educational Evaluation for Health Professions.2019; 16: 14.     CrossRef
  • Objective structured clinical examination as a measure of the practical training of the future physician
    M. M. Korda, A. H. Shulhai, N. V. Pasyaka, N. V. Petrenko, N. V. Haliyash, N. A. Bilkevich
    Медична освіта.2019; (3): 19.     CrossRef
Research Article
Profiling medical school learning environments in Malaysia: a validation study of the Johns Hopkins Learning Environment Scale  
Sean Tackett, Hamidah Abu Bakar, Nicole A. Shilkofski, Niamh Coady, Krishna Rampal, Scott Wright
J Educ Eval Health Prof. 2015;12:39.   Published online July 9, 2015
DOI: https://doi.org/10.3352/jeehp.2015.12.39
  • 30,159 View
  • 171 Download
  • 10 Web of Science
  • 5 Crossref
Abstract
Purpose
While a strong learning environment is critical to medical student education, the assessment of medical school learning environments has confounded researchers. Our goal was to assess the validity and utility of the Johns Hopkins Learning Environment Scale (JHLES) for preclinical students at 3 Malaysian medical schools with distinct educational and institutional models. Two schools were new international partnerships, and the third was a school-leaver program established without an international partnership.
Methods
First- and second-year students responded anonymously to surveys at the end of the academic year. The surveys included the JHLES, a 28-item instrument with 5-point Likert scale response options; the Dundee Ready Educational Environment Measure (DREEM), the most widely used instrument for assessing learning environments internationally; a personal growth scale; and single-item global learning environment assessment variables.
Results
The overall response rate was 369/429 (86%). After adjusting for the medical school year, gender, and ethnicity of the respondents, the JHLES detected differences across institutions in 4 of the 7 domains (57%), with each school having a unique domain profile. The DREEM detected differences in 1 of the 5 categories (20%). The JHLES was more strongly correlated than the DREEM with two thirds of the single-item variables and with the personal growth scale. The JHLES showed high internal reliability for the total score (α=0.92) and the 7 domains (α=0.56–0.85).
Conclusion
The JHLES detected variation in learning environment domains across the 3 educational settings, thereby creating unique learning environment profiles. Interpretation of these profiles may allow schools to understand how they are currently supporting trainees and to identify areas needing attention.
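The internal reliability figures above are Cronbach’s alpha values. The sketch below is a minimal Python illustration of the calculation on simulated 5-point Likert responses; the data are randomly generated for illustration only, and the code is not the authors’ analysis.

```python
# Minimal sketch (not the study's code): Cronbach's alpha for a block of
# Likert-scale items (rows = respondents, columns = items in one domain).
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_variances = items.var(axis=0, ddof=1).sum()
    total_score_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_item_variances / total_score_variance)

# Simulated 5-point responses from 30 respondents to a 7-item domain:
# a shared tendency per respondent plus item-level noise, clipped to 1-5.
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(30, 1))
noise = rng.integers(-1, 2, size=(30, 7))
data = np.clip(base + noise, 1, 5)
print(round(cronbach_alpha(data), 2))
```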

Citations

Citations to this article as recorded by  
  • Validation of the Polish version of the DREEM questionnaire – a confirmatory factor analysis
    Dorota Wójcik, Leszek Szalewski, Adam Bęben, Iwona Ordyniec-Kwaśnica, Sue Roff
    BMC Medical Education.2023;[Epub]     CrossRef
  • Association between patient care ownership and personal or environmental factors among medical trainees: a multicenter cross-sectional study
    Hirohisa Fujikawa, Daisuke Son, Takuya Aoki, Masato Eto
    BMC Medical Education.2022;[Epub]     CrossRef
  • Measuring Students’ Perceptions of the Medical School Learning Environment: Translation, Transcultural Adaptation, and Validation of 2 Instruments to the Brazilian Portuguese Language
    Rodolfo F Damiano, Aline O Furtado, Betina N da Silva, Oscarina da S Ezequiel, Alessandra LG Lucchetti, Lisabeth F DiLalla, Sean Tackett, Robert B Shochet, Giancarlo Lucchetti
    Journal of Medical Education and Curricular Development.2020; 7: 238212052090218.     CrossRef
  • Developing an Introductory Radiology Clerkship at Perdana University Graduate School of Medicine in Kuala Lumpur, Malaysia
    Sarah Wallace Cater, Lakshmi Krishnan, Lars Grimm, Brian Garibaldi, Isabel Green
    Health Professions Education.2017; 3(2): 113.     CrossRef
  • Trainers' perception of the learning environment and student competency: A qualitative investigation of midwifery and anesthesia training programs in Ethiopia
    Sharon Kibwana, Rachel Haws, Adrienne Kols, Firew Ayalew, Young-Mi Kim, Jos van Roosmalen, Jelle Stekelenburg
    Nurse Education Today.2017; 55: 5.     CrossRef
