JEEHP : Journal of Educational Evaluation for Health Professions

Search results for "Assessment": 27 articles
Research articles
Use of learner-driven, formative, ad-hoc, prospective assessment of competence in physical therapist clinical education in the United States: a prospective cohort study  
Carey Holleran, Jeffrey Konrad, Barbara Norton, Tamara Burlis, Steven Ambler
J Educ Eval Health Prof. 2023;20:36.   Published online December 8, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.36
  • 699 View
  • 110 Download
Abstract
Purpose
The purpose of this project was to implement a process for learner-driven, formative, prospective, ad-hoc, entrustment assessment in Doctor of Physical Therapy clinical education. Our goals were to develop an innovative entrustment assessment tool, and then explore whether the tool detected (1) differences between learners at different stages of development and (2) differences within learners across the course of a clinical education experience. We also investigated whether there was a relationship between the number of assessments and change in performance.
Methods
A prospective, observational cohort of clinical instructors (CIs) was recruited to perform learner-driven, formative, ad-hoc, prospective entrustment assessments. Two entrustable professional activities (EPAs) were used: (1) gather a history and perform an examination and (2) implement and modify the plan of care, as needed. CIs provided a rating on the entrustment scale and narrative support for their rating.
Results
Forty-nine learners participated across 4 clinical experiences (CEs), resulting in 453 EPA learner-driven assessments. For both EPAs, statistically significant changes were detected both between learners at different stages of development and within learners across the course of a CE. Improvement within each CE was significantly related to the number of feedback opportunities.
Conclusion
The results of this pilot study provide preliminary support for the use of learner-driven, formative, ad-hoc assessments of competence based on EPAs with a novel entrustment scale. The number of formative assessments requested correlated with change on the EPA scale, suggesting that formative feedback may augment performance improvement.
Enhancement of the technical and non-technical skills of nurse anesthesia students using the Anesthetic List Management Assessment Tool in Iran: a quasi-experimental study  
Ali Khalafi, Maedeh Kordnejad, Vahid Saidkhani
J Educ Eval Health Prof. 2023;20:19.   Published online June 16, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.19
  • 1,091 View
  • 80 Download
Abstract
Purpose
This study investigated the effect of evaluations based on the Anesthetic List Management Assessment Tool (ALMAT) form on improving the technical and non-technical skills of final-year nurse anesthesia students at Ahvaz Jundishapur University of Medical Sciences (AJUMS).
Methods
This was a quasi-experimental study with a pre-test and post-test design. It included 45 final-year nurse anesthesia students of AJUMS and lasted for 3 months. The technical and non-technical skills of the intervention group were assessed at 4 university hospitals using formative-feedback evaluation based on the ALMAT form, from induction of anesthesia until reaching mastery and independence. Finally, the students’ degree of improvement in technical and non-technical skills was compared between the intervention and control groups. Statistical tests (the independent t-test, paired t-test, and Mann-Whitney test) were used to analyze the data.
Results
The rate of improvement in post-test scores of technical skills was significantly higher in the intervention group than in the control group (P<0.0001). Similarly, the students in the intervention group received significantly higher post-test scores for non-technical skills than the students in the control group (P<0.0001).
Conclusion
The findings of this study showed that the use of ALMAT as a formative-feedback evaluation method to evaluate technical and non-technical skills had a significant effect on improving these skills and was effective in helping students learn and reach mastery and independence.
Empirical analysis comparing the tele-objective structured clinical examination and the in-person assessment in Australia  
Jonathan Zachary Felthun, Silas Taylor, Boaz Shulruf, Digby Wigram Allen
J Educ Eval Health Prof. 2021;18:23.   Published online September 23, 2021
DOI: https://doi.org/10.3352/jeehp.2021.18.23
  • 4,772 View
  • 224 Download
  • 2 Web of Science
  • 3 Crossref
Abstract
Purpose
This study aimed to compare the use of the tele-objective structured clinical examination (teleOSCE) with in-person assessment in a high-stakes clinical examination and to determine the impact of the teleOSCE on the assessment undertaken. It also discusses which skills and domains can be effectively assessed in a teleOSCE.
Methods
This study is a retrospective observational analysis. It compares the results achieved by final year medical students in their clinical examination, assessed using the teleOSCE in 2020 (n=285), with those who were examined using the traditional in-person format in 2019 (n=280). The study was undertaken at the University of New South Wales, Australia.
Results
In the domain of physical examination, students in 2020 scored 0.277 points lower than those in 2019 (mean difference=–0.277, P<0.001, effect size=0.332). Across all other domains, there was no significant difference in mean scores between 2019 and 2020.
Conclusion
The teleOSCE does not negatively impact assessment in clinical examinations in any domain except physical examination. If the teleOSCE is the future of clinical skills examination, assessment of physical examination will require concomitant workplace-based assessment.

Citations

Citations to this article as recorded by  
  • Radiography education in 2022 and beyond - Writing the history of the present: A narrative review
    Y.X. Tay, J.P. McNulty
    Radiography.2023; 29(2): 391.     CrossRef
  • Feasibility and reliability of the pandemic-adapted online-onsite hybrid graduation OSCE in Japan
    Satoshi Hara, Kunio Ohta, Daisuke Aono, Toshikatsu Tamai, Makoto Kurachi, Kimikazu Sugimori, Hiroshi Mihara, Hiroshi Ichimura, Yasuhiko Yamamoto, Hideki Nomura
    Advances in Health Sciences Education.2023;[Epub]     CrossRef
  • Newly appointed medical faculty members’ self-evaluation of their educational roles at the Catholic University of Korea College of Medicine in 2020 and 2021: a cross-sectional survey-based study
    Sun Kim, A Ra Cho, Chul Woon Chung
    Journal of Educational Evaluation for Health Professions.2021; 18: 28.     CrossRef
Review
Assessment methods and the validity and reliability of measurement tools in online objective structured clinical examinations: a systematic scoping review  
Jonathan Zachary Felthun, Silas Taylor, Boaz Shulruf, Digby Wigram Allen
J Educ Eval Health Prof. 2021;18:11.   Published online June 1, 2021
DOI: https://doi.org/10.3352/jeehp.2021.18.11
  • 6,533 View
  • 392 Download
  • 10 Web of Science
  • 9 Crossref
Abstract
The coronavirus disease 2019 (COVID-19) pandemic has required educators to adapt the in-person objective structured clinical examination (OSCE) to online settings in order for it to remain a critical component of the multifaceted assessment of a student’s competency. This systematic scoping review aimed to summarize the assessment methods and the validity and reliability of the measurement tools used in current online OSCE (hereafter referred to as teleOSCE) approaches. A comprehensive literature review was undertaken following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews guidelines. Articles were eligible if they reported any form of performance assessment, in any field of healthcare, delivered in an online format. Two reviewers independently screened the results and analyzed relevant studies. Eleven articles were included in the analysis. Pre-recorded videos were used in 3 studies, while observations by remote examiners through an online platform were used in 7 studies. Acceptability as perceived by students was reported in 2 studies. This systematic scoping review identified several insights garnered from implementing teleOSCEs, the components transferable from telemedicine, and the need for systematic research to establish the ideal teleOSCE framework. TeleOSCEs may be able to improve the accessibility and reproducibility of clinical assessments and equip students with the requisite skills to effectively practice telemedicine in the future.

Citations

Citations to this article as recorded by  
  • The virtual Clinical Assessment of Skills and Competence: the impact and challenges of a digitised final examination
    Kenny Chu, Shivanthi Sathanandan
    BJPsych Bulletin.2023; 47(2): 110.     CrossRef
  • Virtual Learning and Assessment in Rheumatology Fellowship Training: Objective Structured Clinical Examination Revisited
    Rachel M. Wolfe, Faye N. Hant, Rumey C. Ishizawar, Lisa G. Criscione‐Schreiber, Beth L. Jonas, Kenneth S. O'Rourke, Marcy B. Bolster
    Arthritis Care & Research.2023; 75(12): 2435.     CrossRef
  • Feasibility and reliability of the pandemic-adapted online-onsite hybrid graduation OSCE in Japan
    Satoshi Hara, Kunio Ohta, Daisuke Aono, Toshikatsu Tamai, Makoto Kurachi, Kimikazu Sugimori, Hiroshi Mihara, Hiroshi Ichimura, Yasuhiko Yamamoto, Hideki Nomura
    Advances in Health Sciences Education.2023;[Epub]     CrossRef
  • Innovations in assessment in health professions education during the COVID‐19 pandemic: A scoping review
    Jamal Giri, Claire Stewart
    The Clinical Teacher.2023;[Epub]     CrossRef
  • Evaluation of the Utility of Online Objective Structured Clinical Examination Conducted During the COVID-19 Pandemic
    Mona Arekat, Mohamed Hany Shehata, Abdelhalim Deifalla, Ahmed Al-Ansari, Archana Kumar, Mohamed Alsenbesy, Hamdi Alshenawi, Amgad El-Agroudy, Mariwan Husni, Diaa Rizk, Abdelaziz Elamin, Afif Ben Salah, Hani Atwa
    Advances in Medical Education and Practice.2022; Volume 13: 407.     CrossRef
  • Comparison of student pharmacists' performance on in-person vs. virtual OSCEs in a pre-APPE capstone course
    Justine S. Gortney, Joseph P. Fava, Andrew D. Berti, Brittany Stewart
    Currents in Pharmacy Teaching and Learning.2022; 14(9): 1116.     CrossRef
  • Is online objective structured clinical examination teaching an acceptable replacement in post-COVID-19 medical education in the United Kingdom?: a descriptive study
    Vashist Motkur, Aniket Bharadwaj, Nimalesh Yogarajah
    Journal of Educational Evaluation for Health Professions.2022; 19: 30.     CrossRef
  • Equal Z standard-setting method to estimate the minimum number of panelists for a medical school’s objective structured clinical examination in Taiwan: a simulation study
    Ying-Ying Yang, Pin-Hsiang Huang, Ling-Yu Yang, Chia-Chang Huang, Chih-Wei Liu, Shiau-Shian Huang, Chen-Huan Chen, Fa-Yauh Lee, Shou-Yen Kao, Boaz Shulruf
    Journal of Educational Evaluation for Health Professions.2022; 19: 27.     CrossRef
  • Empirical analysis comparing the tele-objective structured clinical examination and the in-person assessment in Australia
    Jonathan Zachary Felthun, Silas Taylor, Boaz Shulruf, Digby Wigram Allen
    Journal of Educational Evaluation for Health Professions.2021; 18: 23.     CrossRef
Research articles
Development and validation of a portfolio assessment system for medical schools in Korea  
Dong Mi Yoo, A Ra Cho, Sun Kim
J Educ Eval Health Prof. 2020;17:39.   Published online December 9, 2020
DOI: https://doi.org/10.3352/jeehp.2020.17.39
  • 4,723 View
  • 207 Download
  • 2 Web of Science
  • 3 Crossref
Abstract
Purpose
Consistent evaluation procedures based on objective and rational standards are essential for the sustainability of portfolio-based education, which has been widely introduced in medical education. We aimed to develop and implement a portfolio assessment system, and to assess its validity and reliability.
Methods
We developed a portfolio assessment system from March 2019 to August 2019 and confirmed its content validity through assessment by an expert group comprising 2 medical education specialists, 2 professors involved in education at a medical school, and a professor of basic medical science. Six trained assessors conducted 2 rounds of evaluation of 7 randomly selected portfolios for the “Self-Development and Portfolio II” course from January 2020 to July 2020. Using these data, inter-rater reliability was evaluated with intra-class correlation coefficients (ICCs) in September 2020.
Results
The portfolio assessment system is based on the following process: assessor selection, training, analytical/comprehensive evaluation, and consensus. Appropriately trained assessors evaluated portfolios based on specific assessment criteria and a rubric for assigning points. In the first round of evaluation, all assessment areas except “goal-setting” showed high inter-rater reliability, with ICCs of 0.81 or higher. After the first round, we standardized the assessment procedures to improve objectivity. As a result, in the second round, all components of the assessment showed close agreement, with ICCs of 0.81 or higher.
Conclusion
We confirmed that when appropriately trained assessors conduct portfolio assessments based on specified standards through a systematic procedure, the results are reliable.
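Inter-rater reliability figures like those above can be computed from the subjects-by-raters score matrix. A minimal sketch of the two-way random-effects, single-rater form ICC(2,1), assuming complete ratings with no missing cells (the abstract does not specify which ICC variant was used, and the function name is illustrative):

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, single rater, absolute agreement.

    ratings: (n subjects x k raters) matrix with no missing cells.
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater means
    # Mean squares from a two-way ANOVA without replication
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # raters
    sse = np.sum((ratings - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

When the raters agree perfectly the statistic is 1; a constant offset between raters (a systematic severity difference) lowers it, which is why ICC(2,1) measures absolute agreement rather than mere consistency.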

Citations

Citations to this article as recorded by  
  • Development of an electronic learning progression dashboard to monitor student clinical experiences
    Hollis Lai, Nazila Ameli, Steven Patterson, Anthea Senior, Doris Lunardon
    Journal of Dental Education.2022; 86(6): 759.     CrossRef
  • Medical Student Portfolios: A Systematic Scoping Review
    Rei Tan, Jacquelin Jia Qi Ting, Daniel Zhihao Hong, Annabelle Jia Sing Lim, Yun Ting Ong, Anushka Pisupati, Eleanor Jia Xin Chong, Min Chiam, Alexia Sze Inn Lee, Laura Hui Shuen Tan, Annelissa Mien Chew Chin, Limin Wijaya, Warren Fong, Lalit Kumar Radha K
    Journal of Medical Education and Curricular Development.2022; 9: 238212052210760.     CrossRef
  • Development of Teaching and Learning Manual for Competency-Based Practice for Meridian & Acupuncture Points Class
    Eunbyul Cho, Jiseong Hong, Yeonkyeong Nam, Haegue Shin, Jae-Hyo Kim
    Korean Journal of Acupuncture.2022; 39(4): 184.     CrossRef
Estimation of item parameters and examinees’ mastery probability in each domain of the Korean Medical Licensing Examination using a deterministic inputs, noisy “and” gate (DINA) model  
Younyoung Choi, Dong Gi Seo
J Educ Eval Health Prof. 2020;17:35.   Published online November 17, 2020
DOI: https://doi.org/10.3352/jeehp.2020.17.35
  • 4,774 View
  • 97 Download
Abstract
Purpose
The deterministic inputs, noisy “and” gate (DINA) model is a promising statistical method for providing useful diagnostic information about students’ level of achievement, as educators often want to receive diagnostic information on how examinees did on each content strand, which is referred to as a diagnostic profile. The purpose of this paper was to classify examinees of the Korean Medical Licensing Examination (KMLE) in different content domains using the DINA model.
Methods
This paper analyzed data from the KMLE, with 360 items and 3,259 examinees. An application study was conducted to estimate examinees’ parameters and item characteristics. The guessing and slipping parameters of each item were estimated, and statistical analysis was conducted using the DINA model.
Results
The output table shows examples of some items that can be used to check item quality. The probabilities of mastery of each content domain were also estimated, indicating the mastery profile of each examinee. The classification accuracy and consistency for 8 content domains ranged from 0.849 to 0.972 and from 0.839 to 0.994, respectively. As a result, the classification reliability of the cognitive diagnosis model was very high for the 8 content domains of the KMLE.
Conclusion
This mastery profile can provide useful diagnostic information for each examinee in terms of each content domain of the KMLE. Individual mastery profiles allow educators and examinees to understand which domain(s) should be improved in order to master all domains in the KMLE. In addition, all items showed reasonable results in terms of item parameters.
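The DINA response model described above has a simple closed form: an examinee answers an item correctly with probability 1−slip when they have mastered every attribute the item requires (per the Q-matrix), and with the guessing probability otherwise. A minimal sketch of that item response function (names are illustrative; this is not the study's estimation code, which fit these parameters from the KMLE data):

```python
import numpy as np

def dina_correct_prob(alpha, q, guess, slip):
    """P(correct answer) for one examinee on one item under the DINA model.

    alpha: 0/1 vector of the examinee's mastered attributes
    q:     0/1 vector of attributes the item requires (a Q-matrix row)
    guess: probability of answering correctly without all required attributes
    slip:  probability of answering incorrectly despite mastering them all
    """
    alpha, q = np.asarray(alpha), np.asarray(q)
    eta = int(np.all(alpha[q == 1] == 1))  # 1 iff every required attribute is mastered
    return (1 - slip) ** eta * guess ** (1 - eta)
```

The "and" in the model's name comes from eta: mastery of the required attributes is conjunctive, so missing even one drops the examinee to the guessing probability.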
Reviews
A proposal for the future of medical education accreditation in Korea  
Ki-Young Lim
J Educ Eval Health Prof. 2020;17:32.   Published online October 21, 2020
DOI: https://doi.org/10.3352/jeehp.2020.17.32
  • 4,519 View
  • 122 Download
  • 2 Web of Science
  • 5 Crossref
Abstract
For the past 20 years, the medical education accreditation program of the Korean Institute of Medical Education and Evaluation (KIMEE) has contributed significantly to the standardization and improvement of the quality of basic medical education in Korea. It should now contribute to establishing and promoting the future of medical education. The Accreditation Standards of KIMEE 2019 (ASK2019) have been adopted since 2019, with the goal of achieving world-class medical education by applying a learner-centered curriculum using a continuum framework for the 3 phases of formal medical education: basic medical education, postgraduate medical education, and continuing professional development. ASK2019 will also be able to promote medical education that meets community needs and employs systematic assessments throughout the education process. These are important changes that can be used to gauge the future of the medical education accreditation system. Furthermore, globalization, inter-professional education, health systems science, and regular self-assessment systems are emerging as essential topics for the future of medical education. It is time for the medical education accreditation system in Korea to observe and adopt new trends in global medical education.

Citations

Citations to this article as recorded by  
  • Analyzing the characteristics of mission statements in Korean medical schools based on the Korean Doctor’s Role framework
    Ye Ji Kang, Soomin Lee, Hyo Jeong Lee, Do-Hwan Kim
    Korean Journal of Medical Education.2024; 36(1): 99.     CrossRef
  • Accreditation standards items of post-2nd cycle related to the decision of accreditation of medical schools by the Korean Institute of Medical Education and Evaluation
    Kwi Hwa Park, Geon Ho Lee, Su Jin Chae, Seong Yong Kim
    Korean Journal of Medical Education.2023; 35(1): 1.     CrossRef
  • Continuing Professional Development of Pharmacists and The Roles of Pharmacy Schools
    Hyemin Park, Jeong-Hyun Yoon
    Korean Journal of Clinical Pharmacy.2022; 32(4): 281.     CrossRef
  • Definition of character for medical education based on expert opinions in Korea
    Yera Hur
    Journal of Educational Evaluation for Health Professions.2021; 18: 26.     CrossRef
  • Special reviews on the history and future of the Korean Institute of Medical Education and Evaluation to memorialize its collaboration with the Korea Health Personnel Licensing Examination Institute to designate JEEHP as a co-official journal
    Sun Huh
    Journal of Educational Evaluation for Health Professions.2020; 17: 33.     CrossRef
Is accreditation in medical education in Korea an opportunity or a burden?  
Hanna Jung, Woo Taek Jeon, Shinki An
J Educ Eval Health Prof. 2020;17:31.   Published online October 21, 2020
DOI: https://doi.org/10.3352/jeehp.2020.17.31
  • 4,937 View
  • 126 Download
  • 3 Web of Science
  • 8 Crossref
Abstract
The accreditation process is both an opportunity and a burden for medical schools in Korea. The line that separates the two is based on how medical schools recognize and utilize the accreditation process. In other words, accreditation is a burden for medical schools if they view the accreditation process as merely a formal procedure or a means to maintain accreditation status for medical education. However, if medical schools acknowledge the positive value of the accreditation process, accreditation can be both an opportunity and a tool for developing medical education. The accreditation process has educational value by catalyzing improvements in the quality, equity, and efficiency of medical education and by increasing the available options. For the accreditation process to contribute to medical education development, accrediting agencies and medical schools must first be recognized as partners of an educational alliance working together towards common goals. Secondly, clear guidelines on accreditation standards should be periodically reviewed and shared. Finally, a formative self-evaluation process must be introduced for institutions to utilize the accreditation process as an opportunity to develop medical education. This evaluation system could be developed through collaboration among medical schools, academic societies for medical education, and the accrediting authority.

Citations

Citations to this article as recorded by  
  • To prove or improve? Examining how paradoxical tensions shape evaluation practices in accreditation contexts
    Betty Onyura, Abigail J. Fisher, Qian Wu, Shrutikaa Rajkumar, Sarick Chapagain, Judith Nassuna, David Rojas, Latika Nirula
    Medical Education.2024; 58(3): 354.     CrossRef
  • ASPIRE for excellence in curriculum development
    John Jenkins, Sharon Peters, Peter McCrorie
    Medical Teacher.2024; : 1.     CrossRef
  • Accreditation standards items of post-2nd cycle related to the decision of accreditation of medical schools by the Korean Institute of Medical Education and Evaluation
    Kwi Hwa Park, Geon Ho Lee, Su Jin Chae, Seong Yong Kim
    Korean Journal of Medical Education.2023; 35(1): 1.     CrossRef
  • The Need for the Standards for Anatomy Labs in Medical School Evaluation and Accreditation
    Yu-Ran Heo, Jae-Ho Lee
    Anatomy & Biological Anthropology.2023; 36(3): 81.     CrossRef
  • Seal of Approval or Ticket to Triumph? The Impact of Accreditation on Medical Student Performance in Foreign Medical Council Examinations
    Saurabh RamBihariLal Shrivastava, Titi Savitri Prihatiningsih, Kresna Lintang Pratidina
    Indian Journal of Medical Specialities.2023; 14(4): 249.     CrossRef
  • Internal evaluation in the faculties affiliated to zanjan university of medical sciences: Quality assurance of medical science education based on institutional accreditation
    Alireza Abdanipour, Farhad Ramezani‐Badr, Ali Norouzi, Mehdi Ghaemi
    Journal of Medical Education Development.2022; 15(46): 61.     CrossRef
  • Development of Mission and Vision of College of Korean Medicine Using the Delphi Techniques and Big-Data Analysis
    Sanghee Yeo, Seong Hun Choi, Su Jin Chae
    Journal of Korean Medicine.2021; 42(4): 176.     CrossRef
  • Special reviews on the history and future of the Korean Institute of Medical Education and Evaluation to memorialize its collaboration with the Korea Health Personnel Licensing Examination Institute to designate JEEHP as a co-official journal
    Sun Huh
    Journal of Educational Evaluation for Health Professions.2020; 17: 33.     CrossRef
Research articles
Development of a checklist to validate the framework of a narrative medicine program based on Gagne’s instructional design model in Iran through consensus of a multidisciplinary expert panel  
Saeideh Daryazadeh, Nikoo Yamani, Payman Adibi
J Educ Eval Health Prof. 2019;16:34.   Published online October 31, 2019
DOI: https://doi.org/10.3352/jeehp.2019.16.34
  • 8,625 View
  • 167 Download
  • 1 Web of Science
  • 2 Crossref
Abstract
Purpose
Narrative medicine is a patient-centered approach focusing on the development of narrative skills and self-awareness that incorporates “attending, representing, and affiliating” in clinical encounters. Acquiring narrative competency promotes clinical performance, and narratives can be used for teaching professionalism, empathy, multicultural education, and professional development. This study was conducted to develop a checklist to validate the framework of a narrative medicine program through consensus of a panel.
Methods
This expert panel study was conducted from 2018 to 2019 at Isfahan University of Medical Sciences, Iran. It included 2 phases: developing a framework in 2 steps and forming an expert panel to validate the framework in 3 rounds. We adapted a 3-stage narrative medicine model with 9 training activities from Gagne’s theory, developed a framework, and then produced a checklist to validate the framework in a multidisciplinary expert panel that consisted of 7 experts. The RAND/UCLA appropriateness method was used to assess the experts’ agreement. The first-round opinions were received by email. Consensus was achieved in the second and third rounds through face-to-face meetings to facilitate interactions and discussion among the experts.
Results
Sixteen valid indicators were approved and 100% agreement was obtained among experts (with median values in the range of 7–9 out of a maximum of 9, with no disagreement), and the framework was validated by the expert panel.
Conclusion
The 16 checklist indicators can be used to evaluate narrative medicine programs as a simple and practical guide to improve teaching effectiveness and promote life-long learning.

Citations

Citations to this article as recorded by  
  • Challenges of Implementing the First Narrative Medicine Course for Teaching Professionalism in Iran: A Qualitative Content Analysis
    Saeideh Daryazadeh, Payman Adibi, Nikoo Yamani
    Educational Research in Medical Sciences.2022;[Epub]     CrossRef
  • Impact of a narrative medicine program on reflective capacity and empathy of medical students in Iran
    Saeideh Daryazadeh, Payman Adibi, Nikoo Yamani, Roya Mollabashi
    Journal of Educational Evaluation for Health Professions.2020; 17: 3.     CrossRef
Development of a self-assessment tool for resident doctors’ communication skills in India  
Upendra Baitha, Piyush Ranjan, Siddharth Sarkar, Charu Arora, Archana Kumari, Sada Nand Dwivedi, Asmita Patil, Nayer Jamshed
J Educ Eval Health Prof. 2019;16:17.   Published online June 24, 2019
DOI: https://doi.org/10.3352/jeehp.2019.16.17
  • 14,272 View
  • 261 Download
  • 9 Web of Science
  • 7 Crossref
Abstract
Purpose
Effective communication skills are essential for resident doctors to provide optimum patient care. This study was conducted to develop and validate a questionnaire for the self-assessment of resident doctors’ communication skills in India.
Methods
This was a mixed-methods study conducted in 2 phases. The first phase consisted of questionnaire development, including the identification of relevant literature, focus group discussions with residents and experts from clinical specialties, and pre-testing of the questionnaire. The second phase involved administering the questionnaire survey to 95 residents from the Departments of Medicine, Emergency Medicine, Pediatrics, and Surgery at the All India Institute of Medical Sciences, New Delhi, India in April 2019. Internal consistency was tested and the factor structure was analyzed to test construct validity.
Results
The questionnaire consisted of 3 sections: (A) 4 items on doctor-patient conflicts and the role of communication skills in avoiding these conflicts, (B) 29 items on self-assessment of communication skills in different settings, and (C) 8 items on barriers to practicing good communication skills. Sections B and C had good internal consistency (Cronbach α: 0.885 and 0.771, respectively). Section C had a 2-factor solution, and the barriers were classified as ‘training’ and ‘infrastructure’ factors.
Conclusion
The questionnaire appears to be a valid tool for assessing resident doctors’ communication skills, with potential utility for identifying gaps in communication skills and developing communication skills modules.
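The internal-consistency figures above are Cronbach α values, which can be reproduced from a respondents-by-items score matrix with the standard formula; a minimal sketch, assuming complete responses (the function name is illustrative, not from the study):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha from an (n respondents x k items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

Alpha approaches 1 when items co-vary strongly (as with section B's 0.885) and falls as items become independent, which is why it is read as a measure of how consistently a section's items tap one construct.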

Citations

Citations to this article as recorded by  
  • Leveraging the vantage point – exploring nurses’ perception of residents’ communication skills: a mixed-methods study
    Komal Abdul Rahim, Maryam Pyar Ali Lakhdir, Noreen Afzal, Asma Altaf Hussain Merchant, Namra Qadeer Shaikh, Ali Aahil Noorali, Umar Tariq, Rida Ahmad, Saqib Kamran Bakhshi, Saad bin Zafar Mahmood, Muhammad Rizwan Khan, Muhammed Tariq, Adil H. Haider
    BMC Medical Education.2023;[Epub]     CrossRef
  • Developing a communication-skills training curriculum for resident-physicians to enhance patient outcomes at an academic medical centre: an ongoing mixed-methods study protocol
    Hamna Shahbaz, Ali Aahil Noorali, Maha Inam, Namra Qadeer, Asma Altaf Hussain Merchant, Adnan Ali Khan, Noreen Afzal, Komal Abdul Rahim, Ibrahim Munaf, Rida Ahmad, Muhammad Tariq, Adil H Haider
    BMJ Open.2022; 12(8): e056840.     CrossRef
  • A cross-sectional evaluation of communication skills and perceived barriers among the resident doctors at a tertiary care center in India
    Amandeep Singh, Piyush Ranjan, Archana Kumari, Siddharth Sarkar, Tanveer Kaur, Ramesh Aggarwal, AshishDatt Upadhyay, Biswaroop Chakrawarty, Jamshed Nayer, Mohit Joshi, Avinash Chakrawarty
    Journal of Education and Health Promotion.2022; 11(1): 425.     CrossRef
  • Development and validation of a questionnaire to assess preventive practices against COVID-19 pandemic in the general population
    Ayush Agarwal, Piyush Ranjan, Priyanka Rohilla, Yellamraju Saikaustubh, Anamika Sahu, Sada Nand Dwivedi, Aakansha, Upendra Baitha, Arvind Kumar
    Preventive Medicine Reports.2021; 22: 101339.     CrossRef
  • Development and Validation of a Comprehensive Questionnaire to Assess Interpersonal Discord (Bullying, Harassment, and Discrimination) at the Workplace in a Healthcare Setting
    Amandeep Singh, Piyush Ranjan, Tanveer Kaur, Siddharth Sarkar, Ashish D Upadhyay, Upendra Baitha, Prayas Sethi, Ranveer S Jadon, Pankaj Jorwal
    Cureus.2021;[Epub]     CrossRef
  • Development and Validation of a Questionnaire to Evaluate Workplace Violence in Healthcare Settings
    Archana Kumari, Amandeep Singh, Piyush Ranjan, Siddharth Sarkar, Tanveer Kaur, Ashish D Upadhyay, Kirti Verma, Vignan Kappagantu, Ajay Mohan, Upendra Baitha
    Cureus.2021;[Epub]     CrossRef
  • The value of communicating with patients in their first language
    Piyush Ranjan, Archana Kumari, Charu Arora
    Expert Review of Pharmacoeconomics & Outcomes Research.2020; 20(6): 559.     CrossRef
Educational/faculty development material
Analysis of the Clinical Education Situation framework: a tool for identifying the root cause of student failure in the United States  
Katherine Myers, Kyle Covington
J Educ Eval Health Prof. 2019;16:11.   Published online May 10, 2019
DOI: https://doi.org/10.3352/jeehp.2019.16.11
  • 14,905 View
  • 259 Download
  • 1 Web of Science
  • 3 Crossref
Abstract
Doctor of physical therapy preparation requires extensive time in precepted clinical education, which involves multiple stakeholders. Student outcomes in clinical education are impacted by many factors, and, in the case of failure, it can be challenging to determine which factors played a primary role in the poor result. Using existing root-cause analysis processes, the authors developed and implemented a framework designed to identify the causes of student failure in clinical education. This framework, when applied to a specific student failure event, can be used to identify the factors that contributed to the situation and to reveal opportunities for improvement in both the clinical and academic environments. A root-cause analysis framework can help to drive change at the programmatic level, and future studies should focus on the framework’s application to a variety of clinical and didactic settings.

Citations

Citations to this article as recorded by  
  • Applying the 2022 Cardiovascular and Pulmonary Entry-Level Physical Therapist Competencies to Physical Therapist Education and Practice
    Nancy Smith, Angela Campbell, Morgan Johanson, Pamela Bartlo, Naomi Bauer, Sagan Everett
    Journal of Physical Therapy Education.2023; 37(3): 165.     CrossRef
  • Cardiovascular and Pulmonary Entry-Level Physical Therapist Competencies: Update by Academy of Cardiovascular & Pulmonary Physical Therapy Task Force
    Morgan Johanson, Pamela Bartlo, Naomi Bauer, Angela Campbell, Sagan Everett, Nancy Smith
    Cardiopulmonary Physical Therapy Journal.2023; 34(4): 183.     CrossRef
  • The situational analysis of teaching-learning in clinical education in Iran: a postmodern grounded theory study
    Soleiman Ahmady, Hamed Khani
    BMC Medical Education.2022;[Epub]     CrossRef
Research articles
Medical students’ thought process while solving problems in 3 different types of clinical assessments in Korea: clinical performance examination, multimedia case-based assessment, and modified essay question  
Sejin Kim, Ikseon Choi, Bo Young Yoon, Min Jeong Kwon, Seok-jin Choi, Sang Hyun Kim, Jong-Tae Lee, Byoung Doo Rhee
J Educ Eval Health Prof. 2019;16:10.   Published online May 9, 2019
DOI: https://doi.org/10.3352/jeehp.2019.16.10
  • 16,377 View
  • 271 Download
  • 3 Crossref
Abstract PDF Supplementary Material
Purpose
This study aimed to explore students’ cognitive patterns while solving clinical problems in 3 different types of assessments—clinical performance examination (CPX), multimedia case-based assessment (CBA), and modified essay question (MEQ)—and thereby to understand how different types of assessments stimulate different patterns of thinking.
Methods
A total of 6 test-performance cases from 2 fourth-year medical students were used in this cross-case study. Data were collected through one-on-one interviews using a stimulated recall protocol where students were shown videos of themselves taking each assessment and asked to elaborate on what they were thinking. The unit of analysis was the smallest phrases or sentences in the participants’ narratives that represented meaningful cognitive occurrences. The narrative data were reorganized chronologically and then analyzed according to the hypothetico-deductive reasoning framework for clinical reasoning.
Results
Both participants demonstrated similar proportional frequencies of clinical reasoning patterns on the same clinical assessments. The results also revealed that the 3 different assessment types may stimulate different patterns of clinical reasoning. For example, the CPX strongly promoted the participants’ reasoning related to inquiry strategy, while the MEQ strongly promoted hypothesis generation. Similarly, data analysis and synthesis by the participants were more strongly stimulated by the CBA than by the other assessment types.
Conclusion
This study found that different assessment designs stimulated different patterns of thinking during problem-solving. This finding can contribute to the search for ways to improve current clinical assessments. Importantly, the research method used in this study can be utilized as an alternative way to examine the validity of clinical assessments.

Citations

Citations to this article as recorded by  
  • Future directions of online learning environment design at medical schools: a transition towards a post-pandemic context
    Sejin Kim
    Kosin Medical Journal.2023; 38(1): 12.     CrossRef
  • Clinical Reasoning Training based on the analysis of clinical case using a virtual environment
    Sandra Elena Lisperguer Soto, María Soledad Calvo, Gabriela Paz Urrejola Contreras, Miguel Ángel Pérez Lizama
    Educación Médica.2021; 22(3): 139.     CrossRef
  • Newly appointed medical faculty members’ self-evaluation of their educational roles at the Catholic University of Korea College of Medicine in 2020 and 2021: a cross-sectional survey-based study
    Sun Kim, A Ra Cho, Chul Woon Chung
    Journal of Educational Evaluation for Health Professions.2021; 18: 28.     CrossRef
Comparison of the level of cognitive processing between case-based items and non-case-based items on the Interuniversity Progress Test of Medicine in the Netherlands  
Dario Cecilio-Fernandes, Wouter Kerdijk, Andreas Johannes Bremers, Wytze Aalders, René Anton Tio
J Educ Eval Health Prof. 2018;15:28.   Published online December 12, 2018
DOI: https://doi.org/10.3352/jeehp.2018.15.28
  • 18,929 View
  • 196 Download
  • 8 Web of Science
  • 8 Crossref
Abstract PDF Supplementary Material
Purpose
It is assumed that case-based questions require higher-order cognitive processing, whereas questions that are not case-based require lower-order cognitive processing. In this study, we investigated to what extent case-based and non-case-based questions followed this assumption based on Bloom’s taxonomy.
Methods
In this article, 4,800 questions from the Interuniversity Progress Test of Medicine were classified based on whether they were case-based and on the level of Bloom’s taxonomy that they involved. Lower-order questions require students to remember and/or show a basic understanding of knowledge. Higher-order questions require students to apply, analyze, and/or evaluate. The phi coefficient was calculated to investigate the relationship between whether questions were case-based and the required level of cognitive processing.
Results
Our results demonstrated that 98.1% of case-based questions required higher-level cognitive processing. Of the non-case-based questions, 33.7% required higher-level cognitive processing. The phi coefficient demonstrated a significant, but moderate correlation between the presence of a patient case in a question and its required level of cognitive processing (phi coefficient= 0.55, P< 0.001).
Conclusion
Medical instructors should be aware of the association between item format (case-based versus non-case-based) and the cognitive processes they elicit in order to meet the desired balance in a test, taking the learning objectives and the test difficulty into account.
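The phi coefficient reported above is simply a correlation between two binary variables (item format and required cognitive level), computable directly from a 2×2 contingency table. A minimal stdlib-Python sketch of that calculation, using hypothetical counts for illustration rather than the study’s actual data:

```python
import math

def phi_coefficient(a, b, c, d):
    """Phi coefficient for a 2x2 contingency table laid out as:
        a = case-based & higher-order      b = case-based & lower-order
        c = non-case-based & higher-order  d = non-case-based & lower-order
    """
    num = a * d - b * c
    den = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return num / den

# Hypothetical counts (NOT the study's data): 490 of 500 case-based items
# higher-order, 170 of 500 non-case-based items higher-order.
phi = phi_coefficient(490, 10, 170, 330)
```

With these illustrative counts phi is about 0.68; a moderate value such as the 0.55 reported above is consistent with a sizable minority of non-case-based items nevertheless requiring higher-order processing.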

Citations

Citations to this article as recorded by  
  • Progress is impossible without change: implementing automatic item generation in medical knowledge progress testing
    Filipe Manuel Vidal Falcão, Daniela S.M. Pereira, José Miguel Pêgo, Patrício Costa
    Education and Information Technologies.2024; 29(4): 4505.     CrossRef
  • Identifying the response process validity of clinical vignette-type multiple choice questions: An eye-tracking study
    Francisco Carlos Specian Junior, Thiago Martins Santos, John Sandars, Eliana Martorano Amaral, Dario Cecilio-Fernandes
    Medical Teacher.2023; 45(8): 845.     CrossRef
  • Relationship between medical programme progress test performance and surgical clinical attachment timing and performance
    Andy Wearn, Vanshay Bindra, Bradley Patten, Benjamin P. T. Loveday
    Medical Teacher.2023; 45(8): 877.     CrossRef
  • Analysis of Orthopaedic In-Training Examination Trauma Questions: 2017 to 2021
    Lilah Fones, Daryl C. Osbahr, Daniel E. Davis, Andrew M. Star, Atif K. Ahmed, Arjun Saxena
    JAAOS: Global Research and Reviews.2023;[Epub]     CrossRef
  • Use of Sociodemographic Information in Clinical Vignettes of Multiple-Choice Questions for Preclinical Medical Students
    Kelly Carey-Ewend, Amir Feinberg, Alexis Flen, Clark Williamson, Carmen Gutierrez, Samuel Cykert, Gary L. Beck Dallaghan, Kurt O. Gilliland
    Medical Science Educator.2023; 33(3): 659.     CrossRef
  • What faculty write versus what students see? Perspectives on multiple-choice questions using Bloom’s taxonomy
    Seetha U. Monrad, Nikki L. Bibler Zaidi, Karri L. Grob, Joshua B. Kurtz, Andrew W. Tai, Michael Hortsch, Larry D. Gruppen, Sally A. Santen
    Medical Teacher.2021; 43(5): 575.     CrossRef
  • Aménagement du concours de première année commune aux études de santé (PACES) : entre justice sociale et éthique confraternelle en devenir ?
    R. Pougnet, L. Pougnet
    Éthique & Santé.2020; 17(4): 250.     CrossRef
  • Knowledge of dental faculty in gulf cooperation council states of multiple-choice questions’ item writing flaws
    Mawlood Kowash, Hazza Alhobeira, Iyad Hussein, Manal Al Halabi, Saif Khan
    Medical Education Online.2020;[Epub]     CrossRef
Agreement between 2 raters’ evaluations of a traditional prosthodontic practical exam integrated with directly observed procedural skills in Egypt  
Ahmed Khalifa Khalifa, Salah Hegazy
J Educ Eval Health Prof. 2018;15:23.   Published online September 27, 2018
DOI: https://doi.org/10.3352/jeehp.2018.15.23
  • 23,862 View
  • 205 Download
  • 2 Web of Science
  • 2 Crossref
Abstract PDF Supplementary Material
Purpose
This study aimed to assess the agreement between 2 raters in evaluations of students on a prosthodontic clinical practical exam integrated with directly observed procedural skills (DOPS).
Methods
A sample of 76 students was monitored by 2 raters to evaluate the process and the final registered maxillomandibular relation for a completely edentulous patient at Mansoura Dental School, Egypt on a practical exam of bachelor’s students from May 15 to June 28, 2017. Each registered relation was evaluated from a total of 60 marks subdivided into 3 score categories: occlusal plane orientation (OPO), vertical dimension registration (VDR), and centric relation registration (CRR). The marks for each category included an assessment of DOPS. The OPO and VDR marks of both raters were compared graphically, with reliability assessed by Bland-Altman analysis. The reliability of the CRR marks was evaluated with Krippendorff’s alpha coefficient.
Results
The results revealed highly similar marks between raters for OPO (mean= 18.1 for both raters), with close limits of agreement (0.73 and −0.78). For VDR, the mean marks were close (mean= 17.4 and 17.1 for examiners 1 and 2, respectively), with close limits of agreement (2.7 and −2.2). There was strong agreement (Krippendorff’s alpha, 0.92; 95% confidence interval, 0.79–0.99) between the raters in the evaluation of CRR.
Conclusion
The 2 raters’ evaluations of a traditional clinical practical exam integrated with DOPS showed no significant differences at the end of a clinical prosthodontic course. The limits of agreement between raters could be optimized by excluding subjective evaluation parameters and complicated cases from the examination procedure.
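The Bland-Altman limits of agreement quoted above are the mean of the paired rater differences plus or minus 1.96 standard deviations of those differences. A minimal stdlib-Python sketch, using hypothetical paired marks rather than the study’s data:

```python
import statistics

def limits_of_agreement(rater1, rater2):
    """Bland-Altman analysis: mean difference (bias) and 95% limits
    of agreement between two raters' paired marks."""
    diffs = [x - y for x, y in zip(rater1, rater2)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample standard deviation
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical OPO marks (out of 20) for 6 candidates, NOT study data
r1 = [18, 17, 19, 16, 18, 20]
r2 = [18, 18, 19, 16, 17, 20]
bias, lower, upper = limits_of_agreement(r1, r2)
```

Narrow limits around a bias near zero, as in the OPO results above, indicate that the two raters can be used interchangeably for practical purposes.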

Citations

Citations to this article as recorded by  
  • In‐person and virtual assessment of oral radiology skills and competences by the Objective Structured Clinical Examination
    Fernanda R. Porto, Mateus A. Ribeiro, Luciano A. Ferreira, Rodrigo G. Oliveira, Karina L. Devito
    Journal of Dental Education.2023; 87(4): 505.     CrossRef
  • Evaluation agreement between peer assessors, supervisors, and parents in assessing communication and interpersonal skills of students of pediatric dentistry
    Jin Asari, Maiko Fujita-Ohtani, Kuniomi Nakamura, Tomomi Nakamura, Yoshinori Inoue, Shigenari Kimoto
    Pediatric Dental Journal.2023; 33(2): 133.     CrossRef
Learning through multiple lenses: analysis of self, peer, near-peer, and faculty assessments of a clinical history-taking task in Australia  
Kylie Fitzgerald, Brett Vaughan
J Educ Eval Health Prof. 2018;15:22.   Published online September 18, 2018
DOI: https://doi.org/10.3352/jeehp.2018.15.22
  • 23,375 View
  • 287 Download
  • 4 Web of Science
  • 5 Crossref
Abstract PDF Supplementary Material
Purpose
Peer assessment provides a framework for developing expected skills and receiving feedback appropriate to the learner’s level. Near-peer (NP) assessment may elevate expectations and motivate learning. Feedback from peers and NPs may be a sustainable way to enhance student assessment feedback. This study analysed relationships among self, peer, NP, and faculty marking of an assessment and students’ attitudes towards marking by those various groups.
Methods
A cross-sectional study design was used. Year 2 osteopathy students (n= 86) were invited to perform self and peer assessments of a clinical history-taking and communication skills assessment. NPs and faculty also marked the assessment. Year 2 students also completed a questionnaire on their attitudes to peer/NP marking. Descriptive statistics and the Spearman rho coefficient were used to evaluate relationships across marker groups.
Results
Year 2 students (n= 9), NPs (n= 3), and faculty (n= 5) were recruited. Correlations between self and peer (r= 0.38) and self and faculty (r= 0.43) marks were moderate. A weak correlation was observed between self and NP marks (r= 0.25). Perceptions of peer and NP marking varied, with over half of the cohort suggesting that peer or NP assessments should not contribute to their grade.
Conclusion
Framing peer and NP assessment as another feedback source may offer a sustainable method for enhancing feedback without overloading faculty resources. Multiple sources of feedback may assist in developing assessment literacy and calibrating students’ self-assessment capability. The small number of students recruited suggests some acceptability of peer and NP assessment; however, further work is required to increase its acceptability.
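The Spearman rho used above is the Pearson correlation computed on ranks, with ties given their average rank. A minimal stdlib-Python sketch of that calculation (scipy.stats.spearmanr offers the same with tie handling and a p-value); the marks below are hypothetical, not the study’s data:

```python
def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of average ranks."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(v):
            j = i
            # group tied values and assign them their average 1-based rank
            while j + 1 < len(v) and v[order[j + 1]] == v[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical self vs. faculty marks for 5 students, NOT study data
rho = spearman_rho([65, 72, 58, 80, 70], [68, 70, 60, 78, 74])
```

Values around 0.4, like the self-faculty correlation reported above, indicate only moderate agreement in rank ordering, which is one reason multiple feedback sources help calibrate self-assessment.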

Citations

Citations to this article as recorded by  
  • The extent and quality of evidence for osteopathic education: A scoping review
    Andrew MacMillan, Patrick Gauthier, Luciane Alberto, Arabella Gaunt, Rachel Ives, Chris Williams, Dr Jerry Draper-Rodi
    International Journal of Osteopathic Medicine.2023; 49: 100663.     CrossRef
  • History and physical exam: a retrospective analysis of a clinical opportunity
    David McLinden, Krista Hailstone, Sue Featherston
    BMC Medical Education.2023;[Epub]     CrossRef
  • How Accurate Are Our Students? A Meta-analytic Systematic Review on Self-assessment Scoring Accuracy
    Samuel P. León, Ernesto Panadero, Inmaculada García-Martínez
    Educational Psychology Review.2023;[Epub]     CrossRef
  • Evaluating the Academic Performance of Mustansiriyah Medical College Teaching Staff vs. Final-Year Students Failure Rates
    Wassan Nori, Wisam Akram , Saad Mubarak Rasheed, Nabeeha Najatee Akram, Taqi Mohammed Jwad Taher, Mustafa Ali Kassim Kassim, Alexandru Cosmin Pantazi
    Al-Rafidain Journal of Medical Sciences ( ISSN 2789-3219 ).2023; 5(1S): S151.     CrossRef
  • History-taking level and its influencing factors among nursing undergraduates based on the virtual standardized patient testing results: Cross sectional study
    Jingrong Du, Xiaowen Zhu, Juan Wang, Jing Zheng, Xiaomin Zhang, Ziwen Wang, Kun Li
    Nurse Education Today.2022; 111: 105312.     CrossRef

JEEHP : Journal of Educational Evaluation for Health Professions