
JEEHP : Journal of Educational Evaluation for Health Professions

OPEN ACCESS
28 articles found for "Assessment"
Review
Attraction and achievement as 2 attributes of gamification in healthcare: an evolutionary concept analysis  
Hyun Kyoung Kim
J Educ Eval Health Prof. 2024;21:10.   Published online April 11, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.10
  • 1,717 View
  • 323 Download
  • 1 Crossref
This study conducted a conceptual analysis of gamification in healthcare utilizing Rogers’ evolutionary concept analysis methodology to identify its attributes and guide its application in the healthcare field. Gamification has recently been used as a health intervention and education method, but the concept is applied inconsistently, causing confusion. A literature review was conducted to derive definitions, surrogate terms, antecedents, influencing factors, attributes (characteristics with dimensions and features), related concepts, consequences, implications, and hypotheses from various academic fields. A total of 56 journal articles in English and Korean, retrieved between August 2 and August 7, 2023, from databases such as PubMed Central, the Institute of Electrical and Electronics Engineers, the Association for Computing Machinery Digital Library, the Research Information Sharing Service, and the Korean Studies Information Service System, using the keywords “gamification” and “healthcare,” were analyzed. Gamification in healthcare is defined as the application of game elements in health-related contexts to improve health outcomes. The attributes of this concept were categorized into 2 main areas: attraction and achievement. These categories encompass various strategies for synchronization, enjoyable engagement, visual rewards, and goal-reinforcing frames. Through a multidisciplinary analysis of the concept’s attributes and influencing factors, this paper provides practical strategies for implementing gamification in health interventions. When developing a gamification strategy, healthcare providers can reference this analysis to ensure that game elements are used both appropriately and effectively.

Citations to this article as recorded by
  • Short-Term Impact of Digital Mental Health Interventions on Psychological Well-Being and Blood Sugar Control in Type 2 Diabetes Patients in Riyadh
    Abdulaziz M. Alodhialah, Ashwaq A. Almutairi, Mohammed Almutairi
    Healthcare.2024; 12(22): 2257.     CrossRef
Research articles
Use of learner-driven, formative, ad-hoc, prospective assessment of competence in physical therapist clinical education in the United States: a prospective cohort study  
Carey Holleran, Jeffrey Konrad, Barbara Norton, Tamara Burlis, Steven Ambler
J Educ Eval Health Prof. 2023;20:36.   Published online December 8, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.36
  • 1,605 View
  • 164 Download
Purpose
The purpose of this project was to implement a process for learner-driven, formative, prospective, ad-hoc, entrustment assessment in Doctor of Physical Therapy clinical education. Our goals were to develop an innovative entrustment assessment tool, and then explore whether the tool detected (1) differences between learners at different stages of development and (2) differences within learners across the course of a clinical education experience. We also investigated whether there was a relationship between the number of assessments and change in performance.
Methods
A prospective observational cohort of clinical instructors (CIs) was recruited to perform learner-driven, formative, ad-hoc, prospective entrustment assessments. Two entrustable professional activities (EPAs) were used: (1) gather a history and perform an examination and (2) implement and modify the plan of care, as needed. CIs provided a rating on the entrustment scale and narrative support for their rating.
Results
Forty-nine learners participated across 4 clinical experiences (CEs), resulting in 453 EPA learner-driven assessments. For both EPAs, statistically significant changes were detected both between learners at different stages of development and within learners across the course of a CE. Improvement within each CE was significantly related to the number of feedback opportunities.
Conclusion
The results of this pilot study provide preliminary support for the use of learner-driven, formative, ad-hoc assessments of competence based on EPAs with a novel entrustment scale. The number of formative assessments requested correlated with change on the EPA scale, suggesting that formative feedback may augment performance improvement.
Enhancement of the technical and non-technical skills of nurse anesthesia students using the Anesthetic List Management Assessment Tool in Iran: a quasi-experimental study  
Ali Khalafi, Maedeh Kordnejad, Vahid Saidkhani
J Educ Eval Health Prof. 2023;20:19.   Published online June 16, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.19
  • 1,752 View
  • 98 Download
Purpose
This study investigated the effect of evaluations based on the Anesthetic List Management Assessment Tool (ALMAT) form on improving the technical and non-technical skills of final-year nurse anesthesia students at Ahvaz Jundishapur University of Medical Sciences (AJUMS).
Methods
This was a quasi-experimental study with a pre-test and post-test design. It included 45 final-year nurse anesthesia students of AJUMS and lasted for 3 months. The technical and non-technical skills of the intervention group were assessed at 4 university hospitals using formative-feedback evaluation based on the ALMAT form, from induction of anesthesia until reaching mastery and independence. Finally, the students’ degree of improvement in technical and non-technical skills was compared between the intervention and control groups. Statistical tests (the independent t-test, paired t-test, and Mann-Whitney test) were used to analyze the data.
Results
The rate of improvement in post-test scores of technical skills was significantly higher in the intervention group than in the control group (P<0.0001). Similarly, the students in the intervention group received significantly higher post-test scores for non-technical skills than the students in the control group (P<0.0001).
Conclusion
The findings of this study showed that the use of ALMAT as a formative-feedback evaluation method to evaluate technical and non-technical skills had a significant effect on improving these skills and was effective in helping students learn and reach mastery and independence.
Empirical analysis comparing the tele-objective structured clinical examination and the in-person assessment in Australia  
Jonathan Zachary Felthun, Silas Taylor, Boaz Shulruf, Digby Wigram Allen
J Educ Eval Health Prof. 2021;18:23.   Published online September 23, 2021
DOI: https://doi.org/10.3352/jeehp.2021.18.23
  • 5,513 View
  • 233 Download
  • 4 Web of Science
  • 5 Crossref
Purpose
This study aimed to compare the use of the tele-objective structured clinical examination (teleOSCE) with in-person assessment in high-stakes clinical examinations, to determine the impact of the teleOSCE on the assessment undertaken. Discussion follows regarding which skills and domains can be effectively assessed in a teleOSCE.
Methods
This study is a retrospective observational analysis. It compares the results achieved by final year medical students in their clinical examination, assessed using the teleOSCE in 2020 (n=285), with those who were examined using the traditional in-person format in 2019 (n=280). The study was undertaken at the University of New South Wales, Australia.
Results
In the domain of physical examination, students in 2020 scored 0.277 points lower than those in 2019 (mean difference=−0.277, P<0.001, effect size=0.332). Across all other domains, there was no significant difference in mean scores between 2019 and 2020.
Conclusion
The teleOSCE does not negatively impact assessment in clinical examination in any domain except physical examination. If the teleOSCE is the future of clinical skills examination, assessment of physical examination will require concomitant workplace-based assessment.
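An effect size like the 0.332 reported above is conventionally a standardized mean difference. As a sketch of the arithmetic, here is Cohen's d with a pooled standard deviation in Python; the cohort scores are invented for illustration, and the study may have used a different effect-size formulation:

```python
from statistics import mean, variance  # variance() is the sample variance (ddof=1)

def cohens_d(a, b):
    """Cohen's d: difference in means scaled by the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / pooled_var ** 0.5

# Illustrative station scores only (not data from the study).
cohort_2019 = [7.0, 6.5, 8.0, 7.5, 6.0]
cohort_2020 = [6.5, 6.0, 7.0, 7.5, 5.5]
print(round(cohens_d(cohort_2019, cohort_2020), 3))  # 0.632
```

By common rules of thumb, a d around 0.3 (as in the study) is a small-to-moderate effect, which matches the paper's conclusion that only the physical examination domain was meaningfully affected.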

Citations to this article as recorded by
  • Feasibility and reliability of the pandemic-adapted online-onsite hybrid graduation OSCE in Japan
    Satoshi Hara, Kunio Ohta, Daisuke Aono, Toshikatsu Tamai, Makoto Kurachi, Kimikazu Sugimori, Hiroshi Mihara, Hiroshi Ichimura, Yasuhiko Yamamoto, Hideki Nomura
    Advances in Health Sciences Education.2024; 29(3): 949.     CrossRef
  • Feasibility of an online clinical assessment of competence in physiotherapy students
    Brooke Flew, Lucy Chipchase, Darren Lee, Jodie A. McClelland
    Physiotherapy Theory and Practice.2024; : 1.     CrossRef
  • Navigating digital assessments in medical education: Findings from a scoping review
    Chin-Siang Ang, Sakura Ito, Jennifer Cleland
    Medical Teacher.2024; : 1.     CrossRef
  • Radiography education in 2022 and beyond - Writing the history of the present: A narrative review
    Y.X. Tay, J.P. McNulty
    Radiography.2023; 29(2): 391.     CrossRef
  • Newly appointed medical faculty members’ self-evaluation of their educational roles at the Catholic University of Korea College of Medicine in 2020 and 2021: a cross-sectional survey-based study
    Sun Kim, A Ra Cho, Chul Woon Chung
    Journal of Educational Evaluation for Health Professions.2021; 18: 28.     CrossRef
Review
Assessment methods and the validity and reliability of measurement tools in online objective structured clinical examinations: a systematic scoping review  
Jonathan Zachary Felthun, Silas Taylor, Boaz Shulruf, Digby Wigram Allen
J Educ Eval Health Prof. 2021;18:11.   Published online June 1, 2021
DOI: https://doi.org/10.3352/jeehp.2021.18.11
  • 7,569 View
  • 416 Download
  • 12 Web of Science
  • 12 Crossref
The coronavirus disease 2019 (COVID-19) pandemic has required educators to adapt the in-person objective structured clinical examination (OSCE) to online settings in order for it to remain a critical component of the multifaceted assessment of a student’s competency. This systematic scoping review aimed to summarize the assessment methods and the validity and reliability of the measurement tools used in current online OSCE (hereafter referred to as teleOSCE) approaches. A comprehensive literature review was undertaken following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews guidelines. Articles were eligible if they reported any form of performance assessment, in any field of healthcare, delivered in an online format. Two reviewers independently screened the results and analyzed relevant studies. Eleven articles were included in the analysis. Pre-recorded videos were used in 3 studies, while observations by remote examiners through an online platform were used in 7 studies. Acceptability as perceived by students was reported in 2 studies. This systematic scoping review identified several insights garnered from implementing teleOSCEs, the components transferable from telemedicine, and the need for systematic research to establish the ideal teleOSCE framework. TeleOSCEs may be able to improve the accessibility and reproducibility of clinical assessments and equip students with the requisite skills to effectively practice telemedicine in the future.

Citations to this article as recorded by
  • Feasibility and reliability of the pandemic-adapted online-onsite hybrid graduation OSCE in Japan
    Satoshi Hara, Kunio Ohta, Daisuke Aono, Toshikatsu Tamai, Makoto Kurachi, Kimikazu Sugimori, Hiroshi Mihara, Hiroshi Ichimura, Yasuhiko Yamamoto, Hideki Nomura
    Advances in Health Sciences Education.2024; 29(3): 949.     CrossRef
  • A level playing field? Evaluation of the virtual Objective Structured Clinical Examination in Psychiatry and Addiction Medicine: A mixed methods study
    Rebecca E Reay, Paul A Maguire, Jeffrey CL Looi
    Australasian Psychiatry.2024; 32(4): 359.     CrossRef
  • Conducting an objective structured clinical examination under COVID-restricted conditions
    Andrea Gotzmann, John Boulet, Yichi Zhang, Judy McCormick, Mathieu Wojcik, Ilona Bartman, Debra Pugh
    BMC Medical Education.2024;[Epub]     CrossRef
  • The virtual Clinical Assessment of Skills and Competence: the impact and challenges of a digitised final examination
    Kenny Chu, Shivanthi Sathanandan
    BJPsych Bulletin.2023; 47(2): 110.     CrossRef
  • Virtual Learning and Assessment in Rheumatology Fellowship Training: Objective Structured Clinical Examination Revisited
    Rachel M. Wolfe, Faye N. Hant, Rumey C. Ishizawar, Lisa G. Criscione‐Schreiber, Beth L. Jonas, Kenneth S. O'Rourke, Marcy B. Bolster
    Arthritis Care & Research.2023; 75(12): 2435.     CrossRef
  • Innovations in assessment in health professions education during the COVID‐19 pandemic: A scoping review
    Jamal Giri, Claire Stewart
    The Clinical Teacher.2023;[Epub]     CrossRef
  • Evaluation of the Utility of Online Objective Structured Clinical Examination Conducted During the COVID-19 Pandemic
    Mona Arekat, Mohamed Hany Shehata, Abdelhalim Deifalla, Ahmed Al-Ansari, Archana Kumar, Mohamed Alsenbesy, Hamdi Alshenawi, Amgad El-Agroudy, Mariwan Husni, Diaa Rizk, Abdelaziz Elamin, Afif Ben Salah, Hani Atwa
    Advances in Medical Education and Practice.2022; Volume 13: 407.     CrossRef
  • Comparison of student pharmacists' performance on in-person vs. virtual OSCEs in a pre-APPE capstone course
    Justine S. Gortney, Joseph P. Fava, Andrew D. Berti, Brittany Stewart
    Currents in Pharmacy Teaching and Learning.2022; 14(9): 1116.     CrossRef
  • Is online objective structured clinical examination teaching an acceptable replacement in post-COVID-19 medical education in the United Kingdom?: a descriptive study
    Vashist Motkur, Aniket Bharadwaj, Nimalesh Yogarajah
    Journal of Educational Evaluation for Health Professions.2022; 19: 30.     CrossRef
  • Equal Z standard-setting method to estimate the minimum number of panelists for a medical school’s objective structured clinical examination in Taiwan: a simulation study
    Ying-Ying Yang, Pin-Hsiang Huang, Ling-Yu Yang, Chia-Chang Huang, Chih-Wei Liu, Shiau-Shian Huang, Chen-Huan Chen, Fa-Yauh Lee, Shou-Yen Kao, Boaz Shulruf
    Journal of Educational Evaluation for Health Professions.2022; 19: 27.     CrossRef
  • Applying the Student Response System in the Online Dermatologic Video Curriculum on Medical Students' Interaction and Learning Outcomes during the COVID-19 Pandemic
    Chih-Tsung Hung, Shao-An Fang, Feng-Cheng Liu, Chih-Hsiung Hsu, Ting-Yu Yu, Wei-Ming Wang
    Indian Journal of Dermatology.2022; 67(4): 477.     CrossRef
  • Empirical analysis comparing the tele-objective structured clinical examination and the in-person assessment in Australia
    Jonathan Zachary Felthun, Silas Taylor, Boaz Shulruf, Digby Wigram Allen
    Journal of Educational Evaluation for Health Professions.2021; 18: 23.     CrossRef
Research articles
Development and validation of a portfolio assessment system for medical schools in Korea  
Dong Mi Yoo, A Ra Cho, Sun Kim
J Educ Eval Health Prof. 2020;17:39.   Published online December 9, 2020
DOI: https://doi.org/10.3352/jeehp.2020.17.39
  • 5,493 View
  • 223 Download
  • 3 Web of Science
  • 4 Crossref
Purpose
Consistent evaluation procedures based on objective and rational standards are essential for the sustainability of portfolio-based education, which has been widely introduced in medical education. We aimed to develop and implement a portfolio assessment system, and to assess its validity and reliability.
Methods
We developed a portfolio assessment system from March 2019 to August 2019 and confirmed its content validity through review by an expert group comprising 2 medical education specialists, 2 professors involved in education at a medical school, and a professor of basic medical science. Six trained assessors conducted 2 rounds of evaluation of 7 randomly selected portfolios for the “Self-Development and Portfolio II” course from January 2020 to July 2020. Using these data, inter-rater reliability was evaluated with intra-class correlation coefficients (ICCs) in September 2020.
Results
The portfolio assessment system is based on the following process: assessor selection, training, analytical/comprehensive evaluation, and consensus. Appropriately trained assessors evaluated portfolios based on specific assessment criteria and a rubric for assigning points. In the first round of evaluation, all assessment areas except “goal-setting” showed high inter-rater reliability, with ICCs of 0.81 or higher. After the first round of assessment, we standardized the assessment procedures to make them more objective. As a result, all components of the assessments showed close correlations, with ICCs of 0.81 or higher.
Conclusion
We confirmed that when appropriately trained assessors conduct portfolio assessments based on specified standards through a systematic procedure, the results are reliable.
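The ICC threshold cited above (0.81 or higher) comes from a standard two-way ANOVA decomposition of the rating matrix. One common form is ICC(2,1) (two-way random effects, absolute agreement, single rater); the study does not state which form it used, so treat this as an illustrative sketch, with invented ratings rather than the study's data:

```python
def icc2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `scores` is a list of rows, one per rated portfolio, one column per assessor."""
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(row[j] for row in scores) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # between-portfolio
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # between-assessor
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))  # residual
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Illustrative ratings: 4 portfolios rated by 2 assessors (not study data).
portfolios = [[9, 8], [5, 6], [7, 7], [3, 4]]
print(round(icc2_1(portfolios), 3))  # high agreement, above the 0.81 threshold
```

Intuitively, the coefficient is high when portfolios differ from each other far more than assessors differ about any one portfolio, which is exactly the property the training and standardization rounds were meant to produce.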

Citations to this article as recorded by
  • Development and integration of a clinical dashboard within a dental school setting
    Fatemeh S. Afshari, Judy Chia‐Chun Yuan, Cortino Sukotjo, Susan A. Rowan, Michael L. Spector
    Journal of Dental Education.2024; 88(11): 1539.     CrossRef
  • Development of an electronic learning progression dashboard to monitor student clinical experiences
    Hollis Lai, Nazila Ameli, Steven Patterson, Anthea Senior, Doris Lunardon
    Journal of Dental Education.2022; 86(6): 759.     CrossRef
  • Medical Student Portfolios: A Systematic Scoping Review
    Rei Tan, Jacquelin Jia Qi Ting, Daniel Zhihao Hong, Annabelle Jia Sing Lim, Yun Ting Ong, Anushka Pisupati, Eleanor Jia Xin Chong, Min Chiam, Alexia Sze Inn Lee, Laura Hui Shuen Tan, Annelissa Mien Chew Chin, Limin Wijaya, Warren Fong, Lalit Kumar Radha K
    Journal of Medical Education and Curricular Development.2022;[Epub]     CrossRef
  • Development of Teaching and Learning Manual for Competency-Based Practice for Meridian & Acupuncture Points Class
    Eunbyul Cho, Jiseong Hong, Yeonkyeong Nam, Haegue Shin, Jae-Hyo Kim
    Korean Journal of Acupuncture.2022; 39(4): 184.     CrossRef
Estimation of item parameters and examinees’ mastery probability in each domain of the Korean Medical Licensing Examination using a deterministic inputs, noisy “and” gate (DINA) model  
Younyoung Choi, Dong Gi Seo
J Educ Eval Health Prof. 2020;17:35.   Published online November 17, 2020
DOI: https://doi.org/10.3352/jeehp.2020.17.35
  • 5,330 View
  • 102 Download
  • 1 Crossref
Purpose
The deterministic inputs, noisy “and” gate (DINA) model is a promising statistical method for providing useful diagnostic information about students’ level of achievement, as educators often want to receive diagnostic information on how examinees did on each content strand, which is referred to as a diagnostic profile. The purpose of this paper was to classify examinees of the Korean Medical Licensing Examination (KMLE) in different content domains using the DINA model.
Methods
This paper analyzed data from the KMLE, with 360 items and 3,259 examinees. An application study was conducted to estimate examinees’ parameters and item characteristics. The guessing and slipping parameters of each item were estimated, and statistical analysis was conducted using the DINA model.
Results
The output table shows examples of some items that can be used to check item quality. The probabilities of mastery of each content domain were also estimated, indicating the mastery profile of each examinee. The classification accuracy and consistency for 8 content domains ranged from 0.849 to 0.972 and from 0.839 to 0.994, respectively. As a result, the classification reliability of the cognitive diagnosis model was very high for the 8 content domains of the KMLE.
Conclusion
This mastery profile can provide useful diagnostic information for each examinee in terms of each content domain of the KMLE. Individual mastery profiles allow educators and examinees to understand which domain(s) should be improved in order to master all domains in the KMLE. In addition, all items showed reasonable results in terms of item parameters.
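The guessing and slipping parameters estimated above determine the DINA model's item response probability: an examinee who has mastered every attribute an item requires (latent response η=1) answers correctly with probability 1−s, while all other examinees succeed only with probability g. A minimal sketch of this rule, with attribute patterns and parameter values that are illustrative rather than KMLE estimates:

```python
def dina_p_correct(alpha, q_row, guess, slip):
    """P(correct) under the DINA model.
    alpha: examinee's binary attribute-mastery vector.
    q_row: the item's row of the Q-matrix (1 = attribute required).
    An examinee mastering every required attribute succeeds unless they
    'slip'; anyone else can only 'guess'."""
    eta = all(a >= q for a, q in zip(alpha, q_row))  # 1 iff all required attributes mastered
    return (1 - slip) if eta else guess

# Illustrative only: item requires attributes 0 and 2 of 3.
q_row = [1, 0, 1]
master = [1, 1, 1]   # masters everything
partial = [1, 1, 0]  # lacks required attribute 2
print(dina_p_correct(master, q_row, guess=0.2, slip=0.1))   # 0.9
print(dina_p_correct(partial, q_row, guess=0.2, slip=0.1))  # 0.2
```

The "and" in the model's name reflects the conjunctive `all(...)` step: lacking even one required attribute drops the success probability all the way to the guessing parameter, which is what makes the per-domain mastery profiles diagnostically sharp.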

Citations to this article as recorded by
  • Large-Scale Parallel Cognitive Diagnostic Test Assembly Using A Dual-Stage Differential Evolution-Based Approach
    Xi Cao, Ying Lin, Dong Liu, Henry Been-Lirn Duh, Jun Zhang
    IEEE Transactions on Artificial Intelligence.2024; 5(6): 3120.     CrossRef
Reviews
A proposal for the future of medical education accreditation in Korea  
Ki-Young Lim
J Educ Eval Health Prof. 2020;17:32.   Published online October 21, 2020
DOI: https://doi.org/10.3352/jeehp.2020.17.32
  • 5,240 View
  • 132 Download
  • 3 Web of Science
  • 6 Crossref
For the past 20 years, the medical education accreditation program of the Korean Institute of Medical Education and Evaluation (KIMEE) has contributed significantly to the standardization and improvement of the quality of basic medical education in Korea. It should now contribute to establishing and promoting the future of medical education. The Accreditation Standards of KIMEE 2019 (ASK2019) have been adopted since 2019, with the goal of achieving world-class medical education by applying a learner-centered curriculum using a continuum framework for the 3 phases of formal medical education: basic medical education, postgraduate medical education, and continuing professional development. ASK2019 will also be able to promote medical education that meets community needs and employs systematic assessments throughout the education process. These are important changes that can be used to gauge the future of the medical education accreditation system. Furthermore, globalization, inter-professional education, health systems science, and regular self-assessment systems are emerging as essential topics for the future of medical education. It is time for the medical education accreditation system in Korea to observe and adopt new trends in global medical education.

Citations to this article as recorded by
  • Analyzing the characteristics of mission statements in Korean medical schools based on the Korean Doctor’s Role framework
    Ye Ji Kang, Soomin Lee, Hyo Jeong Lee, Do-Hwan Kim
    Korean Journal of Medical Education.2024; 36(1): 99.     CrossRef
  • Challenges and potential improvements in the Accreditation Standards of the Korean Institute of Medical Education and Evaluation 2019 (ASK2019) derived through meta-evaluation: a cross-sectional study
    Yoonjung Lee, Min-jung Lee, Junmoo Ahn, Chungwon Ha, Ye Ji Kang, Cheol Woong Jung, Dong-Mi Yoo, Jihye Yu, Seung-Hee Lee
    Journal of Educational Evaluation for Health Professions.2024; 21: 8.     CrossRef
  • Accreditation standards items of post-2nd cycle related to the decision of accreditation of medical schools by the Korean Institute of Medical Education and Evaluation
    Kwi Hwa Park, Geon Ho Lee, Su Jin Chae, Seong Yong Kim
    Korean Journal of Medical Education.2023; 35(1): 1.     CrossRef
  • Continuing Professional Development of Pharmacists and The Roles of Pharmacy Schools
    Hyemin Park, Jeong-Hyun Yoon
    Korean Journal of Clinical Pharmacy.2022; 32(4): 281.     CrossRef
  • Definition of character for medical education based on expert opinions in Korea
    Yera Hur
    Journal of Educational Evaluation for Health Professions.2021; 18: 26.     CrossRef
  • Special reviews on the history and future of the Korean Institute of Medical Education and Evaluation to memorialize its collaboration with the Korea Health Personnel Licensing Examination Institute to designate JEEHP as a co-official journal
    Sun Huh
    Journal of Educational Evaluation for Health Professions.2020; 17: 33.     CrossRef
Is accreditation in medical education in Korea an opportunity or a burden?  
Hanna Jung, Woo Taek Jeon, Shinki An
J Educ Eval Health Prof. 2020;17:31.   Published online October 21, 2020
DOI: https://doi.org/10.3352/jeehp.2020.17.31
  • 5,807 View
  • 139 Download
  • 5 Web of Science
  • 9 Crossref
The accreditation process is both an opportunity and a burden for medical schools in Korea. The line that separates the two is based on how medical schools recognize and utilize the accreditation process. In other words, accreditation is a burden for medical schools if they view the accreditation process as merely a formal procedure or a means to maintain accreditation status for medical education. However, if medical schools acknowledge the positive value of the accreditation process, accreditation can be both an opportunity and a tool for developing medical education. The accreditation process has educational value by catalyzing improvements in the quality, equity, and efficiency of medical education and by increasing the available options. For the accreditation process to contribute to medical education development, accrediting agencies and medical schools must first be recognized as partners of an educational alliance working together towards common goals. Secondly, clear guidelines on accreditation standards should be periodically reviewed and shared. Finally, a formative self-evaluation process must be introduced for institutions to utilize the accreditation process as an opportunity to develop medical education. This evaluation system could be developed through collaboration among medical schools, academic societies for medical education, and the accrediting authority.

Citations to this article as recorded by
  • To prove or improve? Examining how paradoxical tensions shape evaluation practices in accreditation contexts
    Betty Onyura, Abigail J. Fisher, Qian Wu, Shrutikaa Rajkumar, Sarick Chapagain, Judith Nassuna, David Rojas, Latika Nirula
    Medical Education.2024; 58(3): 354.     CrossRef
  • ASPIRE for excellence in curriculum development
    John Jenkins, Sharon Peters, Peter McCrorie
    Medical Teacher.2024; 46(5): 633.     CrossRef
  • Challenges and potential improvements in the Accreditation Standards of the Korean Institute of Medical Education and Evaluation 2019 (ASK2019) derived through meta-evaluation: a cross-sectional study
    Yoonjung Lee, Min-jung Lee, Junmoo Ahn, Chungwon Ha, Ye Ji Kang, Cheol Woong Jung, Dong-Mi Yoo, Jihye Yu, Seung-Hee Lee
    Journal of Educational Evaluation for Health Professions.2024; 21: 8.     CrossRef
  • Accreditation standards items of post-2nd cycle related to the decision of accreditation of medical schools by the Korean Institute of Medical Education and Evaluation
    Kwi Hwa Park, Geon Ho Lee, Su Jin Chae, Seong Yong Kim
    Korean Journal of Medical Education.2023; 35(1): 1.     CrossRef
  • The Need for the Standards for Anatomy Labs in Medical School Evaluation and Accreditation
    Yu-Ran Heo, Jae-Ho Lee
    Anatomy & Biological Anthropology.2023; 36(3): 81.     CrossRef
  • Seal of Approval or Ticket to Triumph? The Impact of Accreditation on Medical Student Performance in Foreign Medical Council Examinations
    Saurabh RamBihariLal Shrivastava, Titi Savitri Prihatiningsih, Kresna Lintang Pratidina
    Indian Journal of Medical Specialities.2023; 14(4): 249.     CrossRef
  • Internal evaluation in the faculties affiliated to zanjan university of medical sciences: Quality assurance of medical science education based on institutional accreditation
    Alireza Abdanipour, Farhad Ramezani‐Badr, Ali Norouzi, Mehdi Ghaemi
    Journal of Medical Education Development.2022; 15(46): 61.     CrossRef
  • Development of Mission and Vision of College of Korean Medicine Using the Delphi Techniques and Big-Data Analysis
    Sanghee Yeo, Seong Hun Choi, Su Jin Chae
    Journal of Korean Medicine.2021; 42(4): 176.     CrossRef
  • Special reviews on the history and future of the Korean Institute of Medical Education and Evaluation to memorialize its collaboration with the Korea Health Personnel Licensing Examination Institute to designate JEEHP as a co-official journal
    Sun Huh
    Journal of Educational Evaluation for Health Professions.2020; 17: 33.     CrossRef
Research articles
Development of a checklist to validate the framework of a narrative medicine program based on Gagne’s instructional design model in Iran through consensus of a multidisciplinary expert panel  
Saeideh Daryazadeh, Nikoo Yamani, Payman Adibi
J Educ Eval Health Prof. 2019;16:34.   Published online October 31, 2019
DOI: https://doi.org/10.3352/jeehp.2019.16.34
  • 9,298 View
  • 180 Download
  • 1 Web of Science
  • 2 Crossref
Purpose
Narrative medicine is a patient-centered approach focusing on the development of narrative skills and self-awareness that incorporates “attending, representing, and affiliating” in clinical encounters. Acquiring narrative competency promotes clinical performance, and narratives can be used for teaching professionalism, empathy, multicultural education, and professional development. This study was conducted to develop a checklist to validate the framework of a narrative medicine program through consensus of a panel.
Methods
This expert panel study was conducted from 2018 to 2019 at Isfahan University of Medical Sciences, Iran. It included 2 phases: developing a framework in 2 steps and forming an expert panel to validate the framework in 3 rounds. We adapted a 3-stage narrative medicine model with 9 training activities from Gagne’s theory, developed a framework, and then produced a checklist to validate the framework in a multidisciplinary expert panel that consisted of 7 experts. The RAND/UCLA appropriateness method was used to assess the experts’ agreement. The first-round opinions were received by email. Consensus was achieved in the second and third rounds through face-to-face meetings to facilitate interactions and discussion among the experts.
Results
Sixteen valid indicators were approved and 100% agreement was obtained among experts (with median values in the range of 7–9 out of a maximum of 9, with no disagreement), and the framework was validated by the expert panel.
Conclusion
The 16 checklist indicators can be used to evaluate narrative medicine programs as a simple and practical guide to improve teaching effectiveness and promote life-long learning.

Citations to this article as recorded by
  • Challenges of Implementing the First Narrative Medicine Course for Teaching Professionalism in Iran: A Qualitative Content Analysis
    Saeideh Daryazadeh, Payman Adibi, Nikoo Yamani
    Educational Research in Medical Sciences.2022;[Epub]     CrossRef
  • Impact of a narrative medicine program on reflective capacity and empathy of medical students in Iran
    Saeideh Daryazadeh, Payman Adibi, Nikoo Yamani, Roya Mollabashi
    Journal of Educational Evaluation for Health Professions.2020; 17: 3.     CrossRef
Development of a self-assessment tool for resident doctors’ communication skills in India  
Upendra Baitha, Piyush Ranjan, Siddharth Sarkar, Charu Arora, Archana Kumari, Sada Nand Dwivedi, Asmita Patil, Nayer Jamshed
J Educ Eval Health Prof. 2019;16:17.   Published online June 24, 2019
DOI: https://doi.org/10.3352/jeehp.2019.16.17
  • 14,990 View
  • 274 Download
  • 11 Web of Science
  • 11 Crossref
Abstract
Purpose
Effective communication skills are essential for resident doctors to provide optimum patient care. This study was conducted to develop and validate a questionnaire for the self-assessment of resident doctors’ communication skills in India.
Methods
This was a mixed-methods study conducted in 2 phases. The first phase consisted of questionnaire development, including the identification of relevant literature, focus group discussions with residents and experts from clinical specialties, and pre-testing of the questionnaire. The second phase involved administering the questionnaire survey to 95 residents from the Departments of Medicine, Emergency Medicine, Pediatrics, and Surgery at the All India Institute of Medical Sciences, New Delhi, India in April 2019. Internal consistency was tested and the factor structure was analyzed to test construct validity.
Results
The questionnaire consisted of 3 sections: (A) 4 items on doctor-patient conflicts and the role of communication skills in avoiding these conflicts, (B) 29 items on self-assessment of communication skills in different settings, and (C) 8 items on barriers to practicing good communication skills. Sections B and C had good internal consistency (Cronbach α: 0.885 and 0.771, respectively). Section C had a 2-factor solution, and the barriers were classified as ‘training’ and ‘infrastructure’ factors.
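The internal-consistency figures reported above (Cronbach α of 0.885 and 0.771) follow the standard formula, which can be computed directly from raw item scores. A minimal sketch, with toy data that is not the study's (the function name and example scores are assumptions for illustration):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha from per-item score lists.
    items[i] holds item i's scores, one per respondent."""
    k = len(items)                               # number of items
    totals = [sum(s) for s in zip(*items)]       # each respondent's total score
    item_var = sum(pvariance(s) for s in items)  # sum of per-item variances
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Toy example: 3 respondents answering 2 perfectly consistent items
print(cronbach_alpha([[1, 2, 3], [1, 2, 3]]))  # 1.0
```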
Conclusion
The questionnaire appears to be a valid tool for assessing resident doctors’ communication skills, with potential utility for identifying gaps in communication skills and for developing communication skills modules.

Citations

Citations to this article as recorded by  
  • Development and Validation of a Comprehensive Nursing Competence Assessment Questionnaire (CNCAQ) to Determine the Perceived Clinical Competence of Nursing Graduates
    Sunita Srivastava, Hariprasath Pandurangan, Anil Kumar
    Nursing & Midwifery Research Journal.2024; 20(2): 96.     CrossRef
  • Socio‐behaviour change intervention in health care professionals: Impact and effectiveness
    Chinmay Shah, Fouzia Shersad
    Medical Education.2024;[Epub]     CrossRef
  • Communication skills of residents: are they as good as they think?
    Namra Qadeer Shaikh, Ali Aahil Noorali, Asma Altaf Hussain Merchant, Noreen Afzal, Maryam Pyar Ali Lakhdir, Komal Abdul Rahim, Syeda Fatima Shariq, Rida Ahmad, Saqib Kamran Bakhshi, Saad Bin Zafar Mahmood, Shayan Shah, Muhammad Rizwan Khan, Muhammad Tariq
    Medical Education Online.2024;[Epub]     CrossRef
  • Leveraging the vantage point – exploring nurses’ perception of residents’ communication skills: a mixed-methods study
    Komal Abdul Rahim, Maryam Pyar Ali Lakhdir, Noreen Afzal, Asma Altaf Hussain Merchant, Namra Qadeer Shaikh, Ali Aahil Noorali, Umar Tariq, Rida Ahmad, Saqib Kamran Bakhshi, Saad bin Zafar Mahmood, Muhammad Rizwan Khan, Muhammed Tariq, Adil H. Haider
    BMC Medical Education.2023;[Epub]     CrossRef
  • Developing a communication-skills training curriculum for resident-physicians to enhance patient outcomes at an academic medical centre: an ongoing mixed-methods study protocol
    Hamna Shahbaz, Ali Aahil Noorali, Maha Inam, Namra Qadeer, Asma Altaf Hussain Merchant, Adnan Ali Khan, Noreen Afzal, Komal Abdul Rahim, Ibrahim Munaf, Rida Ahmad, Muhammad Tariq, Adil H Haider
    BMJ Open.2022; 12(8): e056840.     CrossRef
  • A cross-sectional evaluation of communication skills and perceived barriers among the resident doctors at a tertiary care center in India
    Amandeep Singh, Piyush Ranjan, Archana Kumari, Siddharth Sarkar, Tanveer Kaur, Ramesh Aggarwal, Ashish Datt Upadhyay, Biswaroop Chakrawarty, Jamshed Nayer, Mohit Joshi, Avinash Chakrawarty
    Journal of Education and Health Promotion.2022; 11(1): 425.     CrossRef
  • What do clinical resident doctors think about workplace violence? A qualitative study comprising focus group discussions and thematic analysis from a tertiary care center of India
    Amandeep Singh, Piyush Ranjan, Siddharth Sarkar, Tarang Preet Kaur, Roshan Mathew, Dinesh Gora, Ajay Mohan, Jaswant Jangra
    Journal of Family Medicine and Primary Care.2022; 11(6): 2678.     CrossRef
  • Development and validation of a questionnaire to assess preventive practices against COVID-19 pandemic in the general population
    Ayush Agarwal, Piyush Ranjan, Priyanka Rohilla, Yellamraju Saikaustubh, Anamika Sahu, Sada Nand Dwivedi, Aakansha, Upendra Baitha, Arvind Kumar
    Preventive Medicine Reports.2021; 22: 101339.     CrossRef
  • Development and Validation of a Comprehensive Questionnaire to Assess Interpersonal Discord (Bullying, Harassment, and Discrimination) at the Workplace in a Healthcare Setting
    Amandeep Singh, Piyush Ranjan, Tanveer Kaur, Siddharth Sarkar, Ashish D Upadhyay, Upendra Baitha, Prayas Sethi, Ranveer S Jadon, Pankaj Jorwal
    Cureus.2021;[Epub]     CrossRef
  • Development and Validation of a Questionnaire to Evaluate Workplace Violence in Healthcare Settings
    Archana Kumari, Amandeep Singh, Piyush Ranjan, Siddharth Sarkar, Tanveer Kaur, Ashish D Upadhyay, Kirti Verma, Vignan Kappagantu, Ajay Mohan, Upendra Baitha
    Cureus.2021;[Epub]     CrossRef
  • The value of communicating with patients in their first language
    Piyush Ranjan, Archana Kumari, Charu Arora
    Expert Review of Pharmacoeconomics & Outcomes Research.2020; 20(6): 559.     CrossRef
Educational/faculty development material
Analysis of the Clinical Education Situation framework: a tool for identifying the root cause of student failure in the United States  
Katherine Myers, Kyle Covington
J Educ Eval Health Prof. 2019;16:11.   Published online May 10, 2019
DOI: https://doi.org/10.3352/jeehp.2019.16.11
  • 15,516 View
  • 266 Download
  • 1 Web of Science
  • 3 Crossref
Abstract
Doctor of physical therapy preparation requires extensive time in precepted clinical education, which involves multiple stakeholders. Student outcomes in clinical education are impacted by many factors, and, in the case of failure, it can be challenging to determine which factors played a primary role in the poor result. Using existing root-cause analysis processes, the authors developed and implemented a framework designed to identify the causes of student failure in clinical education. This framework, when applied to a specific student failure event, can be used to identify the factors that contributed to the situation and to reveal opportunities for improvement in both the clinical and academic environments. A root-cause analysis framework can help to drive change at the programmatic level, and future studies should focus on the framework’s application to a variety of clinical and didactic settings.

Citations

Citations to this article as recorded by  
  • Applying the 2022 Cardiovascular and Pulmonary Entry-Level Physical Therapist Competencies to Physical Therapist Education and Practice
    Nancy Smith, Angela Campbell, Morgan Johanson, Pamela Bartlo, Naomi Bauer, Sagan Everett
    Journal of Physical Therapy Education.2023; 37(3): 165.     CrossRef
  • Cardiovascular and Pulmonary Entry-Level Physical Therapist Competencies: Update by Academy of Cardiovascular & Pulmonary Physical Therapy Task Force
    Morgan Johanson, Pamela Bartlo, Naomi Bauer, Angela Campbell, Sagan Everett, Nancy Smith
    Cardiopulmonary Physical Therapy Journal.2023; 34(4): 183.     CrossRef
  • The situational analysis of teaching-learning in clinical education in Iran: a postmodern grounded theory study
    Soleiman Ahmady, Hamed Khani
    BMC Medical Education.2022;[Epub]     CrossRef
Research articles
Medical students’ thought process while solving problems in 3 different types of clinical assessments in Korea: clinical performance examination, multimedia case-based assessment, and modified essay question  
Sejin Kim, Ikseon Choi, Bo Young Yoon, Min Jeong Kwon, Seok-jin Choi, Sang Hyun Kim, Jong-Tae Lee, Byoung Doo Rhee
J Educ Eval Health Prof. 2019;16:10.   Published online May 9, 2019
DOI: https://doi.org/10.3352/jeehp.2019.16.10
  • 17,001 View
  • 288 Download
  • 3 Crossref
Abstract
Purpose
This study aimed to explore students’ cognitive patterns while solving clinical problems in 3 different types of assessments—clinical performance examination (CPX), multimedia case-based assessment (CBA), and modified essay question (MEQ)—and thereby to understand how different types of assessments stimulate different patterns of thinking.
Methods
A total of 6 test-performance cases from 2 fourth-year medical students were used in this cross-case study. Data were collected through one-on-one interviews using a stimulated recall protocol where students were shown videos of themselves taking each assessment and asked to elaborate on what they were thinking. The unit of analysis was the smallest phrases or sentences in the participants’ narratives that represented meaningful cognitive occurrences. The narrative data were reorganized chronologically and then analyzed according to the hypothetico-deductive reasoning framework for clinical reasoning.
Results
Both participants demonstrated similar proportional frequencies of clinical reasoning patterns on the same clinical assessments. The results also revealed that the 3 assessment types may stimulate different patterns of clinical reasoning. For example, the CPX strongly promoted the participants’ reasoning related to inquiry strategy, while the MEQ strongly promoted hypothesis generation. Similarly, data analysis and synthesis by the participants were more strongly stimulated by the CBA than by the other assessment types.
Conclusion
This study found that different assessment designs stimulated different patterns of thinking during problem-solving. This finding can contribute to the search for ways to improve current clinical assessments. Importantly, the research method used in this study can be utilized as an alternative way to examine the validity of clinical assessments.

Citations

Citations to this article as recorded by  
  • Future directions of online learning environment design at medical schools: a transition towards a post-pandemic context
    Sejin Kim
    Kosin Medical Journal.2023; 38(1): 12.     CrossRef
  • Clinical Reasoning Training based on the analysis of clinical case using a virtual environment
    Sandra Elena Lisperguer Soto, María Soledad Calvo, Gabriela Paz Urrejola Contreras, Miguel Ángel Pérez Lizama
    Educación Médica.2021; 22(3): 139.     CrossRef
  • Newly appointed medical faculty members’ self-evaluation of their educational roles at the Catholic University of Korea College of Medicine in 2020 and 2021: a cross-sectional survey-based study
    Sun Kim, A Ra Cho, Chul Woon Chung
    Journal of Educational Evaluation for Health Professions.2021; 18: 28.     CrossRef
Comparison of the level of cognitive processing between case-based items and non-case-based items on the Interuniversity Progress Test of Medicine in the Netherlands  
Dario Cecilio-Fernandes, Wouter Kerdijk, Andreas Johannes Bremers, Wytze Aalders, René Anton Tio
J Educ Eval Health Prof. 2018;15:28.   Published online December 12, 2018
DOI: https://doi.org/10.3352/jeehp.2018.15.28
  • 19,680 View
  • 208 Download
  • 10 Web of Science
  • 10 Crossref
Abstract
Purpose
It is assumed that case-based questions require higher-order cognitive processing, whereas questions that are not case-based require lower-order cognitive processing. In this study, we investigated to what extent case-based and non-case-based questions followed this assumption based on Bloom’s taxonomy.
Methods
In this article, 4,800 questions from the Interuniversity Progress Test of Medicine were classified based on whether they were case-based and on the level of Bloom’s taxonomy that they involved. Lower-order questions require students to remember and/or demonstrate a basic understanding of knowledge. Higher-order questions require students to apply, analyze, and/or evaluate. The phi coefficient was calculated to investigate the relationship between whether questions were case-based and the required level of cognitive processing.
Results
Our results demonstrated that 98.1% of case-based questions required higher-level cognitive processing, compared with 33.7% of non-case-based questions. The phi coefficient demonstrated a significant but moderate correlation between the presence of a patient case in a question and its required level of cognitive processing (phi coefficient = 0.55, P < 0.001).
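The phi coefficient for a 2×2 cross-classification like this one (case-based vs. non-case-based against higher-order vs. lower-order) is straightforward to compute. A minimal sketch; the cell counts below are hypothetical, since the abstract does not report the full contingency table:

```python
import math

def phi_coefficient(a, b, c, d):
    """Phi coefficient for a 2x2 contingency table laid out as:
                     higher-order  lower-order
    case-based            a             b
    non-case-based        c             d
    """
    num = a * d - b * c
    den = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return num / den

# Hypothetical counts, not the study's actual table
print(round(phi_coefficient(40, 10, 20, 30), 3))  # 0.408
```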
Conclusion
Medical instructors should be aware of the association between item format (case-based versus non-case-based) and the cognitive processes they elicit in order to meet the desired balance in a test, taking the learning objectives and the test difficulty into account.

Citations

Citations to this article as recorded by  
  • Progress is impossible without change: implementing automatic item generation in medical knowledge progress testing
    Filipe Manuel Vidal Falcão, Daniela S.M. Pereira, José Miguel Pêgo, Patrício Costa
    Education and Information Technologies.2024; 29(4): 4505.     CrossRef
  • Reliability across content areas in progress tests assessing medical knowledge: a Brazilian cross-sectional study with implications for medical education assessments
    Pedro Tadao Hamamoto Filho, Miriam Hashimoto, Alba Regina de Abreu Lima, Leandro Arthur Diehl, Neide Tomimura Costa, Patrícia Moretti Rehder, Samira Yarak, Maria Cristina de Andrade, Maria de Lourdes Marmorato Botta Hafner, Zilda Maria Tosta Ribeiro, Júli
    Sao Paulo Medical Journal.2024;[Epub]     CrossRef
  • Development of qualified items for nursing education assessment: The progress testing experience
    Bruna Moreno Dias, Lúcia Marta Giunta da Silva, Pedro Tadao Hamamoto Filho, Valdes Roberto Bollela, Carmen Silvia Gabriel
    Nurse Education in Practice.2024; 81: 104199.     CrossRef
  • Identifying the response process validity of clinical vignette-type multiple choice questions: An eye-tracking study
    Francisco Carlos Specian Junior, Thiago Martins Santos, John Sandars, Eliana Martorano Amaral, Dario Cecilio-Fernandes
    Medical Teacher.2023; 45(8): 845.     CrossRef
  • Relationship between medical programme progress test performance and surgical clinical attachment timing and performance
    Andy Wearn, Vanshay Bindra, Bradley Patten, Benjamin P. T. Loveday
    Medical Teacher.2023; 45(8): 877.     CrossRef
  • Analysis of Orthopaedic In-Training Examination Trauma Questions: 2017 to 2021
    Lilah Fones, Daryl C. Osbahr, Daniel E. Davis, Andrew M. Star, Atif K. Ahmed, Arjun Saxena
    JAAOS: Global Research and Reviews.2023;[Epub]     CrossRef
  • Use of Sociodemographic Information in Clinical Vignettes of Multiple-Choice Questions for Preclinical Medical Students
    Kelly Carey-Ewend, Amir Feinberg, Alexis Flen, Clark Williamson, Carmen Gutierrez, Samuel Cykert, Gary L. Beck Dallaghan, Kurt O. Gilliland
    Medical Science Educator.2023; 33(3): 659.     CrossRef
  • What faculty write versus what students see? Perspectives on multiple-choice questions using Bloom’s taxonomy
    Seetha U. Monrad, Nikki L. Bibler Zaidi, Karri L. Grob, Joshua B. Kurtz, Andrew W. Tai, Michael Hortsch, Larry D. Gruppen, Sally A. Santen
    Medical Teacher.2021; 43(5): 575.     CrossRef
  • Aménagement du concours de première année commune aux études de santé (PACES) : entre justice sociale et éthique confraternelle en devenir ?
    R. Pougnet, L. Pougnet
    Éthique & Santé.2020; 17(4): 250.     CrossRef
  • Knowledge of dental faculty in gulf cooperation council states of multiple-choice questions’ item writing flaws
    Mawlood Kowash, Hazza Alhobeira, Iyad Hussein, Manal Al Halabi, Saif Khan
    Medical Education Online.2020;[Epub]     CrossRef
Agreement between 2 raters’ evaluations of a traditional prosthodontic practical exam integrated with directly observed procedural skills in Egypt  
Ahmed Khalifa Khalifa, Salah Hegazy
J Educ Eval Health Prof. 2018;15:23.   Published online September 27, 2018
DOI: https://doi.org/10.3352/jeehp.2018.15.23
  • 24,440 View
  • 208 Download
  • 2 Web of Science
  • 2 Crossref
Abstract
Purpose
This study aimed to assess the agreement between 2 raters in evaluations of students on a prosthodontic clinical practical exam integrated with directly observed procedural skills (DOPS).
Methods
A sample of 76 students was monitored by 2 raters who evaluated the process and the final registered maxillomandibular relation for a completely edentulous patient at Mansoura Dental School, Egypt on a practical exam of bachelor’s students from May 15 to June 28, 2017. Each registered relation was evaluated out of a total of 60 marks subdivided into 3 score categories: occlusal plane orientation (OPO), vertical dimension registration (VDR), and centric relation registration (CRR). The marks for each category included an assessment of DOPS. The OPO and VDR marks of the 2 raters were compared graphically using Bland-Altman analysis to measure inter-rater reliability. The reliability of the CRR marks was evaluated using Krippendorff’s alpha coefficient.
Results
The results revealed highly similar marks between raters for OPO (mean = 18.1 for both raters), with close limits of agreement (0.73 and −0.78). For VDR, the mean marks were close (mean = 17.4 and 17.1 for examiners 1 and 2, respectively), with close limits of agreement (2.7 and −2.2). There was a strong correlation between the raters in the evaluation of CRR (Krippendorff’s alpha, 0.92; 95% confidence interval, 0.79–0.99).
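The limits of agreement reported above follow the standard Bland-Altman recipe: the bias (mean of the paired differences) plus and minus 1.96 times the standard deviation of those differences. A minimal sketch with made-up rater marks (not the study's data):

```python
from statistics import mean, stdev

def bland_altman_limits(rater1, rater2):
    """Bland-Altman bias and 95% limits of agreement for paired marks."""
    diffs = [x - y for x, y in zip(rater1, rater2)]
    bias = mean(diffs)
    half_width = 1.96 * stdev(diffs)  # sample SD of the paired differences
    return bias, bias - half_width, bias + half_width

# Made-up OPO-style marks for 4 students
bias, lower, upper = bland_altman_limits([18, 19, 17, 18], [18, 18, 17, 19])
print(round(bias, 2), round(lower, 2), round(upper, 2))
```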
Conclusion
The 2 raters’ evaluation of a clinical traditional practical exam integrated with DOPS showed no significant differences in the evaluations of candidates at the end of a clinical prosthodontic course. The limits of agreement between raters could be optimized by excluding subjective evaluation parameters and complicated cases from the examination procedure.

Citations

Citations to this article as recorded by  
  • In‐person and virtual assessment of oral radiology skills and competences by the Objective Structured Clinical Examination
    Fernanda R. Porto, Mateus A. Ribeiro, Luciano A. Ferreira, Rodrigo G. Oliveira, Karina L. Devito
    Journal of Dental Education.2023; 87(4): 505.     CrossRef
  • Evaluation agreement between peer assessors, supervisors, and parents in assessing communication and interpersonal skills of students of pediatric dentistry
    Jin Asari, Maiko Fujita-Ohtani, Kuniomi Nakamura, Tomomi Nakamura, Yoshinori Inoue, Shigenari Kimoto
    Pediatric Dental Journal.2023; 33(2): 133.     CrossRef
