JEEHP : Journal of Educational Evaluation for Health Professions

Funded articles

139 Funded articles
Research articles
Importance, performance frequency, and predicted future importance of dietitians’ jobs by practicing dietitians in Korea: a survey study
Cheongmin Sohn, Sooyoun Kwon, Won Gyoung Kim, Kyung-Eun Lee, Sun-Young Lee, Seungmin Lee
J Educ Eval Health Prof. 2024;21:1.   Published online January 2, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.1
Funded: Korea Health Personnel Licensing Examination Institute
Abstract
Purpose
This study aimed to explore the perceptions held by practicing dietitians of the importance of their tasks performed in current work environments, the frequency at which those tasks are performed, and predictions about the importance of those tasks in future work environments.
Methods
This was a cross-sectional survey study. An online survey was administered to 350 practicing dietitians. They were asked to assess the importance, performance frequency, and predicted changes in the importance of 27 tasks using a 5-point scale. Descriptive statistics were calculated, and the means of the variables were compared across categorized work environments using analysis of variance.
Results
The importance scores of all surveyed tasks were higher than 3.0, except for the marketing management task. Self-development, nutrition education/counseling, menu planning, food safety management, and documentation/data management were all rated higher than 4.0. The highest performance frequency score was related to documentation/data management. The importance scores of all duties, except for professional development, differed significantly by workplace. As for predictions about the future importance of the tasks surveyed, dietitians responded that the importance of all 27 tasks would either remain at current levels or increase in the future.
Conclusion
Twenty-seven tasks were confirmed to represent dietitians’ job functions in various workplaces. These tasks can be used to improve the test specifications of the Korean Dietitian Licensing Examination and the curriculum of dietetic education programs.
Development and validity evidence for the resident-led large group teaching assessment instrument in the United States: a methodological study  
Ariel Shana Frey-Vogel, Kristina Dzara, Kimberly Anne Gifford, Yoon Soo Park, Justin Berk, Allison Heinly, Darcy Wolcott, Daniel Adam Hall, Shannon Elliott Scott-Vernaglia, Katherine Anne Sparger, Erica Ye-pyng Chung
J Educ Eval Health Prof. 2024;21:3.   Published online February 23, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.3
Funded: Association of American Medical Colleges, Northeastern Group on Educational Affairs
Abstract
Purpose
Despite educational mandates to assess resident teaching competence, limited instruments with validity evidence exist for this purpose. Existing instruments do not allow faculty to assess resident-led teaching in a large group format or whether teaching was interactive. This study gathers validity evidence on the use of the Resident-led Large Group Teaching Assessment Instrument (Relate), an instrument used by faculty to assess resident teaching competency. Relate comprises 23 behaviors divided into 6 elements: learning environment, goals and objectives, content of talk, promotion of understanding and retention, session management, and closure.
Methods
Messick’s unified validity framework was used for this study. Investigators used video recordings of resident-led teaching from 3 pediatric residency programs to develop Relate and a rater guidebook. Faculty were trained on instrument use through frame-of-reference training. Resident teaching at all sites was video-recorded during 2018–2019. Two trained faculty raters assessed each video. Descriptive statistics on performance were obtained. Validity evidence sources include: rater training effect (response process), reliability and variability (internal structure), and impact on Milestones assessment (relations to other variables).
Results
Forty-eight videos, from 16 residents, were analyzed. Rater training improved inter-rater reliability from 0.04 to 0.64. The Φ-coefficient reliability was 0.50. There was a significant correlation between overall Relate performance and the pediatric teaching Milestone (r=0.34, P=0.019).
Conclusion
Relate provides validity evidence with sufficient reliability to measure resident-led large-group teaching competence.
Development and psychometric evaluation of a 360-degree evaluation instrument to assess medical students’ performance in clinical settings at the emergency medicine department in Iran: a methodological study  
Golnaz Azami, Sanaz Aazami, Boshra Ebrahimy, Payam Emami
J Educ Eval Health Prof. 2024;21:7.   Published online April 1, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.7
Funded: Kurdistan University of Medical Sciences
Abstract
Background
In the Iranian context, no 360-degree evaluation tool has been developed to assess the performance of prehospital medical emergency students in clinical settings. This article describes the development of a 360-degree evaluation tool and presents its first psychometric evaluation.
Methods
There were 2 steps in this study: step 1 involved developing the instrument (i.e., generating the items) and step 2 constituted the psychometric evaluation of the instrument. We performed exploratory and confirmatory factor analyses and also evaluated the instrument’s face, content, and convergent validity and reliability.
Results
The instrument contains 55 items across 6 domains, including leadership, management, and teamwork (19 items), consciousness and responsiveness (14 items), clinical and interpersonal communication skills (8 items), integrity (7 items), knowledge and accountability (4 items), and loyalty and transparency (3 items). The instrument was confirmed to be a valid measure, as the 6 domains had eigenvalues over Kaiser’s criterion of 1 and in combination explained 60.1% of the variance (Bartlett’s test of sphericity: χ²(1,485)=19,867.99, P<0.01). Furthermore, this study provided evidence for the instrument’s convergent validity and internal consistency (α=0.98), suggesting its suitability for assessing student performance.
Conclusion
We found good evidence for the validity and reliability of the instrument. Our instrument can be used to make future evaluations of student performance in the clinical setting more structured, transparent, informative, and comparable.
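The Kaiser criterion and explained-variance figures reported above can be illustrated with a minimal sketch. The 6-item correlation matrix below is invented for illustration (two blocks of correlated items); it is not the instrument’s 55-item data:

```python
import numpy as np

# Hypothetical correlation matrix: 6 items forming 2 correlated blocks
R = np.full((6, 6), 0.1)
for block in ([0, 1, 2], [3, 4, 5]):
    for i in block:
        for j in block:
            R[i, j] = 0.6
np.fill_diagonal(R, 1.0)

eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]   # eigenvalues, descending
kept = eigvals[eigvals > 1.0]                    # Kaiser's criterion: eigenvalue > 1
explained = kept.sum() / eigvals.sum()           # proportion of variance explained
print(np.round(eigvals, 2), len(kept), round(explained, 2))
# → [2.5 1.9 0.4 0.4 0.4 0.4] 2 0.73
```

Two eigenvalues exceed 1, so two factors are retained, jointly explaining about 73% of the variance; the study applied the same criterion to retain its 6 domains.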
Challenges and potential improvements in the Accreditation Standards of the Korean Institute of Medical Education and Evaluation 2019 (ASK2019) derived through meta-evaluation: a cross-sectional study  
Yoonjung Lee, Min-jung Lee, Junmoo Ahn, Chungwon Ha, Ye Ji Kang, Cheol Woong Jung, Dong-Mi Yoo, Jihye Yu, Seung-Hee Lee
J Educ Eval Health Prof. 2024;21:8.   Published online April 2, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.8
Funded: Korean Institute of Medical Education and Evaluation, Korea Association of Medical Colleges, Korean Society of Medical Education
Abstract
Purpose
This study aimed to identify challenges and potential improvements in Korea's medical education accreditation process according to the Accreditation Standards of the Korean Institute of Medical Education and Evaluation 2019 (ASK2019). Meta-evaluation was conducted to survey the experiences and perceptions of stakeholders, including self-assessment committee members, site visit committee members, administrative staff, and medical school professors.
Methods
A cross-sectional study was conducted using surveys sent to 40 medical schools. The 332 participants included self-assessment committee members, site visit team members, administrative staff, and medical school professors. The t-test, one-way analysis of variance and the chi-square test were used to analyze and compare opinions on medical education accreditation between the categories of participants.
Results
Site visit committee members placed greater importance on the necessity of accreditation than faculty members. A shared positive view on accreditation’s role in improving educational quality was seen among self-evaluation committee members and professors. Administrative staff highly regarded the Korean Institute of Medical Education and Evaluation’s reliability and objectivity, unlike the self-evaluation committee members. Site visit committee members positively perceived the clarity of accreditation standards, differing from self-assessment committee members. Administrative staff were most optimistic about implementing standards. However, the accreditation process encountered challenges, especially in duplicating content and preparing self-evaluation reports. Finally, perceptions regarding the accuracy of final site visit reports varied significantly between the self-evaluation committee members and the site visit committee members.
Conclusion
This study revealed diverse views on medical education accreditation, highlighting the need for improved communication, expectation alignment, and stakeholder collaboration to refine the accreditation process and quality.

Citations to this article as recorded by
  • The new placement of 2,000 entrants at Korean medical schools in 2025: is the government’s policy evidence-based?
    Sun Huh
    The Ewha Medical Journal.2024;[Epub]     CrossRef
Review
Attraction and achievement as 2 attributes of gamification in healthcare: an evolutionary concept analysis  
Hyun Kyoung Kim
J Educ Eval Health Prof. 2024;21:10.   Published online April 11, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.10
Funded: National Research Foundation of Korea, Ministry of Science and ICT
Abstract
This study conducted a conceptual analysis of gamification in healthcare utilizing Rogers’ evolutionary concept analysis methodology to identify its attributes and provide a method for its applications in the healthcare field. Gamification has recently been used as a health intervention and education method, but the concept is applied inconsistently and ambiguously. A literature review was conducted to derive definitions, surrogate terms, antecedents, influencing factors, attributes (characteristics with dimensions and features), related concepts, consequences, implications, and hypotheses from various academic fields. A total of 56 journal articles in English and Korean were retrieved between August 2 and August 7, 2023, from databases such as PubMed Central, the Institute of Electrical and Electronics Engineers, the Association for Computing Machinery Digital Library, the Research Information Sharing Service, and the Korean Studies Information Service System, using the keywords “gamification” and “healthcare.” These articles were then analyzed. Gamification in healthcare is defined as the application of game elements in health-related contexts to improve health outcomes. The attributes of this concept were categorized into 2 main areas: attraction and achievement. These categories encompass various strategies for synchronization, enjoyable engagement, visual rewards, and goal-reinforcing frames. Through a multidisciplinary analysis of the concept’s attributes and influencing factors, this paper provides practical strategies for implementing gamification in health interventions. When developing a gamification strategy, healthcare providers can reference this analysis to ensure the game elements are used both appropriately and effectively.

Citations to this article as recorded by
  • Short-Term Impact of Digital Mental Health Interventions on Psychological Well-Being and Blood Sugar Control in Type 2 Diabetes Patients in Riyadh
    Abdulaziz M. Alodhialah, Ashwaq A. Almutairi, Mohammed Almutairi
    Healthcare.2024; 12(22): 2257.     CrossRef
Research articles
Revised evaluation objectives of the Korean Dentist Clinical Skill Test: a survey study and focus group interviews  
Jae-Hoon Kim, Young J Kim, Deuk-Sang Ma, Se-Hee Park, Ahran Pae, June-Sung Shim, Il-Hyung Yang, Ui-Won Jung, Byung-Joon Choi, Yang-Hyun Chun
J Educ Eval Health Prof. 2024;21:11.   Published online May 30, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.11
Funded: Korea Health Personnel Licensing Examination Institute
Abstract
Purpose
This study aimed to propose a revision of the evaluation objectives of the Korean Dentist Clinical Skill Test by analyzing the opinions of those involved in the examination after a review of those objectives.
Methods
The clinical skill test objectives were reviewed based on the national-level dental practitioner competencies, dental school educational competencies, and the third dental practitioner job analysis. Current and former examinees were surveyed about their perceptions of the evaluation objectives. The validity of 22 evaluation objectives and overlapping perceptions based on area of specialty were surveyed on a 5-point Likert scale by professors who participated in the clinical skill test and dental school faculty members. Additionally, focus group interviews were conducted with experts on the examination.
Results
It was necessary to consider including competency assessments for “emergency rescue skills” and “planning and performing prosthetic treatment.” There were no significant differences between current and former examinees in their perceptions of the clinical skill test’s objectives. The professors who participated in the examination and dental school faculty members recognized that most of the objectives were valid. However, some responses stated that “oromaxillofacial cranial nerve examination,” “temporomandibular disorder palpation test,” and “space management for primary and mixed dentition” were unfeasible evaluation objectives and overlapped with dental specialty areas.
Conclusion
When revising the Korean Dentist Clinical Skill Test’s objectives, it is advisable to consider incorporating competency assessments related to “emergency rescue skills” and “planning and performing prosthetic treatment.”
Development of examination objectives for the Korean paramedic and emergency medical technician examination: a survey study  
Tai-hwan Uhm, Heakyung Choi, Seok Hwan Hong, Hyungsub Kim, Minju Kang, Keunyoung Kim, Hyejin Seo, Eunyoung Ki, Hyeryeong Lee, Heejeong Ahn, Uk-jin Choi, Sang Woong Park
J Educ Eval Health Prof. 2024;21:13.   Published online June 12, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.13
Funded: Korea Health Personnel Licensing Examination Institute Research Fund
Abstract
Purpose
The duties of paramedics and emergency medical technicians (P&EMTs) are continuously changing due to developments in medical systems. This study presents evaluation goals for P&EMTs by analyzing their work, especially the tasks that new P&EMTs (with less than 3 years’ experience) find difficult, to foster the training of P&EMTs who could adapt to emergency situations after graduation.
Methods
A questionnaire was created based on prior job analyses of P&EMTs. The survey questions were reviewed through focus group interviews, from which 253 task elements were derived. A survey on the frequency, importance, and difficulty of those task elements was conducted from July 10, 2023 to October 13, 2023 across the 6 occupations in which P&EMTs were employed.
Results
The P&EMTs’ most common tasks involved obtaining patients’ medical histories and measuring vital signs, whereas the most important task was cardiopulmonary resuscitation (CPR). The task elements that the P&EMTs found most difficult were newborn delivery and infant CPR. New paramedics reported that treating patients with fractures, poisoning, and childhood fever was difficult, while new EMTs reported that they had difficulty keeping diaries, managing ambulances, and controlling infection.
Conclusion
Communication was the most important item for P&EMTs, whereas CPR was the most important skill. It is important for P&EMTs to have knowledge of all tasks; however, they also need to master frequently performed tasks and those that pose difficulties in the field. By deriving goals for evaluating P&EMTs, changes could be made to their education, thereby making it possible to train more capable P&EMTs.
Redesigning a faculty development program for clinical teachers in Indonesia: a before-and-after study
Rita Mustika, Nadia Greviana, Dewi Anggraeni Kusumoningrum, Anyta Pinasthika
J Educ Eval Health Prof. 2024;21:14.   Published online June 13, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.14
Funded: Hibah Publikasi Terindeks International
Abstract
Purpose
Faculty development (FD) is important to support teaching, including for clinical teachers. Faculty of Medicine Universitas Indonesia (FMUI) has conducted a clinical teacher training program developed by the medical education department since 2008, both for FMUI teachers and for those at other centers in Indonesia. However, participation is often challenging due to clinical, administrative, and research obligations. The coronavirus disease 2019 pandemic amplified the urge to transform this program. This study aimed to redesign and evaluate an FD program for clinical teachers that focuses on their needs and current situation.
Methods
A 5-step design thinking framework (empathizing, defining, ideating, prototyping, and testing) was used with a pre/post-test design. Design thinking made it possible to develop a participant-focused program, while the pre/post-test design enabled an assessment of the program’s effectiveness.
Results
Seven medical educationalists and 4 senior and 4 junior clinical teachers participated in a group discussion in the empathize phase of design thinking. The research team formed a prototype of a 3-day blended learning course, with an asynchronous component using the Moodle learning management system and a synchronous component using the Zoom platform. Pre- and post-testing was done in 2 rounds, with 107 and 330 participants, respectively. Evaluations of the first round provided feedback for improving the prototype for the second round.
Conclusion
Design thinking enabled an innovative-creative process of redesigning FD that emphasized participants’ needs. The pre/post-testing showed that the program was effective. Combining asynchronous and synchronous learning expands access and increases flexibility. This approach could also apply to other FD programs.
Special article on the 20th anniversary of the journal
Comparison of real data and simulated data analysis of a stopping rule based on the standard error of measurement in computerized adaptive testing for medical examinations in Korea: a psychometric study  
Dong Gi Seo, Jeongwook Choi, Jinha Kim
J Educ Eval Health Prof. 2024;21:18.   Published online July 9, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.18
Funded: Hallym University
Abstract
Purpose
This study aimed to compare and evaluate the efficiency and accuracy of computerized adaptive testing (CAT) under 2 stopping rules (standard error of measurement [SEM]=0.3 and 0.25) using both real and simulated data in medical examinations in Korea.
Methods
This study employed post-hoc simulation and real data analysis to explore the optimal stopping rule for CAT in medical examinations. The real data were obtained from the responses of 3rd-year medical students during examinations in 2020 at Hallym University College of Medicine. Simulated data were generated in R using parameters estimated from a real item bank. Outcome variables included the number of examinees passing or failing under SEM values of 0.25 and 0.30, the number of items administered, and the correlation between ability estimates. The consistency of the real CAT results was evaluated by examining pass/fail agreement based on a cut score of 0.0. The efficiency of all CAT designs was assessed by comparing the average number of items administered under both stopping rules.
Results
Both SEM 0.25 and SEM 0.30 provided a good balance between accuracy and efficiency in CAT. The real data showed minimal differences in pass/fail outcomes between the 2 SEM conditions, with a high correlation (r=0.99) between ability estimates. The simulation results confirmed these findings, indicating similar average item numbers between real and simulated data.
Conclusion
The findings suggest that both SEM 0.25 and 0.30 are effective termination criteria in the context of the Rasch model, balancing accuracy and efficiency in CAT.
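The SEM-based stopping rule evaluated above can be sketched as a small post-hoc simulation. Everything below is hypothetical — a uniform Rasch item bank, an arbitrary true ability, maximum-information item selection, and EAP scoring on a grid — not the study’s item bank or code:

```python
import numpy as np

rng = np.random.default_rng(7)

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def run_cat(true_theta, bank, se_stop, max_items=100):
    """One simulated CAT: pick the most informative remaining item, update the
    posterior over ability, stop when the posterior SD (the SEM) <= se_stop."""
    grid = np.linspace(-4, 4, 161)
    post = np.exp(-0.5 * grid**2)              # standard-normal prior
    post /= post.sum()
    available = np.ones(len(bank), bool)
    theta_hat, se, n = 0.0, np.inf, 0
    while n < max_items and se > se_stop:
        p = rasch_p(theta_hat, bank)
        info = p * (1 - p)                      # Rasch item information at current estimate
        info[~available] = -np.inf
        j = int(np.argmax(info))                # max-information item selection
        available[j] = False
        x = rng.random() < rasch_p(true_theta, bank[j])   # simulated response
        post *= rasch_p(grid, bank[j]) if x else 1 - rasch_p(grid, bank[j])
        post /= post.sum()
        theta_hat = float(np.sum(grid * post))            # EAP ability estimate
        se = float(np.sqrt(np.sum((grid - theta_hat) ** 2 * post)))
        n += 1
    return theta_hat, se, n

bank = rng.uniform(-3, 3, 300)                  # hypothetical Rasch item difficulties
results = {}
for stop in (0.30, 0.25):
    theta_hat, se, n = run_cat(0.5, bank, stop)
    results[stop] = (theta_hat, se, n)
    print(f"stop SEM={stop}: items={n}, theta={theta_hat:+.2f}, SEM={se:.2f}")
```

The tighter 0.25 criterion requires more items than 0.30 (Rasch item information never exceeds 0.25, so a smaller target SEM forces a longer test), which is the accuracy–efficiency trade-off the study quantifies.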
Review
Insights into undergraduate medical student selection tools: a systematic review and meta-analysis  
Pin-Hsiang Huang, Arash Arianpoor, Silas Taylor, Jenzel Gonzales, Boaz Shulruf
J Educ Eval Health Prof. 2024;21:22.   Published online September 12, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.22
Correction in: J Educ Eval Health Prof 2024;21(0):41
Funded: UCAT ANZ Consortium
Abstract
Purpose
Evaluating medical school selection tools is vital for evidence-based student selection. With previous reviews revealing knowledge gaps, this meta-analysis offers insights into the effectiveness of these selection tools.
Methods
A systematic review and meta-analysis were conducted applying the following criteria: peer-reviewed articles available in English, published from 2010 onward, that include empirical data linking performance on selection tools with assessment and dropout outcomes of undergraduate-entry medical programs. Systematic reviews, meta-analyses, general opinion pieces, and commentaries were excluded. Effect sizes (ESs) of the predictability of academic and clinical performance within and by the end of the medicine program were extracted, and the pooled ESs were presented.
Results
Sixty-seven out of 2,212 articles were included, which yielded 236 ESs. Previous academic achievement predicted medical program academic performance (Cohen’s d=0.697 early in the program; 0.619 at the end of the program) and clinical exams (0.545 at the end of the program). Within aptitude tests, verbal reasoning and quantitative reasoning predicted academic achievement early in the program and in the last years (0.704 and 0.643, respectively). Overall aptitude tests predicted academic achievement in both the early and last years (0.550 and 0.371, respectively). None of the panel interviews, multiple mini-interviews, or situational judgement tests (SJTs) yielded a statistically significant pooled ES.
Conclusion
Current evidence suggests that learning outcomes are predicted by previous academic achievement and aptitude tests. The predictive value of SJTs, as well as topics such as selection algorithms, features of interviews (e.g., the content of the questions), and the way interviewers’ reports are used, warrants further research.
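Pooled ESs of the kind reported above are typically obtained by inverse-variance weighting of the study-level Cohen’s d values. A minimal sketch follows; the three studies and all their numbers are invented for illustration, not drawn from the review:

```python
import numpy as np

def pool_d(ds, ns1, ns2):
    """Fixed-effect inverse-variance pooling of Cohen's d values."""
    ds, ns1, ns2 = (np.asarray(x, float) for x in (ds, ns1, ns2))
    # Large-sample variance of d for a two-group comparison
    var = (ns1 + ns2) / (ns1 * ns2) + ds**2 / (2 * (ns1 + ns2))
    w = 1.0 / var
    d_pool = np.sum(w * ds) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return d_pool, (d_pool - 1.96 * se, d_pool + 1.96 * se)

# Three invented studies linking prior academic achievement to early-program grades
d, ci = pool_d([0.72, 0.65, 0.70], [120, 200, 90], [120, 200, 90])
print(round(d, 3), tuple(round(x, 3) for x in ci))
```

A pooled ES is "statistically significant" in the sense used above when its 95% confidence interval excludes zero.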

Citations to this article as recorded by
  • Notice of Retraction and Replacement: Insights into undergraduate medical student selection tools: a systematic review and meta-analysis
    Pin-Hsiang Huang, Arash Arianpoor, Silas Taylor, Jenzel Gonzales, Boaz Shulruf
    Journal of Educational Evaluation for Health Professions.2024; 21: 41.     CrossRef
Research articles
The effect of simulation-based training on problem-solving skills, critical thinking skills, and self-efficacy among nursing students in Vietnam: a before-and-after study  
Tran Thi Hoang Oanh, Luu Thi Thuy, Ngo Thi Thu Huyen
J Educ Eval Health Prof. 2024;21:24.   Published online September 23, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.24
Funded: Da Nang University of Medical Technology and Pharmacy
Abstract
Purpose
This study investigated the effect of simulation-based training on nursing students’ problem-solving skills, critical thinking skills, and self-efficacy.
Methods
A single-group pretest and posttest study was conducted among 173 second-year nursing students at a public university in Vietnam from May 2021 to July 2022. Each student participated in the adult nursing preclinical practice course, which utilized a moderate-fidelity simulation teaching approach. Instruments including the Personal Problem-Solving Inventory Scale, Critical Thinking Skills Questionnaire, and General Self-Efficacy Questionnaire were employed to measure participants’ problem-solving skills, critical thinking skills, and self-efficacy. Data were analyzed using descriptive statistics and the paired-sample t-test with the significance level set at P<0.05.
Results
The mean posttest score on the Personal Problem-Solving Inventory (127.24±12.11) was lower than the pretest score (131.42±16.95); because lower scores on this inventory indicate stronger problem-solving, this suggests an improvement in participants’ problem-solving skills (t(172)=2.55, P=0.011). There was no statistically significant difference in critical thinking skills between the pretest and posttest (P=0.854). Self-efficacy among nursing students showed a substantial increase from the pretest (27.91±5.26) to the posttest (28.71±3.81) (t(172)=−2.26, P=0.025).
Conclusion
The results suggest that simulation-based training can improve problem-solving skills and increase self-efficacy among nursing students. Therefore, the integration of simulation-based training in nursing education is recommended.
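The paired-sample t-test used above can be sketched as follows. The scores are simulated to echo the reported pretest mean, not the study’s data, and the sign of t depends on whether differences are taken as pre−post or post−pre:

```python
import numpy as np

def paired_t(pre, post):
    """Paired-sample t statistic (post - pre) and degrees of freedom."""
    d = np.asarray(post, float) - np.asarray(pre, float)
    n = len(d)
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return t, n - 1

rng = np.random.default_rng(1)
pre = rng.normal(131.4, 17.0, 173)        # simulated pretest scores, 173 students
post = pre - rng.normal(5.0, 10.0, 173)   # simulated posttest: lower (better) scores
t, df = paired_t(pre, post)
print(f"t({df})={t:.2f}")                 # |t| > ~1.97 implies P<0.05 at df=172
```

With 173 paired observations the test has 172 degrees of freedom, matching the t(172) values in the abstract.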

Citations to this article as recorded by
  • The Effect of Work-Based Learning on Employability Skills: The Role of Self-Efficacy and Vocational Identity
    Suyitno Suyitno, Muhammad Nurtanto, Dwi Jatmoko, Yuli Widiyono, Riawan Yudi Purwoko, Fuad Abdillah, Setuju Setuju, Yudan Hermawan
    European Journal of Educational Research.2025; 14(1): 309.     CrossRef
Training satisfaction and future employment consideration among physician and nursing trainees at rural Veterans Affairs facilities in the United States during COVID-19: a time-series before and after study  
Heather Northcraft, Tiffany Radcliff, Anne Reid Griffin, Jia Bai, Aram Dobalian
J Educ Eval Health Prof. 2024;21:25.   Published online September 24, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.25
Funded: Office of Patient Care Services, Department of Veterans Affairs
Abstract
Purpose
The coronavirus disease 2019 (COVID-19) pandemic limited healthcare professional education and training opportunities in rural communities. Because the US Department of Veterans Affairs (VA) has robust programs to train clinicians in the United States, this study examined VA trainee perspectives regarding pandemic-related training in rural and urban areas and interest in future employment with the VA.
Methods
Survey responses were collected nationally from VA physicians and nursing trainees before and after COVID-19 (2018 to 2021). Logistic regression models were used to test the association between pandemic timing (pre-pandemic or pandemic), trainee program (physician or nurse), and the interaction of trainee pandemic timing and program on VA trainee satisfaction and trainee likelihood to consider future VA employment in rural and urban areas.
Results
While physician trainees at urban facilities reported decreases in overall training satisfaction and corresponding decreases in the likelihood of considering future VA employment from pre-pandemic to pandemic, rural physician trainees showed no changes in either outcome. In contrast, while nursing trainees at both urban and rural sites had decreases in training satisfaction associated with the pandemic, there was no corresponding effect on the likelihood of future employment by nurses at either urban or rural VA sites.
Conclusion
The study’s findings suggest differences in the training experiences of physicians and nurses at rural sites, as well as between physician trainees at urban and rural sites. Understanding these nuances can inform the development of targeted approaches to address the ongoing provider shortages that rural communities in the United States are facing.
A new performance evaluation indicator for the LEE Jong-wook Fellowship Program of Korea Foundation for International Healthcare to better assess its long-term educational impacts: a Delphi study  
Minkyung Oh, Bo Young Yoon
J Educ Eval Health Prof. 2024;21:27.   Published online October 2, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.27
Funded: Korea Foundation for International Healthcare
Abstract
Purpose
The Dr. LEE Jong-wook Fellowship Program, established by the Korea Foundation for International Healthcare (KOFIH), aims to strengthen healthcare capacity in partner countries. The aim of the study was to develop new performance evaluation indicators for the program to better assess long-term educational impact across various courses and professional roles.
Methods
A 3-stage process was employed. First, a literature review of established evaluation models (Kirkpatrick’s 4 levels, context/input/process/product evaluation model, Organization for Economic Cooperation and Development Assistance Committee criteria) was conducted to devise evaluation criteria. Second, these criteria were validated via a 2-round Delphi survey with 18 experts in training projects from May 2021 to June 2021. Third, the relative importance of the evaluation criteria was determined using the analytic hierarchy process (AHP), calculating weights and ensuring consistency through the consistency index and consistency ratio (CR), with CR values below 0.1 indicating acceptable consistency.
Results
The literature review led to a combined evaluation model, resulting in 4 evaluation areas, 20 items, and 92 indicators. The Delphi surveys confirmed the validity of these indicators, with content validity ratio values exceeding 0.444. The AHP analysis assigned weights to each indicator, and CR values below 0.1 indicated consistency. The final set of evaluation indicators was confirmed through a workshop with KOFIH and adopted as the new evaluation tool.
Conclusion
The developed evaluation framework provides a comprehensive tool for assessing the long-term outcomes of the Dr. LEE Jong-wook Fellowship Program. It enhances evaluation capabilities and supports improvements in the training program’s effectiveness and international healthcare collaboration.
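The two consistency checks named above — Lawshe’s content validity ratio (whose cutoff of 0.444 corresponds to an 18-member panel) and the AHP consistency ratio — can be illustrated with a short sketch. The rating count and the pairwise-comparison matrix are hypothetical, not the study’s data:

```python
import numpy as np

def cvr(n_essential, n_experts):
    """Lawshe's content validity ratio: (ne - N/2) / (N/2)."""
    return (n_essential - n_experts / 2) / (n_experts / 2)

# Hypothetical Delphi item: 13 of 18 panelists rate an indicator essential
print(round(cvr(13, 18), 3))              # → 0.444 (exactly 4/9)

# Hypothetical 4x4 pairwise-comparison matrix for the 4 evaluation areas
A = np.array([
    [1,   3,   5,   2  ],
    [1/3, 1,   2,   1/2],
    [1/5, 1/2, 1,   1/3],
    [1/2, 2,   3,   1  ],
])

eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                              # AHP priority weights
lam_max = eigvals[k].real
n = A.shape[0]
ci_idx = (lam_max - n) / (n - 1)          # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]       # Saaty's random index for n criteria
cr = ci_idx / ri                          # consistency ratio (< 0.1 is acceptable)
print(np.round(w, 3), round(cr, 3))
```

The weights come from the principal eigenvector of the comparison matrix; CR below 0.1 indicates the pairwise judgments are acceptably consistent, as required in the study.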
Review
The legality and appropriateness of keeping Korean Medical Licensing Examination items confidential: a comparative analysis and review of court rulings  
Jae Sun Kim, Dae Un Hong, Ju Yoen Lee
J Educ Eval Health Prof. 2024;21:28.   Published online October 15, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.28
Funded: Korea Health Personnel Licensing Examination Institute
Abstract
This study examines the legality and appropriateness of keeping the multiple-choice question items of the Korean Medical Licensing Examination (KMLE) confidential. Through an analysis of cases from the United States, Canada, and Australia, where medical licensing exams are conducted using item banks and computer-based testing, we found that exam items are kept confidential to ensure fairness and prevent cheating. In Korea, the Korea Health Personnel Licensing Examination Institute (KHPLEI) has been disclosing KMLE questions despite concerns over exam integrity. Korean courts have consistently ruled that multiple-choice question items prepared by public institutions are non-public information under Article 9(1)(v) of the Korea Official Information Disclosure Act (KOIDA), which exempts disclosure if it significantly hinders the fairness of exams or research and development. The Constitutional Court of Korea has upheld this provision. Given the time and cost involved in developing high-quality items and the need to accurately assess examinees’ abilities, there are compelling reasons to keep KMLE items confidential. As a public institution responsible for selecting qualified medical practitioners, KHPLEI should establish its disclosure policy based on a balanced assessment of public interest, without influence from specific groups. We conclude that KMLE questions qualify as non-public information under KOIDA, and KHPLEI may choose to maintain their confidentiality to ensure exam fairness and efficiency.
Research article
Validation of the Blended Learning Usability Evaluation–Questionnaire (BLUE-Q) through an innovative Bayesian questionnaire validation approach  
Anish Kumar Arora, Charo Rodriguez, Tamara Carver, Hao Zhang, Tibor Schuster
J Educ Eval Health Prof. 2024;21:31.   Published online November 7, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.31
Funded: Institute for Health Sciences Education, Canadian Institutes of Health Research
Abstract
Purpose
The primary aim of this study is to validate the Blended Learning Usability Evaluation–Questionnaire (BLUE-Q) for use in the field of health professions education through a Bayesian approach. As practical guidance on Bayesian questionnaire validation remains scarce, a secondary aim of this article is to serve as a simplified tutorial for engaging in such validation practices in health professions education.
Methods
A total of 10 health education-based experts in blended learning were recruited to participate in a 30-minute interviewer-administered survey. On a 5-point Likert scale, experts rated how well they perceived each item of the BLUE-Q to reflect its underlying usability domain (i.e., effectiveness, efficiency, satisfaction, accessibility, organization, and learner experience). Ratings were descriptively analyzed and converted into beta prior distributions. Participants were also given the option to provide qualitative comments for each item.
Results
After reviewing the computed expert prior distributions, 31 quantitative items were identified as having a probability of “low endorsement” and were thus removed from the questionnaire. Additionally, qualitative comments were used to revise the phrasing and order of items to ensure clarity and logical flow. The BLUE-Q’s final version comprises 23 Likert-scale items and 6 open-ended items.
Conclusion
Questionnaire validation can generally be a complex, time-consuming, and costly process, inhibiting many from engaging in proper validation practices. In this study, we demonstrate that a Bayesian questionnaire validation approach can be a simple, resource-efficient, yet rigorous solution to validating a tool for content and item-domain correlation through the elicitation of domain expert endorsement ratings.
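One plausible reading of the procedure above — converting expert Likert ratings into beta distributions and flagging items with a high probability of "low endorsement" — can be sketched as follows. The ratings, the endorsement threshold (≥4), and the 0.7 cutoff are all hypothetical choices for illustration, not the BLUE-Q study’s exact rule:

```python
import numpy as np

rng = np.random.default_rng(0)

def endorsement_prior(ratings, threshold=4):
    """Beta(1+k, 1+n-k) distribution from expert Likert ratings,
    counting ratings >= threshold as endorsements of the item."""
    n = len(ratings)
    k = sum(r >= threshold for r in ratings)
    return 1 + k, 1 + n - k

def prob_low_endorsement(a, b, cut=0.7, draws=200_000):
    """Monte Carlo probability that the latent endorsement rate is below `cut`."""
    return float(np.mean(rng.beta(a, b, draws) < cut))

# Hypothetical ratings from 10 experts for 2 items
item_strong = [5, 5, 4, 4, 5, 4, 5, 4, 4, 5]
item_weak = [2, 3, 3, 4, 2, 3, 2, 3, 4, 2]

a_s, b_s = endorsement_prior(item_strong)
a_w, b_w = endorsement_prior(item_weak)
p_strong = prob_low_endorsement(a_s, b_s)
p_weak = prob_low_endorsement(a_w, b_w)
print((a_s, b_s), round(p_strong, 3))   # well-endorsed item: retain
print((a_w, b_w), round(p_weak, 3))     # probable "low endorsement": candidate for removal
```

Items whose elicited distribution concentrates below the cutoff would be removed, mirroring how 31 of the BLUE-Q’s quantitative items were dropped.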
