
JEEHP : Journal of Educational Evaluation for Health Professions


Data sharing

206 articles
Research articles
A nationwide survey on the curriculum and educational resources related to the Clinical Skills Test of the Korean Medical Licensing Examination: a cross-sectional descriptive study  
Eun-Kyung Chung, Seok Hoon Kang, Do-Hoon Kim, MinJeong Kim, Ji-Hyun Seo, Keunmi Lee, Eui-Ryoung Han
J Educ Eval Health Prof. 2025;22:11.   Published online March 13, 2025
DOI: https://doi.org/10.3352/jeehp.2025.22.11
  • 891 View
  • 161 Download
Abstract · PDF · Supplementary Material
Purpose
The revised Clinical Skills Test (CST) of the Korean Medical Licensing Exam aims to provide a better assessment of physicians’ clinical competence and ability to interact with patients. This study examined the impact of the revised CST on medical education curricula and resources nationwide, while also identifying areas for improvement within the revised CST.
Methods
This study surveyed faculty responsible for clinical clerkships at 40 medical schools throughout Korea to evaluate the status and changes in clinical skills education, assessment, and resources related to the CST. The researchers distributed the survey via email through regional consortia between December 7, 2023 and January 19, 2024.
Results
Nearly all schools implemented preliminary student–patient encounters during core clinical rotations. Schools primarily conducted clinical skills assessments in the third and fourth years, with a simplified form introduced in the first and second years. Remedial education was conducted through various methods, including one-on-one feedback from faculty after the assessment. All schools established clinical skills centers and made ongoing improvements. Faculty members did not perceive the CST revisions as significantly altering clinical clerkship or skills assessments. They suggested several improvements, including assessing patient records to improve accuracy and increasing the objectivity of standardized patient assessments to ensure fairness.
Conclusion
During the CST, students’ involvement in patient encounters and clinical skills education increased, improving the assessment and feedback processes for clinical skills within the curriculum. To enhance students’ clinical competencies and readiness, strengthening the validity and reliability of the CST is essential.
Correlation between a motion analysis method and Global Operative Assessment of Laparoscopic Skills for assessing interns’ performance in a simulated peg transfer task in Jordan: a validation study  
Esraa Saleh Abdelall, Shadi Mohammad Hamouri, Abdallah Fawaz Al Dwairi, Omar Mefleh Al- Araidah
J Educ Eval Health Prof. 2025;22:10.   Published online March 6, 2025
DOI: https://doi.org/10.3352/jeehp.2025.22.10
  • 893 View
  • 169 Download
Abstract · PDF · Supplementary Material
Purpose
This study aims to validate the use of ProAnalyst (Xcitex Inc.), professional motion analysis software, for assessing the performance of surgical interns on the peg transfer task in a simulator box, enabling safe practice before real minimally invasive surgery.
Methods
A correlation study was conducted in a multidisciplinary skills simulation lab at the Faculty of Medicine, Jordan University of Science and Technology from October 2019 to February 2020. Forty-one interns (novices and intermediates) were recruited, and an expert surgeon participated as a reference benchmark. Videos of participants’ performance were analyzed using ProAnalyst and the Global Operative Assessment of Laparoscopic Skills (GOALS), and the two sets of scores were analyzed for correlation.
Results
The motion analysis scores from ProAnalyst were correlated with GOALS scores for novices (r=–0.62925, P=0.009) and intermediates (r=–0.53422, P=0.033). Both assessment methods differentiated participants’ performance by experience level.
Conclusion
The motion analysis scoring method with ProAnalyst provides an objective, time-efficient, and reproducible assessment of interns’ performance that is comparable to GOALS. It may require initial training and setup; however, it eliminates the need for expert surgeon judgment.
Correspondence
Accuracy of ChatGPT in answering cardiology board-style questions
Albert Andrew
J Educ Eval Health Prof. 2025;22:9.   Published online February 27, 2025
DOI: https://doi.org/10.3352/jeehp.2025.22.9
  • 1,165 View
  • 130 Download
PDF · Supplementary Material
Research articles
Simulation-based teaching versus traditional small group teaching for first-year medical students among high and low scorers in respiratory physiology, India: a randomized controlled trial  
Nalini Yelahanka Channegowda, Dinker Ramanand Pai, Shivasakthy Manivasakan
J Educ Eval Health Prof. 2025;22:8.   Published online February 21, 2025
DOI: https://doi.org/10.3352/jeehp.2025.22.8
  • 688 View
  • 166 Download
Abstract · PDF · Supplementary Material
Purpose
Although simulation-based education (SBE) is widely utilized for skill training in clinical subjects, its use for teaching basic science concepts to phase I (pre-clinical) medical students is limited. SBE is preferred in cardiovascular and respiratory physiology over other systems because both normal physiology and its alterations are easy to recreate in a simulated environment, thus promoting a deep understanding of core concepts.
Methods
A block-randomized study was conducted among 107 phase I (first-year) medical undergraduate students at a Deemed to be University in India. Group A received SBE, while Group B received traditional small-group teaching. The effectiveness of the teaching intervention was assessed using pre- and post-tests. Student feedback was obtained through a self-administered structured questionnaire via an anonymous online survey and through in-depth interviews.
Results
The intervention group showed a statistically significant improvement in post-test scores compared to the control group. A sub-analysis revealed that high scorers performed better than low scorers in both groups, but the knowledge gain among low scorers was more significant in the intervention group.
Conclusion
This teaching strategy offers a valuable supplement to traditional methods, fostering a deeper comprehension of clinical concepts from the outset of medical training.
Empirical effect of the Dr LEE Jong-wook Fellowship Program to empower sustainable change for the health workforce in Tanzania: a mixed-methods study  
Masoud Dauda, Swabaha Aidarus Yusuph, Harouni Yasini, Issa Mmbaga, Perpetua Mwambinngu, Hansol Park, Gyeongbae Seo, Kyoung Kyun Oh
J Educ Eval Health Prof. 2025;22:6.   Published online January 20, 2025
DOI: https://doi.org/10.3352/jeehp.2025.22.6
  • 1,340 View
  • 208 Download
Abstract · PDF · Supplementary Material
Purpose
This study evaluated the Dr LEE Jong-wook Fellowship Program’s impact on Tanzania’s health workforce, focusing on relevance, effectiveness, efficiency, impact, and sustainability in addressing healthcare gaps.
Methods
A mixed-methods research design was employed. Data were collected from 97 out of 140 alumni through an online survey, 35 in-depth interviews, and one focus group discussion. The study was conducted from November to December 2023 and included alumni from 2009 to 2022. Measurement instruments included structured questionnaires for quantitative data and semi-structured guides for qualitative data. Quantitative analysis involved descriptive and inferential statistics (Spearman’s rank correlation, non-parametric tests) using Python ver. 3.11.0 and Stata ver. 14.0. Thematic analysis was employed to analyze qualitative data using NVivo ver. 12.0.
Results
Findings indicated high relevance (mean=91.6, standard deviation [SD]=8.6), effectiveness (mean=86.1, SD=11.2), efficiency (mean=82.7, SD=10.2), and impact (mean=87.7, SD=9.9), with improved skills, confidence, and institutional service quality. However, sustainability had a lower score (mean=58.0, SD=11.1), reflecting challenges in follow-up support and resource allocation. Effectiveness strongly correlated with impact (ρ=0.746, P<0.001). The qualitative findings revealed that participants valued tailored training but highlighted barriers, such as language challenges and insufficient practical components. Alumni-led initiatives contributed to knowledge sharing, but limited resources constrained sustainability.
Conclusion
The Fellowship Program enhanced Tanzania’s health workforce capacity, but it requires localized curricula and strengthened alumni networks for sustainability. These findings provide actionable insights for improving similar programs globally, confirming the hypothesis that tailored training positively influences workforce and institutional outcomes.
Reliability and construct validation of the Blended Learning Usability Evaluation–Questionnaire with interprofessional clinicians in Canada: a methodological study  
Anish Kumar Arora, Jeff Myers, Tavis Apramian, Kulamakan Kulasegaram, Daryl Bainbridge, Hsien Seow
J Educ Eval Health Prof. 2025;22:5.   Published online January 16, 2025
DOI: https://doi.org/10.3352/jeehp.2025.22.5
  • 935 View
  • 180 Download
Abstract · PDF · Supplementary Material
Purpose
To generate Cronbach’s alpha and further mixed-methods construct validity evidence for the Blended Learning Usability Evaluation–Questionnaire (BLUE-Q).
Methods
Forty interprofessional clinicians completed the BLUE-Q after finishing a 3-month-long blended learning professional development program in Ontario, Canada. Reliability was assessed with Cronbach’s α for each of the 3 sections of the BLUE-Q and for all quantitative items together. Construct validity was evaluated through the framework of Grand-Guillaume-Perrenoud et al., which consists of 3 elements: congruence, convergence, and credibility. To compare quantitative and qualitative results, descriptive statistics, including means and standard deviations for each Likert-scale item of the BLUE-Q, were calculated.
Results
Cronbach’s α was 0.95 for the pedagogical usability section, 0.85 for the synchronous modality section, 0.93 for the asynchronous modality section, and 0.96 for all quantitative items together. Mean ratings (with standard deviations) were 4.77 (0.506) for pedagogy, 4.64 (0.654) for synchronous learning, and 4.75 (0.536) for asynchronous learning. Of the 239 qualitative comments received, 178 were identified as substantive, of which 88% were considered congruent and 79% were considered convergent with the high means. Among all congruent responses, 69% were considered confirming statements and 31% were considered clarifying statements, suggesting appropriate credibility. Analysis of the clarifying statements assisted in identifying 5 categories of suggestions for program improvement.
Conclusion
The BLUE-Q demonstrates high reliability and appropriate construct validity in the context of a blended learning program with interprofessional clinicians, making it a valuable tool for comprehensive program evaluation, quality improvement, and evaluative research in health professions education.
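The abstract above reports section-level Cronbach’s α values. As a minimal sketch of how such coefficients are computed, here is the standard formula applied to entirely hypothetical Likert ratings (not the BLUE-Q data):

```python
# Cronbach's alpha for one questionnaire section.
# The score matrix below is an illustrative assumption, not study data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # per-item variance
    total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses (6 respondents x 4 items)
scores = np.array([
    [5, 4, 5, 5],
    [4, 4, 4, 5],
    [5, 5, 5, 4],
    [3, 3, 4, 3],
    [4, 5, 4, 4],
    [5, 4, 5, 5],
])
print(round(cronbach_alpha(scores), 2))
```

Values of α near the 0.85–0.96 range reported above indicate that the items within a section vary together closely across respondents.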
Empathy and tolerance of ambiguity in medical students and doctors participating in art-based observational training at the Rijksmuseum in Amsterdam, the Netherlands: a before-and-after study  
Stella Anna Bult, Thomas van Gulik
J Educ Eval Health Prof. 2025;22:3.   Published online January 14, 2025
DOI: https://doi.org/10.3352/jeehp.2025.22.3
  • 1,279 View
  • 164 Download
Abstract · PDF · Supplementary Material
Purpose
This research presents an experimental study using validated questionnaires to quantitatively assess the outcomes of art-based observational training in medical students, residents, and specialists. The study tested the hypothesis that art-based observational training would lead to measurable effects on judgement skills (tolerance of ambiguity) and empathy in medical students and doctors.
Methods
An experimental cohort study with pre- and post-intervention assessments was conducted using validated questionnaires and qualitative evaluation forms to examine the outcomes of art-based observational training in medical students and doctors. Between December 2023 and June 2024, 15 art courses were conducted in the Rijksmuseum in Amsterdam. Participants were assessed on empathy using the Jefferson Scale of Empathy (JSE) and tolerance of ambiguity using the Tolerance of Ambiguity in Medical Students and Doctors (TAMSAD) scale.
Results
In total, 91 participants were included; 29 participants completed the JSE and 62 completed the TAMSAD scales. The results showed statistically significant post-test increases for mean JSE and TAMSAD scores (3.71 points for the JSE, ranging from 20 to 140, and 1.86 points for the TAMSAD, ranging from 0 to 100). The qualitative findings were predominantly positive.
Conclusion
The results suggest that incorporating art-based observational training in medical education improves empathy and tolerance of ambiguity. This study highlights the importance of art-based observational training in medical education in the professional development of medical students and doctors.
Pharmacy students’ perspective on remote flipped classrooms in Malaysia: a qualitative study  
Wei Jin Wong, Shaun Wen Huey Lee, Ronald Fook Seng Lee
J Educ Eval Health Prof. 2025;22:2.   Published online January 14, 2025
DOI: https://doi.org/10.3352/jeehp.2025.22.2
  • 791 View
  • 174 Download
Abstract · PDF · Supplementary Material
Purpose
This study aimed to explore pharmacy students’ perceptions of remote flipped classrooms in Malaysia, focusing on their learning experiences and identifying areas for potential improvement to inform future educational strategies.
Methods
A qualitative approach was employed, utilizing inductive thematic analysis. Twenty Bachelor of Pharmacy students (18 women, 2 men; age range, 19–24 years) from Monash University participated in 8 focus group discussions over 2 rounds during the coronavirus disease 2019 pandemic. Participants were recruited via convenience sampling. The focus group discussions, led by experienced academics, were conducted in English via Zoom, recorded, and transcribed for analysis using NVivo. Themes were identified through emergent coding and iterative discussions to ensure thematic saturation.
Results
Five major themes emerged: flexibility, communication, technological challenges, skill-based learning challenges, and time-based effects. Students appreciated the flexibility of accessing and reviewing pre-class materials at their convenience. Increased engagement through anonymous question submission was noted, yet communication difficulties and lack of non-verbal cues in remote workshops were significant drawbacks. Technological issues, such as internet connectivity problems, hindered learning, especially during assessments. Skill-based learning faced challenges in remote settings, including lab activities and clinical examinations. Additionally, prolonged remote learning led to feelings of isolation, fatigue, and a desire to return to in-person interactions.
Conclusion
Remote flipped classrooms offer flexibility and engagement benefits but present notable challenges related to communication, technology, and skill-based learning. To improve remote education, institutions should integrate robust technological support, enhance communication strategies, and incorporate virtual simulations for practical skills. Balancing asynchronous and synchronous methods while addressing academic success and socioemotional wellness is essential for effective remote learning environments.
Editorial
Halted medical education and medical residents’ training in Korea, journal metrics, and appreciation to reviewers and volunteers
Sun Huh
J Educ Eval Health Prof. 2025;22:1.   Published online January 13, 2025
DOI: https://doi.org/10.3352/jeehp.2025.22.1
  • 988 View
  • 95 Download
  • 1 Web of Science
  • 2 Crossref
PDF · Supplementary Material

Citations

Citations to this article as recorded by  
  • How a medical journal can survive the freezing era of article production in Korea, and highlights in this issue of the Ewha Medical Journal
    Ji Yeon Byun
    The Ewha Medical Journal.2025;[Epub]     CrossRef
  • Korea’s 2024 reduction in medical research output amid physician residents’ resignation
    Jeong-Ju Yoo, Hyun Bin Choi, Young-Seok Kim, Sang Gyune Kim
    Ewha Medical Journal.2025; 48(2): e36.     CrossRef
Research articles
Inter-rater reliability and content validity of the measurement tool for portfolio assessments used in the Introduction to Clinical Medicine course at Ewha Womans University College of Medicine: a methodological study  
Dong-Mi Yoo, Jae Jin Han
J Educ Eval Health Prof. 2024;21:39.   Published online December 10, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.39
  • 1,123 View
  • 172 Download
Abstract · PDF · Supplementary Material
Purpose
This study aimed to examine the reliability and validity of a measurement tool for portfolio assessments in medical education. Specifically, it investigated scoring consistency among raters and assessment criteria appropriateness according to an expert panel.
Methods
A cross-sectional observational study was conducted from September to December 2018 for the Introduction to Clinical Medicine course at the Ewha Womans University College of Medicine. Data were collected for 5 randomly selected portfolios scored by a gold-standard rater and 6 trained raters. An expert panel assessed the validity of 12 assessment items using the content validity index (CVI). Statistical analysis included Pearson correlation coefficients for rater alignment, the intraclass correlation coefficient (ICC) for inter-rater reliability, and the CVI for item-level validity.
Results
Rater 1 had the highest Pearson correlation (0.8916) with the gold-standard rater, while Rater 5 had the lowest (0.4203). The ICC for all raters was 0.3821, improving to 0.4415 after excluding Raters 1 and 5, indicating a 15.6% reliability increase. All assessment items met the CVI threshold of ≥0.75, with some achieving a perfect score (CVI=1.0). However, items like “sources” and “level and degree of performance” showed lower validity (CVI=0.72).
Conclusion
The present measurement tool for portfolio assessments demonstrated moderate reliability and strong validity, supporting its use as a credible tool. For a more reliable portfolio assessment, more faculty training is needed.
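The portfolio study above relies on two simple statistics: the item-level content validity index (I-CVI) from expert relevance ratings, and Pearson correlations between each trained rater and the gold-standard rater. A small sketch with made-up ratings (the numbers are assumptions, not the study’s data):

```python
# I-CVI and rater-vs-gold-standard Pearson correlation.
# All ratings below are illustrative assumptions.
import numpy as np

def item_cvi(ratings, relevant=(3, 4)):
    """I-CVI: proportion of experts rating the item 3 or 4 on a 4-point scale."""
    return np.isin(np.asarray(ratings), relevant).mean()

def pearson_r(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

# 9 hypothetical experts rate one assessment item on a 4-point relevance scale
print(item_cvi([4, 4, 3, 4, 3, 4, 2, 4, 3]))  # 8 of 9 experts rated it 3-4

# Hypothetical portfolio scores: gold-standard rater vs. one trained rater
gold  = [18, 22, 15, 25, 20]
rater = [17, 23, 14, 24, 21]
print(round(pearson_r(gold, rater), 3))
```

An I-CVI of at least 0.75, the threshold used in the study, means three-quarters or more of the experts judged the item relevant.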
Development and validation of a measurement tool to assess student perceptions of using real patients in physical therapy education at the Rocky Mountain University, the United States: a methodological study  
Stacia Hall Thompson, Hina Garg, Mary Shotwell, Michelle Webb
J Educ Eval Health Prof. 2024;21:30.   Published online November 7, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.30
  • 839 View
  • 157 Download
Abstract · PDF · Supplementary Material
Purpose
This study aimed to develop and validate the Student Perceptions of Real Patient Use in Physical Therapy Education (SPRP-PTE) survey to assess physical therapy student (SPT) perceptions regarding real patient use in didactic education.
Methods
This cross-sectional observational study developed a 48-item survey and tested the survey on 130 SPTs. Face and content validity were determined by an expert review and content validity index (CVI). Construct validity and internal consistency reliability were determined via exploratory factor analysis (EFA) and Cronbach’s α.
Results
Three main constructs were identified (value, satisfaction, and confidence), each having 4 subconstruct components (overall, cognitive, psychomotor, and affective learning). Expert review demonstrated adequate face and content validity (CVI=96%). The initial EFA of the 48-item survey revealed items with inconsistent loadings and low correlations, leading to the removal of 18 items. An EFA of the 30-item survey demonstrated 1-factor loadings of all survey constructs except satisfaction and the entire survey. All constructs had adequate internal consistency (Cronbach’s α >0.85).
Conclusion
The SPRP-PTE survey provides a reliable and valid way to assess student perceptions of real patient use. Future studies are encouraged to validate the SPRP-PTE survey further.
Educational/Faculty development material
The performance of ChatGPT-4.0o in medical imaging evaluation: a cross-sectional study  
Elio Stefan Arruzza, Carla Marie Evangelista, Minh Chau
J Educ Eval Health Prof. 2024;21:29.   Published online October 31, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.29
  • 1,882 View
  • 262 Download
  • 4 Web of Science
  • 4 Crossref
Abstract · PDF · Supplementary Material
This study investigated the performance of ChatGPT-4.0o in evaluating the quality of positioning in radiographic images. Thirty radiographs depicting a variety of knee, elbow, ankle, hand, pelvis, and shoulder projections were produced using anthropomorphic phantoms and uploaded to ChatGPT-4.0o. The model was prompted to identify any positioning errors with justification and to offer improvements. A panel of radiographers assessed the responses against established positioning criteria on a grading scale of 1–5. In only 20% of projections did ChatGPT-4.0o correctly recognize all errors with justifications and offer correct suggestions for improvement. The most common score was 3 (9 cases, 30%), wherein the model recognized at least 1 specific error and provided a correct improvement. The mean score was 2.9. Overall, accuracy was low, with most projections receiving only partially correct solutions. The findings reinforce the importance of robust radiography education and clinical experience.

Citations

Citations to this article as recorded by  
  • Evaluating Large Language Models for Burning Mouth Syndrome Diagnosis
    Takayuki Suga, Osamu Uehara, Yoshihiro Abiko, Akira Toyofuku
    Journal of Pain Research.2025; Volume 18: 1387.     CrossRef
  • Evaluating the performance of GPT-3.5, GPT-4, and GPT-4o in the Chinese National Medical Licensing Examination
    Dingyuan Luo, Mengke Liu, Runyuan Yu, Yulian Liu, Wenjun Jiang, Qi Fan, Naifeng Kuang, Qiang Gao, Tao Yin, Zuncheng Zheng
    Scientific Reports.2025;[Epub]     CrossRef
  • Conversational LLM Chatbot ChatGPT-4 for Colonoscopy Boston Bowel Preparation Scoring: An Artificial Intelligence-to-Head Concordance Analysis
    Raffaele Pellegrino, Alessandro Federico, Antonietta Gerarda Gravina
    Diagnostics.2024; 14(22): 2537.     CrossRef
  • Effectiveness of ChatGPT-4o in developing continuing professional development plans for graduate radiographers: a descriptive study
    Minh Chau, Elio Stefan Arruzza, Kelly Spuur
    Journal of Educational Evaluation for Health Professions.2024; 21: 34.     CrossRef
Research articles
A new performance evaluation indicator for the LEE Jong-wook Fellowship Program of Korea Foundation for International Healthcare to better assess its long-term educational impacts: a Delphi study  
Minkyung Oh, Bo Young Yoon
J Educ Eval Health Prof. 2024;21:27.   Published online October 2, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.27
  • 1,222 View
  • 253 Download
  • 1 Web of Science
  • 1 Crossref
Abstract · PDF · Supplementary Material
Purpose
The Dr. LEE Jong-wook Fellowship Program, established by the Korea Foundation for International Healthcare (KOFIH), aims to strengthen healthcare capacity in partner countries. The aim of the study was to develop new performance evaluation indicators for the program to better assess long-term educational impact across various courses and professional roles.
Methods
A 3-stage process was employed. First, a literature review of established evaluation models (Kirkpatrick’s 4 levels, context/input/process/product evaluation model, Organization for Economic Cooperation and Development Assistance Committee criteria) was conducted to devise evaluation criteria. Second, these criteria were validated via a 2-round Delphi survey with 18 experts in training projects from May 2021 to June 2021. Third, the relative importance of the evaluation criteria was determined using the analytic hierarchy process (AHP), calculating weights and ensuring consistency through the consistency index and consistency ratio (CR), with CR values below 0.1 indicating acceptable consistency.
Results
The literature review led to a combined evaluation model, resulting in 4 evaluation areas, 20 items, and 92 indicators. The Delphi surveys confirmed the validity of these indicators, with content validity ratio values exceeding 0.444. The AHP analysis assigned weights to each indicator, and CR values below 0.1 indicated consistency. The final set of evaluation indicators was confirmed through a workshop with KOFIH and adopted as the new evaluation tool.
Conclusion
The developed evaluation framework provides a comprehensive tool for assessing the long-term outcomes of the Dr. LEE Jong-wook Fellowship Program. It enhances evaluation capabilities and supports improvements in the training program’s effectiveness and international healthcare collaboration.
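The weighting step described above, the analytic hierarchy process with a consistency-ratio check (CR<0.1), can be sketched as follows. The 3×3 pairwise-comparison matrix is an illustrative assumption, not the study’s expert judgments:

```python
# Minimal AHP sketch: priority weights from a reciprocal pairwise-comparison
# matrix via the principal eigenvector, plus the consistency ratio.
import numpy as np

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}  # random consistency indices

def ahp_weights(A):
    A = np.asarray(A, float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)          # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                         # normalized priority weights
    lam_max = eigvals[k].real
    ci = (lam_max - n) / (n - 1)         # consistency index
    cr = ci / RI[n]                      # consistency ratio (CR < 0.1 is acceptable)
    return w, cr

# Hypothetical pairwise judgments for 3 evaluation criteria (Saaty's 1-9 scale)
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
w, cr = ahp_weights(A)
print(np.round(w, 3), round(cr, 3))
```

A CR below 0.1, as required in the study, indicates the expert’s pairwise judgments are close to internally consistent.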

Citations

Citations to this article as recorded by  
  • Halted medical education and medical residents’ training in Korea, journal metrics, and appreciation to reviewers and volunteers
    Sun Huh
    Journal of Educational Evaluation for Health Professions.2025; 22: 1.     CrossRef
The effect of simulation-based training on problem-solving skills, critical thinking skills, and self-efficacy among nursing students in Vietnam: a before-and-after study  
Tran Thi Hoang Oanh, Luu Thi Thuy, Ngo Thi Thu Huyen
J Educ Eval Health Prof. 2024;21:24.   Published online September 23, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.24
  • 3,266 View
  • 341 Download
  • 2 Crossref
Abstract · PDF · Supplementary Material
Purpose
This study investigated the effect of simulation-based training on nursing students’ problem-solving skills, critical thinking skills, and self-efficacy.
Methods
A single-group pretest and posttest study was conducted among 173 second-year nursing students at a public university in Vietnam from May 2021 to July 2022. Each student participated in the adult nursing preclinical practice course, which utilized a moderate-fidelity simulation teaching approach. Instruments including the Personal Problem-Solving Inventory Scale, Critical Thinking Skills Questionnaire, and General Self-Efficacy Questionnaire were employed to measure participants’ problem-solving skills, critical thinking skills, and self-efficacy. Data were analyzed using descriptive statistics and the paired-sample t-test with the significance level set at P<0.05.
Results
The mean score on the Personal Problem-Solving Inventory was lower at posttest (127.24±12.11) than at pretest (131.42±16.95); because lower scores on this inventory indicate stronger perceived problem-solving ability, this suggests an improvement in participants’ problem-solving skills (t172=2.55, P=0.011). There was no statistically significant difference in critical thinking skills between the pretest and posttest (P=0.854). Self-efficacy among nursing students showed a substantial increase from the pretest (27.91±5.26) to the posttest (28.71±3.81), with t172=–2.26 and P=0.025.
Conclusion
The results suggest that simulation-based training can improve problem-solving skills and increase self-efficacy among nursing students. Therefore, the integration of simulation-based training in nursing education is recommended.
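The single-group pretest/posttest comparison above uses a paired-sample t-test. A minimal sketch with hypothetical scores for 8 students (not the study’s data; on this inventory, lower scores indicate better perceived problem solving):

```python
# Paired-sample t-test on hypothetical pre/post problem-solving scores.
from scipy import stats

pre  = [135, 128, 140, 131, 126, 133, 138, 129]  # assumed pretest scores
post = [130, 125, 133, 128, 127, 129, 131, 126]  # assumed posttest scores

t, p = stats.ttest_rel(pre, post)
print(f"t={t:.2f}, P={p:.4f}")
```

The paired test is appropriate here because each student contributes both a pretest and a posttest score, so the analysis operates on within-student differences.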

Citations

Citations to this article as recorded by  
  • The Effect of Work-Based Learning on Employability Skills: The Role of Self-Efficacy and Vocational Identity
    Suyitno Suyitno, Muhammad Nurtanto, Dwi Jatmoko, Yuli Widiyono, Riawan Yudi Purwoko, Fuad Abdillah, Setuju Setuju, Yudan Hermawan
    European Journal of Educational Research.2025; 14(1): 309.     CrossRef
  • Interactive Success: Empowering Young Minds through Games-Based Learning at NADI PPR Intan Baiduri
    Mohamad Zaki Mohamad Saad, Shafinah Kamarudin, Zuraini Zukiffly, Siti Soleha Zuaimi
    Progress in Computers and Learning .2025; 2(1): 29.     CrossRef
Review
Insights into undergraduate medical student selection tools: a systematic review and meta-analysis  
Pin-Hsiang Huang, Arash Arianpoor, Silas Taylor, Jenzel Gonzales, Boaz Shulruf
J Educ Eval Health Prof. 2024;21:22.   Published online September 12, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.22
Correction in: J Educ Eval Health Prof 2024;21(0):41
  • 1,955 View
  • 250 Download
  • 1 Web of Science
  • 1 Crossref
Abstract · PDF · Supplementary Material
Purpose
Evaluating medical school selection tools is vital for evidence-based student selection. With previous reviews revealing knowledge gaps, this meta-analysis offers insights into the effectiveness of these selection tools.
Methods
A systematic review and meta-analysis were conducted applying the following criteria: peer-reviewed articles available in English, published from 2010 onward, that include empirical data linking performance in selection tools with assessment and dropout outcomes of undergraduate-entry medical programs. Systematic reviews, meta-analyses, general opinion pieces, and commentaries were excluded. Effect sizes (ESs) for the prediction of academic and clinical performance within and by the end of the medical program were extracted, and pooled ESs are presented.
Results
Sixty-seven out of 2,212 articles were included, which yielded 236 ESs. Previous academic achievement predicted medical program academic performance (Cohen’s d=0.697 in the early program; 0.619 at the end of the program) and clinical exams (0.545 at the end of the program). Within aptitude tests, verbal reasoning and quantitative reasoning predicted academic achievement in the early program and in the last years (0.704 and 0.643, respectively). Overall aptitude tests predicted academic achievement in both the early and last years (0.550 and 0.371, respectively). Neither panel interviews, multiple mini-interviews, nor situational judgement tests (SJTs) yielded a statistically significant pooled ES.
Conclusion
Current evidence suggests that learning outcomes are predicted by previous academic achievement and aptitude tests. The predictive value of SJTs, along with topics such as selection algorithms, interview features (e.g., the content of the questions), and the way interviewers’ reports are used, warrants further research.
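The pooled ESs reported above come from aggregating per-study Cohen’s d values. A common way to do this is fixed-effect inverse-variance weighting; the sketch below uses three made-up (d, n1, n2) studies, not values extracted from the review:

```python
# Fixed-effect inverse-variance pooling of Cohen's d.
# The three studies below are illustrative assumptions.
import math

def d_variance(d, n1, n2):
    """Approximate sampling variance of Cohen's d for two groups."""
    return (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))

def pooled_d(studies):
    """Pooled d over (d, n1, n2) tuples, weighting each study by 1/variance."""
    weights = [1 / d_variance(d, n1, n2) for d, n1, n2 in studies]
    num = sum(w * d for w, (d, _, _) in zip(weights, studies))
    return num / sum(weights)

studies = [(0.70, 120, 120), (0.55, 80, 85), (0.62, 200, 210)]
print(round(pooled_d(studies), 3))
```

Larger, more precise studies receive proportionally more weight, so the pooled estimate sits closest to the d values with the smallest sampling variance.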

Citations

Citations to this article as recorded by  
  • Notice of Retraction and Replacement: Insights into undergraduate medical student selection tools: a systematic review and meta-analysis
    Pin-Hsiang Huang, Arash Arianpoor, Silas Taylor, Jenzel Gonzales, Boaz Shulruf
    Journal of Educational Evaluation for Health Professions.2024; 21: 41.     CrossRef
