Insights into undergraduate medical student selection tools: a systematic review and meta-analysis
Pin-Hsiang Huang, Arash Arianpoor, Silas Taylor, Jenzel Gonzales, Boaz Shulruf
J Educ Eval Health Prof. 2024;21:22. Published online September 12, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.22
Correction in: J Educ Eval Health Prof 2024;21:41
Purpose: Evaluating medical school selection tools is vital for evidence-based student selection. With previous reviews revealing knowledge gaps, this meta-analysis offers insights into the effectiveness of these selection tools.
Methods: A systematic review and meta-analysis were conducted applying the following criteria: peer-reviewed articles available in English, published from 2010 onward, that include empirical data linking performance in selection tools with assessment and dropout outcomes of undergraduate-entry medical programs. Systematic reviews, meta-analyses, general opinion pieces, and commentaries were excluded. Effect sizes (ESs) for the prediction of academic and clinical performance during and at the end of the medical program were extracted, and pooled ESs were calculated (a pooling sketch follows this abstract).
Results: Sixty-seven out of 2,212 articles were included, yielding 236 ESs. Previous academic achievement predicted academic performance in the medical program (Cohen's d=0.697 early in the program; 0.619 at the end of the program) and clinical exam performance (0.545 at the end of the program). Among aptitude test components, verbal and quantitative reasoning predicted academic achievement early in the program and in the final years (0.704 and 0.643, respectively). Overall aptitude tests predicted academic achievement in both the early and final years (0.550 and 0.371, respectively). Neither panel interviews, multiple mini-interviews, nor situational judgement tests (SJTs) yielded a statistically significant pooled ES.
Conclusion: Current evidence suggests that learning outcomes are predicted by previous academic achievement and aptitude tests. The predictive value of SJTs, together with topics such as selection algorithms, interview features (e.g., the content of the questions), and the way interviewers' reports are used, warrants further research.
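As a companion to the pooling step described in the Methods, the sketch below shows one common way to pool standardized mean differences: an inverse-variance random-effects model with the DerSimonian-Laird estimate of between-study variance. This is a generic illustration under stated assumptions, not the review's exact procedure, and the study effect sizes and sample sizes are invented for demonstration.

```python
import numpy as np

def pool_cohens_d(d, n1, n2):
    """Random-effects pooling of Cohen's d (DerSimonian-Laird).

    d, n1, n2: per-study effect sizes and group sample sizes.
    Returns the pooled effect size and its 95% confidence interval.
    """
    d, n1, n2 = map(np.asarray, (d, n1, n2))
    # Within-study variance of Cohen's d (large-sample approximation)
    v = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    w = 1 / v                                  # fixed-effect weights
    d_fixed = np.sum(w * d) / np.sum(w)
    # DerSimonian-Laird between-study variance (tau^2)
    q = np.sum(w * (d - d_fixed) ** 2)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(d) - 1)) / c)
    w_re = 1 / (v + tau2)                      # random-effects weights
    d_pooled = np.sum(w_re * d) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    return d_pooled, (d_pooled - 1.96 * se, d_pooled + 1.96 * se)

# Hypothetical studies linking prior academic achievement to early-program grades
d_pooled, ci = pool_cohens_d(d=[0.8, 0.6, 0.7],
                             n1=[120, 85, 200], n2=[118, 90, 195])
print(f"pooled d = {d_pooled:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```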
Citations to this article as recorded by:
- Notice of Retraction and Replacement: Insights into undergraduate medical student selection tools: a systematic review and meta-analysis. Pin-Hsiang Huang, Arash Arianpoor, Silas Taylor, Jenzel Gonzales, Boaz Shulruf. Journal of Educational Evaluation for Health Professions. 2024;21:41.
What impacts students' satisfaction the most from Medicine Student Experience Questionnaire in Australia: a validity study
Pin-Hsiang Huang, Gary Velan, Greg Smith, Melanie Fentoullis, Sean Edward Kennedy, Karen Jane Gibson, Kerry Uebel, Boaz Shulruf
J Educ Eval Health Prof. 2023;20:2. Published online January 18, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.2
Purpose: This study evaluated the validity of student feedback derived from the Medicine Student Experience Questionnaire (MedSEQ), as well as the predictors of students' satisfaction with the Medicine program.
Methods: Data from MedSEQ administrations in the University of New South Wales Medicine program in 2017, 2019, and 2021 were analyzed. Confirmatory factor analysis (CFA) and Cronbach's α were used to assess the construct validity and reliability of MedSEQ, respectively. Hierarchical multiple linear regressions were used to identify the factors with the greatest impact on students' overall satisfaction with the program (a regression sketch follows this abstract).
Results: A total of 1,719 students (34.50%) responded to MedSEQ. CFA showed good fit indices (root mean square error of approximation=0.051; comparative fit index=0.939; chi-square/degrees of freedom=6.429). All factors yielded good (α>0.7) or very good (α>0.8) reliability, except the "online resources" factor, which had acceptable reliability (α=0.687). A multiple linear regression model with only demographic characteristics explained 3.8% of the variance in students' overall satisfaction, whereas the model adding the 8 MedSEQ domains explained 40%, indicating that 36.2% of the variance was attributable to students' experience across the 8 domains. Three domains had the strongest impact on overall satisfaction: "being cared for," "satisfaction with teaching," and "satisfaction with assessment" (β=0.327, 0.148, and 0.148, respectively; all P<0.001).
Conclusion: MedSEQ has good construct validity and high reliability, reflecting students' satisfaction with the Medicine program. The key factors impacting students' satisfaction are the perception of being cared for, quality teaching irrespective of the mode of delivery, and fair assessment tasks that enhance learning.
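The hierarchical regression logic from the Methods (demographics entered first, then the 8 MedSEQ domains, with the incremental R² attributed to student experience) can be sketched as follows. The file name, demographic variables, and domain column names are placeholders; only three of the 8 domains are named in the abstract, so the remainder are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per respondent, with demographics,
# 8 MedSEQ domain scores, and an overall satisfaction rating.
df = pd.read_csv("medseq_responses.csv")  # placeholder file name

# Step 1: demographics only (placeholder demographic variables)
m1 = smf.ols("overall_satisfaction ~ C(gender) + C(stage) + C(international)",
             data=df).fit()

# Step 2: demographics plus the 8 MedSEQ domains (names partly hypothetical)
domains = ["being_cared_for", "teaching", "assessment", "online_resources",
           "learning_community", "facilities", "feedback", "organisation"]
m2 = smf.ols("overall_satisfaction ~ C(gender) + C(stage) + C(international) + "
             + " + ".join(domains), data=df).fit()

# Incremental variance explained by the student experience domains
print(f"R2 step 1 (demographics): {m1.rsquared:.3f}")
print(f"R2 step 2 (+ domains):    {m2.rsquared:.3f}")
print(f"Delta R2 attributable to the 8 domains: {m2.rsquared - m1.rsquared:.3f}")
```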
Citations to this article as recorded by:
- Mental health and quality of life across 6 years of medical training: A year-by-year analysis. Natalia de Castro Pecci Maddalena, Alessandra Lamas Granero Lucchetti, Ivana Lucia Damasio Moutinho, Oscarina da Silva Ezequiel, Giancarlo Lucchetti. International Journal of Social Psychiatry. 2024;70(2):298.
Development and validation of the student ratings in clinical teaching scale in Australia: a methodological study
Pin-Hsiang Huang, Anthony John O'Sullivan, Boaz Shulruf
J Educ Eval Health Prof. 2023;20:26. Published online September 5, 2023
DOI: https://doi.org/10.3352/jeehp.2023.20.26
Purpose: This study aimed to devise a valid instrument for assessing clinical students' perceptions of teaching practices.
Methods: A new tool was developed based on a meta-analysis of effective clinical teaching-learning factors. Seventy-nine items were generated, each rated on a frequency scale (never to always). The tool was administered to year 2, 3, and 6 medical students at the University of New South Wales. Exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) were conducted to establish the tool's construct validity and goodness of fit, and Cronbach's α was used to assess reliability (a reliability sketch follows this abstract).
Results: In total, 352 students (44.2%) completed the questionnaire. The EFA identified 4 factors: student-centered learning, problem-solving learning, self-directed learning, and visual technology (reliability, 0.77 to 0.89). CFA showed acceptable goodness of fit (chi-square P<0.01; comparative fit index=0.930; Tucker-Lewis index=0.917; root mean square error of approximation=0.069; standardized root mean square residual=0.06).
Conclusion: The established tool, Student Ratings in Clinical Teaching (STRICT), is a valid and reliable measure of how students perceive clinical teaching efficacy. STRICT measures the frequency of teaching practices to mitigate acquiescence and social desirability biases. Clinical teachers may use the tool to adapt their teaching practices with more active learning activities and to utilize visual technology to facilitate clinical learning efficacy. Clinical educators may apply STRICT to assess how these teaching practices are implemented in current clinical settings.
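The per-factor reliability figures reported above can be reproduced with the standard Cronbach's α formula, sketched below for a respondents-by-items score matrix. The responses here are simulated around a shared latent trait; they are not the STRICT data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items in the factor
    item_vars = items.var(axis=0, ddof=1)      # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated responses: 100 students rating 5 items on a 1-5 frequency scale,
# correlated through a shared latent trait so alpha is non-trivial.
rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 1))
items = np.clip(np.round(3 + latent + rng.normal(scale=0.8, size=(100, 5))), 1, 5)
print(f"alpha = {cronbach_alpha(items):.3f}")
```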
Equal Z standard-setting method to estimate the minimum number of panelists for a medical school's objective structured clinical examination in Taiwan: a simulation study
Ying-Ying Yang, Pin-Hsiang Huang, Ling-Yu Yang, Chia-Chang Huang, Chih-Wei Liu, Shiau-Shian Huang, Chen-Huan Chen, Fa-Yauh Lee, Shou-Yen Kao, Boaz Shulruf
J Educ Eval Health Prof. 2022;19:27. Published online October 17, 2022
DOI: https://doi.org/10.3352/jeehp.2022.19.27
Purpose: Undertaking a standard-setting exercise is a common method for setting pass/fail cut scores for high-stakes examinations. The recently introduced equal Z standard-setting method (EZ method) has been found to be a valid and effective alternative to the commonly used Angoff and Hofstee methods and their variants. The current study aims to estimate the minimum number of panelists required to obtain acceptable and reliable cut scores using the EZ method.
Methods: The primary data were extracted from 31 panelists who used the EZ method to set cut scores for a 12-station final objective structured clinical examination (OSCE) at a medical school in Taiwan. For this study, a new data set composed of 1,000 random samples of different panel sizes, ranging from 5 to 25 panelists, was established and analyzed (a resampling sketch follows this abstract). Analysis of variance was performed to measure the differences in the cut scores set by the sampled groups across all sizes within each station.
Results: On average, panels of 10 or more experts yielded cut scores with confidence of at least 90%, and panels of 15 or more experts yielded cut scores with confidence of at least 95%. No significant differences in cut scores associated with panel size were identified for panels of 5 or more experts.
Conclusion: The EZ method was found to be valid and feasible. Less than an hour was required for 12 panelists to assess 12 OSCE stations, and calculating the cut scores required only basic statistical skills.
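The resampling design in the Methods can be sketched as follows: draw many random panels of each size from the full panel and examine how the panel-level cut score varies. The per-panelist cut scores are simulated placeholders (the EZ method's own standard-setting step is not reproduced), and the interval width printed here is an illustrative stability proxy rather than the paper's exact confidence statistic.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated per-panelist cut scores for one OSCE station (placeholder values)
panelist_cuts = rng.normal(loc=60, scale=5, size=31)

n_samples = 1000
for size in range(5, 26, 5):                     # panel sizes 5, 10, ..., 25
    # Draw 1,000 random panels of this size and take each panel's mean cut
    samples = np.array([
        rng.choice(panelist_cuts, size=size, replace=False).mean()
        for _ in range(n_samples)
    ])
    # Width of the middle 90% of sampled cut scores: narrower = more stable
    lo, hi = np.percentile(samples, [5, 95])
    print(f"panel size {size:2d}: mean cut {samples.mean():.2f}, "
          f"90% interval width {hi - lo:.2f}")
```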