Technical report
-
Increased accessibility of computer-based testing for residency application to a hospital in Brazil with item characteristics comparable to paper-based testing: a psychometric study
-
Marcos Carvalho Borges, Luciane Loures Santos, Paulo Henrique Manso, Elaine Christine Dantas Moisés, Pedro Soler Coltro, Priscilla Costa Fonseca, Paulo Roberto Alves Gentil, Rodrigo de Carvalho Santana, Lucas Faria Rodrigues, Benedito Carlos Maciel, Hilton Marcos Alves Ricz
-
J Educ Eval Health Prof. 2024;21:32. Published online November 11, 2024
-
DOI: https://doi.org/10.3352/jeehp.2024.21.32
-
-
Abstract
Purpose
With the coronavirus disease 2019 pandemic, online high-stakes exams have become a viable alternative. This study evaluated the feasibility of computer-based testing (CBT) for medical residency applications in Brazil and its impacts on item quality and applicants’ access compared to paper-based testing.
Methods
In 2020, an online CBT was conducted at the Ribeirao Preto Clinical Hospital in Brazil. In total, 120 multiple-choice items were constructed. Two years later, the exam was administered as a paper-based test. Item construction processes were similar for both exams. Difficulty and discrimination indexes, point-biserial coefficients, and Cronbach’s α coefficients were calculated based on classical test theory, and difficulty, discrimination, and guessing parameters were estimated based on item response theory. Internet stability for applicants was monitored.
Results
In 2020, 4,846 individuals (57.1% female, mean age of 26.64±3.37 years) applied to the residency program, versus 2,196 individuals (55.2% female, mean age of 26.47±3.20 years) in 2022. For CBT, there was an increase of 2,650 applicants (120.7%), albeit with significant differences in demographic characteristics. There was a significant increase in applicants from more distant and lower-income Brazilian regions, such as the North (5.6% vs. 2.7%) and Northeast (16.9% vs. 9.0%). No significant differences were found in difficulty and discrimination indexes, point-biserial coefficients, and Cronbach’s α coefficients between the 2 exams.
Conclusion
Online CBT with multiple-choice questions was a viable format for a residency application exam, improving accessibility without compromising exam integrity and quality.
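Note: the psychometric quantities named in the Methods above are standard; the formulas below are a general reference (with notation introduced here, not taken from the article). For item i answered by N examinees, with u_ij = 1 if examinee j answers item i correctly and X_j the total score of examinee j:

    % Difficulty index: proportion of examinees answering item i correctly
    p_i = \frac{1}{N} \sum_{j=1}^{N} u_{ij}

    % Point-biserial coefficient: correlation between item i and the total score,
    % where \bar{X}_1 and \bar{X}_0 are mean totals of examinees answering item i correctly and incorrectly
    r_{pb,i} = \frac{\bar{X}_1 - \bar{X}_0}{s_X} \sqrt{p_i (1 - p_i)}

    % Cronbach's alpha for k items, with item variances \sigma_i^2 and total-score variance \sigma_X^2
    \alpha = \frac{k}{k-1} \left( 1 - \frac{\sum_{i=1}^{k} \sigma_i^2}{\sigma_X^2} \right)

    % 3-parameter logistic IRT model: discrimination a_i, difficulty b_i, guessing c_i
    P_i(\theta) = c_i + (1 - c_i) \, \frac{1}{1 + e^{-a_i (\theta - b_i)}}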
Research article
-
The effect of simulation-based training on problem-solving skills, critical thinking skills, and self-efficacy among nursing students in Vietnam: a before-and-after study
-
Tran Thi Hoang Oanh, Luu Thi Thuy, Ngo Thi Thu Huyen
-
J Educ Eval Health Prof. 2024;21:24. Published online September 23, 2024
-
DOI: https://doi.org/10.3352/jeehp.2024.21.24
-
-
Abstract
Purpose
This study investigated the effect of simulation-based training on nursing students’ problem-solving skills, critical thinking skills, and self-efficacy.
Methods
A single-group pretest and posttest study was conducted among 173 second-year nursing students at a public university in Vietnam from May 2021 to July 2022. Each student participated in the adult nursing preclinical practice course, which utilized a moderate-fidelity simulation teaching approach. Instruments including the Personal Problem-Solving Inventory Scale, Critical Thinking Skills Questionnaire, and General Self-Efficacy Questionnaire were employed to measure participants’ problem-solving skills, critical thinking skills, and self-efficacy. Data were analyzed using descriptive statistics and the paired-sample t-test with the significance level set at P<0.05.
Results
The mean score on the Personal Problem-Solving Inventory decreased from 131.42±16.95 at pretest to 127.24±12.11 at posttest; since lower scores on this inventory indicate better problem-solving, this suggests an improvement in participants’ problem-solving skills (t(172)=2.55, P=0.011). There was no statistically significant difference in critical thinking skills between the pretest and posttest (P=0.854). Self-efficacy among nursing students showed a significant increase from the pretest (27.91±5.26) to the posttest (28.71±3.81) (t(172)=-2.26, P=0.025).
Conclusion
The results suggest that simulation-based training can improve problem-solving skills and increase self-efficacy among nursing students. Therefore, the integration of simulation-based training in nursing education is recommended.
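The before-and-after comparisons above use a paired-sample t-test; here is a minimal sketch of that analysis in Python, using invented numbers of roughly the reported magnitude rather than the study’s data.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 173                                       # number of students in the study
    pre = rng.normal(27.9, 5.3, size=n)           # hypothetical pretest self-efficacy scores
    post = pre + rng.normal(0.8, 4.0, size=n)     # hypothetical posttest scores

    t_stat, p_value = stats.ttest_rel(post, pre)  # paired-sample t-test
    print(f"t({n - 1}) = {t_stat:.2f}, P = {p_value:.3f}")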
Review
-
Immersive simulation in nursing and midwifery education: a systematic review
-
Lahoucine Ben Yahya, Aziz Naciri, Mohamed Radid, Ghizlane Chemsi
-
J Educ Eval Health Prof. 2024;21:19. Published online August 8, 2024
-
DOI: https://doi.org/10.3352/jeehp.2024.21.19
-
-
Abstract
Purpose
Immersive simulation is an innovative training approach in health education that enhances student learning. This study examined its impact on engagement, motivation, and academic performance in nursing and midwifery students.
Methods
A systematic search was conducted in 4 databases (Scopus, PubMed, Web of Science, and ScienceDirect) following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. The review protocol was pre-registered in the PROSPERO registry. The quality of the included studies was assessed using the Medical Education Research Study Quality Instrument.
Results
Out of 90 identified studies, 11 were included in the present review, involving 1,090 participants. Four out of 5 studies observed high post-test engagement scores in the intervention groups. Additionally, 5 out of 6 studies that evaluated motivation found higher post-test motivational scores in the intervention groups than in control groups using traditional approaches. Furthermore, among the 8 out of 11 studies that evaluated academic performance during immersive simulation training, 5 reported significant differences (P<0.001) in favor of the students in the intervention groups.
Conclusion
This review indicates that immersive simulation has significant potential to enhance student engagement, motivation, and academic performance beyond traditional teaching methods. Further research in varied contexts is needed to guide the integration of this approach into nursing and midwifery education curricula.
Research article
-
Performance of GPT-3.5 and GPT-4 on standardized urology knowledge assessment items in the United States: a descriptive study
-
Max Samuel Yudovich, Elizaveta Makarova, Christian Michael Hague, Jay Dilip Raman
-
J Educ Eval Health Prof. 2024;21:17. Published online July 8, 2024
-
DOI: https://doi.org/10.3352/jeehp.2024.21.17
-
-
Abstract
Purpose
This study aimed to evaluate the performance of Chat Generative Pre-Trained Transformer (ChatGPT) with respect to standardized urology multiple-choice items in the United States.
Methods
In total, 700 multiple-choice urology board exam-style items were submitted to GPT-3.5 and GPT-4, and responses were recorded. Items were categorized based on topic and question complexity (recall, interpretation, and problem-solving). The accuracy of GPT-3.5 and GPT-4 was compared across item types in February 2024.
Results
GPT-4 answered 44.4% of items correctly compared to 30.9% for GPT-3.5 (P<0.00001). GPT-4 (vs. GPT-3.5) had higher accuracy on urologic oncology (43.8% vs. 33.9%, P=0.03), sexual medicine (44.3% vs. 27.8%, P=0.046), and pediatric urology (47.1% vs. 27.1%, P=0.012) items. Differences for endourology (38.0% vs. 25.7%, P=0.15), reconstruction and trauma (29.0% vs. 21.0%, P=0.41), and neurourology (49.0% vs. 33.3%, P=0.11) items were not significant. GPT-4 also outperformed GPT-3.5 on recall (45.9% vs. 27.4%, P<0.00001) and interpretation (45.6% vs. 31.5%, P=0.0005) items, whereas the difference for the higher-complexity problem-solving items (41.8% vs. 34.5%, P=0.56) was not significant.
Conclusions
ChatGPT performs relatively poorly on standardized multiple-choice urology board exam-style items, with GPT-4 outperforming GPT-3.5. The accuracy was below the proposed minimum passing standards for the American Board of Urology’s Continuing Urologic Certification knowledge reinforcement activity (60%). As artificial intelligence progresses in complexity, ChatGPT may become more capable and accurate with respect to board examination items. For now, its responses should be scrutinized.
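For context on how such accuracy figures can be compared, the sketch below applies a two-proportion z-test to hypothetical counts reconstructed from the reported overall percentages; the abstract does not state which statistical test the authors used, so this is an illustration only.

    from statsmodels.stats.proportion import proportions_ztest

    n_items = 700                            # items given to each model
    correct_gpt4 = round(0.444 * n_items)    # hypothetical count from the reported 44.4%
    correct_gpt35 = round(0.309 * n_items)   # hypothetical count from the reported 30.9%

    z_stat, p_value = proportions_ztest([correct_gpt4, correct_gpt35], [n_items, n_items])
    print(f"z = {z_stat:.2f}, P = {p_value:.2g}")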
-
Citations
Citations to this article as recorded by
- From GPT-3.5 to GPT-4.o: A Leap in AI’s Medical Exam Performance
Markus Kipp
Information. 2024; 15(9): 543. CrossRef
- Artificial Intelligence can Facilitate Application of Risk Stratification Algorithms to Bladder Cancer Patient Case Scenarios
Max S Yudovich, Ahmad N Alzubaidi, Jay D Raman
Clinical Medicine Insights: Oncology. 2024; [Epub]. CrossRef
Educational/Faculty development material
-
The 6 degrees of curriculum integration in medical education in the United States
-
Julie Youm, Jennifer Christner, Kevin Hittle, Paul Ko, Cinda Stone, Angela D. Blood, Samara Ginzburg
-
J Educ Eval Health Prof. 2024;21:15. Published online June 13, 2024
-
DOI: https://doi.org/10.3352/jeehp.2024.21.15
-
-
Abstract
Despite explicit expectations and accreditation requirements for an integrated curriculum, there is still little clarity around an accepted common definition, best practices for implementation, and criteria for successful curriculum integration. To address this lack of consensus, we reviewed the literature and herein propose a definition of curriculum integration for the medical education audience. We further believe that medical education is ready to move beyond “horizontal” (1-dimensional) and “vertical” (2-dimensional) integration, and we propose a model of “6 degrees of curriculum integration” to expand the 2-dimensional concept for future designs of medical education programs and to best prepare learners to meet the needs of patients. These 6 degrees are: interdisciplinary, timing and sequencing, instruction and assessment, incorporation of basic and clinical sciences, knowledge and skills-based competency progression, and graduated responsibilities in patient care. We encourage medical educators to look beyond 2-dimensional integration to this holistic and interconnected representation of curriculum integration.
Research articles
-
Redesigning a faculty development program for clinical teachers in Indonesia: a before-and-after study
-
Rita Mustika, Nadia Greviana, Dewi Anggraeni Kusumoningrum, Anyta Pinasthika
-
J Educ Eval Health Prof. 2024;21:14. Published online June 13, 2024
-
DOI: https://doi.org/10.3352/jeehp.2024.21.14
-
-
Abstract
Purpose
Faculty development (FD) is important to support teaching, including for clinical teachers. The Faculty of Medicine Universitas Indonesia (FMUI) has conducted a clinical teacher training program developed by its medical education department since 2008, both for FMUI teachers and for those at other centers in Indonesia. However, participation is often challenging due to clinical, administrative, and research obligations. The coronavirus disease 2019 pandemic amplified the need to transform this program. This study aimed to redesign and evaluate an FD program for clinical teachers that focuses on their needs and current situation.
Methods
A 5-step design thinking framework (empathizing, defining, ideating, prototyping, and testing) was used with a pre/post-test design. Design thinking made it possible to develop a participant-focused program, while the pre/post-test design enabled an assessment of the program’s effectiveness.
Results
Seven medical educationalists and 4 senior and 4 junior clinical teachers participated in a group discussion in the empathize phase of design thinking. The research team formed a prototype of a 3-day blended learning course, with an asynchronous component using the Moodle learning management system and a synchronous component using the Zoom platform. Pre-post-testing was done in 2 rounds, with 107 and 330 participants, respectively. Evaluations of the first round provided feedback for improving the prototype for the second round.
Conclusion
Design thinking enabled an innovative-creative process of redesigning FD that emphasized participants’ needs. The pre/post-testing showed that the program was effective. Combining asynchronous and synchronous learning expands access and increases flexibility. This approach could also apply to other FD programs.
-
Events related to medication errors and related factors involving nurses’ behavior to reduce medication errors in Japan: a Bayesian network modeling-based factor analysis and scenario analysis
-
Naotaka Sugimura, Katsuhiko Ogasawara
-
J Educ Eval Health Prof. 2024;21:12. Published online June 11, 2024
-
DOI: https://doi.org/10.3352/jeehp.2024.21.12
-
-
Abstract
Purpose
This study aimed to identify the relationships between medication errors and the factors affecting nurses’ knowledge and behavior in Japan using Bayesian network modeling. It also aimed to identify important factors through scenario analysis with consideration of nursing students’ and nurses’ education regarding patient safety and medications.
Methods
We used mixed methods. First, error events related to medications and associated factors were qualitatively extracted from 119 actual incident reports filed in 2022 in the database of the Japan Council for Quality Health Care. These events and factors were then quantitatively evaluated in a flow model using a Bayesian network, and a scenario analysis was conducted to estimate the posterior probabilities of events when the prior probabilities of selected factors were set to 0%.
Results
There were 10 types of events related to medication errors. A 5-layer flow model was created using Bayesian network analysis. The scenario analysis revealed that “failure to confirm the 5 rights,” “unfamiliarity with operations of medications,” “insufficient knowledge of medications,” and “assumptions and forgetfulness” were factors significantly associated with the occurrence of medication errors.
Conclusion
This study provided an estimate of the effects of mitigating nurses’ behavioral factors that trigger medication errors. The flow model itself can also be used as an educational tool to reflect on behavior when incidents occur. It is expected that patient safety education will be recognized as a major element of nursing education worldwide and that an integrated curriculum will be developed.
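A scenario analysis of the kind described above can be illustrated with a toy Bayesian network in plain Python. The factor names echo the abstract, but every probability below is an assumption for illustration, not a value from the study’s model.

    from itertools import product

    # Assumed prior probabilities of two behavioral factors (illustrative only)
    priors = {"failure_to_confirm_5_rights": 0.30, "insufficient_knowledge": 0.20}

    # Assumed conditional probability of a medication error given the two factors
    p_error_given = {(True, True): 0.60, (True, False): 0.35,
                     (False, True): 0.25, (False, False): 0.05}

    def prob_error(factor_priors):
        """Marginal probability of an error, summing over the factor states."""
        total = 0.0
        for a, b in product([True, False], repeat=2):
            w_a = factor_priors["failure_to_confirm_5_rights"] if a else 1 - factor_priors["failure_to_confirm_5_rights"]
            w_b = factor_priors["insufficient_knowledge"] if b else 1 - factor_priors["insufficient_knowledge"]
            total += w_a * w_b * p_error_given[(a, b)]
        return total

    baseline = prob_error(priors)
    scenario = prob_error({**priors, "failure_to_confirm_5_rights": 0.0})  # prior set to 0%
    print(f"P(error): baseline = {baseline:.3f}, factor eliminated = {scenario:.3f}")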
-
Challenges and potential improvements in the Accreditation Standards of the Korean Institute of Medical Education and Evaluation 2019 (ASK2019) derived through meta-evaluation: a cross-sectional study
-
Yoonjung Lee, Min-jung Lee, Junmoo Ahn, Chungwon Ha, Ye Ji Kang, Cheol Woong Jung, Dong-Mi Yoo, Jihye Yu, Seung-Hee Lee
-
J Educ Eval Health Prof. 2024;21:8. Published online April 2, 2024
-
DOI: https://doi.org/10.3352/jeehp.2024.21.8
-
-
Abstract
Purpose
This study aimed to identify challenges and potential improvements in Korea's medical education accreditation process according to the Accreditation Standards of the Korean Institute of Medical Education and Evaluation 2019 (ASK2019). Meta-evaluation was conducted to survey the experiences and perceptions of stakeholders, including self-assessment committee members, site visit committee members, administrative staff, and medical school professors.
Methods
A cross-sectional study was conducted using surveys sent to 40 medical schools. The 332 participants included self-assessment committee members, site visit team members, administrative staff, and medical school professors. The t-test, one-way analysis of variance, and the chi-square test were used to analyze and compare opinions on medical education accreditation among the categories of participants.
Results
Site visit committee members placed greater importance on the necessity of accreditation than faculty members. A shared positive view on accreditation’s role in improving educational quality was seen among self-evaluation committee members and professors. Administrative staff highly regarded the Korean Institute of Medical Education and Evaluation’s reliability and objectivity, unlike the self-evaluation committee members. Site visit committee members positively perceived the clarity of accreditation standards, differing from self-assessment committee members. Administrative staff were most optimistic about implementing standards. However, the accreditation process encountered challenges, especially in duplicating content and preparing self-evaluation reports. Finally, perceptions regarding the accuracy of final site visit reports varied significantly between the self-evaluation committee members and the site visit committee members.
Conclusion
This study revealed diverse views on medical education accreditation, highlighting the need for improved communication, expectation alignment, and stakeholder collaboration to refine the accreditation process and quality.
-
Citations
Citations to this article as recorded by
- The new placement of 2,000 entrants at Korean medical schools in 2025: is the government’s policy evidence-based?
Sun Huh
The Ewha Medical Journal. 2024; [Epub]. CrossRef
Review
-
Opportunities, challenges, and future directions of large language models, including ChatGPT in medical education: a systematic scoping review
-
Xiaojun Xu, Yixiao Chen, Jing Miao
-
J Educ Eval Health Prof. 2024;21:6. Published online March 15, 2024
-
DOI: https://doi.org/10.3352/jeehp.2024.21.6
-
-
Abstract
Background
ChatGPT is a large language model (LLM) based on artificial intelligence (AI) capable of responding in multiple languages and generating nuanced and highly complex responses. While ChatGPT holds promising applications in medical education, its limitations and potential risks cannot be ignored.
Methods
A scoping review was conducted for English articles discussing ChatGPT in the context of medical education published after 2022. A literature search was performed using PubMed/MEDLINE, Embase, and Web of Science databases, and information was extracted from the relevant studies that were ultimately included.
Results
ChatGPT exhibits various potential applications in medical education, such as providing personalized learning plans and materials, creating clinical practice simulation scenarios, and assisting in writing articles. However, challenges associated with academic integrity, data accuracy, and potential harm to learning were also highlighted in the literature. The paper emphasizes certain recommendations for using ChatGPT, including the establishment of guidelines. Based on the review, 3 key research areas were proposed: cultivating the ability of medical students to use ChatGPT correctly, integrating ChatGPT into teaching activities and processes, and proposing standards for the use of AI by medical students.
Conclusion
ChatGPT has the potential to transform medical education, but careful consideration is required for its full integration. To harness the full potential of ChatGPT in medical education, attention should not only be given to the capabilities of AI but also to its impact on students and teachers.
-
Citations
Citations to this article as recorded by
- Chatbots in neurology and neuroscience: Interactions with students, patients and neurologists
Stefano Sandrone
Brain Disorders. 2024; 15: 100145. CrossRef
- ChatGPT in education: unveiling frontiers and future directions through systematic literature review and bibliometric analysis
Buddhini Amarathunga
Asian Education and Development Studies. 2024; [Epub]. CrossRef
- Evaluating the performance of ChatGPT-3.5 and ChatGPT-4 on the Taiwan plastic surgery board examination
Ching-Hua Hsieh, Hsiao-Yun Hsieh, Hui-Ping Lin
Heliyon. 2024; 10(14): e34851. CrossRef
- Preparing for Artificial General Intelligence (AGI) in Health Professions Education: AMEE Guide No. 172
Ken Masters, Anne Herrmann-Werner, Teresa Festl-Wietek, David Taylor
Medical Teacher. 2024; 46(10): 1258. CrossRef
- A Comparative Analysis of ChatGPT and Medical Faculty Graduates in Medical Specialization Exams: Uncovering the Potential of Artificial Intelligence in Medical Education
Gülcan Gencer, Kerem Gencer
Cureus. 2024; [Epub]. CrossRef
- Research ethics and issues regarding the use of ChatGPT-like artificial intelligence platforms by authors and reviewers: a narrative review
Sang-Jun Kim
Science Editing. 2024; 11(2): 96. CrossRef
- Innovation Off the Bat: Bridging the ChatGPT Gap in Digital Competence among English as a Foreign Language Teachers
Gulsara Urazbayeva, Raisa Kussainova, Aikumis Aibergen, Assel Kaliyeva, Gulnur Kantayeva
Education Sciences. 2024; 14(9): 946. CrossRef
- Exploring the perceptions of Chinese pre-service teachers on the integration of generative AI in English language teaching: Benefits, challenges, and educational implications
Ji Young Chung, Seung-Hoon Jeong
Online Journal of Communication and Media Technologies. 2024; 14(4): e202457. CrossRef
- Unveiling the bright side and dark side of AI-based ChatGPT: a bibliographic and thematic approach
Chandan Kumar Tiwari, Mohd. Abass Bhat, Abel Dula Wedajo, Shagufta Tariq Khan
Journal of Decision Systems. 2024: 1. CrossRef
- Artificial Intelligence in Medical Education and Mentoring in Rehabilitation Medicine
Julie K. Silver, Mustafa Reha Dodurgali, Nara Gavini
American Journal of Physical Medicine & Rehabilitation. 2024; 103(11): 1039. CrossRef
- The Potential of Artificial Intelligence Tools for Reducing Uncertainty in Medicine and Directions for Medical Education
Sauliha Rabia Alli, Soaad Qahhār Hossain, Sunit Das, Ross Upshur
JMIR Medical Education. 2024; 10: e51446. CrossRef
- A Systematic Literature Review of Empirical Research on Applying Generative Artificial Intelligence in Education
Xin Zhang, Peng Zhang, Yuan Shen, Min Liu, Qiong Wang, Dragan Gašević, Yizhou Fan
Frontiers of Digital Education. 2024; 1(3): 223. CrossRef
Research articles
-
Development and validity evidence for the resident-led large group teaching assessment instrument in the United States: a methodological study
-
Ariel Shana Frey-Vogel, Kristina Dzara, Kimberly Anne Gifford, Yoon Soo Park, Justin Berk, Allison Heinly, Darcy Wolcott, Daniel Adam Hall, Shannon Elliott Scott-Vernaglia, Katherine Anne Sparger, Erica Ye-pyng Chung
-
J Educ Eval Health Prof. 2024;21:3. Published online February 23, 2024
-
DOI: https://doi.org/10.3352/jeehp.2024.21.3
-
-
Abstract
Purpose
Despite educational mandates to assess resident teaching competence, limited instruments with validity evidence exist for this purpose. Existing instruments do not allow faculty to assess resident-led teaching in a large group format or whether teaching was interactive. This study gathers validity evidence on the use of the Resident-led Large Group Teaching Assessment Instrument (Relate), an instrument used by faculty to assess resident teaching competency. Relate comprises 23 behaviors divided into 6 elements: learning environment, goals and objectives, content of talk, promotion of understanding and retention, session management, and closure.
Methods
Messick’s unified validity framework was used for this study. Investigators used video recordings of resident-led teaching from 3 pediatric residency programs to develop Relate and a rater guidebook. Faculty were trained on instrument use through frame-of-reference training. Resident teaching at all sites was video-recorded during 2018–2019. Two trained faculty raters assessed each video. Descriptive statistics on performance were obtained. Sources of validity evidence included the effect of rater training (response process), reliability and variability (internal structure), and the association with Milestones assessment (relations to other variables).
Results
Forty-eight videos, from 16 residents, were analyzed. Rater training improved inter-rater reliability from 0.04 to 0.64. The Φ-coefficient reliability was 0.50. There was a significant correlation between overall Relate performance and the pediatric teaching Milestone (r=0.34, P=0.019).
Conclusion
Relate provides validity evidence with sufficient reliability to measure resident-led large-group teaching competence.
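The abstract reports inter-rater reliability, a Φ-coefficient, and a correlation with Milestones. The sketch below shows two generic ways such quantities are often computed (Cohen’s kappa for agreement between 2 raters and a Pearson correlation); the exact statistics used by the authors are not specified here, and the data are invented.

    import numpy as np
    from scipy import stats
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical ratings of the same videos by two trained faculty raters
    rater_a = [3, 4, 2, 5, 4, 3, 4, 2, 5, 3]
    rater_b = [3, 4, 3, 5, 4, 3, 4, 2, 4, 3]
    print("Cohen's kappa:", round(cohen_kappa_score(rater_a, rater_b), 2))

    # Hypothetical overall instrument scores vs. teaching Milestone ratings
    relate_scores = np.array([3.1, 3.8, 2.9, 4.2, 3.5, 3.0, 4.0, 2.7])
    milestones = np.array([2.8, 3.9, 3.0, 4.0, 3.2, 3.1, 3.8, 2.9])
    r, p = stats.pearsonr(relate_scores, milestones)
    print(f"Pearson r = {r:.2f}, P = {p:.3f}")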
-
Negative effects on medical students’ scores for clinical performance during the COVID-19 pandemic in Taiwan: a comparative study
-
Eunice Jia-Shiow Yuan, Shiau-Shian Huang, Chia-An Hsu, Jiing-Feng Lirng, Tzu-Hao Li, Chia-Chang Huang, Ying-Ying Yang, Chung-Pin Li, Chen-Huan Chen
-
J Educ Eval Health Prof. 2023;20:37. Published online December 26, 2023
-
DOI: https://doi.org/10.3352/jeehp.2023.20.37
-
-
Abstract
Purpose
Coronavirus disease 2019 (COVID-19) has heavily impacted medical clinical education in Taiwan. Medical curricula have been altered to minimize exposure and limit transmission. This study investigated the effect of COVID-19 on Taiwanese medical students’ clinical performance using online standardized evaluation systems and explored the factors influencing medical education during the pandemic.
Methods
Medical students were scored from 0 to 100 based on their clinical performance from January 1, 2018, to June 30, 2021. The students were placed into pre-COVID-19 (before February 1, 2020) and midst-COVID-19 (on or after February 1, 2020) groups. Each group was further categorized into COVID-19-affected specialties (pulmonary, infectious, and emergency medicine) and other specialties. Generalized estimating equations (GEEs) were used to compare and examine the effects of relevant variables on student performance.
Results
In total, 16,944 clinical scores were obtained for COVID-19-affected specialties and other specialties. For the COVID-19-affected specialties, the midst-COVID-19 score (88.51±3.52) was significantly lower than the pre-COVID-19 score (90.14±3.55) (P<0.0001). For the other specialties, the midst-COVID-19 score (88.32±3.68) was also significantly lower than the pre-COVID-19 score (90.06±3.58) (P<0.0001). There were 1,322 students (837 males and 485 females). Male students had significantly lower scores than female students (89.33±3.68 vs. 89.99±3.66, P=0.0017). GEE analysis revealed that the COVID-19 pandemic (unstandardized beta coefficient [B]=-1.99, standard error [SE]=0.13, P<0.0001), COVID-19-affected specialties (B=0.26, SE=0.11, P=0.0184), female students (B=1.10, SE=0.20, P<0.0001), and female attending physicians (B=-0.19, SE=0.08, P=0.0145) were independently associated with students’ scores.
Conclusion
COVID-19 negatively impacted medical students' clinical performance, regardless of their specialty. Female students outperformed male students, irrespective of the pandemic.
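A generalized estimating equation of the type described in the Methods can be fitted in Python with statsmodels. The sketch below uses a tiny invented data frame and assumed variable names, purely to show the model structure: repeated clinical scores clustered within students, with an exchangeable working correlation.

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Invented example records: repeated clinical scores clustered by student
    df = pd.DataFrame({
        "score":       [90.1, 88.4, 89.7, 87.9, 91.0, 88.8, 90.5, 87.5],
        "midst_covid": [0, 1, 0, 1, 0, 1, 0, 1],   # 0 = pre-COVID-19, 1 = midst-COVID-19
        "affected":    [1, 1, 0, 0, 1, 1, 0, 0],   # COVID-19-affected specialty
        "female":      [1, 1, 0, 0, 1, 1, 1, 1],
        "student_id":  [1, 1, 2, 2, 3, 3, 4, 4],
    })

    model = smf.gee("score ~ midst_covid + affected + female",
                    groups="student_id", data=df,
                    family=sm.families.Gaussian(),
                    cov_struct=sm.cov_struct.Exchangeable())
    print(model.fit().summary())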
-
Citations
Citations to this article as recorded by
- The emergence of generative artificial intelligence platforms in 2023, journal metrics, appreciation to reviewers and volunteers, and obituary
Sun Huh
Journal of Educational Evaluation for Health Professions. 2024; 21: 9. CrossRef
-
Use of learner-driven, formative, ad-hoc, prospective assessment of competence in physical therapist clinical education in the United States: a prospective cohort study
-
Carey Holleran, Jeffrey Konrad, Barbara Norton, Tamara Burlis, Steven Ambler
-
J Educ Eval Health Prof. 2023;20:36. Published online December 8, 2023
-
DOI: https://doi.org/10.3352/jeehp.2023.20.36
-
-
Abstract
Purpose
The purpose of this project was to implement a process for learner-driven, formative, prospective, ad-hoc, entrustment assessment in Doctor of Physical Therapy clinical education. Our goals were to develop an innovative entrustment assessment tool, and then explore whether the tool detected (1) differences between learners at different stages of development and (2) differences within learners across the course of a clinical education experience. We also investigated whether there was a relationship between the number of assessments and change in performance.
Methods
A prospective, observational cohort of clinical instructors (CIs) was recruited to perform learner-driven, formative, ad-hoc, prospective entrustment assessments. Two entrustable professional activities (EPAs) were used: (1) gather a history and perform an examination, and (2) implement and modify the plan of care, as needed. CIs provided a rating on the entrustment scale and narrative support for their rating.
Results
Forty-nine learners participated across 4 clinical experiences (CEs), resulting in 453 EPA learner-driven assessments. For both EPAs, statistically significant changes were detected both between learners at different stages of development and within learners across the course of a CE. Improvement within each CE was significantly related to the number of feedback opportunities.
Conclusion
The results of this pilot study provide preliminary support for the use of learner-driven, formative, ad-hoc assessments of competence based on EPAs with a novel entrustment scale. The number of formative assessments requested correlated with change on the EPA scale, suggesting that formative feedback may augment performance improvement.
-
Effect of a transcultural nursing course on improving the cultural competency of nursing graduate students in Korea: a before-and-after study
-
Kyung Eui Bae, Geum Hee Jeong
-
J Educ Eval Health Prof. 2023;20:35. Published online December 4, 2023
-
DOI: https://doi.org/10.3352/jeehp.2023.20.35
-
-
Abstract
Purpose
This study aimed to evaluate the impact of a transcultural nursing course on enhancing the cultural competency of graduate nursing students in Korea. We hypothesized that participants’ cultural competency would significantly improve in areas such as communication, biocultural ecology and family, dietary habits, death rituals, spirituality, equity, and empowerment and intermediation after completing the course. Furthermore, we assessed the participants’ overall satisfaction with the course.
Methods
A before-and-after study was conducted with graduate nursing students at Hallym University, Chuncheon, Korea, from March to June 2023. A transcultural nursing course was developed based on Giger and Haddad’s transcultural nursing model and Purnell’s theoretical model of cultural competence. Data were collected using a cultural competence scale for registered nurses developed by Kim and colleagues. A total of 18 students participated, and the paired t-test was employed to compare pre- and post-intervention scores.
Results
The study revealed significant improvements in all 7 categories of cultural nursing competence (P<0.01). Specifically, the mean score improvements from pretest to posttest ranged from 0.74 to 1.09 across the categories. Additionally, participants expressed high satisfaction with the course, with an average score of 4.72 out of a maximum of 5.0.
Conclusion
The transcultural nursing course effectively enhanced the cultural competency of graduate nursing students. Such courses are imperative to ensure quality care for the increasing multicultural population in Korea.
Brief report
-
ChatGPT (GPT-3.5) as an assistant tool in microbial pathogenesis studies in Sweden: a cross-sectional comparative study
-
Catharina Hultgren, Annica Lindkvist, Volkan Özenci, Sophie Curbo
-
J Educ Eval Health Prof. 2023;20:32. Published online November 22, 2023
-
DOI: https://doi.org/10.3352/jeehp.2023.20.32
-
-
Abstract
ChatGPT (GPT-3.5) has entered higher education, and there is a need to determine how to use it effectively. This descriptive study compared the ability of GPT-3.5 and teachers to answer questions from dental students and to construct detailed intended learning outcomes. When the answers were rated on a Likert scale, GPT-3.5 answered the dental students’ questions in a similar or even more elaborate way than the answers previously provided by a teacher. GPT-3.5 was also asked to construct detailed intended learning outcomes for a course in microbial pathogenesis; when these were rated on a Likert scale, they were largely found to be irrelevant. Since students are using GPT-3.5, it is important that instructors learn how to make the best use of it, both to advise students and to benefit from its potential.
-
Citations
Citations to this article as recorded by
- Opportunities, challenges, and future directions of large language models, including ChatGPT in medical education: a systematic scoping review
Xiaojun Xu, Yixiao Chen, Jing Miao
Journal of Educational Evaluation for Health Professions. 2024; 21: 6. CrossRef
- Information amount, accuracy, and relevance of generative artificial intelligence platforms’ answers regarding learning objectives of medical arthropodology evaluated in English and Korean queries in December 2023: a descriptive study
Hyunju Lee, Soobin Park
Journal of Educational Evaluation for Health Professions. 2023; 20: 39. CrossRef
Technical report
-
Item difficulty index, discrimination index, and reliability of the 26 health professions licensing examinations in 2022, Korea: a psychometric study
-
Yoon Hee Kim, Bo Hyun Kim, Joonki Kim, Bokyoung Jung, Sangyoung Bae
-
J Educ Eval Health Prof. 2023;20:31. Published online November 22, 2023
-
DOI: https://doi.org/10.3352/jeehp.2023.20.31
-
-
Abstract
Purpose
This study presents item analysis results of the 26 health personnel licensing examinations managed by the Korea Health Personnel Licensing Examination Institute (KHPLEI) in 2022.
Methods
The item difficulty index, item discrimination index, and reliability were calculated. Item discrimination was evaluated using both a discrimination index based on the upper and lower 27% rule and the item-total correlation.
Results
Out of 468,352 total examinees, 418,887 (89.4%) passed. The pass rates ranged from 27.3% for health educators level 1 to 97.1% for oriental medical doctors. Most examinations had a high average difficulty index, albeit to varying degrees, ranging from 61.3% for prosthetists and orthotists to 83.9% for care workers. The average discrimination index based on the upper and lower 27% rule ranged from 0.17 for oriental medical doctors to 0.38 for radiological technologists. The average item-total correlation ranged from 0.20 for oriental medical doctors to 0.38 for radiological technologists. The Cronbach α, as a measure of reliability, ranged from 0.872 for health educators-level 3 to 0.978 for medical technologists. The correlation coefficient between the average difficulty index and average discrimination index was -0.2452 (P=0.1557), that between the average difficulty index and the average item-total correlation was 0.3502 (P=0.0392), and that between the average discrimination index and the average item-total correlation was 0.7944 (P<0.0001).
Conclusion
This technical report presents the item analysis results and reliability of the recent examinations by the KHPLEI, demonstrating an acceptable range of difficulty index and discrimination index values, as well as good reliability.
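As a general reference for the statistics reported above, the sketch below computes a difficulty index, a discrimination index based on the upper and lower 27% rule, a corrected item-total correlation, and Cronbach’s α on a small synthetic 0/1 response matrix; none of the numbers relate to the KHPLEI examinations.

    import numpy as np

    rng = np.random.default_rng(1)
    ability = rng.normal(0, 1, size=(200, 1))        # synthetic examinee abilities
    item_b = rng.normal(-1, 1, size=(1, 30))         # synthetic item difficulties
    p_correct = 1 / (1 + np.exp(-(ability - item_b)))
    responses = (rng.random((200, 30)) < p_correct).astype(int)
    totals = responses.sum(axis=1)

    difficulty = responses.mean(axis=0)              # difficulty index: proportion correct

    order = np.argsort(totals)                       # upper and lower 27% rule
    k = round(0.27 * len(totals))
    discrimination = responses[order[-k:]].mean(axis=0) - responses[order[:k]].mean(axis=0)

    item_total = np.array([np.corrcoef(responses[:, i], totals - responses[:, i])[0, 1]
                           for i in range(responses.shape[1])])  # corrected item-total correlation

    n_items = responses.shape[1]                     # Cronbach's alpha (reliability)
    alpha = n_items / (n_items - 1) * (1 - responses.var(axis=0, ddof=1).sum() / totals.var(ddof=1))

    print(round(difficulty.mean(), 3), round(discrimination.mean(), 3),
          round(item_total.mean(), 3), round(alpha, 3))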