JEEHP : Journal of Educational Evaluation for Health Professions

4 "Emergency medical technicians"
Research articles
Development of examination objectives for the Korean paramedic and emergency medical technician examination: a survey study  
Tai-hwan Uhm, Heakyung Choi, Seok Hwan Hong, Hyungsub Kim, Minju Kang, Keunyoung Kim, Hyejin Seo, Eunyoung Ki, Hyeryeong Lee, Heejeong Ahn, Uk-jin Choi, Sang Woong Park
J Educ Eval Health Prof. 2024;21:13.   Published online June 12, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.13
  • 1,427 View
  • 240 Download
Abstract
Purpose
The duties of paramedics and emergency medical technicians (P&EMTs) are continuously changing due to developments in medical systems. This study presents evaluation goals for P&EMTs by analyzing their work, especially the tasks that new P&EMTs (with less than 3 years’ experience) find difficult, to foster the training of P&EMTs who could adapt to emergency situations after graduation.
Methods
A questionnaire was created based on prior job analyses of P&EMTs. The survey questions were reviewed through focus group interviews, from which 253 task elements were derived. A survey on the frequency, importance, and difficulty of these task elements was conducted from July 10, 2023 to October 13, 2023 among P&EMTs employed in 6 occupational fields.
Results
The P&EMTs’ most common tasks involved obtaining patients’ medical histories and measuring vital signs, whereas the most important task was cardiopulmonary resuscitation (CPR). The task elements that the P&EMTs found most difficult were newborn delivery and infant CPR. New paramedics reported that treating patients with fractures, poisoning, and childhood fever was difficult, while new EMTs reported that they had difficulty keeping diaries, managing ambulances, and controlling infection.
Conclusion
Communication was the most important item for P&EMTs, whereas CPR was the most important skill. It is important for P&EMTs to have knowledge of all tasks; however, they also need to master frequently performed tasks and those that pose difficulties in the field. By deriving goals for evaluating P&EMTs, changes could be made to their education, thereby making it possible to train more capable P&EMTs.
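
As a purely illustrative sketch of how ratings like those gathered in this survey can be turned into evaluation priorities, the snippet below averages hypothetical frequency, importance, and difficulty ratings for a few task elements. The task names, rating values, and equal weighting are assumptions for illustration only and do not reproduce the authors' analysis.

```python
"""Illustrative aggregation of job-analysis survey ratings (hypothetical data)."""
import pandas as pd

# Hypothetical mean ratings on a 1-5 scale for a few task elements.
tasks = pd.DataFrame({
    "task":       ["Vital sign measurement", "CPR", "Newborn delivery", "Infection control"],
    "frequency":  [4.8, 3.9, 1.2, 4.1],
    "importance": [4.2, 4.9, 4.5, 4.0],
    "difficulty": [1.8, 3.5, 4.7, 2.6],
})

# Assumed equal weighting of the three dimensions to rank training/evaluation priority.
tasks["priority"] = tasks[["frequency", "importance", "difficulty"]].mean(axis=1)
print(tasks.sort_values("priority", ascending=False))
```
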
The relationship of examinees’ individual characteristics and perceived acceptability of smart device-based testing to test scores on the practice test of the Korea Emergency Medicine Technician Licensing Examination  
Eun Young Lim, Mi Kyoung Yim, Sun Huh
J Educ Eval Health Prof. 2018;15:33.   Published online December 27, 2018
DOI: https://doi.org/10.3352/jeehp.2018.15.33
  • 20,230 View
  • 239 Download
  • 2 Web of Science
  • 2 Crossref
Abstract
Purpose
Smart device-based testing (SBT) is being introduced into the Republic of Korea’s high-stakes examination system, starting with the Korean Emergency Medicine Technician Licensing Examination (KEMTLE) in December 2017. In order to minimize the effects of variation in examinees’ environment on test scores, this study aimed to identify any associations of variables related to examinees’ individual characteristics and their perceived acceptability of SBT with their SBT practice test scores.
Methods
Of the 569 candidate students who took the KEMTLE on September 12, 2015, 560 responded to a survey questionnaire on the acceptability of SBT after the examination. The questionnaire addressed 8 individual characteristics and contained 2 satisfaction, 9 convenience, and 9 preference items. A comparative analysis according to individual variables was performed. Furthermore, a generalized linear model (GLM) analysis was conducted to identify the effects of individual characteristics and perceived acceptability of SBT on test scores.
Results
Among those who preferred SBT over paper-and-pencil testing, test scores were higher for male participants (mean ± standard deviation [SD], 4.36 ± 0.72) than for female participants (mean ± SD, 4.21 ± 0.73). According to the GLM, none of the variables evaluated (including gender and experience with computer-based testing, SBT, or using a tablet PC) showed a statistically significant relationship with the total score, scores on multimedia items, or scores on text items.
Conclusion
Individual characteristics and perceived acceptability of SBT did not affect the SBT practice test scores of emergency medicine technician students in Korea. It should be possible to adopt SBT for the KEMTLE without interference from the variables examined in this study.
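
As a rough illustration of the generalized linear model analysis described in the Methods, the sketch below regresses a practice-test score on a few individual characteristics and an SBT acceptability rating using statsmodels. The data frame, column names, and Gaussian family are hypothetical assumptions; the study's actual variables and model specification are those reported in the article.

```python
"""Minimal GLM sketch (hypothetical data, not the study's dataset)."""
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical examinee records; in the study this came from 560 survey respondents.
df = pd.DataFrame({
    "total_score":    [72.5, 80.0, 65.5, 90.0, 77.0, 83.5],
    "gender":         ["M", "F", "F", "M", "F", "M"],
    "cbt_experience": [1, 0, 1, 1, 0, 0],                 # prior computer-based testing experience
    "tablet_use":     [1, 1, 0, 1, 0, 1],                 # prior tablet PC use
    "sbt_preference": [4.4, 4.2, 3.8, 4.6, 4.0, 4.3],     # mean preference rating for SBT
})

# Gaussian GLM with identity link (equivalent to ordinary least squares here).
model = smf.glm(
    "total_score ~ gender + cbt_experience + tablet_use + sbt_preference",
    data=df,
).fit()
print(model.summary())
```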

Citations

Citations to this article as recorded by  
  • Application of computer-based testing in the Korean Medical Licensing Examination, the emergence of the metaverse in medical education, journal metrics and statistics, and appreciation to reviewers and volunteers
    Sun Huh
    Journal of Educational Evaluation for Health Professions.2022; 19: 2.     CrossRef
  • Evaluation of Student Satisfaction with Ubiquitous-Based Tests in Women’s Health Nursing Course
    Mi-Young An, Yun-Mi Kim
    Healthcare.2021; 9(12): 1664.     CrossRef
Review article
Overview and current management of computerized adaptive testing in licensing/certification examinations  
Dong Gi Seo
J Educ Eval Health Prof. 2017;14:17.   Published online July 26, 2017
DOI: https://doi.org/10.3352/jeehp.2017.14.17
  • 40,274 View
  • 389 Download
  • 15 Web of Science
  • 14 Crossref
Abstract
Computerized adaptive testing (CAT) has been implemented in high-stakes examinations such as the National Council Licensure Examination-Registered Nurses in the United States since 1994. Subsequently, the National Registry of Emergency Medical Technicians in the United States adopted CAT for certifying emergency medical technicians in 2007. This review was written with the goal of introducing the implementation of CAT for medical and health licensing examinations. Most implementations of CAT are based on item response theory, which hypothesizes that both the examinee and the items have their own characteristics that do not change. There are 5 steps for implementing CAT: first, determining whether the CAT approach is feasible for a given testing program; second, establishing an item bank; third, pretesting, calibrating, and linking item parameters via statistical analysis; fourth, determining the specifications for the final CAT related to the 5 components of the CAT algorithm; and finally, deploying the final CAT after specifying all the necessary components. The 5 components of the CAT algorithm are as follows: item bank, starting item, item selection rule, scoring procedure, and termination criterion. CAT management includes content balancing, item analysis, item scoring, standard setting, practice analysis, and item bank updates. Remaining issues include the cost of constructing CAT platforms and deploying the computer technology required to build an item bank. In conclusion, in order to ensure more accurate estimation of examinees' ability, CAT may be a good option for national licensing examinations. Measurement theory can support its implementation for high-stakes examinations.
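
To make the 5 components listed above concrete, here is a minimal Python sketch of a CAT loop under a 2-parameter logistic (2PL) IRT model, with maximum-information item selection, expected a posteriori (EAP) scoring, and a standard-error stopping rule. The item bank, parameter values, and stopping threshold are illustrative assumptions, not the algorithm used by the NCLEX-RN, the NREMT, or any Korean licensing examination.

```python
"""Sketch of a CAT loop: item bank, starting item, item selection, scoring, termination."""
import numpy as np

rng = np.random.default_rng(0)

# Item bank: discrimination (a) and difficulty (b) parameters for a 2PL IRT model.
a = rng.uniform(0.8, 2.0, size=300)
b = rng.normal(0.0, 1.0, size=300)

def p_correct(theta, a_i, b_i):
    # 2PL model: P(correct) = 1 / (1 + exp(-a(theta - b)))
    return 1.0 / (1.0 + np.exp(-a_i * (theta - b_i)))

def item_information(theta, a_i, b_i):
    # Fisher information of a 2PL item: I(theta) = a^2 * P * (1 - P)
    p = p_correct(theta, a_i, b_i)
    return a_i ** 2 * p * (1.0 - p)

def eap_estimate(used, responses, grid=np.linspace(-4, 4, 81)):
    # EAP ability estimate and posterior SD under a standard normal prior.
    posterior = np.exp(-0.5 * grid ** 2)
    for i, u in zip(used, responses):
        p = p_correct(grid, a[i], b[i])
        posterior *= p if u else (1.0 - p)
    posterior /= posterior.sum()
    theta_hat = float(np.sum(grid * posterior))
    se = float(np.sqrt(np.sum((grid - theta_hat) ** 2 * posterior)))
    return theta_hat, se

def run_cat(true_theta, se_stop=0.30, max_items=30):
    used, responses = [], []
    theta_hat, se = 0.0, 1.0                              # starting item: assume average ability
    while len(used) < max_items and se > se_stop:         # termination criterion
        info = item_information(theta_hat, a, b)
        if used:
            info[used] = -np.inf                          # do not reuse administered items
        next_item = int(np.argmax(info))                  # item selection rule: maximum information
        used.append(next_item)
        # Simulate the examinee's response from their (unknown in practice) true ability.
        responses.append(rng.random() < p_correct(true_theta, a[next_item], b[next_item]))
        theta_hat, se = eap_estimate(used, responses)     # scoring procedure
    return theta_hat, se, len(used)

print(run_cat(true_theta=1.2))
```

Operational programs layer content balancing, item exposure control, and item bank updates on top of this core loop, corresponding to the CAT management activities described in the abstract.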

Citations

Citations to this article as recorded by  
  • From Development to Validation: Exploring the Efficiency of Numetrive, a Computerized Adaptive Assessment of Numerical Reasoning
    Marianna Karagianni, Ioannis Tsaousis
    Behavioral Sciences.2025; 15(3): 268.     CrossRef
  • Validation of the cognitive section of the Penn computerized adaptive test for neurocognitive and clinical psychopathology assessment (CAT-CCNB)
    Akira Di Sandro, Tyler M. Moore, Eirini Zoupou, Kelly P. Kennedy, Katherine C. Lopez, Kosha Ruparel, Lucky J. Njokweni, Sage Rush, Tarlan Daryoush, Olivia Franco, Alesandra Gorgone, Andrew Savino, Paige Didier, Daniel H. Wolf, Monica E. Calkins, J. Cobb S
    Brain and Cognition.2024; 174: 106117.     CrossRef
  • Comparison of real data and simulated data analysis of a stopping rule based on the standard error of measurement in computerized adaptive testing for medical examinations in Korea: a psychometric study
    Dong Gi Seo, Jeongwook Choi, Jinha Kim
    Journal of Educational Evaluation for Health Professions.2024; 21: 18.     CrossRef
  • The irtQ R package: a user-friendly tool for item response theory-based test data analysis and calibration
    Hwanggyu Lim, Kyungseok Kang
    Journal of Educational Evaluation for Health Professions.2024; 21: 23.     CrossRef
  • Implementing Computer Adaptive Testing for High-Stakes Assessment: A Shift for Examinations Council of Lesotho
    Musa Adekunle Ayanwale, Julia Chere-Masopha, Mapulane Mochekele, Malebohang Catherine Morena
    International Journal of New Education.2024;[Epub]     CrossRef
  • The current utilization of the patient-reported outcome measurement information system (PROMIS) in isolated or combined total knee arthroplasty populations
    Puneet Gupta, Natalia Czerwonka, Sohil S. Desai, Alirio J. deMeireles, David P. Trofa, Alexander L. Neuwirth
    Knee Surgery & Related Research.2023;[Epub]     CrossRef
  • Evaluating a Computerized Adaptive Testing Version of a Cognitive Ability Test Using a Simulation Study
    Ioannis Tsaousis, Georgios D. Sideridis, Hannan M. AlGhamdi
    Journal of Psychoeducational Assessment.2021; 39(8): 954.     CrossRef
  • Accuracy and Efficiency of Web-based Assessment Platform (LIVECAT) for Computerized Adaptive Testing
    Do-Gyeong Kim, Dong-Gi Seo
    The Journal of Korean Institute of Information Technology.2020; 18(4): 77.     CrossRef
  • Transformaciones en educación médica: innovaciones en la evaluación de los aprendizajes y avances tecnológicos (parte 2)
    Veronica Luna de la Luz, Patricia González-Flores
    Investigación en Educación Médica.2020; 9(34): 87.     CrossRef
  • Introduction to the LIVECAT web-based computerized adaptive testing platform
    Dong Gi Seo, Jeongwook Choi
    Journal of Educational Evaluation for Health Professions.2020; 17: 27.     CrossRef
  • Computerised adaptive testing accurately predicts CLEFT-Q scores by selecting fewer, more patient-focused questions
    Conrad J. Harrison, Daan Geerards, Maarten J. Ottenhof, Anne F. Klassen, Karen W.Y. Wong Riff, Marc C. Swan, Andrea L. Pusic, Chris J. Sidey-Gibbons
    Journal of Plastic, Reconstructive & Aesthetic Surgery.2019; 72(11): 1819.     CrossRef
  • Presidential address: Preparing for permanent test centers and computerized adaptive testing
    Chang Hwi Kim
    Journal of Educational Evaluation for Health Professions.2018; 15: 1.     CrossRef
  • Updates from 2018: Being indexed in Embase, becoming an affiliated journal of the World Federation for Medical Education, implementing an optional open data policy, adopting principles of transparency and best practice in scholarly publishing, and appreciation to reviewers and volunteers
    Sun Huh
    Journal of Educational Evaluation for Health Professions.2018; 15: 36.     CrossRef
  • Linear programming method to construct equated item sets for the implementation of periodical computer-based testing for the Korean Medical Licensing Examination
    Dong Gi Seo, Myeong Gi Kim, Na Hui Kim, Hye Sook Shin, Hyun Jung Kim
    Journal of Educational Evaluation for Health Professions.2018; 15: 26.     CrossRef
Technical report
Varying levels of difficulty index of skills-test items randomly selected by examinees on the Korean emergency medical technician licensing examination  
Bongyeun Koh, Sunggi Hong, Soon-Sim Kim, Jin-Sook Hyun, Milye Baek, Jundong Moon, Hayran Kwon, Gyoungyong Kim, Seonggi Min, Gu-Hyun Kang
J Educ Eval Health Prof. 2016;13:5.   Published online January 15, 2016
DOI: https://doi.org/10.3352/jeehp.2016.13.5
  • 35,996 View
  • 175 Download
  • 1 Crossref
Abstract
Purpose
The goal of this study was to characterize the difficulty index of the items in the skills test components of the class I and II Korean emergency medical technician licensing examination (KEMTLE), which requires examinees to select items randomly.
Methods
The results of 1,309 class I KEMTLE examinations and 1,801 class II KEMTLE examinations in 2013 were subjected to analysis. Items from the basic and advanced skills test sections of the KEMTLE were compared to determine whether some were significantly more difficult than others.
Results
In the class I KEMTLE, all 4 items on the basic skills test showed significant variation in the difficulty index (P < 0.01), as did 4 of the 5 items on the advanced skills test (P < 0.05). In the class II KEMTLE, 4 of the 5 items on the basic skills test showed significantly different difficulty indices (P < 0.01), as did all 3 items on the advanced skills test (P < 0.01).
Conclusion
In the skills test components of the class I and II KEMTLE, the procedure in which examinees randomly select questions should be revised to require examinees to respond to a set of fixed items in order to improve the reliability of the national licensing examination.
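
For readers unfamiliar with the difficulty index, the sketch below shows one way a comparison like the one above could be computed: the difficulty index of each skills item is taken as the proportion of examinees who passed it, and a chi-square test of homogeneity checks whether pass rates differ across the randomly assigned items. The counts and the choice of test are illustrative assumptions, not the 2013 KEMTLE data or the authors' exact procedure.

```python
"""Illustrative difficulty-index comparison across randomly assigned skills items."""
from scipy.stats import chi2_contingency

# Hypothetical pass/fail counts for 4 randomly assigned basic skills items.
items = {
    "item_A": {"pass": 310, "fail": 20},
    "item_B": {"pass": 280, "fail": 50},
    "item_C": {"pass": 295, "fail": 35},
    "item_D": {"pass": 250, "fail": 80},
}

for name, counts in items.items():
    total = counts["pass"] + counts["fail"]
    difficulty_index = counts["pass"] / total   # classical difficulty index = proportion passing
    print(f"{name}: difficulty index = {difficulty_index:.2f}")

# Chi-square test of homogeneity: do pass rates differ across the items?
table = [[c["pass"], c["fail"]] for c in items.values()]
chi2, p_value, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, P = {p_value:.4f}")
```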

Citations

Citations to this article as recorded by  
  • Multimedia-Based online Test on Indonesian Language Receptive Skills Development
    M Sudaryanto, D Mardapi, S Hadi
    Journal of Physics: Conference Series.2019; 1339(1): 012120.     CrossRef
