JEEHP : Journal of Educational Evaluation for Health Professions

Search results: 2 articles for the keyword "Likelihood functions"
Software reports
Special article on the 20th anniversary of the journal
The irtQ R package: a user-friendly tool for item response theory-based test data analysis and calibration  
Hwanggyu Lim, Kyungseok Kang
J Educ Eval Health Prof. 2024;21:23.   Published online September 12, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.23
  • 5,622 View
  • 319 Download
  • 3 Web of Science
  • 2 Crossref
Abstract
Computerized adaptive testing (CAT) has become a widely adopted test design for high-stakes licensing and certification exams, particularly in the health professions in the United States, due to its ability to tailor test difficulty in real time, reducing testing time while providing precise ability estimates. A key component of CAT is item response theory (IRT), which facilitates the dynamic selection of items based on examinees' ability levels during a test. Accurate estimation of item and ability parameters is essential for successful CAT implementation, necessitating convenient and reliable software to ensure precise parameter estimation. This paper introduces the irtQ R package (http://CRAN.R-project.org/), which simplifies IRT-based analysis and item calibration under unidimensional IRT models. While it does not directly simulate CAT, it provides essential tools to support CAT development, including parameter estimation using marginal maximum likelihood estimation via the expectation-maximization algorithm, pretest item calibration through fixed item parameter calibration and fixed ability parameter calibration methods, and examinee ability estimation. The package also enables users to compute item and test characteristic curves and information functions necessary for evaluating the psychometric properties of a test. This paper illustrates the key features of the irtQ package through examples using simulated datasets, demonstrating its utility in IRT applications such as test data analysis and ability scoring. By providing a user-friendly environment for IRT analysis, irtQ significantly enhances the capacity for efficient adaptive testing research and operations. Finally, the paper highlights additional core functionalities of irtQ, emphasizing its broader applicability to the development and operation of IRT-based assessments.
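The item characteristic curves and information functions mentioned in the abstract follow directly from the IRT model equations. As a minimal standalone illustration (a Python sketch of the standard two-parameter logistic model, not irtQ code), the following computes a 2PL response probability and the corresponding Fisher item information, the quantity a CAT maximizes when selecting items:

```python
import math

def prob_2pl(theta, a, b, D=1.7):
    """2PL item characteristic curve: P(correct | theta) for an item
    with discrimination a and difficulty b (D is the scaling constant)."""
    return 1.0 / (1.0 + math.exp(-D * a * (theta - b)))

def info_2pl(theta, a, b, D=1.7):
    """Fisher item information for the 2PL model: I(theta) = (D*a)^2 * P * (1 - P).
    Information peaks where theta equals the item difficulty b."""
    p = prob_2pl(theta, a, b, D)
    return (D * a) ** 2 * p * (1.0 - p)

# An examinee whose ability equals the item difficulty answers correctly
# with probability 0.5, where the item is also most informative.
print(prob_2pl(0.0, a=1.2, b=0.0))  # 0.5
```

Plotting `prob_2pl` and `info_2pl` over a grid of theta values reproduces the item characteristic and information curves that packages such as irtQ generate for evaluating a test's psychometric properties.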

Citations

Citations to this article as recorded by Crossref:
  • Development of a CAT based Diagnostic System for Assessing Basic Academic Skills in Undergraduate Students
    Woo-Jin Han, Jeongwook Choi, Dong-Gi Seo
    The Korean Association of General Education. 2025;19(3):177. CrossRef
  • Feasibility of applying computerized adaptive testing to the Clinical Medical Science Comprehensive Examination in Korea: a psychometric study
    Jeongwook Choi, Sung-Soo Jung, Eun Kwang Choi, Kyung Sik Kim, Dong Gi Seo
    Journal of Educational Evaluation for Health Professions. 2025;22:29. CrossRef
Introduction to the LIVECAT web-based computerized adaptive testing platform  
Dong Gi Seo, Jeongwook Choi
J Educ Eval Health Prof. 2020;17:27.   Published online September 29, 2020
DOI: https://doi.org/10.3352/jeehp.2020.17.27
  • 8,565 View
  • 161 Download
  • 7 Web of Science
  • 9 Crossref
Abstract
This study introduces LIVECAT, a web-based computerized adaptive testing platform. The platform provides many functions, including writing item content, managing an item bank, creating and administering tests, reporting test results, and providing information about tests and examinees. LIVECAT gives examination administrators an easy and flexible environment for composing and managing examinations. It is available at http://www.thecatkorea.com/. Several tools were used to build LIVECAT, as follows: operating system, Amazon Linux; web server, nginx 1.18; web application server, Apache Tomcat 8.5; database, Amazon RDS (MariaDB); and languages, Java 8, HTML5/CSS, JavaScript, and jQuery. The LIVECAT platform can implement several item response theory (IRT) models, such as the Rasch and 1-, 2-, and 3-parameter logistic models, and the administrator can choose a specific model during test construction. Multimedia data such as images, audio files, and movies can be uploaded to items. Two scoring methods (maximum likelihood estimation and expected a posteriori) are available, and the maximum Fisher information item selection method is applied to every IRT model in LIVECAT. The LIVECAT platform showed equal or better performance compared with a conventional test platform, and it enables users without psychometric expertise to easily implement and run computerized adaptive testing at their institutions. The most recent LIVECAT version provides only a dichotomous item response model and the basic components of CAT; upcoming releases will add advanced functions such as polytomous item response models, the weighted likelihood estimation method, and content balancing.
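The maximum Fisher information item selection rule described above can be sketched in a few lines. This is a hypothetical illustration of the general algorithm, not LIVECAT code (LIVECAT itself is implemented in Java); the item bank and parameter values here are invented for the example:

```python
import math

def info_2pl(theta, a, b, D=1.7):
    """Fisher information of a 2PL item at ability level theta."""
    p = 1.0 / (1.0 + math.exp(-D * a * (theta - b)))
    return (D * a) ** 2 * p * (1.0 - p)

def select_next_item(theta, item_bank, administered):
    """Maximum Fisher information rule: among items not yet administered,
    pick the index of the item most informative at the current ability
    estimate theta. item_bank is a list of (a, b) parameter pairs."""
    candidates = [i for i in range(len(item_bank)) if i not in administered]
    return max(candidates, key=lambda i: info_2pl(theta, *item_bank[i]))

# Toy bank of (a, b) pairs; with a current estimate of theta = 0.5 the
# rule prefers the item whose difficulty is closest to that estimate.
bank = [(1.0, -2.0), (1.0, 0.5), (1.0, 2.0)]
print(select_next_item(0.5, bank, administered=set()))  # 1
```

In an operational CAT, this selection step alternates with re-estimating theta (e.g., by maximum likelihood or expected a posteriori scoring, the two methods LIVECAT offers) after each response until a stopping rule is met.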

Citations

Citations to this article as recorded by Crossref:
  • A Systematic Review on Computerized Adaptive Testing
    Hümeyra Demir, Selahattin Gelbal
    Erzincan Üniversitesi Eğitim Fakültesi Dergisi. 2025;27(1):137. CrossRef
  • Development of a CAT based Diagnostic System for Assessing Basic Academic Skills in Undergraduate Students
    Woo-Jin Han, Jeongwook Choi, Dong-Gi Seo
    The Korean Association of General Education. 2025;19(3):177. CrossRef
  • Feasibility of applying computerized adaptive testing to the Clinical Medical Science Comprehensive Examination in Korea: a psychometric study
    Jeongwook Choi, Sung-Soo Jung, Eun Kwang Choi, Kyung Sik Kim, Dong Gi Seo
    Journal of Educational Evaluation for Health Professions. 2025;22:29. CrossRef
  • Comparison of real data and simulated data analysis of a stopping rule based on the standard error of measurement in computerized adaptive testing for medical examinations in Korea: a psychometric study
    Dong Gi Seo, Jeongwook Choi, Jinha Kim
    Journal of Educational Evaluation for Health Professions. 2024;21:18. CrossRef
  • Educational Technology in the University: A Comprehensive Look at the Role of a Professor and Artificial Intelligence
    Cheolkyu Shin, Dong Gi Seo, Seoyeon Jin, Soo Hwa Lee, Hyun Je Park
    IEEE Access. 2024;12:116727. CrossRef
  • The irtQ R package: a user-friendly tool for item response theory-based test data analysis and calibration
    Hwanggyu Lim, Kyungseok Kang
    Journal of Educational Evaluation for Health Professions. 2024;21:23. CrossRef
  • Presidential address: improving item validity and adopting computer-based testing, clinical skills assessments, artificial intelligence, and virtual reality in health professions licensing examinations in Korea
    Hyunjoo Pai
    Journal of Educational Evaluation for Health Professions. 2023;20:8. CrossRef
  • Patient-reported outcome measures in cancer care: Integration with computerized adaptive testing
    Minyu Liang, Zengjie Ye
    Asia-Pacific Journal of Oncology Nursing. 2023;10(12):100323. CrossRef
  • Development of a character qualities test for medical students in Korea using polytomous item response theory and factor analysis: a preliminary scale development study
    Yera Hur, Dong Gi Seo
    Journal of Educational Evaluation for Health Professions. 2023;20:20. CrossRef
