This study examines the legality and appropriateness of keeping the multiple-choice question items of the Korean Medical Licensing Examination (KMLE) confidential. Through an analysis of cases from the United States, Canada, and Australia, where medical licensing exams are conducted using item banks and computer-based testing, we found that exam items are kept confidential to ensure fairness and prevent cheating. In Korea, the Korea Health Personnel Licensing Examination Institute (KHPLEI) has been disclosing KMLE questions despite concerns over exam integrity. Korean courts have consistently ruled that multiple-choice question items prepared by public institutions are non-public information under Article 9(1)(v) of the Korea Official Information Disclosure Act (KOIDA), which exempts disclosure if it significantly hinders the fairness of exams or research and development. The Constitutional Court of Korea has upheld this provision. Given the time and cost involved in developing high-quality items and the need to accurately assess examinees’ abilities, there are compelling reasons to keep KMLE items confidential. As a public institution responsible for selecting qualified medical practitioners, KHPLEI should establish its disclosure policy based on a balanced assessment of public interest, without influence from specific groups. We conclude that KMLE questions qualify as non-public information under KOIDA, and KHPLEI may choose to maintain their confidentiality to ensure exam fairness and efficiency.
Purpose The objective of this study was to assess the performance of ChatGPT (GPT-4) on all items, including those with diagrams, in the Japanese National License Examination for Pharmacists (JNLEP) and compare it with the previous GPT-3.5 model’s performance.
Methods This study targeted the 107th JNLEP, conducted in 2022; all 344 items were input into the GPT-4 model. Separately, 284 items, excluding those with diagrams, were entered into the GPT-3.5 model. The answers were categorized and analyzed to determine accuracy rates by category, subject, and the presence or absence of diagrams. The accuracy rates were compared to the main passing criterion (overall accuracy rate ≥62.9%).
Results The overall accuracy rate for all items in the 107th JNLEP in GPT-4 was 72.5%, successfully meeting all the passing criteria. For the set of items without diagrams, the accuracy rate was 80.0%, which was significantly higher than that of the GPT-3.5 model (43.5%). The GPT-4 model demonstrated an accuracy rate of 36.1% for items that included diagrams.
Conclusion Advancements that allow GPT-4 to process images have made it possible for LLMs to answer all items in medical-related license examinations. This study’s findings confirm that ChatGPT (GPT-4) possesses sufficient knowledge to meet the passing criteria.
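For illustration only, the sketch below (not the authors' code; the item categories and grading results are hypothetical) shows how per-category accuracy rates could be computed from graded model answers and compared against the 62.9% overall passing criterion mentioned above.

```python
# Minimal sketch (hypothetical data, not the study's grading records):
# accuracy rates overall, by category, and by diagram presence.
import pandas as pd

# Hypothetical grading results: one row per exam item.
results = pd.DataFrame({
    "item_id":     [1, 2, 3, 4, 5, 6],
    "category":    ["required", "required", "general", "general", "practical", "practical"],
    "has_diagram": [False, True, False, False, True, False],
    "correct":     [True, True, False, True, False, True],
})

overall_accuracy = results["correct"].mean() * 100
by_category = results.groupby("category")["correct"].mean() * 100
by_diagram = results.groupby("has_diagram")["correct"].mean() * 100

PASS_THRESHOLD = 62.9  # overall accuracy criterion cited in the abstract (%)
print(f"Overall accuracy: {overall_accuracy:.1f}% "
      f"({'meets' if overall_accuracy >= PASS_THRESHOLD else 'below'} the {PASS_THRESHOLD}% criterion)")
print(by_category.round(1))
print(by_diagram.round(1))
```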
Citations to this article as recorded by
Potential of ChatGPT to Pass the Japanese Medical and Healthcare Professional National Licenses: A Literature Review Kai Ishida, Eisuke Hanada Cureus.2024;[Epub] CrossRef
Performance of Generative Pre-trained Transformer (GPT)-4 and Gemini Advanced on the First-Class Radiation Protection Supervisor Examination in Japan Hiroki Goto, Yoshioki Shiraishi, Seiji Okada Cureus.2024;[Epub] CrossRef
Performance of ChatGPT‐3.5 and ChatGPT‐4o in the Japanese National Dental Examination Osamu Uehara, Tetsuro Morikawa, Fumiya Harada, Nodoka Sugiyama, Yuko Matsuki, Daichi Hiraki, Hinako Sakurai, Takashi Kado, Koki Yoshida, Yukie Murata, Hirofumi Matsuoka, Toshiyuki Nagasawa, Yasushi Furuichi, Yoshihiro Abiko, Hiroko Miura Journal of Dental Education.2024;[Epub] CrossRef
Purpose This study presents item analysis results of the 26 health personnel licensing examinations managed by the Korea Health Personnel Licensing Examination Institute (KHPLEI) in 2022.
Methods The item difficulty index, item discrimination index, and reliability were calculated. Item discrimination was assessed using both a discrimination index based on the upper and lower 27% rule and the item-total correlation.
Results Out of 468,352 total examinees, 418,887 (89.4%) passed. The pass rates ranged from 27.3% for health educators level 1 to 97.1% for oriental medical doctors. Most examinations had a high average difficulty index, albeit to varying degrees, ranging from 61.3% for prosthetists and orthotists to 83.9% for care workers. The average discrimination index based on the upper and lower 27% rule ranged from 0.17 for oriental medical doctors to 0.38 for radiological technologists. The average item-total correlation ranged from 0.20 for oriental medical doctors to 0.38 for radiological technologists. The Cronbach α, as a measure of reliability, ranged from 0.872 for health educators level 3 to 0.978 for medical technologists. The correlation coefficient between the average difficulty index and average discrimination index was -0.2452 (P=0.1557), that between the average difficulty index and the average item-total correlation was 0.3502 (P=0.0392), and that between the average discrimination index and the average item-total correlation was 0.7944 (P<0.0001).
Conclusion This technical report presents the item analysis results and reliability of the recent examinations by the KHPLEI, demonstrating an acceptable range of difficulty index and discrimination index values, as well as good reliability.
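As an illustration of the indices reported above, the following sketch (simulated responses, not the KHPLEI data or analysis pipeline) computes the difficulty index, the upper/lower 27% discrimination index, the item-total correlation (here corrected by excluding the item), and Cronbach's α from a 0/1 scored response matrix.

```python
# Minimal sketch: classical item analysis on a simulated 0/1 response matrix.
import numpy as np

rng = np.random.default_rng(0)
scores = (rng.random((200, 40)) < 0.7).astype(float)  # 200 examinees x 40 items (simulated)

def item_analysis(x):
    n_examinees, n_items = x.shape
    total = x.sum(axis=1)

    difficulty = x.mean(axis=0)  # proportion correct per item

    # Discrimination index: upper 27% minus lower 27% proportion correct.
    k = int(round(0.27 * n_examinees))
    order = np.argsort(total)
    lower, upper = order[:k], order[-k:]
    discrimination = x[upper].mean(axis=0) - x[lower].mean(axis=0)

    # Corrected item-total correlation (item vs. total excluding that item).
    item_total_r = np.array([
        np.corrcoef(x[:, i], total - x[:, i])[0, 1] for i in range(n_items)
    ])

    # Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total).
    item_var = x.var(axis=0, ddof=1)
    alpha = n_items / (n_items - 1) * (1 - item_var.sum() / total.var(ddof=1))

    return difficulty, discrimination, item_total_r, alpha

difficulty, discrimination, item_total_r, alpha = item_analysis(scores)
print(f"mean difficulty={difficulty.mean():.3f}, "
      f"mean discrimination={discrimination.mean():.3f}, "
      f"mean item-total r={item_total_r.mean():.3f}, alpha={alpha:.3f}")
```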
Purpose This study investigated the validity of introducing a clinical skills examination (CSE) to the Korean Oriental Medicine Licensing Examination through a mixed-method modified Delphi study.
Methods A 3-round Delphi study was conducted between September and November 2022. The expert panel comprised 21 oriental medicine education experts who were officially recommended by relevant institutions and organizations. The questionnaires included potential content for the CSE and a detailed implementation strategy. Subcommittees were formed to discuss concerns around the introduction of the CSE, which were collected as open-ended questions. In this study, a 66.7% or greater agreement rate was defined as achieving a consensus.
Results The expert panel’s evaluation of the proposed clinical presentations and basic clinical skills suggested their priorities. Of the 10 items investigated for building a detailed implementation strategy for the introduction of the CSE to the Korean Oriental Medicine Licensing Examination, a consensus was achieved on 9. However, the agreement rate on the timing of the introduction of the CSE was low. Concerns around 4 clinical topics were discussed in the subcommittees, and potential solutions were proposed.
Conclusion This study offers preliminary data and raises some concerns that can be used as a reference while discussing the introduction of the CSE to the Korean Oriental Medicine Licensing Examination.
Purpose This study aims to suggest the number of test items in each of the 8 nursing activity categories of the Korean Nursing Licensing Examination, which comprises 134 activity statements encompassing 275 items, so that the examination can evaluate the minimum ability that nursing graduates must have to perform their duties.
Methods Two opinion surveys involving the members of 7 academic societies were conducted from March 19 to May 14, 2021. The survey results were reviewed by members of 4 expert associations from May 21 to June 4, 2021. The revised numbers of items in each category were compared with those reported by Tak and colleagues and with the National Council Licensure Examination for Registered Nurses of the United States.
Results Based on the 2 opinion surveys and previous studies, the suggested item allocation to the 8 nursing activity categories of the Korean Nursing Licensing Examination is as follows: 50 items for management of care and improvement of professionalism, 33 items for safety and infection control, 40 items for management of potential risk, 28 items for basic care, 47 items for physiological integrity and maintenance, 33 items for pharmacological and parenteral therapies, 24 items for psychosocial integrity and maintenance, and 20 items for health promotion and maintenance. Twenty other items related to health and medical laws were not included due to their mandatory status.
Conclusion These suggestions for the number of test items in each activity category will be helpful in developing new items for the Korean Nursing Licensing Examination.
Purpose The number of Korean midwifery licensing examination applicants has steadily decreased due to the low birth rate and lack of training institutions for midwives. This study aimed to evaluate the adequacy of the examination-based licensing system and the possibility of a training-based licensing system.
Methods A survey questionnaire was developed and dispatched to 230 professionals from December 28, 2022 to January 13, 2023, through an online form using Google Surveys. Descriptive statistics were used to analyze the results.
Results Responses from 217 persons (94.3%) were analyzed after excluding incomplete responses. Out of the 217 participants, 198 (91.2%) agreed with maintaining the current examination-based licensing system; 94 (43.3%) agreed with implementing a training-based licensing system to cover the examination costs due to the decreasing number of applicants; 132 (60.8%) agreed with establishing a midwifery education evaluation center for a training-based licensing system; 163 (75.1%) said that the quality of midwifery might be lowered if midwives were produced only by a training-based licensing system, and 197 (90.8%) said that the training of midwives as birth support personnel should be promoted in Korea.
Conclusion Favorable results were reported for the examination-based licensing system; however, if a training-based licensing system is implemented, it will be necessary to establish a midwifery education evaluation center to manage the quality of midwives. As the annual number of candidates for the Korean midwifery licensing examination has been approximately 10 in recent years, granting midwifery licenses through a training-based system should be considered more actively.
Purpose This study aimed to develop the examination objectives based on nursing competency of the Korean Nursing Licensing Examination.
Methods This is a validity study to develop the examination objectives based on nursing competency. Data were collected in December 2021. We reviewed the literature related to changing nurse roles and on the learning objectives for the Korea Medical Licensing Examination and other health personnel licensing examinations. Thereafter, we created a draft of the nursing problems list for examination objectives based on the literature review, and the content validity was evaluated by experts. A final draft of the examination objectives is presented and discussed.
Results A total of 4 domains, 12 classes, and 85 nursing problems for the Korean Nursing Licensing Examination were developed. They included the essentials of objectives, related factors, evaluation goals, related activity statements, related clients, related settings, and specific outcomes.
Conclusion This study developed a draft of examination objectives based on clinical competency that reflect the clinical situations nurses encounter and are suitable for constructing test items for the licensing examination. These results may provide fundamental data for item development that reflects future nursing practice.
Citations to this article as recorded by
A validity study of COMLEX-USA Level 3 with the new test design Xia Mao, John R. Boulet, Jeanne M. Sandella, Michael F. Oliverio, Larissa Smith Journal of Osteopathic Medicine.2024; 124(6): 257. CrossRef
A Survey on Perceptions of the Direction of Korean Medicine Education and National Licensing Examination Han-Byul Cho, Won-Suk Sung, Jiseong Hong, Yeonseok Kang, Eun-Jung Kim Healthcare.2023; 11(12): 1685. CrossRef
Suggestion for item allocation to 8 nursing activity categories of the Korean Nursing Licensing Examination: a survey-based descriptive study Kyunghee Kim, So Young Kang, Younhee Kang, Youngran Kweon, Hyunjung Kim, Youngshin Song, Juyeon Cho, Mi-Young Choi, Hyun Su Lee Journal of Educational Evaluation for Health Professions.2023; 20: 18. CrossRef
Purpose This study aimed to gather opinions from medical educators on the possibility of introducing an interview to the Korean Medical Licensing Examination (KMLE) to assess professional attributes. Specifically, the following topics were addressed: the appropriate timing and tools for assessing unprofessional conduct; whether introducing an interview to the KMLE could prevent unprofessional conduct; and whether an interview could feasibly be implemented in the KMLE.
Methods A cross-sectional study based on a survey questionnaire was adopted. We analyzed 104 news reports about doctors’ unprofessional conduct to identify the deficient professional attributes, derived 24 items of unprofessional conduct, developed a questionnaire, and surveyed 250 members of the Korean Society of Medical Education twice. Descriptive statistics, cross-tabulation analysis, and Fisher’s exact test were applied to the responses. The answers to the open-ended questions were analyzed using conventional content analysis.
Results In the first survey, 49 members (19.6%) responded; of these, 24 (49.5%) also responded to the second survey. No single timing for assessing unprofessional conduct was dominant among basic medical education (BME), the KMLE, and continuing professional development (CPD), and no single assessment tool was dominant among written examination, objective structured clinical examination, practice observation, and interview. Regarding whether introducing an interview to the KMLE could prevent unprofessional conduct, “impossible” (49.0%) was chosen slightly more often than “possible” (42.9%). Regarding the feasibility of implementation, “impossible” (50.0%) was selected more often than “possible” (33.3%).
Conclusion Professional attributes should be assessed with various tools over the period from BME to CPD. Hence, it may not be feasible to introduce an interview to assess professional attributes in the KMLE; instead, a system such as self-regulation by the professional body, rather than the licensing examination, is needed.
Purpose This study aimed to evaluate the level of professional ethics awareness and medical ethics competency in order to assess the potential need for ethics items to be included on the Korean Dental Hygienist Licensing Examination.
Methods In total, 358 clinical dental hygienists and dental hygiene students completed a structured questionnaire to evaluate their level of ethical awareness and medical ethics competency. The sub-factors of medical ethics were classified into relationships with patients, medical and social relations, and individual specialized fields.
Results Only 32.1% of participants indicated that they had taken a course on professional ethics in the university curriculum, but 95.2% of respondents considered professional ethics to be important. The overall score for medical ethics competency was average (3.37 out of 5). The score for relationships with patients was 3.75 points, followed by medical and social relations (3.19 points) and individual specialized fields (3.16 points). The level of professional ethics awareness was higher among participants who had taken a course on professional ethics than among those who had not done so or who did not remember whether they had done so.
Conclusion Dental hygienists were aware of the importance of professional ethics, but their medical ethics competency was moderate. Therefore, medical ethics should be treated as a required subject in the university curriculum, and medical ethics competency evaluations should be strengthened by adding ethics items to the Korean Dental Hygienist Licensing Examination.
Purpose This study explored the possibility of using the Angoff method, in which a panel of experts determines the cut score of an examination, for the Korean Nursing Licensing Examination (KNLE). Two mock exams for the KNLE were analyzed: the Angoff standard-setting procedure was conducted and its results were examined. We also aimed to evaluate the procedural validity of applying the Angoff method in this context.
Methods For both mock exams, we set a pass-fail cut score using the Angoff method. The standard setting panel consisted of 16 nursing professors. After the Angoff procedure, the procedural validity of establishing the standard was evaluated by investigating the responses of the standard setters.
Results The descriptions of the minimally competent person for the KNLE were presented at the levels of general and subject performance. The cut scores of the first and second mock exams were 74.4 and 76.8, respectively, both higher than the traditional cut score (60% of the total score of the KNLE). The panel survey showed very positive responses, with scores higher than 4 out of 5 points on a Likert scale.
Conclusion The cut scores calculated for the two mock tests were similar and much higher than the existing cut score. In the second mock exam, the standard deviation of the Angoff ratings was lower than in the first. According to the survey results, procedural validity was acceptable, as shown by a high level of confidence among the panelists. These results show that determining cut scores through an expert panel is a feasible approach for the KNLE.
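For readers unfamiliar with the procedure, a minimal sketch of the percent Angoff calculation follows; the panel size matches the study (16 raters), but the ratings themselves are simulated, not the study's data.

```python
# Minimal sketch: the percent Angoff cut score is the sum over items of the
# panel's mean estimated probability that a minimally competent examinee
# answers correctly.
import numpy as np

rng = np.random.default_rng(1)
n_panelists, n_items = 16, 100
# Each rating: estimated probability (0-1) of a correct answer by a
# minimally competent examinee (simulated here).
ratings = rng.uniform(0.4, 0.95, size=(n_panelists, n_items))

item_means = ratings.mean(axis=0)        # panel consensus per item
cut_score = item_means.sum()             # expected raw score of the borderline examinee
cut_percent = 100 * cut_score / n_items  # as a percentage of the maximum score

print(f"Angoff cut score: {cut_score:.1f}/{n_items} ({cut_percent:.1f}%)")
print(f"Between-rater SD of the cut score: {ratings.sum(axis=1).std(ddof=1):.2f}")
```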
Citations to this article as recorded by
Experts’ prediction of item difficulty of multiple-choice questions in the Ethiopian Undergraduate Medicine Licensure Examination Shewatatek Gedamu Wonde, Tefera Tadesse, Belay Moges, Stefan K. Schauber BMC Medical Education.2024;[Epub] CrossRef
Comparing Estimated and Real Item Difficulty Using Multi-Facet Rasch Analysis Ayfer SAYIN, Sebahat GÖREN Eğitimde ve Psikolojide Ölçme ve Değerlendirme Dergisi.2023; 14(4): 440. CrossRef
Application of computer-based testing in the Korean Medical Licensing Examination, the emergence of the metaverse in medical education, journal metrics and statistics, and appreciation to reviewers and volunteers Sun Huh Journal of Educational Evaluation for Health Professions.2022; 19: 2. CrossRef
Possibility of using the yes/no Angoff method as a substitute for the percent Angoff method for estimating the cutoff score of the Korean Medical Licensing Examination: a simulation study Janghee Park Journal of Educational Evaluation for Health Professions.2022; 19: 23. CrossRef
Development of examination objectives based on nursing competency for the Korean Nursing Licensing Examination: a validity study Sujin Shin, Gwang Suk Kim, Jun-Ah Song, Inyoung Lee Journal of Educational Evaluation for Health Professions.2022; 19: 19. CrossRef
Possibility of independent use of the yes/no Angoff and Hofstee methods for the standard setting of the Korean Medical Licensing Examination written test: a descriptive study Do-Hwan Kim, Ye Ji Kang, Hoon-Ki Park Journal of Educational Evaluation for Health Professions.2022; 19: 33. CrossRef
Comparing the cut score for the borderline group method and borderline regression method with norm-referenced standard setting in an objective structured clinical examination in medical school in Korea Song Yi Park, Sang-Hwa Lee, Min-Jeong Kim, Ki-Hwan Ji, Ji Ho Ryu Journal of Educational Evaluation for Health Professions.2021; 18: 25. CrossRef
Purpose We aimed to review and improve the quality of the document used by the relevant Korean government body to verify foreign university/college nursing graduates’ eligibility and their qualification to take the Korean nursing licensing examination.
Methods This was a descriptive study. We analyzed the current Korean system for qualifying foreign graduates to take the Korean nursing licensing examination, as well as the corresponding systems used in some other countries. We then created a draft of the revised qualification standards document based on these two analyses and their comparison, and administered a questionnaire in an open hearing with 5 experts to improve the draft’s quality. Finally, we present and discuss the final draft.
Results The revised qualification criteria included confirming whether the foreign graduate’s university is accredited by its relevant government body, eliminating several documents previously required from foreign graduates, a minimum number of credits (1,000 hours) for the original course, a minimum 3-year enrollment period for the original course, and a mandatory reassessment of the recognition of foreign graduates’ universities in a 5-year cycle.
Conclusion By creating a revised draft that addresses the flaws of the current document used to determine whether foreign graduates qualify to take the Korean nursing licensing examination, we have simplified the document and made the application process easier to understand. We hope that this draft will contribute to a more objective and equitable qualification process for foreign university nursing graduates in Korea.
Citations to this article as recorded by
Recognition of nursing qualification and credentialing pathway of Filipino nurses in Finland: A qualitative study Floro Cubelo, Maliheh Nekouei Marvi Langari, Krista Jokiniemi, Hannele Turunen International Nursing Review.2024; 71(3): 661. CrossRef
Purpose Smart device-based testing (SBT) is being introduced into the Republic of Korea’s high-stakes examination system, starting with the Korean Emergency Medicine Technician Licensing Examination (KEMTLE) in December 2017. In order to minimize the effects of variation in examinees’ environment on test scores, this study aimed to identify any associations of variables related to examinees’ individual characteristics and their perceived acceptability of SBT with their SBT practice test scores.
Methods Of the 569 candidate students who took the KEMTLE on September 12, 2015, 560 responded to a survey questionnaire on the acceptability of SBT after the examination. The questionnaire addressed 8 individual characteristics and contained 2 satisfaction, 9 convenience, and 9 preference items. A comparative analysis according to individual variables was performed. Furthermore, a generalized linear model (GLM) analysis was conducted to identify the effects of individual characteristics and perceived acceptability of SBT on test scores.
Results Among those who preferred SBT over paper-and-pencil testing, test scores were higher for male participants (mean±standard deviation [SD], 4.36±0.72) than for female participants (mean±SD, 4.21±0.73). According to the GLM, none of the variables evaluated, including gender and experience with computer-based testing, SBT, or tablet PC use, showed a statistically significant relationship with the total score, scores on multimedia items, or scores on text items.
Conclusion Individual characteristics and perceived acceptability of SBT did not affect the SBT practice test scores of emergency medicine technician students in Korea. It should be possible to adopt SBT for the KEMTLE without interference from the variables examined in this study.
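A minimal sketch of the type of GLM analysis described above is given below; the variable names and data are hypothetical, and statsmodels is assumed as the fitting library, not necessarily the one used in the study.

```python
# Minimal sketch (hypothetical variables): a Gaussian GLM testing whether
# individual characteristics and perceived acceptability of SBT predict
# practice test scores.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical survey + score data.
df = pd.DataFrame({
    "total_score":    [72, 65, 80, 77, 69, 74, 81, 66],
    "gender":         ["M", "F", "F", "M", "F", "M", "M", "F"],
    "cbt_experience": [1, 0, 1, 1, 0, 0, 1, 0],                   # prior computer-based testing
    "tablet_use":     [1, 1, 0, 1, 0, 1, 1, 0],                   # prior tablet PC use
    "sbt_preference": [4.2, 3.8, 4.5, 4.0, 3.5, 4.1, 4.6, 3.7],   # 5-point Likert rating
})

model = smf.glm(
    "total_score ~ C(gender) + cbt_experience + tablet_use + sbt_preference",
    data=df,
    family=sm.families.Gaussian(),
).fit()
print(model.summary())  # coefficient estimates and significance tests
```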
Citations to this article as recorded by
Application of computer-based testing in the Korean Medical Licensing Examination, the emergence of the metaverse in medical education, journal metrics and statistics, and appreciation to reviewers and volunteers Sun Huh Journal of Educational Evaluation for Health Professions.2022; 19: 2. CrossRef
Evaluation of Student Satisfaction with Ubiquitous-Based Tests in Women’s Health Nursing Course Mi-Young An, Yun-Mi Kim Healthcare.2021; 9(12): 1664. CrossRef
Purpose This study aimed to compare the possible standard-setting methods for the Korean Radiological Technologist Licensing Examination, which has a fixed cut score, and to suggest the most appropriate method.
Methods Six radiological technology professors set standards for 250 items on the Korean Radiological Technologist Licensing Examination administered in December 2016 using the Angoff, Ebel, bookmark, and Hofstee methods.
Results With a maximum percentile score of 100, the cut score for the examination was 71.27 using the Angoff method, 62.2 using the Ebel method, 64.49 using the bookmark method, and 62 using the Hofstee method. Based on the Hofstee method, an acceptable cut score for the examination would be between 52.83 and 70, but the cut score was 71.27 using the Angoff method.
Conclusion The above results suggest that the best standard-setting method to determine the cut score would be a panel discussion with the modified Angoff or Ebel method, with verification of the rated results by the Hofstee method. Since no standard-setting method has yet been adopted for the Korean Radiological Technologist Licensing Examination, this study will be able to provide practical guidance for introducing a standard-setting process.
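A minimal sketch of the Hofstee verification step follows; the score distribution is simulated, and the panel bounds are illustrative values loosely echoing the acceptable range reported above (52.83–70), not the study's ratings.

```python
# Minimal sketch: the Hofstee compromise method finds where the observed
# cut-score/fail-rate curve crosses the line joining (min cut, max fail rate)
# and (max cut, min fail rate), averaged over panelists.
import numpy as np

rng = np.random.default_rng(2)
scores = np.clip(rng.normal(68, 10, size=1000), 0, 100)  # simulated percent scores

# Panel-averaged judgments (hypothetical values).
k_min, k_max = 52.8, 70.0    # lowest / highest acceptable cut score
f_min, f_max = 0.05, 0.40    # lowest / highest acceptable fail rate

candidates = np.linspace(k_min, k_max, 500)
observed_fail = np.array([(scores < c).mean() for c in candidates])
# Straight line from (k_min, f_max) down to (k_max, f_min).
line_fail = f_max + (f_min - f_max) * (candidates - k_min) / (k_max - k_min)

crossing = candidates[np.argmin(np.abs(observed_fail - line_fail))]
print(f"Hofstee cut score is about {crossing:.1f} "
      f"(fail rate about {(scores < crossing).mean():.1%})")
```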
Citations to this article as recorded by
Setting standards for a diagnostic test of aviation English for student pilots Maria Treadaway, John Read Language Testing.2024; 41(3): 557. CrossRef
Third Year Veterinary Student Academic Encumbrances and Tenacity: Navigating Clinical Skills Curricula and Assessment Saundra H. Sample, Elpida Artemiou, Darlene J. Donszelmann, Cindy Adams Journal of Veterinary Medical Education.2024;[Epub] CrossRef
The challenges inherent with anchor-based approaches to the interpretation of important change in clinical outcome assessments Kathleen W. Wyrwich, Geoffrey R. Norman Quality of Life Research.2023; 32(5): 1239. CrossRef
Possibility of independent use of the yes/no Angoff and Hofstee methods for the standard setting of the Korean Medical Licensing Examination written test: a descriptive study Do-Hwan Kim, Ye Ji Kang, Hoon-Ki Park Journal of Educational Evaluation for Health Professions.2022; 19: 33. CrossRef
Comparison of the validity of bookmark and Angoff standard setting methods in medical performance tests Majid Yousefi Afrashteh BMC Medical Education.2021;[Epub] CrossRef
Comparing the cut score for the borderline group method and borderline regression method with norm-referenced standard setting in an objective structured clinical examination in medical school in Korea Song Yi Park, Sang-Hwa Lee, Min-Jeong Kim, Ki-Hwan Ji, Ji Ho Ryu Journal of Educational Evaluation for Health Professions.2021; 18: 25. CrossRef
Using the Angoff method to set a standard on mock exams for the Korean Nursing Licensing Examination Mi Kyoung Yim, Sujin Shin Journal of Educational Evaluation for Health Professions.2020; 17: 14. CrossRef
Performance of the Ebel standard-setting method for the spring 2019 Royal College of Physicians and Surgeons of Canada internal medicine certification examination consisting of multiple-choice questions Jimmy Bourque, Haley Skinner, Jonathan Dupré, Maria Bacchus, Martha Ainslie, Irene W. Y. Ma, Gary Cole Journal of Educational Evaluation for Health Professions.2020; 17: 12. CrossRef
Similarity of the cut score in test sets with different item amounts using the modified Angoff, modified Ebel, and Hofstee standard-setting methods for the Korean Medical Licensing Examination Janghee Park, Mi Kyoung Yim, Na Jin Kim, Duck Sun Ahn, Young-Min Kim Journal of Educational Evaluation for Health Professions.2020; 17: 28. CrossRef
Purpose The dimensionality of examinations provides empirical evidence of the internal test structure underlying the responses to a set of items. In turn, the internal structure is an important piece of evidence of the validity of an examination. Thus, the aim of this study was to investigate the performance of the DETECT program and to use it to examine the internal structure of the Korean nursing licensing examination.
Methods Non-parametric methods of dimensional testing, such as the DETECT program, have been proposed as ways of overcoming the limitations of traditional parametric methods. A non-parametric method (the DETECT program) was investigated using simulation data under several conditions and applied to the Korean nursing licensing examination.
Results The DETECT program performed well in terms of determining the number of underlying dimensions under several different conditions in the simulated data. Further, the DETECT program correctly revealed the internal structure of the Korean nursing licensing examination, meaning that it detected the proper number of dimensions and appropriately clustered the items within each dimension.
Conclusion The DETECT program performed well in detecting the number of dimensions and in assigning items for each dimension. This result implies that the DETECT method can be useful for examining the internal structure of assessments, such as licensing examinations, that possess relatively many domains and content areas.
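To illustrate the conditional covariance idea underlying DETECT (this is not the DETECT program itself, and the data are simulated), the sketch below evaluates a DETECT-type index for a known two-dimensional item partition.

```python
# Minimal sketch: item pairs from the same dimension tend to have positive
# covariance conditional on the rest score, pairs from different dimensions
# negative; a DETECT-type index averages signed conditional covariances.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
n, items_per_dim = 2000, 10
theta = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=n)  # 2 correlated traits
b = rng.normal(0, 1, size=2 * items_per_dim)                            # item difficulties
dim_of_item = np.repeat([0, 1], items_per_dim)
prob = 1 / (1 + np.exp(-(theta[:, dim_of_item] - b)))                   # Rasch-like response model
responses = (rng.random(prob.shape) < prob).astype(float)

def conditional_cov(x, i, j, n_strata=10):
    """Covariance of items i and j within strata of the rest score."""
    rest = x.sum(axis=1) - x[:, i] - x[:, j]
    edges = np.quantile(rest, np.linspace(0, 1, n_strata + 1)[1:-1])
    strata = np.digitize(rest, edges)
    covs, weights = [], []
    for s in np.unique(strata):
        mask = strata == s
        if mask.sum() > 1:
            covs.append(np.cov(x[mask, i], x[mask, j])[0, 1])
            weights.append(mask.sum())
    return np.average(covs, weights=weights)

# DETECT-type index for the partition that groups items by their true dimension.
pairs = list(combinations(range(responses.shape[1]), 2))
d_value = np.mean([
    (1 if dim_of_item[i] == dim_of_item[j] else -1) * conditional_cov(responses, i, j)
    for i, j in pairs
])
print(f"DETECT-type index for the true partition: {100 * d_value:.2f}")
```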
Citations to this article as recorded by
Meanings of Rough Sex across Gender, Sexual Identity, and Political Ideology: A Conditional Covariance Approach Dubravka Svetina Valdivia, Debby Herbenick, Tsung-chieh Fu, Heather Eastman-Mueller, Lucia Guerra-Reyes, Molly Rosenberg Journal of Sex & Marital Therapy.2022; 48(6): 579. CrossRef
The accuracy and consistency of mastery for each content domain using the Rasch and deterministic inputs, noisy “and” gate diagnostic classification models: a simulation study and a real-world analysis using data from the Korean Medical Licensing Examination Dong Gi Seo, Jae Kum Kim Journal of Educational Evaluation for Health Professions.2021; 18: 15. CrossRef
Estimation of item parameters and examinees’ mastery probability in each domain of the Korean Medical Licensing Examination using a deterministic inputs, noisy “and” gate (DINA) model Younyoung Choi, Dong Gi Seo Journal of Educational Evaluation for Health Professions.2020; 17: 35. CrossRef
Linear programming method to construct equated item sets for the implementation of periodical computer-based testing for the Korean Medical Licensing Examination Dong Gi Seo, Myeong Gi Kim, Na Hui Kim, Hye Sook Shin, Hyun Jung Kim Journal of Educational Evaluation for Health Professions.2018; 15: 26. CrossRef
Purpose The purpose of this study was to analyze opinions about the action plan for implementing a clinical performance exam as part of the national nursing licensing examination and to present the expected effects of the performance exam and the aspects to consider regarding its implementation.
Methods This study used a mixed-methods design. Quantitative data were collected by a questionnaire survey, while qualitative data were collected by focus group interviews with experts. The survey targeted 200 nursing professors and clinical nurses with more than 5 years of work experience, and the focus group interviews were conducted with 28 professors, clinical instructors, and hospital nurses.
Results First, nursing professors and clinical specialists agreed that the current written tests have limitations in evaluating examinees’ ability, and that the introduction of a clinical performance exam will yield positive results. A clinical performance exam is necessary to evaluate and improve nurses’ work ability, meaning that a performance exam is advisable if its reliability and validity can be verified. Second, most respondents chose direct performance exams using simulators or standardized patients as the most suitable format of the test.
Conclusion The current national nursing licensing exam is somewhat limited in its ability to identify competent nurses. Thus, the time has come to seriously consider the introduction of a performance exam. The prerequisites for successfully implementing a clinical performance exam as part of the national nursing licensing exam are establishing a professional training process and forming a consortium to standardize practical training.
Citations to this article as recorded by
The Clinical Nursing Competency Assessment System of Ghana: Perspectives of Key Informants Oboshie Anim-Boamah, Christmal Dela Christmals, Susan Jennifer Armstrong Sage Open.2022;[Epub] CrossRef
Adaptation of Extended Reality Smart Glasses for Core Nursing Skill Training Among Undergraduate Nursing Students: Usability and Feasibility Study Sun Kyung Kim, Youngho Lee, Hyoseok Yoon, Jongmyung Choi Journal of Medical Internet Research.2021; 23(3): e24313. CrossRef
Nursing Students’ Experiences on Clinical Competency Assessment in Ghana Oboshie Anim-Boamah, Christmal Dela Christmals, Susan Jennifer Armstrong Nurse Media Journal of Nursing.2021; 11(3): 278. CrossRef
Clinical nursing competency assessment: a scoping review Oboshie Anim-Boamah, Christmal Dela Christmals, Susan Jennifer Armstrong Frontiers of Nursing.2021; 8(4): 341. CrossRef
Factors Influencing the Success of the National Nursing Competency Examination taken by the Nursing Diploma Students in Yogyakarta Yulia Wardani Jurnal Ners.2020; 14(2): 172. CrossRef