Purpose To ensure faculty members’ active participation in education in response to growing demand, medical schools should clearly describe educational activities in their promotion regulations. This study analyzed how medical education activities were evaluated in the promotion regulations of Korean medical schools as of 2022.
Methods Data were collected from promotion regulations retrieved by searching the websites of 22 medical schools/universities in August 2022. To categorize educational activities and evaluation methods, the Association of American Medical Colleges framework for educational activities was utilized. Correlations between medical schools’ characteristics and the evaluation of medical educational activities were analyzed.
Results We defined 6 categories (teaching, development of education products, education administration and service, scholarship in education, student affairs, and others) comprising 20 activities and 57 sub-activities. The average number of included activities was highest in the development of education products category and lowest in the scholarship in education category. The weight adjustment factors for medical educational activities were the characteristics of the target subjects and faculty members, the number of faculty members involved, and the difficulty of the activities. Private medical schools tended to include more educational activities in their regulations than public medical schools. The greater the number of faculty members, the greater the number of educational activities in the education administration and service category.
Conclusion Medical schools in Korea included various medical education activities and their evaluation methods in their promotion regulations. This study provides basic data for improving the system of rewarding medical faculty members’ educational efforts.
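The abstract above does not name the specific correlation statistics used, so the following is only a minimal sketch of how such an analysis might look, assuming a point-biserial correlation for the public/private comparison and a Spearman rank correlation for faculty size; the data, column names, and values are entirely hypothetical.

```python
# Minimal sketch (hypothetical data): correlating school characteristics
# with the number of educational activities listed in promotion regulations.
import pandas as pd
from scipy import stats

# Hypothetical per-school records: ownership (1 = private, 0 = public),
# faculty headcount, and the count of education administration and
# service activities found in each school's regulations.
schools = pd.DataFrame({
    "private": [1, 0, 1, 1, 0, 1],
    "n_faculty": [120, 85, 210, 160, 95, 140],
    "n_admin_service": [7, 4, 11, 9, 5, 8],
})

# Point-biserial correlation: binary ownership vs. activity count.
r_pb, p_pb = stats.pointbiserialr(schools["private"], schools["n_admin_service"])

# Spearman rank correlation: faculty size vs. activity count
# (rank-based, reasonable for a small sample such as 22 schools).
rho, p_rho = stats.spearmanr(schools["n_faculty"], schools["n_admin_service"])

print(f"ownership vs. activities: r={r_pb:.2f} (P={p_pb:.3f})")
print(f"faculty size vs. activities: rho={rho:.2f} (P={p_rho:.3f})")
```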
At the end of 2022, the appearance of ChatGPT, an artificial intelligence (AI) chatbot with remarkable writing ability, caused a great sensation in academia. The chatbot proved to be very capable, but also capable of deception, and the news broke that several researchers had listed the chatbot (including its earlier version) as a co-author of their academic papers. In response, Nature and Science stated their position that this chatbot cannot be listed as an author in the papers they publish. Since an AI chatbot is not a human being, under the current legal system the text it automatically generates cannot be a copyrighted work; thus, an AI chatbot cannot be the author of a copyrighted work. Current AI chatbots such as ChatGPT are far more advanced than search engines in that they produce original text, but they still remain at the level of a search engine in that they cannot take responsibility for their writing. For this reason, they also cannot be authors from the perspective of research ethics.
Purpose Orthopedic manual therapy (OMT) education varies considerably across philosophies, and although the literature has offered a more comprehensive understanding of the contextual, patient-specific, and technique factors that interact to influence outcomes, most OMT training paradigms continue to emphasize the mechanical basis for OMT application. The purpose of this study was to establish consensus on the modifications and adaptations that need to occur within OMT education to align it with current evidence.
Methods A 3-round Delphi survey designed to identify foundational knowledge to include in, and omit from, OMT education was completed by 28 educators working within high-level manual therapy education programs internationally. Round 1 consisted of open-ended questions to identify content in each area. Rounds 2 and 3 allowed participants to rank the themes identified in Round 1.
Results Consensus was reached on 25 content areas to include within OMT education, 1 content area to omit, and 34 knowledge components that should be present in those providing OMT. Support was seen for education promoting an understanding of the complex psychological, neurophysiological, and biomechanical systems as they relate to both evaluation and treatment effect. Although some concepts were supported more consistently than others, there was considerable variability in responses, which is largely expected to be related to previous training.
Conclusion The results of this study indicate that manual therapy educators understand evidence-based practice, as support for all 3 tiers of evidence was represented. These results should guide the development and modification of OMT training programs.
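As a rough illustration of how Round 2/3 agreement might be scored, here is a minimal sketch; the 75% consensus threshold, the binary include/omit ratings, and the item names are all assumptions for illustration, since the abstract does not state the study’s actual consensus criterion.

```python
# Minimal sketch (assumed criterion): flagging Delphi consensus from
# hypothetical panel ratings of candidate OMT content areas.
import pandas as pd

# Hypothetical ratings: rows = 28 educators, columns = content areas,
# values = 1 (include) or 0 (omit).
ratings = pd.DataFrame({
    "pain_neurophysiology": [1] * 26 + [0] * 2,
    "segmental_motion_palpation": [1] * 5 + [0] * 23,
})

CONSENSUS = 0.75  # assumed agreement threshold

agreement = ratings.mean()  # proportion of the panel endorsing each item
include = agreement[agreement >= CONSENSUS].index.tolist()
omit = agreement[agreement <= 1 - CONSENSUS].index.tolist()

print("consensus to include:", include)
print("consensus to omit:", omit)
```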
Purpose A nutrition support nurse is a member of a nutrition support team and a health care professional who plays a significant role in all aspects of nutritional care. This study aimed to investigate ways to improve the quality of the tasks performed by nutrition support nurses in Korea through a survey questionnaire.
Methods An online survey was conducted between October 12 and November 31, 2018. The questionnaire consisted of 36 items categorized into 5 subscales: nutrition-focused support care, education and counseling, consultation and coordination, research and quality improvement, and leadership. The importance–performance analysis method was used to examine the relationship between the importance and performance of nutrition support nurses’ tasks.
Results A total of 101 nutrition support nurses participated in this survey. The importance (5.56±0.78) and performance (4.50±1.06) of nutrition support nurses’ tasks differed significantly (t=11.27, P<0.001). Education, counseling/consultation, and participation in the development of processes and guidelines were identified as low-performance activities relative to their importance.
Conclusion To provide nutrition support effectively, nutrition support nurses should acquire the relevant qualifications or competencies through education programs based on their practice. Improved awareness among nutrition support nurses of participating in research and quality improvement activities is also required for role development.
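Below is a minimal sketch of the importance–performance analysis (IPA) named in the Methods, using the conventional quadrant split at the grand means of each axis; the subscale scores are hypothetical, and the paired t-test is shown across subscales purely for illustration (the study paired ratings within each of the 101 respondents).

```python
# Minimal sketch (hypothetical scores): importance-performance analysis
# with the classic quadrant labels, plus a paired t-test for the gap.
import pandas as pd
from scipy import stats

tasks = pd.DataFrame({
    "task": ["nutrition-focused support care", "education and counseling",
             "consultation and coordination", "research and QI", "leadership"],
    "importance": [6.1, 5.9, 5.6, 5.2, 5.0],   # hypothetical 7-point ratings
    "performance": [5.2, 3.9, 4.1, 3.5, 4.6],
})

# Paired t-test of importance vs. performance (illustrative only).
t, p = stats.ttest_rel(tasks["importance"], tasks["performance"])
print(f"t={t:.2f}, P={p:.3f}")

# Classic IPA quadrants: split each axis at its grand mean.
imp_mid, perf_mid = tasks["importance"].mean(), tasks["performance"].mean()
tasks["quadrant"] = [
    "concentrate here" if imp >= imp_mid and perf < perf_mid
    else "keep up the good work" if imp >= imp_mid
    else "low priority" if perf < perf_mid
    else "possible overkill"
    for imp, perf in zip(tasks["importance"], tasks["performance"])
]
print(tasks[["task", "quadrant"]])
```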
Purpose This study evaluated the validity of student feedback derived from the Medicine Student Experience Questionnaire (MedSEQ), as well as the predictors of students’ satisfaction with the Medicine program.
Methods Data from the MedSEQ administered in the University of New South Wales Medicine program in 2017, 2019, and 2021 were analyzed. Confirmatory factor analysis (CFA) and Cronbach’s α were used to assess the construct validity and reliability of the MedSEQ, respectively. Hierarchical multiple linear regression was used to identify the factors with the greatest impact on students’ overall satisfaction with the program.
Results A total of 1,719 students (34.50%) responded to MedSEQ. CFA showed good fit indices (root mean square error of approximation=0.051; comparative fit index=0.939; chi-square/degrees of freedom=6.429). All factors yielded good (α>0.7) or very good (α>0.8) levels of reliability, except the “online resources” factor, which had acceptable reliability (α=0.687). A multiple linear regression model with only demographic characteristics explained 3.8% of the variance in students’ overall satisfaction, whereas the model adding 8 domains from MedSEQ explained 40%, indicating that 36.2% of the variance was attributable to students’ experience across the 8 domains. Three domains had the strongest impact on overall satisfaction: “being cared for,” “satisfaction with teaching,” and “satisfaction with assessment” (β=0.327, 0.148, 0.148, respectively; all with P<0.001).
Conclusion The MedSEQ has good construct validity and high reliability, reflecting students’ satisfaction with the Medicine program. The key factors impacting students’ satisfaction are the perception of being cared for, quality teaching irrespective of the mode of delivery, and fair assessment tasks that enhance learning.
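The hierarchical regression step reported above (demographics alone explaining 3.8% of the variance, rising to 40% once the MedSEQ domains are added) can be sketched as follows; the variable names and synthetic data are assumptions, and only 3 of the 8 domains are included for brevity.

```python
# Minimal sketch (synthetic data): hierarchical regression, reading the
# increment in R-squared when experience domains are added to demographics.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "age": rng.integers(18, 35, n),
    "female": rng.integers(0, 2, n),
    "being_cared_for": rng.normal(4, 1, n),
    "teaching": rng.normal(4, 1, n),
    "assessment": rng.normal(4, 1, n),
})
# Satisfaction built from the domains, echoing the reported beta pattern.
df["overall_satisfaction"] = (
    0.33 * df["being_cared_for"] + 0.15 * df["teaching"]
    + 0.15 * df["assessment"] + rng.normal(0, 1, n)
)

base = smf.ols("overall_satisfaction ~ age + female", data=df).fit()
full = smf.ols(
    "overall_satisfaction ~ age + female + being_cared_for"
    " + teaching + assessment",
    data=df,
).fit()

# Variance attributable to the experience domains over and above demographics.
print(f"base R^2={base.rsquared:.3f}, full R^2={full.rsquared:.3f}, "
      f"delta={full.rsquared - base.rsquared:.3f}")
```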
This study aimed to compare the knowledge and interpretation ability of ChatGPT, an artificial intelligence (AI) language model, with those of medical students in Korea by administering a parasitology examination to both ChatGPT and medical students. The examination consisted of 79 items and was administered to ChatGPT on January 1, 2023. The results were analyzed in terms of ChatGPT’s overall performance score, its correct answer rate by the items’ knowledge level, and the acceptability of its explanations of the items. ChatGPT’s performance was lower than that of the medical students, and its correct answer rate was not related to the items’ knowledge level. However, there was a relationship between acceptable explanations and correct answers. In conclusion, ChatGPT’s knowledge and interpretation ability for this parasitology examination were not yet comparable to those of medical students in Korea.
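The reported association between acceptable explanations and correct answers is the kind of relationship one could test on a 2×2 table, for example with Fisher’s exact test; the counts below are hypothetical (they merely sum to the 79 items), since the abstract does not report the contingency table.

```python
# Minimal sketch (hypothetical counts): testing whether items with
# acceptable explanations were more likely to be answered correctly.
from scipy.stats import fisher_exact

#                      correct  incorrect
# acceptable expl.        40        8
# unacceptable expl.      10       21
table = [[40, 8], [10, 21]]

odds_ratio, p_value = fisher_exact(table)
print(f"OR={odds_ratio:.2f}, P={p_value:.4f}")
```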