
JEEHP : Journal of Educational Evaluation for Health Professions

J Educ Eval Health Prof. Volume 21; 2024

Research article

Effectiveness of ChatGPT-4o in developing continuing professional development plans for graduate radiographers: a descriptive study
Minh Chau1, Elio Stefan Arruzza2*, Kelly Spuur1

DOI: https://doi.org/10.3352/jeehp.2024.21.34
Published online: November 18, 2024

1Faculty of Science and Health, Charles Sturt University, Bathurst NSW, Australia

2UniSA Allied Health & Human Performance, University of South Australia, Adelaide, SA, Australia

*Corresponding author's email: Elio.Arruzza@unisa.edu.au

Editor: Sun Huh, Hallym University, Korea

• Received: 1 November 2024   • Accepted: 11 November 2024

Purpose
This study evaluates the use of ChatGPT-4o in creating tailored continuing professional development (CPD) plans for radiography students, addressing the challenge of aligning CPD with Medical Radiation Practice Board of Australia (MRPBA) requirements. We hypothesized that ChatGPT-4o could support students in CPD planning while meeting regulatory standards.
Methods
A descriptive, experimental design was used to generate 3 unique CPD plans using ChatGPT-4o, each tailored to a hypothetical graduate radiographer in a different clinical setting. Each plan followed MRPBA guidelines, focusing on computed tomography specialization by the second year. Three MRPBA-registered academics assessed the plans from October 2024 to November 2024 against the criteria of appropriateness, timeliness, relevance, reflection, and completeness. Ratings were analyzed using the Friedman test and the intraclass correlation coefficient (ICC) to measure consistency among evaluators.
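The rater-agreement analysis described above can be sketched as follows. The rating matrix is invented for illustration (it is not the study's data), and the ICC form shown, a two-way model with absolute agreement for a single rater, ICC(2,1), is an assumption, since the abstract does not state which ICC model was used.

```python
import numpy as np
from scipy.stats import friedmanchisquare

# Hypothetical rating matrix: rows = the 5 assessment criteria
# (appropriateness, timeliness, relevance, reflection, completeness),
# columns = the 3 academic raters. Scores are illustrative only.
ratings = np.array([
    [4, 5, 4],
    [3, 4, 3],
    [5, 5, 4],
    [4, 3, 4],
    [3, 4, 5],
], dtype=float)

# Friedman test: do the three raters differ systematically across criteria?
stat, p = friedmanchisquare(ratings[:, 0], ratings[:, 1], ratings[:, 2])
print(f"Friedman chi-square = {stat:.3f}, P = {p:.3f}")

# ICC(2,1) computed from the two-way ANOVA mean squares.
n, k = ratings.shape                       # n criteria, k raters
grand = ratings.mean()
ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()
ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()
ss_total = ((ratings - grand) ** 2).sum()
ms_rows = ss_rows / (n - 1)                # between-criteria mean square
ms_cols = ss_cols / (k - 1)                # between-rater mean square
ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
icc = (ms_rows - ms_err) / (
    ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
)
print(f"ICC(2,1) = {icc:.3f}")
```

A non-significant Friedman P value indicates no systematic rank difference among raters, while the ICC quantifies absolute agreement; the two can disagree, as the study's results illustrate.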
Results
The ChatGPT-4o-generated CPD plans generally adhered to regulatory standards across scenarios. The Friedman test indicated no significant differences among raters (P=0.420, 0.761, and 0.807 for each scenario), suggesting consistent scores within scenarios. However, ICC values were low (–0.96, 0.41, and 0.058 for scenarios 1, 2, and 3), revealing variability among raters, particularly for the timeliness and completeness criteria, and suggesting limitations in ChatGPT-4o's ability to address individualized and context-specific needs.
Conclusion
ChatGPT-4o demonstrates the potential to ease the cognitive demands of CPD planning, offering structured support in CPD development. However, human oversight remains essential to ensure plans are contextually relevant and deeply reflective. Future research should focus on enhancing artificial intelligence’s personalization for CPD evaluation, highlighting ChatGPT-4o’s potential and limitations as a tool in professional education.
