The performance of ChatGPT-4.0o in medical imaging evaluation: a cross-sectional study
Elio Stefan Arruzza, Carla Marie Evangelista, Minh Chau
J Educ Eval Health Prof. 2024;21:29. Published online October 31, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.29
Abstract
This study investigated the performance of ChatGPT-4.0o in evaluating the quality of positioning in radiographic images. Thirty radiographs depicting a variety of knee, elbow, ankle, hand, pelvis, and shoulder projections were produced using anthropomorphic phantoms and uploaded to ChatGPT-4.0o. The model was prompted to identify any positioning errors, justify them, and offer suggestions for improvement. A panel of radiographers assessed the model's solutions against established positioning criteria on a 1–5 grading scale. In only 20% of projections did ChatGPT-4.0o correctly recognize all errors with justifications and offer correct suggestions for improvement. The most common score was 3 (9 cases, 30%), wherein the model recognized at least 1 specific error and provided a correct improvement. The mean score was 2.9. Overall, accuracy was low, with most projections receiving only partially correct solutions. The findings reinforce the importance of robust radiography education and clinical experience.
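As a brief illustration of the scoring summary above, the sketch below tallies a panel's 1–5 grades to report the modal score, its share of cases, and the mean. The score list is hypothetical and is not the study's data; it only mirrors the kind of summary the abstract reports.

# Minimal sketch (hypothetical data, not the study's scores): summarizing
# a panel's 1-5 grades for 30 projections.
from collections import Counter
from statistics import mean

panel_scores = [3, 2, 4, 3, 1, 5, 3, 2, 3, 4, 2, 3, 4, 1, 3,
                2, 4, 3, 2, 5, 3, 2, 4, 1, 3, 2, 4, 3, 2, 4]  # 30 illustrative grades

counts = Counter(panel_scores)
modal_score, modal_n = counts.most_common(1)[0]
print(f"Modal score: {modal_score} ({modal_n} cases, {modal_n / len(panel_scores):.0%})")
print(f"Mean score: {mean(panel_scores):.1f}")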
Citations
Citations to this article as recorded by:
- Conversational LLM Chatbot ChatGPT-4 for Colonoscopy Boston Bowel Preparation Scoring: An Artificial Intelligence-to-Head Concordance Analysis. Raffaele Pellegrino, Alessandro Federico, Antonietta Gerarda Gravina. Diagnostics. 2024;14(22):2537.
- Effectiveness of ChatGPT-4o in developing continuing professional development plans for graduate radiographers: a descriptive study. Minh Chau, Elio Stefan Arruzza, Kelly Spuur. Journal of Educational Evaluation for Health Professions. 2024;21:34.
Effectiveness of ChatGPT-4o in developing continuing professional development plans for graduate radiographers: a descriptive study
Minh Chau, Elio Stefan Arruzza, Kelly Spuur
J Educ Eval Health Prof. 2024;21:34. Published online November 18, 2024
DOI: https://doi.org/10.3352/jeehp.2024.21.34
Abstract
Purpose
This study evaluates the use of ChatGPT-4o in creating tailored continuing professional development (CPD) plans for radiography students, addressing the challenge of aligning CPD with Medical Radiation Practice Board of Australia (MRPBA) requirements. We hypothesized that ChatGPT-4o could support students in CPD planning while meeting regulatory standards.
Methods
A descriptive, experimental design was used to generate 3 unique CPD plans with ChatGPT-4o, each tailored to hypothetical graduate radiographers in varied clinical settings. Each plan followed MRPBA guidelines, focusing on computed tomography specialization by the second year. Three MRPBA-registered academics assessed the plans against the criteria of appropriateness, timeliness, relevance, reflection, and completeness from October 2024 to November 2024. Ratings were analyzed using the Friedman test and the intraclass correlation coefficient (ICC) to measure consistency among evaluators.
Results
ChatGPT-4o-generated CPD plans generally adhered to regulatory standards across scenarios. The Friedman test indicated no significant differences among raters (P=0.420, 0.761, and 0.807 for each scenario), suggesting consistent scores within scenarios. However, ICC values were low (–0.96, 0.41, and 0.058 for scenarios 1, 2, and 3), revealing variability among raters, particularly for the timeliness and completeness criteria, and indicating limitations in ChatGPT-4o's ability to address individualized and context-specific needs.
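To make the agreement statistics above concrete, the sketch below shows how a Friedman test and a two-way random-effects ICC(2,1) could be computed for one scenario's ratings. This is not the authors' analysis code, and the ratings matrix is invented purely for illustration.

import numpy as np
from scipy.stats import friedmanchisquare

# Hypothetical ratings for one scenario: rows = 5 assessment criteria
# (appropriateness, timeliness, relevance, reflection, completeness),
# columns = 3 raters. Illustrative values only.
ratings = np.array([
    [4, 5, 4],
    [3, 4, 3],
    [5, 5, 4],
    [4, 3, 4],
    [3, 4, 5],
], dtype=float)

# Friedman test: do the three raters score the criteria systematically differently?
stat, p = friedmanchisquare(*ratings.T)
print(f"Friedman chi-square = {stat:.3f}, P = {p:.3f}")

def icc2_1(x):
    """ICC(2,1): two-way random effects, absolute agreement, single rater."""
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()    # between-criteria
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()    # between-raters
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols  # residual
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

print(f"ICC(2,1) = {icc2_1(ratings):.3f}")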
Conclusion
ChatGPT-4o demonstrates the potential to ease the cognitive demands of CPD planning, offering structured support in CPD development. However, human oversight remains essential to ensure plans are contextually relevant and deeply reflective. Future research should focus on enhancing artificial intelligence's personalization for CPD evaluation, highlighting ChatGPT-4o's potential and limitations as a tool in professional education.