Comprehensive Summary
This study assesses the ability of ChatGPT 3.5 to answer common questions about pediatric scoliosis. A list of twelve frequently asked questions was compiled under the guidance of three orthopedic surgeons, addressing topics such as the definition and causes of scoliosis, common treatments, and the condition's impact on quality of life. The questions were entered into ChatGPT 3.5 with instructions to respond to each in a single comprehensive paragraph. Responses were evaluated for accuracy using DISCERN and for readability using WordCalc, which integrates multiple readability scales. Additionally, the Mika et al. scale, a newly validated measure for ChatGPT responses, was used to assign each response a score from 1 (excellent) to 4 (unsatisfactory). Across the twelve responses, the mean DISCERN score was 45.9, placing them in the "average" category, and mean Mika et al. scores ranged from 1.7 to 3. The reading levels of the responses ranged from 11th grade to college graduate. Overall, ChatGPT 3.5 provided responses that were mostly satisfactory and free of error; however, the answers often required clarification and were written at a reading level too high for most patients, indicating the need for further research before these findings can be generalized.
Outcomes and Implications
As AI becomes a more popular source of information in the healthcare setting, it is crucial that the content it provides be accurate and clear for patients. For adolescent scoliosis, ChatGPT 3.5 can present complex medical information in accessible language, making it easier for patients and their families to understand the condition and how to manage it. This approach has the potential to reduce stress and anxiety surrounding the condition and to promote more informed decision-making. However, the findings of this study suggest that further research and refinement are needed before ChatGPT can provide consistently adequate responses without the need for clarification.