Comprehensive Summary
This study by Tekin et al. evaluates the quality of Google and ChatGPT responses to questions about scoliosis. The keyword “scoliosis” was entered into Google Chrome in incognito mode, and the first ten questions in the “People Also Ask” section were recorded. Responses to these questions were collected from both Google and ChatGPT 4.0 and evaluated on two criteria: response quality and source credibility. On response quality, Tekin et al. classified 90% of ChatGPT’s responses as excellent, requiring no clarification, whereas they rated 50% of Google’s responses as unsatisfactory, needing significant clarification. On source credibility, 60% of ChatGPT’s sources were academic and the remainder were commercial websites, while Google drew exclusively on commercial websites. The researchers concluded that ChatGPT outperforms Google in response quality, depth of explanation, and use of credible sources.
Outcomes and Implications
With the widespread adoption of AI, ChatGPT has become a frequently used source of medical information for the public. However, because AI does not always provide reliable information, the quality of its responses to medical questions needs to be evaluated. Although the study indicates that ChatGPT outperforms Google for scoliosis-related questions, information found on the internet should not be fully trusted, and a medical professional remains a more reliable source of information.