Comprehensive Summary
Collin et al. used ChatGPT 3.5, a freely accessible large language model chatbot, to generate the 10 most commonly asked questions about prostate cancer. They compared these questions against the top 25 worldwide Google search queries for the 2022 calendar year, obtained through Google Trends. A board-certified urologist then evaluated ChatGPT's responses for understandability and actionability using the Patient Education Materials Assessment Tool (PEMAT), as well as for overall quality. The study found that the questions ChatGPT generates are representative of what a patient would search for on Google. Responses achieved mean PEMAT scores of 91.7% for understandability and 76% for actionability. ChatGPT may offer more accurate healthcare advice than web-based sources such as YouTube, TikTok, and Instagram, though its referencing is weak and its training data may not be current.
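For readers who want a concrete sense of the data-collection workflow, the sketch below shows how the study's two inputs might be gathered in Python. It is a minimal reconstruction under stated assumptions, not the authors' published code: the prompt wording, the gpt-3.5-turbo model name, and the pytrends parameters are illustrative guesses, and the PEMAT helper simply shows how a percent score is conventionally derived (agreed items divided by applicable items).

```python
# Hypothetical reconstruction of the study's workflow; prompt wording,
# model name, and Google Trends parameters are assumptions.
from openai import OpenAI                 # pip install openai
from pytrends.request import TrendReq     # pip install pytrends

# Step 1: ask ChatGPT 3.5 for the ten most commonly asked
# prostate cancer questions (illustrative prompt).
client = OpenAI()  # reads OPENAI_API_KEY from the environment
resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "List the 10 most commonly asked questions "
                   "about prostate cancer.",
    }],
)
chatgpt_questions = resp.choices[0].message.content
print(chatgpt_questions)

# Step 2: pull worldwide related search queries for "prostate cancer"
# over the 2022 calendar year from Google Trends.
pytrends = TrendReq()
pytrends.build_payload(
    ["prostate cancer"],
    timeframe="2022-01-01 2022-12-31",
    geo="",  # empty string = worldwide
)
related = pytrends.related_queries()["prostate cancer"]["top"]
print(related.head(25))  # cf. the study's top 25 search queries

# Step 3 (assumed convention): a PEMAT percent score is the share of
# applicable items rated "agree"; e.g. 11 of 12 items -> 91.7%.
def pemat_percent(agree: int, applicable: int) -> float:
    return 100 * agree / applicable

print(f"{pemat_percent(11, 12):.1f}%")  # 91.7%
```

Averaging such per-response percent scores across all evaluated answers would yield the mean understandability and actionability figures the study reports.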
Outcomes and Implications
AI shows promise for educating patients after a prostate cancer diagnosis, particularly because patients often turn to web-based information to self-educate about health problems. Patients also tend to trust AI chatbots, whose responses are often difficult to distinguish from those of human providers. Still, the limitations of the underlying ChatGPT large language model, especially the risk of misinformation, must be considered, and further evaluation and optimization for healthcare use cases are needed. Notably, the study did not capture the dynamic dialogue of an in-person visit, during which a patient would likely ask follow-up questions.