Comprehensive Summary
This project tested whether ChatGPT-3.5 could turn complex prostate cancer pathology reports into versions that are easier for patients to understand. Researchers created 50 sample reports (25 biopsy, 25 surgery) and asked 9 urologists from 5 countries to evaluate them. They also gathered feedback from 41 patients (82% of those approached).

The results showed that the AI-generated reports were much easier to read. On the Flesch Reading Ease scale (which ranges from 0 to 100, where higher means easier), biopsy reports improved from 31.2 ± 14.6 to 49 ± 5 (p < 0.001), and surgery reports improved from 41.4 ± 16.8 to 49.6 ± 3.4 (p = 0.02). However, the Flesch-Kincaid Grade Level, which estimates the U.S. school grade needed to understand the text, was slightly higher for AI reports (10.5 ± 0.7 vs. 8.7 ± 2.2, p < 0.001), meaning the content remained somewhat advanced and still above recommended reading levels (AMA: ~6th grade; CDC: ≤8th grade).

Feedback was positive overall. Surgeons said the AI-generated reports used patient-friendly language (77% for biopsy; 95% for surgery) and were well-structured (69%/96%). They found the information accurate (92% for biopsy; 80% for surgery) and said they would consider using such tools in practice (95%/99%). Patients shared similar views: 90% found the AI reports easy to follow, 88% said they clearly explained the diagnosis, 86% felt they clarified the cancer’s extent, 83% understood the Gleason grade, and 76% said they would like to receive AI-generated reports in their own care.
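For readers curious how the two readability scores are computed, a minimal sketch follows. The Flesch Reading Ease and Flesch-Kincaid Grade Level formulas are standard; the syllable counter here is a rough vowel-group heuristic (an assumption for illustration, not the exact method used in the study, which likely relied on dedicated readability software).

```python
import re

def count_syllables(word):
    # Rough heuristic: count groups of consecutive vowels,
    # then drop a trailing silent "e" (approximation only).
    word = word.lower()
    groups = re.findall(r"[aeiouy]+", word)
    n = len(groups)
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def readability(text):
    # Split into sentences and words, then apply the standard formulas.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    words_per_sentence = len(words) / len(sentences)
    syllables_per_word = syllables / len(words)
    # Flesch Reading Ease: higher = easier (roughly 0-100 for normal prose).
    fre = 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word
    # Flesch-Kincaid Grade Level: estimated U.S. school grade.
    fkgl = 0.39 * words_per_sentence + 11.8 * syllables_per_word - 15.59
    return fre, fkgl
```

In practice this captures why pathology jargon scores poorly: long multi-syllable terms like "adenocarcinoma" drive the syllables-per-word ratio up, lowering the ease score and raising the grade level.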
Outcomes and Implications
This study suggests that AI tools like ChatGPT could help bridge the communication gap between patients and healthcare professionals by turning complex medical reports into clear, easy-to-understand summaries. For the public, this means patients may finally grasp their diagnoses and treatment details without feeling overwhelmed: 90% of participants found the AI-generated reports easy to follow, and 88% said they better understood their condition. For medical professionals, the technology could support clearer patient discussions and reduce time spent explaining complex terms, with nearly all surgeons (95–99%) saying they would consider using such tools in practice. From a public health standpoint, this innovation could help address the widespread issue of low health literacy by making vital medical information more accessible and empowering people to participate actively in their care. However, the study also emphasizes that AI should assist, not replace, human expertise, with clinicians continuing to ensure accuracy, context, and compassion in every patient interaction.