Comprehensive Summary
This study evaluated how effectively ChatGPT can summarize radiology reports for patients in a way that is clear and understandable. Thirty radiology reports spanning a wide range of imaging modalities were given to ChatGPT with a prompt to summarize each report in a concise, patient-friendly manner, balancing medical language and layman's terms. The generated summaries were then given to four radiologists, who assessed how well they represented the original findings, and to patients, who rated their satisfaction and comprehension compared with the original reports. In the radiologist survey, 80% of the generated summaries were judged to represent the original findings well, and 90% struck a solid balance between medical verbiage and patient-centric language; however, 12% of the summaries overemphasized a finding, while 18% underemphasized one. In the patient survey, confidence in understanding rose from 26% for the original reports to 98% for the ChatGPT-generated summaries, and satisfaction with the medical terminology used rose from 8% to 91%. Satisfaction with the time required for comprehension also improved markedly, from 23% to 97%. Overall, both patient and radiologist satisfaction with the generated summaries was very high, though with some shortcomings in the emphasis given to certain findings.
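As an illustration of the prompt-based workflow described above, the minimal sketch below shows how such a summarization request might be issued with the OpenAI Python client. The model name, prompt wording, and function name are assumptions for illustration; the study only reports that ChatGPT was prompted to summarize each report concisely and in patient-friendly terms.

```python
# Minimal sketch of a prompt-based report summarization step.
# The exact model version and prompt wording used in the study are not
# specified; the values below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "Summarize the following radiology report in a concise, "
    "patient-friendly manner, balancing medical language and "
    "layman's terms:\n\n"
)

def summarize_report(report_text: str) -> str:
    """Return a patient-friendly summary of a single radiology report."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumption; the study only says "ChatGPT"
        messages=[{"role": "user", "content": PROMPT + report_text}],
    )
    return response.choices[0].message.content
```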
Outcomes and Implications
Patient comprehension of medical terminology and reports has long been a challenge, as striking the right balance between professional language and patient understanding is difficult for many medical professionals. This study suggests that using ChatGPT to summarize medical reports can improve patient satisfaction while largely preserving an accurate representation of the findings. Further research is underway to evaluate accuracy when ChatGPT is given patients' own scans or reports, which could provide yet another way to improve the patient experience.