Comprehensive Summary
This study examines how three types of literacy (digital, AI, and scientific) shape people’s trust in medical artificial intelligence and how that trust carries over to physicians and hospitals. Researchers surveyed over a thousand university students in China to map the relationships among these factors. Students with higher digital literacy tended to trust medical AI more, likely because they were more comfortable with technology in general. By contrast, those with higher AI literacy, that is, a deeper understanding of how AI actually works, were more skeptical and reported less trust in medical AI systems. The findings also showed that trust in AI does not transfer directly to hospitals. Instead, it flows first from AI to physicians and then from physicians to hospitals, suggesting that doctors serve as the main link between patients and healthcare institutions when AI is introduced into care. Scientific literacy played a key moderating role: participants with higher scientific literacy showed weaker trust transfer from AI to doctors and hospitals, indicating that more scientifically minded individuals tend to evaluate technology and authority figures critically rather than extending trust automatically.
Outcomes and Implications
These results indicate that promoting trust in medical AI requires more than raising public awareness or knowledge. Greater understanding of AI can produce caution and critical thinking rather than uncritical acceptance, underscoring the need to balance education with transparency. Healthcare organizations and policymakers should help patients understand both the benefits and the limitations of AI in medicine, and communication strategies should be tailored to different audiences: straightforward explanations for the general public, and more detailed, evidence-based discussions for scientifically literate groups. Because doctors are the crucial link between patients’ trust in AI and their trust in hospitals, physicians need training and support to explain how AI assists in diagnosis and treatment. Finally, AI developers should design tools that are transparent, interpretable, and easy to explain, reinforcing trust in both the technology and the medical professionals who use it.