Comprehensive Summary
This paper studies how to create a legal framework that enforces the ethical use of AI in public health. The authors first conducted a literature review of international laws and treaties, such as the WHO (World Health Organization) guidelines and the GDPR (General Data Protection Regulation), to identify potential gaps in AI governance. They found that, overall, the laws surrounding AI in public health are inflexible, which often results in cultural bias in AI models. The authors propose a new approach to regulating AI policy built on three main pillars: ethical accountability, regulatory adaptability, and transparency. Under this approach, AI models must be audited regularly for bias and must incorporate data from a variety of communities to ensure that results are accurate for all populations. Additionally, the authors argue that the current "black box" problem, in which physicians cannot tell why an AI model produces a given result, needs to change.
Outcomes and Implications
This research matters because AI is playing an increasingly important role in patient care, yet no clear laws are in place to ensure that AI models are unbiased and behave ethically. Without more flexible laws, AI could exacerbate health disparities around the world. This paper highlights the importance of creating laws that enforce transparency while also allowing international data sharing, which could improve the early detection of diseases in certain populations.