Comprehensive Summary
Thakker et al. discuss the ethical deployment of artificial intelligence (AI) in low-resource settings. They present a case in which a primary care physician at a rural clinic considers using an AI tool to assess an irregular skin lesion on the back of a 78-year-old woman. The AI device is known to have been trained on biased datasets that underrepresent diverse skin tones, and the central question is whether such tools should be implemented despite these known limitations. The authors note that AI can improve early detection and triage, preventing disease progression. For example, DermaSensor, an FDA-cleared AI tool for primary care, improved both management decisions and diagnostic sensitivity by 10%. However, such tools are trained on datasets of clinical images; if those datasets lack diversity, the model will underperform for patients with underrepresented skin types, potentially leading to higher rates of misdiagnosis and delayed treatment.
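To make the underperformance mechanism concrete, the following is a minimal sketch (not from the article, which contains no code) of how a subgroup audit might surface this kind of bias: computing a lesion classifier's sensitivity separately by skin-tone group. The Fitzpatrick groupings, the data, and the function are all hypothetical illustrations.

```python
from collections import defaultdict

def sensitivity_by_subgroup(records):
    """Compute per-subgroup sensitivity (true-positive rate) for a
    lesion classifier. Each record is (subgroup, true_label, prediction),
    where labels are 1 = malignant, 0 = benign."""
    tp = defaultdict(int)  # true positives per subgroup
    fn = defaultdict(int)  # false negatives (missed malignancies) per subgroup
    for subgroup, truth, pred in records:
        if truth == 1:
            if pred == 1:
                tp[subgroup] += 1
            else:
                fn[subgroup] += 1
    return {g: tp[g] / (tp[g] + fn[g]) for g in set(tp) | set(fn)}

# Hypothetical audit data: (Fitzpatrick type, true label, model prediction).
records = [
    ("I-II", 1, 1), ("I-II", 1, 1), ("I-II", 1, 1), ("I-II", 1, 0),
    ("V-VI", 1, 1), ("V-VI", 1, 0), ("V-VI", 1, 0), ("V-VI", 1, 0),
]
for group, sens in sorted(sensitivity_by_subgroup(records).items()):
    print(f"Fitzpatrick {group}: sensitivity = {sens:.2f}")
# A gap such as 0.75 vs. 0.25 is exactly the underperformance the
# authors warn about for skin tones underrepresented in training data.
```

A stratified check like this, run before deployment, is one way the limitations the authors describe could be quantified rather than assumed.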
Outcomes and Implications
The authors' discussion of AI use in low-resource settings addresses the need to close gaps in access to specialized care without amplifying diagnostic bias. Devices like DermaSensor can improve early detection and diagnosis of skin lesions. However, the FDA restricts their use to physicians, which excludes patients in the 65% of rural regions facing primary care physician shortages. The authors suggest strategies for successful future implementation of these AI tools: developing context-specific AI trained on diverse data, using the tool to support rather than replace physician judgment, establishing clear regulatory frameworks, and investing in capacity building, such as training local health workers in AI-assisted diagnostics.