Comprehensive Summary
This landmark study introduces PanDerm, the first multimodal dermatology foundation model, pretrained through self-supervised learning on more than 2 million clinical dermatologic images spanning four imaging modalities (dermatopathology, dermoscopy, total-body photography (TBP), and clinical photographs) from 11 international institutions. PanDerm was benchmarked across 28 downstream tasks, including melanoma screening, lesion segmentation, metastasis prediction, risk stratification, and phenotype assessment, and achieved state-of-the-art performance while using as little as 10% of the labeled data required by existing models. In three independent reader studies, PanDerm outperformed clinicians by 10.2% in early-stage melanoma detection and improved dermatologists' diagnostic accuracy by 11% on dermoscopic images. Notably, it enhanced general healthcare providers' differential diagnosis accuracy by 16.5% across 128 skin conditions.
Outcomes and Implications
PanDerm marks a significant advance toward general-purpose AI systems in medicine, demonstrating how a multimodal, self-supervised foundation model can unify diverse clinical workflows in dermatology, from whole-body lesion screening to microscopic pathology interpretation. Its capacity to integrate heterogeneous imaging modalities supports longitudinal disease tracking, risk prediction, and prognostic analysis, and it significantly outperformed task-specific models, and even human experts, in certain areas. Clinically, PanDerm's high data efficiency and generalization ability promise to reduce dependence on extensive expert labeling, accelerate diagnostic decision-making, and expand access to dermatologic expertise globally. This model sets a precedent for specialty-specific foundation models, potentially transforming medical AI by enabling holistic, multimodal decision-support systems that adapt to diverse clinical contexts while preserving accuracy and fairness across populations.