Comprehensive Summary
Slivnick et al. designed and externally validated a deep learning model that detects cardiac amyloidosis (CA) from a single apical four-chamber echocardiographic video clip. An ensemble of five 3D convolutional neural networks was trained on 2,612 cases (52% CA) drawn from a multisite, multiethnic dataset spanning the Mayo Clinic system and collaborating institutions, and externally validated across 18 global centers (n = 2,719; 597 CA and 2,122 controls). After exclusion of uncertain outputs (~13%), the model achieved an AUROC of 0.93 with 85% sensitivity and 93% specificity, and it maintained similar accuracy across amyloidosis subtypes (AL 84%, ATTRwt 85%, ATTRv 86%). In subgroup testing, the AUROC remained high among patients referred for technetium pyrophosphate scintigraphy (0.86) and in age-, sex-, and wall-thickness-matched cohorts (0.92). Compared with established screening scores, the AI model outperformed both the transthyretin CA score (AUROC 0.73) and the increased-wall-thickness score (0.80), demonstrating superior calibration and, on decision-curve analysis, greater clinical utility for identifying patients who require confirmatory imaging.
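The evaluation pipeline implied here — averaging the outputs of an ensemble of video classifiers, abstaining on an uncertain middle band, and comparing models by net benefit across threshold probabilities — can be sketched compactly. The snippet below is an illustrative reconstruction on simulated data, not the authors' code: the abstention band (0.4–0.6), the decision threshold, and the simulated per-model probabilities are all assumptions made for the example.

```python
# Illustrative sketch only: ensemble averaging, uncertainty-based abstention,
# and decision-curve net benefit, on simulated data. Not the published model.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Simulated CA probabilities from a 5-member ensemble for n patients
# (stand-ins for the five 3D-CNN outputs described in the study).
n = 1000
y = rng.integers(0, 2, size=n)                        # 1 = cardiac amyloidosis
logits = (2.0 * y - 1.0) + rng.normal(0, 1.2, size=(5, n))
probs = 1.0 / (1.0 + np.exp(-logits))                 # per-model probabilities

p_mean = probs.mean(axis=0)                           # ensemble average

# Abstain on an "uncertain" middle band (assumed mechanism; the paper reports
# excluding ~13% of uncertain outputs, but its exact rule may differ).
low, high = 0.40, 0.60
confident = (p_mean < low) | (p_mean > high)
print(f"abstained on {100 * (~confident).mean():.1f}% of cases")

# Report metrics only on the confident subset, mirroring the paper's setup.
y_c, p_c = y[confident], p_mean[confident]
pred = (p_c > 0.5).astype(int)
sens = (pred[y_c == 1] == 1).mean()
spec = (pred[y_c == 0] == 0).mean()
print(f"AUROC {roc_auc_score(y_c, p_c):.2f}  sens {sens:.0%}  spec {spec:.0%}")

# Decision-curve analysis: net benefit at threshold probability pt,
#   NB(pt) = TP/N - (FP/N) * pt / (1 - pt)
for pt in (0.1, 0.2, 0.3):
    flag = p_c >= pt
    tp = (flag & (y_c == 1)).sum()
    fp = (flag & (y_c == 0)).sum()
    nb = tp / len(y_c) - (fp / len(y_c)) * pt / (1 - pt)
    print(f"net benefit at pt={pt:.1f}: {nb:.3f}")
```

The net-benefit quantity is the standard decision-curve measure: at a given threshold probability, a model with higher net benefit identifies more true CA cases per unnecessary confirmatory scan, which is the basis for the comparison against the transthyretin CA and increased-wall-thickness scores.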
Outcomes and Implications
This study demonstrates that a single-view, video-based AI model can accurately and reproducibly identify cardiac amyloidosis without manual measurements, potentially transforming initial echocardiographic screening. The approach offers rapid, generalizable performance across diverse patient demographics, reducing diagnostic delays and unnecessary testing. Integration into routine echocardiographic workflows could streamline triage, improve early detection, and widen access to disease-modifying therapy. However, prospective implementation studies and real-world validation remain essential to evaluate workflow integration, bias mitigation, and clinical outcomes before broad adoption.