Comprehensive Summary
This study, presented by Bhuiyan and colleagues, investigates whether machine learning and deep learning models can accurately predict dengue fever at an early stage using only nonclinical, symptom-based data in the Bangladeshi population. The authors conducted a comparative analysis of 13 machine learning and deep learning models, using a dataset of 500 patient records collected during the 2024 dengue outbreak season, each containing 22 symptom-based features validated by a licensed physician. The models included tree-based, linear, and instance-based classifiers, along with a custom-built artificial neural network (ANN), with performance evaluated using accuracy, precision, recall, F1 score, AUC, and ROC curves. The results demonstrated that the custom hyperparameter-tuned ANN achieved the highest performance, with a testing accuracy of 97.5%, outperforming all other models, including strong tree-based methods such as random forest and extra trees. Explainable artificial intelligence techniques, including SHAP, LIME, and integrated gradients, consistently identified key predictive symptoms such as retro-ocular pain, lower neck or upper chest pain, swollen eyelids, headache, muscle or joint pain, and nausea. Feature analyses revealed age-dependent effects, with certain symptoms being more predictive in younger patients. In the discussion, the authors emphasize the value of symptom-based, explainable models for early dengue detection in resource-limited settings and acknowledge limitations related to dataset size and geographic scope.
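For readers who want a concrete sense of the comparative workflow described above, the following minimal Python sketch illustrates the general pattern: fitting several symptom-based classifiers, reporting accuracy, precision, recall, F1, and AUC on a held-out test set, and inspecting which features drive predictions. It is not the authors' code. The data are a synthetic stand-in for the 500-record, 22-feature dataset, the model choices and hyperparameters are illustrative assumptions, and permutation importance is used as a lightweight, model-agnostic substitute for the SHAP, LIME, and integrated-gradients analyses reported in the study.

```python
# Minimal sketch (not the authors' pipeline): compare symptom-based classifiers
# on synthetic data shaped like the study's dataset and report the metrics
# named in the summary. All names and hyperparameters are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)
from sklearn.inspection import permutation_importance

# Synthetic placeholder: 500 records, 22 symptom-style features.
X, y = make_classification(n_samples=500, n_features=22, n_informative=8,
                           random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

models = {
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=42),
    "extra_trees": ExtraTreesClassifier(n_estimators=200, random_state=42),
    "logistic_regression": LogisticRegression(max_iter=1000),
    "knn": KNeighborsClassifier(n_neighbors=5),
    # Stand-in for the custom ANN; the paper's architecture is not reproduced here.
    "mlp": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000,
                         random_state=42),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    proba = model.predict_proba(X_test)[:, 1]
    print(f"{name}: acc={accuracy_score(y_test, pred):.3f} "
          f"prec={precision_score(y_test, pred):.3f} "
          f"rec={recall_score(y_test, pred):.3f} "
          f"f1={f1_score(y_test, pred):.3f} "
          f"auc={roc_auc_score(y_test, proba):.3f}")

# Permutation importance as a simple proxy for the explainability analyses
# (SHAP, LIME, integrated gradients) described in the study.
forest = models["random_forest"]
imp = permutation_importance(forest, X_test, y_test, n_repeats=20,
                             random_state=42)
top = np.argsort(imp.importances_mean)[::-1][:6]
print("Top feature indices by permutation importance:", top)
```

In a real replication, the synthetic features would be replaced by the physician-validated symptom variables, and a dedicated explainability library would be applied to the tuned ANN rather than the permutation-importance shortcut shown here.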
Outcomes and Implications
Dengue fever remains a major public health threat in tropical regions, and early diagnosis is critical for preventing severe disease progression and reducing healthcare burden. By demonstrating that dengue can be predicted accurately before laboratory testing, the study addresses a major gap in current diagnostic workflows, especially in low-resource and high-burden settings such as Bangladesh. Clinically, the findings suggest that symptom-based machine learning models could support frontline clinicians and public health workers in identifying high-risk patients earlier, enabling faster triage, isolation, and monitoring. The use of explainable AI strengthens clinical relevance by clarifying which symptoms drive predictions, improving trust and interpretability for healthcare providers. While the model is not intended to replace clinical judgment or laboratory confirmation, it could serve as a valuable decision-support or screening tool, particularly during outbreak surges. The authors note that broader clinical implementation would require validation on larger, multi-center datasets and potential integration with clinical and environmental data. Although no specific timeline for deployment is provided, the study implies that, with further validation and regulatory oversight, such tools could be implemented in the near to medium term as part of digital public health surveillance and early-warning systems.