Comprehensive Summary
The aim of the study was to design a more efficient and accurate deep-learning method for brain tumor segmentation and prediction of patient recovery, tasks essential for diagnosis and treatment monitoring. To this end, a context boosting framework (CBF) was introduced to classify magnetic resonance imaging (MRI) regions as tumorous more accurately and to refine tumor boundary precision. A custom loss function, Log Cosh Focal Tversky (LCFT), which combines Focal Tversky loss with Log Cosh Dice loss, was employed to minimize the effect of noise and improve model learning. The 4-staged 2D-VNet++, adapted from an architecture originally designed for 3D medical image segmentation, was trained on 20 cases drawn from a brain tumor MRI collection comprising 259 high-grade glioma and 110 low-grade glioma cases, using a conventional Dice loss. The model achieved a Dice score of 99.287, a Jaccard similarity index of 99.642, and a Tversky index of 99.743. Compared with current techniques such as the Attention ResUNet with Guided Decoder (ARU-GD), MultiResUNet, and 2D-UNet, these findings indicate that the 4-staged 2D-VNet++ surpasses modern tools. The model is also unique in that the custom CBF can adjust network depth to optimize outcomes. Limitations include the model's reliance on high-end GPUs and the small training dataset and batch sizes employed.
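The summary names the components of the LCFT loss but not how they are combined. The sketch below, written in NumPy for concreteness, shows one plausible formulation: the exact Tversky weights (`alpha`, `beta`), focal exponent (`gamma`), and the equal-weight sum in `lcft_loss` are assumptions for illustration, not the paper's reported settings.

```python
import numpy as np

def tversky_index(pred, target, alpha=0.7, beta=0.3, eps=1e-7):
    # Tversky index generalizes Dice by weighting false negatives
    # (alpha) and false positives (beta) differently.
    tp = np.sum(pred * target)
    fp = np.sum(pred * (1 - target))
    fn = np.sum((1 - pred) * target)
    return (tp + eps) / (tp + alpha * fn + beta * fp + eps)

def focal_tversky_loss(pred, target, alpha=0.7, beta=0.3, gamma=0.75):
    # Focal Tversky loss raises (1 - TI) to gamma to focus
    # learning on hard, small tumor regions.
    ti = tversky_index(pred, target, alpha, beta)
    return (1.0 - ti) ** gamma

def log_cosh_dice_loss(pred, target, eps=1e-7):
    # Wrapping Dice loss in log-cosh smooths the loss surface
    # and dampens the influence of noisy outliers.
    inter = np.sum(pred * target)
    dice = (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)
    return np.log(np.cosh(1.0 - dice))

def lcft_loss(pred, target, w=0.5):
    # Hypothetical combination: the summary does not state the
    # weighting, so an equal-weight sum is assumed here.
    return w * focal_tversky_loss(pred, target) + \
        (1 - w) * log_cosh_dice_loss(pred, target)
```

For a perfect prediction both terms vanish, so the loss is zero; as overlap with the ground-truth mask decreases, both the focal and log-cosh terms grow, which is the behavior the summary attributes to LCFT (noise suppression plus emphasis on hard examples).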
Outcomes and Implications
Brain tumors, abnormal cell growths in or near the brain, account for 85-90% of primary central nervous system malignancies, and primary brain tumors carry a low 5-year survival rate of 36%. Tumor segmentation is challenging because gliomas vary widely in size and intensity, yet it provides information imperative for diagnosis and treatment. Convolutional neural network models, as utilized in this study, are beneficial in that they self-learn hierarchical features such as edge and texture details, are robust to noise and variation, and scale with data. While the results of the 4-staged 2D-VNet++ are promising, further evaluation on a more diverse dataset, assessment of model robustness, comparison with radiologists' segmentations, and testing within radiology workflows are essential before clinical application.