Comprehensive Summary
Brain tumor segmentation in medical imaging requires accurate tumor localization for improved diagnostics and treatment planning. Conventional segmentation models struggle with boundary delineation and with generalization across heterogeneous datasets, and data privacy concerns further limit training on large-scale, multi-institutional data. To address these limitations, the authors propose a Hybrid Dual Encoder–Decoder Segmentation Model trained with Federated Learning. The model pairs EfficientNet and Swin Transformer as encoders with a BASNet (Boundary-Aware Segmentation Network) decoder and a MaskFormer decoder. This design leverages hierarchical feature extraction, self-attention mechanisms, and boundary-aware segmentation to improve tumor delineation while reducing total training time. The model achieves a Dice Coefficient of 0.94 and an Intersection over Union (IoU) of 0.87, and it converges in fewer federated rounds, reducing total training time as intended. It also delivers strong boundary delineation, with a Hausdorff Distance (HD95) of 1.61, an Average Symmetric Surface Distance (ASSD) of 1.12, and a Boundary F1 Score (BF1) of 0.91, indicating precise segmentation contours. Evaluations on the Kaggle Mateuszbuda LGG-MRI segmentation dataset, partitioned across multiple federated clients, demonstrate consistently high segmentation performance.
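The overlap metrics reported above (Dice and IoU) can be illustrated with a short sketch. This is a generic NumPy implementation over binary masks, not the authors' evaluation code; function names and the epsilon smoothing term are illustrative assumptions.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    # Dice = 2|A ∩ B| / (|A| + |B|) over binary masks;
    # eps guards against division by zero on empty masks (an assumption).
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou(pred, target, eps=1e-7):
    # IoU = |A ∩ B| / |A ∪ B| over binary masks.
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

# Toy example: predicted and ground-truth masks overlapping in one pixel.
pred = np.array([1, 1, 0, 0], dtype=bool)
target = np.array([0, 1, 1, 0], dtype=bool)
print(dice_coefficient(pred, target))  # ≈ 0.5
print(iou(pred, target))               # ≈ 0.333
```

Dice weights the intersection twice, so it is always at least as large as IoU on the same masks, which is consistent with the reported 0.94 Dice versus 0.87 IoU.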
Outcomes and Implications
The findings show a model with greater efficiency, requiring less training time overall, and a stronger ability to delineate tumor boundaries. They highlight that integrating transformers, lightweight CNNs, and advanced decoders in a federated setup supports enhanced brain tumor segmentation while preserving the privacy of medical data.
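The summary does not specify how client updates are aggregated; as an illustration only, a minimal weighted parameter-averaging step in the style of FedAvg (an assumption about the aggregation scheme, not the authors' stated method) could look like:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    # Weighted average of per-client parameter lists: each client's
    # contribution is proportional to its local dataset size (FedAvg-style).
    total = sum(client_sizes)
    avg = [np.zeros_like(w) for w in client_weights[0]]
    for weights, n in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            avg[i] += (n / total) * w
    return avg

# Toy example: two clients, one single-parameter "layer" each.
# Client B holds 3x as much data, so its value dominates the average.
global_weights = fedavg(
    [[np.array([0.0])], [np.array([4.0])]],
    client_sizes=[1, 3],
)
print(global_weights[0])  # [3.]
```

Raw images never leave the clients in this scheme; only parameter updates are shared, which is what enables training across institutions without pooling patient data.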