Abdominal Organ and Tumor Segmentation with Federated Learning
In the field of healthcare, accurately segmenting abdominal organs and tumors in computed tomography (CT) images is crucial for clinical applications such as computer-aided diagnosis and treatment planning. However, traditional supervised learning methods are hampered by the scarcity of labeled training data and the specialized expertise required to produce accurate annotations.
Researchers have explored the use of partially annotated datasets and federated learning (FL) to overcome these challenges. FL enables multiple institutions to collaboratively train a shared model without centralizing their data. In FL, each client trains a local model on its own data and sends model updates to a central server, which integrates these updates into a global model.
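The round structure described above can be sketched with a toy example. This is a hypothetical, minimal illustration of federated averaging, not the paper's actual training code: each "client" takes one gradient step of a scalar least-squares model on its private data, and the server averages the resulting weights.

```python
# Minimal sketch of federated learning rounds (toy setup, hypothetical):
# each client computes a local update on its own data, and the server
# averages the clients' weights into a new global model.

def local_update(global_w, client_data, lr=0.1):
    """Toy 'training': one gradient step on y = w * x with squared loss.

    Real clients would run many SGD steps on a segmentation network;
    only the resulting weights leave the client, never the data.
    """
    grad = sum(2 * (global_w * x - y) * x for x, y in client_data) / len(client_data)
    return global_w - lr * grad

def server_aggregate(client_ws):
    """Average the clients' updated weights into a new global model."""
    return sum(client_ws) / len(client_ws)

# Two clients with their own (x, y) samples; data stays local.
clients = [
    [(1.0, 2.0), (2.0, 4.0)],  # client A: data follows y = 2x
    [(1.0, 3.0), (2.0, 6.0)],  # client B: data follows y = 3x
]

global_w = 0.0
for _ in range(50):
    updates = [local_update(global_w, data) for data in clients]
    global_w = server_aggregate(updates)

# The global model settles between the two clients' optima (w=2 and w=3).
print(global_w)  # -> 2.5
```

With clients pulling toward different optima, the averaged global model converges to a compromise between them, which already hints at the heterogeneity issues discussed next.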
The Challenges of Data Heterogeneity
Data heterogeneity is a significant challenge for model aggregation in FL. When clients hold non-IID data, naively combining their local models can degrade global performance. In addition, unequal client dataset sizes can bias the global model against tasks represented by less data. Researchers from National Taiwan University, Nagoya University, and NVIDIA Corporation propose a strategy to address data heterogeneity in FL for multi-class organ and tumor segmentation from partially annotated abdominal CT images.
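The dataset-size effect can be made concrete with the standard FedAvg-style weighting rule, in which each client contributes in proportion to its sample count. This is a hedged illustration of that common rule, not the authors' actual aggregation strategy; the sizes and weights below are made up:

```python
# FedAvg-style weighted aggregation (illustrative, not the paper's method):
# each client's weights contribute in proportion to its dataset size, so a
# client with little data is underrepresented in the global model.

def weighted_aggregate(client_weights, client_sizes):
    """Average per-parameter, weighting each client by its dataset size."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    global_weights = [0.0] * n_params
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            global_weights[i] += (size / total) * w
    return global_weights

# A small client (100 scans) vs. a large one (900 scans), with
# deliberately opposed weight vectors to make the imbalance visible.
client_weights = [[1.0, 0.0], [0.0, 1.0]]
client_sizes = [100, 900]
print(weighted_aggregate(client_weights, client_sizes))  # -> [0.1, 0.9]
```

The small client's model is nearly washed out of the average, which is why tasks backed by less data can suffer under naive size-weighted aggregation.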
The primary contributions of their work are:
- Their proposed conditional distillation federated learning (ConDistFL) framework enables the combined multi-task segmentation of abdominal organs and malignancies without the need for additional fully annotated datasets.
- In real-world FL settings, the proposed framework remains stable and accurate even with long local training and few aggregation rounds, reducing data traffic and training time.
- They further test the models on an unseen, fully annotated public dataset, AMOS22. Qualitative and quantitative evaluations show the robustness of their strategy.