FedDBL: Communication and Data Efficient Federated Deep-Broad Learning for Histopathological Tissue Classification
IEEE Transactions on Cybernetics (IF 9.4), Pub Date: 2024-06-26, DOI: 10.1109/tcyb.2024.3403927
Tianpeng Deng, Yanqi Huang, Guoqiang Han, Zhenwei Shi, Jiatai Lin, Qi Dou, Zaiyi Liu, Xiao-jing Guo, C. L. Philip Chen, Chu Han
Histopathological tissue classification is a fundamental task in computational pathology. Deep learning (DL)-based models have achieved superior performance, but centralized training suffers from privacy leakage. Federated learning (FL) can safeguard privacy by keeping training samples local, yet existing FL-based frameworks require a large number of well-annotated training samples and numerous rounds of communication, which hinders their viability in real-world clinical scenarios. In this article, we propose a lightweight and universal FL framework, named federated deep-broad learning (FedDBL), that achieves superior classification performance with limited training samples and only one round of communication. By integrating a pretrained DL feature extractor and a fast, lightweight broad-learning inference system with a classical federated aggregation approach, FedDBL dramatically reduces data dependency and improves communication efficiency. Five-fold cross-validation demonstrates that FedDBL greatly outperforms the competitors under one-round communication and limited training samples, and even achieves performance comparable to frameworks trained with multiple communication rounds. Furthermore, owing to the lightweight design and one-round communication, FedDBL reduces the communication burden from 4.6 GB to only 138.4 KB per client with a ResNet-50 backbone over 50 rounds of training. Extensive experiments also show the scalability of FedDBL in generalizing to unseen datasets, varying client numbers, model personalization, and other image modalities. Since no data or deep models are shared across clients, privacy is preserved and model security is guaranteed, with no risk of model-inversion attacks. Code is available at https://github.com/tianpeng-deng/FedDBL.
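The pipeline described above can be sketched in a few lines: each client passes its images through a frozen pretrained feature extractor, fits a broad-learning-style classifier head in closed form (no gradient descent), and uploads only those small head weights for one round of FedAvg-style aggregation. The sketch below is a toy illustration under stated assumptions, not the authors' implementation: the random projection standing in for a ResNet-50 backbone, the enhancement-node construction, and all dimensions are hypothetical simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(images):
    # Stand-in for a frozen pretrained CNN (e.g. ResNet-50 in the paper);
    # here a fixed random projection plays the role of the deep extractor.
    W = np.random.default_rng(42).normal(size=(images.shape[1], 64))
    return np.tanh(images @ W)

def fit_broad_classifier(features, labels_onehot, n_enh=32, lam=1e-2, seed=1):
    # Broad-learning-style head: random enhancement nodes plus a ridge-
    # regression solve for the output weights (one closed-form step, no SGD).
    # A shared seed keeps the enhancement weights identical across clients,
    # so averaging the output weights is well-defined.
    r = np.random.default_rng(seed)
    W_enh = r.normal(size=(features.shape[1], n_enh))
    A = np.hstack([features, np.tanh(features @ W_enh)])
    W_out = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]),
                            A.T @ labels_onehot)
    return W_out

def fedavg(client_weights, client_sizes):
    # Classical one-round FedAvg: sample-size-weighted average of the
    # lightweight classifier heads (the only thing clients ever upload).
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Toy simulation: 3 clients, 16-dim "images", 2 tissue classes.
clients = [(rng.normal(size=(40, 16)), rng.integers(0, 2, 40)) for _ in range(3)]
local_heads, sizes = [], []
for X, y in clients:
    F = extract_features(X)
    local_heads.append(fit_broad_classifier(F, np.eye(2)[y]))
    sizes.append(len(y))

W_global = fedavg(local_heads, sizes)
print(W_global.shape)  # (96, 2): 64 feature nodes + 32 enhancement nodes
```

The communication saving follows directly from this design: clients exchange only the small output-weight matrix once, instead of full deep-model parameters every round.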

Updated: 2024-08-22