CareFL: Contribution Guided Byzantine-Robust Federated Learning
IEEE Transactions on Information Forensics and Security (IF 6.3) Pub Date: 2024-10-10, DOI: 10.1109/tifs.2024.3477912
Qihao Dong, Shengyuan Yang, Zhiyang Dai, Yansong Gao, Shang Wang, Yuan Cao, Anmin Fu, Willy Susilo

Byzantine-robust federated learning (FL) aims to enable service providers to obtain an accurate global model even in the presence of potentially malicious FL clients. Although considerable progress has been made in developing robust aggregation algorithms for FL in recent years, their efficacy is limited to particular forms of Byzantine attacks, and they remain vulnerable to a broader spectrum of attack vectors. A prevailing issue is their heavy reliance on examining local model gradients: an attacker can manipulate a small, carefully chosen subset of a model's gradients, of which there may be millions, thereby enabling adaptive attacks. Drawing inspiration from the Shapley value, a foundational concept in game theory, we introduce an effective FL scheme named CareFL, designed to be robust against a spectrum of state-of-the-art Byzantine attacks. Unlike approaches that rely on examining gradients, CareFL employs a universal metric, the loss of the local model, which is independent of any specific gradient, to identify potentially malicious clients. Specifically, in each aggregation round, the FL server trains a reference model on a small auxiliary dataset (the auxiliary dataset can be dispensed with at the cost of a slight degradation in defense). It employs the Shapley value to assess the contribution of each client-submitted model to minimizing the global model loss. The server then selects the client models whose Shapley values are closest to that of the reference model for the global model update. To reduce the computational overhead of CareFL when the number of clients grows large, we construct a variant, CareFL+, which groups clients before aggregation. Extensive experiments on the well-established MNIST and CIFAR-10 datasets, covering diverse model architectures including AlexNet, demonstrate that CareFL consistently achieves accuracy comparable to attack-free conditions when faced with five formidable attacks. CareFL and CareFL+ outperform six existing state-of-the-art Byzantine-robust FL aggregation methods, including FLTrust, in both IID and non-IID data distribution settings.
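To make the round structure concrete, the following is a minimal Python sketch of one CareFL-style aggregation step, based solely on the abstract above. The helper names (`average`, `utility`, `carefl_round`), the negative-loss utility function, the `keep_ratio` selection rule, and the exact (exponential-time) Shapley computation are illustrative assumptions, not the authors' implementation.

```python
import itertools
from math import factorial

import numpy as np

def average(models):
    # Coordinate-wise mean of a list of flat parameter vectors.
    return np.mean(models, axis=0)

def utility(coalition, loss_fn):
    # Utility of a coalition: negative loss of its averaged model,
    # evaluated by loss_fn on the server's small auxiliary dataset
    # (empty coalition contributes 0).
    if not coalition:
        return 0.0
    return -loss_fn(average(coalition))

def shapley_values(models, loss_fn):
    # Exact Shapley values; exponential in len(models), so only viable for
    # a handful of clients. The paper's CareFL+ variant reduces this cost
    # by grouping clients, which this sketch does not attempt.
    n = len(models)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for subset in itertools.combinations(others, k):
                coalition = [models[j] for j in subset]
                marginal = (utility(coalition + [models[i]], loss_fn)
                            - utility(coalition, loss_fn))
                # Standard Shapley weight for a coalition of size k.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += weight * marginal
    return phi

def carefl_round(client_models, reference_model, loss_fn, keep_ratio=0.5):
    # Score clients and the server-trained reference model together, then
    # keep the clients whose Shapley values lie closest to the reference's
    # and average only those (keep_ratio is a hypothetical parameter).
    phi = shapley_values(client_models + [reference_model], loss_fn)
    ref_phi, client_phi = phi[-1], phi[:-1]
    order = np.argsort(np.abs(client_phi - ref_phi))
    kept = order[: max(1, int(keep_ratio * len(client_models)))]
    return average([client_models[i] for i in kept])

# Toy usage: 1-D "models" scored with a quadratic loss whose optimum is 0;
# the third client is a stand-in for a poisoned update and gets filtered out.
loss = lambda w: float(np.sum(w ** 2))
clients = [np.array([0.1]), np.array([0.2]), np.array([5.0])]
print(carefl_round(clients, reference_model=np.array([0.0]), loss_fn=loss))
```

Note that the loss-based utility is what lets this selection rule ignore individual gradient coordinates: a client is judged only by how much its model helps reduce the auxiliary-set loss, which is the gradient-independence property the abstract emphasizes.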

Updated: 2024-10-10