A Robust Privacy-Preserving Federated Learning Model Against Model Poisoning Attacks
IEEE Transactions on Information Forensics and Security (IF 6.3). Pub Date: 2024-06-27. DOI: 10.1109/tifs.2024.3420126
Abbas Yazdinejad, Ali Dehghantanha, Hadis Karimipour, Gautam Srivastava, Reza M. Parizi

Although federated learning offers a level of privacy by aggregating user data without direct access, it remains inherently vulnerable to various attacks, including poisoning attacks in which malicious actors submit gradients that reduce model accuracy. In addressing model poisoning attacks, existing defense strategies primarily concentrate on detecting suspicious local gradients over plaintext. However, detecting non-independent and identically distributed encrypted gradients poses significant challenges for existing methods. Moreover, tackling computational complexity and communication overhead becomes crucial in privacy-preserving federated learning, particularly in the context of encrypted gradients. To address these concerns, we propose a robust privacy-preserving federated learning model resilient against model poisoning attacks without sacrificing accuracy. Our approach introduces an internal auditor that evaluates encrypted gradient similarity and distribution to differentiate between benign and malicious gradients, employing a Gaussian Mixture Model and Mahalanobis distance for Byzantine-tolerant aggregation. The proposed model utilizes Additive Homomorphic Encryption to ensure confidentiality while minimizing computational and communication overhead. Our model demonstrates superior performance in accuracy and privacy compared to existing strategies and encryption techniques, such as Fully Homomorphic Encryption and Two-Trapdoor Homomorphic Encryption. The proposed model effectively addresses the challenge of detecting malicious, non-independent and identically distributed encrypted gradients with low computational and communication overhead.
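The auditing step described in the abstract can be illustrated with a short sketch. The following is a minimal, illustrative reconstruction, not the authors' implementation: it fits a Gaussian Mixture Model over flattened per-client gradients, scores each client by squared Mahalanobis distance to the most populated (presumed benign) mixture component, and averages only the clients below a cutoff. Encryption is omitted here; in the paper the auditor performs this analysis over additively homomorphically encrypted gradients. The function name, the two-component choice, and the chi-square cutoff are all assumptions made for the example.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def audit_and_aggregate(client_grads, n_components=2, cutoff=13.28):
    """client_grads: (n_clients, dim) array of flattened local gradients.

    Hypothetical helper: GMM + Mahalanobis-distance filtering, then a
    plain average over the clients that pass the audit.
    """
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="full",
                          random_state=0).fit(client_grads)
    labels = gmm.predict(client_grads)
    # Treat the most populated mixture component as the benign cluster.
    benign = np.bincount(labels, minlength=n_components).argmax()
    mu = gmm.means_[benign]
    cov_inv = np.linalg.pinv(gmm.covariances_[benign])
    diff = client_grads - mu
    # Squared Mahalanobis distance of every client to the benign mean.
    d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)
    keep = d2 <= cutoff  # e.g. a chi-square(dim) tail cutoff; tune per setup
    return client_grads[keep].mean(axis=0)

rng = np.random.default_rng(0)
honest = rng.normal(0.0, 0.1, size=(8, 4))    # 8 benign clients
poisoned = rng.normal(5.0, 0.1, size=(2, 4))  # 2 poisoning clients
print(audit_and_aggregate(np.vstack([honest, poisoned])))  # ~ benign mean
```

The confidentiality side rests on the additive property of the encryption scheme: the server can sum ciphertexts without decrypting any individual update. Below is a minimal demonstration of that property using the third-party python-paillier package (`pip install phe`) as a stand-in; the paper's exact AHE scheme and key-management protocol are not specified here.

```python
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

# Each client encrypts one coordinate of its local gradient (toy values).
client_updates = [0.12, -0.05, 0.20]
ciphertexts = [public_key.encrypt(g) for g in client_updates]

# The server adds ciphertexts without seeing any plaintext gradient.
encrypted_sum = ciphertexts[0]
for c in ciphertexts[1:]:
    encrypted_sum = encrypted_sum + c

# Only the key holder recovers the aggregate (here, the mean).
print(private_key.decrypt(encrypted_sum) / len(client_updates))  # ~0.09
```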

Updated: 2024-08-22