SPFL: A Self-Purified Federated Learning Method Against Poisoning Attacks
IEEE Transactions on Information Forensics and Security (IF 6.3). Pub Date: 2024-06-27. DOI: 10.1109/tifs.2024.3420135
Zizhen Liu, Weiyang He, Chip-Hong Chang, Jing Ye, Huawei Li, Xiaowei Li

While federated learning (FL) is attractive for pooling distributed training data in a privacy-preserving manner, the credibility of participating clients and the non-inspectability of their data pose new security threats, among which poisoning attacks are particularly rampant and hard to defend against without compromising privacy, performance, or other desirable properties. In this paper, we propose a self-purified FL (SPFL) method that enables benign clients to exploit trusted historical features of the locally purified model to supervise the training of the aggregated model in each iteration. The purification is performed by an attention-guided self-knowledge distillation, in which the teacher and student models are optimized locally for task loss, distillation loss, and attention loss simultaneously. SPFL imposes no restriction on the communication protocol or the aggregator at the server. It can work in tandem with any existing secure aggregation algorithms and protocols for augmented security and privacy guarantees. We experimentally demonstrate that SPFL outperforms state-of-the-art FL defenses against poisoning attacks. The attack success rate of the SPFL-trained model remains the lowest among all compared defense methods, even when the poisoning attack is launched in every iteration and all but one of the clients in the system are malicious. Meanwhile, SPFL improves model quality on normal inputs compared to FedAvg, both under attack and in the absence of attack.
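The purification step described in the abstract amounts to a three-term local objective combining task, distillation, and attention losses. Below is a minimal PyTorch sketch of what such an attention-guided self-distillation objective could look like; the attention-map formulation (an attention-transfer-style spatial map), the loss weights alpha and beta, and the softmax temperature are illustrative assumptions, not the paper's exact definitions.

```python
import torch
import torch.nn.functional as F

def attention_map(feature):
    # Collapse the channel dimension of a conv feature map (B, C, H, W)
    # into an L2-normalized spatial attention map (B, H*W).
    a = feature.pow(2).mean(dim=1)           # (B, H, W)
    return F.normalize(a.flatten(1), dim=1)  # (B, H*W)

def self_distillation_loss(student_logits, teacher_logits,
                           student_feats, teacher_feats,
                           labels, temperature=4.0, alpha=1.0, beta=1.0):
    # Task loss: ordinary cross-entropy of the aggregated (student)
    # model on the client's local data.
    task = F.cross_entropy(student_logits, labels)

    # Distillation loss: KL divergence between softened logits of the
    # locally purified (teacher) model and the student model.
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2

    # Attention loss: match intermediate attention maps so the student
    # inherits the teacher's trusted historical feature responses.
    at = sum(
        F.mse_loss(attention_map(s), attention_map(t))
        for s, t in zip(student_feats, teacher_feats)
    )

    # Weighted sum; alpha and beta are hypothetical hyperparameters.
    return task + alpha * kd + beta * at
```

In this reading, each benign client would evaluate the received aggregated model as the student against its own purified model as the teacher, so a poisoned aggregate is pulled back toward the client's trusted feature behavior during local training.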

Updated: 2024-08-22