KDRSFL: A knowledge distillation resistance transfer framework for defending model inversion attacks in split federated learning
Future Generation Computer Systems (IF 6.2), Pub Date: 2024-12-14, DOI: 10.1016/j.future.2024.107637
Renlong Chen, Hui Xia, Kai Wang, Shuo Xu, Rui Zhang

Split Federated Learning (SFL) enables organizations such as healthcare providers to collaborate on improving model performance without sharing private data. However, SFL is susceptible to model inversion (MI) attacks, which pose a serious risk of private data leakage and accuracy loss. This paper therefore proposes an innovative framework, Knowledge Distillation Resistance Transfer for Split Federated Learning (KDRSFL). The KDRSFL framework combines one-shot distillation with attacker-aware adjustment strategies to achieve knowledge distillation-based resistance transfer. KDRSFL enhances the classification accuracy of feature extractors while strengthening their resistance to adversarial attacks. First, a teacher model with strong resistance to MI attacks is constructed, and this resistance is transferred to the client models through knowledge distillation. Second, the client models' defenses are further strengthened through attacker-aware training. Finally, the client models achieve effective defense against MI attacks through local training. Detailed experiments on the CIFAR100 dataset show that KDRSFL defends well against MI attacks: with the VGG11 model it achieves a reconstruction mean squared error (MSE) of 0.058 while maintaining 67.4% model accuracy, a 16% improvement in MI attack error over ResSFL at a cost of only 0.1% in accuracy.
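As the abstract describes it, the core mechanism amounts to a three-term objective: a classification loss, a distillation loss that pulls the client's feature extractor toward an MI-resistant teacher, and an attacker-aware term that penalizes how well a simulated inversion decoder can reconstruct inputs from the client's features. The PyTorch sketch below illustrates that idea only; all module names, architectures, and loss weights (ClientExtractor, InversionDecoder, alpha, beta) are illustrative assumptions, not the authors' actual implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ClientExtractor(nn.Module):
    # Client-side feature extractor -- the component MI attacks target in SFL.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class InversionDecoder(nn.Module):
    # Simulated MI attacker: reconstructs inputs from intermediate features.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

def resistance_transfer_loss(feat, teacher_feat, recon, x, logits, y,
                             alpha=1.0, beta=0.5):
    # Classification + distillation toward the MI-resistant teacher,
    # minus the attacker's reconstruction quality (attacker-aware term).
    task = F.cross_entropy(logits, y)
    distill = F.mse_loss(feat, teacher_feat)   # resistance transfer
    attack = F.mse_loss(recon, x)              # attacker's objective
    return task + alpha * distill - beta * attack

# One illustrative training round on dummy CIFAR100-shaped data.
x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, 100, (8,))
student, teacher, attacker = ClientExtractor(), ClientExtractor(), InversionDecoder()
head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 100))
opt_c = torch.optim.Adam(list(student.parameters()) + list(head.parameters()), lr=1e-3)
opt_a = torch.optim.Adam(attacker.parameters(), lr=1e-3)

# 1) Attacker step: train the inversion decoder against current features.
feat = student(x).detach()
opt_a.zero_grad()
F.mse_loss(attacker(feat), x).backward()
opt_a.step()

# 2) Client step: classify well, match the resistant teacher, and make
#    the features hard to invert (reconstruction error is pushed up).
feat = student(x)
with torch.no_grad():
    teacher_feat = teacher(x)
loss = resistance_transfer_loss(feat, teacher_feat, attacker(feat), x,
                                head(feat), y)
opt_c.zero_grad()
loss.backward()
opt_c.step()

Alternating the attacker step (minimizing reconstruction MSE) with the client step (maximizing it while distilling from the teacher) mirrors the attacker-aware training the abstract describes, though the paper's exact schedule and loss weighting may differ.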

Updated: 2024-12-14