Data and Model Poisoning Backdoor Attacks on Wireless Federated Learning, and the Defense Mechanisms: A Comprehensive Survey
IEEE Communications Surveys & Tutorials (IF 34.4). Pub Date: 2024-02-07. DOI: 10.1109/comst.2024.3361451
Yichen Wan, Youyang Qu, Wei Ni, Yong Xiang, Longxiang Gao, Ekram Hossain

Due to the greatly improved capabilities of devices, massive data, and increasing concerns about data privacy, Federated Learning (FL) has been increasingly considered for applications in wireless communication networks (WCNs). Wireless FL (WFL) is a distributed method of training a global deep learning model in which a large number of participants each train a local model on their own training datasets and then upload the local model updates to a central server. However, in general, the non-independent and identically distributed (non-IID) data of WCNs raises concerns about robustness, as a malicious participant could potentially inject a “backdoor” into the global model by uploading poisoned data or models over the WCN. This could cause the model to misclassify malicious inputs into a specific target class while behaving normally on benign inputs. This survey provides a comprehensive review of the latest backdoor attacks and defense mechanisms. It classifies them according to their targets (data poisoning or model poisoning), the attack phase (local data collection, training, or aggregation), and the defense stage (local training, before aggregation, during aggregation, or after aggregation). The strengths and limitations of existing attack strategies and defense mechanisms are analyzed in detail. Comparisons of existing attack methods and defense designs are carried out, pointing to noteworthy findings, open challenges, and potential future research directions related to the security and privacy of WFL.
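The abstract describes the standard WFL workflow (local training followed by server-side aggregation) and the data-poisoning route to a backdoor. The sketch below is a minimal, self-contained illustration of that setting, not code from the survey: it assumes a simple softmax-regression model, synthetic local datasets, plain FedAvg aggregation, and hypothetical names such as TRIGGER, TARGET_CLASS, and poison(). One malicious client stamps a fixed trigger onto part of its data and relabels those samples as the target class; after aggregation, triggered inputs are pushed toward the target class while clean inputs are largely unaffected.

```python
# Minimal sketch of a data-poisoning backdoor in federated averaging.
# All names and numbers here are illustrative assumptions, not from the survey.
import numpy as np

rng = np.random.default_rng(0)
NUM_CLIENTS, DIM, CLASSES = 5, 20, 3
TRIGGER = np.zeros(DIM)
TRIGGER[-3:] = 5.0            # hypothetical fixed trigger pattern
TARGET_CLASS = 0              # attacker's chosen target label

def make_client_data(n=200):
    # Synthetic local dataset: features and (random) labels.
    X = rng.normal(size=(n, DIM))
    y = rng.integers(0, CLASSES, size=n)
    return X, y

def poison(X, y, rate=0.3):
    # Data poisoning: stamp the trigger onto a fraction of samples
    # and flip their labels to TARGET_CLASS.
    X, y = X.copy(), y.copy()
    idx = rng.choice(len(X), int(rate * len(X)), replace=False)
    X[idx] += TRIGGER
    y[idx] = TARGET_CLASS
    return X, y

def local_train(W, X, y, lr=0.1, epochs=5):
    # A few epochs of full-batch softmax-regression gradient descent;
    # returns the client's updated local model.
    W = W.copy()
    for _ in range(epochs):
        logits = X @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        p[np.arange(len(y)), y] -= 1.0        # softmax cross-entropy gradient
        W -= lr * (X.T @ p) / len(y)
    return W

clients = [make_client_data() for _ in range(NUM_CLIENTS)]
clients[0] = poison(*clients[0])              # client 0 is the malicious participant

W_global = np.zeros((DIM, CLASSES))
for _ in range(20):                           # communication rounds
    local_models = [local_train(W_global, X, y) for X, y in clients]
    W_global = np.mean(local_models, axis=0)  # FedAvg: average the local models

# Backdoor effect: clean inputs behave normally, triggered inputs
# are steered toward TARGET_CLASS.
X_test, _ = make_client_data(100)
clean_preds = (X_test @ W_global).argmax(axis=1)
trig_preds = ((X_test + TRIGGER) @ W_global).argmax(axis=1)
print("target-class rate on clean inputs:    ", (clean_preds == TARGET_CLASS).mean())
print("target-class rate on triggered inputs:", (trig_preds == TARGET_CLASS).mean())
```

In a real WFL deployment the local models would be deep networks and the server could apply aggregation-stage defenses of the kind the survey categorizes (before, during, or after aggregation), which this toy FedAvg loop deliberately omits.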

Updated: 2024-02-07