MODEL: A Model Poisoning Defense Framework for Federated Learning via Truth Discovery
IEEE Transactions on Information Forensics and Security (IF 6.3) · Pub Date: 2024-09-16 · DOI: 10.1109/tifs.2024.3461449 · Minzhe Wu, Bowen Zhao, Yang Xiao, Congjian Deng, Yuan Liu, Ximeng Liu
Federated learning (FL) is an emerging paradigm for privacy-preserving machine learning, in which multiple clients collaborate to produce a global model by training individual models on their local data. However, FL is vulnerable to model poisoning attacks (MPAs), as malicious clients can corrupt the global model by manipulating their local models. Although numerous model poisoning defenses have been studied extensively, they remain vulnerable to newly proposed optimized MPAs and are constrained by the need to presume a certain proportion of malicious clients. To this end, we propose MODEL, a model poisoning defense framework for FL based on truth discovery (TD). A distinctive aspect of MODEL is its ability to defend against both optimized and Byzantine MPAs. Furthermore, it requires no presupposed threshold on the fraction of malicious clients (e.g., fewer than 33% or no more than 50%). Specifically, a TD-based metric and a clustering-based filtering mechanism are proposed to evaluate local models without presupposing such a threshold. MODEL is also effective on non-independent and identically distributed (non-IID) training data. In addition, inspired by game theory, we incorporate a truthful and fair incentive mechanism into MODEL to encourage active client participation while deterring attacks by malicious clients. Extensive comparative experiments demonstrate that MODEL effectively defends against optimized MPAs and outperforms the state of the art.
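To make the core defense idea concrete, below is a minimal Python sketch of how a truth-discovery reliability score combined with two-cluster filtering can discard suspect local updates without presupposing a malicious-client fraction. All names here are hypothetical, and the scoring follows a generic CRH-style truth discovery rule rather than the paper's exact algorithm.

# Illustrative sketch only (not the authors' exact method): truth-discovery
# scoring of client updates plus clustering-based filtering for aggregation.
import numpy as np
from sklearn.cluster import KMeans

def truth_discovery_scores(updates, iters=10, eps=1e-12):
    """Iteratively estimate a 'truth' update and per-client reliability.

    updates: (n_clients, dim) array of flattened local model updates.
    Returns (truth, weights): the estimated truth vector and reliability
    weights; clients far from the truth receive low weight.
    """
    n = updates.shape[0]
    weights = np.full(n, 1.0 / n)                # start from uniform reliability
    for _ in range(iters):
        truth = np.average(updates, axis=0, weights=weights)   # weighted "truth"
        dists = np.linalg.norm(updates - truth, axis=1) + eps  # distance to truth
        weights = -np.log(dists / dists.sum())   # CRH-style weight: low error -> high weight
        weights = weights / weights.sum()
    return truth, weights

def filter_and_aggregate(updates):
    """Cluster reliability scores into two groups and keep the more
    reliable cluster, avoiding any fixed malicious-fraction threshold."""
    _, weights = truth_discovery_scores(updates)
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(weights.reshape(-1, 1))
    # Keep the cluster whose mean reliability is higher.
    good = labels == max((0, 1), key=lambda c: weights[labels == c].mean())
    return updates[good].mean(axis=0)

In practice each client's update would be flattened into a row of updates, and the aggregated vector applied to the global model; the game-theoretic incentive mechanism described in the abstract is out of scope for this sketch.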
Updated: 2024-09-16