Secure fair aggregation based on category grouping in federated learning
Information Fusion (IF 14.7), Pub Date: 2024-12-05, DOI: 10.1016/j.inffus.2024.102838
Jie Zhou, Jinlin Hu, Jiajun Xue, Shengke Zeng
Traditionally, privacy and fairness have been regarded as pursuing different goals in federated learning. Privacy requires data features to be as undetectable as possible, pursuing data ambiguity; fairness, on the other hand, requires the global model to be aggregated fairly according to the features of the data. Consequently, most existing research has addressed these two ethical concepts separately. It is crucial to ensure both privacy and fairness in federated learning systems. In this paper, we propose a federated learning scheme, based on cryptographic techniques, that achieves both privacy and fairness. We divide the users participating in global model training into different groups and split global model aggregation into two steps: intra-group aggregation and inter-group aggregation. During intra-group aggregation, the privacy of users’ gradients within a group is protected by adding masks. During inter-group aggregation, the fairness of federated learning is achieved by a gradient conflict mitigation method. Our approach allows the server to protect the gradients of all users while aggregating them fairly, and it supports users going offline. We conduct several experiments on the MNIST and CIFAR-10 datasets to demonstrate that the proposed scheme is effective.
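As a concrete illustration of the intra-group step, the sketch below shows pairwise additive masking in the style of standard secure-aggregation protocols (e.g., Bonawitz et al.): each pair of users in a group derives a shared mask from a common seed and applies it with opposite signs, so each individual upload looks random while the group sum is exact. All names here (masked_update, the seed table) are illustrative assumptions, and the paper's actual protocol, including its handling of offline users, is more involved.

```python
import itertools
import numpy as np

def masked_update(user_id, gradient, group_ids, seeds):
    """Add pairwise masks that cancel when all group members are summed."""
    masked = gradient.astype(np.float64).copy()
    for other in group_ids:
        if other == user_id:
            continue
        # Both users of a pair derive the same mask from their shared seed.
        rng = np.random.default_rng(seeds[frozenset((user_id, other))])
        mask = rng.standard_normal(gradient.shape)
        # Opposite signs on the two sides make the masks cancel in the sum.
        masked += mask if user_id < other else -mask
    return masked

# Toy run: three users in one group. In practice each pair would agree on
# a secret seed via key exchange; here the seeds are fixed for illustration.
ids = [0, 1, 2]
seeds = {frozenset(pair): 1000 + k
         for k, pair in enumerate(itertools.combinations(ids, 2))}
grads = [np.ones(4) * (i + 1) for i in ids]
masked = [masked_update(i, grads[i], ids, seeds) for i in ids]
# The server sees only masked vectors, yet their sum equals the true sum.
assert np.allclose(sum(masked), sum(grads))
```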
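For the inter-group step, the abstract names a gradient conflict mitigation method without specifying it. A common form is PCGrad-style projection (Yu et al., 2020), sketched below under that assumption: when two group gradients conflict (negative inner product), one is projected onto the normal plane of the other before averaging, so no group's update is dominated by another's opposing direction. This is a hedged illustration, not necessarily the paper's exact method.

```python
import numpy as np

def mitigate_conflicts(group_grads):
    """PCGrad-style conflict mitigation across per-group gradients."""
    adjusted = [g.astype(np.float64).copy() for g in group_grads]
    for i, g_i in enumerate(adjusted):
        for j, g_j in enumerate(group_grads):
            if i == j:
                continue
            dot = float(g_i @ g_j)
            if dot < 0:  # conflicting directions
                # Remove the component of g_i along the conflicting g_j.
                g_i -= (dot / float(g_j @ g_j)) * g_j
    # Average the de-conflicted gradients into the global update.
    return np.mean(adjusted, axis=0)

# Toy usage: two groups whose gradients point in conflicting directions.
g_a = np.array([1.0, 0.0])
g_b = np.array([-0.5, 1.0])
print(mitigate_conflicts([g_a, g_b]))
```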
Updated: 2024-12-05