An Efficient and Multi-Private Key Secure Aggregation Scheme for Federated Learning
IEEE Transactions on Services Computing (IF 5.5), Pub Date: 2024-08-28, DOI: 10.1109/tsc.2024.3451165. Xue Yang, Zifeng Liu, Xiaohu Tang, Rongxing Lu, Bo Liu
In light of privacy breaches in federated learning, secure aggregation protocols, which mainly adopt either homomorphic encryption or threshold secret sharing, have been extensively developed to preserve the privacy of each client's local gradient. Nevertheless, many existing schemes suffer from either weak privacy protection or high computational and communication overheads. Accordingly, in this paper, we propose an efficient, multi-private-key secure aggregation scheme for federated learning. Specifically, we design a multi-private-key secure aggregation algorithm that supports homomorphic addition and offers two important benefits: 1) both the server and each client can freely select their public and private keys without introducing a trusted third party, and 2) the plaintext space is relatively large, making the scheme better suited to deep models. Besides, to handle high-dimensional deep model parameters, we introduce a super-increasing sequence that compresses multi-dimensional data into one dimension, which greatly reduces encryption and decryption time as well as the communication cost of ciphertext transmission. Detailed security analyses show that our proposed scheme achieves semantic security of both the individual local gradients and the aggregated result, while achieving optimal robustness against client collusion. Extensive simulations demonstrate that the accuracy of our scheme is almost the same as the non-private approach, while its efficiency is much better than the state-of-the-art baselines. More importantly, the efficiency advantage of our scheme becomes increasingly prominent as the number of model parameters grows.
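The packing idea described in the abstract can be illustrated with a small sketch. A positional (super-increasing) encoding compresses a vector of quantized gradient components into a single large integer, so one additively homomorphic ciphertext can carry a whole vector; because the encoding is additive, summing packed values is equivalent to packing the element-wise sum. This is a minimal illustration of the general technique only — the paper's exact sequence, quantization, and encryption scheme are not specified here, and the bounds below (`NUM_CLIENTS`, `MAX_COMPONENT`) are assumptions chosen for the example.

```python
# Hedged sketch: pack multiple gradient components into one integer using a
# positional encoding, chosen so that sums across clients never carry between
# digits. Illustrative only; not the paper's exact construction.

NUM_CLIENTS = 5          # assumption: number of clients being aggregated
MAX_COMPONENT = 2**16    # assumption: each quantized component fits in 16 bits

# The base must exceed the largest possible per-dimension sum so that
# adding NUM_CLIENTS packed vectors never overflows a digit.
BASE = NUM_CLIENTS * MAX_COMPONENT + 1

def pack(vector):
    """Compress a multi-dimensional vector of small non-negative ints
    into a single integer (digit i holds component i in base BASE)."""
    value = 0
    for component in reversed(vector):
        value = value * BASE + component
    return value

def unpack(value, dim):
    """Recover the per-dimension values (e.g. aggregated sums) from a
    packed integer by reading off its base-BASE digits."""
    out = []
    for _ in range(dim):
        value, digit = divmod(value, BASE)
        out.append(digit)
    return out

# Because packing is additive (no digit carries, by choice of BASE),
# the sum of packed vectors decodes to the element-wise sum — which is
# exactly what an additively homomorphic scheme would compute under
# encryption, at one ciphertext per vector instead of one per dimension.
grads = [[1000, 2000, 3000], [10, 20, 30], [5, 5, 5]]
packed_sum = sum(pack(g) for g in grads)
print(unpack(packed_sum, 3))  # [1015, 2025, 3035]
```

In a real deployment each client would encrypt its packed value and the server would add ciphertexts; only the final decrypted sum is unpacked, so individual packed gradients are never revealed.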
Updated: 2024-08-28