Shuffle Private Decentralized Convex Optimization
IEEE Transactions on Information Forensics and Security (IF 6.3), Pub Date: 2024-05-24, DOI: 10.1109/tifs.2024.3405183
Lingjie Zhang, Hai Zhang

In this paper, we consider the distributed local stochastic gradient descent (SGD) algorithm obtained by parallelizing multiple devices in the setting of stochastic convex optimization (SCO). In most of the earlier literature, the losses are required to satisfy Lipschitzness and smoothness, and privacy leakage may occur in the computation of gradients. Hence, by incorporating Hölder-smooth losses and the shuffle model of differential privacy (DP) into the study of local SGD over multiple parallel distributed devices, we propose a distributed learning algorithm, local SGD with a sequentially interactive shuffle-private mechanism (Shuffle-DSGD), under an equal intercommunication-interval scheme. We establish the privacy guarantees via advanced composition and a shuffle protocol for vector summation. We also analyze the convergence bound of Shuffle-DSGD and obtain the optimal excess population risk $\mathcal{O}(1/T)$ up to logarithmic factors with gradient complexity $\mathcal{O}(n)$. It turns out that our convergence rate improves on the rate $\mathcal{O}(1/\sqrt{T})$ in existing work, while the gradient complexities coincide. The effectiveness of our algorithms is demonstrated on synthetic and real datasets.
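The abstract does not spell out the algorithm, but the ingredients it names (parallel devices running local SGD, a local randomizer for privacy, and a shuffler that anonymizes updates before aggregation) can be illustrated with a minimal sketch. The code below is a hypothetical least-squares instance, not the paper's Shuffle-DSGD: each device performs several local SGD steps on its shard, clips and perturbs its model update with Gaussian noise, and the server averages the randomly shuffled updates. All hyperparameter values and helper names here are illustrative assumptions.

```python
import numpy as np

def shuffle_private_local_sgd(data, n_rounds=50, local_steps=5, lr=0.1,
                              clip=1.0, noise_std=0.1, seed=0):
    """Illustrative shuffle-model local SGD for least squares.

    data : list of (X, y) shards, one per device.
    Each round: every device runs `local_steps` clipped SGD steps,
    adds Gaussian noise to its update (the local randomizer), the
    shuffler permutes the anonymized updates, and the server averages.
    """
    rng = np.random.default_rng(seed)
    d = data[0][0].shape[1]
    w = np.zeros(d)
    for _ in range(n_rounds):
        updates = []
        for X, y in data:                 # each device in parallel
            w_local = w.copy()
            for _ in range(local_steps):
                i = rng.integers(len(y))
                g = (X[i] @ w_local - y[i]) * X[i]   # stochastic gradient
                norm = np.linalg.norm(g)
                if norm > clip:                      # gradient clipping
                    g *= clip / norm
                w_local -= lr * g
            delta = w_local - w
            delta = delta + rng.normal(0.0, noise_std, d)  # local noise
            updates.append(delta)
        rng.shuffle(updates)              # shuffler: hide update ownership
        w += np.mean(updates, axis=0)     # server aggregates
    return w
```

In the shuffle model, the permutation step is what amplifies the per-device privacy guarantee: the aggregator sees a bag of noisy updates with no device identities attached, which is why weaker local noise can still yield strong central DP after composition.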

Updated: 2024-08-22