Membership Inference Attacks and Defenses in Federated Learning: A Survey
ACM Computing Surveys (IF 23.8). Pub Date: 2024-11-14. DOI: 10.1145/3704633. Li Bai, Haibo Hu, Qingqing Ye, Haoyang Li, Leixia Wang, Jianliang Xu
Federated learning is a decentralized machine learning approach in which clients train models locally and share only model updates to build a global model. This enables low-resource devices to collaboratively train a high-quality model without exposing their raw training data. However, despite sharing only model updates, federated learning still faces several privacy vulnerabilities. One of the key threats is membership inference attacks, which target clients' privacy by determining whether a specific example is part of the training set. These attacks can compromise sensitive information in real-world applications, such as medical diagnoses within a healthcare system. Although membership inference attacks have been studied extensively, a comprehensive and up-to-date survey focused specifically on them in the federated setting is still absent. To fill this gap, we categorize and summarize membership inference attacks and their corresponding defense strategies based on their characteristics in this setting. We introduce a unique taxonomy of existing attack research and provide a systematic overview of various countermeasures. For these studies, we thoroughly analyze the strengths and weaknesses of the different approaches. Finally, we identify and discuss key future research directions for readers interested in advancing the field.
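The two ideas at the heart of the abstract can be illustrated with a minimal toy sketch (hypothetical; not taken from the survey itself): a server averaging client model updates, as in federated-averaging-style training, and an attacker thresholding a model's confidence on an example to guess whether it was a training-set member. The function names and the fixed threshold below are illustrative assumptions.

```python
# Toy sketch of federated aggregation and a confidence-threshold
# membership inference attack. Names and threshold are illustrative.

def fed_avg(client_updates):
    """Average per-client model updates (lists of floats) into one
    global update, as a server would in federated averaging."""
    n = len(client_updates)
    dim = len(client_updates[0])
    return [sum(u[i] for u in client_updates) / n for i in range(dim)]

def infer_membership(confidence, threshold=0.9):
    """Guess 'member' if the model is unusually confident on an example.
    The intuition: models tend to be more confident on examples they
    were trained on than on unseen ones."""
    return confidence >= threshold

# Usage: three clients each send a 2-dimensional update.
global_update = fed_avg([[0.2, 0.4], [0.4, 0.6], [0.6, 0.8]])

# An attacker observing the model's confidence on two examples:
member_guess = infer_membership(0.97)      # high confidence -> likely member
non_member_guess = infer_membership(0.55)  # low confidence -> likely non-member
```

Real attacks surveyed in the paper are more sophisticated (e.g., exploiting per-round gradients or shadow models), but this threshold test captures the basic membership signal that defenses aim to suppress.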