Contribution prediction in federated learning via client behavior evaluation
Future Generation Computer Systems ( IF 6.2 ) Pub Date : 2024-11-30 , DOI: 10.1016/j.future.2024.107639
Ahmed A. Al-Saedi, Veselka Boeva, Emiliano Casalicchio

Federated learning (FL), a decentralized machine learning framework that allows edge devices (i.e., clients) to train a global model while preserving data/client privacy, has become increasingly popular recently. In FL, a shared global model is built by aggregating the updated parameters in a distributed manner. To incentivize data owners to participate in FL, it is essential for service providers to fairly evaluate the contribution of each data owner to the shared model during the learning process. To the best of our knowledge, most existing solutions are resource-demanding and usually run as an additional evaluation procedure, which incurs a high computational cost for large data owners. In this paper, we present simple and effective FL solutions that show how clients’ behavior can be evaluated with respect to reliability during the training process; this is demonstrated for two existing FL models, Cluster Analysis-based Federated Learning (CA-FL) and Group-Personalized FL (GP-FL). In the former model, CA-FL, we assess how frequently each client is selected as a cluster representative and is thereby involved in building the shared model, which can be considered a measure of the reliability of that client’s data. In the latter model, GP-FL, we count how many times each client changes the cluster it belongs to during FL training, which can be interpreted as a measure of unstable, i.e., less reliable, client behavior. We validate our FL approaches on three LEAF datasets and benchmark their performance against two baseline contribution evaluation approaches. The experimental results demonstrate that by applying the two FL models we are able to obtain robust evaluations of clients’ behavior during the training process. These evaluations can be used for further studying, comparing, understanding, and eventually predicting clients’ contributions to the shared global model.
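To make the two behavior measures from the abstract concrete, the following is a minimal sketch (not the authors' implementation; all names such as ClientBehaviorTracker, record_round, and reliability_ranking are hypothetical) of how the CA-FL selection-frequency counter and the GP-FL cluster-change counter could be maintained across training rounds.

```python
from collections import defaultdict


class ClientBehaviorTracker:
    """Hypothetical sketch of the two behavior measures described in the abstract:
      - CA-FL: how often a client is chosen as a cluster representative,
      - GP-FL: how often a client switches clusters between rounds."""

    def __init__(self):
        self.representative_count = defaultdict(int)  # CA-FL measure
        self.cluster_switch_count = defaultdict(int)  # GP-FL measure
        self._last_cluster = {}                       # client id -> cluster id in the previous round
        self._seen = set()                            # all clients observed so far

    def record_round(self, representatives, assignments):
        """representatives: iterable of client ids picked as cluster representatives this round.
        assignments: dict mapping client id -> cluster id for this round."""
        for client in representatives:
            self.representative_count[client] += 1
        for client, cluster in assignments.items():
            self._seen.add(client)
            prev = self._last_cluster.get(client)
            if prev is not None and prev != cluster:
                self.cluster_switch_count[client] += 1
            self._last_cluster[client] = cluster

    def reliability_ranking(self):
        """Rank clients: more representative selections and fewer cluster
        switches are read as more reliable behavior."""
        return sorted(
            self._seen,
            key=lambda c: (-self.representative_count[c], self.cluster_switch_count[c]),
        )


# Toy usage with two rounds of made-up round data
tracker = ClientBehaviorTracker()
tracker.record_round(representatives=["c1"], assignments={"c1": 0, "c2": 1, "c3": 1})
tracker.record_round(representatives=["c2"], assignments={"c1": 0, "c2": 0, "c3": 1})
print(tracker.reliability_ranking())  # e.g. ['c1', 'c2', 'c3']
```

Such per-round counters add negligible overhead to training, which is consistent with the abstract's point that the evaluation runs during the learning process rather than as a separate, resource-demanding procedure.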

Updated: 2024-11-30