FedCOLA: Federated learning with heterogeneous feature concatenation and local acceleration for non-IID data
Future Generation Computer Systems (IF 6.2), Pub Date: 2024-12-09, DOI: 10.1016/j.future.2024.107674. Wu-Chun Chung, Chien-Hu Peng
Federated Learning (FL) is an emerging training framework for machine learning that protects data privacy by never accessing the original data on each client. However, the participating clients in FL have different computing resources. Clients with insufficient resources may be unable to join the training due to hardware limitations, and their restricted computing speeds can also prolong the overall training time. In addition, the non-IID problem arises when the clients' data distributions differ, which degrades training performance. To overcome these problems, this paper proposes FedCOLA, an approach that adapts to the various data distributions among heterogeneous clients. By introducing a feature concatenation and local update mechanism, FedCOLA lets different clients train models with different numbers of layers, reducing both the communication load and the time delay during collaborative training. Combined with an adaptive auxiliary model and a personalized model, FedCOLA further improves testing accuracy under various non-IID data distributions. To evaluate the performance, this paper analyzes the effects of different non-IID data distributions on distinct methods. The empirical results show that under an extremely imbalanced data distribution, FedCOLA improves accuracy by 5%, reduces the number of rounds needed to reach the same accuracy by 57%, and reduces the communication load by 77%. Compared with state-of-the-art methods in a real deployment of heterogeneous clients, FedCOLA cuts the time needed to reach the same accuracy by 70% and the time to complete 200 training rounds by 30%. In conclusion, the proposed FedCOLA not only accommodates various non-IID data distributions but also supports heterogeneous clients training models with different layers, with a significant reduction in time delay and communication load.
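The abstract does not spell out the algorithmic details, but the core idea of pairing a shared model with client-specific layers via feature concatenation can be sketched as follows. This is a minimal, hypothetical PyTorch illustration, not the paper's actual implementation: the `SharedBackbone`, `ClientModel`, and `fedavg` names, the layer sizes, and the choice to aggregate only the shared backbone are all assumptions made for the example. Each client keeps a resource-dependent local branch whose features are concatenated with the shared features, while only the shared backbone is averaged by the server.

```python
import copy
import torch
import torch.nn as nn

class SharedBackbone(nn.Module):
    """Globally shared feature extractor; only these weights are aggregated."""
    def __init__(self, in_dim=32, out_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU())

    def forward(self, x):
        return self.net(x)

class ClientModel(nn.Module):
    """Shared backbone plus an optional client-local branch of varying depth.

    The classifier head consumes the concatenation of shared and local
    features, so clients with different resources train different layers.
    """
    def __init__(self, backbone, local_depth, in_dim=32, local_dim=16, n_classes=10):
        super().__init__()
        self.backbone = backbone
        layers, d = [], in_dim
        for _ in range(local_depth):          # deeper branch on stronger clients
            layers += [nn.Linear(d, local_dim), nn.ReLU()]
            d = local_dim
        self.local_branch = nn.Sequential(*layers) if layers else None
        self.head = nn.Linear(16 + (local_dim if layers else 0), n_classes)

    def forward(self, x):
        feats = [self.backbone(x)]
        if self.local_branch is not None:
            feats.append(self.local_branch(x))  # feature concatenation
        return self.head(torch.cat(feats, dim=1))

def fedavg(state_dicts):
    """Average the shared-backbone weights; local branches never leave clients."""
    avg = copy.deepcopy(state_dicts[0])
    for k in avg:
        avg[k] = torch.stack([sd[k].float() for sd in state_dicts]).mean(dim=0)
    return avg

# Toy round loop: three clients with local-branch depths 0, 1, and 2.
server_backbone = SharedBackbone()
clients = [ClientModel(copy.deepcopy(server_backbone), d) for d in (0, 1, 2)]

for _ in range(3):                            # three communication rounds
    states = []
    for client in clients:
        client.backbone.load_state_dict(server_backbone.state_dict())
        opt = torch.optim.SGD(client.parameters(), lr=0.1)
        x = torch.randn(8, 32)                # stand-in for private local data
        y = torch.randint(0, 10, (8,))
        opt.zero_grad()
        nn.functional.cross_entropy(client(x), y).backward()
        opt.step()
        states.append(client.backbone.state_dict())
    server_backbone.load_state_dict(fedavg(states))
```

In a real FedCOLA-style deployment, the local-branch depth would presumably be chosen according to each client's hardware, and the personalized head would remain on the client; the toy loop above merely demonstrates that heterogeneous clients can share and jointly update one backbone while training different layers locally.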
Updated: 2024-12-09