Federated learning on the go: Building stable clusters and optimizing resources on the road
Vehicular Communications (IF 5.8), Pub Date: 2024-12-06, DOI: 10.1016/j.vehcom.2024.100870
Sawsan AbdulRahman, Safa Otoum, Ouns Bouachir
With the proliferation of the Internet of Things, leveraging federated learning (FL) for collaborative model training has become paramount. It has turned into a powerful tool for analyzing on-device data and powering real-time applications while safeguarding user privacy. However, in vehicular networks, the dynamic nature of vehicles, coupled with resource constraints, gives rise to new challenges for efficient FL implementation. In this paper, we address the critical problems of optimizing computational and communication resources and selecting the appropriate vehicles to participate in the process. Our proposed scheme bypasses the communication bottleneck by forming homogeneous groups based on the vehicles' mobility/direction and their computing resources. Vehicle-to-Vehicle communication is then adapted within each group, and communication with an on-road edge node is orchestrated by a designated Cluster Head (CH). The latter is selected based on several factors, including connectivity index, mobility coherence, and computational resources. This selection process is designed to be robust against potential cheating attempts, preventing nodes from dodging the CH role to conserve their resources. Moreover, we propose a matching algorithm that pairs each vehicular group with the appropriate edge nodes responsible for aggregating local models and facilitating communication with the server, which subsequently processes the models from all edges. The conducted experiments show promising results compared to benchmarks by achieving: (1) significantly higher amounts of trained data per iteration through strategic CH selection, leading to improved model accuracy and reduced communication overhead. Additionally, our approach demonstrates (2) efficient network load management, (3) faster convergence times in later training rounds, and (4) superior cluster stability.
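To make the CH selection criteria concrete, the following is a minimal illustrative sketch in Python, not the paper's actual formula: it assumes a simple weighted sum over the three factors named in the abstract (connectivity index, mobility coherence, computational resources), with hypothetical weights, field names, and normalization.

from dataclasses import dataclass

@dataclass
class Vehicle:
    vid: str
    connectivity_index: float   # assumed: normalized count/quality of stable V2V links
    mobility_coherence: float   # assumed: similarity of speed/heading to the cluster average
    compute_capacity: float     # assumed: normalized available CPU/GPU resources

def ch_score(v: Vehicle, w_conn: float = 0.4, w_mob: float = 0.3, w_cpu: float = 0.3) -> float:
    # Illustrative weighted-sum suitability score for the Cluster Head role;
    # the weights here are placeholders, not values from the paper.
    return w_conn * v.connectivity_index + w_mob * v.mobility_coherence + w_cpu * v.compute_capacity

def elect_cluster_head(cluster: list[Vehicle]) -> Vehicle:
    # Pick the highest-scoring vehicle; ties are broken deterministically by id
    # so a node cannot dodge the CH role simply by reporting a tying score.
    return max(cluster, key=lambda v: (ch_score(v), v.vid))

if __name__ == "__main__":
    cluster = [
        Vehicle("car-1", 0.9, 0.8, 0.5),
        Vehicle("car-2", 0.6, 0.9, 0.9),
        Vehicle("car-3", 0.7, 0.7, 0.6),
    ]
    print(elect_cluster_head(cluster).vid)

In the paper's scheme the selection is additionally hardened against nodes that misreport their resources to avoid becoming CH; the deterministic tie-breaking above only hints at that concern and is not a substitute for the authors' anti-cheating mechanism.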
Updated: 2024-12-06