Federated deep reinforcement learning for task offloading and resource allocation in mobile edge computing-assisted vehicular networks
Journal of Network and Computer Applications (IF 7.7). Pub Date: 2024-06-25. DOI: 10.1016/j.jnca.2024.103941. Xu Zhao, Yichuan Wu, Tianhao Zhao, Feiyu Wang, Maozhen Li
Mobile edge computing (MEC) frees computation-intensive applications in the Internet of Vehicles (IoV) from the resource limits of individual devices. However, without an effective task scheduling strategy, users' quality of experience (QoE) suffers severely. In this paper, a task-type-based task offloading and resource allocation strategy is proposed to reduce delay and energy consumption during task execution. First, we establish communication, computing, and system cost models based on task offloading schemes, and formulate the joint optimization of task offloading and resource allocation as a Markov decision process; the utility function is derived from the task completion rate and the system cost. Second, an algorithmic framework based on multi-agent deep deterministic policy gradient (MADDPG) is designed to overcome the convergence difficulties that traditional single-agent reinforcement learning algorithms face in dynamic environments. In distributed scenarios, the proposed framework also reduces system cost while handling more tasks. Finally, federated learning is introduced into the training process to mitigate the impact of non-IID data while protecting privacy. Simulation results show that, compared with popular reinforcement learning algorithms, the proposed algorithm effectively improves system processing efficiency and reduces device energy consumption.
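The abstract's final step, introducing federated learning into MADDPG training to cope with non-IID data, can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes a FedAvg-style weighted average of per-agent actor parameters, with weights proportional to each agent's local sample count; the function and variable names are hypothetical.

```python
# Hypothetical FedAvg-style aggregation of per-agent actor parameters,
# sketching one plausible way federated learning could be layered onto
# MADDPG training. Names, shapes, and weighting are illustrative only.
import numpy as np

def fedavg(local_weights, sample_counts):
    """Sample-count-weighted average of the agents' parameter vectors.

    local_weights: list of 1-D numpy arrays, one flattened actor per agent.
    sample_counts: number of local transitions each agent trained on;
                   weighting by it is one common way to soften non-IID skew.
    """
    stacked = np.stack(local_weights)                 # (n_agents, n_params)
    coeffs = np.array(sample_counts, dtype=float)
    coeffs /= coeffs.sum()                            # (n_agents,)
    return coeffs @ stacked                           # (n_params,) global model

# Usage: three vehicle-side agents with different amounts of local experience.
w_global = fedavg(
    [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])],
    sample_counts=[10, 20, 70],
)
# The agent with the most experience dominates: w_global == [4.2, 5.2]
```

In a full training loop each agent would then reload `w_global` into its actor before the next round of local MADDPG updates; privacy is preserved in the sense that only parameters, not raw transitions, leave the device.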
Updated: 2024-06-25