Deep Graph Reinforcement Learning for Mobile Edge Computing: Challenges and Solutions
IEEE NETWORK (IF 6.8), Pub Date: 2024-03-29, DOI: 10.1109/mnet.2024.3383242
Yixiao Wang, Huaming Wu, Ruidong Li

With the increasing Quality of Service (QoS) requirements of the Internet of Things (IoT), Mobile Edge Computing (MEC) has become a new paradigm that places various resources in close proximity to User Equipment (UE) to relieve the workload of backbone IoT networks. Deep Reinforcement Learning (DRL) has gained widespread popularity as a preferred methodology, primarily because it can guide each UE in making appropriate decisions within dynamic environments. However, traditional DRL algorithms cannot fully exploit the relationships between devices in the MEC graph. Here, we highlight two typical IoT scenarios: (i) task-offloading decision-making, where dependent tasks generated at UEs must be offloaded to resource-constrained Edge Servers (ESs), and (ii) orchestration of cross-ES distributed services, where the system cost is minimized by orchestrating hierarchical networks. To further enhance the performance of DRL, Graph Neural Networks (GNNs) and their variants offer promising generalization ability across a wide range of IoT scenarios. We accordingly present concrete solutions for these two scenarios, namely Graph Neural Network-Proximal Policy Optimization (GNNPPO) and Graph Neural Network-Meta Reinforcement Learning (GNN-MRL), which combine GNNs with a popular Actor-Critic scheme and with newly developed MRL, respectively. Finally, we point out four worthwhile research directions for exploring GNN and DRL in AI-empowered MEC environments.
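
To make the GNN-plus-Actor-Critic idea concrete, the following is a minimal, self-contained PyTorch sketch of how a graph-based actor-critic policy for MEC task offloading might be structured. It is not the authors' GNNPPO implementation; the class names, node features, graph size, and action space (local execution vs. offloading to one of several ESs) are illustrative assumptions.

# Illustrative sketch only (not the paper's GNNPPO): a shared GNN encoder over the
# device graph, an actor head producing a per-UE offloading decision, and a critic
# head estimating the graph-level state value for PPO-style training.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphConvLayer(nn.Module):
    """One round of neighborhood aggregation over a normalized adjacency matrix."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (num_nodes, in_dim)   per-device features (e.g., CPU load, queue length)
        # adj: (num_nodes, num_nodes) adjacency with self-loops, row-normalized
        return F.relu(self.linear(adj @ x))


class GNNActorCritic(nn.Module):
    """Shared GNN encoder with an actor head (per-UE action distribution)
    and a critic head (state value), as used in Actor-Critic / PPO schemes."""

    def __init__(self, feat_dim: int, hidden_dim: int, num_actions: int):
        super().__init__()
        self.gnn1 = GraphConvLayer(feat_dim, hidden_dim)
        self.gnn2 = GraphConvLayer(hidden_dim, hidden_dim)
        self.actor = nn.Linear(hidden_dim, num_actions)   # e.g., {local, ES_1, ..., ES_k}
        self.critic = nn.Linear(hidden_dim, 1)

    def forward(self, x: torch.Tensor, adj: torch.Tensor):
        h = self.gnn2(self.gnn1(x, adj), adj)             # node embeddings
        logits = self.actor(h)                            # per-node action logits
        value = self.critic(h.mean(dim=0))                # graph-level state value
        return torch.distributions.Categorical(logits=logits), value


# Hypothetical usage: 6 devices, 8 features each, 3 offloading targets.
if __name__ == "__main__":
    x = torch.randn(6, 8)
    adj = torch.eye(6) + torch.rand(6, 6).round()
    adj = adj / adj.sum(dim=1, keepdim=True)              # row-normalize
    policy, value = GNNActorCritic(8, 32, 3)(x, adj)
    actions = policy.sample()                             # one offloading decision per UE
    print(actions, value.item())

In a PPO loop, the sampled actions and their log-probabilities would feed the clipped surrogate objective, while the critic output supplies the advantage baseline; the GNN encoder is what lets the policy exploit the device-to-device structure that a plain MLP ignores.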

Last updated: 2024-03-29