Learning-Based Multi-Tier Split Computing for Efficient Convergence of Communication and Computation
IEEE Internet of Things Journal (IF 8.2), Pub Date: 2024-07-25, DOI: 10.1109/jiot.2024.3426531
Yang Cao, Shao-Yu Lien, Cheng-Hao Yeh, Der-Jiunn Deng, Ying-Chang Liang, Dusit Niyato

With the promising benefit of splitting deep neural network (DNN) computation loads to an edge server, split computing has emerged as a novel paradigm for delivering high-quality artificial intelligence (AI) services to energy-constrained user equipments (UEs). To satisfy the service demands of a large number of UEs, traditional edge-UE split computing is evolving toward multi-tier split computing involving edge and cloud servers with different capabilities, leading to a complex joint optimization of communication and computation. To tackle this challenge, this paper proposes a multi-tier deep reinforcement learning (DRL) decision-making scheme for distributed splitting-point selection and computing-resource allocation in three-tier UE-edge-cloud split computing systems. With the proposed scheme, the high-dimensional optimization is decomposed: UEs and an edge server, operating on different control cycles, perform their local decision-making tasks in a sequential manner. Because the UEs and the edge server update their policies in successive stages, the overall performance of split computing improves continuously, which is justified through a theoretical convergence analysis. Comprehensive simulation studies show that the proposed multi-tier DRL decision-making scheme outperforms conventional split computing schemes in terms of overall latency, inference accuracy, and energy efficiency, making multi-tier split computing practical.
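To make the splitting-point trade-off concrete, the sketch below uses a deliberately simplified latency model (hypothetical names and toy numbers, not the paper's DRL scheme): for each candidate split, the UE computes the early layers, uploads the intermediate activation, and the edge server finishes the rest. The best split balances on-device compute time against transmission time.

```python
def best_split(layer_flops, act_bytes, ue_speed, edge_speed, uplink):
    """Pick the split index minimizing end-to-end latency (toy model).

    layer_flops[i]: FLOPs of layer i.
    act_bytes[s]:   bytes uploaded when splitting at s (act_bytes[0] is the
                    raw input size; act_bytes[i] is layer i's output size).
    Split s runs layers 0..s-1 on the UE and s..n-1 on the edge server;
    s == n keeps the whole model on the UE (nothing is uploaded).
    Speeds are in FLOPs/s; uplink is in bytes/s.
    """
    n = len(layer_flops)

    def latency(s):
        ue_t = sum(layer_flops[:s]) / ue_speed        # UE-side compute
        edge_t = sum(layer_flops[s:]) / edge_speed    # edge-side compute
        tx_t = act_bytes[s] / uplink if s < n else 0  # activation upload
        return ue_t + tx_t + edge_t

    return min(range(n + 1), key=latency)


# Three equal layers; activations shrink deeper into the network, so an
# early split cuts upload time while offloading most of the compute.
split = best_split(
    layer_flops=[1e9, 1e9, 1e9],
    act_bytes=[4e6, 1e6, 0.5e6, 0.1e6],
    ue_speed=1e9, edge_speed=10e9, uplink=1e6,
)
```

In a real system these per-layer costs and the uplink rate vary over time, which is what motivates learning-based schemes such as the one proposed here instead of a one-shot enumeration.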

Last updated: 2024-08-22