Deep Reinforcement Learning based running-track path design for fixed-wing UAV assisted mobile relaying network
Vehicular Communications ( IF 5.8 ) Pub Date : 2024-10-28 , DOI: 10.1016/j.vehcom.2024.100851 Tao Wang, Xiaodong Ji, Xuan Zhu, Cheng He, Jian-Feng Gu
This paper studies a fixed-wing unmanned aerial vehicle (UAV) assisted mobile relaying network (FUAVMRN), in which a fixed-wing UAV employs out-band full-duplex relaying to serve a ground source-destination pair. It is shown that, for a FUAVMRN, a straight path is unsuitable when a large amount of data needs to be delivered, while a circular path may yield low throughput when the ground source and destination are far apart. A running-track path (RTP) design problem is therefore investigated for the FUAVMRN with the goal of energy minimization. By dividing an RTP into two straight and two semicircular segments, the UAV's total energy consumption and the total amount of data transferred from the ground source to the ground destination via the UAV relay are calculated. Within the framework of Deep Reinforcement Learning, and taking the UAV's roll-angle limit into consideration, the RTP design problem is formulated as a Markov Decision Process, with the state and action spaces specified in addition to the policy and reward functions. To obtain the UAV relay's control policy, Deep Deterministic Policy Gradient (DDPG) is used to solve the path design problem, leading to a DDPG-based algorithm for the RTP design. Computer simulations show that the DDPG-based algorithm consistently converges within about 500 training iterations and that, compared with the circular and straight paths, the proposed RTP design saves at least 12.13% of energy and 65.93% of flight time when the ground source and destination are 2000 m apart and 5000 bit/Hz of data must be transferred. Moreover, it is more practical and more energy-efficient than the Deep Q-Network based design.
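The running-track geometry and the roll-angle-limited kinematics described in the abstract can be sketched numerically. The following is a minimal illustration, not the paper's actual model: the propulsion-power coefficients `c1` and `c2`, the speed, and the roll limit are illustrative assumptions, and the power model P(V) = c1·V³ + c2/V is the commonly used fixed-wing level-flight model from the UAV communications literature, not necessarily the one used by the authors.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def rtp_length(straight_len: float, turn_radius: float) -> float:
    """Total length of a running-track path: two straight segments
    plus two semicircular turns (together one full circle)."""
    return 2.0 * straight_len + 2.0 * math.pi * turn_radius

def min_turn_radius(speed: float, max_roll_deg: float) -> float:
    """Smallest coordinated-turn radius allowed by the roll-angle
    limit: R_min = V^2 / (g * tan(phi_max))."""
    return speed ** 2 / (G * math.tan(math.radians(max_roll_deg)))

def flight_energy(speed: float, path_len: float,
                  c1: float = 9.26e-4, c2: float = 2250.0) -> float:
    """Propulsion energy over the path, using the level-flight power
    model P(V) = c1*V^3 + c2/V (c1, c2 are illustrative constants)."""
    power = c1 * speed ** 3 + c2 / speed  # watts
    return power * (path_len / speed)     # energy = power * flight time

def step(x: float, y: float, heading: float, roll_deg: float,
         speed: float = 30.0, dt: float = 0.5):
    """One MDP transition: the action is the roll angle, which sets the
    coordinated-turn rate omega = g * tan(roll) / V of the fixed-wing UAV."""
    omega = G * math.tan(math.radians(roll_deg)) / speed
    heading += omega * dt
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    return x, y, heading

# Example: a 30 m/s UAV with a 30-degree roll limit flying an RTP whose
# straight segments span a 2000 m source-destination separation.
radius = min_turn_radius(30.0, 30.0)   # roughly 159 m
length = rtp_length(2000.0, radius)    # metres per lap
energy = flight_energy(30.0, length)   # joules per lap
```

In a DDPG-based design along these lines, the agent would choose `roll_deg` at each step (bounded by the roll limit, hence a continuous action space), with a reward built from the energy and throughput quantities that the functions above expose.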
Updated: 2024-10-28