Fully distributed multi-agent processing strategy applied to vehicular networks
Vehicular Communications (IF 5.8) Pub Date: 2024-06-04, DOI: 10.1016/j.vehcom.2024.100806
Vladimir R. de Lima, Marcello L.R. de Campos

This work explores distributed processing techniques, together with recent advances in multi-agent reinforcement learning (MARL), to implement a fully decentralized reward and decision-making scheme that efficiently allocates resources (spectrum and power). The method targets processes with strong dynamics and stringent requirements, such as cellular vehicle-to-everything (C-V2X) networks. In our approach, the C-V2X network is seen as a strongly connected network of intelligent agents that adopt a distributed reward scheme in a cooperative and decentralized manner, taking into account their channel conditions and selected actions in order to achieve their goals cooperatively. The simulation results demonstrate the effectiveness of the developed algorithm, named Distributed Multi-Agent Reinforcement Learning (DMARL), which achieves performance very close to that of a centralized reward design, with the advantage of avoiding the limitations and vulnerabilities inherent to a fully or partially centralized solution.
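The abstract does not give the details of the distributed reward scheme, but one common way agents on a strongly connected graph can approximate a shared (centralized-style) reward without any central node is consensus averaging of their locally observed rewards with their neighbors. The sketch below is a hypothetical illustration under that assumption; the graph, mixing weights, and reward values are invented for the example and are not taken from the paper.

```python
import numpy as np

def consensus_rewards(local_rewards, adjacency, steps=50):
    """Mix each agent's reward estimate with its neighbors' estimates.

    local_rewards : (N,) array, each agent's locally observed reward
    adjacency     : (N, N) 0/1 matrix; adjacency[i, j] = 1 if j is a neighbor of i
    Returns the (N,) vector of reward estimates after `steps` mixing rounds.
    """
    A = np.asarray(adjacency, dtype=float)
    np.fill_diagonal(A, 1.0)                  # each agent keeps its own value
    W = A / A.sum(axis=1, keepdims=True)      # row-stochastic mixing weights
    x = np.asarray(local_rewards, dtype=float)
    for _ in range(steps):
        x = W @ x                             # one round of neighbor averaging
    return x

if __name__ == "__main__":
    # Four vehicles connected in a ring (a strongly connected graph)
    adj = np.array([[0, 1, 0, 1],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [1, 0, 1, 0]])
    rewards = np.array([1.0, 3.0, 5.0, 7.0])
    print(consensus_rewards(rewards, adj))    # all estimates approach the mean, 4.0
```

Because the ring topology with self-loops yields a doubly stochastic mixing matrix, every agent's estimate converges to the network-wide average reward, which is the kind of global signal a centralized reward designer would otherwise have to broadcast.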

Updated: 2024-06-04