A DQN based approach for large-scale EVs charging scheduling
Complex & Intelligent Systems (IF 5.0) Pub Date: 2024-08-21, DOI: 10.1007/s40747-024-01587-w
Yingnan Han, Tianyang Li, Qingzhu Wang

This paper addresses the challenge of large-scale electric vehicle (EV) charging scheduling during peak demand periods, such as holidays or rush hours. The rapid growth of the EV industry has exposed the shortcomings of current scheduling plans, which struggle to manage surging large-scale charging demand effectively, posing challenges to EV charging management systems. Deep reinforcement learning, known for its effectiveness in solving complex decision-making problems, holds promise for addressing this issue. To this end, we formulate the problem as a Markov decision process (MDP) and propose a deep Q-network (DQN) based algorithm to improve EV charging service quality while minimizing the average queueing time of EVs and the average idle time of charging devices (CDs). In our methodology, we design two types of states to encompass global scheduling information and two types of rewards to reflect scheduling performance. Based on this design, we develop three modules: a fine-grained feature extraction module for effectively extracting state features, an improved noise-based exploration module for thoroughly exploring the solution space, and a dueling block for enhancing Q-value estimation. To assess the effectiveness of our proposal, we conduct three case studies within a complex urban scenario featuring 34 charging stations and 899 scheduled EVs. The results demonstrate the advantages of our approach: it locates better solutions than current methods in the literature and efficiently generates feasible charging scheduling plans for large-scale EV fleets. The code and data are available at: https://github.com/paperscodeyouneed/A-Noisy-Dueling-Architecture-for-Large-Scale-EV-ChargingScheduling/tree/main/EV%20Charging%20Scheduling.
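The dueling block mentioned in the abstract splits Q-value estimation into a state-value stream V(s) and an advantage stream A(s, a), recombined as Q(s, a) = V(s) + A(s, a) − mean_a A(s, a). The following NumPy sketch illustrates only this standard recombination step; the shapes and numbers are hypothetical and this is not the authors' implementation:

```python
import numpy as np

def dueling_q(value, advantages):
    """Combine a state-value V(s) with per-action advantages A(s, a)
    into Q-values: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
    Subtracting the mean advantage makes the decomposition identifiable."""
    return value + advantages - advantages.mean(axis=-1, keepdims=True)

# Toy example: one state, 4 candidate charging-station actions
# (illustrative numbers only).
V = np.array([1.0])
A = np.array([0.5, -0.5, 1.0, -1.0])
Q = dueling_q(V, A)
print(Q)  # -> [1.5 0.5 2.  0. ]
```

Because the advantages are centered before being added, the mean of the resulting Q-values equals V(s), so the value and advantage streams learn distinct roles.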




Updated: 2024-08-21