Deep Reinforcement Learning Assisted Genetic Programming Ensemble Hyper-Heuristics for Dynamic Scheduling of Container Port Trucks
IEEE Transactions on Evolutionary Computation (IF 11.7). Pub Date: 2024-03-25. DOI: 10.1109/TEVC.2024.3381042. Authors: Xinan Chen, Ruibin Bai, Rong Qu, Jing Dong, Yaochu Jin.
Efficient truck dispatching is crucial for optimizing container terminal operations in dynamic and complex scenarios. Despite recent progress with more advanced uncertainty-handling techniques, existing approaches still generalize poorly and require considerable expertise and manual intervention in algorithm design. In this work, we present deep reinforcement learning-assisted genetic programming hyper-heuristics (DRL-GPHH) and their ensemble variant (DRL-GPEHH). These frameworks use a reinforcement learning agent to orchestrate a set of automatically generated genetic programming (GP) low-level heuristics, leveraging their collective intelligence to improve robustness and raise the level of automation in algorithm development. DRL-GPEHH in particular excels through its concurrent integration of a GP heuristic ensemble, achieving greater adaptability and performance on complex, dynamic optimization tasks. The method sidesteps the convergence difficulties that deep reinforcement learning (DRL) traditionally faces under sparse rewards and vast action spaces, while avoiding reliance on expert-designed heuristics. It also addresses the inadequate performance of a single GP individual in varying, complex environments, and preserves the inherent interpretability of the GP approach. Evaluations on a range of real port operational instances highlight the adaptability and efficacy of both frameworks. The innovations in DRL-GPHH and DRL-GPEHH reveal the synergistic potential of reinforcement learning and GP for dynamic truck dispatching, with transformative impact on algorithm design and significant advances for complex real-world optimization problems.
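The hyper-heuristic architecture the abstract describes — a learned agent choosing which auto-generated low-level dispatching rule to apply at each decision point — can be sketched minimally as follows. Everything here is an illustrative assumption, not the paper's actual components: the priority rules stand in for GP-evolved heuristics, and a simple epsilon-greedy bandit stands in for the DRL agent.

```python
import random

# Stand-ins for GP-evolved low-level heuristics: each scores a (truck, job)
# pair from simple state features. In the paper such rules are auto-generated
# by genetic programming; the formulas below are hand-written placeholders.
def rule_nearest(truck, job):
    return -abs(truck["pos"] - job["pos"])        # prefer shortest empty travel

def rule_urgent(truck, job):
    return -job["due"]                            # prefer earliest due time

def rule_weighted(truck, job):
    return -(abs(truck["pos"] - job["pos"]) + 0.5 * job["due"])

RULES = [rule_nearest, rule_urgent, rule_weighted]

class RuleSelector:
    """Epsilon-greedy bandit standing in for the DRL agent that picks which
    low-level heuristic to apply at each dispatch decision."""
    def __init__(self, n_rules, eps=0.2):
        self.q = [0.0] * n_rules   # running reward estimate per rule
        self.n = [0] * n_rules     # times each rule was chosen
        self.eps = eps

    def select(self):
        if random.random() < self.eps:
            return random.randrange(len(self.q))
        return max(range(len(self.q)), key=lambda i: self.q[i])

    def update(self, idx, reward):
        self.n[idx] += 1
        self.q[idx] += (reward - self.q[idx]) / self.n[idx]

def dispatch(truck, jobs, rule):
    """Assign the idle truck to the best remaining job under the chosen rule."""
    best = max(jobs, key=lambda j: rule(truck, j))
    jobs.remove(best)
    return best

random.seed(0)
selector = RuleSelector(len(RULES))
jobs = [{"pos": random.randint(0, 20), "due": random.randint(1, 10)}
        for _ in range(8)]
truck = {"pos": 0}
total_cost = 0
while jobs:
    idx = selector.select()                       # agent picks a heuristic
    job = dispatch(truck, jobs, RULES[idx])       # heuristic picks the job
    cost = abs(truck["pos"] - job["pos"]) + job["due"]
    selector.update(idx, -cost)                   # negative cost as reward
    truck["pos"] = job["pos"]
    total_cost += cost
print(f"served all jobs, total cost = {total_cost}")
```

The ensemble variant (DRL-GPEHH) would instead combine the votes of several GP heuristics at each decision rather than committing to one, but the selection-then-dispatch loop above captures the basic hyper-heuristic control flow.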
Updated: 2024-03-25