Not Only Rewards but Also Constraints: Applications on Legged Robot Locomotion
IEEE Transactions on Robotics ( IF 9.4 ) Pub Date : 2024-05-14 , DOI: 10.1109/tro.2024.3400935
Yunho Kim 1 , Hyunsik Oh 1 , Jeonghyun Lee 1 , Jinhyeok Choi 1 , Gwanghyeon Ji 1 , Moonkyu Jung 1 , Donghoon Youm 1 , Jemin Hwangbo 1

Several earlier studies have demonstrated impressive control performance in complex robotic systems by designing the controller as a neural network and training it with model-free reinforcement learning. However, these outstanding controllers, with their natural motion style and high task performance, are developed through extensive reward engineering: a highly laborious and time-consuming process of designing numerous reward terms and determining suitable reward coefficients. In this article, we propose a novel reinforcement learning framework, consisting of both rewards and constraints, for training neural network controllers for complex robotic systems. To let engineers appropriately express their intent through constraints and handle them with minimal computational overhead, we suggest two constraint types and an efficient policy optimization algorithm. The learning framework is applied to train locomotion controllers for several legged robots with different morphologies and physical attributes to traverse challenging terrains. Extensive simulation and real-world experiments demonstrate that performant controllers can be trained with significantly less reward engineering, by tuning only a single reward coefficient. Furthermore, a more straightforward and intuitive engineering process can be employed, thanks to the interpretability and generalizability of constraints.
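The abstract contrasts hand-tuned reward coefficients with constraints that the optimizer handles automatically. A common way to realize this idea (a generic sketch, not the paper's specific algorithm; all names and numbers here are hypothetical) is a Lagrangian relaxation: each constraint gets a multiplier that rises while the constraint is violated and decays once it is satisfied, so the engineer specifies a limit instead of tuning a coefficient by hand.

```python
# Illustrative Lagrangian-relaxation sketch for constrained RL.
# NOT the authors' algorithm: the function, learning rate, and toy cost
# trajectory below are hypothetical, chosen only to show the mechanism.

def lagrangian_update(reward, cost, limit, lam, lr=0.1):
    """One dual-ascent step on the multiplier `lam`.

    `lam` increases while the measured cost exceeds the limit and
    decreases (clipped at zero) once the constraint is satisfied.
    Returns the updated multiplier and the relaxed objective the
    policy would maximize at this step.
    """
    lam = max(0.0, lam + lr * (cost - limit))
    objective = reward - lam * cost
    return lam, objective


# Toy "training run": the constraint cost decays geometrically, as if
# the policy were gradually learning to respect the limit of 0.2.
lam = 0.0
for step in range(50):
    cost = 0.9 ** step          # hypothetical per-iteration constraint cost
    lam, objective = lagrangian_update(reward=1.0, cost=cost,
                                       limit=0.2, lam=lam)

# Once the cost stays below the limit, the multiplier decays back
# toward zero, so the constraint stops distorting the objective.
print(lam >= 0.0)
```

The design point this illustrates: the engineer states an interpretable limit (here 0.2) once, and the multiplier adapts during training, which is why such frameworks need far less coefficient tuning than pure reward shaping.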

Updated: 2024-05-14