李衍杰 (Yanjie Li)

Biography

He received his Ph.D. from the University of Science and Technology of China in July 2006 and was a postdoctoral researcher at the Hong Kong University of Science and Technology from August 2006 to August 2008. In September 2008 he joined the Shenzhen campus of Harbin Institute of Technology. He has led one National Natural Science Foundation of China (NSFC) Young Scientists Fund project, one NSFC General Program project, one NSFC Joint Fund project, one Ministry of Education Doctoral Program Foundation project, two Shenzhen basic research layout projects, and three Shenzhen basic research projects. As first author he has published papers in Automatica, European Journal of Operational Research, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, Discrete Event Dynamic Systems (DEDS), and other journals, and he received the He Pan Qingyi Best Paper Award conferred by Academician Yu-Chi Ho.

Education
2001-2006 University of Science and Technology of China, Department of Automation, M.S./Ph.D.
1997-2001 Qingdao University, Department of Mathematics, B.S.

Research and Work Experience
2010-present Harbin Institute of Technology, Shenzhen Graduate School, Associate Professor
2008-2010 Harbin Institute of Technology, Shenzhen Graduate School, Assistant Professor
2006-2008 Hong Kong University of Science and Technology, Department of Electronic and Computer Engineering, Postdoctoral Fellow
2013 University of New South Wales, Visiting Fellow

Research Projects
2022-2025 Shenzhen Science and Technology Program key project: Intelligent perception and control methods for precision assembly robots
2019-2023 NSFC: Optimality and safety of inverse reinforcement learning under the influence of time and partial observability
2018-2022 NSFC Joint Fund: Key technologies for vision-based indoor mobile robot localization and navigation
2019-2022 Shenzhen basic research layout project: Key technologies for intelligent unmanned warehouse systems
2018-2020 Shenzhen basic research program: Inverse reinforcement learning theory in non-Markovian environments and its application to navigation control
2014-2016 Shenzhen basic research program: Optimization-based personalized dynamic power-saving technology for smartphones
2012-2014 Shenzhen basic research key project: Key problems in unmanned-helicopter inspection of smart grid transmission lines (JC201104210048A)
2011-2013 NSFC: Sensitivity-based optimization of semi-Markov decision processes and applications (61004036)
2011-2012 Ministry of Education Doctoral Program New Teacher Fund: Average-reward reinforcement learning methods based on performance sensitivity (20102302120071)
2011-2013 Shenzhen basic research program: Key technologies for multi-robot intelligent warehousing systems (JC201005260179A)

Research Areas

Reinforcement learning; inverse reinforcement learning; stochastic decision and optimization; discrete-event dynamic systems; UAV control; intelligent autonomous unmanned systems

Recent Publications


Journal Papers
A review of graph-based multi-agent pathfinding solvers: From classical to beyond classical. Gao, J., Li, Y., Li, X., Yan, K., Lin, K., & Wu, X. Knowledge-Based Systems, 2023.
Motion Planner with Fixed-Horizon Constrained Reinforcement Learning for Complex Autonomous Driving Scenarios. Lin, K., Li, Y., Chen, S., Li, D., & Wu, X. IEEE Transactions on Intelligent Vehicles, 2023.
TAG: Teacher-Advice Mechanism With Gaussian Process for Reinforcement Learning. Lin, K., Li, D., Li, Y., Chen, S., Liu, Q., Gao, J., Jin, Y., & Gong, L. IEEE Transactions on Neural Networks and Learning Systems, 2023.
A fully distributed adaptive event-triggered control for output regulation of multi-agent systems with directed network. Shi, X., Li, Y., Liu, Q., Lin, K., & Chen, S. Information Sciences, 2023.
Learning Real-Time Dynamic Responsive Gap-Traversing Policy for Quadrotors with Safety-Aware Exploration. Chen, S., Li, Y., Lou, Y., Lin, K., & Wu, X. IEEE Transactions on Intelligent Vehicles, 2022.
A Two-Objective ILP Model of OP-MATSP for the Multi-Robot Task Assignment in an Intelligent Warehouse. Gao, J., Li, Y., Xu, Y., & Lv, S. Applied Sciences, 2022.
Rotating consensus for double-integrator multi-agent systems with communication delay. Shi, X., Li, Y., Yang, Y., Sun, B., & Li, Y. ISA Transactions, 2021.
Online Extrinsic Parameter Calibration for Robotic Camera–Encoder System. Wang, X., Chen, H., Li, Y., & Huang, H. IEEE Transactions on Industrial Informatics, 2019.
Vision and laser fused SLAM in indoor environments with multi-robot system. Chen, H., Huang, H., Qin, Y., Li, Y., & Liu, Y. Assembly Automation, 2019.
Coupling Based Estimation Approaches for the Average Reward Performance Potential in Markov Chains. Li, Y., Wu, X., Lou, Y., Chen, H., & Li, J. Automatica, 2018.
Motion Tracking of the Carotid Artery Wall From Ultrasound Image Sequences: A Nonlinear State-Space Approach. Gao, Z., Li, Y., Sun, Y., et al. IEEE Transactions on Medical Imaging, 2018.
Online optimization of dynamic power management. Zhai, J.-F., Li, Y.-J., & Chen, H.-Y. Control Theory and Applications, 2018.
Autonomous wi-fi relay placement with mobile robots. Gao, Y., Chen, H., Li, Y., Lyu, C., & Liu, Y. IEEE/ASME Transactions on Mechatronics, 2017.
A unified approach to time-aggregated Markov decision processes. Li, Y., & Wu, X. Automatica, 2016.
A basic formula for performance gradient estimation of semi-Markov decision processes. Li, Y., & Cao, F. European Journal of Operational Research, 2013.
Finding optimal memoryless policies of POMDPs under the expected average reward criterion. Li, Y., Yin, B., & Xi, H. European Journal of Operational Research, 2011.
Partially observable Markov decision processes and performance sensitivity analysis. Li, Y., Yin, B., & Xi, H. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 2008.

Conference Papers
Multi-Agent Path Finding with Time Windows: Preliminary Results. Gao, J., Liu, Q., Chen, S., Yan, K., Li, X., & Li, Y. International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2023.
Battery Management for Warehouse Robots via Average-Reward Reinforcement Learning. Mu, Y., Li, Y., Lin, K., Deng, K., & Liu, Q. IEEE International Conference on Robotics and Biomimetics (ROBIO), 2022.
Multi-Robot Real-time Game Strategy Learning based on Deep Reinforcement Learning. Deng, K., Li, Y., Lu, S., Mu, Y., Pang, X., & Liu, Q. IEEE International Conference on Robotics and Biomimetics (ROBIO), 2022.
Multi-agent Pathfinding with Communication Reinforcement Learning and Deadlock Detection. Ye, Z., Li, Y., Guo, R., Gao, J., & Fu, W. Intelligent Robotics and Applications: 15th International Conference (ICIRA), 2022.
Decision Making for Autonomous Driving Via Multimodal Transformer and Deep Reinforcement Learning. Fu, W., Li, Y., Ye, Z., & Liu, Q. IEEE International Conference on Real-time Computing and Robotics (RCAR), 2022.
A Mapless Navigation Method Based on Reinforcement Learning and Local Obstacle Map. Pang, X., Li, Y., Liu, Q., & Deng, K. China Automation Congress (CAC), Xiamen, China, 2022.
Exploration via Distributional Reinforcement Learning with Epistemic and Aleatoric Uncertainty Estimation. Liu, Q., Li, Y., Liu, Y., Chen, M., Lv, S., & Xu, Y. IEEE International Conference on Automation Science and Engineering, 2021.
Towards Autonomous Driving Decision by Combining Self-attention and Deep Reinforcement Learning. Chen, M., Li, Y., Liu, Q., Lv, S., Xu, Y., & Liu, Y. IEEE International Conference on Real-time Computing and Robotics, 2021.
Efficient Power Grid Topology Control via Two-Stage Action Search. Liu, Y., Li, Y., Liu, Q., Xu, Y., Lv, S., & Chen, M. International Conference on Intelligent Robotics and Applications, 2021.
An Overview of Robust Reinforcement Learning. Chen, S., & Li, Y. IEEE International Conference on Networking, Sensing and Control, 2020.
Robust identification of visual markers under boundary occlusion condition. Chang, R., Li, Y., & Wu, C. IEEE International Conference on Robotics and Biomimetics, 2019.
Deep Reinforcement Learning Apply in Electromyography Data Classification. Song, C., Chen, C., Li, Y., & Wu, X. IEEE International Conference on Cyborg and Bionic Systems, 2019.
A deep reinforcement learning algorithm with expert demonstrations and supervised loss and its application in autonomous driving. Liu, K., Wan, Q., & Li, Y. Chinese Control Conference, 2018.
Visual Grasping for a Lightweight Aerial Manipulator Based on NSGA-II and Kinematic Compensation. Fang, L., Chen, H., Lou, Y., Li, Y., & Liu, Y. IEEE International Conference on Robotics and Automation, 2018.
Singularity-Robust Hybrid Visual Servoing Control for Aerial Manipulator. Quan, F., Chen, H., Li, Y., …, Chen, J., & Liu, Y. IEEE International Conference on Robotics and Biomimetics, 2018.
A monocular vision localization algorithm based on maximum likelihood estimation. Chen, S., Li, Y., & Chen, H. IEEE International Conference on Real-Time Computing and Robotics, 2018.
An Inverse Reinforcement Learning Algorithm for semi-Markov Decision Processes. Tan, C., Li, Y., & Cheng, Y. IEEE International Conference on Information and Automation, 2018.
Online calibration for monocular vision and odometry fusion. Wang, X., Chen, H., & Li, Y. Proceedings of the 2017 IEEE International Conference on Unmanned Systems, 2018.
A cross-coupled iterative learning control design for biaxial systems based on natural local approximation of contour error. Liu, S., & Li, Y. Chinese Control Conference, 2017.
The control of two-wheeled self-balancing vehicle based on reinforcement learning in a continuous domain. Xia, P., & Li, Y. Youth Academic Annual Conference of Chinese Association of Automation, 2017.
Face recognition based on convolutional neural network & support vector machine. Guo, S., Chen, S., & Li, Y. IEEE International Conference on Information and Automation, 2017.
Real-Time tracking a ground moving target in complex indoor and outdoor environments with UAV. Chen, S., Guo, S., & Li, Y. IEEE International Conference on Information and Automation, 2017.
Average Reward Reinforcement Learning for Semi-Markov Decision Processes. Yang, J., Li, Y., Chen, H., & Li, J. International Conference on Neural Information Processing, 2017.
Visual Servo Tracking Control of Quadrotor with a Cable Suspended Load. Jia, E., Chen, H., Li, Y., Lou, Y., & Liu, Y. International Conference on Computer Vision Systems, 2017.
A semi-Markov decision process based dynamic power management for mobile devices. Zhang, M., Li, Y., & Chen, H. IEEE International Conference on Real-Time Computing and Robotics, 2016.
Autonomous WiFi-relay control with mobile robots. Gao, Y., Chen, H., Li, Y., & Liu, Y. IEEE International Conference on Real-Time Computing and Robotics, 2016.
Sample-path based performance sensitivity construction of semi-Markov systems. Li, Y., & Zhang, J. Chinese Control Conference, 2016.
An online optimization for dynamic power management. Zhai, J., Li, Y., & Chen, H. IEEE International Conference on Industrial Technology, 2016.
Visual laser-SLAM in large-scale indoor environments. Liang, X., Chen, H., Li, Y., & Liu, Y. IEEE International Conference on Robotics and Biomimetics, 2016.
A Gradient Learning Optimization for Dynamic Power Management. Li, Y., & Jiang, F. IEEE International Conference on Systems, Man, and Cybernetics, 2015.
An adaptive kalman filter to estimate state-of-charge of lithium-ion batteries. Luo, Z., Li, Y., & Lou, Y. IEEE International Conference on Information and Automation, 2015.
A simulation study of control methods for three-phase energy storage inverter. Du, J., Li, Y., & Lou, Y. IEEE International Conference on Information and Automation, 2015.
A unified approach for semi-Markov decision processes with discounted and average reward criteria. Li, Y., Wang, H., & Chen, H. The World Congress on Intelligent Control and Automation (WCICA), 2015.
Auction-based multi-agent task assignment in smart logistic center. Guo, Y., Li, Y., & Zhang, Y. Chinese Control Conference, 2014.
Convex optimization of battery energy storage station in a micro-grid. Zhang, R., Li, Y., & Lou, Y. IEEE International Conference on Information and Automation, 2013.
Sensitivity-based inverse reinforcement learning. Tao, Z., Chen, Z., & Li, Y. Chinese Control Conference, 2013.
Performance analysis of a small-scale unmanned helicopter under large wind disturbance. Zeng, W., Zhu, X., Li, Y., & Li, L. Chinese Control Conference, 2013.
An average reward performance potential estimation with geometric variance reduction. Li, Y. Chinese Control Conference, 2012.
An average-reward reinforcement learning algorithm based on Schweitzer's Transformation. Li, J., Ren, J., & Li, Y. Chinese Control Conference, 2012.
Reinforcement learning algorithms for semi-Markov decision processes with average reward. Li, Y. IEEE International Conference on Networking, Sensing and Control, 2012.
Less computational unscented Kalman filter for practical state estimation of small scale unmanned helicopters. Zeng, W., Zhu, X., Li, Y., & Li, Z. IEEE International Conference on Robotics and Automation, 2011.
RVI reinforcement learning for Semi-Markov decision processes with average reward. Li, Y., & Cao, F. The World Congress on Intelligent Control and Automation (WCICA), 2010.
An improvement of policy gradient estimation algorithms. Li, Y., Cao, F., & Cao, X.-R. International Workshop on Discrete Event Systems, 2008.

Academic Service

2010-present Member, IEEE
2012-present Member, Operations Research Society of China
