Personal Profile
2002.9–2006.6: Liaoning University (a Project 211 university), majoring in Mathematics and Applied Mathematics; received a Bachelor of Science degree.
2006.9–2008.8: Northeastern University (a Project 985 university), majoring in Operations Research and Cybernetics; received a Master of Science degree.
2008.8–2018.7: Lecturer, School of Science, Mudanjiang Normal University, Heilongjiang Province.
2018.7–present: Lecturer, China Jiliang University, Zhejiang Province.
2013.9–2017.10: Doctoral studies in Control Theory and Control Engineering at Northeastern University (a Project 985 university), supervised by Professor Huaguang Zhang, a Changjiang Scholar.
Main research interests are optimal control theory and the modeling and simulation of robotic systems. Has published more than 15 papers (SCI-indexed and others) in journals including IEEE Transactions on Neural Networks and Learning Systems (Q1, impact factor 6.108), Neurocomputing (impact factor 3.317), International Journal of Adaptive Control and Signal Processing (impact factor 1.346), International Journal of Systems Science (impact factor 2.285), 控制理论与应用 (Control Theory & Applications), and 数学的实践与认识 (Mathematics in Practice and Theory), and serves as an invited reviewer for International Journal of Adaptive Control and Signal Processing.
Recent Publications
1. Xiaohong Cui, Huaguang Zhang, Yanhong Luo and Peifu Zu. Online finite-horizon optimal learning algorithm for nonzero-sum games with partially unknown dynamics and constrained inputs [J]. Neurocomputing, 2016, 185: 37-44. (SCI, EI indexed)
2. Huaguang Zhang, Xiaohong Cui, Yanhong Luo and He Jiang. Finite-horizon tracking control for unknown nonlinear systems with saturating actuators [J]. IEEE Transactions on Neural Networks and Learning Systems, 2018, 29(4): 1200-1212. (first author is the doctoral supervisor; SCI, EI indexed)
3. Xiaohong Cui, Huaguang Zhang, Yanhong Luo and He Jiang. Finite-horizon optimal control of unknown nonlinear time-delay systems [J]. Neurocomputing, 2017, 238: 277-285. (SCI, EI indexed)
4. Xiaohong Cui, Huaguang Zhang, Yanhong Luo and He Jiang. Adaptive dynamic programming for tracking design of uncertain nonlinear systems with disturbances and input constraints [J]. International Journal of Adaptive Control and Signal Processing, 2017, DOI: 10.1002/acs.2786. (SCI, EI indexed)
5. Xiaohong Cui, Yanhong Luo and Huaguang Zhang. An adaptive dynamic programming algorithm to solve optimal control of uncertain nonlinear systems [C]. 2014 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL), pp. 259-264.
6. 崔小红, 罗艳红, 张化光, 祖培福 (Xiaohong Cui, Yanhong Luo, Huaguang Zhang and Peifu Zu). 未知饱和控制系统有穷域最优控制 (Finite-horizon optimal control for unknown saturated control systems) [J]. 控制理论与应用 (Control Theory & Applications), 2016, 33(5): 631-637. (EI indexed)
7. Kezhen Han, Jian Feng and Xiaohong Cui. Fault-tolerant optimized tracking control for unknown discrete-time linear systems using a combined reinforcement learning and residual compensation methodology [J]. International Journal of Systems Science, DOI: 10.1080/00207721.2017.1344890. (SCI, EI indexed)